Tuesday, April 18, 2023

Secure ChatGPT API Client in Scala

Target audience: Beginner
Estimated reading time: 3' 

This post introduces the OpenAI ChatGPT API and describes an implementation in Scala for which the API key is encrypted.

Table of contents
ChatGPT REST API
       HTTP parameters
       Request body
       Response body
Scala client
A simple encryption
Notes: 
  • The code associated with this article is written using Scala 2.12.15
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

ChatGPT REST API

The ChatGPT functionality is accessible through a REST API [ref 1]. The client for the HTTP POST can be implemented
  • as an application that generates the request, including the JSON content, and processes the response (curl, Postman, ...)
  • programmatically, by defining the request and response as self-contained data structures (class, data class, case class, ...) to manage the remote service.

HTTP parameters

The connectivity parameters for the HTTP post are
  • OPEN_AI_KEY=xxxxx
  • URL="https://api.openai.com/v1/chat/completions"
  • CONTENT_TYPE="application/json"
  • AUTHORIZATION=s"Bearer ${OPEN_AI_KEY}" 
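These parameters map directly onto HTTP headers. As a quick illustration, the sketch below assembles (but does not send) such a POST request in Python; the key value is a placeholder:

```python
import json
import urllib.request

OPEN_AI_KEY = 'xxxxx'   # Placeholder - never hard-code a real key
URL = 'https://api.openai.com/v1/chat/completions'

# Headers required by the chat completion endpoint
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {OPEN_AI_KEY}'
}

# Build the POST request without sending it (no network call here)
body = json.dumps({'model': 'gpt-3.5-turbo',
                   'messages': [{'role': 'user', 'content': 'Hello'}]})
request = urllib.request.Request(URL, data=body.encode('utf-8'),
                                 headers=headers, method='POST')
```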

Request body

The parameters of the POST content:

  • model: Identifier of the model (i.e. gpt-3.5-turbo)
  • messages: Text of the conversation 
  • user: Identifier for the user
  • role: Role of the user {system|user|assistant}
  • content: Content of the message or request
  • name: Name of the author
  • temperature: Hyper-parameter that controls the "creativity" of the language model by adjusting the distribution (softmax) for the prediction of the next word/token. The higher the value (> 0), the more diverse the prediction (default: 1)
  • top_p: Sample the tokens with top_p probability. It is an alternative to temperature (default: 1)
  • n: Number of solutions/predictions (default 1)
  • max_tokens: Limit the number of tokens used in the response (default: Infinity)
  • presence_penalty: If positive, penalize tokens that have already appeared in the text so far, encouraging the model to introduce new topics (default: 0)
  • frequency_penalty: Penalize new tokens which appear in the text with higher frequency if the value is positive (default: 0)
  • logit_bias: Map specific tokens to the likelihood of their appearance in the message

Note: The model and messages parameters are mandatory
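Putting the mandatory and a few optional parameters together, a request body might look as follows (sketched here as a Python dict; all values are illustrative):

```python
import json

# Illustrative chat completion request body; model and messages are mandatory
request_body = {
    'model': 'gpt-3.5-turbo',
    'messages': [
        {'role': 'system', 'content': 'You are a helpful assistant'},
        {'role': 'user', 'content': 'What is the color of the moon'}
    ],
    'user': 'demo_user',
    'temperature': 0.7,
    'max_tokens': 256,
    'n': 1
}

# Serialized JSON string to be sent as the POST payload
payload = json.dumps(request_body)
```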


Response body

  • id: Conversation identifier
  • object: Type of the returned object (i.e. chat.completion)
  • created: Creation date
  • usage.prompt_tokens:  Number of tokens used in the prompt
  • usage.completion_tokens: Number of tokens used in the completion
  • usage.total_tokens: Total number of tokens
  • choices: List of responses/predictions
  • choices.message.role: Role used in the request
  • choices.message.content: Response content
  • choices.finish_reason: Description of the state of completion of the request
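To illustrate the layout above, a trimmed-down response can be navigated as follows (the payload below is a made-up example, not an actual API answer):

```python
# Made-up response payload following the layout described above
response = {
    'id': 'chatcmpl-123',
    'object': 'chat.completion',
    'created': 1681776000,
    'model': 'gpt-3.5-turbo',
    'choices': [{
        'index': 0,
        'message': {'role': 'assistant', 'content': 'The moon is pale gray.'},
        'finish_reason': 'stop'
    }],
    'usage': {'prompt_tokens': 12, 'completion_tokens': 8, 'total_tokens': 20}
}

# Extract the answer and the token accounting
answer = response['choices'][0]['message']['content']
total_tokens = response['usage']['total_tokens']
```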

Scala client

The first step in implementing the client to the ChatGPT API is to define the POST request and response data structures. We define two types of requests:
  • A simple user request, ChatGPTUserRequest
  • A request with model hyper-parameters, ChatGPTDevRequest, which includes most of the parameters defined in the introduction.
We use the FasterXML Jackson serialization/deserialization library [ref 2] to convert the request structure to JSON and the JSON response into a response object.
The handle to the serializer/deserializer, mapper, is defined and configured in the singleton ChatGPTRequest.
The toJson method converts any ChatGPT request into a JSON string, which is then converted into an array of bytes by the toJsonBytes method.

trait ChatGPTRequest {
   import ChatGPTRequest._ 
    
   def toJson: String = mapper.writeValueAsString(this)
   def toJsonBytes: Array[Byte] = toJson.getBytes
}


object ChatGPTRequest {
        // Instantiate a singleton for the Jackson serializer/deserializer
        // The Scala module is registered once, at build time
    val mapper = JsonMapper.builder().addModule(DefaultScalaModule).build()
    mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
}

The basic end-user and comprehensive request structures implement the request parameters defined in the first section.
case class ChatGPTMessage(role: String, content: String)

case class ChatGPTUserRequest(model: String, messages: Seq[ChatGPTMessage]) 
      extends ChatGPTRequest

case class ChatGPTDevRequest(
    model: String,
    user: String,
    messages: Seq[ChatGPTMessage],
    temperature: Double,
    max_tokens: Int,
    top_p: Double = 1.0,
    n: Int = 1,
    presence_penalty: Int = 0,
    frequency_penalty: Int = 0) extends ChatGPTRequest


The response, ChatGPTResponse, reflects the REST API specification described in the first section. These classes have to be implemented as case classes so that Jackson can deserialize the JSON payload into objects.

case class ChatGPTChoice(
   message: ChatGPTMessage, 
   index: Int, 
   finish_reason: String)

case class ChatGPTUsage(
   prompt_tokens: Int, 
   completion_tokens: Int, 
   total_tokens: Int)

case class ChatGPTResponse(
    id: String,  
    `object`: String,
    created: Long,
    model: String,
    choices: Seq[ChatGPTChoice],
    usage: ChatGPTUsage)


For convenience we implemented two constructors for the ChatGPTClient class:
  • A fully defined constructor taking 3 arguments: the complete request url, a timeout and an encrypted API key, encryptedAPIKey
  • A simplified constructor with only timeout and encryptedAPIKey as arguments.
The connection is implemented by the java HttpURLConnection class and uses an encrypted API key. The secure access to the ChatGPT service is defined by the Authorization request property.
The request, implementing the ChatGPTRequest trait, is converted into bytes and pushed into the output stream. In our example, we simply throw an IllegalStateException if the HTTP response code is not 200. A recovery handler, (httpCode: Int) => ChatGPTRequest, that re-issues a modified request would be more useful.

import com.fasterxml.jackson.databind.json.JsonMapper
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import java.io.{BufferedReader, InputStreamReader, OutputStream}
import java.net.{HttpURLConnection, URL}


class ChatGPTClient(
   url: String, 
   timeout: Int,
   encryptedAPIKey: String
) {
   import ChatGPTClient._

  def apply(chatGPTRequest: ChatGPTRequest): Option[ChatGPTResponse] = {
     var outputStream: Option[OutputStream] = None

     try {
        EncryptionUtil.unapply(encryptedAPIKey).map(
          apiKey => {
            // Create and initialize the HTTP connection
            val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
            connection.setRequestMethod("POST")
            connection.setRequestProperty("Content-Type", "application/json")
            connection.setRequestProperty("Authorization", s"Bearer $apiKey")
            connection.setConnectTimeout(timeout)
            connection.setDoOutput(true)

            // Write into the connection output stream
            outputStream = Some(connection.getOutputStream)
            outputStream.foreach(_.write(chatGPTRequest.toJsonBytes))

               // If request failed.... just throw an exception for now.
            if(connection.getResponseCode != HttpURLConnection.HTTP_OK)
              throw new IllegalStateException(
                   s"Request failed with HTTP code ${connection.getResponseCode}"
              )

             // Retrieve the JSON string from the connection input stream
            val out = new BufferedReader(new InputStreamReader(connection.getInputStream))
                .lines
                .reduce(_ + _)
                .get
            // Instantiate the response from the ChatGPT output
            ChatGPTRequest.mapper.readValue(out, classOf[ChatGPTResponse])
        }
      )
    }
    catch {
      case e: java.io.IOException =>
        logger.error(e.getMessage)
        None
      case e: IllegalStateException =>
        logger.error(e.getMessage)
        None
    }
    finally {
      outputStream.foreach(os => {
        os.flush()
        os.close()
      })
    }
  }
}

The request/response to/from the ChatGPT service is implemented by the apply method, which returns an optional ChatGPTResponse if successful, None otherwise.

val model = "gpt-3.5-turbo"
val user = "system"
val encryptedAPIKey = "8df109aed"
val prompt = "What is the color of the moon"
val timeout = 200
val request = ChatGPTUserRequest(model, Seq(ChatGPTMessage(user, prompt)))

// Simply instantiate, invoke ChatGPT and print its output
val chatGPTClient = ChatGPTClient(timeout, encryptedAPIKey)
chatGPTClient(request).foreach(println(_))

with the following output....
The color of the moon is typically described as a pale gray or white. However, the appearance of the moon's color can vary depending on various factors such as atmospheric conditions, the moon's position in the sky,...


A simple encryption

As mentioned in the introduction, a simple and easy way to secure access to any remote service is to encrypt credentials such as passwords and keys, so they do not appear in clear text in configuration or properties files, docker files or source code.
Encryption is the process of applying a key to plain text to transform it into unintelligible (cipher) text. Only programs with the key to turn the cipher text back into the original text can decrypt the protected information.
The javax.crypto java package [ref 3] provides developers with classes, interfaces and algorithms for cryptography.
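The encrypt/decrypt round trip can be illustrated with a toy symmetric cipher (a repeating-key XOR followed by base 64 encoding). This sketch only demonstrates the principle; it is NOT secure and is no substitute for the AES scheme used below:

```python
import base64

def toy_encrypt(plain: str, key: str) -> str:
    """Toy repeating-key XOR cipher + base64. Illustration only, NOT secure."""
    k = key.encode()
    data = bytes(b ^ k[i % len(k)] for i, b in enumerate(plain.encode()))
    return base64.b64encode(data).decode()

def toy_decrypt(cipher_text: str, key: str) -> str:
    """Reverse the steps: decode base64, then XOR with the same key."""
    k = key.encode()
    data = base64.b64decode(cipher_text)
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(data)).decode()
```

Only a holder of the same key recovers the original credential, which is the property the AES-based Scala implementation below provides with a real cipher.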

Our code snippet relies on a cipher with a basic AES encryption scheme. The Apache Commons Codec package is used to implement the base 64 encoder.
The apply method, in the following code snippet, initializes the cipher in encryption mode to encrypt the raw API key prior to encoding it in base 64.

import javax.crypto.spec.{IvParameterSpec, SecretKeySpec}
import javax.crypto.Cipher
import org.apache.commons.codec.binary.Base64.{decodeBase64, encodeBase64String}


  // Define the parameters of the cipher using the AES encryption scheme
final val AesLabel = "AES"
final val EncodingScheme = "UTF-8"
final val key = "aesEncryptorABCD"
final val initVector = "aesInitVectorABC"

  // Instantiate the cipher
val iv = new IvParameterSpec(initVector.getBytes(EncodingScheme))
val keySpec = new SecretKeySpec(key.getBytes(EncodingScheme), AesLabel)
val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")


  /**
   * Encrypt a string or content using AES and Base64 bytes representation
   * @param credential String to be encrypted
   * @return Optional encrypted string
   */
def apply(clearCredential: String): Option[String] =  {
   cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)

   val encrypted = cipher.doFinal(clearCredential.getBytes)
   Some(encodeBase64String(encrypted))
} 

The decryption method, unapply, reverses the steps used in the encryption:
  1. Initialize the cipher in decryption mode
  2. Decode the encrypted API key from base 64
  3. Apply the cipher (doFinal)
def unapply(encryptedCredential: String): Option[String] = {
   cipher.init(Cipher.DECRYPT_MODE, keySpec, iv)
 
   val decrypted = cipher.doFinal(decodeBase64(encryptedCredential))
   Some(new String(decrypted))
}


Thank you for reading this article.

References

Environments: Scala 2.12.15,   JDK 11,  Apache Commons Text 1.9,  FasterXML Jackson 2.13.1


---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Tuesday, March 21, 2023

ChatGPT API Python Client

Target audience: Beginner
Estimated reading time: 3' 

This post describes a generic implementation of a client to the OpenAI ChatGPT REST web service. This implementation has been written and tested with Python 3.9. Comments in the source code are omitted for the sake of clarity.

ChatGPT client application in Python

Table of contents
ChatGPT API overview
       HTTP parameters
       Request body
Chat completion request
ChatGPT client
Post
Notes
  • Environments:  Python 3.9.16,  ChatGPT 3.5
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

ChatGPT API overview

Let's review the parameters of the OpenAI Chat Completion REST API.

HTTP Parameters

The connectivity parameters for the HTTP post are
  • OPEN_AI_KEY=xxxxx
  • URL=https://api.openai.com/v1/chat/completions
  • CONTENT_TYPE=application/json
  • AUTHORIZATION="Bearer ${OPEN_AI_KEY}" 

Request body

The parameters of the POST content:

  • model: Identifier of the model (i.e. gpt-3.5-turbo)
  • messages: Text of the conversation 
  • user: Identifier for the user
  • role:  Role of the user {system|user|assistant}
  • content: Content of the message or request
  • name:  Name of the author
  • temperature: Hyper-parameter that controls the "creativity" of the language model by adjusting the distribution (softmax) for the prediction of the next word/token. The higher the value (> 0), the more diverse the prediction (default: 1)
  • top_p: Sample the tokens with top_p probability. It is an alternative to temperature (default: 1)
  • n: Number of solutions/predictions (default 1)
  • max_tokens: Limit the number of tokens used in the response (default: Infinity)
  • presence_penalty: Penalize new tokens which appear in the text so far if positive. A higher value favors most recent topics (default: 0)
  • frequency_penalty: Penalize new tokens which appear in the text with higher frequency if the value is positive (default: 0)
Note: OpenAI models are non-deterministic: identical requests may yield different answers. Setting temperature = 0 makes the outputs mostly deterministic.
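The effect of temperature on the next-token distribution can be illustrated with a temperature-scaled softmax. This is a toy sketch of the mechanism described above, not the actual model internals:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # Subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)   # Sharper: nearly deterministic
hot = softmax_with_temperature(logits, 2.0)    # Flatter: more diverse sampling
```

With a low temperature almost all the probability mass lands on the top token; with a high temperature the alternatives become nearly as likely, hence the more "creative" output.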

Chat completion request

The first step is to implement the data structure for the content of the request, as described in the previous section. The content of the request, ChatGPTRequest, is read-only and therefore implemented as a data class. The class members reflect the semantics of the OpenAI chat completion API.
  • model: Identifier of the model (i.e. gpt-3.5-turbo, code-davinci-002, ...)
  • role: Role of the user as system, user or assistant
  • temperature: Hyper-parameter that adjusts the distribution for the prediction of the next token
  • max_tokens: Limit the number of tokens used in the response 
  • top_p: Sample the tokens with p highest probability (default 1)
  • n: Number of predictions/choices (default 1)
  • presence_penalty: Penalize new tokens which appear in the text so far 
  • frequency_penalty: Penalize new tokens which appear in the text with higher frequency
from dataclasses import dataclass

@dataclass
class ChatGPTRequest:
   model: str
   role: str
   temperature: float
   max_tokens: int
   top_p: int
   n: int
   presence_penalty: int
   frequency_penalty: int


ChatGPT client

There are two constructors for the client of type ChatGPTClient: 
  • A default constructor taking a fully customizable request, ChatGPTRequest, as argument.
  • A build constructor for simple requests that require only model, role and temperature (evaluated with values 0 and 1), with all other parameters using the default values specified in the OpenAI API, openai.ChatCompletion, documentation [ref 1]. We use the annotated type Instancetype to specify the type of instance to create [ref 2].
import openai
from typing import TypeVar

Instancetype = TypeVar('Instancetype', bound='ChatGPTClient')


class ChatGPTClient(object):
    import constants

      # Static variables for the API key and the default maximum
      # number of tokens returned
    openai.api_key = constants.openai_api_key
    default_max_tokens = 1024

    def __init__(self, chatGPTRequest: ChatGPTRequest):
        self.chatGPTRequest = chatGPTRequest

    @classmethod
    def build(cls, model: str, role: str, temperature: float) -> Instancetype:
        chatGPTRequest = ChatGPTRequest(
            model,
            role,
            temperature,
            ChatGPTClient.default_max_tokens,
            1,
            1,
            0,
            0)
        return cls(chatGPTRequest)


Post

Let's start with a simple version of the invocation that returns only the answer without any explanation, status or usage. The only argument of the HTTP post is the user prompt. The response is extracted from the message of the first choice of the answer.

def post(self, prompt: str) -> str:
   import logging

   try:
       response = openai.ChatCompletion.create(
           model=self.chatGPTRequest.model,
           messages=[{'role': self.chatGPTRequest.role, 'content': prompt}],
           temperature=self.chatGPTRequest.temperature,
           max_tokens=self.chatGPTRequest.max_tokens
       )
       return response['choices'][0].message.content

   except openai.error.AuthenticationError as e:
       logging.error(f'Failed as {str(e)}')


Some advanced client applications may require processing and evaluating metadata included in the response. In this case, the JSON content of the chat completion answer is deserialized into objects.
The key variables of the ChatCompletion response are
  • id: Conversation identifier
  • object: Type of the returned object (i.e. chat.completion)
  • created: Creation date
  • usage.prompt_tokens: Number of tokens used in the prompt
  • usage.completion_tokens: Number of tokens used in the completion
  • usage.total_tokens: Total number of tokens
  • choices: List of responses/predictions
  • choices.message.role: Role used in the request
  • choices.message.content: Response content
  • choices.finish_reason: Description of the state of completion of the request
As with the request content, the different components of the answer are implemented as data classes to reflect the structure of the JSON response from the ChatCompletion API.
The previous implementation of the post method is upgraded by adding a conversion of the JSON response into ChatGPTResponse.

@dataclass
class ChatGPTChoice:
   message: dict
   index: int
   finish_reason: str

@dataclass
class ChatGPTUsage:
   prompt_tokens: int
   completion_tokens: int
   total_tokens: int

@dataclass
class ChatGPTResponse:
   id: str
   object: str
   created: int
   model: str
   choices: list
   usage: ChatGPTUsage


def post_dev(self, prompt: str) -> ChatGPTResponse:
   import json
   import logging

   try:
      response = openai.ChatCompletion.create(
          model=self.chatGPTRequest.model,
          messages=[{'role': self.chatGPTRequest.role, 'content': prompt}],
          temperature=self.chatGPTRequest.temperature,
          max_tokens=self.chatGPTRequest.max_tokens
      )
      # Deserialize the JSON answer into the response data classes
      data = json.loads(str(response))
      return ChatGPTResponse(
          id=data['id'],
          object=data['object'],
          created=data['created'],
          model=data['model'],
          choices=[ChatGPTChoice(c['message'], c['index'], c['finish_reason'])
                   for c in data['choices']],
          usage=ChatGPTUsage(**data['usage']))

   except openai.error.AuthenticationError as e:
       logging.error(f'Failed as {str(e)}')

Note: We are using the built-in json Python library [ref 3] to decode the ChatGPT response. Alternative libraries include Orjson [ref 4] and SimpleJson [ref 5].

References

[4] Orjson



