Tuesday, March 21, 2023

Generate Code with ChatGPT Reusable Prompt Patterns

Target audience: Advanced
Estimated reading time: 4'  

Wouldn't it be fantastic if ChatGPT could generate source code? If it could, what level of quality could we expect from the generated output?

This post introduces the idea of employing reusable prompt patterns with large language models, focusing on their application in generating Python code. 
To illustrate the concept, we present a straightforward use case: generating a PostgreSQL database application that includes update and query functionalities, showcasing the effectiveness of these prompt patterns.


Table of contents
Overview
Reusable patterns
Use case
       Initial request
       Persona pattern
       Template pattern
       Visualization generator
       Reflection pattern
Conclusion

Overview

Strategically crafting prompts for interacting with conversational large language models plays a vital role in enhancing the quality of their responses and suggestions.

The OpenAI documentation states: "Designing your prompt is essentially how you “program” the model, usually by providing some instructions or a few examples. This is different from most other NLP services which are designed for a single task, such as sentiment classification or named entity recognition. Instead, the completions and chat completions endpoint can be used for virtually any task including content or code generation, summarization, expansion, conversation, creative writing, and style transfer." [ref 1]


Reusable patterns

Since 1995, the field of software engineering has greatly benefited from the introduction and evolution of design patterns [ref 4]. These design patterns serve to name, abstract, and identify key elements of common design structures, offering software developers reusable solutions. Over time, various design patterns, such as factory, template, composite, and observer, have been applied, assessed, and documented.
It's logical to apply a similar approach to the emerging discipline of prompt engineering, considering the existing literature that identifies patterns within conversational prompts. 

This article draws upon the framework outlined in the seminal paper titled "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" [ref 5]. Prompt patterns, much like their counterparts in software engineering, provide reusable solutions to specific problems. They are categorized based on attributes such as Category (or purpose), Intent and context, Motivation, Contextual statements, Examples, and Consequences.
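To make the catalog attributes concrete, here is a small sketch of ours (the class and field names are our own, loosely following the attributes listed above) representing a catalog entry:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PromptPattern:
    """One entry of a prompt-pattern catalog (attribute names follow ref 5)."""
    name: str
    category: str                                   # purpose, e.g. "Output customization"
    intent: str                                     # intent and context of the pattern
    contextual_statements: List[str] = field(default_factory=list)

# Example entry
persona = PromptPattern(
    name="Persona",
    category="Output customization",
    intent="Assign a persona or domain expertise to the LLM",
    contextual_statements=["From now on, act as persona X"],
)
```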

Here is a subset of the prompt patterns from the catalog; each entry lists the pattern name, its purpose, a short description, and example contextual statements.

Pattern: Persona (Output customization)
Assign a persona, role, or domain expertise to the LLM. The persona can be expressed as a role, job description, title, or a historical or well-known figure.
  • From now on, act as persona X
  • Provide outputs that persona X would create
Pattern: Template (Output customization)
Ensure the output follows a precise template (i.e. format, URL, example, ...). This pattern instructs the LLM to use a specific format for designated portions of the answer.
  • I am going to provide you with a template for your output
  • X is my placeholder for content
  • Try to fit the output into one or more of the placeholders that I listed
  • Please preserve the formatting and template I provided
  • Here is the template to follow: PATTERN with PLACEHOLDER.
Pattern: Cognitive verifier (Prompt improvement)
The quality of LLM answers improves if the initial question is broken into additional sub-questions.
  • When you are asked a question, follow these rules
  • Generate several additional questions that would help answer the original question more accurately
  • Combine the answers to the individual questions to produce the final aggregate answer to the overall question.
Pattern: Fact check list (Error identification)
Request the LLM to provide or append a list of facts/assumptions to the answer, so the user may perform due diligence.
  • Generate a set of facts that are contained in the output
  • The set of facts should be inserted/appended to the output
  • The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect.
Pattern: Output automater (Output customization)
Have the LLM generate a script or automated task that can execute any steps the LLM recommends.
  • Whenever you produce an output with steps, always do this: produce an executable artifact of type X that will automate these steps.
Pattern: Reflection (Error identification)
Ask the LLM to explain the rationale behind a given answer. The pattern clarifies points of confusion, underlying assumptions, gaps in knowledge, etc.
  • Whenever you generate an answer, explain the reasoning and assumptions behind your answer, so I can improve my question.
Pattern: Visualization generator (Output customization)
Use generated text to create a visualization, as complex concepts are easier to grasp with diagrams and images. The LLM output should create a pathway for a tool to produce imagery.
  • Generate an X that I can provide to tool Y to visualize it.


Use case

To assess the influence of different prompt patterns on the quality of ChatGPT responses, we employ a specific use case: building a basic Python application for interacting with a PostgreSQL database, encompassing functionalities for access, update, and querying.

The evaluation procedure involves initiating a conversation with ChatGPT using an initial request or prompt. Subsequently, we iteratively enhance the prompt by applying various patterns, including Persona, Template, Visualization generator, and Reflection patterns.

Each interaction or prompt within the conversation includes:
  1. The prompt's objective.
  2. The prompt message itself.
  3. ChatGPT's response, accompanied by comments and source code.
  4. Our subsequent analysis.

Initial request

Prompt

"I would like you to generate Python code to create the table 'predictions' with the following fields: 'id', 'num_entries', 'variable', 'mean' and 'variance', for the PostgreSQL database 'EHR', then write a query for a large number of rows."

Answer

ChatGPT code
Initialization of connection, creation of table and insertion of random values.
import psycopg2
from psycopg2 import sql
import random

# Define connection parameters
db_name = "EHR"
db_user = "username"
db_password = "password"
db_host = "localhost"
db_port = "5432"

# Connect to the PostgreSQL database
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)

# Create a cursor object
cursor = connection.cursor()

# Create the 'predictions' table
cursor.execute("""CREATE TABLE predictions (
    id SERIAL PRIMARY KEY,
    num_entries INTEGER,
    variable VARCHAR(255),
    mean FLOAT,
    variance FLOAT);"""
)
connection.commit()

# Insert a large number of rows into the 'predictions' table
for i in range(10000):  # adjust this number as necessary
    cursor.execute(
        """INSERT INTO predictions (num_entries, variable, mean, variance)
           VALUES (%s, %s, %s, %s)""",
        (
            random.randint(1, 100),    # num_entries
            f'variable_{i}',           # variable
            random.uniform(0, 100),    # mean
            random.uniform(0, 10)      # variance
        )
    )
connection.commit()

# Close the cursor and connection
cursor.close()
connection.close()

Implementation of the query
# Re-establish connection and cursor
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)
cursor = connection.cursor()

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows from the query
rows = cursor.fetchall()
# Print each row
for row in rows:
    print(row)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code functions as intended. ChatGPT correctly infers that PostgreSQL is deployed locally and uses randomly generated values to exercise the code. The only minor issue is that the table creation is not guarded ('CREATE TABLE IF NOT EXISTS').
  • In the second code snippet, the query is implemented with a straightforward print statement for each row, serving as a validation measure. To prevent excessive standard output, these print statements could have been applied to a subset of rows.
  • ChatGPT doesn't include an exception handling mechanism, as it wasn't explicitly required. Software developers should anticipate incorporating proper error handling in the generated source code for safety.
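Two of these observations are easy to fold back into the generated code. The following sketch is ours, not ChatGPT's output: a guarded CREATE TABLE statement, and a hypothetical print_sample helper that limits what is written to standard output:

```python
# Guarded DDL: safe to re-run if the table already exists
CREATE_TABLE_SQL = """CREATE TABLE IF NOT EXISTS predictions (
    id SERIAL PRIMARY KEY,
    num_entries INTEGER,
    variable VARCHAR(255),
    mean FLOAT,
    variance FLOAT);"""

def print_sample(rows, limit=10):
    """Print only the first `limit` rows to avoid flooding stdout."""
    for row in rows[:limit]:
        print(row)
    if len(rows) > limit:
        print(f"... {len(rows) - limit} more rows")
```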

Persona Pattern

The objective is to assess ChatGPT's ability to identify an erroneous input, 'VARCHAR2(128)', and to utilize the context with minimal latency. The modifications made to the original prompt are the explicit field types and the added context statement.

Prompt

"I would like you to generate Python code to create the table 'predictions' with the following fields: 'id' of type SERIAL, 'num_entries' of type INT, 'variable' of type VARCHAR2(128), 'mean' of type FLOAT and 'variance' of type FLOAT, for the PostgreSQL database 'EHR', then write a query for 2,000,000 rows.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In PostgreSQL, VARCHAR2 is not a valid datatype. We use VARCHAR instead.
For querying 2,000,000 rows efficiently, it's recommended to use a server-side cursor. This will allow you to fetch the rows in manageable chunks, rather than trying to load all the rows into memory at once, which may not be possible with this volume of data."

ChatGPT source code update:
# Re-establish connection and cursor
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)

# Create a server-side cursor
cursor_name = "my_cursor"
cursor = connection.cursor(cursor_name)

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch rows in manageable chunks
while True:
    rows = cursor.fetchmany(size=1000)  # adjust this number as necessary
    if not rows:
        break

    # Process each row
    for row in rows:
        print(row)

# Close the cursor and connection
cursor.close()
connection.close()

Analysis

  • ChatGPT detected the incorrect type VARCHAR2 for the variable.
  • The low-latency constraint is addressed by fetching batches of 1,000 rows at a time.
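The chunked fetch can be packaged as a reusable generator. This sketch is ours; fetch_in_batches is a hypothetical helper that works with any DB-API cursor exposing fetchmany (a stand-in cursor is used here to illustrate the behavior without a database):

```python
def fetch_in_batches(cursor, batch_size=1000):
    """Yield rows from a DB-API cursor in fixed-size batches,
    keeping at most `batch_size` rows in memory at a time."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            return
        for row in rows:
            yield row

# Stand-in cursor used to illustrate the behavior without a database
class FakeCursor:
    def __init__(self, rows):
        self._rows = rows
    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

consumed = list(fetch_in_batches(FakeCursor(list(range(2500))), batch_size=1000))
```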

Template Pattern

We introduce the template pattern to our initial prompt, instructing ChatGPT to adhere to a specific format for its response. The modification made to the preceding prompt is the added HTML template instruction.

Prompt

"I would like you to generate Python code to create the table 'predictions' with the following fields: 'id' of type SERIAL, 'num_entries' of type INT, 'variable' of type VARCHAR(128), 'mean' of type FLOAT and 'variance' of type FLOAT, for the PostgreSQL database 'EHR', then write a query for a large number of rows.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In this code, a server-side cursor is used to fetch rows from the 'predictions' table, and the fetched rows are stored in a pandas DataFrame. This DataFrame is then converted to HTML and inserted into the HTML template. The HTML template is then saved to an HTML file named output.html."

ChatGPT code update:
import pandas as pd

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows into pandas DataFrame
df = pd.read_sql_query("SELECT * FROM predictions;", connection)

# Convert DataFrame to HTML
html = df.to_html()

# HTML template
html_output = f"""
<!DOCTYPE html>
<html>
<head>
    <title>MYTITLE</title>  <!-- replace with your actual title -->
    <style>
    body {{
        background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
    }}
    table {{
        width: 100%;
        border-collapse: collapse;
    }}
    th, td {{
        border: 1px solid black;
        padding: 8px;
        text-align: left;
    }}
    </style>
</head>
<body>
    {html}
</body>
</html>
"""

# Save the HTML output
with open("output.html", "w") as file:
    file.write(html_output)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code executes as intended.
  • ChatGPT generates the HTML output by dumping the result set into a pandas DataFrame and invoking its to_html() method, as we did not specify an approach in our request.
  • However, ChatGPT did not parameterize html_output, merely suggesting, in HTML comments, to replace the placeholders.
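For comparison, one way the template could have been parameterized is with str.format over named placeholders. The function and placeholder names below are ours:

```python
def render_report(table_html: str, title: str, background_color: str) -> str:
    """Fill the title and background-color placeholders with actual values."""
    template = """<!DOCTYPE html>
<html>
<head>
    <title>{title}</title>
    <style>
    body {{ background-color: {background_color}; }}
    table {{ width: 100%; border-collapse: collapse; }}
    th, td {{ border: 1px solid black; padding: 8px; text-align: left; }}
    </style>
</head>
<body>
{table_html}
</body>
</html>"""
    return template.format(
        title=title, background_color=background_color, table_html=table_html
    )

# Example: fill the template with a trivial table
page = render_report("<table></table>", "Predictions", "#f0f0f0")
```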

Visualization generator

As explained in the introduction, we use this pattern to instruct ChatGPT to generate a script that renders the HTML output in a specified web browser.

Prompt

"I would like you to generate Python code to create the table 'predictions' with the following fields: 'id' of type SERIAL, 'num_entries' of type INT, 'variable' of type VARCHAR(128), 'mean' of type FLOAT and 'variance' of type FLOAT, for the PostgreSQL database 'EHR', then write a query for a large number of rows.
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on the Chrome browser."

Answer

ChatGPT comments: "The resulting HTML output of the query will be displayed in a new tab in your default web browser. If you wish to open it specifically in Chrome, you'll need to provide the path to your Chrome executable and uncomment the relevant lines."

ChatGPT code update
import webbrowser
import os

filename = "output.html"

# Open the web page in the web browser
filepath = 'file://' + os.path.realpath(filename)

# To open in your default web browser
webbrowser.open_new_tab(filepath)

# Or, to specifically open in Chrome, uncomment the following lines
# and replace path_to_chrome with the path to your Chrome executable
# path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
# webbrowser.get(path_to_chrome).open_new_tab(filepath)


Analysis
  • We didn't specify the programming language for the visualization generator/script in our request. Consequently, ChatGPT correctly assumes that Python is the language to be used for rendering the HTML output of the query. 
  • It's worth noting that ChatGPT provided two statements for launching and opening the browser: one for the default browser and another tailored for Chrome.
  • Similar to the previous response, ChatGPT did not include error handling code.
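A sketch of the missing error handling around the browser launch (the function names and the chrome_path parameter are ours; webbrowser.get accepts a command line containing %s, which is one way to target a specific Chrome binary):

```python
import os
import webbrowser
from typing import Optional

def file_url(filename: str) -> str:
    """Build a file:// URL for a local file."""
    return 'file://' + os.path.realpath(filename)

def open_report(filename: str, chrome_path: Optional[str] = None) -> bool:
    """Open the HTML report, preferring Chrome if a path is supplied.
    Returns False instead of raising if no browser can be launched."""
    url = file_url(filename)
    try:
        if chrome_path:
            # A command line with %s tells webbrowser which binary to launch
            browser = webbrowser.get(f'"{chrome_path}" %s')
        else:
            browser = webbrowser.get()
        return browser.open_new_tab(url)
    except webbrowser.Error:
        return False
```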

Reflection pattern

Up to this point, ChatGPT has not incorporated error handling code into its responses. Consequently, we've opted to include it in our existing set of contextual statements within the prompt. In order to demonstrate the application of the reflection pattern, we've asked ChatGPT to provide its rationale for each component of its response.

Prompt

"I would like you to generate Python code to create the table 'predictions' with the following fields: 'id' of type SERIAL, 'num_entries' of type INT, 'variable' of type VARCHAR(128), 'mean' of type FLOAT and 'variance' of type FLOAT, for the PostgreSQL database 'EHR', then write a query for a large number of rows.
Please add exception handling when connecting to the database.
Can you explain the reasoning and assumptions behind your answer?
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on the Chrome browser."

Answer

ChatGPT comments
"In the context of your request, here is the reasoning and assumptions:
PostgreSQL Connection: You mentioned that the PostgreSQL database is hosted on a remote service with the URL "mypostgresql". This implies that the Python script will require network access and the necessary credentials (username, password) to access the database.

Data Query: You want to query a large number of rows. This implies that the 'predictions' table will hold a large amount of data. For large result sets, it's good practice to use a server-side cursor, which fetches and processes the rows in manageable chunks. This reduces memory consumption and can potentially speed up the response times of your queries.

Table Creation: You want to create a table named "predictions" with specific fields and data types. This implies that you have the required permissions to create tables in the database.

HTML Output: You want to format the query output in HTML with a specific title and background color. The pandas library's to_html() method is ideal for this. It converts a DataFrame into an HTML table, which can then be customized with additional HTML and CSS.

Browser Display: You want the HTML output to be displayed in the Chrome browser. The Python webbrowser module can open URLs in new browser windows or tabs. However, it uses the system's default browser by default. If you specifically want to open the page in Chrome, you can do so by providing the path to the Chrome executable.
Assumptions:

Libraries: This solution assumes that you have the necessary Python libraries (psycopg2 and pandas) installed. If not, you can install them using pip (pip install psycopg2-binary pandas).

Permissions: The solution assumes that you have the necessary permissions to create tables and query data in the specified PostgreSQL database.

Data Existence: The solution assumes that there is data in the 'predictions' table when querying. If the table is empty, the HTML output will only include the table headers.

Server Performance: Handling large number of rows depends on the capabilities of your database server. The server needs to have enough resources to handle such large data sets efficiently.

Browser: This solution assumes that Chrome browser is installed on your system. If it's not, the webbrowser module's open methods will not work as expected. The exact path to Chrome will also depend on your operating system and the specific location where Chrome is installed.

Now, with this reasoning and assumptions, I'll provide the Python script in a follow-up response."

ChatGPT final source code
import psycopg2
import pandas as pd
import webbrowser
import os

# Define connection parameters
db_name = "EHR"
db_user = "username"  # replace with your actual username
db_password = "password"  # replace with your actual password
db_host = "localhost"  # replace with your actual host
db_port = "5432"  # default PostgreSQL port

try:
    # Connect to the PostgreSQL database
    connection = psycopg2.connect(
        dbname=db_name,
        user=db_user,
        password=db_password,
        host=db_host,
        port=db_port
    )

    # Create a cursor object
    cursor = connection.cursor()

    # Create the 'predictions' table
    cursor.execute("""
        CREATE TABLE predictions (
            id SERIAL PRIMARY KEY,
            num_entries INT,
            variable VARCHAR(128),
            mean FLOAT,
            variance FLOAT
        );""")
    connection.commit()

    # Close the cursor
    cursor.close()

    # Re-establish cursor with a server-side cursor for fetching a large number of rows
    cursor = connection.cursor("my_cursor")

    # Execute a SELECT query
    cursor.execute("SELECT * FROM predictions;")

    # Fetch all rows into a pandas DataFrame
    df = pd.read_sql_query("SELECT * FROM predictions;", connection)

    # Convert DataFrame to HTML
    html = df.to_html()

    # HTML template
    html_output = f"""
    <!DOCTYPE html>
    <html>
    <head>
        <title>MYTITLE</title>  <!-- replace with your actual title -->
        <style>
        body {{
            background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
        }}
        table {{
            width: 100%;
            border-collapse: collapse;
        }}
        th, td {{
            border: 1px solid black;
            padding: 8px;
            text-align: left;
        }}
        </style>
    </head>
    <body>
        {html}
    </body>
    </html>
    """

    # Save the HTML output
    filename = "output.html"
    with open(filename, "w") as file:
        file.write(html_output)

    # Open the web page in the default web browser
    filepath = 'file://' + os.path.realpath(filename)
    webbrowser.open_new_tab(filepath)

    # Or, to specifically open in Chrome, uncomment the following lines
    # and replace path_to_chrome with the path to your Chrome executable
    # path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
    # webbrowser.get(path_to_chrome).open_new_tab(filepath)

except psycopg2.Error as error:
    print(f"An error occurred while connecting to the PostgreSQL database: {error}")
finally:
    # Always close the cursor and connection, whether an error occurred or not
    if 'cursor' in locals():
        cursor.close()
    if 'connection' in locals():
        connection.close()

Analysis
  • In accordance with our specific request, ChatGPT has incorporated exception handling into the most recent version of the code.
  • Furthermore, ChatGPT offers insights into its choice of using Pandas for HTML generation and makes assumptions about the PostgreSQL installation status, table condition, query size, and other aspects.
  • It's worth noting that the comment regarding server/database performance could have included additional guidance on multi-threaded execution, creating views, and manipulating cursors within the Python code.


Conclusion

We have shown that the quality of the generated code improves significantly when an initial, basic prompt is enhanced with successive patterns.
The combination of these prompt patterns improves the overall quality of the generated Python code, sometimes beyond what each pattern contributes individually.
It's worth noting that there are numerous other patterns worth exploring [ref 5].


Thank you for reading this article. For more information ...

References

[3] Secure ChatGPT API Client in Scala
[4] "Design Patterns: Elements of Reusable Object-Oriented Software", E. Gamma, R. Helm, R. Johnson, J. Vlissides, Addison-Wesley Professional Computing Series, 1995
[5] "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT", J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, D. C. Schmidt



---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3


Wednesday, February 15, 2023

Secure ChatGPT API Client in Scala

Target audience: Beginner
Estimated reading time: 3' 

This post introduces the OpenAI ChatGPT API and describes an implementation in Scala for which the API key is encrypted.

Table of contents
ChatGPT REST API
       HTTP parameters
       Request body
       Response body
Scala client
A simple encryption
Notes: 
  • The code associated with this article is written using Scala 2.12.15
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

ChatGPT REST API

The ChatGPT functionality is accessible through a REST API [ref 1]. The client for the HTTP POST can be implemented:
  • as an application that generates the request, including the JSON content, and processes the response (curl, Postman, ...)
  • programmatically, by defining the request and response as self-contained data structures (class, data or case class, ...) to manage the remote service.

HTTP parameters

The connectivity parameters for the HTTP POST are:
  • OPEN_AI_KEY=xxxxx
  • URL="https://api.openai.com/v1/chat/completions"
  • CONTENT_TYPE="application/json"
  • AUTHORIZATION=s"Bearer ${OPEN_AI_KEY}"

Request body

The parameters of the POST content:

  • model: Identifier of the model (e.g., gpt-3.5-turbo)
  • messages: Text of the conversation
  • user: Identifier for the user
  • role: Role of the author of the message {system|user|assistant}
  • content: Content of the message or request
  • name: Name of the author
  • temperature: Hyper-parameter that controls the "creativity" of the language model by adjusting the distribution (softmax) for the prediction of the next word/token. The higher the value (> 0) the more diverse the prediction (default: 1)
  • top_p: Sample the tokens within the top_p probability mass. It is an alternative to temperature (default: 1)
  • n: Number of solutions/predictions (default: 1)
  • max_tokens: Limit on the number of tokens used in the response (default: Infinity)
  • presence_penalty: Penalizes new tokens which already appear in the text so far if positive. A higher value favors new topics (default: 0)
  • frequency_penalty: Penalizes new tokens which appear in the text with higher frequency if the value is positive (default: 0)
  • logit_bias: Map from specific tokens to the likelihood of their appearance in the message

Note: The model and messages parameters are mandatory; all others are optional.
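For illustration, a minimal request body (values are placeholders) combining the mandatory parameters with a couple of optional hyper-parameters might look like:

```json
{
  "model": "gpt-3.5-turbo",
  "user": "user-1234",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the color of the moon?"}
  ],
  "temperature": 0.2,
  "max_tokens": 256
}
```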


Response body

  • id: Conversation identifier
  • object: Type of the response payload
  • created: Creation timestamp
  • usage.prompt_tokens: Number of tokens used in the prompt
  • usage.completion_tokens: Number of tokens used in the completion
  • usage.total_tokens: Total number of tokens
  • choices: List of generated responses
  • choices.message.role: Role used in the request
  • choices.message.content: Response content
  • choices.finish_reason: Description of the state of completion of the request.
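An illustrative response payload with these fields (identifiers and token counts are made up):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo",
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 12,
    "total_tokens": 37
  },
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The moon typically appears pale gray or white."
      },
      "finish_reason": "stop"
    }
  ]
}
```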

Scala client

The first step in implementing the client to the ChatGPT API is to define the POST request and response data structures. We define two types of requests:
  • A simple user request, ChatGPTUserRequest
  • A request with model hyper-parameters, ChatGPTDevRequest, which includes most of the parameters defined in the previous section.
We use the Jackson serialization/deserialization library [ref 2] to convert a request structure into JSON and a JSON response into a response object.
The handle to the serializer/deserializer, mapper, is defined and configured in the singleton ChatGPTRequest.
The toJson method converts any ChatGPT request into a JSON string, which is then converted into an array of bytes in the method toJsonBytes.

trait ChatGPTRequest {
   import ChatGPTRequest._ 
    
   def toJson: String = mapper.writeValueAsString(this)
   def toJsonBytes: Array[Byte] = toJson.getBytes
}


object ChatGPTRequest {
        // Instantiate a singleton for the Jackson serializer/deserializer
    val mapper = JsonMapper.builder().addModule(DefaultScalaModule).build()
    mapper.registerModule(DefaultScalaModule)
    mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
}

The basic end user and comprehensive request structure implement the request parameters defined in the first section.
case class ChatGPTMessage(role: String, content: String)

case class ChatGPTUserRequest(model: String, messages: Seq[ChatGPTMessage]) 
      extends ChatGPTRequest

case class ChatGPTDevRequest(
    model: String,
    user: String,
    prompt: String,
    temperature: Double,
    max_tokens: Int,
    top_p: Int = 1,
    n: Int = 1,
    presence_penalty: Int = 0,
    frequency_penalty: Int = 1) extends ChatGPTRequest


The response, ChatGPTResponse, reflects the REST API specification described in the first section. These classes are implemented as case classes so Jackson can deserialize the JSON payload into objects.

case class ChatGPTChoice(
   message: ChatGPTMessage, 
   index: Int, 
   finish_reason: String)

case class ChatGPTUsage(
   prompt_tokens: Int, 
   completion_tokens: Int, 
   total_tokens: Int)

case class ChatGPTResponse(
    id: String,  
    `object`: String,
    created: Long,
    model: String,
    choices: Seq[ChatGPTChoice],
    usage: ChatGPTUsage)


For convenience we implemented two constructors for the ChatGPTClient class:
  • A fully defined constructor taking 3 arguments: the complete request url, a timeout and an encrypted API key, encryptedAPIKey
  • A simplified constructor with timeout and encryptedAPIKey as arguments.
The connection is implemented with the HttpURLConnection Java class and uses an encrypted API key. The secure access to the ChatGPT service is defined by the authorization property.
The request implementing the ChatGPTRequest trait is converted into bytes and pushed into the outputStream. In our example, we just throw an IllegalStateException in case the HTTP error code is not 200. A recovery handler (httpCode: Int) => ChatGPTRequest would be more useful.

import com.fasterxml.jackson.databind.json.JsonMapper
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import java.io.{BufferedReader, InputStreamReader, OutputStream}
import java.net.{HttpURLConnection, URL}


class ChatGPTClient(
   url: String, 
   timeout: Int,
   encryptedAPIKey: String
) {
   import ChatGPTClient._

  def apply(chatGPTRequest: ChatGPTRequest): Option[ChatGPTResponse] = {
     var outputStream: Option[OutputStream] = None

     try {
        EncryptionUtil.unapply(encryptedAPIKey).map(
          apiKey => {
            // Create and initialize the HTTP connection
            val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
            connection.setRequestMethod("POST")
            connection.setRequestProperty("Content-Type", "application/json")
            connection.setRequestProperty("Authorization", s"Bearer $apiKey")
            connection.setConnectTimeout(timeout)
            connection.setDoOutput(true)

            // Write into the connection output stream
            outputStream = Some(connection.getOutputStream)
            outputStream.foreach(_.write(chatGPTRequest.toJsonBytes))

               // If request failed.... just throw an exception for now.
            if(connection.getResponseCode != HttpURLConnection.HTTP_OK)
              throw new IllegalStateException(
                   s"Request failed with HTTP code ${connection.getResponseCode}"
              )

             // Retrieve the JSON string from the connection input stream
            val out = new BufferedReader(new InputStreamReader(connection.getInputStream))
                .lines
                .reduce(_ + _)
                .get
            // Instantiate the response from the Chat GPT output
            ChatGPTResponse.fromJson(out)
        }
      )
    }
    catch {
      case e: java.io.IOException =>
        logger.error(e.getMessage)
        None
      case e: IllegalStateException =>
        logger.error(e.getMessage)
        None
    }
    finally {
      outputStream.foreach(os => {
        os.flush
        os.close
      })
    }
  }
}

The request/response to/from ChatGPT service is implemented by the method apply which returns an optional ChatGPTResponse if successful, None otherwise

val model = "gpt-3.5-turbo"
val encryptedAPIKey = "8df109aed"
val prompt = "What is the color of the moon"
val timeout = 200
val request = ChatGPTUserRequest(model, Seq(ChatGPTMessage("user", prompt)))

// Simply instantiate, invoke ChatGPT and print its output
val chatGPTClient = ChatGPTClient(timeout, encryptedAPIKey)
chatGPTClient(request).foreach(println(_))

with the following output....
The color of the moon is typically described as a pale gray or white. However, the appearance of the moon's color can vary depending on various factors such as atmospheric conditions, the moon's position in the sky,...


A simple encryption

As mentioned in the introduction, a simple and easy way to secure access to any remote service is to encrypt credentials such as passwords and keys, so they do not appear in clear text in configuration or properties files, docker files, or source code.
Encryption is the process of applying a key to plain text to transform it into unintelligible (cipher) text. Only programs with the key to turn the cipher text back into the original text can decrypt the protected information.
The javax.crypto Java package [ref 3] provides developers with classes, interfaces and algorithms for cryptography.

Our code snippet relies on a cipher with a basic AES encryption scheme. The Apache Commons Codec package is used to implement the Base64 encoder.
The method apply, in the following code snippet, initializes the cipher in encryption mode to encrypt the raw API key prior to encoding it in Base64.

import javax.crypto.spec.{IvParameterSpec, SecretKeySpec}
import javax.crypto.Cipher
import org.apache.commons.codec.binary.Base64.{decodeBase64, encodeBase64String}


  // Define the parameters of the cipher using the AES encryption scheme
final val AesLabel = "AES"
final val EncodingScheme = "UTF-8"
final val key = "aesEncryptorABCD"
final val initVector = "aesInitVectorABC"

  // Instantiate the cipher
val iv = new IvParameterSpec(initVector.getBytes(EncodingScheme))
val keySpec = new SecretKeySpec(key.getBytes(), AesLabel)
val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")


  /**
   * Encrypt a string or content using AES and a Base64 bytes representation
   * @param clearCredential String to be encrypted
   * @return Optional encrypted string
   */
def apply(clearCredential: String): Option[String] =  {
   cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)

   val encrypted = cipher.doFinal(clearCredential.getBytes)
   Some(encodeBase64String(encrypted))
} 

The decryption method, unapply, reverses the steps used in the encryption:
  1. initialize the cipher in decryption mode
  2. decode the encrypted API key from Base64
  3. apply the cipher (doFinal)
def unapply(encryptedCredential: String): Option[String] = {
   cipher.init(Cipher.DECRYPT_MODE, keySpec, iv)

   val decrypted = cipher.doFinal(decodeBase64(encryptedCredential))
   Some(new String(decrypted))
}


Thank you for reading this article. For more information ...

References

Environments: Scala 2.12.15,   JDK 11,  Apache Commons Text 1.9,  FasterXML Jackson 2.13.1

