
Tuesday, March 21, 2023

Generate Code with ChatGPT: Reusable Prompt Patterns

Target audience: Advanced
Estimated reading time: 4'  

Wouldn't it be fantastic if ChatGPT could generate source code? If it could, what level of quality could we expect from the generated output?

This post introduces the idea of employing reusable prompt patterns with large language models, focusing on their application in generating Python code. 
To illustrate the concept, we present a straightforward use case: generating a PostgreSQL database application that includes update and query functionalities, showcasing the effectiveness of these prompt patterns.



Overview

Strategically crafting prompts for interacting with conversational large language models plays a vital role in enhancing the quality of their responses and suggestions.

The OpenAI documentation states: "Designing your prompt is essentially how you 'program' the model, usually by providing some instructions or a few examples. This is different from most other NLP services which are designed for a single task, such as sentiment classification or named entity recognition. Instead, the completions and chat completions endpoint can be used for virtually any task including content or code generation, summarization, expansion, conversation, creative writing, and style transfer." [ref 1]


Reusable patterns

Since 1995, the field of software engineering has greatly benefited from the introduction and evolution of design patterns [ref 4]. These design patterns serve to name, abstract, and identify key elements of common design structures, offering software developers reusable solutions. Over time, various design patterns, such as factory, template, composite, and observer, have been applied, assessed, and documented.
It's logical to apply a similar approach to the emerging discipline of prompt engineering, considering the existing literature that identifies patterns within conversational prompts. 

This article draws upon the framework outlined in the seminal paper titled "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" [ref 5]. Prompt patterns, much like their counterparts in software engineering, provide reusable solutions to specific problems. They are categorized based on attributes such as Category (or purpose), Intent and context, Motivation, Contextual statements, Examples, and Consequences.

Here is a subset of prompt patterns from the catalog:

Pattern — Purpose — Contextual statements

Persona (Output customization)
Assign a persona, role, or domain expert to the LLM. The persona can be expressed as a role, job description, title, or historical or well-known figure.
  • Act as persona X
  • Provide outputs that persona X would create

Template (Output customization)
Ensure the output follows a precise template (i.e., format, URL, example, ...). This pattern instructs the LLM to use a specific format in designated portions of the answer.
  • I am going to provide you with a template for your output
  • X is my placeholder for content
  • Try to fit the output into one or more of the placeholders that I listed
  • Please preserve the formatting and template I provided
  • Here is the template to follow: PATTERN with PLACEHOLDER

Cognitive verifier (Prompt improvement)
The quality of the LLM's answers improves if the initial question is broken into additional sub-questions.
  • When you are asked a question, follow these rules
  • Generate several additional questions that would help answer the original question more accurately
  • Combine the answers to the individual questions to produce the final aggregate answer to the overall question

Fact check list (Error identification)
Request that the LLM provide or append a list of facts/assumptions to the answer, so the user may perform due diligence.
  • Generate a set of facts that are contained in the output
  • The set of facts should be inserted or appended to the output
  • The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect

Output automater (Output customization)
Have the LLM generate a script or automated task that can execute any steps the LLM recommends.
  • Whenever you produce an output with steps, always do this: produce an executable artifact of type X that will automate these steps

Reflection (Error identification)
Ask the LLM to explain the rationale behind a given answer to the user. The pattern clarifies any points of confusion, underlying assumptions, gaps in knowledge, ...
  • Whenever you generate an answer, explain the reasoning and assumptions behind your answer, so I can improve my question

Visualization generator (Output customization)
Use generated text to create a visualization, as complex concepts are easier to grasp with diagrams and images. The LLM output should create a pathway for a tool to produce imagery.
  • Generate an X that I can provide to tool Y to visualize it
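Such contextual statements lend themselves to programmatic reuse. The sketch below (not from the catalog paper; the wording of each fragment is illustrative) shows one way to assemble a prompt from optional pattern fragments in Python:

```python
from typing import Optional

def build_prompt(task: str,
                 persona: Optional[str] = None,
                 template: Optional[str] = None,
                 reflect: bool = False) -> str:
    """Assemble a prompt from a task statement and optional pattern fragments."""
    parts = [task]
    if persona:    # Persona pattern: assign a role or context
        parts.append(f"Context: {persona}")
    if template:   # Template pattern: constrain the output format
        parts.append("I am going to provide you with a template for your output. "
                     f"Here is the template to follow: {template}")
    if reflect:    # Reflection pattern: request the rationale
        parts.append("Whenever you generate an answer, explain the reasoning "
                     "and assumptions behind your answer.")
    return "\n".join(parts)
```

Each fragment can then be toggled on or off as the conversation is refined, mirroring the iterative process used in the rest of this post.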


Use case

To assess the influence of different prompt patterns on the quality of ChatGPT responses, we employ a specific use case: building a basic Python application for interacting with a PostgreSQL database, encompassing functionalities for access, update, and querying.

The evaluation procedure involves initiating a conversation with ChatGPT using an initial request or prompt. Subsequently, we iteratively enhance the prompt by applying various patterns, including Persona, Template, Visualization generator, and Reflection patterns.

Each interaction or prompt within the conversation includes:
  1. The prompt's objective.
  2. The prompt message itself.
  3. ChatGPT's response, accompanied by comments and source code.
  4. Our subsequent analysis.

Initial request

Prompt

"I would like you to generate Python code for create the table ’predictions’ with the following fields, ‘id’, ‘num_entries’, ‘’variable’, ‘mean’’ and variance’ for the PostgreSQL database ‘EHR’, then write a query for large number of rows."

Answer

ChatGPT code
Initialization of connection, creation of table and insertion of random values.
import psycopg2
from psycopg2 import sql
import random

# Define connection parameters
db_name = "EHR"
db_user = "username"
db_password = "password"
db_host = "localhost"
db_port = "5432"

# Connect to the PostgreSQL database
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)

# Create a cursor object
cursor = connection.cursor()

# Create the 'predictions' table
cursor.execute("""CREATE TABLE predictions (
    id SERIAL PRIMARY KEY,
    num_entries INTEGER,
    variable VARCHAR(255),
    mean FLOAT,
    variance FLOAT);"""
)
connection.commit()

# Insert a large number of rows into the 'predictions' table
for i in range(10000):  # adjust this number as necessary
    cursor.execute(
        """INSERT INTO predictions (num_entries, variable, mean, variance)
           VALUES (%s, %s, %s, %s)""",
        (
            random.randint(1, 100),   # num_entries
            f'variable_{i}',          # variable
            random.uniform(0, 100),   # mean
            random.uniform(0, 10)     # variance
        )
    )
connection.commit()

# Close the cursor and connection
cursor.close()
connection.close()

Implementation of the query
# Re-establish connection and cursor
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)
cursor = connection.cursor()

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows from the query
rows = cursor.fetchall()
# Print each row
for row in rows:
    print(row)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code functions as intended. ChatGPT correctly infers that PostgreSQL is deployed locally and uses randomly generated values to exercise the code. The only minor issue is that the table is created without an existence check ('CREATE TABLE IF NOT EXISTS').
  • In the second code snippet, the query is validated with a straightforward print statement for each row. To prevent excessive standard output, the print statements could have been applied to a subset of rows.
  • ChatGPT doesn't include an exception-handling mechanism, as it wasn't explicitly required. Software developers should expect to add proper error handling to the generated source code for safety.
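The missing error handling noted above can be sketched as a small reusable context manager. This is a hedged sketch, not ChatGPT's output: the `connect` callable stands in for `psycopg2.connect`, and in real code the caller would catch `psycopg2.Error`.

```python
from contextlib import contextmanager

@contextmanager
def managed_connection(connect, **params):
    """Open a connection, yield it, and guarantee it is closed."""
    connection = None
    try:
        connection = connect(**params)
        yield connection
    finally:
        if connection is not None:
            connection.close()
```

With psycopg2 this would read `with managed_connection(psycopg2.connect, dbname="EHR", ...) as conn:` and the surrounding code would wrap the body in `try/except psycopg2.Error`.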

Persona Pattern

The objective is to assess ChatGPT's ability to identify an erroneous input, 'VARCHAR2(128)', and to take the low-latency context into account. The modifications made to the original prompt are indicated in red.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT,  ‘’variable’ with type VARCHAR2(128), ‘mean’’ with type FLOAT and variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for 2000000 rows.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In PostgreSQL, VARCHAR2 is not a valid datatype. We use VARCHAR instead.
For querying 2,000,000 rows efficiently, it's recommended to use a server-side cursor. This will allow you to fetch the rows in manageable chunks, rather than trying to load all the rows into memory at once, which may not be possible with this volume of data."

ChatGPT source code update:
# Re-establish connection and cursor
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)

# Create a server-side cursor
cursor_name = "my_cursor"
cursor = connection.cursor(cursor_name)

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch rows in manageable chunks
while True:
    rows = cursor.fetchmany(size=1000)  # adjust this number as necessary
    if not rows:
        break

    # Process each row
    for row in rows:
        print(row)

# Close the cursor and connection
cursor.close()
connection.close()

Analysis

  • ChatGPT detected the incorrect type VARCHAR2 for the field 'variable'.
  • The low-latency constraint is addressed by fetching batches of 1,000 rows at a time.
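The fetchmany loop ChatGPT produced can be generalized into a reusable generator, a small sketch of our own rather than part of ChatGPT's answer; `cursor` is any DB-API cursor, such as a psycopg2 server-side cursor:

```python
def iter_rows(cursor, batch_size=1000):
    """Yield rows one at a time while fetching them from the server in batches."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            return
        yield from rows
```

Callers then write `for row in iter_rows(cursor):` and never hold more than one batch in memory.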

Template Pattern

We introduce the template pattern to our initial prompt, instructing ChatGPT to adhere to a specific format for its response. The modification made to the preceding prompt is marked in red.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT,  ‘’variable’ with type VARCHAR(128), ‘mean’’ with type FLOAT and variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows. 
The output has to be formatted in HTML. I provide a template for you output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two place holders MYTITLE and BACKGROUNDCOLOR.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In this code, a server-side cursor is used to fetch rows from the 'predictions' table, and the fetched rows are stored in a pandas DataFrame. This DataFrame is then converted to HTML and inserted into the HTML template. The HTML template is then saved to an HTML file named output.html."

ChatGPT code update:
import pandas as pd

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows into pandas DataFrame
df = pd.read_sql_query("SELECT * FROM predictions;", connection)

# Convert DataFrame to HTML
html = df.to_html()

# HTML template
html_output = f"""
<!DOCTYPE html>
<html>
<head>
    <title>MYTITLE</title>  <!-- replace with your actual title -->
    <style>
    body {{
        background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
    }}
    table {{
        width: 100%;
        border-collapse: collapse;
    }}
    th, td {{
        border: 1px solid black;
        padding: 8px;
        text-align: left;
    }}
    </style>
</head>
<body>
    {html}
</body>
</html>
"""

# Save the HTML output
with open("output.html", "w") as file:
    file.write(html_output)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code executes as intended.
  • Since we did not specify an approach in our request, ChatGPT generates the HTML output by dumping the result set into a pandas DataFrame and invoking its to_html() method.
  • However, ChatGPT did not parameterize html_output, merely suggesting in an HTML comment that the placeholders be replaced.
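A hypothetical fix for that last point: treat MYTITLE and BACKGROUNDCOLOR as real placeholders filled in by the caller, instead of HTML comments asking the reader to edit the string. The function and template names below are our own illustration.

```python
HTML_TEMPLATE = """<!DOCTYPE html>
<html>
<head>
    <title>{title}</title>
    <style>body {{ background-color: {background_color}; }}</style>
</head>
<body>
{table}
</body>
</html>"""

def render_report(table_html: str,
                  title: str = "MYTITLE",
                  background_color: str = "BACKGROUNDCOLOR") -> str:
    """Fill the two placeholders and splice in the pandas-generated table."""
    return HTML_TEMPLATE.format(title=title,
                                background_color=background_color,
                                table=table_html)
```

For instance, `render_report(df.to_html(), title="EHR predictions", background_color="white")` produces a complete page with no manual editing required.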

Visualization generator

As explained in the introduction, we utilize this pattern to instruct ChatGPT to generate a script that will render HTML output in a specified web browser.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT,  ‘’variable’ with type VARCHAR(128), ‘mean’’ with type FLOAT and variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows. 
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for you output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two place holders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on Chrome browser."

Answer

ChatGPT comments: "The resulting HTML output of the query will be displayed in a new tab in your default web browser. If you wish to open it specifically in Chrome, you'll need to provide the path to your Chrome executable and uncomment the relevant lines."

ChatGPT code update
import os
import webbrowser

# Path of the HTML file generated in the previous step
filename = "output.html"

# Open the web page in the web browser
filepath = 'file://' + os.path.realpath(filename)

# To open in your default web browser
webbrowser.open_new_tab(filepath)

# Or to specifically open in Chrome, uncomment the following lines
# and replace path_to_chrome with the path to your Chrome executable
# path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
# webbrowser.get(path_to_chrome).open_new_tab(filepath)


Analysis
  • We didn't specify the programming language for the visualization generator/script in our request. Consequently, ChatGPT correctly assumes that Python is the language to be used for rendering the HTML output of the query. 
  • It's worth noting that ChatGPT provided two statements for launching and opening the browser: one for the default browser and another tailored for Chrome.
  • Similar to the previous response, ChatGPT did not include error handling code.

Reflection pattern

Up to this point, ChatGPT has not incorporated error handling code into its responses. Consequently, we've opted to include it in our existing set of contextual statements within the prompt. In order to demonstrate the application of the reflection pattern, we've asked ChatGPT to provide its rationale for each component of its response.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT,  ‘’variable’ with type VARCHAR(128), ‘mean’’ with type FLOAT and variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows. 
Please add exception handling in connecting to the database
Can you explain your reasoning and assumption behind your answer
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for you output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two place holders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on Chrome browser."

Answer

ChatGPT comments
"In the context of your request, here is the reasoning and assumptions:
PostgreSQL Connection: You mentioned that the PostgreSQL database is hosted on a remote service with the URL "mypostgresql". This implies that the Python script will require network access and the necessary credentials (username, password) to access the database.

Data Query: You want to query a large number of rows. This implies that the 'predictions' table will hold a large amount of data. For large result sets, it's good practice to use a server-side cursor, which fetches and processes the rows in manageable chunks. This reduces memory consumption and can potentially speed up the response times of your queries.

Table Creation: You want to create a table named "predictions" with specific fields and data types. This implies that you have the required permissions to create tables in the database.

HTML Output: You want to format the query output in HTML with a specific title and background color. The pandas library's to_html() method is ideal for this. It converts a DataFrame into an HTML table, which can then be customized with additional HTML and CSS.

Browser Display: You want the HTML output to be displayed in the Chrome browser. The Python webbrowser module can open URLs in new browser windows or tabs. However, it uses the system's default browser by default. If you specifically want to open the page in Chrome, you can do so by providing the path to the Chrome executable.
Assumptions:

Libraries: This solution assumes that you have the necessary Python libraries (psycopg2 and pandas) installed. If not, you can install them using pip (pip install psycopg2-binary pandas).

Permissions: The solution assumes that you have the necessary permissions to create tables and query data in the specified PostgreSQL database.

Data Existence: The solution assumes that there is data in the 'predictions' table when querying. If the table is empty, the HTML output will only include the table headers.

Server Performance: Handling large number of rows depends on the capabilities of your database server. The server needs to have enough resources to handle such large data sets efficiently.

Browser: This solution assumes that Chrome browser is installed on your system. If it's not, the webbrowser module's open methods will not work as expected. The exact path to Chrome will also depend on your operating system and the specific location where Chrome is installed.

Now, with this reasoning and assumptions, I'll provide the Python script in a follow-up response."

ChatGPT final source code
import psycopg2
import pandas as pd
import webbrowser
import os

# Define connection parameters
db_name = "EHR"
db_user = "username"      # replace with your actual username
db_password = "password"  # replace with your actual password
db_host = "localhost"     # replace with your actual host
db_port = "5432"          # default PostgreSQL port

try:
    # Connect to the PostgreSQL database
    connection = psycopg2.connect(
        dbname=db_name,
        user=db_user,
        password=db_password,
        host=db_host,
        port=db_port
    )

    # Create a cursor object
    cursor = connection.cursor()

    # Create the 'predictions' table
    cursor.execute("""
        CREATE TABLE predictions (
            id SERIAL PRIMARY KEY,
            num_entries INT,
            variable VARCHAR(128),
            mean FLOAT,
            variance FLOAT
        );""")
    connection.commit()

    # Close the cursor
    cursor.close()

    # Re-establish a server-side cursor for fetching a large number of rows
    cursor = connection.cursor("my_cursor")

    # Execute a SELECT query
    cursor.execute("SELECT * FROM predictions;")

    # Fetch all rows into a pandas DataFrame
    df = pd.read_sql_query("SELECT * FROM predictions;", connection)

    # Convert the DataFrame to HTML
    html = df.to_html()

    # HTML template
    html_output = f"""
    <!DOCTYPE html>
    <html>
    <head>
        <title>MYTITLE</title>  <!-- replace with your actual title -->
        <style>
        body {{
            background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
        }}
        table {{
            width: 100%;
            border-collapse: collapse;
        }}
        th, td {{
            border: 1px solid black;
            padding: 8px;
            text-align: left;
        }}
        </style>
    </head>
    <body>
        {html}
    </body>
    </html>
    """

    # Save the HTML output
    filename = "output.html"
    with open(filename, "w") as file:
        file.write(html_output)

    # Open the web page in the default web browser
    filepath = 'file://' + os.path.realpath(filename)
    webbrowser.open_new_tab(filepath)

    # Or to specifically open in Chrome, uncomment the following lines
    # and replace path_to_chrome with the path to your Chrome executable
    # path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
    # webbrowser.get(path_to_chrome).open_new_tab(filepath)

except psycopg2.Error as error:
    print(f"An error occurred while connecting to the PostgreSQL database: {error}")
finally:
    # Always close the cursor and connection, whether an error occurred or not
    if 'cursor' in locals():
        cursor.close()
    if 'connection' in locals():
        connection.close()

Analysis
  • In accordance with our specific request, ChatGPT has incorporated exception handling into the most recent version of the code.
  • Furthermore, ChatGPT offers insights into its choice of using Pandas for HTML generation and makes assumptions about the PostgreSQL installation status, table condition, query size, and other aspects.
  • It's worth noting that the comment regarding server/database performance could have included additional guidance on multi-threaded execution, creating views, and manipulating cursors within the Python code.
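One of those suggestions, pushing aggregation into a database view so the client fetches summaries rather than millions of raw rows, can be sketched as follows. The view name and aggregates are illustrative; `cursor` is any DB-API cursor, and the column names follow the 'predictions' table used throughout.

```python
CREATE_VIEW_SQL = """
CREATE OR REPLACE VIEW prediction_summary AS
SELECT variable, AVG(mean) AS avg_mean, AVG(variance) AS avg_variance
FROM predictions
GROUP BY variable;
"""

def summarize_predictions(cursor):
    """Create (or refresh) the summary view, then query it."""
    cursor.execute(CREATE_VIEW_SQL)
    cursor.execute("SELECT * FROM prediction_summary;")
    return cursor.fetchall()
```

The aggregation then runs on the database server, which typically reduces both network traffic and client-side memory pressure.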


Conclusion

We've shown that the quality of the generated code improves significantly when an initial, basic prompt is enhanced with successive patterns.
The combination of these prompt patterns improves the overall quality of the generated Python code, sometimes more than each pattern contributes individually.
Many other patterns are worth exploring [ref 5].


Thank you for reading this article. For more information ...

References

[3] Secure ChatGPT API client in Scala
[4] Design Patterns: Elements of Reusable Object-Oriented Software, E. Gamma, R. Helm, R. Johnson, J. Vlissides - Addison-Wesley Professional Computing Series - 1995
[5] A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT, J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, D. C. Schmidt



---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3