
Thursday, June 15, 2023

Extend LangChain Sequences with Data Types

Target audience: Advanced
Estimated reading time: 4'  

Since its public debut last year, Generative AI, especially OpenAI's ChatGPT [ref 1], has captivated the collective imagination. Frameworks like LangChain [ref 2], built around LLM APIs, have subsequently emerged to streamline processes and workflows.

This post introduces the concept of a generative workflow by extending LangChain with typed inputs and conditions on input values. It assumes the reader has some basic knowledge of Python, OpenAI ChatGPT, and LangChain.


Notes: 

  • As this post was being published, OpenAI released a new version of gpt-3.5-turbo that supports functions with typed input and output (ChatGPT functions) [ref 3]
  • The code snippets use Python 3.9 and LangChain 0.0.200
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.
  • This post is not generated by ChatGPT but assumes the reader is already familiar with large language models
  • The source code is available on GitHub: https://github.com/patnicolas/chatgpt-patterns

Introduction

The LangChain Python framework is built on the OpenAI API to develop applications around large language models. The framework organizes the ChatGPT API functionality into functional components, similar to object-oriented design. These components are assembled into customizable sequences, or chains, that can be dynamically configured. It allows developers to define a sequence of tasks (chains) with a message/prompt as input (role=user) and an answer (role=assistant) as output [ref 4]. This concept is analogous to a traditional function call.

Let's consider a typical function call in Python:
    def func(**kwargs: dict[str, any]) -> output_type:
        ...
        return x

It is easy to establish an equivalence between the components of a function and the components of a chain.

Python                       LLM
Function call                LLM message/request
Function name (func)         Prompt prefix
Arguments (**kwargs)         List of tuples (variable_name, variable_type, condition)
Returned type (output_type)  LangChain output key

LangChain does not explicitly support types such as integer, dictionary, or float in input messages. The next section extends LangChain by adding types to ChatGPT request messages and, given the data type, a condition or filter on the variable.

Example:

  • Prompt prefix: "Compute the sum of elements of an array"
  • Arguments: (x, list[float], element > 0.5)

generates the following prompt: "Compute the sum of the elements of an array x of type list[float] for which elements > 0.5"
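The mapping from typed argument tuples to prompt text can be sketched with a small standalone helper. The function name `build_typed_prompt` and the exact wording are illustrative, not part of LangChain:

```python
def build_typed_prompt(task: str, arguments: list[tuple[str, str, str]]) -> str:
    """Assemble a prompt from a task description and typed argument tuples."""
    def describe(name: str, dtype: str, condition: str) -> str:
        text = f"{{{name}}} of type {dtype}"
        # Append the filter clause only when a condition is supplied
        return f"{text} for which {condition}" if condition else text

    return f"{task} " + ", ".join(describe(n, t, c) for n, t, c in arguments)
```

Calling it with the example above, `build_typed_prompt("Compute the sum of the elements of an array", [("x", "list[float]", "elements > 0.5")])`, yields a prompt of the shape shown in the example.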

The next section describes the Python implementation of a workflow of typed chains for ChatGPT using the LangChain framework.


Generative workflow

The first step is to install the LangChain Python module and set up the OpenAI API key as an environment variable on the target machine. The LangChain Quickstart guide [ref 5] is concise and easy to follow, so there is no need to duplicate that information in this post.

LLM chains and sequences are core features of the LangChain framework. They allow developers to assemble basic functions (type LLMChain) into a fully functional workflow or sequence (type SequentialChain).

Let's extend SequentialChain with types and conditions on input values by implementing a workflow defined by the class ChatGPTTypedChains. The constructor has two arguments:

  • Temperature, _temperature, to initialize the ChatGPT request
  • Optional task definition, task_builder, that defines the task implemented as a chain

The task builder has two arguments:
  • Task description (prompt)
  • List of input variables defined as tuples (variable name, data type, and optional condition on the variable)
and the type of the output value.
The class member input_keys captures the names of the inputs to the workflow.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import SequentialChain, LLMChain
from collections.abc import Callable

"""
    This class extends the langchain sequences by defining explicitly 
    - Task or goal of the request to ChatGPT
    - Typed arguments for the task

    The components of the prompt/message
    - Definition of the task (i.e. 'compute the exponential value of')
    - Input variables defined as tuple (name, type, condition) (i.e.  'x', 'list[float]', 'value < 0.8')
"""


class ChatGPTTypedChains(object):
  def __init__(self, _temperature: float, task_builder: Callable[[str, list[tuple[str, str, str]]], str] = None):
      """
      Constructor for the typed sequence of LLM chains
      @param task_builder Builder or assembler for the prompt with {task definition and
                list of arguments {name, type, condition} as input and prompt as output
      @param _temperature Temperature for the softmax log probabilities
      @type _temperature: floating value >= 0.0
      """
      self.chains: list[LLMChain] = []
      self.llm = ChatOpenAI(temperature=_temperature)
      self.task_builder = task_builder if task_builder else ChatGPTTypedChains.__build_prompt
      self.input_keys: list[str] = []



Additional tasks are appended to the workflow with their three components:
  • Task descriptor, task_definition (prompt)
  • Task parameters, arguments, as tuples (name of input, type of input, and optional condition)
  • Return type, as _output_key
The private, static method __build_prompt assembles the task components to generate the actual prompt, this_input_prompt, which is then processed by the LangChain template generator.

def append(self, task_definition: str, arguments: list[tuple[str, str, str]], _output_key: str) -> int:
    """
    Add a new task (LLM chain) to the current workflow
    @param task_definition: Definition or specification of the task
    @param arguments: List of tuples (variable_name, variable_type, variable_condition)
    @param _output_key: Output key or variable
    """
    # Initialize the input variables for the workflow
    if len(self.input_keys) == 0:
        self.input_keys = [key for key, _, _ in arguments]

    # Build the prompt for this new task
    this_input_prompt = ChatGPTTypedChains.__build_prompt(task_definition, arguments)
    this_prompt = ChatPromptTemplate.from_template(this_input_prompt)

    # Create a new LLM chain and add it to the sequence
    this_llm_chain = LLMChain(llm=self.llm, prompt=this_prompt, output_key=_output_key)
    self.chains.append(this_llm_chain)
    return len(self.chains)


@staticmethod
def __build_prompt(task_definition: str, arguments: list[tuple[str, str, str]]) -> str:
    def set_prompt(var_name: str, var_type: str, var_condition: str) -> str:
        prompt_variable_prefix = "{" + var_name + "} with type " + var_type
        # Append the condition clause only when a condition is defined
        return prompt_variable_prefix + " and " + var_condition \
            if bool(var_condition) \
            else prompt_variable_prefix

    embedded_input_vars = ", ".join(
        [set_prompt(var_name, var_type, var_condition)
            for var_name, var_type, var_condition in arguments]
    )
    return f'{task_definition} {embedded_input_vars}'

The method __call__ implements the workflow as a LangChain sequential chain. It takes two arguments: the input to the workflow (input to the first task), _input_values, and the names/keys of the output values (output of the last task in the sequence), output_keys.

def __call__(self, _input_values: dict[str, str], output_keys: list[str]) -> dict[str, any]:
    """
    Execute the sequence of typed tasks (LLM chains)
    @param _input_values: Input values to the sequence
    @param output_keys: Output keys for the sequence
    @return: Dictionary of output variable -> values
    """
    chains_sequence = SequentialChain(
        chains=self.chains,
        input_variables=self.input_keys,
        output_variables=output_keys,
        verbose=True
    )
    return chains_sequence(_input_values)


Simple use cases

We select two simple use cases, each implemented as a workflow (LLM chain sequence) with the following tasks:
  •   Two numerical tasks (math functions: sum and exp)
  •   A term frequency-inverse document frequency (TF-IDF) scoring and ordering task

Numerical computation chain

The sequence of chains, numeric_tasks, consists of two tasks:
  1. Compute the sum of an array x of type list[float] for which values < 0.8
  2. Apply the exponential function to the sum

In this particular example, an array of 128 floating-point values is generated through a sine function, then filtered by the condition x < 0.8. The output value is a dictionary with a single key 'u'.

def numeric_tasks() -> dict[str, str]:
   import math

   chat_gpt_seq = ChatGPTTypedChains(0.0)
   
   # First task:  implement lambda x: sin(x*0.001)
   input_x = ','.join([str(math.sin(n * 0.001)) for n in range(128)])
   chat_gpt_seq.append("Sum these values ", [('x', 'list[float]', 'values < 0.8')], 'res')
   # Second task: function u: exp(sum(x))
   chat_gpt_seq.append("Compute the exponential value of ", [('res', 'float', '')], 'u')
   input_values = {'x': input_x}
   output: dict[str, str] = chat_gpt_seq(input_values, ["u"])

   return output
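For comparison, here is the same computation in plain Python, which can serve as a reference value to sanity-check the workflow's answer. Keep in mind the LLM performs the arithmetic itself, so its result may differ slightly from this exact value:

```python
import math

# Same input as the workflow: sin(n * 0.001) for n in 0..127
values = [math.sin(n * 0.001) for n in range(128)]

# Task 1: sum the values satisfying the condition value < 0.8
# (for these small angles every sin value is well below 0.8)
filtered_sum = sum(v for v in values if v < 0.8)

# Task 2: exponential of the sum
u = math.exp(filtered_sum)
```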



TF-IDF score

This second use case consists of two tasks (LLM chains):

  1. Computation of the TF-IDF score, terms_tf_idf_score, of terms extracted from 3 documents/files (file1.txt, file2.txt, file3.txt). The key for input values, documents, is the content of the 3 documents.
  2. Ordering the terms by their TF-IDF score. The output key, ordered_list, is the list of terms ranked by decreasing TF-IDF score.
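As a reminder of what the two tasks ask the model to compute, here is a minimal plain-Python sketch of TF-IDF scoring and ordering, using the common tf × log(N/df) formulation; ChatGPT may well apply a slightly different variant:

```python
import math

def tf_idf(documents: list[list[str]]) -> dict[str, float]:
    """Score each term by term frequency x inverse document frequency,
    keeping the highest score a term achieves across documents."""
    num_docs = len(documents)
    # Document frequency: number of documents containing each term
    df: dict[str, int] = {}
    for doc in documents:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1

    scores: dict[str, float] = {}
    for doc in documents:
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            scores[term] = max(scores.get(term, 0.0), tf * math.log(num_docs / df[term]))
    return scores

def rank_terms(scores: dict[str, float]) -> list[str]:
    """Second task: order terms by decreasing TF-IDF score."""
    return [term for term, _ in sorted(scores.items(), key=lambda kv: -kv[1])]
```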

def load_content(file_name: str) -> str:
  with open(file_name, 'r') as f:
      return f.read()


def load_text(file_names: list[str]) -> list[str]:
  return [load_content(file_name) for file_name in file_names]


def tf_idf_score() -> str:
   chat_gpt_seq = ChatGPTTypedChains(0.0)

   # Load documents for which TF-IDF score has to be computed
   input_files = ['../input/file1.txt', '../input/file2.txt', '../input/file3.txt']
   input_documents = '```'.join(load_text(input_files))

   # Create the first task: compute the TF-IDF scores
   chat_gpt_seq.append(
        "Compute the TF-IDF score for words from documents delimited by triple backticks with output format term:TF-IDF score ```",
        [('documents', 'list[str]', '')], 'terms_tf_idf_score')
  
   # Create a second task
   chat_gpt_seq.append("Sort the terms and TF-IDF score by decreasing order of TF-IDF score",
                       [('terms_tf_idf_score', 'list[float]', '')], 'ordered_list')

   output = chat_gpt_seq({'documents': input_documents}, ["ordered_list"])
   return output['ordered_list']


Thank you for reading this article. For more information ...

References




---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Wednesday, May 31, 2023

Generate Code with ChatGPT Reusable Prompt Patterns

Target audience: Advanced
Estimated reading time: 4'  

Wouldn't it be fantastic if ChatGPT could generate source code? If it could, what level of quality could we expect from the generated output?

This post introduces the idea of employing reusable prompt patterns with large language models, focusing on their application in generating Python code. 
To illustrate the concept, we present a straightforward use case: generating a PostgreSQL database application that includes update and query functionalities, showcasing the effectiveness of these prompt patterns.



Overview

Strategically crafting prompts for interacting with conversational large language models plays a vital role in enhancing the quality of their responses and suggestions.

The OpenAI documentation states: "Designing your prompt is essentially how you “program” the model, usually by providing some instructions or a few examples. This is different from most other NLP services which are designed for a single task, such as sentiment classification or named entity recognition. Instead, the completions and chat completions endpoint can be used for virtually any task including content or code generation, summarization, expansion, conversation, creative writing, and style transfer." [ref 1]


Reusable patterns

Since 1995, the field of software engineering has greatly benefited from the introduction and evolution of design patterns [ref 4]. These design patterns serve to name, abstract, and identify key elements of common design structures, offering software developers reusable solutions. Over time, various design patterns, such as factory, template, composite, and observer, have been applied, assessed, and documented.
It's logical to apply a similar approach to the emerging discipline of prompt engineering, considering the existing literature that identifies patterns within conversational prompts. 

This article draws upon the framework outlined in the seminal paper titled "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" [ref 5]. Prompt patterns, much like their counterparts in software engineering, provide reusable solutions to specific problems. They are categorized based on attributes such as Category (or purpose), Intent and context, Motivation, Contextual statements, Examples, and Consequences.

Here is a subset of prompt patterns from the catalog:

Persona (Output customization)
Assign a persona/role/domain expert to the LLM. The persona can be expressed as a role, job description, title, or historical or known figure.
  • I would like you to ask me questions to achieve X
  • You should ask questions until X is achieved or the condition X is met. Ask me questions regarding X one at a time.
Template (Output customization)
Ensure the output follows a precise template (i.e. format, URL, example…). This pattern instructs the LLM to use a unique format in a specific portion of the answer.
  • I am going to provide you with a template for your output
  • X is my placeholder for content
  • Try to fit the output into one or more of the placeholders that I listed
  • Please preserve the formatting and template I provided
  • Here is the template to follow: PATTERN with PLACEHOLDER.
Cognitive verifier (Prompt improvement)
The quality of LLM answers improves if the initial question is broken into additional sub-questions.
  • When you are asked a question, follow these rules
  • Generate several additional questions that would help more accurately answer the original question
  • Combine the answers to the individual questions to produce the final aggregate answer to the overall question.
Fact check list (Error identification)
Request the LLM to provide/append a list of facts/assumptions to the answer, so the user may perform due diligence.
  • Generate a set of facts that are contained in the output
  • The set of facts should be inserted/appended to the output
  • The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect.
Output automater (Output customization)
Have the LLM generate a script or automated task that can execute any steps the LLM recommends.
  • Whenever you produce an output with steps, always do this. Produce an executable artifact of type X that will automate these steps.
Reflection (Error identification)
Ask the LLM to automatically explain the rationale behind a given answer to the user. The pattern clarifies any points of confusion, underlying assumptions, gaps in knowledge…
  • Whenever you generate an answer, explain the reasoning and assumptions behind your answer, so I can improve my question.
Visualization generator (Output customization)
Use generated text to create a visualization, as complex concepts are easier to grasp with diagrams and images. The LLM output should create a pathway for the tool to produce imagery.
  • Generate an X that I can provide to tool Y to visualize it.
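Contextual statements like these can be appended to a base request programmatically. A minimal sketch of such a composition helper; the function, dictionary, and fragment texts are illustrative, not taken from the catalog paper:

```python
# Illustrative pattern fragments keyed by pattern name (hypothetical wording)
PATTERN_STATEMENTS = {
    "persona": "Context: I am a software engineer developing an application with the lowest latency possible.",
    "template": "I provide a template for your output; please fit the output into the placeholders I listed.",
    "reflection": "Explain the reasoning and assumptions behind your answer.",
}

def apply_patterns(base_prompt: str, patterns: list[str]) -> str:
    """Append the contextual statement of each selected pattern to the base prompt."""
    fragments = [PATTERN_STATEMENTS[p] for p in patterns]
    return "\n".join([base_prompt] + fragments)
```

This mirrors how the prompts in the use case below are built up: each iteration adds one more pattern's contextual statement to the previous prompt.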


Use case

To assess the influence of different prompt patterns on the quality of ChatGPT responses, we employ a specific use case: building a basic Python application for interacting with a PostgreSQL database, encompassing functionalities for access, update, and querying.

The evaluation procedure involves initiating a conversation with ChatGPT using an initial request or prompt. Subsequently, we iteratively enhance the prompt by applying various patterns, including Persona, Template, Visualization generator, and Reflection patterns.

Each interaction or prompt within the conversation includes:
  1. The prompt's objective.
  2. The prompt message itself.
  3. ChatGPT's response, accompanied by comments and source code.
  4. Our subsequent analysis.

Initial request

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’, ‘num_entries’, ‘variable’, ‘mean’ and ‘variance’ for the PostgreSQL database ‘EHR’, then write a query for large number of rows."

Answer

ChatGPT code
Initialization of connection, creation of table and insertion of random values.
import psycopg2
from psycopg2 import sql
import random

  # Define connection parameters
db_name = "EHR"
db_user = "username"
db_password = "password"
db_host = "localhost"
db_port = "5432"

   # Connect to the PostgreSQL database
connection = psycopg2.connect(
  dbname=db_name,
  user=db_user,
  password=db_password,
  host=db_host,
  port=db_port
)

   # Create a cursor object
cursor = connection.cursor()

   # Create the 'predictions' table
cursor.execute("""CREATE TABLE predictions (
  id SERIAL PRIMARY KEY,
  num_entries INTEGER,
  variable VARCHAR(255),
  mean FLOAT,
  variance FLOAT);"""
)
connection.commit()

   # Insert a large number of rows into the 'predictions' table
for i in range(10000):      # adjust this number as necessary
  cursor.execute("""INSERT INTO predictions (
    num_entries, variable, mean, variance) VALUES (%s, %s, %s, %s
    )""",
    (
      random.randint(1, 100),    # num_entries
      f'variable_{i}',           # variable
      random.uniform(0, 100),    # mean
      random.uniform(0, 10)      # variance
    )
  )
connection.commit()

  # Close the cursor and connection
cursor.close()
connection.close()

Implementation of the query
# Re-establish connection and cursor
connection = psycopg2.connect(
    dbname=db_name,
    user=db_user,
    password=db_password,
    host=db_host,
    port=db_port
)
cursor = connection.cursor()

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows from the query
rows = cursor.fetchall()
# Print each row
for row in rows:
    print(row)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code functions as intended. ChatGPT accurately infers that PostgreSQL is deployed locally and employs randomly generated values for code evaluation. The only minor issue is that the response omits a table-existence check ('CREATE TABLE IF NOT EXISTS').
  • In the second code snippet, the query is implemented with a straightforward print statement for each row, serving as a validation measure. To prevent excessive standard output, these print statements could have been applied to a subset of rows.
  • ChatGPT doesn't include an exception handling mechanism, as it wasn't explicitly required. Software developers should anticipate incorporating proper error handling in the generated source code for safety.

Persona Pattern

The objective is to assess ChatGPT's ability to identify an erroneous input 'VARCHAR2(128)' and utilize the context with minimal latency. The modifications made to the original prompt are indicated in red.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT, ‘variable’ with type VARCHAR2(128), ‘mean’ with type FLOAT and ‘variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for 2000000 rows.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In PostgreSQL, VARCHAR2 is not a valid datatype. We use VARCHAR instead.
For querying 2,000,000 rows efficiently, it's recommended to use a server-side cursor. This will allow you to fetch the rows in manageable chunks, rather than trying to load all the rows into memory at once, which may not be possible with this volume of data."

ChatGPT source code update:
# Re-establish connection and cursor
connection = psycopg2.connect(
   dbname=db_name,
   user=db_user,
   password=db_password,
   host=db_host,
   port=db_port
)

# Create a server-side cursor
cursor_name = "my_cursor"
cursor = connection.cursor(cursor_name)

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch rows in manageable chunks
while True:
   rows = cursor.fetchmany(size=1000)  # adjust this number as necessary
   if not rows:
      break

    # Process each row
   for row in rows:
      print(row)

# Close the cursor and connection
cursor.close()
connection.close()

Analysis

  • ChatGPT detected the incorrect type VARCHAR2 for the variable.
  • The constraint of low-latency queries is enforced by fetching batches of 1,000 rows at a time.
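The batching idiom ChatGPT applied generalizes to any DB-API cursor. A small generator, my own helper rather than ChatGPT's output, that yields rows while fetching a fixed-size chunk per round trip:

```python
from collections.abc import Iterator
from typing import Any

def fetch_in_batches(cursor: Any, batch_size: int = 1000) -> Iterator[Any]:
    """Yield rows one at a time, fetching batch_size rows per fetchmany call
    so the full result set is never held in memory at once."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield from rows
```

The helper works with a psycopg2 server-side cursor or any object exposing a DB-API `fetchmany`, which also makes it easy to unit-test with a stub cursor.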

Template Pattern

We introduce the template pattern to our initial prompt, instructing ChatGPT to adhere to a specific format for its response. The modification made to the preceding prompt is marked in red.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT, ‘variable’ with type VARCHAR(128), ‘mean’ with type FLOAT and ‘variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Context: I am a software engineer developing an application with the lowest latency possible."

Answer

ChatGPT comments: "In this code, a server-side cursor is used to fetch rows from the 'predictions' table, and the fetched rows are stored in a pandas DataFrame. This DataFrame is then converted to HTML and inserted into the HTML template. The HTML template is then saved to an HTML file named output.html."

ChatGPT code update:
import pandas as pd

# Execute a SELECT query
cursor.execute("SELECT * FROM predictions;")

# Fetch all rows into pandas DataFrame
df = pd.read_sql_query("SELECT * FROM predictions;", connection)

# Convert DataFrame to HTML
html = df.to_html()

# HTML template
html_output = f"""
<!DOCTYPE html>
<html>
<head>
    <title>MYTITLE</title>  <!-- replace with your actual title -->
    <style>
    body {{
        background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
    }}
    table {{
        width: 100%;
        border-collapse: collapse;
    }}
    th, td {{
        border: 1px solid black;
        padding: 8px;
        text-align: left;
    }}
    </style>
</head>
<body>
    {html}
</body>
</html>
"""

# Save the HTML output
with open("output.html", "w") as file:
    file.write(html_output)

# Close the cursor and connection
cursor.close()
connection.close()


Analysis

  • The code executes as intended
  • ChatGPT generates the HTML output by dumping the result set into a pandas DataFrame and invoking the to_html() method, as we did not specify an approach in our request.
  • However, ChatGPT did not parameterize html_output, merely suggesting replacing the placeholders in an HTML comment.
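The parameterization ChatGPT omitted is straightforward; a sketch using str.format-style substitution, where the placeholder names MYTITLE and BACKGROUNDCOLOR follow the prompt's template and the helper itself is hypothetical:

```python
# Template with named placeholders; literal CSS braces are doubled for str.format
HTML_TEMPLATE = """<!DOCTYPE html>
<html>
<head>
    <title>{MYTITLE}</title>
    <style>body {{ background-color: {BACKGROUNDCOLOR}; }}</style>
</head>
<body>
{BODY}
</body>
</html>"""

def render_page(title: str, background_color: str, body_html: str) -> str:
    """Substitute the placeholders rather than leaving them as HTML comments."""
    return HTML_TEMPLATE.format(MYTITLE=title,
                                BACKGROUNDCOLOR=background_color,
                                BODY=body_html)
```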

Visualization generator

As explained in the introduction, we utilize this pattern to instruct ChatGPT to generate a script that will render the HTML output in a specified web browser.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT, ‘variable’ with type VARCHAR(128), ‘mean’ with type FLOAT and ‘variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows.
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on Chrome browser."

Answer

ChatGPT comments: "The resulting HTML output of the query will be displayed in a new tab in your default web browser. If you wish to open it specifically in Chrome, you'll need to provide the path to your Chrome executable and uncomment the relevant lines."

ChatGPT code update
import os
import webbrowser

# Open the web page in the web browser
filepath = 'file://' + os.path.realpath(filename)

# To open in your default web browser
webbrowser.open_new_tab(filepath)

# Or to specifically open in Chrome, uncomment the following lines
# and replace path_to_chrome with the path to your Chrome executable
# path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
# webbrowser.get(path_to_chrome).open_new_tab(filepath)


Analysis
  • We didn't specify the programming language for the visualization generator/script in our request. Consequently, ChatGPT correctly assumes that Python is the language to be used for rendering the HTML output of the query. 
  • It's worth noting that ChatGPT provided two statements for launching and opening the browser: one for the default browser and another tailored for Chrome.
  • Similar to the previous response, ChatGPT did not include error handling code.

Reflection pattern

Up to this point, ChatGPT has not incorporated error handling code into its responses. Consequently, we've opted to include it in our existing set of contextual statements within the prompt. In order to demonstrate the application of the reflection pattern, we've asked ChatGPT to provide its rationale for each component of its response.

Prompt

"I would like you to generate Python code for create the table ‘predictions’ with the following fields, ‘id’ of type SERIAL, ‘num_entries’ with type INT, ‘variable’ with type VARCHAR(128), ‘mean’ with type FLOAT and ‘variance’ with type FLOAT, for the PostgreSQL database ‘EHR’, then write a query for large number of rows.
Please add exception handling in connecting to the database
Can you explain your reasoning and assumption behind your answer
Context: I am a software engineer developing an application with the lowest latency possible.
The output has to be formatted in HTML. I provide a template for your output with the following placeholders in all caps: <title>MYTITLE</title> and CSS style background-color:BACKGROUNDCOLOR. Please fit the output into the two placeholders MYTITLE and BACKGROUNDCOLOR.
Please write the script to display the HTML output on Chrome browser."

Answer

ChatGPT comments
"In the context of your request, here is the reasoning and assumptions:
PostgreSQL Connection: You mentioned that the PostgreSQL database is hosted on a remote service with the URL "mypostgresql". This implies that the Python script will require network access and the necessary credentials (username, password) to access the database.

Data Query: You want to query a large number of rows. This implies that the 'predictions' table will hold a large amount of data. For large result sets, it's good practice to use a server-side cursor, which fetches and processes the rows in manageable chunks. This reduces memory consumption and can potentially speed up the response times of your queries.

Table Creation: You want to create a table named "predictions" with specific fields and data types. This implies that you have the required permissions to create tables in the database.

HTML Output: You want to format the query output in HTML with a specific title and background color. The pandas library's to_html() method is ideal for this. It converts a DataFrame into an HTML table, which can then be customized with additional HTML and CSS.

Browser Display: You want the HTML output to be displayed in the Chrome browser. The Python webbrowser module can open URLs in new browser windows or tabs. However, it uses the system's default browser by default. If you specifically want to open the page in Chrome, you can do so by providing the path to the Chrome executable.
Assumptions:

Libraries: This solution assumes that you have the necessary Python libraries (psycopg2 and pandas) installed. If not, you can install them using pip (pip install psycopg2-binary pandas).

Permissions: The solution assumes that you have the necessary permissions to create tables and query data in the specified PostgreSQL database.

Data Existence: The solution assumes that there is data in the 'predictions' table when querying. If the table is empty, the HTML output will only include the table headers.

Server Performance: Handling large number of rows depends on the capabilities of your database server. The server needs to have enough resources to handle such large data sets efficiently.

Browser: This solution assumes that Chrome browser is installed on your system. If it's not, the webbrowser module's open methods will not work as expected. The exact path to Chrome will also depend on your operating system and the specific location where Chrome is installed.

Now, with this reasoning and assumptions, I'll provide the Python script in a follow-up response."

ChatGPT final source code
import psycopg2
import pandas as pd
import webbrowser
import os

# Define connection parameters
db_name = "EHR"
db_user = "username"  # replace with your actual username
db_password = "password"  # replace with your actual password
db_host =  "localhost"  # replace with your actual host
db_port = "5432"  # default PostgreSQL port

try:
    # Connect to the PostgreSQL database
  connection = psycopg2.connect(
     dbname=db_name,
     user=db_user,
     password=db_password,
     host=db_host,
     port=db_port
  )

    # Create a cursor object
  cursor = connection.cursor()

    # Create the 'predictions' table
  cursor.execute("""
     CREATE TABLE predictions (
         id SERIAL PRIMARY KEY,
         num_entries INT,
         variable VARCHAR(128),
         mean FLOAT,
         variance FLOAT
     );""")
  connection.commit()

    # Close the cursor
  cursor.close()

    # Re-establish cursor with server-side cursor for fetching large amount of rows
  cursor = connection.cursor("my_cursor")

    # Execute a SELECT query
  cursor.execute("SELECT * FROM predictions;")

    # Fetch all rows into pandas DataFrame
  df = pd.read_sql_query("SELECT * FROM predictions;", connection)

    # Convert DataFrame to HTML
  html = df.to_html()

    # HTML template
  html_output = f"""
    <!DOCTYPE html>
    <html>
    <head>
        <title>MYTITLE</title>  <!-- replace with your actual title -->
        <style>
        body {{
            background-color: BACKGROUNDCOLOR;  <!-- replace with your actual background color -->
        }}
        table {{
            width: 100%;
            border-collapse: collapse;
        }}
        th, td {{
            border: 1px solid black;
            padding: 8px;
            text-align: left;
        }}
        </style>
    </head>
    <body>
        {html}
    </body>
    </html>
    """

    # Save the HTML output
  filename = "output.html"
  with open(filename, "w") as file:
      file.write(html_output)

    # Open the web page in the default web browser
  filepath = 'file://' + os.path.realpath(filename)
  webbrowser.open_new_tab(filepath)

    # Or to specifically open in Chrome, uncomment the following lines
    # and replace path_to_chrome with the path to your Chrome executable
    # path_to_chrome = r'/path/to/chrome'  # update path as per your configuration
    # webbrowser.get(path_to_chrome).open_new_tab(filepath)

except psycopg2.Error as error:
    print(f"An error occurred while connecting to the PostgreSQL database: {error}")
finally:
      # Always close the cursor and connection, whether an error occurred or not
    if 'cursor' in locals():
        cursor.close()
    if 'connection' in locals():
        connection.close()

Analysis
  • In accordance with our specific request, ChatGPT has incorporated exception handling into the most recent version of the code.
  • Furthermore, ChatGPT offers insights into its choice of using Pandas for HTML generation and makes assumptions about the PostgreSQL installation status, table condition, query size, and other aspects.
  • It's worth noting that the comment regarding server/database performance could have included additional guidance on multi-threaded execution, creating views, and manipulating cursors within the Python code.


Conclusion

We've shown that the quality of the generated code improves significantly when an initial, basic prompt is enhanced with successive patterns.
The synergy of these prompt patterns enhances the overall quality of the generated Python code, sometimes more than each pattern contributes individually.
It's worth noting that there are numerous other patterns worth exploring [ref 4].


Thank you for reading this article. For more information ...

References

[3] Secure ChatGPT API client in Scala
[4] Design Patterns: Elements of Reusable Object-Oriented Software, E. Gamma, R. Helm, R. Johnson, J. Vlissides, Addison-Wesley Professional Computing Series, 1995
[5] A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT, J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, D. C. Schmidt


