Tuesday, April 18, 2023

Secure ChatGPT API Client in Scala

Target audience: Beginner
Estimated reading time: 3' 

This post introduces the OpenAI ChatGPT API and describes an implementation in Scala for which the API key is encrypted.

Table of contents
ChatGPT REST API
       HTTP parameters
       Request body
       Response body
Scala client
A simple encryption
Notes: 
  • The code associated with this article is written using Scala 2.12.15
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

ChatGPT REST API

The ChatGPT functionality is accessible through a REST API [ref 1]. The client for the HTTP POST can be implemented
  • as an application that generates the request, including the JSON content, and processes the response (curl, Postman, ...)
  • programmatically, by defining the request and response as self-contained data structures (class, data class or case class, ...) to manage the remote service.

HTTP parameters

The connectivity parameters for the HTTP post are
  • OPEN_AI_KEY=xxxxx
  • URL="https://api.openai.com/v1/chat/completions"
  • CONTENT_TYPE="application/json"
  • AUTHORIZATION=s"Bearer ${OPEN_AI_KEY}" 

Request body

The parameters of the POST content:

  • model: Identifier of the model (e.g., gpt-3.5-turbo)
  • messages: Messages of the conversation so far
  • user: Identifier for the end user
  • role: Role of the author of the message {system|user|assistant}
  • content: Content of the message or request
  • name: Name of the author of the message
  • temperature: Hyper-parameter that controls the "creativity" of the language model by adjusting the distribution (softmax) for the prediction of the next word/token. The higher the value (> 0), the more diverse the prediction (default: 1)
  • top_p: Sample from the tokens within the top_p cumulative probability mass. It is an alternative to temperature (default: 1)
  • n: Number of solutions/predictions (default: 1)
  • max_tokens: Limit on the number of tokens used in the response (default: Infinity)
  • presence_penalty: A positive value penalizes tokens that already appear in the text so far, encouraging the model to introduce new topics (default: 0)
  • frequency_penalty: A positive value penalizes tokens proportionally to their frequency in the text so far, reducing verbatim repetition (default: 0)
  • logit_bias: Map of token identifiers to bias values that modify the likelihood of those tokens appearing in the completion

Note: The model and messages parameters are mandatory; all other hyper-parameters are optional.
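For reference, a minimal request body, using a made-up prompt, looks like:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "user", "content": "What is the color of the moon?"}
  ]
}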


Response body

  • id: Conversation identifier
  • object: Type of the returned object (e.g., chat.completion)
  • created: Creation date (Unix timestamp)
  • usage.prompt_tokens: Number of tokens used in the prompt
  • usage.completion_tokens: Number of tokens used in the completion
  • usage.total_tokens: Total number of tokens
  • choices: List of answers (choices) returned by the model
  • choices.message.role: Role used in the request
  • choices.message.content: Response content
  • choices.finish_reason: Description of the state of completion of the request.
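A typical (abbreviated) response payload, with purely illustrative values, has the following shape:

{
  "id": "chatcmpl-xxxx",
  "object": "chat.completion",
  "created": 1681776000,
  "model": "gpt-3.5-turbo",
  "usage": {"prompt_tokens": 14, "completion_tokens": 38, "total_tokens": 52},
  "choices": [
    {
      "message": {"role": "assistant", "content": "The color of the moon is ..."},
      "index": 0,
      "finish_reason": "stop"
    }
  ]
}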

Scala client

The first step in implementing the client to the ChatGPT API is to define the POST request and response data structures. We define two types of requests:
  • A simple user request, ChatGPTUserRequest 
  • A request with model hyper-parameters, ChatGPTDevRequest, which includes most of the parameters defined in the introduction.
We use the FasterXML Jackson serialization/deserialization library [ref 2] to convert a request structure into JSON and the JSON response into a response object.
The handle to the serializer/deserializer, mapper, is defined and configured in the singleton ChatGPTRequest.
The toJson method converts any ChatGPT request into a JSON string, which is then converted into an array of bytes by the toJsonBytes method.

trait ChatGPTRequest {
   import ChatGPTRequest._ 
    
   def toJson: String = mapper.writeValueAsString(this)
   def toJsonBytes: Array[Byte] = toJson.getBytes
}


object ChatGPTRequest {
        // Instantiate a singleton for the Jackson serializer/deserializer
    val mapper = JsonMapper.builder().addModule(DefaultScalaModule).build()
    mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
}

The basic end-user request and the comprehensive developer request structures implement the request parameters defined in the first section.
case class ChatGPTMessage(role: String, content: String)

case class ChatGPTUserRequest(model: String, messages: Seq[ChatGPTMessage]) 
      extends ChatGPTRequest

case class ChatGPTDevRequest(
    model: String,
    user: String,
    prompt: String,
    temperature: Double,
    max_tokens: Int,
    top_p: Int = 1,
    n: Int = 1,
    presence_penalty: Int = 0,
    frequency_penalty: Int = 1) extends ChatGPTRequest
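
As a quick sanity check, the serialized payload of a simple user request (assuming the gpt-3.5-turbo model) can be inspected with toJson:

val sanityRequest = ChatGPTUserRequest(
   "gpt-3.5-turbo",
   Seq(ChatGPTMessage("user", "What is the color of the moon?"))
)
println(sanityRequest.toJson)
   // e.g. {"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"What is the color of the moon?"}]}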


The response, ChatGPTResponse, reflects the REST API specification described in the first section. These classes are implemented as case classes so that Jackson can deserialize the JSON payload directly into them.

case class ChatGPTChoice(
   message: ChatGPTMessage, 
   index: Int, 
   finish_reason: String)

case class ChatGPTUsage(
   prompt_tokens: Int, 
   completion_tokens: Int, 
   total_tokens: Int)

case class ChatGPTResponse(
    id: String,  
    `object`: String,
    created: Long,
    model: String,
    choices: Seq[ChatGPTChoice],
    usage: ChatGPTUsage)
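
The client below invokes a ChatGPTResponse.fromJson factory that is not listed in this post; a minimal sketch, reusing the Jackson mapper defined in ChatGPTRequest, could be:

object ChatGPTResponse {
   import ChatGPTRequest.mapper

      // Convert the raw JSON payload returned by the service into a ChatGPTResponse instance
   def fromJson(json: String): ChatGPTResponse = mapper.readValue(json, classOf[ChatGPTResponse])
}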


For convenience we implemented two constructors for the ChatGPTClient class:
  • A fully defined constructor taking 3 arguments: the complete request url, the timeout and an encrypted API key, encryptedAPIKey
  • A simplified constructor with only the timeout and encryptedAPIKey as arguments.
The connection is implemented with the HttpURLConnection Java class and uses an encrypted API key. Secure access to the ChatGPT service is defined by the authorization property. 
The request, which implements the ChatGPTRequest trait, is converted into bytes and pushed into the connection output stream. In this example we simply throw an IllegalStateException when the HTTP response code is not 200; a recovery handler (httpCode: Int) => ChatGPTRequest would be more useful in production.

import com.fasterxml.jackson.databind.json.JsonMapper
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import java.io.{BufferedReader, InputStreamReader, OutputStream}
import java.net.{HttpURLConnection, URL}


class ChatGPTClient(
   url: String, 
   timeout: Int,
   encryptedAPIKey: String
) {
   import ChatGPTClient._

  def apply(chatGPTRequest: ChatGPTRequest): Option[ChatGPTResponse] = {
     var outputStream: Option[OutputStream] = None

     try {
        EncryptionUtil.unapply(encryptedAPIKey).map(
          apiKey => {
            // Create and initialize the HTTP connection
            val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
            connection.setRequestMethod("POST")
            connection.setRequestProperty("Content-Type", "application/json")
            connection.setRequestProperty("Authorization", s"Bearer $apiKey")
            connection.setConnectTimeout(timeout)
            connection.setDoOutput(true)

            // Write into the connection output stream
            outputStream = Some(connection.getOutputStream)
            outputStream.foreach(_.write(chatGPTRequest.toJsonBytes))

               // If request failed.... just throw an exception for now.
            if(connection.getResponseCode != HttpURLConnection.HTTP_OK)
              throw new IllegalStateException(
                   s"Request failed with HTTP code ${connection.getResponseCode}"
              )

             // Retrieve the JSON string from the connection input stream
            val out = new BufferedReader(new InputStreamReader(connection.getInputStream))
                .lines
                .reduce(_ + _)
                .get
            // Instantiate the response from the Chat GPT output
            ChatGPTResponse.fromJson(out)
        }
      )
    }
    catch {
      case e: java.io.IOException =>
        logger.error(e.getMessage)
        None
      case e: IllegalStateException =>
        logger.error(e.getMessage)
        None
    }
    finally {
      outputStream.foreach(os => {
        os.flush
        os.close
      })
    }
  }
}

The request/response exchange with the ChatGPT service is implemented by the apply method, which returns an optional ChatGPTResponse if successful, None otherwise.

val model = "gpt-3.5-turbo"
val role = "user"
val encryptedAPIKey = "8df109aed"
val prompt = "What is the color of the moon"
val timeout = 200
val request = ChatGPTUserRequest(model, Seq(ChatGPTMessage(role, prompt)))
// 
// Simply instantiate the client, invoke ChatGPT and print its output
val chatGPTClient = ChatGPTClient(timeout, encryptedAPIKey)
chatGPTClient(request).foreach(println(_))

with the following output....
The color of the moon is typically described as a pale gray or white. However, the appearance of the moon's color can vary depending on various factors such as atmospheric conditions, the moon's position in the sky,...


A simple encryption

As mentioned in the introduction, a simple way to secure access to any remote service is to encrypt credentials such as passwords and keys so they do not appear in clear text in configuration or properties files, Docker files, or source code.
Encryption is the process of applying a key to plain text that transforms that plain text into unintelligible (cipher) text. Only programs with the key to turn the cipher text back to original text can decrypt the protected information.
The javax.crypto java package [ref 3] provides developers with classes, interfaces and algorithms for cryptography.

Our code snippet relies on a cipher with a basic AES encryption scheme. The Apache Commons Codec package is used for the Base64 encoder.
The apply method, in the following code snippet, initializes the cipher in encryption mode to encrypt the raw API key prior to encoding it in Base64.

import javax.crypto.spec.{IvParameterSpec, SecretKeySpec}
import javax.crypto.Cipher
import org.apache.commons.codec.binary.Base64.{decodeBase64, encodeBase64String}


  // Define the parameters of the cipher using the AES encryption scheme
final val AesLabel = "AES"
final val EncodingScheme = "UTF-8"
final val key = "aesEncryptorABCD"
final val initVector = "aesInitVectorABC"

  // Instantiate the cipher
val iv = new IvParameterSpec(initVector.getBytes(EncodingScheme))
val keySpec = new SecretKeySpec(key.getBytes(), AesLabel)
val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")


  /**
   * Encrypt a string or content using AES and Base64 bytes representation
   * @param clearCredential String to be encrypted
   * @return Optional encrypted string
   */
def apply(clearCredential: String): Option[String] =  {
   cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)

   val encrypted = cipher.doFinal(clearCredential.getBytes)
   Some(encodeBase64String(encrypted))
} 

The decryption method, unapply, reverses the steps used in the encryption:
  1. initialize the cipher in decryption mode
  2. decode the encrypted API key from Base64
  3. apply the cipher (doFinal)
def unapply(encryptedCredential: String): Option[String] = {
   cipher.init(Cipher.DECRYPT_MODE, keySpec, iv)
 
   val decrypted = cipher.doFinal(decodeBase64(encryptedCredential))
   Some(new String(decrypted))
}
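
Assuming the two methods above belong to the EncryptionUtil singleton referenced by ChatGPTClient, a simple round trip validates the scheme:

   // Hypothetical clear-text API key, used only for illustration
val rawAPIKey = "sk-not-a-real-key"
val roundTrip = EncryptionUtil(rawAPIKey).flatMap(EncryptionUtil.unapply)
assert(roundTrip.contains(rawAPIKey))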


Thank you for reading this article. For more information ...

References

[1] OpenAI API reference: Chat completions
[2] FasterXML Jackson Scala module
[3] javax.crypto package, Java Platform SE documentation

Environments: Scala 2.12.15,   JDK 11,  Apache Commons Text 1.9,  FasterXML Jackson 2.13.1


---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Tuesday, March 21, 2023

ChatGPT API Python Client

Target audience: Beginner
Estimated reading time: 3' 

This post describes a generic implementation of a client to the OpenAI ChatGPT REST web service. This implementation has been written and tested with Python 3.9. Comments in the source code are omitted for the sake of clarity.

ChatGPT client application in Python

Table of contents
ChatGPT API overview
       HTTP parameters
       Request body
Chat completion request
ChatGPT client
Post
Notes
  • Environments:  Python 3.9.16,  ChatGPT 3.5
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

ChatGPT API overview

Let's review the parameters of the OpenAI Chat Completion REST API.

HTTP Parameters

The connectivity parameters for the HTTP post are
  • OPEN_AI_KEY=xxxxx
  • URL=https://api.openai.com/v1/chat/completions
  • CONTENT_TYPE=application/json
  • AUTHORIZATION="Bearer ${OPEN_AI_KEY}" 

Request body

The parameters of the POST content:

  • model: Identifier of the model (e.g., gpt-3.5-turbo)
  • messages: Messages of the conversation so far
  • user: Identifier for the end user
  • role: Role of the author of the message {system|user|assistant}
  • content: Content of the message or request
  • name: Name of the author of the message
  • temperature: Hyper-parameter that controls the "creativity" of the language model by adjusting the distribution (softmax) for the prediction of the next word/token. The higher the value (> 0), the more diverse the prediction (default: 1)
  • top_p: Sample from the tokens within the top_p cumulative probability mass. It is an alternative to temperature (default: 1)
  • n: Number of solutions/predictions (default: 1)
  • max_tokens: Limit on the number of tokens used in the response (default: Infinity)
  • presence_penalty: A positive value penalizes tokens that already appear in the text so far, encouraging the model to introduce new topics (default: 0)
  • frequency_penalty: A positive value penalizes tokens proportionally to their frequency in the text so far, reducing verbatim repetition (default: 0)
Note: OpenAI models are non-deterministic; identical requests may yield different answers. Setting temperature = 0 makes the outputs mostly deterministic.

Chat completion request

The first step is to implement the data structure for the content of the request as described in the previous section. The content of the request, ChatGPTRequest, is read-only and therefore implemented as a data class. The class members reflect the semantics of the OpenAI chat completion API:
  • model: Identifier of the model (e.g., gpt-3.5-turbo, code-davinci-002, ...)
  • role: Role of the user (system, user or assistant)
  • temperature: Hyper-parameter that adjusts the distribution for the prediction of the next token
  • max_tokens: Limit on the number of tokens used in the response 
  • top_p: Sample the tokens with the p highest probability (default: 1)
  • n: Number of predictions/choices (default: 1)
  • presence_penalty: Penalize new tokens which already appear in the text so far 
  • frequency_penalty: Penalize new tokens which appear in the text with higher frequency
from dataclasses import dataclass

@dataclass
class ChatGPTRequest:
   model: str
   role: str
   temperature: float
   max_tokens: int
   top_p: int
   n: int
   presence_penalty: int
   frequency_penalty: int
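
For illustration, a fully specified request with assumed values is instantiated like any other data class:

request = ChatGPTRequest(
    model='gpt-3.5-turbo',
    role='user',
    temperature=0.0,
    max_tokens=1024,
    top_p=1,
    n=1,
    presence_penalty=0,
    frequency_penalty=0
)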


ChatGPT client

There are two constructors for the client of type ChatGPTClient: 
  • The default constructor, taking a fully customizable request, ChatGPTRequest, as argument.
  • The build constructor for simple requests, requiring only model, role and temperature (evaluated with values 0 and 1), with all other parameters using the default values specified in the OpenAI API, openai.ChatCompletion, documentation [ref 1]. We use the annotated type Instancetype to specify the type of instance to create [ref 2].
import openai
from typing import TypeVar

import constants

Instancetype = TypeVar('Instancetype', bound='ChatGPTClient')


class ChatGPTClient(object):
      # Static variables for the API key and the default maximum
      # number of tokens returned
    openai.api_key = constants.openai_api_key
    default_max_tokens = 1024

    def __init__(self, chatGPTRequest: ChatGPTRequest):
        self.chatGPTRequest = chatGPTRequest

    @classmethod
    def build(cls, model: str, role: str, temperature: float) -> Instancetype:
        chatGPTRequest = ChatGPTRequest(
            model,
            role,
            temperature,
            ChatGPTClient.default_max_tokens,
            1,
            1,
            0,
            0)
        return cls(chatGPTRequest)


Post

Let's start with a simple version of the invocation that returns only the answer without any explanation, status or usage. The only argument of the HTTP post is the user prompt. The response is extracted from the message of the first choice of the answer.

def post(self, prompt: str) -> str:
   import logging

   try:
       response = openai.ChatCompletion.create(
           model=self.chatGPTRequest.model,
           messages=[{'role': self.chatGPTRequest.role, 'content': prompt}],
           temperature=self.chatGPTRequest.temperature,
           max_tokens=self.chatGPTRequest.max_tokens
       )
       return response['choices'][0].message.content

   except openai.error.AuthenticationError as e:
       logging.error(f'Failed as {str(e)}')
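
A minimal invocation, assuming the API key is available through the constants module and using a made-up prompt:

if __name__ == '__main__':
    chat_gpt_client = ChatGPTClient.build(model='gpt-3.5-turbo', role='user', temperature=0.0)
    answer = chat_gpt_client.post('What is the color of the moon?')
    print(answer)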


Some advanced client applications may require processing and evaluating metadata included in the response. In this case, the JSON content of the chat completion answer is objectified.
The key variables of the ChatCompletion response are
  • id: Conversation identifier
  • object: Type of the returned object (e.g., chat.completion)
  • created: Creation date (Unix timestamp)
  • usage.prompt_tokens: Number of tokens used in the prompt
  • usage.completion_tokens: Number of tokens used in the completion
  • usage.total_tokens: Total number of tokens
  • choices: List of answers (choices) returned by the model
  • choices.message.role: Role used in the request
  • choices.message.content: Response content
  • choices.finish_reason: Description of the state of completion of the request
As with the request content, the different components of the answer are implemented as data classes to reflect the structure of the JSON response from the ChatCompletion API.
The previous implementation of the post method is upgraded by adding a conversion of the JSON response into ChatGPTResponse.

@dataclass
class ChatGPTChoice:
   message: dict
   index: int
   finish_reason: str

@dataclass
class ChatGPTUsage:
   prompt_tokens: int
   completion_tokens: int
   total_tokens: int

@dataclass
class ChatGPTResponse:
   id: str
   object: str
   created: int
   model: str
   choices: list
   usage: ChatGPTUsage


def post_dev(self, prompt: str) -> ChatGPTResponse:
   import json
   import logging

   try:
      response = openai.ChatCompletion.create(
          model=self.chatGPTRequest.model,
          messages=[{'role': self.chatGPTRequest.role, 'content': prompt}],
          temperature=self.chatGPTRequest.temperature,
          max_tokens=self.chatGPTRequest.max_tokens
      )
        # Objectify the JSON payload of the chat completion response
      return ChatGPTResponse(**json.loads(str(response)))

   except openai.error.AuthenticationError as e:
       logging.error(f'Failed as {str(e)}')
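
A caller interested in the metadata, such as token accounting, can then access the fields of the objectified response; in this sketch usage and each choice are the plain dictionaries produced by json.loads:

chat_gpt_client = ChatGPTClient.build(model='gpt-3.5-turbo', role='user', temperature=0.0)
chat_gpt_response = chat_gpt_client.post_dev('What is the color of the moon?')
if chat_gpt_response is not None:
    print(chat_gpt_response.usage['total_tokens'])              # Token accounting for the call
    print(chat_gpt_response.choices[0]['message']['content'])   # Text of the first answer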

Note: We are using the built-in json Python library [ref 3] to decode the ChatGPT response. The list of alternative libraries includes Orjson [ref 4] and SimpleJson [ref 5].

References

[1] OpenAI API reference: Chat completions
[2] Python typing module: TypeVar
[3] json: JSON encoder and decoder (Python standard library)
[4] Orjson
[5] SimpleJson








Wednesday, February 15, 2023

Improve Python with Pydantic 2.x

Target audience: Advanced
Estimated reading time: 3'

In my tenure as a Scala developer, I frequently underscored the importance of robust type checking mechanisms. When transitioning to Python, it was the incorporation of the Pydantic library and the native typing package that significantly underscored Python's capabilities as a mature programming language, bolstering its appeal to developers like me.

This article delves into the contemporary features of the Pydantic library, elucidating concepts such as serialization, annotated validators, and discriminated unions.


Table of contents
Why pydantic?
Serialization validators
Annotated validator
Discriminated unions for composition
Notes:
  • Environments: Python 3.10, Pydantic 2.4.2
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Why pydantic?

Pydantic [ref 1] is a Python library designed for data modeling and parsing, equipping Python developers with customizable tools for error handling and validation. This library seamlessly integrates built-in features for JSON encoding and decoding, combined with validation. It offers a declarative approach to data representation.

Mirroring the typing module, Pydantic aims to emulate the type checking seen in static languages like Java and C++ through IDE-enforced hints. Moreover, it enhances the Python dataclasses module by integrating agile error handling and data validation capabilities.

Pydantic has several benefits:
  • Dedicated, specific types such as FilePath or HttpUrl for validation.
  • Custom validators.
  • Options (Optional type) similar to Java and Scala.
  • Integration with lint functionality in most common IDEs.
  • Declarative syntax for modeling data.
  • Wrapper (Field type) for data validation.
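
The short sketch below, which is not from the original post, illustrates a few of these building blocks: a dedicated HttpUrl type, a Field constraint and an Optional member.

from typing import Optional
from pydantic import BaseModel, Field, HttpUrl

class ServiceConfig(BaseModel):
    name: str
    endpoint: HttpUrl                            # Dedicated type validated as a URL
    timeout_ms: int = Field(gt=0, le=30_000)     # Field wrapper enforcing a numeric range
    api_key: Optional[str] = None                # Optional value, similar to Scala's Option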

To install the Pydantic library (MacOS)
       pip install pydantic

Notes: 
  • This is a sample of the features available in version 2.2.1 [ref 2]. The reader is invited to explore the entire functionality of Pydantic.
  • A migration from Pydantic version 1.x is available to Python developers [ref 3].
  • FastAPI web framework leverages Pydantic library [ref 4].

Serialization validators

One basic function is validation of the JSON representation of an object. The following example combines the Pydantic data class Algorithm, which inherits from BaseModel, with data types from the typing module.
An algorithm instance is completely defined by its name (e.g., binary tree), its time complexity (e.g., O(logN), the asymptotic search time complexity [ref 5]), its space complexity and the supported programming languages (e.g., C++, Python, Go, ...). For the sake of clarity, time and space complexity are defined as an enumeration.

from pydantic import BaseModel, ValidationError
from typing import AnyStr, Tuple, Dict, Optional
from enum import Enum

""
    # Time complexity for average case for search operation
class AlgorithmComplexity(Enum): O1 = 0 # Hash table OlogN = 1 # Binary search, red-black tree, ... ON = 2 # Linked list, stack, queue, ... ONlogN = 3 # Quick, merge, shell sort, ... ON2 = 4 # Insertion sort, ... class Algorithm(BaseModel): name: AnyStr time_complexity: AlgorithmComplexity space_complexity: AlgorithmComplexity languages: Tuple
# Tuple of supported programming languages


In the following code snippet, the Pydantic constructor Algorithm.__init__ validates the textual/JSON representation of the data, which fails for the improper definition of the linked list.

if __name__ == '__main__':
        # Correct definition of a binary tree
    binary_tree_json = {
        'name': 'Binary Tree',
        'time_complexity': AlgorithmComplexity.OlogN,
        'space_complexity': AlgorithmComplexity.OlogN,
        'languages': ('java', 'C++', 'Python')
    }
    try:
        binary_tree = Algorithm(**binary_tree_json)
    except ValidationError as err:
        logging.error(err.errors())

        # Incorrect specification of the time complexity to access a linked list
    linked_list_json = {
        'name': 'LinkedList',
        'time_complexity': 'N',
        'space_complexity': AlgorithmComplexity.ON,
        'languages': ('java', 'C++', 'Scala')
    }
    try:
        linked_list = Algorithm(**linked_list_json)
    except ValidationError as err:
        logging.error(err.errors())
        # [{'type': 'enum', 'loc': ('time_complexity',), 'msg': 'Input should be 0,1,2,3 or 4',
        #  'input': 'N', 'ctx': {'expected': '0,1,2,3 or 4'}}]



Annotated validator

Pydantic provides a way to apply validators to a field value or type. Use this whenever you want to bind validation to a type instead of a model or field.

There are several types of validators:
  • After validator, which runs after Pydantic's internal parsing and validation.
  • Before validator, which runs before parsing and validation.
  • Plain validator, a variant of the before validator that terminates the validation as soon as it fails.
  • Wrap validator, which combines the functionality of the before, after and plain validators.
The following example implements a 'before validator': the languages listed for a given algorithm should be valid as specified in the tuple, valid_languages.

Here are the two steps:
  1. Define a validation function, validate_languages. Note that the type of the output, Tuple, is the same as the type of the input argument.
  2. Annotate the variable languages with Annotated, which takes the input/output type, Tuple, as its first argument and the validation function as its second argument.

from typing_extensions import Annotated
from pydantic.functional_validators import BeforeValidator



# Formal list of programming languages used to implement the algorithms
valid_languages: Tuple = ('python', 'go', 'scala', 'c++', 'java')

"""
    Remove the languages which are not correctly specified (typos, non-compliant, ..)
"""
def validate_languages(languages: Tuple) -> Tuple:
    return tuple(lang for lang in languages if lang.lower() in valid_languages)


        # We add the annotator for the programming language field.
class Algorithm(BaseModel):
    name: AnyStr
    time_complexity: AlgorithmComplexity   
    space_complexity: AlgorithmComplexity
                         # is the list of supported languages legitimate?
    languages: Annotated[Tuple, BeforeValidator(validate_languages)]



The simplest validation scheme is to make sure the programming languages available for this algorithm are valid. We make no assumption about whether programming languages are specified in lower or upper case characters.

  
binary_tree_json = {
    'name': 'Binary Tree',
    'time_complexity': AlgorithmComplexity.OlogN,
    'space_complexity': AlgorithmComplexity.OlogN,
    'languages': ('Java', 'C+++', 'Python')    # C+++ is not a valid programming language
}

try:
    binary_tree = Algorithm(**binary_tree_json)
    logging.info(str(binary_tree.languages))   # ('Java', 'Python')
except ValidationError as err:
    logging.error(err.errors())



Discriminated unions for composition

The goal of discriminated unions is to resolve the need for multiple inheritance by creating a composite of mutually exclusive types.
Let's consider a simple hierarchy of HTML elements:


Class diagram in sub-classing using union discriminated on html_type

The HTMLText class, for the HTML fragment <p>[text]</p>, and the HTMLImage class, for the HTML component <img src=[url] />, inherit from the abstract base class HTMLElement. The member html_type, declared with the Literal type, explicitly defines the type of element.
The goal is to create a new composite type, WebElement, that can invoke either the HTMLText or (exclusively) the HTMLImage class.

import abc
from typing import AnyStr, Union, Literal
from pydantic import BaseModel, Field


class HTMLElement(BaseModel):
   html_type: AnyStr

   @abc.abstractmethod
   def get(self) -> AnyStr:
      pass


class HTMLText(HTMLElement):
   html_type: Literal["p"]
   doc: AnyStr
    
   def get(self) -> AnyStr:
      return f'<{self.html_type}>{self.doc}</{self.html_type}>'


class HTMLImage(HTMLElement):
   html_type: Literal["img"]
   img_url: AnyStr
    
   def get(self) -> AnyStr:
       return f'<{self.html_type} src={self.img_url} />'



The objective is to create a mutually exclusive composite, WebElement, of HTMLImage and HTMLText using html_type as the discriminator.
The polymorphic method get illustrates the mechanism by generating the actual textual representation of the HTML components.

class WebElement(BaseModel):
  element: Union[HTMLImage, HTMLText] = Field(discriminator="html_type")

  def get(self) -> AnyStr:
        return self.element.get()


The simple test consists of creating a dictionary, test_element to initialize a WebElement instance with type 'img'.

if __name__ == '__main__':
  html_text = HTMLText(html_type='p', doc='This is correct')
  print(html_text.get())  # <p>This is correct</p>
    
  test_element: Dict = {"img_url": "python.png", "text": "", "html_type": "img"}
  web_element = WebElement(element=test_element)
  print(web_element.get()) # <img src=python.png />


Note: For readers familiar with Scala, discriminated unions are somewhat similar to sealed trait hierarchies of case classes (tagged unions).

Thank you for reading this article. For more information ...

References

[4] FastAPI

