Tuesday, August 2, 2022

Pattern Matching: Python vs. Scala

Target audience: Beginner
Estimated reading time: 3'   

Ever found yourself frustrated by those pesky chains of if and elif statements? Python's latest versions (3.10 and above) offer a remedy.
This article describes structural pattern matching [ref 1] and compares it with its definition and implementation in Scala.


Table of contents

Notes
  • Environments:  Python 3.10, Scala 2.13.2
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Overview

As a data engineer using Scala for data pre-processing and Python deep learning frameworks, I have always been interested in comparing the features of these two languages.

Python already supports a limited form of this capability through sequence unpacking assignments. In older versions of Python, there were two approaches to matching a value against patterns (similar to the switch/case idiom available in most programming languages):
  • Write a sequence of if/elif/else condition-action pairs.
  • Create a dictionary with the condition as key and the action as value.
Neither of these options is flexible or easy to maintain. It was only a matter of time before Python joined quite a few other programming languages in adopting structural pattern matching, in version 3.10 [ref 2].
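As an illustration of the second approach, here is a minimal dictionary-dispatch sketch for pre-3.10 Python (the status keys and handler functions are hypothetical):

```python
# Pre-3.10 alternative to switch/case: a dictionary mapping
# a condition (key) to an action (callable value)

def on_success() -> str:
    return 'Succeeded'

def on_failure() -> str:
    return 'Failed'

# Condition -> action table
handlers = {
    'success': on_success,
    'failure': on_failure,
}

def process(status: str) -> str:
    # dict.get with a default callable plays the role of the else branch
    return handlers.get(status, lambda: 'Unknown status')()
```

Adding a new case only means registering a new entry, but complex conditions (ranges, destructuring) do not fit this table-driven style, which is precisely the gap structural pattern matching fills.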

This new feature is very similar to the pattern matching found in the Scala programming language. How do the semantics and extensibility of pattern matching compare between Python and Scala?

Pattern matching in Scala

Typed pattern matching has been part of the Scala programming language for over 10 years [ref 3]. Its purpose is to check a value against one or several values or patterns. This feature is more powerful than the switch statement found in most programming languages because it can deconstruct a value or class instance into its constituent parts.

In the following example, the type of a status instance (inheriting from the trait Status) is matched against all possible subtypes (Failure, Success and Unknown).

sealed trait Status

case class Failure(error: String) extends Status
case object Success extends Status
case object Unknown extends Status
  

def processStatus(status: Status): String = status match {
    case Failure(errorMsg) => errorMsg
    case Success => "Succeeded"
    case Unknown => "Undefined status"
}

Note that the set of types derived from Status is sealed (or restricted). Therefore the function processStatus does not need to handle undefined types: exhaustiveness is checked by the compiler.


Python value-type pattern

This is the simplest construct for pattern matching: a value, along with its type, is checked against a given set of values.

from typing import Any
from enum import Enum

class EnumValue(Enum):
    SUCCESS = 1
    FAILURE = -1
    UNKNOWN = 0


def variable_matching(value: Any):
    match value:
        case 2.0:
            print(f'Input {value} is matched as a float')
        case "3.5":
            print(f'Input {value} is matched as a string')
        case EnumValue.SUCCESS:
            print('Success')
        case _:
            print(f'Failed to match {value}')


if __name__ == '__main__':
    variable_matching(3.5)                 # Failed to match 3.5
    variable_matching("3.5")               # Input 3.5 is matched as a string
    variable_matching(EnumValue.FAILURE)   # Failed to match EnumValue.FAILURE


In the previous code snippet, the argument of the function variable_matching is checked against a set of values AND their types. The attempt to match the float 3.5 fails because its value differs from the float literal 2.0 and its type differs from the string "3.5".

The following truth table illustrates the basic matching algorithm:

  Matched type    Matched value    Outcome
  No              No               Failed
  Yes             No               Failed
  No              Yes              Failed
  Yes             Yes              Succeeded


What about more complex types?

Python mappings pattern

The previous section dealt with matching single values and types. What about more complex structures, such as dictionaries?
The following code snippet illustrates the mechanism for matching a pattern of key-value pairs.

from typing import Dict, Optional
#  Dict keys: 'name', 'status', 'role', 'bonus'


def mappings_matching(json_dict: Dict) -> Optional[Dict]:
    match json_dict:
        case {'name': 'Joan'}:
            json_dict['status'] = 'vacation'
            return json_dict
        case {'role': 'engineer', 'status': 'promoted'}:
            json_dict['bonus'] = True
            return json_dict
        case _:
            print(f'ERROR: {str(json_dict)} not supported')
            return None



if __name__ == '__main__':
  json_object = {
   'name': 'Joan', 'status': 'full-time', 'role': 'marketing director', 'bonus': False
  }
  print(mappings_matching(json_object))
  # {'name': 'Joan', 'status': 'vacation', 'role': 'marketing director', 'bonus': False}

  json_object = {
   'name': 'Frank', 'status': 'promoted', 'role': 'engineer', 'bonus': False
  }
  print(mappings_matching(json_object))
  # {'name': 'Frank', 'status': 'promoted', 'role': 'engineer', 'bonus': True}

  json_object = {
   'name': 'Frank', 'status': 'promoted', 'role': 'account manager', 'bonus': False
  }
  print(mappings_matching(json_object))
  # ERROR: {'name': 'Frank', 'status': 'promoted', 'role': 'account manager', 'bonus': False} not supported



The function mappings_matching first attempts to match the value Joan for the key name, then attempts to match two values, engineer and promoted, for the respective keys role and status.

Python class pattern

The previous section dealt with matching against built-in types. Let's look at custom types, or classes, that apply an operation such as adding or multiplying two values.
First we define a data class, Operator, which is fully defined by two parameters:
  • name of the operator, with type string
  • arguments of the operator, args, with type tuple.
In order to succeed, the operation should
  • be supported (its name is a known key)
  • have exactly two arguments.
These two conditions define the context of the pattern matching. The method Operator.__call__ generates the string representation of the operation op(args) (e.g., + (4, 5)).
The method attempts to match 1) the name of the operator and 2) the number of arguments (expected to be 2 for addition and multiplication).

from typing import Any, AnyStr, Tuple
from dataclasses import dataclass


@dataclass
class Operator:
    name: AnyStr
    args: Tuple

    def __call__(self) -> AnyStr:
        match (self.name, len(self.args)):       # Minimum condition for matching
            case ('multiply', 2):
                value = self.args[0]*self.args[1]
                return f'{self.args[0]} x {self.args[1]} = {value}'
            case ('add', 2):
                value = self.args[0] + self.args[1]
                return f'{self.args[0]} + {self.args[1]} = {value}'
            case _:
                return "Undefined operation"

    def __str__(self):
        return f'{self.name}: {str(self.args)}'


if __name__ == '__main__':
    operator = Operator("add", (3.5, 6.2))
    print(operator)     # add: (3.5, 6.2)


Now let's match an object of any type to perform the operation. The process follows two steps:
  1. Match the type of the input to Operator
  2. Match the attributes of the operator by invoking the method Operator.__call__ described above.

def object_matching(obj: Any) -> AnyStr:
    match obj:                          # First match: is this an Operator?
        case Operator('multiply', _):   # Second match: are the operator attributes valid?
            return obj()
        case Operator(_, _):
            return obj()
        case _:
            return f'Type not found: {str(obj)}'


if __name__ == '__main__':
    operator = Operator("add", (3.5, 6.2))
    print(object_matching(operator))           # 3.5 + 6.2 = 9.7
    operator = Operator("multiply", (3, 2))
    print(object_matching(operator))           # 3 x 2 = 6
    operator = Operator("multiply", (1, 3, 9))
    print(object_matching(operator))           # Undefined operation
    operator = Operator("divided", (3, 3))
    print(object_matching(operator))           # Undefined operation
    print(object_matching(3.4))                # Type not found: 3.4



This post illustrates some of the applications of the structural pattern matching feature introduced in Python 3.10. There are many more patterns worth exploring [ref 4].

Thank you for reading this article. For more information ...

References




---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Wednesday, July 6, 2022

Fractal Dimension of Images in Python

Target audience: Expert
Estimated reading time: 8'
Configuring the parameters of a 2D convolutional neural network, such as kernel size and padding, can be challenging because it largely depends on the complexity of an image or its specific sections. Fractals help quantify the complexity of important features and boundaries within an image and ultimately guide the data scientist in optimizing his/her model.


Table of content
       Original image
        Image section

What you will learn: How to evaluate the complexity of an image using fractal dimension index.

Notes

  • Environments: Python  3.11,  Matplotlib 3.9.
  • Source code is available at  Github.com/patnicolas/Data_Exploration/fractal
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Overview

Fractal dimension

A fractal dimension is a measure used to describe the complexity of fractal patterns or sets by quantifying the ratio of change in detail relative to the change in scale [ref 1].

Initially, fractal dimensions were used to characterize intricate geometric forms where detailed patterns were more significant than the overall shape. For ordinary geometric shapes, the fractal dimension theoretically matches the familiar Euclidean or topological dimension.

However, the fractal dimension can take non-integer values. If a set's fractal dimension exceeds its topological dimension, it is considered to exhibit fractal geometry [ref 2].

There are many approaches to compute the fractal dimension [ref 1]  of an image, among them:
  • Variation method
  • Structure function method
  • Root mean square method
  • R/S analysis method
  • Box counting method
This article describes the concept and implementation of the box counting method in Python.

Box counting method

The box counting method is similar to the perimeter measuring technique we applied to coastlines. However, instead of measuring length, we overlay the image with a grid and count how many squares in the grid cover any part of the image. We then repeat this process with progressively finer grids, each with smaller squares [ref 3]. By continually reducing the grid size, we capture the pattern's structure with greater precision.
Fig. 1 Illustration of the box counting method for the Koch curve
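For intuition, here is a minimal box-counting sketch for a binary (black-and-white) array. It is deliberately simpler than the grey-scale implementation described below; the function name and logic are ours:

```python
import numpy as np

def count_boxes(img: np.array, box_size: int) -> int:
    # Count the grid squares of size box_size x box_size that cover
    # at least one non-zero pixel of the image
    count = 0
    height, width = img.shape
    for i in range(0, height, box_size):
        for j in range(0, width, box_size):
            if img[i:i + box_size, j:j + box_size].any():
                count += 1
    return count
```

Repeating the count over progressively smaller box_size values produces the (N, eps) pairs needed to estimate the fractal dimension.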

If N is the number of measurement units (yardstick in 1D, square in 2D, cube in 3D, ...) and eps the related scaling factor, the fractal dimension index is computed as \[ D = - \displaystyle \lim_{\epsilon \to 0} \frac{\log(N)}{\log(\epsilon)} \simeq - \frac{\log(N)}{\log(eps)} \ \ \text{with} \ N=r^{2} \ \ (1) \] We will use the height of the square box, r, as our measurement unit for images.
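In practice the limit in formula (1) is approximated by the slope of log N versus log(1/eps) over several grid sizes. A quick sketch of that estimate (the function name and sample values are ours):

```python
import numpy as np

def estimate_fractal_dim(counts, scales):
    # D is estimated as the slope of log(N) against log(1/eps)
    log_n = np.log(np.array(counts, dtype=float))
    log_inv_eps = np.log(1.0 / np.array(scales, dtype=float))
    slope, _intercept = np.polyfit(log_inv_eps, log_n, deg=1)
    return float(slope)

# Halving the box size quadruples the count for a plane-filling set: D = 2
# estimate_fractal_dim([16, 64, 256], [1.0, 0.5, 0.25])
```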

Implementation

First, let's define the box parameters (for a square box):
  • eps: scaling factor for resizing the boxes
  • r: height or width of the square boxes
@dataclass
class BoxParameter:
    eps: float
    r: int

    # Denominator of the Fractal Dimension
    def log_inv_eps(self) -> float:
        return -np.log(self.eps)

    # Numerator of the Fractal Dimension
    def log_num_r(self) -> float:
        return np.log(self.r)

The two methods log_inv_eps and log_num_r implement the denominator and numerator, respectively, of formula (1).


The class FractalDimImage encapsulates the computation of the fractal dimension of a given image.
The two class (static) members are:
  • num_grey_levels: default number of grey levels
  • max_plateau_count: number of attempts to exit a saddle point.
class FractalDimImage(object):
    # Default number of grey levels
    num_grey_levels: int = 256
    # Convergence criterion
    max_plateau_count = 3

    def __init__(self, image_path: AnyStr) -> None:
        raw_image: np.array = self.__load_image(image_path)

        # If the image is an RGB (color) image, convert it to a grey scale image
        self.image = FractalDimImage.rgb_to_grey(raw_image) \
            if raw_image.shape[2] == 3 else raw_image


We cannot assume that the image has a single channel: if the third value of the shape is 3, the image carries RGB channels and is converted into a grey scale array.
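The helper rgb_to_grey is referenced above but not listed in the article. A plausible sketch, shown here as a standalone function using the common ITU-R BT.601 luminance weights (our assumption), is:

```python
import numpy as np

def rgb_to_grey(rgb_image: np.array) -> np.array:
    # Weighted sum of the R, G and B channels (ITU-R BT.601 luminance weights)
    weights = np.array([0.2989, 0.5870, 0.1140])
    return np.dot(rgb_image[..., :3], weights)
```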

The following private method, __load_image, loads the image from a given path and converts it into a numpy array.

@staticmethod
def __load_image(image_path: AnyStr) -> np.array:
    from PIL import Image
    from numpy import asarray

    this_image = Image.open(mode="r", fp=image_path)
    return asarray(this_image)



The computation of the fractal dimension is implemented by the method __call__. The method returns a tuple:
  • the fractal dimension index
  • the trace/history of the box parameters collected during execution.
The symmetrical nature of fractals allows us to iterate over half the size of the image [1]. The number of boxes N created at each iteration i takes the grey scale into account: N = (256 / num_pixels) * i [2].

The method populates each box with pixels/grey levels (method __create_boxes) [3], then computes the sum of least squares (__sum_least_squares) [4]. The last statement [5] implements formula (1). The source code for the private methods __create_boxes and __sum_least_squares is included in the appendix for reference.

def __call__(self) -> (float, List[BoxParameter]):
    image_pixels = self.image.shape[0]
    plateau_count = 0
    prev_num_r = -1

    trace = []
    max_iters = (image_pixels // 2) + 1   # [1]

    for iter in range(2, max_iters):
        num_boxes = FractalDimImage.num_grey_levels // (image_pixels // iter)  # [2]
        n_boxes = max(1, num_boxes)
        num_r = 0     # Number of squares

        eps = iter / image_pixels
        for i in range(0, image_pixels, iter):
            boxes = self.__create_boxes(i, iter, n_boxes)                      # [3]
            num_r += FractalDimImage.__sum_least_squares(boxes, n_boxes)       # [4]

        # Detect whether the number of measurements r has changed
        if num_r == prev_num_r:
            plateau_count += 1
        prev_num_r = num_r
        trace.append(BoxParameter(eps, num_r))

        # Break out of the iteration if the computation is stuck
        # at the same number of measurements
        if plateau_count > FractalDimImage.max_plateau_count:
            break

    # Compute the fractal dimension from the trace [5]
    return FractalDimImage.__compute_fractal_dim(trace), trace



The implementation of the fractal dimension formula fits a first-degree polynomial to the (log_inv_eps, log_num_r) pairs collected in the trace and returns the first-order coefficient, i.e. the slope.

@staticmethod
def __compute_fractal_dim(trace: List[BoxParameter]) -> float:
   from numpy.polynomial.polynomial import polyfit

   _x = np.array([box_param.log_inv_eps() for box_param in trace])
   _y = np.array([box_param.log_num_r() for box_param in trace])
   fitted = polyfit(x=_x, y=_y, deg=1, full=False)

   return float(fitted[1])


Evaluation


We compute the fractal dimension for an entire image, then for a region that contains the key features (the meaning) of the image.

image_path_name = '../images/fractal_test_image.jpg'
fractal_dim_image = FractalDimImage(image_path_name)
fractal_dim, trace = fractal_dim_image()


Original image

The original RGB image has 542 x 880 pixels and is converted into a grey scale image.

Fig. 2 Original grey scale image 


Output: fractal_dim = 2.54

Fig. 3 Trace for the squared box measurement during iteration

The size of the box converges very quickly after 8 iterations.

Image region

We select the following region of 395 x 378 pixels.

Fig. 4 Key region of the original grey scale image 


Output: fractal_dim = 2.63
The region has a fractal dimension similar to that of the original image. This outcome should not be surprising: the pixels outside the selected region consist of background without features of any significance.

Fig. 5 Trace for the squared box measurement during iteration

The convergence pattern for calculating the fractal dimension of the region is comparable to that of the original image, reaching convergence after 6 iterations.

References



Appendix

Source code for initializing the square boxes

def __create_boxes(self, i: int, iter: int, n_boxes: int) -> List[List[np.array]]:
    # A list comprehension guarantees that each box is a distinct list
    # (a multiplied [[]] literal would alias a single shared list)
    boxes = [[] for _ in range((FractalDimImage.num_grey_levels + n_boxes - 1) // n_boxes)]
    i_lim = i + iter

    # Distribute the pixels of the (i, i_lim) sub-region into boxes
    # indexed by grey level
    for img_row in self.image[i: i_lim]:
        for pixel in img_row[i: i_lim]:
            height = int(pixel // n_boxes)
            boxes[height].append(pixel)

    return boxes

Computation of the sum of least squares for boxes extracted from an image.

@staticmethod
def __sum_least_squares(boxes: List[List[float]], n_boxes: int) -> float:
   # Standard deviation of boxes
   stddev_box = np.sqrt(np.var(boxes, axis=1))
   # Filter out NAN values
   stddev = stddev_box[~np.isnan(stddev_box)]

   nBox_r = 2 * (stddev // n_boxes) + 1
   return sum(nBox_r)