Monday, August 19, 2024

Performance Improvement in Numpy 2.x & LAPACK 3.9

Target audience: Beginner
Estimated reading time: 4'
The performance of training geometric learning and complex deep learning models directly affects time to market and development costs. Numpy 2.x, combined with the Accelerate framework and the latest BLAS/LAPACK libraries, can reduce execution time by up to 38%.


Table of contents
Overview
Linear Algebra
      Implementation
      Evaluation
Fast Fourier Transform
      Implementation
      Evaluation

What you will learn: Enhanced performance in linear algebra computations using Numpy 2.x with the ILP64 accelerator and LAPACK 3.9.1 library

Notes

  • Environments: Python 3.11, Matplotlib 3.9, Numpy 1.26/2.1, LAPACK 3.9.1
  • Source code is available at Github.com/patnicolas/Data_Exploration/numpy2
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.


Overview

Non-linear deep learning and geometric learning depend on geodesics, Riemannian metrics, and exponential maps to train models and predict outcomes. These operations are computationally intensive, making them ideal candidates for performance enhancements in the latest release of Numpy, version 2.1 [ref 1]. 
Linear algebra operations, on the other hand, rely on the BLAS and LAPACK libraries, which can be optimized for specific platforms [ref 2].

This study is performed on a macOS M2 Max laptop, utilizing optimizations of various mathematical libraries like LAPACK and BLAS from the Accelerate framework [ref 3]. Specifically, we use the ILP64 interface with the LAPACK 3.9.1 library alongside Numpy 2.1 [ref 4].
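To verify which BLAS/LAPACK implementation your Numpy installation is actually linked against, you can inspect its build configuration (a quick, platform-dependent check; the exact output varies by build):

```python
import numpy as np

# Display the Numpy version and its BLAS/LAPACK build configuration.
# A Numpy 2.x wheel built against the Accelerate framework mentions
# 'accelerate'; an OpenBLAS build mentions 'openblas'.
print(np.__version__)
np.show_config()
```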

In the following two sections, we will assess the performance enhancements in Numpy 2.1 when utilizing the ILP64 accelerator for the LAPACK library on macOS. The evaluation will cover two scenarios:
- Basic Linear Algebra operations
- Fast Fourier Transform

Linear Algebra

Let's compare the performance of Numpy 2.1 with the Accelerate LAPACK 3.9.1 library against Numpy 1.26 by performing the following computations:
  • Sigmoid function on a Numpy array of varying sizes
  • Summation of two 3D Numpy arrays
  • Multiplication of two 3D Numpy arrays
  • Dot product of two arrays.

Implementation

Let's start by creating keys for our linear algebra operators.

SIGMOID: AnyStr = 'sigmoid'
ADD: AnyStr = 'add'
MUL: AnyStr = 'mul'
DOT: AnyStr = 'dot'


We wrap the basic operations on Numpy arrays into the class LinearAlgebraEval with two constructors:
  • Default, __init__, with an input array and a shape
  • build method to generate a random 3D array.
The four methods/operations—sigmoid, add, mul, and dot product—are straightforward and self-explanatory.

class LinearAlgebraEval(object):
    def __init__(self, x: np.array, shape: List[int]) -> None:
        self.x = x.reshape(shape)

    @classmethod
    def build(cls, size: int) -> Self:
        _x = np.random.uniform(0.0, 100.0, size)
        return cls(_x, [10, 100, -1])

    def sigmoid(self, a: float) -> np.array:
        return 1.0/(1.0+ np.exp(-self.x * a))

    def add(self, y: np.array) -> np.array:
        return self.x + y.reshape(self.x.shape)

    def mul(self, y: np.array) -> np.array:
        z = 2.0* self.x * y.reshape(self.x.shape)
        return z * z

    def dot(self, y: np.array) -> np.array:
        return np.dot(self.x.reshape(-1), y.reshape(-1))
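As a sanity check, the four operations can be reproduced with plain Numpy calls, independently of the class (a minimal sketch with a hypothetical 1-million-element array):

```python
import numpy as np

x = np.random.uniform(0.0, 100.0, 1_000_000).reshape(10, 100, -1)
y = np.random.uniform(0.0, 100.0, 1_000_000).reshape(10, 100, -1)

sig = 1.0 / (1.0 + np.exp(-8.5 * x))          # element-wise sigmoid
added = x + y                                  # element-wise addition
z = 2.0 * x * y                                # element-wise multiplication
mul = z * z
dot = np.dot(x.reshape(-1), y.reshape(-1))     # dot product of flattened arrays
```

The shape argument [10, 100, -1] lets Numpy infer the last dimension from the total size, which is how build accommodates arrays of varying sizes.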
   

The performance evaluation is conducted by an instance of the LinearAlgebraPerfEval class. The key method, __call__, builds a dictionary with the operation name as key and the list of execution times for that operation as value.

class LinearAlgebraPerfEval(object):
    def __init__(self, sizes: List[int]) -> None:
        self.sizes = sizes

    def __call__(self) -> Dict[AnyStr, List[float]]:
        _perf = {}
        lin_algebra = LinearAlgebraEval.build(self.sizes[0])
        _perf[SIGMOID] = [LinearAlgebraPerfEval.__performance(lin_algebra, SIGMOID)]
        _perf[ADD] = [LinearAlgebraPerfEval.__performance(lin_algebra, ADD)]
        _perf[MUL] = [LinearAlgebraPerfEval.__performance(lin_algebra, MUL)]
        _perf[DOT] = [ LinearAlgebraPerfEval.__performance(lin_algebra, DOT)]

        for index in range(1, len(self.sizes)):
            # Alternative constructor
            lin_algebra = LinearAlgebraEval.build(self.sizes[index])

            # The method __record executes each of the four operations for
            # a variable size of Numpy array.

            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, SIGMOID)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, ADD)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, MUL)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, DOT)
        return _perf


The private helper method __record, described in the Appendix, executes and times each of the four operations for a set of Numpy arrays of increasing size. The timing relies on the timeit decorator, also described in the Appendix.
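The overall benchmark loop can be sketched in a simplified, self-contained form (a hypothetical driver; the actual sizes and plotting code are in the repository):

```python
import time
import numpy as np

def time_op(fn) -> float:
    # Measure the wall-clock execution time of a zero-argument callable
    start = time.time()
    fn()
    return time.time() - start

sizes = [1_000_000, 2_000_000]
perf: dict = {'add': [], 'dot': []}
for size in sizes:
    x = np.random.uniform(0.0, 100.0, size)
    y = np.random.uniform(0.0, 100.0, size)
    perf['add'].append(time_op(lambda: x + y))
    perf['dot'].append(time_op(lambda: np.dot(x, y)))
```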

Evaluation

Let's plot the execution time for these 4 operations on 3D arrays of shape (10, 100, -1), whose total size varies from 1 million to 60 million values.

Numpy 1.26 with OpenBLAS


Numpy 2.1 with ILP64 BLAS/LAPACK 3.9.1



Numpy 2.1 and ILP-64 have no impact on the performance of the sigmoid function but reduce the average execution time of addition, multiplication and dot product on these large arrays by 25 to 40%.


Fast Fourier Transform

Let's define a signal as a sum of 4 sinusoidal functions with 4 different frequency modes:\[ f(t)=\sum_{i=1}^{4} \sin(2\pi f_{i} t)\]The signal is then sampled in the interval [0, 1] with a varying number of data points.
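Before timing it, we can verify numerically that the FFT recovers the 4 frequency modes, here 4, 7, 11 and 17 Hz as used in the evaluation (a self-contained sanity check):

```python
import numpy as np

frequencies = [4, 7, 11, 17]
num_samples = 1024
t = np.arange(0, 1.0, 1.0 / num_samples)

# f(t) = sum of sines over the 4 frequency modes
signal = sum(np.sin(2 * np.pi * f * t) for f in frequencies)

# Keep the positive half of the spectrum; each mode has amplitude 0.5
spectrum = np.abs(np.fft.fft(signal)[: num_samples // 2]) / num_samples
freqs = np.fft.fftfreq(num_samples, 1.0 / num_samples)[: num_samples // 2]
peaks = freqs[spectrum > 0.25]   # frequencies with significant amplitude
```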

Implementation

We encapsulate the generation, sampling, and extraction of the frequency spectrum within the `FFTEval` class. 
The constructor iteratively adds sine functions to implement the desired formula. 
The `compute` method samples the signal over the interval [0, 1] and calls the Numpy `fft.fft` and `fft.fftfreq` methods to extract the frequency spectrum. The execution of `compute` is timed through the timeit decorator.

class FFTEval(object):
    def __init__(self, frequencies: List[int], samples: int) -> None:
        self.F = 1.0/samples
        self.x = np.arange(0, 1.0, self.F)
        pi_2 = 2*np.pi
        
        # Compose the sinusoidal signal/function
        self.signal = np.sin(pi_2*frequencies[0]*self.x)
        if len(frequencies) > 1:
            for f in frequencies[1:]:
                self.signal += np.sin(pi_2*f*self.x)

    @timeit
    def compute(self) -> (np.array, np.array):
        num_samples = len(self.signal)

        # Select the positive amplitudes and frequencies
        num_half_samples = num_samples//2

        # Invoke the Numpy FFT function to extract frequencies
        freqs = np.fft.fftfreq(num_samples, self.F)
        _freqs = freqs[:num_half_samples]
        _amplitudes = np.abs(np.fft.fft(self.signal)[:num_half_samples]) / num_samples
        return _freqs, _amplitudes

The class FFTPerfEval executes and collects the execution time of the FFT for a varying number of samples in the interval [0, 1].

class FFTPerfEval(PerfEval):
    def __init__(self, sizes: List[int]) -> None:
        super(FFTPerfEval, self).__init__(sizes)

    def __call__(self) -> List[float]:
        durations = []

        # Collect the execution time for each of the number of
        # samples defined in the constructor
        for samples in self.sizes:
            durations.append(FFTPerfEval.__compute(samples))
        return durations

    @staticmethod
    def __compute(sz: int) -> float:
        frequencies = [4, 7, 11, 17]
        fft_eval = FFTEval(frequencies, sz)
        # Time it
        return fft_eval.compute()


Evaluation

Numpy 1.26 with OpenBLAS



Numpy 2.1 with BLAS/LAPACK 3.9.1



Relative performance improvement


The switch to Numpy 2.1 with ILP-64 on macOS 14.6.1 shows an average improvement of 33%.

References

[1] Numpy
[2] LAPACK


------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning", Packt Publishing ISBN 978-1-78712-238-3 
and Geometric Learning in Python Newsletter on LinkedIn.

Appendix

Standard timing decorator in Python

def timeit(func):
    import time

    def wrapper(*args, **kwargs):
        start_time = time.time()
        func(*args, **kwargs)
        end_time = time.time()
        return end_time - start_time
    return wrapper
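A minimal usage example: the wrapper returns the elapsed time and discards the wrapped function's result, which is why the timed methods return a duration rather than the computed value.

```python
import time

def timeit(func):
    # Return the elapsed time; the wrapped function's result is discarded
    def wrapper(*args, **kwargs):
        start_time = time.time()
        func(*args, **kwargs)
        return time.time() - start_time
    return wrapper

@timeit
def busy_sum(n: int) -> int:
    return sum(range(n))

duration = busy_sum(1_000_000)   # a float in seconds, not the sum
```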


Methods to compute and record the execution time for basic linear algebra operations. The execution time is recorded through the timeit decorator.

@staticmethod
def __record(perf: Dict[AnyStr, List[float]],
             lin_algebra: LinearAlgebraEval,
             op: AnyStr) -> Dict[AnyStr, List[float]]:
    duration = LinearAlgebraPerfEval.__performance(lin_algebra, op)
    perf[op].append(duration)
    return perf

@staticmethod
@timeit
def __performance(lin_algebra: LinearAlgebraEval, op: AnyStr) -> np.array:
    match op:
        case "sigmoid":
            return lin_algebra.sigmoid(8.5)
        case "add":
            return lin_algebra.add(lin_algebra.x)
        case "mul":
            return lin_algebra.mul(lin_algebra.x)
        case "dot":
            return lin_algebra.dot(lin_algebra.x)


Wednesday, August 7, 2024

Fractal Dimension of Images in Python

Target audience: Expert
Estimated reading time: 8'
Configuring the parameters of a 2D convolutional neural network, such as kernel size and padding, can be challenging because it largely depends on the complexity of an image or its specific sections. Fractals help quantify the complexity of important features and boundaries within an image and ultimately guide the data scientist in optimizing his/her model.


Table of contents
Overview
Implementation
Evaluation
       Original image
       Image region

What you will learn: How to evaluate the complexity of an image using fractal dimension index.

Notes

  • Environments: Python 3.11, Matplotlib 3.9.
  • Source code is available at Github.com/patnicolas/Data_Exploration/fractal
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Overview

Fractal dimension

A fractal dimension is a measure used to describe the complexity of fractal patterns or sets by quantifying the ratio of change in detail relative to the change in scale [ref 1].

Initially, fractal dimensions were used to characterize intricate geometric forms where detailed patterns were more significant than the overall shape. For ordinary geometric shapes, the fractal dimension theoretically matches the familiar Euclidean or topological dimension.

However, the fractal dimension can take non-integer values. If a set's fractal dimension exceeds its topological dimension, it is considered to exhibit fractal geometry [ref 2].

There are many approaches to compute the fractal dimension [ref 1]  of an image, among them:
  • Variation method
  • Structure function method
  • Root mean square method
  • R/S analysis method
  • Box counting method
This article describes the concept and implementation of the box counting method in Python.

Box counting method

The box counting method is similar to the perimeter measuring technique we applied to coastlines. However, instead of measuring length, we overlay the image with a grid and count how many squares in the grid cover any part of the image. We then repeat this process with progressively finer grids, each with smaller squares [ref 3]. By continually reducing the grid size, we capture the pattern's structure with greater precision.
Fig. 1 Illustration of the box counting method for a Koch shape

If N is the number of measurement units (yardsticks in 1D, squares in 2D, cubes in 3D, ...) and eps the related scaling factor, the fractal dimension index is computed as\[ D=- \displaystyle \lim_{\epsilon \to 0} \frac{\log(N)}{\log(\epsilon)} \simeq - \frac{\log(N)}{\log(eps)} \ \ with \ N=r^{2} \ \ (1) \]We will use the height r of the square box as our measurement unit for images.
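As a quick numeric check of formula (1), consider a fully filled 256 x 256 binary image: every box of side s intersects the set, so N = (256/s)^2, and the slope of log(N) versus log(1/eps) should equal the Euclidean dimension 2 (a self-contained sketch, not the article's implementation):

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit

image = np.ones((256, 256))          # a fully filled set
sizes = [2, 4, 8, 16, 32]            # box sides s, with eps = s/256

log_inv_eps, log_n = [], []
for s in sizes:
    # Count the boxes of side s that contain at least one set pixel
    n_boxes = sum(
        1
        for i in range(0, 256, s)
        for j in range(0, 256, s)
        if image[i:i + s, j:j + s].any()
    )
    log_inv_eps.append(np.log(256 / s))
    log_n.append(np.log(n_boxes))

# The slope of the log-log regression estimates the fractal dimension
slope = polyfit(log_inv_eps, log_n, deg=1)[1]
```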

Implementation

First, let's define the parameters of the (square) boxes:
  • eps: scaling factor for resizing the boxes
  • r: height or width of the square boxes
@dataclass
class BoxParameter:
    eps: float
    r: int

    # Denominator of the Fractal Dimension
    def log_inv_eps(self) -> float:
        return -np.log(self.eps)

    # Numerator of the Fractal Dimension
    def log_num_r(self) -> float:
        return np.log(self.r)

The two methods log_inv_eps and log_num_r implement the denominator and numerator of formula (1), respectively.


The class FractalDimImage encapsulates the computation of fractal dimension of a given image.
The two class (static) members are
  • num_grey_levels: Default number of grey scales
  • max_plateau_count: Number of attempts to exit a saddle point.
class FractalDimImage(object):
    # Default number of grey values
    num_grey_levels: int = 256
    # Convergence criterion
    max_plateau_count = 3

    def __init__(self, image_path: AnyStr) -> None:
        raw_image: np.array = self.__load_image(image_path)

        # If the image is actually a RGB (color) image, convert it to grey scale
        self.image = FractalDimImage.rgb_to_grey(raw_image) \
            if raw_image.shape[2] == 3 else raw_image


We cannot assume that the image is already grey scale: if the 3rd value of the shape is 3, the image is converted into a grey-scale array.

The following private method, __load_image, loads the image from a given path and converts it into a Numpy array.

@staticmethod
def __load_image(image_path: AnyStr) -> np.array:
    from PIL import Image
    from numpy import asarray

    this_image = Image.open(mode="r", fp=image_path)
    return asarray(this_image)



The computation of fractal dimension is implemented by the method __call__. The method returns a tuple:
  • fractal dimension index
  • trace/history of the box parameters collected during execution.
The symmetrical nature of fractals allows us to iterate over half the size of the image [1]. The number of boxes created at each iteration i takes the grey scale into account: num_boxes = 256 // (num_pixels // i) [2].

The method populates each box with pixel/grey scale values (method __create_boxes) [3], then computes the sum of least squares (__sum_least_squares) [4]. The last statement [5] implements formula (1). The source code for the private methods __create_boxes and __sum_least_squares is included in the appendix for reference.

def __call__(self) -> (float, List[BoxParameter]):
    image_pixels = self.image.shape[0]
    plateau_count = 0
    prev_num_r = -1

    trace = []
    max_iters = (image_pixels // 2) + 1   # [1]

    for iter in range(2, max_iters):
        num_boxes = FractalDimImage.num_grey_levels // (image_pixels // iter)  # [2]
        n_boxes = max(1, num_boxes)
        num_r = 0     # Number of measurements
        eps = iter / image_pixels

        for i in range(0, image_pixels, iter):
            boxes = self.__create_boxes(i, iter, n_boxes)    # [3]
            num_r += FractalDimImage.__sum_least_squares(boxes, n_boxes)  # [4]

        # Detect whether the number of measurements r has changed
        if num_r == prev_num_r:
            plateau_count += 1
        prev_num_r = num_r
        trace.append(BoxParameter(eps, num_r))

        # Break out of the iteration if the computation is stuck
        # with the same number of measurements
        if plateau_count > FractalDimImage.max_plateau_count:
            break

    # Compute the fractal dimension from the trace [5]
    return FractalDimImage.__compute_fractal_dim(trace), trace



The implementation of the formula for the fractal dimension fits a first-degree polynomial to the numerator and denominator values collected in the trace, and returns the first-order coefficient (the slope).

@staticmethod
def __compute_fractal_dim(trace: List[BoxParameter]) -> float:
   from numpy.polynomial.polynomial import polyfit

   _x = np.array([box_param.log_inv_eps() for box_param in trace])
   _y = np.array([box_param.log_num_r() for box_param in trace])
   fitted = polyfit(x=_x, y=_y, deg=1, full=False)

   return float(fitted[1])
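The degree-1 coefficient returned by polyfit is the slope of the log-log regression, i.e. the fractal dimension estimate. A quick check with a synthetic trace of known slope (hypothetical values, not derived from the article's image):

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit

# Synthetic log-log data: log N = 2.5 * log(1/eps) + 0.3
_x = np.linspace(1.0, 5.0, 20)
_y = 2.5 * _x + 0.3

# polyfit returns coefficients in increasing degree: [intercept, slope]
fitted = polyfit(x=_x, y=_y, deg=1, full=False)
slope, intercept = fitted[1], fitted[0]
```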


Evaluation


We compute the fractal dimension for an image, then for a region that contains the key features of the image.

image_path_name = '../images/fractal_test_image.jpg'
fractal_dim_image = FractalDimImage(image_path_name)
fractal_dim, trace = fractal_dim_image()


Original image

The original RGB image has 542 x 880 pixels and is converted into a grey-scale image.

Fig. 2 Original grey scale image 


Output: fractal_dim = 2.54

Fig. 3 Trace for the squared box measurement during iteration

The size of the box converges very quickly after 8 iterations.

Image region

We select the following region of 395 x 378 pixels
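Extracting such a region from the grey-scale array is a plain Numpy slice (the top-left offsets below are illustrative assumptions, not the article's actual crop coordinates):

```python
import numpy as np

# Stand-in for the 542 x 880 grey-scale image array
image = np.random.uniform(0.0, 255.0, (542, 880))

# Crop a 395 x 378 region; the corner (100, 250) is hypothetical
region = image[100:100 + 395, 250:250 + 378]
```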

Fig. 4 Key region of the original grey scale image 


Output: fractal_dim = 2.63
The region has a fractal dimension similar to that of the original image. This outcome should not be surprising: the pixels outside the selected region consist of background without any significant features.

Fig. 5 Trace for the squared box measurement during iteration

The convergence pattern for calculating the fractal dimension of the region is comparable to that of the original image, reaching convergence after 6 iterations.

References



Appendix

Source code for initializing the square boxes

def __create_boxes(self, i: int, iter: int, n_boxes: int) -> List[List[np.array]]:
   # Use a comprehension: [[]] * n would share a single list instance
   boxes = [[] for _ in range((FractalDimImage.num_grey_levels + n_boxes - 1) // n_boxes)]
   i_lim = i + iter

     # Assign each pixel of the square box [i, i_lim] to its grey level bucket
   for img_row in self.image[i: i_lim]:
      for pixel in img_row[i: i_lim]:
          height = int(pixel // n_boxes)
          boxes[height].append(pixel)

   return boxes

Computation of the sum of least squares for the boxes extracted from an image.

@staticmethod
def __sum_least_squares(boxes: List[List[float]], n_boxes: int) -> float:
   # Standard deviation of each box (the boxes may have different lengths)
   stddev_box = np.array([np.sqrt(np.var(box)) if box else np.nan for box in boxes])
   # Filter out NaN values produced by empty boxes
   stddev = stddev_box[~np.isnan(stddev_box)]

   n_box_r = 2 * (stddev // n_boxes) + 1
   return float(sum(n_box_r))