Wednesday, July 6, 2022

Fractal Dimension of Images in Python

Target audience: Expert
Estimated reading time: 8'
Configuring the parameters of a 2D convolutional neural network, such as kernel size and padding, can be challenging because the right choice largely depends on the complexity of an image or its specific sections. Fractals help quantify the complexity of important features and boundaries within an image and ultimately guide the data scientist in optimizing their model.


Table of contents
       Original image
       Image region

What you will learn: How to evaluate the complexity of an image using the fractal dimension index.

Notes

  • Environments: Python 3.11, Matplotlib 3.9.
  • Source code is available at  Github.com/patnicolas/Data_Exploration/fractal
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Overview

Fractal dimension

A fractal dimension is a measure used to describe the complexity of fractal patterns or sets by quantifying the ratio of change in detail relative to the change in scale [ref 1].

Initially, fractal dimensions were used to characterize intricate geometric forms where detailed patterns were more significant than the overall shape. For ordinary geometric shapes, the fractal dimension theoretically matches the familiar Euclidean or topological dimension.

However, the fractal dimension can take non-integer values. If a set's fractal dimension exceeds its topological dimension, it is considered to exhibit fractal geometry [ref 2].

There are many approaches to compute the fractal dimension [ref 1]  of an image, among them:
  • Variation method
  • Structure function method
  • Root mean square method
  • R/S analysis method
  • Box counting method
This article describes the concept and implementation of the box counting method in Python.

Box counting method

The box counting method is similar to the perimeter measuring technique we applied to coastlines. However, instead of measuring length, we overlay the image with a grid and count how many squares in the grid cover any part of the image. We then repeat this process with progressively finer grids, each with smaller squares [ref 3]. By continually reducing the grid size, we capture the pattern's structure with greater precision.
Fig. 1 Illustration of the box counting method for the Koch snowflake

If N is the number of measurement units (yardstick in 1D, square in 2D, cube in 3D, ...) and eps the related scaling factor, the fractal dimension index is computed as\[ D=- \displaystyle \lim_{\epsilon  \to 0} \frac{\log(N)}{\log(\epsilon)} \simeq - \frac{\log(N)}{\log(eps)} \ \ \text{with} \ N=r^2  \ \ (1) \] For images, we use the height r of the square box as our measurement unit.
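As a sanity check of formula (1), the following standalone sketch (not part of the article's source code; the function name box_count_dimension is illustrative) estimates the box-counting dimension of a rasterized diagonal line. The expected value is close to the topological dimension of a line, 1.

```python
import numpy as np

def box_count_dimension(points: np.ndarray, sizes) -> float:
    """Estimate the box-counting dimension of 2D points in the unit square."""
    counts = []
    for s in sizes:
        # Assign each point to a box of side 1/s and count the distinct boxes
        boxes = set(map(tuple, np.floor(points * s).astype(int)))
        counts.append(len(boxes))
    # D is the slope of log N versus log(1/eps) = log s
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return slope

# Diagonal line y = x sampled densely in the unit square
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
line = np.column_stack((t, t))
dim = box_count_dimension(line, sizes=[4, 8, 16, 32, 64])
print(round(dim, 2))   # close to 1.0
```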

Implementation

First, let's define the parameters of the square boxes:
  • eps: scaling factor for resizing the boxes
  • r: height or width of the square boxes
import numpy as np
from dataclasses import dataclass

@dataclass
class BoxParameter:
    eps: float
    r: int

    # Denominator of the fractal dimension
    def log_inv_eps(self) -> float:
        return -np.log(self.eps)

    # Numerator of the fractal dimension
    def log_num_r(self) -> float:
        return np.log(self.r)

The two methods log_inv_eps and log_num_r implement the denominator and numerator of formula (1), respectively.
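To make the two accessors concrete, here is a hypothetical single-scale reading (the class is restated so the snippet is self-contained): covering a plain 2D region with 256 boxes at scale 1/16 recovers the Euclidean dimension 2.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BoxParameter:
    eps: float   # Scaling factor of the box
    r: int       # Number of measurement units

    def log_inv_eps(self) -> float:
        return -np.log(self.eps)

    def log_num_r(self) -> float:
        return np.log(self.r)

# Single-scale estimate: N = 256 boxes at scale eps = 1/16 gives
# D = log(N) / -log(eps) = log(256) / log(16) = 2
bp = BoxParameter(eps=1.0 / 16, r=256)
print(round(bp.log_num_r() / bp.log_inv_eps(), 2))   # 2.0
```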


The class FractalDimImage encapsulates the computation of the fractal dimension of a given image.
Its two class (static) members are:
  • num_grey_levels: default number of grey levels
  • max_plateau_count: number of attempts to exit a saddle point
class FractalDimImage(object):
    # Default number of grey levels
    num_grey_levels: int = 256
    # Convergence criterion
    max_plateau_count: int = 3

    def __init__(self, image_path: AnyStr) -> None:
        raw_image: np.array = self.__load_image(image_path)

        # If the image is an RGB (color) image, convert it to a grey scale image
        self.image = FractalDimImage.rgb_to_grey(raw_image) \
            if raw_image.shape[2] == 3 else raw_image


We cannot assume that the image is defined without the 3 RGB channels. Therefore, if the third value of the shape is 3, the image is converted into a grey scale array.

The following private method, __load_image, loads the image from a given path and converts it into a NumPy array.

@staticmethod
def __load_image(image_path: AnyStr) -> np.array:
    from PIL import Image
    from numpy import asarray

    this_image = Image.open(mode="r", fp=image_path)
    return asarray(this_image)



The computation of the fractal dimension is implemented by the method __call__, which returns a tuple:
  • the fractal dimension index
  • the trace/history of the box parameters collected during execution
The symmetric nature of fractals allows us to iterate over half the size of the image [1]. The number of boxes N created at each iteration i takes the grey scale into account: N = 256 // (num_pixels // i) [2].

The method populates each box with pixels/grey levels (method __create_boxes) [3], then computes the sum of least squares (__sum_least_squares) [4]. The last statement [5] implements formula (1). The source code for the private methods __create_boxes and __sum_least_squares is included in the appendix for reference.

def __call__(self) -> (float, List[BoxParameter]):
    image_pixels = self.image.shape[0]
    plateau_count = 0
    prev_num_r = -1

    trace = []
    max_iters = (image_pixels // 2) + 1   # [1]

    for iter in range(2, max_iters):
        num_boxes = FractalDimImage.num_grey_levels // (image_pixels // iter)  # [2]
        n_boxes = max(1, num_boxes)
        num_r = 0     # Number of squares

        eps = iter / image_pixels
        for i in range(0, image_pixels, iter):
            boxes = self.__create_boxes(i, iter, n_boxes)    # [3]
            num_r += FractalDimImage.__sum_least_squares(boxes, n_boxes)  # [4]

        # Detect whether the number of measurements r has changed
        if num_r == prev_num_r:
            plateau_count += 1
        prev_num_r = num_r
        trace.append(BoxParameter(eps, num_r))

        # Break from the iteration if the computation is stuck
        # with the same number of measurements
        if plateau_count > FractalDimImage.max_plateau_count:
            break

    # Compute the fractal dimension given the trace [5]
    return FractalDimImage.__compute_fractal_dim(trace), trace



The implementation of the formula for the fractal dimension fits a first-degree polynomial to the (denominator, numerator) pairs and returns the first-order coefficient, i.e. the slope.

@staticmethod
def __compute_fractal_dim(trace: List[BoxParameter]) -> float:
    from numpy.polynomial.polynomial import polyfit

    _x = np.array([box_param.log_inv_eps() for box_param in trace])
    _y = np.array([box_param.log_num_r() for box_param in trace])
    fitted = polyfit(x=_x, y=_y, deg=1, full=False)

    return float(fitted[1])
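One design detail worth pinning down: numpy.polynomial.polynomial.polyfit returns coefficients in increasing degree order, so fitted[1] is the slope of the regression line, i.e. the fractal dimension estimate. A quick check on synthetic data with a known slope:

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit

# Synthetic log-log data with known slope 2: y = 2x + 0.5
x = np.linspace(1.0, 5.0, 20)
y = 2.0 * x + 0.5
fitted = polyfit(x, y, deg=1)

# Coefficients are ordered by increasing degree: [intercept, slope]
print(round(float(fitted[1]), 2))   # 2.0
```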


Evaluation


We compute the fractal dimension for the entire image, then for a region that contains the key features (the meaning) of the image.

image_path_name = '../images/fractal_test_image.jpg'
fractal_dim_image = FractalDimImage(image_path_name)
fractal_dim, trace = fractal_dim_image()


Original image

The original RGB image has 542 x 880 pixels and is converted into a grey scale image.

Fig. 2 Original grey scale image 


Output: fractal_dim = 2.54

Fig. 3 Trace for the squared box measurement during iteration

The size of the box converges very quickly after 8 iterations.

Image region

We select the following region of 395 x 378 pixels

Fig. 4 Key region of the original grey scale image 


Output: fractal_dim = 2.63
The region has a fractal dimension similar to that of the original image. This outcome should not be surprising: the pixels not contained in the selected region consist of background without features of any significance.

Fig. 5 Trace for the squared box measurement during iteration

The convergence pattern for calculating the fractal dimension of the region is comparable to that of the original image, reaching convergence after 6 iterations.

References


------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning", Packt Publishing ISBN 978-1-78712-238-3, and of the Geometric Learning in Python newsletter on LinkedIn.

Appendix

Source code for initializing the square boxes

def __create_boxes(self, i: int, iter: int, n_boxes: int) -> List[List[np.array]]:
   # Use a comprehension so that each box is an independent list
   boxes = [[] for _ in range((FractalDimImage.num_grey_levels + n_boxes - 1) // n_boxes)]
   i_lim = i + iter

     # Partition the pixels of the (i, i_lim) square into boxes by grey level
   for img_row in self.image[i: i_lim]:
      for pixel in img_row[i: i_lim]:
          height = int(pixel // n_boxes)
          boxes[height].append(pixel)

   return boxes
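One caution about pre-allocating the boxes: in Python, `[[]] * n` replicates a reference to a single list, so appending a pixel to one box would append it to all of them; a list comprehension creates independent boxes. A minimal demonstration:

```python
# Multiplying a list of one empty list aliases ONE list n times
aliased = [[]] * 3
aliased[0].append(1)
print(aliased)        # [[1], [1], [1]] -- all three "boxes" see the append

# A comprehension creates three independent boxes
independent = [[] for _ in range(3)]
independent[0].append(1)
print(independent)    # [[1], [], []]
```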

Computation of the sum of least squares for boxes extracted from an image.

@staticmethod
def __sum_least_squares(boxes: List[List[float]], n_boxes: int) -> float:
   # Standard deviation of the pixel values in each box
   stddev_box = np.sqrt(np.var(boxes, axis=1))
   # Filter out NaN values
   stddev = stddev_box[~np.isnan(stddev_box)]

   nBox_r = 2 * (stddev // n_boxes) + 1
   return sum(nBox_r)
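To illustrate what __sum_least_squares computes, here is a small hypothetical worked example with three boxes of equal length (so that np.var can operate row-wise); the values are illustrative, not taken from the article's test image.

```python
import numpy as np

# Three boxes of equal length so that np.var(..., axis=1) applies row-wise
boxes = np.array([
    [10.0, 12.0, 14.0],    # low-variance box
    [7.0, 7.0, 7.0],       # uniform box: zero standard deviation
    [0.0, 255.0, 128.0]    # high-variance box
])
stddev_box = np.sqrt(np.var(boxes, axis=1))
# Remove any NaN produced by degenerate boxes
stddev = stddev_box[~np.isnan(stddev_box)]

n_boxes = 4
nbox_r = 2 * (stddev // n_boxes) + 1
print(float(sum(nbox_r)))
```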


Friday, June 3, 2022

Manage Memory in Deep Java Library

Target audience: Advanced
Estimated reading time: 4'

This post introduces some techniques to monitor memory usage and leaks in machine learning applications using the Deep Java Library (DJL) [ref 1]. This bag of tricks is far from exhaustive.


Table of contents

DJL is an open source framework that supports distributed inference in Java for deep learning frameworks such as MXNet, TensorFlow, or PyTorch.
The training of deep learning models may require a very significant number of floating-point computations, which are best supported by GPUs. However, the memory model in the JVM is incompatible with the column-based resident memory required by the GPU.

Vectorization libraries such as BLAS are implemented in C/C++ and support fast execution of linear algebra operations. The ubiquitous Python numerical library, NumPy [ref 2], commonly used in data science, is a wrapper around these low-level math functions. The ND interface used in DJL provides Java developers with similar functionality.


Notes
  • The code snippets in this post are written in Scala but can be easily reworked in Java.

The basics

Memory types
DJL supports monitoring 3 memory components:
  • Resident Set Size (RSS): the portion of the memory used by a process that is held in RAM and cannot be swapped
  • Heap: the section of memory used by dynamically allocated objects
  • Non-heap: the section encompassing static memory and stack allocation

Tensor representation

Deep learning frameworks operate on tensors. Those tensors are implemented as NDArray objects, created dynamically from arrays of values (integer, float, ...). NDManager is a memory collector/manager native to the underlying C++ implementation of the various deep learning frameworks. Its purpose is to create and delete (close) NDArray instances. NDManager has a hierarchical (single-root tree) structure: child managers can be spawned from a parent [ref 3].


Let's consider the following, simple example of the computation of the mean of a sequence of floating point values.
 
 
import ai.djl.ndarray.NDManager
import scala.util.Random

// Set up the memory manager
val ndManager = NDManager.newBaseManager()

val input = Array.fill(1000)(Random.nextFloat())
// Allocate resources outside the JVM
val ndInput = ndManager.create(input)
val ndMean = ndInput.mean()
val mean = ndMean.toFloatArray.head

// Release ND resources
ndManager.close()
 

The steps implemented in the code snippet are:
  1. instantiate the root resource manager, ndManager
  2. create an array of 1000 random floating point values
  3. convert it into an ND array, ndInput
  4. compute the mean, ndMean
  5. convert the result back to Java data types
  6. finally, close the root manager

The root NDManager can be broken down into child managers to allow a finer granularity of allocation and release of resources. The following method, computeMean, instantiates a child manager, subNDManager, to compute the mean value. The child manager has to be explicitly closed (releasing the associated resources) before the function returns.
The memory associated with the local ND variables, ndInput and ndMean, is automatically released when they go out of scope.

 
import ai.djl.ndarray.NDManager

def computeMean(input: Array[Float], ndManager: NDManager): Float =
   if (input.nonEmpty) {
      val subNDManager = ndManager.newSubManager()
      val ndInput = subNDManager.create(input)
      val ndMean = ndInput.mean()
      val mean = ndMean.toFloatArray.head

      // Local resources, ndInput and ndMean, are released
      // when the child manager is closed
      subNDManager.close()
      mean
   }
   else
      0.0F
  


JMX to the rescue

The JVM provides developers with the ability to access operating system metrics, such as CPU or heap consumption, through the Java Management Extensions (JMX) interface [ref 4].
The DJL class MemoryTrainingListener leverages the JMX monitoring capability. It provides developers with a simple method, collectMemoryInfo, to collect metrics.
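Independently of DJL, the same heap and non-heap figures can be read directly from the standard java.lang.management MXBeans. A minimal standalone Java sketch (the class name MemorySnapshot is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemorySnapshot {
    // Used heap memory in kilobytes, read from the standard JMX memory MXBean
    static long usedHeapKB() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        return bean.getHeapMemoryUsage().getUsed() / 1024;
    }

    // Used non-heap (metaspace, code cache, ...) memory in kilobytes
    static long usedNonHeapKB() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        return bean.getNonHeapMemoryUsage().getUsed() / 1024;
    }

    public static void main(String[] args) {
        System.out.println("heap.kb=" + usedHeapKB());
        System.out.println("nonheap.kb=" + usedNonHeapKB());
    }
}
```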

First, we need to instruct DJL to enable the collection of memory stats, as a Java property:

  
System.setProperty("collect-memory", "true") 
 

Similar to the VisualVM heap memory snapshot described in the next section, we can collect memory metrics (RSS, heap, and non-heap) before and after each NDArray object is created or released.

  
def computeMean(
   input: Array[Float],
   ndManager: NDManager,
   metricName: String): Float = {

    val subManager = ndManager.newSubManager()
    // Initialize a new metrics
    val metrics = new Metrics()

    // Initialize the collection of memory-related metrics
    MemoryTrainingListener.collectMemoryInfo(metrics)
    val initVal = metrics.latestMetric(metricName).getValue.longValue

    val ndInput = subManager.create(input)
    val ndMean = ndInput.mean()

    collectMetric(metrics, initVal, metricName)
    val mean = ndMean.toFloatArray.head

    // Close the output array and collect metrics
    ndMean.close()
    collectMetric(metrics, initVal, metricName)

    // Close the input array and collect metrics
    ndInput.close()
    collectMetric(metrics, initVal, metricName)

    // Close the sub manager and collect metrics
    subManager.close()
    collectMetric(metrics, initVal, metricName)
    mean
}

First, we instantiate a Metrics instance that is passed along to the various snapshots. Given the metrics and the current NDManager, we establish a baseline of the heap memory size, initVal. We then collect the value of the metric for each creation and release of NDArray instances (collectMetric) in our mean computation example.

Here is a simple snapshot method which computes the increase/decrease in heap memory from the baseline.
 
 
def collectMetric(
  metrics: Metrics, 
  initVal: Long, 
  metricName: String): Unit = {

    MemoryTrainingListener.collectMemoryInfo(metrics)  
    val newVal = metrics.latestMetric(metricName).getValue.longValue
    println(s"$metricName: ${(newVal - initVal)/1024.0} KB")
}


Memory leaks detection

I have been using a combination of several investigative techniques for estimating the source of a memory leak.

MemoryTrainingListener.debugDump
This method dumps basic memory and CPU stats into a local file for a given Metrics instance.

 
  MemoryTrainingListener.debugDump(metrics, outputFile)
  
 
Output
Heap.Bytes:72387328|#Host:10.5.67.192
Heap.Bytes:74484480|#Host:10.5.67.192
NonHeap.Bytes:39337256|#Host:10.5.67.192
NonHeap.Bytes:40466888|#Host:10.5.67.192
cpu.Percent:262.2|#Host:10.5.67.192
cpu.Percent:262.2|#Host:10.5.67.192
rss.Bytes:236146688|#Host:10.5.67.192
rss.Bytes:244297728|#Host:10.5.67.192

NDManager.cap

It is not uncommon for NDArray objects associated with a sub manager not to be properly closed. One simple solution is to prevent the allocation of new objects in the parent manager.


 
 // Set up the memory manager
 val ndManager = NDManager.newBaseManager()

 // Protect the parent/root manager from
 // accidental allocation of NDArray objects
 ndManager.cap()

 // Subsequent allocations must go through a sub manager
 val subManager = ndManager.newSubManager()
 val ndInput = subManager.create(input)

  


Profilers
For reference, DJL introduces a set of experimental profilers to support the investigation of memory consumption bottlenecks [ref 5].

VisualVM
We select VisualVM [ref 6] among the various JVM profiling solutions to highlight some key statistics when investigating a memory leak. VisualVM is a utility that has to be downloaded from the Oracle site; it is not bundled with the JDK.

A simple way to identify excessive memory consumption is to take regular snapshots or dumps of the objects allocated on the heap, as illustrated below.


VisualVM has an intuitive UI to drill down into sequences or composite objects. Besides quantifying memory consumption during inference, the following detailed view illustrates the hierarchical nature of the ND manager.






Environments: JDK 11, Scala 2.12.15, Deep Java Library 0.20.0


