
Wednesday, May 1, 2024

K-means on Riemann Manifolds

Target audience: Advanced
Estimated reading time: 7'

Traditional clustering models often fail on the complex datasets commonly found in advanced applications such as medical imaging, 3D shape analysis, and natural language processing, where data points are highly interrelated.
K-means on manifolds respects the intrinsic geometry of the data, such as curvature and metric.


Table of contents

       Introduction
       K-means
       Clustering data on a manifold
       Implementation

What you will learn: How to apply k-means clustering on a Riemann manifold (hypersphere) using the Geomstats library, contrasted with its implementation in Euclidean space using the scikit-learn library.

Notes

  • Environments: Python 3.10.10, Geomstats 2.7.0, Scikit-learn 1.4.2, Matplotlib 3.8.3
  • This article assumes that the reader is somewhat familiar with differential and tensor calculus [ref 1]. Please refer to our previous articles related to geometric learning [ref 2, 3, 4].
  • Source code is available at Github.com/patnicolas/Data_Exploration/manifolds
  • To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.

Introduction

The primary goal of learning Riemannian geometry is to understand and analyze the properties of curved spaces that cannot be described adequately using Euclidean geometry alone. Riemannian geometry enables us to describe the geometric structures of manifolds equipped with a metric, which defines the concepts of distance and angle on these spaces.

This article is the seventh part of our ongoing series focused on geometric learning. In this installment, we utilize the Geomstats Python library [ref. 5] to explore the ubiquitous k-means clustering algorithm on the hypersphere manifold. The hypersphere was introduced in a previous piece, Geometric Learning in Python: Manifolds - Hypersphere, and is detailed in the Geomstats API [ref. 6].

I highly recommend watching the comprehensive series of 22 YouTube videos Tensor Calculus - Eigenchris to familiarize yourself with the fundamental concepts of differential geometry.
Summaries of my earlier articles on this topic can be found in the Appendix.

There are many benefits to clustering data on a manifold for complex data sets [ref 7]:
  • Grouping of dense, continuous non-linear data depends on the 'shape' of the data.
  • Projection into Euclidean space may introduce distortion.
  • Losses and distances are better assessed and computed through geodesics than through Euclidean metrics (e.g., on a sphere).

K-means


Among the array of unsupervised learning algorithms, K-means stands out as one of the most well-known. This algorithm has a straightforward goal: to divide the data space so that data points within the same cluster are as similar as possible (intra-cluster similarity), and data points in different clusters are as dissimilar as possible (inter-cluster similarity). K-means aims to identify a predetermined number of clusters in an unlabeled dataset. It employs an iterative approach to finalize the clustering, which depends on the number of clusters specified by the user (denoted by the variable K).

Given K clusters Ck, each with a centroid mk, the input data points xi are distributed across the clusters so as to minimize the reconstruction error:\[R_{err}(K)=\min_{C_k}\sum_{k=1}^{K}\sum_{x_i\in C_k}^{}\left \| x_i-m_k\right \|^2\]
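To make the objective concrete, here is a minimal NumPy sketch (illustrative only, not part of the article's implementation) that evaluates this reconstruction error for a given assignment of points to centroids:

import numpy as np

def reconstruction_error(data: np.ndarray, centroids: np.ndarray, labels: np.ndarray) -> float:
    # Sum of squared Euclidean distances between each point and its assigned centroid
    return float(sum(
        np.sum((data[labels == k] - centroids[k]) ** 2) for k in range(len(centroids))
    ))

# Hypothetical example: 4 points assigned to 2 centroids
data = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
centroids = np.array([[0.05, 0.1], [0.95, 1.05]])
labels = np.array([0, 0, 1, 1])
print(reconstruction_error(data, centroids, labels))    # small value: tight clusters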

Clustering data on a manifold


To assess and contrast the k-means model in both Euclidean space and on a hypersphere, it's necessary to create clustered data. This involves a two-step process:
  1. Generate a template cluster by employing a random generator on the manifold.
  2. Generate 4 clusters from the template using a special orthogonal Lie group in 3-dimensional space, SO(3).

Randomly generated manifold data

Let's evaluate and compare the following random generators for data points on the hypersphere, which we introduced in a previous article [ref 8]:
  • Uniform distribution
  • Uniform distribution with constraints
  • von Mises-Fisher distribution

Uniform distribution
We start with the basic random uniform generator over the interval [0, 1].\[r=rand_{[0,1]}(x)\]The data points for the 4 clusters are visualized in the following plot.
4-Cluster random generation using uniform distribution


Constrained uniform random generator
In this scenario, we constrain the random values r on each of the 3 dimensions (or axes) within a sub-interval [ai, bi].\[r=rand_{[0,1]}(x)\ \ \ a_i < r_i < b_i\]

4-Cluster random generation using constrained uniform distribution

von Mises-Fisher random generator
This approach relies on a generative mixture-model approach to clustering directional data based on the von Mises-Fisher distribution [ref 9].
Given a d-dimensional unit random vector x on a hypersphere of dimension d-1, the d-variate von Mises-Fisher distribution is defined by the following probability density function:\[f(x|\mu , \kappa )=C_d(\kappa).e^{\kappa \mu^Tx} \ \ \  \ C_d(\kappa)=\frac{\kappa^{\frac{d}{2}-1}}{(2\pi)^{\frac{d}{2}}I_{\frac{d}{2}-1}(\kappa)}\]where I is the modified Bessel function of the first kind and Cd is the normalization factor.

4-Cluster random generation using von Mises-Fisher distribution
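As a quick, illustrative check (a sketch using the Geomstats API shown later in this article, not part of the original implementation), one can sample from the von Mises-Fisher distribution and verify that a larger concentration parameter kappa draws the samples closer to the mean direction mu:

import numpy as np
from geomstats.geometry.hypersphere import Hypersphere

sphere = Hypersphere(dim=2)
mu = np.array([0.0, 0.0, 1.0])    # mean direction: north pole

# Higher kappa => samples concentrate around mu (mean dot product approaches 1)
for kappa in (5, 60, 200):
    samples = sphere.random_von_mises_fisher(mu=mu, kappa=kappa, n_samples=1000)
    print(kappa, float(np.mean(samples @ mu)))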

As anticipated, using a pure uniform random generator distributes data evenly across the hypersphere, rendering it ineffective for evaluating KMeans.
Instead, we will employ the von Mises-Fisher distribution and a constrained uniform random generator to more effectively analyze the performance of KMeans on a Riemann manifold.

Synthetic clusters using SO(3)

We leverage the SO(3) Lie group to replicate the randomly generated cluster.

Although the discussion of Lie groups and special orthogonal group in 3-dimensional space is beyond the scope of this article, here is a short summary:
In differential geometry, Lie groups play a crucial role by connecting the concepts of algebra and geometry. A Lie group is a mathematical structure that is both a group and a differentiable manifold. This means that the group operations of multiplication and taking inverses are smooth (differentiable), and it allows the application of calculus within the group structure.

The special orthogonal Lie group in 3-dimensional space, SO(3), is simply the group of 3 x 3 orthogonal matrices with determinant +1. Its elements represent rotations in 3-dimensional space, and it forms a compact Lie group.
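Here is a short sketch (illustrative only) that verifies these two defining properties on a rotation drawn at random from the Geomstats SO(3) group used in the next section:

import numpy as np
from geomstats.geometry.special_orthogonal import SpecialOrthogonal

so3 = SpecialOrthogonal(3, equip=False)
rotation = so3.random_uniform()    # a 3 x 3 rotation matrix

# Orthogonality: R^T.R = I, and determinant = +1
print(np.allclose(rotation.T @ rotation, np.eye(3)))    # True
print(np.isclose(np.linalg.det(rotation), 1.0))         # True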

Implementation

Setup 

Let's wrap the random generators and k-means training methods in a class, KMeansOnManifold.
The von Mises-Fisher generator for the data in the initial cluster is initialized with a mean _mu and an arbitrary kappa value. The constrained uniform random generator accepts random values in each dimension: x in [-1, -0.35], y in [0.3, 1], and z in [-1, -0.4].
The SO(3) Lie group is initialized without a metric (equip=False) to generate the 4 synthetic clusters.

from typing import AnyStr, List
import numpy as np
from geomstats.geometry.hypersphere import Hypersphere
from geomstats.geometry.special_orthogonal import SpecialOrthogonal

class KMeansOnManifold(object):

    def __init__(self, num_samples: int, num_clusters: int, random_gen: AnyStr):
        # Step 1: Initialize the manifold
        self.hypersphere = Hypersphere(dim=2, equip=True)

        # Step 2: Generate a single cluster with random data points on the hypersphere
        match random_gen:
            case 'random_von_mises_fisher':
                # Select a pivot or mean value
                _mu = self.hypersphere.random_uniform(n_samples=1)
                # Generate the cluster around the mean
                cluster = self.hypersphere.random_von_mises_fisher(
                    mu=_mu[0],
                    kappa=60,
                    n_samples=num_samples,
                    max_iter=200)

            case 'random_riemann_normal':
                cluster = self.hypersphere.random_riemannian_normal(n_samples=num_samples, max_iter=300)

            case 'random_uniform':
                cluster = self.hypersphere.random_uniform(n_samples=num_samples)

            case 'constrained_random_uniform':
                # Generate random values with constraints on each dimension
                y = [x for x in self.hypersphere.random_uniform(n_samples=100000)
                     if x[0] <= -0.35 and x[1] >= 0.3 and x[2] <= -0.40]
                cluster = np.array(y)[0:num_samples]

            case _:
                raise ValueError(f'{random_gen} generator is not supported')

        # Step 3: Generate the other clusters by applying random SO(3) rotations to the template
        so3_lie_group = SpecialOrthogonal(3, equip=False)
        self.clusters = [cluster @ so3_lie_group.random_uniform() for _ in range(num_clusters)]



Data in Euclidean space

The data class, KMeansCluster, encapsulates the output (centroid and label) of training the k-means algorithm on the synthetic clustered data.

from dataclasses import dataclass

@dataclass
class KMeansCluster:
    center: np.array
    label: np.array

We rely on the k-means implementation of the scikit-learn library [ref 10] (class KMeans) to identify the clusters in Euclidean space, selecting the elkan algorithm and the k-means++ initialization.

def euclidean_clustering(self) -> List[KMeansCluster]:
   from sklearn.cluster import KMeans

   kmeans = KMeans(
       n_clusters=len(self.clusters), 
       init='k-means++', 
       algorithm='elkan',
       max_iter=140)
  
   # Create a data set from points in clusters
   data = np.concatenate(self.clusters, axis=0)
   kmeans.fit(data)

   # Extract centroids and labels
   centers = kmeans.cluster_centers_
   labels = kmeans.labels_

   return [KMeansCluster(center, label) for center, label in zip(centers, labels)]
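A usage sketch (hypothetical driver code, mirroring the Riemannian example shown later in this article) that trains the model on the synthetic clusters and prints the centroids:

num_samples = 500
num_clusters = 4
kmeans = KMeansOnManifold(num_samples, num_clusters, 'constrained_random_uniform')
for cluster in kmeans.euclidean_clustering():
    print(f'Cluster Center: {cluster.center}, Label: {cluster.label}')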

Output:
Cluster Center: [ 0.56035023 -0.4030522   0.70054776], Label: 0
Cluster Center: [-0.1997325  -0.38496744  0.8826764 ], Label: 2
Cluster Center: [0.04443849 0.86749237 0.46118632], Label: 3
Cluster Center: [-0.83876485 -0.45621187  0.23570083], Label: 1

Clearly, this implementation of k-means was not able to identify the proper clusters.

Data on hypersphere

The method to train k-means on the hypersphere uses the same semantics as its scikit-learn counterpart. It leverages the Geomstats RiemannianKMeans class.

def riemannian_clustering(self) -> List[KMeansCluster]:
    import geomstats.backend as gs
    from geomstats.learning.kmeans import RiemannianKMeans

    # Invoke the Geomstats Riemannian k-means
    kmeans = RiemannianKMeans(space=self.hypersphere, n_clusters=len(self.clusters))

    # Build the data set from the clustered data points
    data = gs.concatenate(self.clusters, axis=0)
    kmeans.fit(data)

    # Extract centroids and labels
    centers = kmeans.centroids_
    labels = kmeans.labels_

    return [KMeansCluster(center, label) for center, label in zip(centers, labels)]


Similar to k-means in Euclidean space, we identify the centroids for 4 clusters using 500 randomly generated samples.

num_samples = 500
num_clusters = 4
kmeans = KMeansOnManifold(num_samples, num_clusters,  'random_von_mises_fisher')
kmeans_cluster = kmeans.riemannian_clustering()


Output:
500 random samples on 4 clusters with von Mises-Fisher distribution
Cluster Center: [ 0.17772496 -0.36363422  0.91443097], Label: 2
Cluster Center: [ 0.44403679  0.06735507 -0.89347335], Label: 0
Cluster Center: [ 0.85407911 -0.50905801  0.10681211], Label: 3
Cluster Center: [ 0.90899637  0.02635062 -0.41597025], Label: 1

500 random samples on 4 clusters with constrained uniform distribution
Cluster Center: [-0.05344069 -0.91613807  0.3972847 ], Label: 1
Cluster Center: [ 0.6796575   0.39400079 -0.61873181], Label: 2
Cluster Center: [ 0.51799972 -0.67116261 -0.530299 ], Label: 0
Cluster Center: [ 0.49290501 -0.45790221 -0.73984473], Label: 3

Note: The labels are arbitrary indices assigned to each cluster for the purpose of visualization and validation against true labels.
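Since these indices are arbitrary, validating against ground truth requires matching each predicted index to a true label. A small, hypothetical helper (using SciPy, which is installed alongside scikit-learn) does this via optimal assignment:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_labels(true_labels: np.ndarray, pred_labels: np.ndarray, k: int) -> dict:
    # Contingency table: rows = predicted cluster index, columns = true label
    counts = np.zeros((k, k))
    for p, t in zip(pred_labels, true_labels):
        counts[p, t] += 1.0
    # Maximize the total overlap (minimize negated counts)
    rows, cols = linear_sum_assignment(-counts)
    return dict(zip(rows.tolist(), cols.tolist()))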

References



--------------------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning", Packt Publishing ISBN 978-1-78712-238-3

Appendix

Here is the list of published articles related to geometric learning:

Keywords: geometriclearning, riemanngeometry, manifold, ai, python, geomstats, Liegroups, kmeans

 

Wednesday, October 10, 2018

K-Means Clustering in Java: Components

Target audience: Advanced
Estimated reading time: 5'

K-means clustering stands as a prominent unsupervised learning technique, aiming to classify unlabelled data into distinct categories. Its primary objective is to identify inherent groupings within the dataset. To achieve this, the algorithm operates in cycles, designating each data entry to a specific group based on its defining features.

This introductory piece in our series delves into the implementation of K-means' fundamental components.


Table of contents

       Overview
       Distances and observations
       Clustering

Overview

Among the clustering methods developed over the years, from spectral clustering, non-negative matrix factorization, and Canopy to hierarchical and K-means clustering, the K-means algorithm is by far the easiest to implement. This simplicity comes at a high price in terms of scalability and even reliability. However, as an unsupervised learning technique, K-means is still valuable for reducing the number of model features or detecting anomalies.

The objective is to classify observations or data points by groups that share common attributes or features. The diagram below illustrates the clustering of observations (x,y) for a simple 2-feature model.



Each cluster has a mean or centroid \(m = (m_{0}, ..., m_{n})\). First we need to define the distance between an observation \(x = (x_{0}, ..., x_{n})\) and a centroid m. The Manhattan and Euclidean distances are respectively defined as: \[d_{M} = \sum_{i=0}^{n}\left | x_{i} - m_{i}\right |\,\,\,,\,\,d_{E}= \sqrt{\sum_{i=0}^{n} (x_{i} - m_{i})^{2}}\] The loss function for N clusters Cj is defined by \[W(C)=\frac{1}{2}\sum_{k=0}^{N-1}\sum_{c_{i}=k}\sum_{C_{j}} d(x_{i},x_{j})\] The goal is to find the centroids m and clusters C that minimize this loss, each observation being assigned to the cluster with the closest centroid: \[C^{*}\left (i \right ) = arg\min _{k\in [0,N-1]}d (x_{i}, m_{k})\]

Note: For the sake of readability of the implementation of algorithms, all non-essential code such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and imports is omitted.


Distances and observations

First we need to define the distance between each observation and the centroid of a cluster. The class hierarchy related to the distance can be implemented as nested classes, as there is no reason to "expose" it to client code. The interface, Distance, defines the signature of the computation method. For the sake of simplicity, the sample code implements only the Manhattan and Euclidean distances.

protected interface Distance {
    public double compute(double[] x, Centroid centroid);
}

    // Definition of the Manhattan distance d(x,m) = sum |x[i] - m[i]|
protected class ManhattanDistance implements Distance {
   
   public double compute(double[] x, Centroid centroid) {
       double sum = 0.0, xx = 0.0;
       for( int k = 0; k< x.length; k++) {
           xx = x[k] - centroid.get(k);
           if( xx < 0.0) {
              xx = -xx;
           }
           sum += xx;
       }
       return sum;
    }
}

  // Definition of the Euclidean distance d(x,m) = sqrt(sum (x[i] - m[i])^2)
protected class EuclideanDistance implements Distance {
  
    public double compute(double[] x, Centroid centroid) {
        double sum = 0.0, xx = 0.0;
        for( int k = 0; k < x.length; k++) {
            xx = x[k] - centroid.get(k);
            sum += xx*xx;
        } 
        return Math.sqrt(sum);
    } 
}

Next, we define an observation (or data point) as a vector or array of floating point values in our example. An observation can support heterogeneous types (boolean, integer, floating point, ...) as long as they are normalized to [0, 1]. In our example we simply normalize over the maximum values for all the observations.

public final class Observation {

    // Euclidean distance, shared between all the instances by default
   private static Distance metric = new EuclideanDistance();

   public static void setMetric(final Distance distance) {
      metric = distance;
   }
 
   private double[] _x  = null;
   private int  _index  = -1;

   public Observation(double[] x, int index) { 
       _x = x; 
       _index = index; 
   }

    // Access the value of the k-th feature of this observation
   public double get(int k) {
       return _x[k];
   }
   
    // Compute the distance between this observation and a centroid
   public double computeDistance(final Centroid centroid) {
       return metric.compute(_x, centroid);
   }

    // Normalize the values of the data point over the maximum values
   public void normalize(double[] maxValues) {
      for( int k = 0; k < _x.length; k++) {
         _x[k] /= maxValues[k];
      }
   }
}


Clustering

The centroid for each cluster is computed iteratively to reduce the loss function. The centroid values are computed as the mean of each feature across all the observations in the cluster. The method Centroid.compute accumulates the feature values of the observations belonging to a cluster, then computes the centroid values _x by normalizing with the number of points.

protected class Centroid {
   private double[] _x = null;       
       
   protected Centroid() {}
   protected Centroid(double[] x) {
       _x = Arrays.copyOf(x, x.length);
   }

    // Access the value of the k-th feature of this centroid
   protected double get(int k) {
       return _x[k];
   }

    // Compute the centroid values _x as the mean of the observations
   protected void compute(final List<Observation> observations)  {
       double[] x = new double[_x.length];
       Arrays.fill(x, 0.0);
           
      for( Observation point : observations ) {
         for(int k = 0; k < x.length; k++) {
            x[k] += point.get(k);
         }
      }
    
      int numPoints = observations.size();
      for(int k = 0; k < x.length; k++) {
         _x[k] = x[k]/numPoints;
      }
   }
}

A cluster, KmeansCluster, is defined by its label (_index in this example), a centroid _centroid, the list of observations _observations it contains, and the current loss associated with the cluster (the sum of the distances between all observations and the centroid).
The cluster behavior is defined by the following methods:
  • computeCentroid: recompute the centroid from the observations in this cluster, then update the cluster loss.
  • attach: attach or add a new observation to this cluster.
  • detach: remove an existing observation from this cluster.

public final class KmeansCluster {
   private int       _index   = -1;
   private Centroid  _centroid  = null; 
   private double    _sumDistances  = 0.0;
   private List<Observation> _observations = new ArrayList<Observation>();

   public KmeansCluster(int index) {
      _index = index;
   }

   public void computeCentroid() {
      _centroid.compute( _observations );
      computeSumDistances();
   }

     // Attach a new observation to this cluster.
   public void attach(final Observation point) { 
      _observations.add(point);
      computeSumDistances();
   }

   public void detach(final Observation point) {
      _observations.remove(point);
      computeSumDistances();
   }
           
   private void computeSumDistances() { 
      _sumDistances = 0.0;     
      for( Observation point : _observations) {
        _sumDistances += point.computeDistance(_centroid);
      }
   }
      //....
}

Finally, the clustering class implements the training and run-time classification. The train method iterates across all the clusters and all the observations, reassigning each observation to the closest cluster. The iterative computation ends when either the loss value converges or the maximum number of iterations is reached.

If the algorithm uses K clusters with M observations of N variables, the cost of creating the clusters is K*M*N distance evaluations per iteration. If the algorithm converges after T iterations, the overall cost is T*K*M*N. For instance, the K-means classification of 20,000 observations with 25 dimensions, using 10 clusters and converging after 50 iterations, requires 50*10*20,000*25 = 250,000,000 evaluations! The constructor creates the clustering algorithm with a predefined number of clusters, K, and a set of observations.
The method getCentroids retrieves the current list of centroids (the values of the centroid vectors).

public final class KmeansClustering { 
   private KmeansCluster[] _clusters = null;
   private Observation[] _obsList = null; 
   private double _totalDistance  = 0.0;
   private Centroid[] _centroids = null;
   
   public KmeansClustering(int numClusters, final Observation[] obsList) {   
      _clusters = new KmeansCluster[numClusters];
      for (int i = 0; i < numClusters; i++) {
         _clusters[i] = new KmeansCluster(i);
      }
      _obsList = obsList;
   }

 
   public final List<double[]> getCentroids() {
       List<double[]> centroidDataList = null;

        if( _clusters != null && _clusters.length > 0) {
           centroidDataList = new LinkedList<double[]>();
           for( KmeansCluster cluster : _clusters) {
               centroidDataList.add(cluster.getCentroid().getX());
           }
       }
       return centroidDataList;
   }
}
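To tie the components together, here is a hypothetical usage sketch (the observation values are illustrative only; the train method itself is covered in the next article):

// Illustrative only: assemble normalized observations and the clustering components
double[][] rawData = { {2.0, 1.0}, {2.5, 1.2}, {8.0, 7.5}, {8.2, 7.9} };
double[] maxValues = { 8.2, 7.9 };

Observation[] observations = new Observation[rawData.length];
for (int i = 0; i < rawData.length; i++) {
    observations[i] = new Observation(rawData[i], i);
    observations[i].normalize(maxValues);    // scale each feature to [0, 1]
}

// Two clusters; training and classification are described in the next article
KmeansClustering clustering = new KmeansClustering(2, observations);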

The next article, K-means Clustering in Java: Classification, describes the implementation of the training and classification tasks.

Thank you for reading this article. For more information ...

References

  • The Elements of Statistical Learning   - T. Hastie, R.Tibshirani, J. Friedman  - Springer 2001
  • Machine Learning: A Probabilistic Perspective 11.4.2.5 K-means algorithm - K. Murphy - MIT Press 2012
  • Pattern Recognition and Machine Learning: Chap 9 "Mixture Models and EM: K-means Clustering" C.Bishop - Springer Science 2006 
  • github.com/patnicolas


---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3