Target audience: Advanced
Estimated reading time: 6'
Newsletter: Geometric Learning in Python
Traditional linear models in machine learning, such as logistic regression, struggle to grasp the complex characteristics of data in very high dimensions.
Symmetric Positive Definite manifolds can improve the accuracy of logistic regression by providing a richer feature representation in a lower-dimensional space.
What you will learn: How to implement and validate a binary logistic regression classifier on SPD manifolds using the affine-invariant and log-Euclidean metrics.
Notes:
- Environments: Python 3.10.10, Geomstats 2.7.0, Scikit-learn 1.4.2
- This article assumes that the reader is somewhat familiar with differential and tensor calculus [ref 1]. Please refer to our previous articles related to geometric learning [ref 2, 3, 4].
- Source code is available at Github.com/patnicolas/Data_Exploration/manifolds
- To enhance the readability of the algorithm implementations, we have omitted non-essential code elements like error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.
Introduction
This article is the eighth installment of our ongoing series focused on geometric learning. In this installment, we utilize the Geomstats Python library [ref 5].
Note: Summaries of my earlier articles on this topic can be found in the Appendix
The primary goal of learning Riemannian geometry is to understand and analyze the properties of curved spaces that cannot be described adequately using Euclidean geometry alone.
Using logistic regression for classification on low-dimensional data manifolds offers several benefits:
- Simplicity and interpretability: The model provides clear insights into the relationship between the input features and the probability of belonging to a certain class.
- Efficiency: On low-dimensional manifolds, logistic regression is computationally efficient.
- Good performance in linearly separable cases: The logistic regression performs exceptionally well if the data in the low-dimensional manifold is linearly separable.
- Robustness to overfitting: In lower-dimensional spaces, the risk that a simple model such as logistic regression overfits is generally reduced.
- Support for non-linear boundaries: Although linear, logistic regression can separate classes on a low-dimensional manifold whose decision boundary is non-linear when viewed in the ambient Euclidean space.
This article relies on the manifold of Symmetric Positive Definite (SPD) matrices for evaluation. We will introduce, review or describe:
- Logistic regression as a binary classifier
- SPD matrices
- Logarithms and exponential maps on manifolds introduced in (Geometric Learning in Python: Manifolds)
- Riemannian metrics associated to SPD
- Implementation of binary logistic regression using Scikit-learn and Geomstats Python libraries
- Verification using randomly generated SPDs and cross-validation.
Logistic regression on manifolds
Logistic regression
Let's review the ubiquitous binary logistic regression. For a set of two classes C = {0, 1}, the probability of predicting class k given a feature vector x and model weights w is defined through the sigmoid transform, sigm:\[p(C=k|\mathbf{x},\mathbf{w})=p^k(1-p)^{1-k}\ \ \ \ p =sigm(w_{0}+\mathbf{w}^{T}\mathbf{x})= \frac{1}{1+e^{-w_{0}-\mathbf{w}^{T}\mathbf{x}}}\]The binary classifier is then defined as C := 1 <=> p(C=1|x, w) >= 0.5 and C := 0 <=> p(C=1|x, w) < 0.5.
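Translated into plain NumPy, the decision rule looks as follows (a minimal sketch; sigmoid and predict are illustrative helpers, not the article's implementation):

import numpy as np

def sigmoid(z: float) -> float:
    # Logistic transform: 1/(1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict(x: np.array, w: np.array, w0: float) -> int:
    # Class 1 if p(C=1|x, w) >= 0.5, class 0 otherwise
    return int(sigmoid(w0 + np.dot(w, x)) >= 0.5)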
For an introduction to basic logistic regression and its implementation for beginners, check out this detailed guide: Logistic Regression Explained and Implemented in Python (ref 6).
SPD manifolds
Let's introduce our manifold defined as the group of symmetric positive definite (SPD) matrices.
SPD matrices are used in a wide range of applications:
- Diffusion tensor imaging (analysis of diffusion of molecules and proteins)
- Brain connectivity
- Dimension reduction through kernels
- Robotics and dynamic systems
- Multivariate principal component analysis
- Spectral analysis and signal reconstruction
- Numerical methods for partial differential equations
- Financial risk management.
A square matrix A is symmetric if it is identical to its transpose: if aij are the entries of A, then aij = aji. This implies that A can be fully described by its upper triangular elements.
A square matrix A is positive definite if, for every non-zero vector b, the product b^T.A.b > 0.
If a matrix A is both symmetric and positive definite, it is referred to as a symmetric positive definite (SPD) matrix. This type of matrix is extremely useful and appears in various real-world applications. A prominent example in statistics is the covariance matrix, where each entry represents the covariance between two variables (with diagonal entries indicating the variances of individual variables). Covariance matrices are always positive semi-definite (meaning b^T.A.b >= 0), and they are positive definite when the covariance matrix has full rank, which occurs when each row is linearly independent from the others.
The collection of all SPD matrices of size n x n forms a smooth manifold.
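The covariance example can be checked numerically. A quick sanity check with NumPy (illustrative values, not part of the article's source code):

import numpy as np

samples = np.random.rand(100, 4)         # 100 observations of 4 variables
cov = np.cov(samples, rowvar=False)      # 4 x 4 covariance matrix, full rank almost surely

assert np.allclose(cov, cov.T)                  # symmetric: aij == aji
assert np.all(np.linalg.eigvalsh(cov) > 0.0)    # positive definite: all eigenvalues > 0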
Logarithmic map
As discussed in Geometric Learning in Python: Manifolds, the exponential and logarithmic maps and parallel transport are crucial to Riemannian approaches in machine learning. On a manifold, the distance dist(x, y) is measured along a geodesic, the generalization of a straight line in Euclidean space.
Consider the vector xy from x to y in the tangent space at x: the logarithmic map projects a point of the manifold onto the tangent space, while the exponential map projects a tangent vector back onto the manifold.
The list below shows the equivalent operations in Euclidean space and on a manifold:
- Subtraction: xy = y - x (Euclidean) vs. xy = log_x(y) (manifold)
- Addition: y = x + xy (Euclidean) vs. y = exp_x(xy) (manifold)
- Distance: dist(x, y) = ||y - x|| (Euclidean) vs. dist(x, y) = ||log_x(y)|| (manifold)
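These two maps can be illustrated with Geomstats (a minimal sketch; the matrices a and b are arbitrary SPD values chosen for the example):

import numpy as np
from geomstats.geometry.spd_matrices import SPDMatrices

spd = SPDMatrices(n=2)    # equipped with its default (affine-invariant) metric

a = np.array([[2.0, 0.0], [0.0, 1.0]])    # base point on the manifold
b = np.array([[3.0, 1.0], [1.0, 2.0]])    # target point on the manifold

tgt_vec = spd.metric.log(point=b, base_point=a)    # 'subtraction': lift b to the tangent space at a
b_back = spd.metric.exp(tgt_vec, base_point=a)     # 'addition': project back onto the manifold
assert np.allclose(b, b_back)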
In the case of the binary logistic regression, the prediction on the manifold is defined through the exponential map exp_x: \[p(y|\mathbf{x}, \mathbf{w})=exp_{x}\left (y,\ sigm(w_{0}+\mathbf{w}^{T}\mathbf{x}) \right )\]Let's select two Riemannian metrics for the SPD manifold [ref 7].
Affine invariant Riemannian metric
For any two symmetric positive definite (SPD) matrices 𝐴 and 𝐵, the Affine Invariant Riemannian Metric (AIRM) between them is defined as: \[d(A, B)=\left \| log\left ( A^{-1/2}BA^{-1/2} \right ) \right \|_F\]
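This distance can be transcribed directly with SciPy's matrix functions (a sketch for illustration, not the Geomstats implementation):

import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def airm_distance(a: np.array, b: np.array) -> float:
    # d(A, B) = || log(A^-1/2 . B . A^-1/2) ||_F
    a_inv_sqrt = fractional_matrix_power(a, -0.5)
    return np.linalg.norm(logm(a_inv_sqrt @ b @ a_inv_sqrt), 'fro')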
Log-Euclidean Riemannian metric
Given a point s on the SPD manifold and its tangent space TsSPD, the logarithmic and exponential maps can be expressed as: \[\begin{matrix} log_{s}(f(s))=D_{log(s)}exp\left ( log(f(s)) - log(s) \right )\\ exp_{s}\left ( T_{f(s)} \right )=exp\left ( log(s)+D_{s}log.T_{f(s)} \right ) \end{matrix}\]
Fig. 1 Illustration of the log-Euclidean metric for SPD matrices
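Under the log-Euclidean metric, the geodesic distance between two SPD matrices reduces to the Euclidean (Frobenius) distance between their matrix logarithms, as sketched below (illustrative, not the Geomstats implementation):

import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(a: np.array, b: np.array) -> float:
    # d(A, B) = || log(A) - log(B) ||_F
    return np.linalg.norm(logm(a) - logm(b), 'fro')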
Implementation
Setup
First, let's create a data class, SPDTestData, that encapsulates the training features and labels. This class will be used to validate our implementation of logistic regression on SPD manifolds using various metrics, as well as in Euclidean space.
@dataclass
class SPDTestData:
    X: np.array    # Features: shape (n_samples, n_features, n_features)
    y: np.array    # Labels: shape (n_samples,)

    def flatten(self) -> None:
        # Vectorize each n x n matrix into a row of length n*n
        shape = self.X.shape
        self.X = self.X.reshape(shape[0], shape[1]*shape[2])
The flatten method vectorizes each two-dimensional SPD matrix in the training set so it can be processed by the Scikit-learn cross-validation function.
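For instance, with hypothetical dimensions matching the evaluation below (the X entries here are random, not SPD, since flatten only reshapes):

import numpy as np

spd_data = SPDTestData(X=np.random.rand(6000, 16, 16), y=np.random.randint(0, 2, 6000))
spd_data.flatten()
print(spd_data.X.shape)    # (6000, 256)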
We wrap the generation of random data, the creation of SPD manifolds, and the evaluation of the various metrics in the class BinaryLRManifold.
class BinaryLRManifold(object):
    def __init__(self, n_features: int, n_samples: int):
        self.n_features = n_features
        self.n_samples = n_samples

    def generate_random_data(self) -> SPDTestData:
        # Random binary labels and randomly generated SPD matrices
        y = np.stack([np.random.randint(0, 2) for _ in range(self.n_samples)])
        X = np.stack([self.__generate_spd_data() for _ in range(self.n_samples)])
        return SPDTestData(X, y)
The labeled training set is generated with NumPy's random number generator.
Data generation
The method __generate_spd_data creates a symmetric n_features x n_features matrix by averaging a random matrix with its transpose, then shifts its spectrum by adding a multiple of the identity so that all eigenvalues are strictly positive.
def __generate_spd_data(self) -> np.array:
    epsilon = 1e-6
    # Random square matrix, symmetrized by averaging with its transpose
    mat = np.random.rand(self.n_features, self.n_features)
    mat = (mat + mat.T)/2
    # Shift the spectrum so the smallest eigenvalue is strictly positive
    eigenvalues = np.linalg.eigvals(mat)
    min_eigen = np.min(eigenvalues)
    if min_eigen <= 0:
        mat += (np.eye(self.n_features)*(-min_eigen + epsilon))
    return mat
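As a quick sanity check (illustrative, not part of the original source), we can verify that the generated matrices are indeed symmetric positive definite:

import numpy as np

data = BinaryLRManifold(n_features=4, n_samples=16).generate_random_data()
for mat in data.X:
    assert np.allclose(mat, mat.T)                  # symmetric
    assert np.all(np.linalg.eigvalsh(mat) > 0.0)    # strictly positive eigenvalues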
Manifold generation
Creating an SPD manifold is straightforward:
- Instantiate the Geomstats SPDMatrices class
- Equip it with a Riemannian metric.
def create_spd(self, riemannian_metric: RiemannianMetric) -> SPDMatrices:
    # Instantiate the manifold without a default metric, then equip it
    spd = SPDMatrices(self.n_features, equip=False)
    spd.equip_with_metric(riemannian_metric)
    return spd
The following code snippet creates two SPD manifolds, one equipped with the affine-invariant metric and the other with the log-Euclidean Riemannian metric.
from geomstats.geometry.spd_matrices import SPDAffineMetric, SPDLogEuclideanMetric

n_samples = 10000
n_features = 16
binary_lr_on_spd = BinaryLRManifold(n_features, n_samples)
# SPD manifold equipped with the affine-invariant metric
spd_affine = binary_lr_on_spd.create_spd(SPDAffineMetric)
# SPD manifold equipped with the log-Euclidean metric
spd_log_euclidean = binary_lr_on_spd.create_spd(SPDLogEuclideanMetric)
Validation
The initial phase involves verifying our implementation of the metrics on SPD manifolds. This is achieved by computing the cross-validation score for SPD matrices with random values in [0, 1] and checking that the mean score approximates 0.5, the expected accuracy when labels are assigned at random.
Euclidean space
We utilize the LogisticRegression class and the cross_validate method from Scikit-learn, once the contents of each matrix have been flattened into a vector.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

@staticmethod
def evaluate_euclidean(spd_data: SPDTestData) -> Dict[AnyStr, np.array]:
    model = LogisticRegression()
    # Reduce each matrix to a vector for the Scikit-learn cross-validation
    spd_data.flatten()
    return cross_validate(model, spd_data.X, spd_data.y)
The test code uses a training set of 6,000 samples of 16 x 16 SPD matrices (256 features once flattened). The binary logistic regression in Euclidean space has a mean cross-validation score of 0.487 instead of the expected 0.5.
n_samples = 6000
n_features = 16
binary_lr_on_spd = BinaryLRManifold(n_features, n_samples)
train_data = binary_lr_on_spd.generate_random_data()
print(f'Training data shape: {train_data.X.shape}')

result_dict = binary_lr_on_spd.evaluate_euclidean(train_data)
mean_test_score = np.mean(result_dict["test_score"])
print(f'Cross validation: {result_dict["test_score"]} with mean: {mean_test_score}')
Output
Cross validation: [0.478 0.513 0.497 0.474 0.471] with mean: 0.487
Classification on SPD manifold
To utilize Scikit-learn's cross-validation features, each SPD matrix must first be projected onto the tangent space (via the logarithmic map) before applying logistic regression. These two steps are chained in a Scikit-learn pipeline.
from geomstats.learning.preprocessing import ToTangentSpace
from sklearn.pipeline import Pipeline

@staticmethod
def evaluate_spd(spd_data: SPDTestData, spd_matrices: SPDMatrices) -> Dict[AnyStr, np.array]:
    # Project each SPD matrix onto the tangent space, then classify
    pipeline = Pipeline(
        steps=[('features', ToTangentSpace(space=spd_matrices)),
               ('classifier', LogisticRegression())]
    )
    return cross_validate(pipeline, spd_data.X, spd_data.y)
We employ the same training setup as in the Euclidean evaluation, but apply the log-Euclidean (SPDLogEuclideanMetric) and affine-invariant (SPDAffineMetric) metrics. The mean cross-validation scores are 0.492 and 0.500 respectively, both closer to the expected 0.5 than the Euclidean result.
n_samples = 6000
n_features = 16
binary_lr_on_spd = BinaryLRManifold(n_features, n_samples)
train_data = binary_lr_on_spd.generate_random_data()
spd = binary_lr_on_spd.create_spd(SPDLogEuclideanMetric)
result_dict = binary_lr_on_spd.evaluate_spd(train_data, spd)
mean_test_score = np.mean(result_dict["test_score"])
print(f'Cross validation: {result_dict["test_score"]} with mean: {mean_test_score}')
Output for the log-Euclidean metric
Cross validation: [0.495 0.504 0.498 0.491 0.470] with mean: 0.492
Output for the affine-invariant metric
Cross validation: [0.514 0.490 0.490 0.490 0.504] with mean: 0.500
References
--------------------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning.
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning", Packt Publishing ISBN 978-1-78712-238-3 and Geometric Learning in Python Newsletter on LinkedIn.
Appendix
Here is the list of published articles related to geometric learning:
- Foundation of Geometric Learning introduces differential geometry as applied to machine learning and its basic components.
- Differentiable Manifolds for Geometric Learning describes manifold components such as tangent vectors and geodesics, with a Python implementation for the hypersphere using the Geomstats library.
- Intrinsic Representation in Geometric Learning reviews the various coordinate systems using extrinsic and intrinsic representations.
- Vector and Covector Fields in Python describes vector and covector fields, with Python implementations in 2- and 3-dimensional spaces.
- Geometric Learning in Python: Vector Operators illustrates the differential operators gradient, divergence, curl, and Laplacian using the SymPy library.
- Functional Data Analysis in Python describes the key elements of non-linear functional data analysis to analyze curves, images, or functions in very high-dimensional spaces.
- Riemann Metric & Connection for Geometric Learning reviews the Riemannian metric tensor, Levi-Civita connection, and parallel transport on the hypersphere.
- Riemann Curvature in Python describes the intricacies of the Riemannian curvature tensor and its implementation in Python using the Geomstats library.
- K-means on Riemann Manifolds compares the implementation of the k-means algorithm in Euclidean space using Scikit-learn with that on the hypersphere using Geomstats.
#geometriclearning #riemanngeometry #manifold #ai #python #geomstats #Liegroups #kmeans