
Monday, December 16, 2024

2024 Reviews of Papers on Geometric Learning

  Target audience: Beginner
Estimated reading time: 8'






  1. ChatGPT for Computational Topology
  2. An introduction to Topological Data Analysis
  3. Synthetic Data Generation and Deep Learning for the Topological Analysis of 3D Data
  4. Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles
  5. Autoencoders for discovering manifold dimension and coordinates in data from complex dynamical systems
  6. Reliable Malware Analysis and Detection using Topology Data Analysis
  7. Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning
  8. Categorical Foundations of Explainable AI
  9. Introduction to Geometric Learning in Python with Geomstats
  10. Machine Learning Algebraic Geometry for Physics
  11. Deep Learning in Asset Pricing
  12. Intrinsic and extrinsic deep learning on manifolds
  13. Kalman Filters on Differentiable Manifolds
  14. Manifold Matching via Deep Metric Learning for Generative Modeling
  15. Machine learning a manifold 
  16. A Geometric Perspective on Variational Autoencoders
  17. Deep Hyperspherical Learning
  18. Learning Weighted Submanifolds with Variational Autoencoders and Riemannian Variational Autoencoders
  19. Learning Manifold Dimensions with Conditional Variational Autoencoders
  20. Variational Transformer Autoencoder with Manifolds Learning
  21. Riemannian Score-Based Generative Modelling
  22. Riemannian Diffusion Models
  23. Convolutional Neural Networks on Manifolds: From Graphs and Back
  24. Riemannian Residual Neural Networks
  25. A singular Riemannian geometry approach to Deep Neural Networks
  26. Transformer with Hyperbolic Geometry
  27. Deep Extrinsic Manifold Representation for Vision Tasks
  28. Pullback Flow Matching on Data Manifolds
  29. A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold



1. ChatGPT for Computational Topology
The purpose of the study is to enable ChatGPT to support theoretical mathematicians in the creation of algorithms and code, bypassing the need for deep programming language expertise.
Through dialogue with ChatGPT, mathematicians can impart their knowledge of mathematical concepts and theories in simple English. ChatGPT would then use this guidance to convert the mathematical theories into executable algorithms and code, eliminating the necessity for mathematicians to be versed in the intricacies of coding.
 
The paper concentrates on directing ChatGPT to produce Python code, utilizing the 'networkx' and 'scipy' libraries, for several tasks:
  • Computing Betti numbers for a simplicial complex
  • Constructing a Vietoris-Rips complex
  • Creating a boundary matrix for a specified simplicial complex
  • Formulating a graph Laplacian matrix (see the sketch below)
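
As an illustration of the first and last tasks, here is a minimal sketch, independent of the paper's code, that builds a graph Laplacian with networkx and recovers Betti-0 (the number of connected components) from its null space; the toy graph is my own choice:

import networkx as nx
import numpy as np

# Two disjoint triangles: expect Betti-0 = 2 (two connected components)
G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])

# Graph Laplacian L = D - A, returned by networkx as a SciPy sparse matrix
L = nx.laplacian_matrix(G).toarray().astype(float)

# Betti-0 equals the dimension of the Laplacian's null space
eigenvalues = np.linalg.eigvalsh(L)
betti_0 = int(np.sum(np.isclose(eigenvalues, 0.0)))
print(betti_0)  # 2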


2. An introduction to Topological Data Analysis
This article provides a comprehensive yet concise guide to applying Topological Data Analysis (TDA) in data analysis, offering just enough information without overwhelming the reader. TDA is a burgeoning field that utilizes novel tools from algebraic topology and computational geometry to extract significant features from complex data sets. 

What I appreciate about the paper:
  • Clear definition of the TDA pipeline, which interestingly doesn't rely on Euclidean metrics.
  • Its accessibility to non-experts, introducing fundamental TDA concepts like metric spaces, simplicial complexes, and persistent homology in an understandable manner (although I feel the sections on applying persistent homology to data analysis, feature engineering, and optimizing machine learning architectures could have been more detailed and extensive).
  • The way it illustrates the concept of filtration with practical examples and visual aids
  • The application of persistent homology to proteins, demonstrated with Python code and the Gudhi library (a minimal Gudhi example follows).
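
Here is a minimal Gudhi sketch, not taken from the article, of a Vietoris-Rips filtration and its persistence diagram; the noisy circle (one expected H1 feature) is my own toy example:

import numpy as np
import gudhi

# Points sampled from a noisy circle: expect one prominent H1 (loop) interval
theta = np.random.default_rng(0).uniform(0.0, 2 * np.pi, 60)
points = np.column_stack((np.cos(theta), np.sin(theta)))
points += np.random.default_rng(1).normal(scale=0.05, size=points.shape)

# Vietoris-Rips filtration up to 2-simplices
rips = gudhi.RipsComplex(points=points, max_edge_length=1.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# Persistence pairs: (dimension, (birth, death))
diagram = simplex_tree.persistence()
print([pair for dim, pair in diagram if dim == 1])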


3. Synthetic Data Generation and Deep Learning for the Topological Analysis of 3D Data
Have you ever pondered whether neural networks can be used to determine the manifold topology of 3D data?

This paper provides insights into this query. It introduces Persistent Homology and Betti numbers as tools for comprehending the topology of 3D objects. The paper explains how convolutional neural networks can minimize the computational resources typically needed for applying persistent homology. 

Additionally, it explores the application and comparison of various networks for semantic segmentation of topology data, offering an enhanced TDA method. This includes: (1) A transformer block featuring self-attention for 3D point clouds, and (2) a manifold curvature estimator, both yielding similar outcomes.



4. Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles
Much effort in the field of machine learning is devoted to creating increasingly complex models. However, it is becoming evident that the caliber of training and validation data is crucial, aligning with the concept of Data-centric AI.
 
The paper discusses how defining symmetry properties in the hidden layers of a deep learning model can streamline its training and make it easier to interpret. It introduces a neural network architecture that detects and classifies symmetric patterns in a labeled training dataset. 

This method utilizes group theory to:
  1. Generate symmetry transformations that maintain the integrity of labeled data.
  2. Pinpoint infinitesimal transformations, known as symmetry generators.
  3. Modify the machine learning model's loss function to uncover sub-algebras (Lie algebra) of the symmetry group, which are formed as linear combinations of the symmetry generators.
  4. Ascertain the complete symmetry group and Lie subgroups that optimize the number of generators.
Note: The paper presupposes a fundamental understanding of group theory, homogeneous spaces, and manifolds. It does not include Python code.
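
Since the paper includes no code, here is an independent toy sketch of steps 1 to 3: learning an infinitesimal generator G such that the transformation x → x + εGx preserves the labels. The rotation-invariant dataset, network-free setup, and loss weights are my own simplifications:

import torch

# Toy dataset: labels y = x1^2 + x2^2 are invariant under rotations, i.e. SO(2)
x = torch.randn(512, 2)
y = (x ** 2).sum(dim=1)

# Learnable candidate generator
G = torch.nn.Parameter(0.1 * torch.randn(2, 2))
optimizer = torch.optim.Adam([G], lr=1e-2)
eps = 1e-2

for _ in range(2000):
    x_t = x + eps * x @ G.T                            # infinitesimal transformation
    y_t = (x_t ** 2).sum(dim=1)
    invariance_loss = ((y_t - y) / eps).pow(2).mean()  # labels must not change
    normalization = (G.norm() - 1.0).pow(2)            # exclude the trivial G = 0
    loss = invariance_loss + normalization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(G.detach())  # converges to an antisymmetric matrix: a generator of so(2)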



5. Autoencoders for discovering manifold dimension and coordinates in data from complex dynamical systems
Reducing dimensions, whether implicitly or explicitly, is key to enhancing the prediction accuracy of complex models that handle continuous data. The authors train an autoencoder framework that incorporates regularization, using multiple internal linear layers and an L2 (weight-decay) penalty. The precise dimensionality of the feature space is determined by a singular value decomposition of the covariance matrix of the latent Z space of the autoencoder, an approach that combines the strengths of nonlinear (autoencoder) and linear (SVD) dimension reduction.
 
The concept was put to the test for identifying 3-dimensional (and in another case, 8-dimensional) manifolds within a 4-dimensional (and respectively, infinite-dimensional) Euclidean space. This was done using data derived from various nonlinear partial differential equations. The outcomes are remarkable: the proposed method uniquely succeeded in pinpointing the top 3 (or 8) singular/eigenvalues, outperforming both PCA and generic Autoencoders.

The reader should have basic knowledge of unsupervised learning and dimension reduction.

The Python code is available on GitHub: https://github.com/mdgrahamwisc/IRMAE_WD
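
The dimension-estimation step can be sketched as follows; the rank-3 toy latent codes stand in for the trained autoencoder's latent space (this is not the repository's code):

import numpy as np

# Toy latent codes Z of shape (n_samples, latent_dim) with intrinsic rank 3
rng = np.random.default_rng(0)
Z = rng.normal(size=(5000, 3)) @ rng.normal(size=(3, 16))

# Singular values of the covariance matrix of the latent space
cov = np.cov(Z, rowvar=False)
singular_values = np.linalg.svd(cov, compute_uv=False)

# Estimated manifold dimension: number of singular values above the drop-off
normalized = singular_values / singular_values.max()
print(int(np.sum(normalized > 1e-6)))  # 3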


6. Reliable Malware Analysis and Detection using Topology Data Analysis
This paper introduces and evaluates three topological data analysis (TDA) techniques to efficiently analyze and detect complex malware signatures. TDA is known for its robustness to noise and imbalanced datasets.

The paper first introduces the basic formulation of metric spaces, simplicial complexes, and homology before describing the 3 techniques for analysis.
  • TDA mapper
  • Persistence homology
  • Topological model analysis tool
These TDA techniques are compared to traditional (unsupervised) feature engineering algorithms such as PCA or t-SNE, using a false positive rate and a detection rate. The features are then used as input to various traditional ML classifiers (SVM, Logistic Regression, Random Forest, Gradient Boosting).
The paper also covers CPU and memory usage analysis.
 
Conclusion:
TDA mapper with PCA works well for extracting malware clusters, while t-SNE helps identify overlapping malware characteristics.
In terms of detection rate in supervised classification, Random Forest, XGBoost, and GBM achieve a detection rate close to 100%.
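
The mapper-with-PCA combination can be sketched with the KeplerMapper library; the library choice and the blob data standing in for malware feature vectors are my own assumptions, not the paper's implementation:

import kmapper as km
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Toy stand-in for malware feature vectors (n_samples, n_features)
X, _ = make_blobs(n_samples=300, n_features=20, centers=4, random_state=1)

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))  # PCA as the lens

# Mapper graph: cover the lens with overlapping cells, cluster each cell in X
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=10, perc_overlap=0.3),
                   clusterer=DBSCAN(eps=7.0, min_samples=3))
print(len(graph["nodes"]), "nodes in the Mapper graph")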
 
Basic understanding of machine learning and algebraic topology is required to benefit from the paper.



7. Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning
I discovered a remarkably thorough review of all the mathematics essential for machine learning. This review is well-articulated, making it accessible even for those less enthusiastic about mathematics (no need to dust off your dissertation). Spanning over 2100 pages, the paper encompasses a wide range of topics, from fundamental linear algebra, dual spaces, and eigenvectors to more advanced subjects like tensor algebra, affine geometry, topology, homologies, kernel models, and differential forms.

The structure of the paper allows for independent topic exploration. For instance, you can jump right into the chapter on soft-margin support vector machines without needing to go through the preceding chapters. This makes the paper an invaluable resource for anyone seeking a comprehensive mathematical foundation in machine learning.

The index and bibliography are both extensive and detailed. However, a minor drawback is that the only coding examples provided are in Matlab, with no Python implementations.


8. Categorical Foundations of Explainable AI
Category Theory to Explain Model Prediction
Explainable AI (XAI), a vital research area addressing ethical and legal AI issues, still lacks a solid mathematical foundation. Category theory offers a solution.

This paper starts with an overview of Category theory basics, such as feedback monoids and cartesian streams. It then presents frameworks for Explaining Learning Agents and Agent translators. These concepts are applied to a basic multilayer perceptron using the ADAM optimizer. The study compares model-agnostic explaining formalisms with model-specific ones. It also introduces a forward-based explainer (relying on model parameters) and a backward-based explainer (utilizing the loss function and gradient).

Ultimately, the XAI framework provides a structured definition of explanation and lays the groundwork for classifying explanations both synthetically and semantically.



9. Introduction to Geometric Learning in Python with Geomstats
Linear models sometimes fall short when dealing with high-dimensional data found in areas like computer vision, molecular biology, or radar signal processing. This type of data typically lies on curved, differentiable spaces known as manifolds. Training and validating models on manifold data necessitates the use of differential geometry.

The paper highlights Geomstats, an open-source Python toolkit for handling data on non-linear manifolds. Included in the package are tutorials that combine theoretical concepts with practical Python implementations, such as:

  • Statistics and geometric statistics, demonstrated through hypersphere and Frechet mean examples (see the sketch below).
  • Techniques for managing data on manifolds, utilizing the exponential map (tangent vectors) and the Riemannian metric tensor.
  • Classifying symmetric positive definite matrices, which allows standard learning algorithms, like the k-nearest neighbors classifier, to be applied to non-linear manifolds.
  • Graph representations that involve embeddings in hyperbolic spaces.

This paper serves as an ideal entry point into the world of manifold learning, offering an easy-to-understand overview without requiring deep expertise in differential geometry.
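
A minimal sketch of the first tutorial's idea, assuming a recent Geomstats release (older versions pass metric=sphere.metric to the estimator):

from geomstats.geometry.hypersphere import Hypersphere
from geomstats.learning.frechet_mean import FrechetMean

# Ten random points on the unit sphere S^2
sphere = Hypersphere(dim=2)
points = sphere.random_uniform(n_samples=10)

# Frechet mean: the intrinsic (geodesic) counterpart of the Euclidean average
mean = FrechetMean(sphere)  # older Geomstats: FrechetMean(metric=sphere.metric)
mean.fit(points)
print(mean.estimate_)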

Source code available at github.com/geomstats/geomstats 

A quick tutorial is available at https://youtu.be/Ju-Wsd84uG0



10. Machine Learning Algebraic Geometry for Physics
This paper is a valuable contribution to any discussion or teaching on information geometry (see the reference below).
While some are accustomed to applying laws of Physics, like Partial Differential Equations, to restrict deep learning models in machine learning, the use of machine learning combined with algebraic/differential geometry to process large datasets produced by physics is relatively unusual.

The paper utilizes the Calabi-Yau manifold, derived from the string theory concept of a 10-dimensional spacetime landscape. It examines unsupervised models such as PCA, Topology Data Analysis, Clustering, and then explores neural networks used for data analysis on hypersurfaces. The authors delve into a variety of subjects including projecting to lower-dimensional spaces, Hilbert series, interactions, and equivalences of Branes, as well as cluster mutations.

The study concludes by discussing the optimal transport problem and Kahler geometry in the context of Generative Adversarial Networks.

 

Reference: K. Arwini, C.T.J. Dodson, Information Geometry: Near Randomness and Near Independence, Springer, 2008.



11. Deep Learning in Asset Pricing
Deep learning enhances dynamic asset pricing by streamlining the feature engineering process. This approach was tested in an empirical study spanning 40 years and involving 30,000 individual stocks.

The assessment and comparison of the various machine learning models were conducted through Monte Carlo simulations, with maximum a posteriori (MAP) estimation as the objective.

The findings indicate:
  • Predicting stock prices is challenged by significant issues related to distribution shift and concept drift.
  • Advanced deep learning models generally surpass traditional linear predictors in performance.
  • Among these, recurrent neural networks, especially LSTMs and GRUs with attention mechanisms, as well as transformers, demonstrate superior effectiveness.
  • The addition of skip connections in deep layers did not lead to performance enhancements.
  • Convolutional Neural Networks were found to be less effective than even simple feedforward neural networks.
A crucial takeaway is the importance of integrating domain-specific knowledge and financial theory into deep learning models to make them both more effective and interpretable.
 
Note: This paper is accessible to those without financial expertise and only a basic understanding of deep learning, as the models are explained in clear terms.


 
12. Intrinsic and extrinsic deep learning on manifolds
This paper introduces two types of general deep neural network architectures on manifolds:
 
Extrinsic deep neural network on manifolds
This architecture maintains the geometric characteristics of manifolds by employing an equivariant embedding of the manifold into a Euclidean space.
This method is applied in regression models or Gaussian processes on manifolds, where the idea is to construct the neural network based on the manifold's representation after embedding, while still maintaining its geometric properties. By adopting this strategy, it becomes possible to utilize stochastic gradient descent and backpropagation techniques from Euclidean space. This results in enhanced accuracy compared to conventional machine learning algorithms like SVM, random forest, etc.
 
Intrinsic deep neural network on manifolds
The objective is to embed the inherent geometric nature of Riemannian manifolds using exponential and logarithmic maps. This framework, which projects localized points from a Riemannian manifold onto a single tangent space, proves beneficial when embeddings cannot be determined. Each localized tangent space (or chart) is mapped (via exp/log functions) onto a neural network. This architectural approach achieves higher accuracy compared to deep models in Euclidean space and the Extrinsic architecture.
 
These two frameworks are assessed on 1) classifying health-related simulated datasets on a spherical manifold, and 2) handling symmetric positive semi-definite matrices.
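
To make the intrinsic chart idea concrete, here is a small sketch, independent of the paper, of the exp/log maps on the SPD manifold under the affine-invariant metric; the matrices are toy examples:

import numpy as np
from scipy.linalg import expm, logm, sqrtm, inv

def spd_log(base, x):
    """Affine-invariant log map: the tangent matrix at `base` pointing to `x`."""
    s = sqrtm(base)
    s_inv = inv(s)
    return s @ logm(s_inv @ x @ s_inv) @ s

def spd_exp(base, v):
    """Affine-invariant exp map: move from `base` along tangent matrix `v`."""
    s = sqrtm(base)
    s_inv = inv(s)
    return s @ expm(s_inv @ v @ s_inv) @ s

base = np.eye(3)                   # base point of the chart
x = np.diag([2.0, 1.0, 0.5])       # an SPD data point
v = spd_log(base, x)               # chart the point into the tangent space
print(np.allclose(spd_exp(base, v), x))  # True: exp inverts log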


13. Kalman Filters on Differentiable Manifolds
Even though data from real-world control applications are complex and typically reside on manifolds, the Kalman filter has traditionally been developed for Euclidean spaces. This paper presents a more adaptable approach than the widely utilized error-state extended Kalman filter for managing manifold constraints. It showcases a method that incorporates these constraints directly into the Kalman filtering process, using examples of 3-dimensional rotation and motion groups.

The structure of the paper is as follows:

  • A concise overview of differentiable manifolds, tangent spaces, and the exponential/logarithmic mappings.
  • A discussion on modeling the error state and the covariance matrix.
  • A detailed description of how to apply the predict and update steps of the Kalman filter from tangent spaces back to the manifold context (see the sketch below).
To demonstrate the practical application of the Error-state Kalman filter on manifolds, a controlled Lidar navigation system is examined.
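
A toy sketch of the manifold update step on SO(3), using SciPy's rotation exp/log maps; the measurement and the constant gain are placeholders, not the paper's filter:

import numpy as np
from scipy.spatial.transform import Rotation

# Current attitude estimate (a point on the SO(3) manifold)
R = Rotation.from_euler("xyz", [0.1, -0.2, 0.3])

# Innovation: rotation from prediction to measurement, mapped to the tangent space
R_meas = Rotation.from_euler("xyz", [0.12, -0.18, 0.31])
innovation = (R.inv() * R_meas).as_rotvec()   # log map

K = 0.5 * np.eye(3)                           # stand-in for the Kalman gain
correction = K @ innovation

# Retract the tangent-space correction back onto the manifold (exp map)
R_updated = R * Rotation.from_rotvec(correction)
print(R_updated.as_euler("xyz"))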
 
Note: The reader should have a foundational understanding of differential geometry and Lie algebra to fully grasp the content of this paper. 
I would recommend “The SO(3) and SE(3) Lie Algebras of Rigid Body Rotations and Motions and their Application to Discrete Integration, Gradient Descent Optimization, and State Estimation” to get familiar with Lie algebras, manifolds, and extended Kalman filters.




14. Manifold Matching via Deep Metric Learning for Generative Modeling
This study advances the recent progress in blending geometry with statistics to enhance generative models. It proposes a novel method for identifying manifolds within Euclidean spaces for generative models, such as variational autoencoders and GANs, through two neural networks:
  • Data generator sampling data on the manifold
  • Metric generator learning geodesic distances.  
Metric learning:
The metric generator produces a pullback of the Euclidean metric, while the data generator produces a pushforward of the prior distribution. The algorithm is described with easy-to-follow pseudocode.
 
The method is tested on unconditional ResNet image creation and GAN-based image super-resolution, showing improved Frechet Inception Distance and perception scores.
 
This paper will be of special interest to engineers already familiar with GANs and the Frechet metric.



15. Machine learning a manifold
In quantum physics, the symmetry of experimental data often simplifies many challenges. The researchers utilize neural networks to detect symmetries within a dataset. They define a symmetry as a transformation f acting on coordinates x within a manifold that leaves a function V invariant: V(x) = V(f(x)). This approach focuses on the manifold's local properties, which helps manage issues arising from very high-dimensional data, as the Lie algebra is situated within the manifold's tangent space.
 
To achieve this, an 8-layer feed-forward network is employed for interpolating data, predicting infinitesimally transformed fields, and identifying the symmetry. The Keras library is used for this implementation. The neural model is capable of recognizing symmetries in the Lie groups of SU(3), which involves orientation preservation, and SO(8), which relates to Riemannian metric invariance.
Understanding this paper benefits from some background in differential geometry, Lie algebra, and manifolds.
 
For an introduction to manifolds, see: https://math.berkeley.edu/~jchaidez/materials/reu/lee_smooth_manifolds.pdf


16. A Geometric Perspective on Variational Autoencoders
Variational autoencoders (VAEs) are generative models that encode input data into a reduced-dimensional latent space, making them well-suited for manifold modeling. In a standard VAE, the latent space's multivariate Gaussian distribution is substituted with a uniform distribution on the manifold, employing the Riemannian metric tensor. This Riemannian metric on the latent space is approximated using a first-order Taylor series, known as a pull-back metric.
This modeling approach enables sampling of latent values for datasets like MNIST, CIFAR, and CELEBA. These samples are then assessed against a range of autoencoders, from Wasserstein to Hamiltonian, using evaluation metrics such as Frechet Inception Distance (FID) and Precision/Recall (PRD) – specifically, F1 scores.

The sampling VAE that utilizes the manifold model demonstrates a marked enhancement in FID and PRD for the CIFAR10 and CELEBA datasets.
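
The pull-back metric itself is easy to sketch with a toy decoder and PyTorch's Jacobian utility; the decoder and latent point are my own examples, not the paper's model:

import torch
from torch.autograd.functional import jacobian

# Toy decoder g: latent space R^2 -> data space R^5
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 5)
)

def pullback_metric(z):
    """First-order pull-back metric G(z) = J(z)^T J(z), J the decoder Jacobian."""
    J = jacobian(decoder, z)  # shape (5, 2)
    return J.T @ J            # shape (2, 2)

z = torch.zeros(2)
print(pullback_metric(z))  # Riemannian metric tensor induced on the latent space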




17. Deep Hyperspherical Learning

This study addresses certain limitations of convolutional neural networks (CNNs) during training, such as overfitting and vanishing gradients. The suggested approach involves:

  • Incorporating a geometric constraint by substituting the Euclidean inner product with the geodesic distance on a hypersphere (see the sketch below).
  • Replacing the softmax loss with a normalized softmax loss that is weighted by a metric tensor (dependent on the geodesic distance input and convolution weights) on the hypersphere.
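
A minimal sketch of the angular (geodesic) operator; the linear response g(θ) = 1 - 2θ/π is one simple choice of response function, and the shapes and data are mine:

import torch
import torch.nn.functional as F

def sphere_response(w, x):
    """Respond to the geodesic distance (angle) between weights and input on the
    unit hypersphere, instead of the Euclidean inner product w . x."""
    cos_theta = F.cosine_similarity(w, x, dim=-1).clamp(-1.0, 1.0)
    theta = torch.acos(cos_theta)         # geodesic distance on the hypersphere
    return 1.0 - 2.0 * theta / torch.pi   # linear angular response in [-1, 1]

w = torch.randn(8, 27)  # 8 filters over flattened 3x3x3 patches
x = torch.randn(27)     # one input patch
print(sphere_response(w, x))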

The model's performance was assessed using the CIFAR-10/100 and ImageNet-2012 datasets, without ReLU activation, and compared to the ResNet-32 model. The proposed model achieves an accuracy comparable to the most efficient ResNet model but converges significantly faster.




18. Learning Weighted Submanifolds with Variational Autoencoders and Riemannian Variational Autoencoders
A great paper from Nina Miolane, one of the contributors to Geomstats, a very useful Python library for geometric learning.


The author explores alternatives to Euclidean data representations, motivated by brain connectomes in MRI, and devises a Riemannian variational autoencoder (with the latent space as a manifold or Lie group).


The paper evaluates geodesic vs. non-geodesic subspaces, reviews the standard Euclidean variational autoencoder, and introduces the Riemannian VAE, including maximization of a modified evidence lower bound (ELBO). The generative function applied to the latent space is constructed from the Euclidean version through the Riemannian exponential map at a base point on the manifold. The model is evaluated using a 2-Wasserstein distance on the S3 and H2 groups/manifolds.



19. Learning Manifold Dimensions with Conditional Variational Autoencoders
The authors seek to determine the dimension of the underlying manifold (a geometric characteristic) from the global minimum of a variational autoencoder (VAE) setup that includes an encoder, a decoder, and a Gaussian distribution prior.

 

The study highlights several trade-offs in the design of autoencoders, with these key findings:

  • A conditioned prior is unnecessary: a normal distribution prior suffices.
  • The initial variance of the Gaussian decoder impacts the loss function.
  • Commonly used strategies like weight sharing to accelerate training may hinder convergence.

The evaluation of both Variational Autoencoders (VAE) and Conditional Variational Autoencoders (CVAE) was conducted using the standard ELBO metrics across three datasets:

  • A synthetic dataset with 5-dimensional categorical data, consisting of 100,000 samples.
  • The MNIST dataset implemented with ResNet blocks.
  • The Fashion MNIST dataset.

The results indicate that the correct number of active dimensions was identified (5 for the synthetic dataset and 20 for MNIST), provided the latent dimension was sufficiently large.
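
One standard way to count active latent dimensions, sketched on toy encoder outputs (my illustration, not necessarily the paper's exact criterion): collapsed dimensions have near-zero KL divergence to the prior.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16

# Toy encoder outputs: only 5 of 16 dimensions carry signal, the rest match the prior
mu = np.zeros((n, d))
mu[:, :5] = rng.normal(size=(n, 5))
log_var = np.zeros((n, d))
log_var[:, :5] = -2.0

# Per-dimension KL divergence to the N(0, 1) prior, averaged over samples
kl = 0.5 * (mu ** 2 + np.exp(log_var) - log_var - 1.0).mean(axis=0)
print(int(np.sum(kl > 0.01)))  # 5 active dimensions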




20. Variational Transformer Autoencoder with Manifolds Learning
The paper illustrates how the Riemann metric effectively addresses the challenges of modeling non-linear latent spaces by interpolating between input data samples along geodesics. The researchers have developed a variational autoencoder that incorporates a spatial transformer as the encoder to map the latent space model onto a Riemann manifold.

The goal is to refine the variational autoencoder to calculate geodesic distances between input data points, utilizing the Riemannian metric tensor, and to delineate a semantic feature/latent space. The transformer encoder component is responsible for calculating the prior in the latent space. Importantly, this new model does not necessitate any modifications to the existing loss function (ELBO) or the training methodology.

Implemented in PyTorch, the model consists of four convolutional layers, two linear reshaping layers, and fifty latent variables. It has been evaluated using a variety of image sets, including grayscale images (such as MNIST and FashionMNIST) and color, natural images (such as CelebA and CIFAR-10). The proposed model demonstrates substantial improvements in image reconstruction. 
Note: The paper assumes knowledge of differential geometry and generative models. The source code is available on GitHub.



21. Riemannian Score-Based Generative Modelling
Diffusion models, or score-based generative models, progressively add noise to create a generative model that reverses the noise addition process over time. Although effective, these models primarily utilize data in Euclidean space and often fail to capture the intrinsic geometry of complex data such as proteins, geological formations, or high-energy physics components.
 
This paper outlines four steps to adapt diffusion models to manifolds:
  • Modify the noise addition process to suit manifolds using the Riemannian gradient.
  • Incorporate Brownian motion into the Euclidean time-reversal formula.
  • Implement Geodesic random walks to approximate sampling from stochastic differential equations.
  • Approximate and evaluate the drift in the time-reversal process.
The authors provide well-documented pseudocode for the geodesic random walk and the training/sampling methodology for the Riemannian Score-Based Generative model. The evaluation of this model, using datasets related to earth and climate science on a hypersphere, and synthetic datasets for SO(3), demonstrates a significant improvement in the log-likelihood score.
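
The geodesic random walk is easy to sketch on the hypersphere S^2; the step size, number of steps, and seed are my choices (the paper's pseudocode covers general manifolds):

import numpy as np

def geodesic_random_walk(x0, n_steps=1000, dt=1e-3, seed=0):
    """Approximate Brownian motion on the unit sphere: draw a Gaussian step,
    project it onto the tangent space, then follow the geodesic (exp map)."""
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_steps):
        noise = rng.normal(size=3)
        v = np.sqrt(dt) * (noise - np.dot(noise, x) * x)  # tangent-space step
        norm_v = np.linalg.norm(v)
        if norm_v > 0.0:
            x = np.cos(norm_v) * x + np.sin(norm_v) * v / norm_v  # exp map
    return x

print(geodesic_random_walk(np.array([0.0, 0.0, 1.0])))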

Code is available on GitHub.


22. Riemannian Diffusion Models
Euclidean diffusion models often fail to capture the generative factors related to the geometry of the space in applications such as protein modeling or geoscience. This paper introduces a variational architecture for Riemannian manifolds, applying the Feynman-Kac conditional expectation to the Itô diffusion process.

The authors address the diffusion stochastic differential equation on the manifold by defining test functions within a variational framework based on the Riemannian divergence of vector fields (Evidence Lower Bound). The proposed solution is validated by its association with the Riemannian score-based model, replicating the same relationship found in Euclidean space.
The algorithm is evaluated on various manifolds, including Hyperbolic spaces, SO(n), Tori, and Hyperspheres. Notably, the authors include an appendix reviewing key elements of differential geometry for non-experts.


23. Convolutional Neural Networks on Manifolds: From Graphs and Back
This paper extends deep learning models to non-Euclidean spaces by introducing manifold convolution filters. The convolutional neural network is constructed by stacking convolution layers based on Laplace-Beltrami (LB) operators for the heat diffusion process. 
After presenting the fundamental concepts of manifold signals and intrinsic gradients, the authors explore sequentially:
  1. The extraction of orthogonal eigenvalues from the LB operator
  2. The definition of the manifold filter and its frequency response using eigenfunctions
  3. The assembly of a bank of filters into a manifold neural network
  4. The sampling of space and time domains to approximate a continuous architecture with point clouds
  5. The discretization of the filter by reformulating the LB operator as a graph Laplacian
The proposed solution is evaluated through simulated sampling of point clouds for classification using the ModelNet10 dataset. Graph neural networks demonstrated a performance improvement of 50% to 68% over graph filters.
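
Steps 1, 2, and 5 can be sketched on a toy point cloud, with a k-NN graph Laplacian standing in for the discretized Laplace-Beltrami operator and a heat-diffusion frequency response; the data and parameters are my own:

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

# Point cloud sampled from a manifold (here, the unit sphere)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Discretize the Laplace-Beltrami operator as a symmetrized k-NN graph Laplacian
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
L = laplacian(0.5 * (W + W.T), normed=True)

# Eigenfunctions of the discrete operator define the filter's frequency response
eigenvalues, eigenvectors = eigsh(L, k=8, which="SM")

# Apply a heat-diffusion filter h(lambda) = exp(-t * lambda) to a manifold signal
t, signal = 1.0, X[:, 0]
filtered = eigenvectors @ (np.exp(-t * eigenvalues) * (eigenvectors.T @ signal))
print(filtered[:5])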



24. Riemannian Residual Neural Networks
This paper introduces the extension of residual neural networks to Riemannian manifolds.
 
Residual Neural Networks
Residual networks were initially designed to mitigate the issues of vanishing and exploding gradients in deep neural networks. These networks achieve this by learning residual functions relative to the layer inputs using identity mappings.
 
Riemannian Residual Neural Networks
The primary aim of this paper is to extend residual neural networks to Riemannian manifolds. By leveraging Riemannian geometry concepts such as tangent spaces, induced metrics, geodesics, and pullback/pushforward techniques, the paper demonstrates how to represent residuals and outputs from each hidden layer of a neural network on manifolds with constant sectional curvature, including Euclidean, Hyperbolic, and Spherical spaces.

Projection and Implementation
The authors detail the method of projecting the residuals for each hidden layer from the local Euclidean tangent space onto manifold geodesics via the exponential map. The implementation focuses on hyperplanes and uses the Whitney embedding theorem to project vector fields onto the tangent space. The feature map, induced by these vector fields, is learned through a pullback mechanism, assuming a closed-form definition for the exponential map.
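
A minimal sketch of one residual step on a constant-curvature manifold (the unit sphere, where the exponential map has a closed form); the residual vector stands in for a learned vector field f(x) and is my own toy value:

import numpy as np

def residual_step_sphere(x, residual):
    """Riemannian residual step x_{l+1} = exp_x(f(x)): project the residual onto
    the tangent space at x, then follow the geodesic via the exp map."""
    v = residual - np.dot(residual, x) * x   # tangent-space projection
    norm_v = np.linalg.norm(v)
    if np.isclose(norm_v, 0.0):
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * v / norm_v

x = np.array([0.0, 0.0, 1.0])           # hidden state on the sphere
f_x = np.array([0.3, -0.1, 0.2])        # stand-in for a learned residual f(x)
print(residual_step_sphere(x, f_x))     # next hidden state, still on the sphere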
 
Performance
The Riemannian residual networks show superior performance compared to various graph neural network variants on manifolds of symmetric positive-definite (SPD) matrices.



25. A singular Riemannian geometry approach to Deep Neural Networks
This paper models a deep neural network as a sequence of smooth maps between manifolds, with a Riemannian metric applied to the final layer. This metric is propagated back through the layers or maps.
The authors' approach includes:
  • Reviewing essential elements of Riemannian geometry.
  • Defining the geometry for smooth layers (utilizing weights, biases, and activation functions) and for the entire smooth network (considered as a sequence of maps between manifolds).
  • Applying this framework to analyze the equivalence of neural networks in classification problems, focusing on representation within the input manifold.
  • Evaluating level curves and equivalence classes for non-linear regression, demonstrated with the Ackley function.
  • Studying the space of weights and biases for a given input.
 
Although one section introduces key concepts of Riemannian geometry, such as metrics and tangent spaces, it is assumed that the reader has a solid understanding of differential manifolds.


26. Transformer with Hyperbolic Geometry
The paper presents a novel transformer architecture that leverages both hyperbolic and Euclidean spaces. It incorporates a hyperbolic-to-linear transformation within the self-attention module.
To address performance degradation in high-dimensional settings, the authors generate hyperbolic keys and queries. They use the Poincaré ball with negative Riemannian curvature as the hyperbolic representation, projecting from the tangent space using the exponential map. Backpropagation is carried out through a Riemannian adaptation of stochastic gradient descent.
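
The projection into the Poincaré ball uses the standard closed form of the exponential map at the origin; the dimensions and data below are my own toy values:

import torch

def expmap0(v, c=1.0, eps=1e-8):
    """Exponential map at the origin of the Poincare ball of curvature -c:
    maps tangent (Euclidean) vectors into the open unit ball."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

# Hyperbolic queries: map Euclidean projections into the Poincare ball
queries = torch.randn(4, 16)
hyperbolic_queries = expmap0(queries)
print(hyperbolic_queries.norm(dim=-1))  # all norms strictly below 1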
 
The proposed model is evaluated using the F1 metric on two datasets: biomedical named-entity recognition (sequence labeling) and an English machine reading task. The results demonstrate that the model effectively reduces overfitting in very high-dimensional spaces.


27. Deep Extrinsic Manifold Representation for Vision Tasks
This paper explores the concept of embedding extrinsic manifolds into deep neural networks for image processing tasks. The objective is to optimize the neural network's computational graph within this embedded space. Unlike intrinsic manifold regression methods, which compute the geodesic that best fits the data within the tangent space, extrinsic methods utilize an equivariant embedding into a higher-dimensional Euclidean space.
The proposed approach, termed Deep Extrinsic Manifold Representation (DEMR), involves externally embedding manifolds at the final regression layer of neural networks, such as ResNet and PointNet. This embedding is defined using a matrix Lie group, with Singular Value Decomposition (SVD) applied to extract orthogonal vectors.
The approach demonstrates significant improvements for SE(3) and related quotient Lie groups, particularly in scenarios involving:
  • Relative affine motion within SE(3)
  • Variations in illumination and pose in face recognition tasks using Grassmann manifolds.
 
The reader is expected to have basic knowledge of Lie groups and embedding manifolds.






28. Pullback Flow Matching on Data Manifolds
Generative modeling on data manifolds has proven to be more challenging than models that use the Euclidean metric. This paper addresses these challenges by proposing a method for accurate interpolation in latent space using neural ordinary differential equations (ODEs). The authors introduce a pullback geometry framework with Riemannian Flow Matching (RFM) to transform the input data manifold onto a latent manifold.

Since directly solving RFM is intractable, the paper presents an approximation strategy. This approach involves learning an isometric mapping based on the pullback metric, achieved by minimizing a global isometry loss or a graph matching loss, both of which require solving neural ODEs.
The results are compared with a conditional flow matching approach on a protein binding dataset.
A solid understanding of differential geometry is recommended for readers.


29. A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold

Optimization on Riemannian manifolds leverages intrinsic geometric properties to uncover valuable information, addressing challenges like over-parameterization and feature redundancy. This approach reduces reliance on computationally intensive models while achieving faster convergence and mitigating issues like gradient explosion or vanishing.

The article illustrates the geometric optimization process using gradient descent on manifolds, with a focus on retraction as a computationally efficient alternative to the exponential map. It evaluates various gradient descent optimizers, ranging from the standard SGD to RMSProp and manifold-constrained SGD, across different manifold types such as Symmetric Positive-Definite, Unitary, Stiefel, Grassmann manifolds, and SO(n) Lie groups.
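
Retraction-based optimization is easy to sketch on the Stiefel manifold, using the QR retraction as the cheap surrogate for the exponential map; the sizes, gradient, and learning rate are toy values, not from the survey:

import numpy as np

def qr_retraction(X, xi):
    """QR-based retraction on the Stiefel manifold St(n, p): map X + xi back
    onto the manifold without computing the exponential map."""
    Q, R = np.linalg.qr(X + xi)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d  # fix the sign ambiguity of the QR factorization

def stiefel_sgd_step(X, euclidean_grad, lr=0.1):
    """One manifold-constrained SGD step: project the Euclidean gradient onto
    the tangent space at X, then retract the update back onto the manifold."""
    sym = (X.T @ euclidean_grad + euclidean_grad.T @ X) / 2.0
    riemannian_grad = euclidean_grad - X @ sym  # tangent-space projection
    return qr_retraction(X, -lr * riemannian_grad)

rng = np.random.default_rng(0)
X = np.linalg.qr(rng.normal(size=(6, 3)))[0]    # a point on St(6, 3)
grad = rng.normal(size=(6, 3))                  # Euclidean gradient of some loss
X_new = stiefel_sgd_step(X, grad)
print(np.allclose(X_new.T @ X_new, np.eye(3)))  # True: columns stay orthonormal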
 
Applications to machine learning problems, including dimensionality reduction and state-transition modeling, are thoroughly reviewed. 
 
The final section explores geometric deep learning methods, covering Geometric CNNs, Orthogonal RNNs, and Geometric Graph Neural Networks. 
 
A reader with a basic understanding of differential geometry will find the material accessible and engaging. The inclusion of overviews for key Python libraries is a particularly appreciated touch.








Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design, and end-to-end deployment and support, with extensive knowledge in machine learning.
He has been director of data engineering at Aideo Technologies since 2017. He is the author of "Scala for Machine Learning" (Packt Publishing, ISBN 978-1-78712-238-3) and of the Geometric Learning in Python newsletter on LinkedIn.