Target audience: Beginner
Estimated reading time: 3'
Newsletter: Geometric Learning in Python
Notes:
- Environments: Python 3.12.5, Numpy 2.2.0
- Source code is available on GitHub [ref 1]
- To enhance the readability of the algorithm implementations, we have omitted non-essential code elements such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers, and import statements.
Overview
Matrix operations
Each einsum expression below is validated against the equivalent computation using the @ operator or a dedicated NumPy method.

# Matrix multiplication
a = np.array([[1.0, 0.5], [2.0, 1.5]])
b = np.array([[0.1, 2.0], [1.2, 0.5]])
output_matrix = np.einsum('ij,jk->ik', a, b)
ref_matrix = a @ b
# Dot product
a = np.array([1.0, 0.5, 2.0])
b = np.array([0.1, 2.0, 1.2])
output = np.einsum('i,i->', a, b)
ref_dot_value = np.dot(a, b)
# Sum of element-wise products (Frobenius inner product)
a = np.array([[1.0, 0.5], [2.0, 1.5]])
b = np.array([[0.1, 2.0], [1.2, 0.5]])
output = np.einsum('ij,ij->', a, b)
ref_output = np.sum(a * b)
# Outer product
a = np.array([1.0, 0.5, 2.0, 0.7])
b = np.array([0.1, 2.0, 1.2])
output = np.einsum('i,j->ij', a, b)
ref_output = np.outer(a, b)
# Transpose
a = np.array([[1.0, 0.5], [2.0, 1.5]])
output = np.einsum('ij->ji', a)
ref_output = a.T
# Trace
a = np.array([[1.0, 0.5], [2.0, 1.5]])
output = np.einsum('ii->', a)
ref_output = np.trace(a)
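The identities above can be checked numerically. The sketch below (variable names are mine; it re-declares the arrays so it is self-contained) asserts that each einsum expression agrees with its NumPy reference.

```python
import numpy as np

a = np.array([[1.0, 0.5], [2.0, 1.5]])
b = np.array([[0.1, 2.0], [1.2, 0.5]])

# Matrix multiplication
assert np.allclose(np.einsum('ij,jk->ik', a, b), a @ b)
# Sum of element-wise products (Frobenius inner product)
assert np.allclose(np.einsum('ij,ij->', a, b), np.sum(a * b))
# Transpose
assert np.allclose(np.einsum('ij->ji', a), a.T)
# Trace
assert np.allclose(np.einsum('ii->', a), np.trace(a))

v = np.array([1.0, 0.5, 2.0])
w = np.array([0.1, 2.0, 1.2])
# Dot product
assert np.allclose(np.einsum('i,i->', v, w), np.dot(v, w))
# Outer product
assert np.allclose(np.einsum('i,j->ij', v, w), np.outer(v, w))
```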
Examples
Kalman filter state equation
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # State transition matrix
B = np.array([[0.5], [1.0]])             # Control input matrix
x_k_1 = np.array([2.0, 1.0])             # State vector at time k-1
u = np.array([0.1])                      # Control input
w = np.array([0.05, 0.02])               # Process noise

# Using NumPy matrix operator
x_k = A @ x_k_1 + B @ u + w
# Using Einstein summation method
x_k = np.einsum('ij,j->i', A, x_k_1) + np.einsum('ij,j->i', B, u) + w
Neural linear transformation
W = np.array([[0.20, 0.80, -0.50],
              [0.50, -0.91, 0.26],
              [-0.26, 0.27, 0.17]])   # Weight matrix, shape (3, 3)
x = np.array([1, 2, 3])               # Input vector, shape (3,)
b = np.array([2.0, 3.0, 0.5])         # Bias vector, shape (3,)

# Using @ operator
y = W @ x + b
# Using Einstein summation
y_einsum = np.einsum('ij,j->i', W, x) + b
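One advantage of einsum is that the same subscript expression extends naturally to batched inputs. As an illustrative sketch (the batch array X and its shape are my own additions, not from the original), the layer above applied to a batch of input vectors:

```python
import numpy as np

W = np.array([[0.20, 0.80, -0.50],
              [0.50, -0.91, 0.26],
              [-0.26, 0.27, 0.17]])   # Weight matrix, shape (3, 3)
b = np.array([2.0, 3.0, 0.5])         # Bias vector, shape (3,)
X = np.array([[1.0, 2.0, 3.0],
              [0.5, -1.0, 2.0]])      # Hypothetical batch of inputs, shape (2, 3)

# 'ij,bj->bi': contract over feature index j for every batch row b
Y = np.einsum('ij,bj->bi', W, X) + b
# Equivalent matrix form
Y_ref = X @ W.T + b
assert np.allclose(Y, Y_ref)
```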
Simplified Einstein field equation
# Define the Minkowski metric tensor (4-dimensional, signature -+++)
g = np.array([[-1, 0, 0, 0],
              [ 0, 1, 0, 0],
              [ 0, 0, 1, 0],
              [ 0, 0, 0, 1]])   # Shape (4, 4)

# Ricci curvature tensor
R_mu_nu = np.random.rand(4, 4)   # Shape (4, 4)

# R: scalar curvature, the trace of the Ricci tensor raised with the inverse metric
R = np.einsum('ii', np.linalg.inv(g) @ R_mu_nu)

# Compute Einstein tensor G_mu_nu
G_mu_nu = R_mu_nu - 0.5 * R * g
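The scalar curvature above can also be written as a single einsum contraction rather than a matrix product followed by a trace. The sketch below (using a seeded generator so the check is reproducible; the seed is my own choice) confirms both forms agree.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])    # Minkowski metric, shape (4, 4)
rng = np.random.default_rng(42)
R_mu_nu = rng.random((4, 4))          # Stand-in Ricci tensor, shape (4, 4)

g_inv = np.linalg.inv(g)
# Scalar curvature via matrix product and trace
R_trace = np.einsum('ii', g_inv @ R_mu_nu)
# Same contraction expressed directly: sum over mu, nu of g_inv[mu, nu] * R[nu, mu]
R_direct = np.einsum('ij,ji->', g_inv, R_mu_nu)
assert np.isclose(R_trace, R_direct)

# Einstein tensor
G_mu_nu = R_mu_nu - 0.5 * R_trace * g
```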
Thanks for reading. For comprehensive coverage of geometric learning, including detailed analyses, reviews, and exercises, subscribe to Hands-on Geometric Deep Learning.
References
-------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design, and end-to-end deployment and support, with extensive knowledge in machine learning.
He has been director of data engineering at Aideo Technologies since 2017, and he is the author of "Scala for Machine Learning" (Packt Publishing, ISBN 978-1-78712-238-3) and of the Geometric Learning in Python newsletter on LinkedIn.