Monday, November 9, 2015

Time Complexity for Graph & ML Algorithms

Target audience: Beginners
Estimated reading time: 3'


Asymptotic time complexity gives a high-level understanding of an algorithm's efficiency and scalability, which is crucial for comparing algorithms, predicting their performance, and identifying potential bottlenecks.
This short article lists the time complexities of the most common graph, linear algebra, and machine learning algorithms.


The most commonly used notation, Big O, defines an upper bound on the time complexity: beyond some input size, the runtime never exceeds a constant multiple of a given function of that size, no matter how large the input grows.
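For reference, here is the standard definition of this bound, where the constant c and the threshold n_0 play the role of the "threshold" just mentioned:

f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ \text{such that} \ |f(n)| \le c \cdot |g(n)| \ \text{for all} \ n \ge n_0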


Overview

Time complexity (the worst-case duration of execution as a function of the number of elements) is commonly used in computer science. However, you will be hard-pressed to find a comparison of machine learning algorithms based on their asymptotic execution time.

Summary

The following summary lists the time complexity of some of the most common algorithms used in machine learning, including recursive computation for dynamic programming, linear algebra, optimization, and discriminative supervised training.

Algorithm | Time complexity
Hash table search | O(1)
Array search | O(N)
Binary tree search | O(log N)
KD tree search | O(N)
Recursion with one-element reduction | O(N^2)
Recursion with halving reduction | O(log N)
Recursion with halving reduction, linear work per call | O(N)
Naïve Bayes | O(N·D·C)
Nearest neighbors search | O(D·log k·n·log n)
PCA | O(D^2·n + D^3)
Linear regression | O(D^2·n + D^3)
Logistic regression | O(I·n·D·C)
Matrix multiplication (a, b) x (b, c) | O(a·b·c)
Matrix multiplication (a, a) | O(a^3)
Matrix multiplication (a, a), Strassen | O(a^2.8)
Partial eigenvalues extraction | O(e·n^2)
Complete eigenvalues extraction | O(n^3)
Minimum spanning tree, Prim with linear queue | O(V^2)
Minimum spanning tree, Prim with binary heap | O((E + V)·log V)
Minimum spanning tree, Prim with Fibonacci heap | O(E + V·log V)
Shortest paths, Dijkstra with linear queue | O(V^2)
Shortest paths, Dijkstra with binary heap | O((E + V)·log V)
Shortest paths, Dijkstra with Fibonacci heap | O(E + V·log V)
Shortest paths on kNN graph, Dijkstra | O((k + log N)·N^2)
Shortest paths on kNN graph, Floyd-Warshall | O(N^3)
Fast Fourier transform | O(n·log n)
Batch gradient descent | O(n·P·ep)
Stochastic gradient descent | O(n·P·ep)
Newton with Hessian computation | O(n^3·ep)
Conjugate gradient | O(n·n0·√cd)
L-BFGS | O(n·g·I)
K-means (*) | O(C·n·D·I)
Decision tree | O(n^2·D)
Lasso regularization, LARS (*) | O(n·D·min(n, D))
Hidden Markov model, forward-backward | O(s^2·n)
Multilayer perceptron (*) | O(i·h·o·n·ep)
Support vector machine (*), Newton | O(D^2·n + n^3)
Support vector machine (*), Cholesky | O(D^2·n + n^2)
Support vector machine (*), SMO | O(D^2·n + n^2)
Principal components analysis (*) | O(D^2·n + D^3)
Expectation-maximization (*) | O(D^2·n)
Laplacian Eigenmaps | O(D·log k·n·log n + m·n·k^2 + D·N^2)
Genetic algorithms | O(pz·log pz·I·G)
Feed-forward neural network | O(n·h·(i + h^2/H)·H)
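To make the two recursion entries above concrete, here is a minimal Python sketch contrasting them; the function names and step counters are illustrative, not part of the original table:

# Minimal sketch of two recursion patterns from the table above.

def one_element_reduction(n: int) -> int:
    """T(n) = T(n-1) + O(n): linear work per call gives O(n^2) total steps."""
    if n <= 0:
        return 0
    steps = n          # O(n) work at this level, e.g. a scan of the input
    return steps + one_element_reduction(n - 1)

def halving_reduction(n: int) -> int:
    """T(n) = T(n/2) + O(1): constant work per call gives O(log n) total steps."""
    if n <= 1:
        return 0
    return 1 + halving_reduction(n // 2)

if __name__ == "__main__":
    for n in (64, 128, 256):
        # one_element_reduction grows quadratically (n(n+1)/2 steps);
        # halving_reduction gains only one step each time n doubles.
        print(n, one_element_reduction(n), halving_reduction(n))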



Symbol | Meaning
N | Number of elements
n | Number of observations
D | Number of dimensions (features)
C | Number of classes
I | Number of iterations
ep | Number of epochs
k | Number of neighbors
a | Number of rows of the first matrix
b | Number of columns of the first matrix (and rows of the second)
c | Number of columns of the second matrix
e | Number of eigenvalues
V | Number of vertices
E | Number of edges
P | Number of variables (parameters)
cd | Condition number of the matrix
n0 | Number of non-zero elements
g | Number of gradients
s | Number of states
i | Number of input variables
h | Number of hidden neurons
H | Number of hidden layers
o | Number of output variables
G | Number of genes
pz | Population size
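A quick way to sanity-check an entry such as square matrix multiplication, O(a^3), is to time a naive implementation at two sizes and compare the growth against the prediction: doubling the size should multiply the runtime by roughly 2^3 = 8. The script below is an illustrative sketch, not from the original post:

import time

def matmul_naive(A, B):
    # Naive (a, b) x (b, c) product: O(a*b*c) scalar multiplications,
    # i.e. O(a^3) when both matrices are square of size a.
    a, b, c = len(A), len(B), len(B[0])
    C = [[0.0] * c for _ in range(a)]
    for i in range(a):
        for j in range(c):
            s = 0.0
            for k in range(b):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def time_square_multiply(a):
    A = [[1.0] * a for _ in range(a)]
    start = time.perf_counter()
    matmul_naive(A, A)
    return time.perf_counter() - start

t1 = time_square_multiply(100)
t2 = time_square_multiply(200)
print(f"doubling the size multiplied the runtime by {t2 / t1:.1f} (O(a^3) predicts ~8)")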



