I'm interested in how fast SVMs can classify new data with $c \in \mathbb{N}_{\geq 2}$ classes and $n \in \mathbb{N}_{\geq 1}$ features.
## Example for Neural Networks
For neural networks, this depends very much on the architecture. Suppose you only have one hidden layer with $3n$ neurons; then you have an $n:3n:c$ topology and hence
- one multiplication of an $n$-dimensional vector with a matrix in $\mathbb{R}^{n \times 3n}$,
- then a multiplication of a vector in $\mathbb{R}^{3n}$ with a matrix in $\mathbb{R}^{3n \times c}$
- and of course $3n+c$ applications of the activation functions.
- Adding the biases is dominated by the matrix multiplications.
This results in an overall complexity of $\mathcal{O}(n^2 + n \cdot c)$ for a single classification, since the two matrix multiplications cost $3n^2$ and $3nc$ multiplications, respectively.
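The forward pass described above can be sketched as follows (a minimal NumPy sketch with toy sizes and random weights; `tanh` is just one possible choice of activation function):

```python
import numpy as np

n, c = 4, 3                            # features, classes (toy sizes)
rng = np.random.default_rng(0)

# n:3n:c topology: weights and biases for the two layers
W1 = rng.standard_normal((n, 3 * n))   # matrix in R^{n x 3n}
b1 = rng.standard_normal(3 * n)
W2 = rng.standard_normal((3 * n, c))   # matrix in R^{3n x c}
b2 = rng.standard_normal(c)

def classify(x):
    """One forward pass: O(n * 3n) + O(3n * c) multiplications."""
    h = np.tanh(x @ W1 + b1)           # 3n activation applications
    out = np.tanh(h @ W2 + b2)         # c activation applications
    return int(np.argmax(out))         # predicted class index

x = rng.standard_normal(n)
prediction = classify(x)               # some index in {0, ..., c-1}
```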
## Question
I would be interested in a similar analysis of the classification complexity (NOT the training complexity!) of SVMs, preferably with a reference to the literature.
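To make precise what I mean by "classification", here is a sketch of the quantity to be analyzed: evaluating a kernel SVM decision function for one binary decision. All names and values below are illustrative, and I am assuming the standard dual form $f(x) = \sum_i \alpha_i y_i K(x_i, x) + b$ over the support vectors:

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||^2): O(n) per evaluation."""
    d = x - z
    return np.exp(-gamma * np.dot(d, d))

def decision(x, support_vectors, dual_coefs, b):
    """f(x) = sum_i dual_coefs[i] * K(sv_i, x) + b,
    where dual_coefs[i] = alpha_i * y_i.
    Cost for one binary decision: O(n_sv * n) with an O(n) kernel."""
    return sum(a * rbf_kernel(sv, x)
               for a, sv in zip(dual_coefs, support_vectors)) + b

rng = np.random.default_rng(1)
n, n_sv = 5, 4                          # features, support vectors (toy sizes)
svs = rng.standard_normal((n_sv, n))
coefs = rng.standard_normal(n_sv)       # alpha_i * y_i
x = rng.standard_normal(n)
label = 1 if decision(x, svs, coefs, 0.0) >= 0 else -1
```

In particular, I would like to understand how the number of support vectors, the choice of kernel, and the multi-class strategy (one-vs-one vs. one-vs-rest) enter the bound.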