This question came up while reading Ch. 3 of Rasmussen & Williams http://www.gaussianprocess.org/gpml/. At the end of that chapter, the authors give results for a handwritten digit classification problem (16x16 greyscale images); the features are 256 pixel intensities plus a bias term. I was surprised that in such a high-dimensional problem, 'metric' methods, like Gaussian processes with a squared-exponential kernel, or an SVM with the same kernel, perform quite well.
Also, I have sometimes heard that SVMs work well for [essentially bag-of-words] text classification. Why don't these methods suffer from the curse of dimensionality?
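To make the observation concrete, here is a small experiment one can run (this is my own illustration, not from the book; it uses scikit-learn's 8x8 digits dataset, 64 pixel-intensity features, rather than the 16x16 USPS data the authors used). An SVM with an RBF (i.e. squared-exponential) kernel still classifies these moderately high-dimensional inputs accurately:

```python
# Illustrative sketch (assumes scikit-learn is installed); the dataset and
# hyperparameters are my choices, not the ones from Rasmussen & Williams.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 1797 samples, 64 features (8x8 pixel intensities), 10 classes.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Scale features so a single shared RBF length-scale is reasonable across pixels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=10.0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

On this dataset the RBF SVM reaches high test accuracy despite the 64-dimensional input, which is the behavior I find surprising.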