For a standard normal random variable $X \sim \mathcal{N}(0,1)$, we have the simple upper-tail bound $$\mathbb{P} (X > x) \leq \frac{1}{x \sqrt{2\pi}} e^{-x^2 / 2}$$ for $x > 0$, from which we can deduce the general upper-tail bound for $X' \sim \mathcal{N}(\mu, \sigma^2)$: $$\mathbb{P}(X' > x) = \mathbb{P}\left(X > \frac{x - \mu}{\sigma}\right) \leq \frac{\sigma}{(x - \mu)\sqrt{2 \pi}} e^{-(x - \mu)^2/(2\sigma^2)},$$ valid for $x > \mu$.
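For concreteness, here is a quick numerical check of this univariate bound (just a sketch with arbitrary parameter values, using scipy's survival function for the exact tail probability):

```python
import numpy as np
from scipy.stats import norm

# Sanity check of the upper-tail bound for X' ~ N(mu, sigma^2):
#   P(X' > x) <= sigma / ((x - mu) * sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2)),  for x > mu
mu, sigma = 1.0, 2.0  # arbitrary example parameters
for x in [3.0, 5.0, 9.0]:
    exact = norm.sf(x, loc=mu, scale=sigma)  # exact upper-tail probability P(X' > x)
    bound = sigma / ((x - mu) * np.sqrt(2 * np.pi)) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    print(f"x = {x}: P(X' > x) = {exact:.3e}, bound = {bound:.3e}")
```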
How can this type of exponential decay bound be generalized to a $d$-dimensional multivariate normal distribution $\vec{X}$ with mean $\vec{\mu}$ and covariance matrix $\Sigma$? Specifically, can we bound the probability $$\mathbb{P}(\|\vec{X} - \vec{\mu}\| > x)?$$ My guess is that we can probably get a bound that is exponentially small in $x^2$, but how exactly does $\Sigma$ figure in? The more concise the description, the better; e.g., I would prefer a bound that depends only on certain eigenvalues of $\Sigma$ to one that depends on all the entries of $\Sigma$. Even better would be a bound depending only on $\|\Sigma\|$ (for some suitable norm).
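To make the quantity concrete, here is a quick Monte Carlo estimate of this probability for an arbitrary example $\Sigma$ (just a sketch; the covariance matrix and thresholds are made up):

```python
import numpy as np

# Monte Carlo estimate of P(||X - mu|| > x) for an example 3-dimensional Gaussian.
rng = np.random.default_rng(0)
mu = np.zeros(3)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 0.5]])  # arbitrary positive-definite covariance

samples = rng.multivariate_normal(mu, Sigma, size=1_000_000)
norms = np.linalg.norm(samples - mu, axis=1)
for x in [3.0, 4.0, 5.0]:
    print(f"x = {x}: estimated P(||X - mu|| > x) = {np.mean(norms > x):.2e}")
```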
I took a look at this post, but my question is simpler, since I don't want a bound on each component of the multivariate Gaussian. I suspect there is a simpler bound (one that does not depend on individual matrix entries) than the one given in the answer to the linked question.
Thanks!