9

Assume I have the following lens distortion function:

$$ x' = x (1 + k_1 r^2 + k_2 r^4) \\ y' = y (1 + k_1 r^2 + k_2 r^4) $$

where $r^2 = x^2 + y^2$. Given coefficients $k_1$ and $k_2$, I need to calculate the inverse function:

$$ x = f(x') = \, ?\\ y = f(y') = \, ? $$

This inverse function can be an estimate as well, e.g., a polynomial function whose coefficients can be calculated with numerical methods.

My problem is the following:

My first program, given a picture, generates another picture by simulating lens distortion. I want to create a second program which, given the output of the first one and the coefficients of the lens distortion function used, will reconstruct the original image.

First I tried:

$$ x = { x' \over 1 + k_1r'^2 + k_2r'^4}\\ y = { y' \over 1 + k_1r'^2 + k_2r'^4} $$

However, since $r'^2=x'^2+y'^2\neq r^2$, this won't give the original values of $x$ and $y$.

I was thinking then if I can use a similar formula, but different coefficients:

$$ x = x' (1 + k'_1r'^2 + k'_2r'^4)\\ y = y' (1 + k'_1r'^2 + k'_2r'^4) $$

where $k'_1$ and $k'_2$ would be calculated from $k_1$ and $k_2$.

But I'm open to any suggestion.

Pal Szasz
  • Is $r=\sqrt{x^2+y^2}$? – Hans Lundmark Feb 27 '14 at 15:18
  • Welcome to Math.SE! This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level. – Joe Tait Feb 27 '14 at 15:19
  • Yes, $r=\sqrt{x^2+y^2}$. I edited the question a bit, hopefully it's more clear now. – Pal Szasz Feb 27 '14 at 18:18
  • Presumably this distortion formula is an approximation (truncated series in $r^2$). It makes sense to try an approximation of the same form in the inverse direction. Perhaps adjust the coefficients by least squares (just continuous, not discrete). I don't know how hairy the systems of equations turn out to be... – vonbrand Feb 27 '14 at 18:33
  • Are you still looking for the solution? I have solved this using Newton iteration, but need to know whether to bother writing down the solution for you. – Libor Sep 14 '16 at 20:06
  • I moved on to a different project now... but for other people it could still be useful. – Pal Szasz Sep 16 '16 at 08:09

5 Answers

9

Using a slightly different but equivalent notation, the radius of the distorted point as a function of the radius of the undistorted point is described by the following equation:

\begin{aligned} f(r) = r + k_1 r^3 + k_2 r^5 \end{aligned}

where $r = \sqrt{x^2 + y^2}$.

Problem Statement

Given $x', y'$, let $r' = \sqrt{x'^2 + y'^2}$. Find $r \in f^{-1}(r')$, after which $x = \frac{r}{r'} \cdot x'$ and $y = \frac{r}{r'} \cdot y'$ (in the special case where $r' = 0$, $x = x'$ and $y = y'$).

General Solution

The solutions to the problem are all the real roots of the polynomial $f(r) - r' = k_2 r^5 + k_1 r^3 + r - r'$. There is no closed-form solution for the roots of a quintic function, but there are software solutions available in various programming languages that provide numerical estimates (Python, R, C++, MATLAB, Mathematica, etc.). Many of these solutions rely on the Jenkins-Traub algorithm.
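
As a concrete illustration (not part of the original answer; the helper name and sample coefficients are mine), `numpy.roots` can be used to obtain such numerical estimates:

```python
# Minimal sketch: find all real, non-negative roots of
# f(r) - r' = k2*r^5 + k1*r^3 + r - r' using numpy.roots.
import numpy as np

def undistorted_radii(r_dist, k1, k2):
    # Coefficients of k2*r^5 + 0*r^4 + k1*r^3 + 0*r^2 + 1*r - r_dist,
    # highest degree first, as numpy.roots expects.
    roots = np.roots([k2, 0.0, k1, 0.0, 1.0, -r_dist])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real >= 0.0])

# Example with illustrative coefficients k1 = -0.2, k2 = 0.05:
print(undistorted_radii(0.8, k1=-0.2, k2=0.05))
```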

Alternative (Simpler) Methods

When Is $f$ Invertible?

For many realistic values of $(k_1, k_2)$, it turns out that $f$ is an invertible function, which means we can find a unique solution for any $r'$. In some applications, ensuring that $f$ is invertible is important. The following observations allow us to understand when $f$ is invertible:

  • $f'(0) = 1$, which means that $f$ is strictly increasing at $r = 0$.
  • $f'$ has real roots if and only if $k_2 \leq g(k_1)$, where

\begin{aligned} g(k_1) = \begin{cases} \frac{9}{20}k_1^2 & ~\text{if}~k_1 < 0, \\ 0 & ~\text{otherwise}. \end{cases} \end{aligned}

I'll omit the proof of the second bullet, but it can be derived by applying the quadratic formula to $f'(r)$ after the substitution $u = r^2$.

It follows from the second bullet that $f$ is strictly monotone (and therefore invertible) if and only if $k_2 \geq g(k_1)$. The first bullet further implies that if $f$ is invertible, then it is also strictly increasing.

According to Figure 3 in [1], the majority of empirically measured lens distortion coefficients fall within the region for which $f$ is invertible. Below, I've plotted the empirical curve from the paper (red) along with the region of valid coefficients (green).

Invertible region visualization

Method 1: Bisection Search (Guaranteed Convergence Under Loose Conditions)

If $f$ is invertible, we may use a simple bisection search to find $\hat{r} \approx f^{-1}(r')$ with error tolerance $\left|f(\hat{r}) - r'\right| \leq \tau$. The actual conditions for convergence using the bisection search algorithm presented in this section may be relaxed to the following:

Important: This algorithm is guaranteed to terminate only if either $k_2 > 0$ or ($k_1 \geq 0$ and $k_2 \geq 0$). For computational reasons, I recommend ensuring that $k_2 > \varepsilon$ for a small $\varepsilon > 0$ when $k_1 < 0$. These conditions assure us that $\lim_{r \uparrow \infty} f(r) = \infty$, which guarantees that an upper bound for the bisection search will be established (referring to $r_u$ in the algorithm below). Keep in mind that applying this to a non-invertible $f$, i.e., when $k_2 < g(k_1)$, means the solution is not guaranteed to be unique. Note that this algorithm may also be applied to higher-order polynomial radial distortion models and is guaranteed to converge as long as the highest-order coefficient is positive.

function $f^{-1}(r'; \tau)$:
    $r_l \gets 0$
    $r_u \gets 2 \cdot r'$
    while $f(r_u) < r'$
        $r_l \gets r_u$
        $r_u \gets 2 \cdot r_u$
    $\hat{r} \gets \frac{1}{2} \cdot (r_l + r_u)$
    $q \gets f(\hat{r})$
    while $\left| q - r' \right| > \tau$
        if $q > r'$ then $r_u \gets \hat{r}$ else $r_l \gets \hat{r}$
        $\hat{r} \gets \frac{1}{2} \cdot (r_l + r_u)$
        $q \gets f(\hat{r})$
    return $\hat{r}$
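
A direct Python transcription of this pseudocode might look like the following (a sketch; `distort` and the parameter names are mine):

```python
def distort(r, k1, k2):
    # Forward model: f(r) = r + k1*r^3 + k2*r^5
    return r * (1.0 + k1 * r**2 + k2 * r**4)

def inverse_bisection(r_dist, k1, k2, tol=1e-10):
    # Bisection search for r with |f(r) - r_dist| <= tol,
    # assuming f is monotonically increasing on [0, inf).
    r_lo, r_hi = 0.0, 2.0 * r_dist
    # Double the upper bracket until it overshoots the target radius.
    while distort(r_hi, k1, k2) < r_dist:
        r_lo, r_hi = r_hi, 2.0 * r_hi
    r_mid = 0.5 * (r_lo + r_hi)
    q = distort(r_mid, k1, k2)
    while abs(q - r_dist) > tol:
        if q > r_dist:
            r_hi = r_mid
        else:
            r_lo = r_mid
        r_mid = 0.5 * (r_lo + r_hi)
        q = distort(r_mid, k1, k2)
    return r_mid
```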

Method 2: Simple Iterative Method (Without Convergence Guarantees)

Source: As far as I can tell, the algorithm presented in this section was originally presented here. It is also the method used by the OpenCV undistortPoints function (method not documented but can be verified by looking at the source code).

This method has the benefit of being extremely simple to implement and converges quickly for relatively small amounts of distortion. If applying a fixed number of iterations without convergence testing, this is also a differentiable operation, which may be necessary for certain applications.

We can intuitively understand the method from the observation that the original equation may be reorganized to

\begin{equation} r = \frac{r'}{1 + k_1 r^2 + k_2 r^4} \end{equation}

which gives rise to the iteration

\begin{equation} r_{0} := r', ~~~r_{n+1} := \frac{r'}{1 + k_1 r_{n}^2 + k_2 r_{n}^4}. \end{equation}

While this usually converges for small and moderate distortion, it won't always converge for large distortions, even when $f$ is invertible. For example, if $(k_1, k_2) = (-0.4, 0.12)$ and $r' = 2$, we get oscillatory behavior as seen in the following figure:

failure to converge

The method can be augmented to operate with a reduced step size. For example, let $\alpha \in (0, 1]$, then

\begin{equation} r_{0} := r', ~~~r_{n+1} := (1 - \alpha) \cdot r_{n} + \alpha \cdot \frac{r'}{1 + k_1 r_{n}^2 + k_2 r_{n}^4}. \end{equation}

Unfortunately, a good value for $\alpha$ depends on the specific values of $k_1$, $k_2$, and $r'$. I have yet to work out a bound on $\alpha$ or a schedule that guarantees convergence. Based on empirical testing, I propose the conjecture that there exists some $\alpha$ that will always result in convergence, though choosing $\alpha$ too small results in very slow convergence.

function $f^{-1}(r'; N, \alpha, \tau)$:
    $\hat{r} \gets r'$
    $n \gets 0$
    while $\left|f(\hat{r}) - r'\right| > \tau$ and $n < N$
        $\hat{r} \gets (1 - \alpha) \cdot \hat{r} + \alpha \cdot r' / \left(1 + k_1\cdot \hat{r}^2 + k_2\cdot \hat{r}^4 \right)$
        $n \gets n + 1$
    return $\hat{r}$

For reference, the parameters that result in behavior equivalent to OpenCV are $N = 10$, $\alpha = 1$, and $\tau = 0$.
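
In Python, the damped iteration might be sketched as follows (the function and parameter names are mine; with `n_max=10`, `alpha=1.0`, `tol=0.0` it follows the reference behavior described above):

```python
def inverse_fixed_point(r_dist, k1, k2, n_max=10, alpha=1.0, tol=0.0):
    # Damped fixed-point iteration:
    # r <- (1 - alpha)*r + alpha * r' / (1 + k1*r^2 + k2*r^4)
    r = r_dist
    n = 0
    while abs(r * (1.0 + k1 * r**2 + k2 * r**4) - r_dist) > tol and n < n_max:
        r = (1.0 - alpha) * r + alpha * r_dist / (1.0 + k1 * r**2 + k2 * r**4)
        n += 1
    return r
```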


[1] Lopez, M., Mari, R., Gargallo, P., Kuang, Y., Gonzalez-Jimenez, J., & Haro, G. (2019). Deep single image camera calibration with radial distortion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 11817-11825).

jodag
  • Does the second method have a name? That would make it easier to document when implementing. – GraftCraft Nov 19 '24 at 10:55
  • @GraftCraft It's a form of fixed-point iteration, so you could call it that. – jodag Nov 19 '24 at 14:47
  • I just looked a bit more at it. Isn't it the secant method? – GraftCraft Nov 19 '24 at 15:39
  • @GraftCraft I don't think this is the secant method. The secant method relies on the previous two estimates while the method above only relies on the previous estimate. – jodag Nov 19 '24 at 15:51
  • I think one of the estimates disappears if we use the method for a function $g(r) := f(r) - r'$ and starting points $r_0 = 0$ and $r_1 = r'$. – GraftCraft Nov 20 '24 at 09:30
  • @GraftCraft With those initial conditions the fixed-point and secant methods give the same first step, but they diverge on the second step. For example, if we let $k_1=1, k_2=0, r'=1$, then the secant method's second step is $r_3 = 7/11$ and the fixed-point iteration's second step is $r_2 = 4/5$. – jodag Nov 20 '24 at 16:24
4

The map $f:\>(x,y)\mapsto(x',y')$ is rotationally symmetric: It maps concentric circles of radius $r$ to concentric circles of radius $r'=r(1+c_1 r^2+c_2 r^4)$, and ${\rm arg}(x',y')={\rm arg}(x,y)$. We therefore just have to invert the function $$\psi:\quad r\mapsto s=r(1+c_1 r^2+c_2 r^4)\ ,$$ where I have written $s$ instead of $r'$. This can be done with a power series to any desired accuracy. One obtains $$r=s\bigl(1-c_1 s^2+(3c_1^2-c_2)s^4-4(3c_1^3-2c_1c_2)s^6+ \ ?s^8\bigr)\ ,$$ so that $f^{-1}$ appears as $$\eqalign{x&=x'(1-c_1r'^2+(3c_1^2-c_2)r'^4-\ldots) \cr y&=y'(1-c_1r'^2+(3c_1^2-c_2)r'^4-\ldots) \ .\cr}$$

  • is it $-4(3c_1^3-2c_1c_2)s^6$ instead of $-4(3c_1^2-2c_1c_2)s^6$ (for homogeneity reasons) ? – piercus Nov 14 '19 at 09:24
  • @piercus: You are right; there was a typo there. I did the inversion (with Mathematica) again, and the exponent $3$ appeared. – Thank you for spotting and reporting the error. – Christian Blatter Nov 14 '19 at 09:55
2

Starting from @Christian Blatter's answer, writing for example $$r(1+c_1 r^2+c_2 r^4)=r(1+c_1 r^2+c_2 r^4)+O(r^{12})$$ and using series reversion, we should get $$r=s\left(1+\sum_{n=1}^{5} a_n\,s^{2n}\right)+O(s^{12})$$ where the coefficients are $$\left( \begin{array}{cc} n & a_n \\ 1 & -c_1 \\ 2 & 3 c_1^2-c_2 \\ 3 & 8 c_1 c_2-12 c_1^3 \\ 4 & 5 \left(11 c_1^4-11 c_2 c_1^2+c_2^2\right) \\ 5 & -13 c_1 \left(21 c_1^4-28 c_2 c_1^2+6 c_2^2\right) \end{array} \right)$$
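
For illustration, a small Python sketch (mine, not part of the answer) that evaluates this truncated inverse series and rescales the distorted point accordingly:

```python
def undistort_series(xd, yd, c1, c2):
    # Truncated inverse series r = s*(1 + a1*s^2 + ... + a5*s^10),
    # where s^2 = x'^2 + y'^2, using the coefficients tabulated above.
    s2 = xd * xd + yd * yd
    a1 = -c1
    a2 = 3.0 * c1**2 - c2
    a3 = 8.0 * c1 * c2 - 12.0 * c1**3
    a4 = 5.0 * (11.0 * c1**4 - 11.0 * c2 * c1**2 + c2**2)
    a5 = -13.0 * c1 * (21.0 * c1**4 - 28.0 * c2 * c1**2 + 6.0 * c2**2)
    # Evaluate r/s by Horner's rule in s^2; then (x, y) = (x', y') * r/s.
    scale = 1.0 + s2 * (a1 + s2 * (a2 + s2 * (a3 + s2 * (a4 + s2 * a5))))
    return xd * scale, yd * scale
```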

1

For moderate distortion, you can use a single step of Newton, starting with the approximation $r'_0\approx r=\sqrt{x^2+y^2}$.

Then

$$r'_1={r'_0}-\frac{r'_0(1+k_1{r'_0}^2+k_2{r'_0}^4)-r}{1+3k_1{r'_0}^2+5k_2{r'_0}^4}.$$

Then as there is no tangential distortion,

$$x'_1=x\frac{r'_1}r,y'_1=y\frac{r'_1}r.$$

For stronger distortion, you can add a few iterations (or change lens :-) )
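
A minimal Python sketch of this Newton update (my own transcription, interpreting the formula above as solving $\rho(1+k_1\rho^2+k_2\rho^4)=r$ for $\rho$, with $r$ the radius of the known point):

```python
from math import sqrt

def undistort_newton(x, y, k1, k2, iters=1):
    # Newton iteration on rho*(1 + k1*rho^2 + k2*rho^4) - r = 0, started at
    # rho = r, where (x, y) is the known point with radius r; the point is
    # then rescaled by rho/r. Use iters > 1 for stronger distortion.
    r = sqrt(x * x + y * y)
    if r == 0.0:
        return x, y
    rho = r
    for _ in range(iters):
        residual = rho * (1.0 + k1 * rho**2 + k2 * rho**4) - r
        derivative = 1.0 + 3.0 * k1 * rho**2 + 5.0 * k2 * rho**4
        rho -= residual / derivative
    return x * rho / r, y * rho / r
```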

  • I am personally using a different approach: rather than inverting the distortion function, I solve the distortion problem with the distorted and undistorted data swapped. This directly gives an "undistortion model" of the same accuracy as the forward one. –  Apr 23 '20 at 07:03
0

Please see this link (http://www.mdpi.com/1424-8220/16/6/807/pdf).

In this article, the authors present a new approach to calculating the inverse of radial distortion. Given a forward distortion modeled by a polynomial expression, the method proposes another polynomial expression for the inverse, whose coefficients are computed as functions of the original ones.

  • As links can break unpredictably it would be useful if you could put a little more detail in your answer about what the new approach is and how it works. – postmortes Jun 24 '17 at 05:50
  • For sure, the paper looks interesting. As postmortes commented, it would be very good you add some details in your answer. In any manner, thanks for the link to this very recent paper. – Claude Leibovici Jun 24 '17 at 06:11
  • It appears this paper has rediscovered the solution by @ChristianBlatter, though it does add more coefficients and covers higher-order polynomials. You can compare the coefficients with those presented in Appendix C of that paper. – jodag Apr 22 '20 at 16:41