
Say that we have two lines in two dimensions, given parametrically by the points $\vec{m} + s\vec{k}$ and $\vec{n} + t\vec{j}$, and we want to find the $s$ and $t$ for which these lines intersect. The obvious solution would be to solve the equations for the vector components:

$$\begin{cases} m_x + sk_x = n_x + tj_x \\ m_y + sk_y = n_y + tj_y \\ \end{cases}$$

However, in order to solve these, we have to divide by one of $k_x$, $j_x$, $k_y$ or $j_y$ at some point, which introduces a special case wherever that component is zero.

Clearly, however, the basic geometry of the problem only introduces the intrinsic special case where the lines are parallel; I don't think I should have to care about the lines being parallel to some arbitrary coordinate axes. In particular, in order to solve these in computer code, I'd like to avoid introducing any unnecessary special cases which make the code ugly.

How can I solve this problem without introducing any such arbitrary special cases?

I have one solution, in which I multiply the vectors defining the lines by a matrix which transforms $\vec{m}$ to $(0, 0)$ and $\vec{m} + \vec{k}$ to $(1, 0)$ (or, well, actually their homogeneous variants with a $1$-element appended to each, accustomed as I am to OpenGL), as such:

$$ A = \begin{bmatrix} lk_x & lk_y & -l(m_xk_x + m_yk_y) \\ -lk_y & lk_x & l(m_xk_y - m_yk_x) \\ 0 & 0 & 1 \\ \end{bmatrix} $$ where $$ l = \frac{1}{|k|^2} $$

When I transform $\vec{n}$ and $\vec{j}$ with the same matrix and then solve the component-wise equations, I only get the special case where $(Aj)_y = 0$, which, of course, corresponds to the intrinsic special case of the lines being parallel.
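In code, my transform-and-solve step comes out to something like this minimal Python sketch (the function name and the `eps` tolerance for the parallel test are just placeholders; I expand the matrix entries inline rather than building the full $3 \times 3$ matrix):

```python
def intersect_via_transform(m, k, n, j, eps=1e-12):
    """Map the first line onto the x-axis with the matrix A, then solve.

    m, k, n, j are (x, y) tuples; returns (s, t), or None if the lines
    are parallel, i.e. (Aj)_y is (nearly) zero.
    """
    l = 1.0 / (k[0]**2 + k[1]**2)         # l = 1/|k|^2
    dx, dy = n[0] - m[0], n[1] - m[1]     # n - m
    npx = l * (k[0]*dx + k[1]*dy)         # (An)_x
    npy = l * (k[0]*dy - k[1]*dx)         # (An)_y
    jpx = l * (k[0]*j[0] + k[1]*j[1])     # (Aj)_x
    jpy = l * (k[0]*j[1] - k[1]*j[0])     # (Aj)_y
    if abs(jpy) < eps:                    # the intrinsic special case
        return None
    t = -npy / jpy                        # y-component must vanish
    s = npx + t * jpx                     # x-coordinate on the first line
    return s, t
```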

This is nice and works and all, but I feel it is both a bit inelegant and suboptimal performance-wise when implemented on a computer. What I really want to ask, then, is whether there is a simpler solution to this problem than mine.

Dolda2000

2 Answers


If you are used to homogeneous coordinates, then the equation of a line can be read off the cross product of the points defining it, as I explained in more detail here. Conversely, the point of intersection can be obtained from the cross product of the two lines. So you'd compute

$$ \left(\begin{pmatrix}m_x\\m_y\\1\end{pmatrix}\times \begin{pmatrix}k_x\\k_y\\0\end{pmatrix}\right)\times \left(\begin{pmatrix}n_x\\n_y\\1\end{pmatrix}\times \begin{pmatrix}j_x\\j_y\\0\end{pmatrix}\right) $$

Dehomogenize (if you need to) and you have the point of intersection. In the case of parallel lines, the last coordinate of the resulting vector will be zero, indicating a point at infinity. And if the two lines are identical, then you obtain the null vector.
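In code this might look like the following minimal Python sketch (the function name is mine, and the `eps` tolerance stands in for whatever parallelism test your application needs in place of an exact zero):

```python
def intersect_homogeneous(m, k, n, j, eps=1e-12):
    """Intersect the lines m + s*k and n + t*j via homogeneous coordinates.

    Each line is the cross product of two homogeneous points on it; the
    intersection is the cross product of the two lines.  Returns the
    (x, y) point, or None for a point at infinity (parallel lines).
    """
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    line1 = cross((m[0], m[1], 1.0), (k[0], k[1], 0.0))  # line through m along k
    line2 = cross((n[0], n[1], 1.0), (j[0], j[1], 0.0))  # line through n along j
    px, py, pw = cross(line1, line2)
    if abs(pw) < eps:                # point at infinity: parallel lines
        return None
    return px / pw, py / pw          # dehomogenize
```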

If you really need the parameters $s$ and $t$ themselves, and not just the point of intersection, then you could use the dehomogenized point of intersection, call that $\vec p$ and compute something like

$$ s=\frac{\langle\vec p-\vec m,\vec k\rangle}{\lVert\vec k\rVert^2} =\frac{(p_x-m_x)k_x + (p_y-m_y)k_y}{k_x^2+k_y^2} \qquad t=\frac{\langle\vec p-\vec n,\vec j\rangle}{\lVert\vec j\rVert^2} $$
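A direct transcription of these formulas, again as a sketch with a helper name of my own:

```python
def parameter_along(p, a, d):
    """Parameter u such that a + u*d is the projection of p onto the line."""
    return ((p[0] - a[0])*d[0] + (p[1] - a[1])*d[1]) / (d[0]**2 + d[1]**2)

# With p the dehomogenized intersection point from above:
#   s = parameter_along(p, m, k)
#   t = parameter_along(p, n, j)
```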

A completely different approach to solving your problem would be using Cramer's rule:

\begin{align*} s\,k_x - t\,j_x &= n_x-m_x \\ s\,k_y - t\,j_y &= n_y-m_y \end{align*}

\begin{align*} s &= \frac {\begin{vmatrix}n_x-m_x&-j_x\\n_y-m_y & -j_y\end{vmatrix}} {\begin{vmatrix}k_x & -j_x\\k_y&-j_y\end{vmatrix}} & t &= \frac {\begin{vmatrix}k_x&n_x-m_x\\k_y&n_y-m_y\end{vmatrix}} {\begin{vmatrix}k_x & -j_x\\k_y&-j_y\end{vmatrix}} \end{align*}

Again you will only face a division by zero if your lines are parallel.
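A minimal sketch of the Cramer's-rule computation (the function name and `eps` tolerance are my own choices):

```python
def intersect_cramer(m, k, n, j, eps=1e-12):
    """Solve  s*k - t*j = n - m  for (s, t) by Cramer's rule.

    Returns None when the determinant vanishes, i.e. the lines are parallel.
    """
    bx, by = n[0] - m[0], n[1] - m[1]   # right-hand side n - m
    det = j[0]*k[1] - j[1]*k[0]         # det of [[k_x, -j_x], [k_y, -j_y]]
    if abs(det) < eps:
        return None
    s = (j[0]*by - j[1]*bx) / det       # numerator determinant for s
    t = (k[0]*by - k[1]*bx) / det       # numerator determinant for t
    return s, t
```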

MvG

How about this:

The two lines $\vec{m} + s\vec{k}$ and $\vec{n} + t\vec{j}$ intersect at some point if and only if there is an ordered pair $(t, s)$ such that

$\vec{m} + s\vec{k} = \vec{n} + t\vec{j}; \tag{1}$

writing (1) as

$\vec m - \vec n = t \vec j - s \vec k, \tag{2}$

we see that it may be cast as the matrix-vector equation

$\vec m - \vec n = [\vec j \; -\vec k] \begin{pmatrix} t \\ s \end{pmatrix}, \tag{3}$

where $[\vec j \; -\vec k]$ is the $2 \times 2$ matrix whose columns are $\vec j$ and $-\vec k$, that is

$[\vec j \; -\vec k] = \begin{bmatrix} j_x & -k_x \\ j_y & -k_y \end{bmatrix}. \tag{4}$

To solve (3), we merely need to invert the matrix (4); but that is easy to do by virtue of its small size. We have

$\det [\vec j \; -\vec k] = j_y k_x - j_x k_y, \tag{5}$

so

$[\vec j \; -\vec k]^{-1} = \dfrac{1}{j_y k_x - j_x k_y} \begin{bmatrix} -k_y & k_x \\ -j_y & j_x \end{bmatrix}, \tag{6}$

which is well-defined as long as $\det [\vec j \; -\vec k] \ne 0$. If this is the case, then from (3) we have

$\begin{pmatrix} t \\ s \end{pmatrix} = [\vec j \; -\vec k]^{-1}(\vec m - \vec n) \tag{7}$

determining $s$ and $t$ uniquely.

In the event that $\det [\vec j \; -\vec k] = 0$, the columns of $[\vec j \; -\vec k]$, namely $\vec j$ and $-\vec k$, are linearly dependent, i.e., collinear. Suppose $\vec m - \vec n$ is also collinear with $\vec j$ and $\vec k$, and that $\vec j \ne 0$; then we have $\vec k = \alpha \vec j$, whence $t\vec j - s \vec k = t\vec j - s\alpha \vec j = \vec m - \vec n$, or $(t - \alpha s) \vec j = \vec m - \vec n$. Since there is a unique $\beta$ with $\vec m - \vec n = \beta \vec j$, we see that $t - \alpha s = \beta$, i.e. $t = \beta + \alpha s$; there are infinitely many solutions, and the lines in fact coincide in this case. A similar argument applies when $\vec k \ne 0$, with the roles of $\vec j$ and $\vec k$ exchanged. If $\vec m - \vec n$ is not collinear with $\vec j$ or $\vec k$, then there is no solution, and the lines are parallel but distinct.

The process described mathematically above may of course be coded up and executed. While it does require branching on the condition $\det [\vec j \; -\vec k] \ne 0$, this is probably a better place to branch than on any specific component(s) of $\vec j$ or $\vec k$, since, as we have seen, the subsequent switching between special cases is well-organized by the geometrical considerations of collinearity etc. Finally, the linear dependence of $\vec m - \vec n$ on $\vec j$ or $\vec k$ can be determined by taking $\det [(\vec m - \vec n) \; \vec j]$ etc. The computations involved are, overall, not large or complex; a decent CPU should be able to manipulate $2 \times 2$ matrices hella fast!
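A minimal Python sketch of that branching, with my own names, return convention, and `eps` tolerance, and assuming $\vec j \ne 0$ (swap the roles of $\vec j$ and $\vec k$ otherwise):

```python
def intersect(m, k, n, j, eps=1e-12):
    """Solve  t*j - s*k = m - n  by inverting the 2x2 matrix [j  -k].

    Assumes j is not the zero vector.  Returns ('point', t, s),
    ('coincident',) or ('parallel',).
    """
    dx, dy = m[0] - n[0], m[1] - n[1]   # m - n
    det = j[1]*k[0] - j[0]*k[1]         # det [j  -k], as in (5)
    if abs(det) > eps:
        # Apply the explicit inverse (6) to m - n, as in (7).
        t = (-k[1]*dx + k[0]*dy) / det
        s = (-j[1]*dx + j[0]*dy) / det
        return ('point', t, s)
    # j and k are collinear; the lines coincide iff m - n is collinear
    # with j as well, which we test with det [(m - n)  j].
    if abs(dx*j[1] - dy*j[0]) < eps:
        return ('coincident',)
    return ('parallel',)
```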

We haven't altogether eliminated special cases, but they seem to be under control!

Hope this helps. Cheerio,

and as always,

Fiat Lux!!!

Robert Lewis
  • Thanks! This solution is probably about as simple as they come. :) – Dolda2000 Mar 18 '14 at 06:05
  • @Dolda2000: You, sir, are most welcome! And yeah, it is pretty simple, ain't it? You can also thank my time in computer graphics research; we were always doing stuff like this! And thanks for the "acceptance"! Regards, RKL. – Robert Lewis Mar 18 '14 at 06:31