
If I have an equation of the form $$e^{ax} + e^{bx} = c,$$ where $a$, $b$, and $c$ are constants, how can I simplify the equation to solve for $x$?

Taking the logarithm of both sides is tricky, since I know $\log(ab) = \log(a) + \log(b)$, but I don't know how to simplify $\log(a + b)$...

jamaicanworm
  • It's equivalent to $v^a+v^b=c$ in which $v=e^x$. Even for $a,b$ integer-valued I wouldn't expect any general formula for this. – anon Apr 09 '12 at 03:26
  • Are you simply trying to solve such an equation, or did it appear in a context? If you had a homework problem, for example, and got stuck on this, perhaps your mistake is elsewhere. If this is just a curiosity then I agree with anon's comment. – Patrick Da Silva Apr 09 '12 at 03:30
  • @PatrickDaSilva This arose from a computer science research problem, so there is no mistake :) – jamaicanworm Apr 09 '12 at 03:35

5 Answers


Write the equation as $z + r z^s = 1$ where $z = e^{ax}/c$, $r = c^{b/a-1}$, $s = b/a$ (divide both sides by $c$ and use $e^{bx} = (e^{ax})^{b/a}$). There is a series for a solution of this, which should converge for sufficiently small $r$:

$$ z = \sum_{k=0}^\infty \frac{(-1)^k a_k}{k!} r^k \ \text{where} \ a_k = \prod_{j=0}^{k-2} (ks-j)$$

(taking $a_0 = a_1 = 1$)
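If you want to experiment with this series numerically, here is a minimal Python sketch (the function name and the 40-term truncation are arbitrary choices, not part of the answer); it evaluates the partial sum and checks it against the equation $z + r z^s = 1$:

```python
import math

def z_series(r, s, terms=40):
    """Partial sum of z = sum_{k>=0} (-1)^k * a_k * r^k / k!,
    where a_0 = a_1 = 1 and a_k = prod_{j=0}^{k-2} (k*s - j)."""
    total = 0.0
    for k in range(terms):
        a_k = 1.0
        for j in range(k - 1):        # product is empty for k = 0, 1
            a_k *= k * s - j
        total += (-1) ** k * a_k * r ** k / math.factorial(k)
    return total

# Check against the equation z + r*z^s = 1 for a small r:
r, s = 0.1, 0.5
z = z_series(r, s)
print(z, z + r * z ** s)   # the second number should be very close to 1
```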

Robert Israel
  • Any lower bounds on the radius of convergence? Because if there isn't any, it's kind of "risky" to try this solution. Good job though; I expected numerical algorithms would generate solutions such as series solutions, but then again I didn't expect something so explicit! It's great. What surprises me, though, is that you have only "one root" of the equation, whereas when $s$ is an integer, you expect many complex roots. Any explanations on this? – Patrick Da Silva Apr 09 '12 at 20:36
  • @Patrick: If I used Stirling's formula correctly, the radius of convergence should be $s^s(1-s)^{1-s}$ assuming $0<s<1$. – robjohn Apr 09 '12 at 21:48
  • Bounds on the radius of convergence can be computed using the asymptotics of $a_k$. One that works for all $s$ is $$\left|\frac{a_k}{k!}\right| \ll \frac{(k(|s|+1))^{k-1}}{k^{k+1/2} e^{-k}}$$ so it converges for $|r| < 1/(e (|s|+1))$. This can be improved, e.g. for $0 < s < 1$. – Robert Israel Apr 09 '12 at 21:49
  • Yes, this is just one branch of the solution, the one that approaches $1$ as $r \to 0$. For example, with $s=2$ you have the two solutions $z_{\pm} = \frac{-1 \pm \sqrt{1+4r}}{2r}$; this is the "$+$" solution, and it is analytic in $r$ for $|r| < 1/4$ (note that the singularity at $0$ is removable: $\frac{-1+\sqrt{1+4r}}{2r} = \frac{2}{1+\sqrt{1+4r}}$), but has a branch point at $r=-1/4$, so the radius of convergence is $1/4$. – Robert Israel Apr 09 '12 at 22:01
  • @Patrick: seeing as $s$ is not limited to $0<s<1$, I get the radius of convergence for $s\ge1$ to be $(s-1)^{s-1}/s^s$. This is greater than, but limits to, $1/(es)$. – robjohn Apr 09 '12 at 22:21

Following up on Alex Becker's answer, you can turn your equation into one of the form $$ y^a + y^b = c, $$ and when $a$ and $b$ are distinct positive integers with one of them greater than or equal to $5$, we know by Galois theory (the Abel–Ruffini theorem) that there is no general solution to this polynomial equation in terms of traditional arithmetic (i.e. addition, subtraction, multiplication, division, and taking $n^{\text{th}}$ roots). I've tried to find a website that speaks about it, but a quick look over Google and Wikipedia gave me nothing; this is a very well-known result though. Therefore we expect no general solution to your equation, because it would imply very specific results for which we know there exists no general method.

Hope that helps,

EDIT: There wasn't enough space in the comment box to detail this.

If you want computer accuracy, you can use numerical methods. Find a root of $f(x) = c - e^{ax} - e^{bx}$ using, for instance, Newton's method (a sketch is given below). But analytically I don't have much hope. There is one thing you could do, though: using the first-order Taylor expansion of the exponential, i.e. the inequality $e^t \ge 1 + t$, $$ 0 = e^{ax} + e^{bx} - c \ge (1 + ax) + (1 + bx) - c = (2-c) + (a+b) x, $$ which gives you a rough upper bound on $x$ (assuming $a+b > 0$): $$ x \le \frac{c-2}{a+b}. $$ I have no idea how to get a lower bound, though. Note that this bound feels very crappy once you give it some thought: fix $a=b=1$, which means you're trying to solve $2e^x = c$, i.e. $e^x = c/2$ and $x = \log(c/2) < \frac{c-2}2$. Here's an idea of how crappy this bound is:

[Plot: the actual solution $x = \log(c/2)$ against the upper bound $(c-2)/2$, as functions of $c$.]

We see that for $c > 4$, it's already very crappy. Anyway.
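Here is a minimal Newton's-method sketch in Python for $f(x) = c - e^{ax} - e^{bx}$ (the function name, starting point, and tolerance are arbitrary choices; treat it as illustrative, not definitive):

```python
import math

def newton_root(a, b, c, x0=0.0, tol=1e-12, max_iter=100):
    """Newton's method on f(x) = c - e^(ax) - e^(bx).
    Assumes a, b > 0 and c > 0, so f is strictly decreasing
    and has exactly one real root."""
    x = x0
    for _ in range(max_iter):
        f = c - math.exp(a * x) - math.exp(b * x)
        fp = -a * math.exp(a * x) - b * math.exp(b * x)  # f'(x) < 0 here
        step = f / fp
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Sanity check with a = b = 1: 2 e^x = c, so x = log(c/2).
print(newton_root(1.0, 1.0, 6.0))   # ~ log(3) = 1.0986...
```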

  • Thanks. As I commented on Alex's answer, is there a known way to estimate $x$, and/or a way to find a lower bound for $x$? – jamaicanworm Apr 09 '12 at 03:41
  • @jamaicanworm I'm commenting here because Newton's method is probably the best and easiest way to go in most cases. However, if you are specifically looking for a rigorous lower bound you may want to look into real semialgebraic methods. Warning: they are very nasty. – Alex Becker Apr 09 '12 at 03:55

As others have pointed out, there isn't a formula to solve this type of equation. However, I have developed my own numeric algorithm for solving any equation of the form $$f(x) = A_1 e^{B_1x} + A_2 e^{B_2x} + \ldots + A_N e^{B_Nx} = 0$$ and finding all real values of $x$. Unlike some of the suggestions, such as using Newton's method, this will always converge and never miss any roots.

If one of the $B$ terms is set to $0$ then this will contain a constant, like in the question. I'm not a mathematician, so I don't know if such a method has already been described. If not, I will christen it "Eng's method" after yours truly.

The basic method is as follows:

  1. Sort the terms by ascending value of exponent: $B_1 < B_2 < \ldots < B_N$

  2. Find a range where there could possibly be a root. To do so, consider that as $x$ increases, the $N$th term eventually grows faster than all other terms, so we look for a value of $x$ beyond which the $N$th term outweighs all terms of opposite sign: $$ |A_N e^{B_Nx}| > \sum_{i \,:\, \operatorname{sign}(A_i) \neq \operatorname{sign}(A_N)} |A_ie^{B_ix}|, $$ since beyond that point $f$ can no longer change sign. Concretely: count the terms with sign opposite to $A_N$ and call this number $P$. For each such term, solve $$|A_N e^{B_Nx_i}| = P\cdot|A_i e^{B_ix_i}|,$$ which gives $$x_i = \frac{\ln(P\cdot|A_i/A_N|)}{B_N - B_i}, \qquad x_{\max} = \max_i(x_i).$$ So basically, for $x > x_{\max}$ the fastest-growing term is guaranteed to be at least $P$ times larger in magnitude than each of the $P$ opposite-sign terms, hence larger than their sum. So there will definitely not be a root for $x > x_{\max}$.

  3. Using the same sort of reasoning, look at the slowest-growing term $A_1e^{B_1x}$ and find an $x$ below which this term is $Q$ times larger in magnitude than every other term with opposite sign; the minimum such value is $x_{\min}$. Since the slowest-growing term is also the slowest to shrink as we move towards $x = -\infty$, we can confidently say that for $x < x_{\min}$ the sign of the first term dominates and $f(x)$ never crosses $0$.

  4. Take repeated derivatives ($k$ of them) of $f(x)$: $$\frac{d^{k}}{d x^k}f(x) = B_1^{k}A_1e^{B_1x} + B_2^{k}A_2e^{B_2x} + \ldots + B_N^{k}A_Ne^{B_Nx}.$$ Note that the more derivatives we take, the faster the coefficients grow for the terms with larger $|B_i|$. For a large enough value of $k$, one of two things will happen: $$|B_N^{k}A_Ne^{B_Nx_{\min}}| > \sum_{i=1}^{N-1}|B_i^{k}A_ie^{B_ix_{\min}}|$$ or $$|B_1^{k}A_1e^{B_1x_{\max}}| > \sum_{i=2}^{N}|B_i^{k}A_ie^{B_ix_{\max}}|.$$ Basically, what ends up happening is that either the fastest-growing exponential term (the $N$th term, since we ordered them this way) starts out dominant over the whole range of interest, or the slowest-shrinking term ends up dominant over the whole range of interest. (The latter can happen if we have something like $-40e^{-0.73x}+5e^{-0.67x}-0.1e^{0.125x}-0.2=0$.) In either case, we can keep increasing $k$ until one of these two conditions is met.

  5. At this point we know that for the $k$th derivative, a single term is dominant over the entire range where there could possibly be a root. What this means is that the sign of the $k$th derivative is either always positive or always negative for $x_{\min} < x < x_{\max}$. Therefore the $(k-1)$th derivative must be monotonic over this range. This is great news, because for a monotonic function we can numerically find roots using algorithms like bisection or Brent's method. In fact, all we have to do is evaluate the $(k-1)$th derivative at $x=x_{\min}$ and at $x=x_{\max}$. If these endpoints have the same sign, then we know that the $(k-1)$th derivative keeps that sign over the entire range. However, if they have opposite signs, then we can use bisection to find the root of the $(k-1)$th derivative with guaranteed convergence to any desired degree of accuracy. Finding $x_{d\text{-root}}$ where $\frac{d^{k-1}}{d x^{k-1}}f(x_{d\text{-root}}) = 0$ leaves us with two ranges, $x_{\min} < x < x_{d\text{-root}}$ and $x_{d\text{-root}} < x < x_{\max}$.

  6. Now we either have one range where the $(k-1)$th derivative has a consistent sign, or two ranges where the $(k-1)$th derivative has a consistent sign on each (positive on one and negative on the other). Either way, on these one or two ranges we can use bisection (or another bracketed root-finding method) to find if and where $\frac{d^{k-2}}{d x^{k-2}}f(x) = 0$.

  7. We continue in this manner, splitting a range into smaller ranges whenever we find a zero of the current derivative, and then proceeding with bracketed root finding one derivative level down. Eventually we back out of all the layers of derivatives and are doing bracketed root finding on the original function itself.

Note: There are many ways this could be further optimized. I have coded the algorithm and posted some proof-of-concept code on GitHub as solve-exponentials.
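To make the recursion concrete, here is a compact Python sketch of the scheme described above (a condensation for illustration, not the GitHub code; the function names, the iteration cap, and the example bracket are all arbitrary choices):

```python
import math

# f(x) = sum_i A[i] * exp(B[i] * x), with the B[i] distinct and sorted ascending.

def deriv(A, B, k, x):
    """k-th derivative of f at x: sum_i A_i * B_i^k * exp(B_i * x)."""
    return sum(a * b ** k * math.exp(b * x) for a, b in zip(A, B))

def bisect(g, lo, hi, tol=1e-12):
    """Plain bisection; assumes g is monotonic on [lo, hi] with a sign change."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == (glo > 0):
            lo, glo = mid, g(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dominant_level(A, B, lo, hi, k_max=500):
    """Smallest k such that one term of the k-th derivative outweighs the sum
    of the magnitudes of all the others over the whole interval [lo, hi]."""
    n = len(A)
    for k in range(k_max):
        # If the fastest-growing term dominates at lo, it dominates on all of
        # [lo, hi]; if the slowest-shrinking term dominates at hi, likewise.
        mags_lo = [abs(A[i] * B[i] ** k * math.exp(B[i] * lo)) for i in range(n)]
        mags_hi = [abs(A[i] * B[i] ** k * math.exp(B[i] * hi)) for i in range(n)]
        if mags_lo[-1] > sum(mags_lo[:-1]) or mags_hi[0] > sum(mags_hi[1:]):
            return k
    raise RuntimeError("no dominant term found; widen k_max")

def find_roots(A, B, lo, hi):
    """All roots of f on [lo, hi]; the bracket should come from steps 2-3."""
    k = dominant_level(A, B, lo, hi)
    pts = [lo, hi]   # d^k f has a constant sign between consecutive points
    while k > 0:
        k -= 1       # d^k f is now monotonic on each piece of the partition
        new_pts = [lo]
        for left, right in zip(pts, pts[1:]):
            g = lambda x: deriv(A, B, k, x)
            if (g(left) > 0) != (g(right) > 0):
                new_pts.append(bisect(g, left, right))
        new_pts.append(hi)
        pts = new_pts
    return pts[1:-1]  # the final interior points are the roots of f itself

# Example: e^(2x) + e^x - 6 = 0 has the single real root x = ln 2.
print(find_roots([-6.0, 1.0, 1.0], [0.0, 1.0, 2.0], -10.0, 10.0))
```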

TL;DR:

Find a range where roots could possibly occur. Take a bunch of derivatives (say $k$ of them) until it's obvious that one term is dominant over the range where roots can occur (with exponentials this always happens eventually). This tells us that the $(k-1)$th derivative is a monotonic function, so we can find its root (if it has one) via bisection. Now we have one or two regions on which we know the $(k-2)$th derivative is monotonic. Apply this logic recursively until we have regions of the original curve where we can find roots using bisection. This way we won't miss any roots.

bruceceng
  • Step 2 loses me: why is the ratio of the two absolute values $A_N e^{B_N x}$ and $A_i e^{B_i x}$ equal to $P$? Also, the equation is not true for every $x$ but only for certain values of $x$, so taking derivatives on both sides and comparing directly makes no sense. For example, taking derivatives of $x^2 = 2$ gives $x=0$. – wilsonw Apr 27 '24 at 13:46
  • Oh, I get the $P$ now: you are setting up the variable $x_i$. The derivative thing still makes no sense to me, though. – wilsonw Apr 27 '24 at 13:50
  • @wilsonw The derivatives only work because the terms are exponentials: the derivative of $A e^{bx}$ is $Ab\,e^{bx}$. So by taking repeated derivatives, the term with the largest $|b|$ will eventually also have the largest pre-exponential factor, and then this term will dominate all the other terms (it starts largest and grows fastest). At that point you can be sure there are no sign changes in that $k$th derivative, which means the $(k-1)$th derivative is increasing everywhere (or decreasing everywhere), so we can see if it has any roots just by evaluating the endpoints. – bruceceng Apr 29 '24 at 02:50

If $a/b$ is 2 or 1/2, then it will reduce to a quadratic. This can be useful if you have an experiment and you can control the time at which measurements are taken to be at $t_0$, $t_0+d$, and $t_0+2d$, and the process proceeds exponentially (like Newton's law of cooling).
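For concreteness (a worked instance, not part of the original answer): if $a = 2b$, substitute $y = e^{bx}$ to get $y^2 + y = c$, so $$ y = \frac{-1 + \sqrt{1+4c}}{2} \quad\text{(taking the positive root, since $y = e^{bx} > 0$)}, \qquad x = \frac{1}{b}\ln y. $$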

marty cohen

Unfortunately, no elementary solution exists for general $a,b$. This is because solving $$e^{ax}+e^{bx}=c$$ is equivalent to solving $y^{a/b}+y=c$ where $y=e^{bx}$, and even in the case $a/b=5$ the solution is expressed in terms of Bring radicals.
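To illustrate the reduction numerically when $a/b$ happens to be a positive integer $n$ (a sketch under that assumption; `solve_integer_ratio` is a made-up name), one can find the admissible roots of $y^n + y - c$ and map them back through $x = \ln(y)/b$:

```python
import numpy as np

def solve_integer_ratio(n, b, c):
    """Solve e^(ax) + e^(bx) = c when a/b = n is a positive integer,
    via the polynomial y^n + y - c = 0 with y = e^(bx)."""
    coeffs = np.zeros(n + 1)
    coeffs[0] = 1           # y^n
    coeffs[-2] += 1         # + y
    coeffs[-1] = -c         # - c
    roots = np.roots(coeffs)
    # keep real positive roots, since y = e^(bx) > 0
    ys = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return [np.log(y) / b for y in ys]

print(solve_integer_ratio(2, 1, 6))   # e^(2x) + e^x = 6  ->  x = ln 2
```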

Alex Becker