But WHERE do the Taylor Series and Lagrange Error Bound even come from?!

by Justin Skycak on

An intuitive derivation.

See below for the video version of this post:



If you’ve taken calculus, then you’re probably familiar with the idea that lots of functions can be written as infinite polynomials, called Taylor series. For example, the Taylor series of the exponential function, $e^x$, is given by

$\begin{align*} e^x = 1 + x + \frac{1}{2}x^2 + \frac{1}{3!}x^3 + \frac{1}{4!} x^4 + \cdots \end{align*}$
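
If you want to see this convergence concretely, here is a minimal sketch (my own addition, not from the original post; the helper name `exp_series` is just something I made up) that sums the first several terms of the series and compares them against `math.exp`:

```python
# Partial sums of the Taylor series for e^x centered at 0: sum of x^k / k!.
import math

def exp_series(x, num_terms):
    """Sum the first num_terms terms of the series 1 + x + x^2/2! + x^3/3! + ..."""
    return sum(x**k / math.factorial(k) for k in range(num_terms))

x = 1.0
for num_terms in (2, 4, 8, 12):
    approx = exp_series(x, num_terms)
    print(f"{num_terms:2d} terms: {approx:.10f}   error: {abs(approx - math.exp(x)):.2e}")
```

With a dozen terms, the partial sum is already within a few billionths of $e$.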


In general, there’s a formula for the Taylor series of a function $f(x)$ centered around $x=a$:

$\begin{align*} f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots \end{align*}$


These are infinite Taylor series. But often we want to approximate a function using only the terms of the Taylor series up through degree $n$.

$\begin{align*} f(x) \approx f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n \end{align*}$
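
If you'd like to experiment with this formula, here is a small sketch (my own addition, assuming `sympy` is available; the helper name `taylor_polynomial` is mine) that builds the degree-$n$ Taylor polynomial exactly as written above, by differentiating $n$ times at the center and assembling the terms:

```python
# Degree-n Taylor polynomial of a sympy expression f(x), centered at x = a:
# the sum over k of f^(k)(a) / k! * (x - a)^k.
import sympy as sp

x = sp.symbols('x')

def taylor_polynomial(f, n, a=0):
    return sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
               for k in range(n + 1))

print(sp.expand(taylor_polynomial(sp.exp(x), 4)))   # x**4/24 + x**3/6 + x**2/2 + x + 1
print(sp.expand(taylor_polynomial(sp.sin(x), 5)))   # x**5/120 - x**3/6 + x
```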


This approximation will generally be off by some amount, and there is a bound on the size of that error, called the Lagrange error bound.

$\begin{align*} \text{error } &\leq \frac{M}{(n+1)!} |x-a|^{n+1} \\ &\text{where } M \text{ is the maximum value} \\ &\text{of } |f^{(n+1)}| \text{ on } [a,x] \end{align*}$
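
To make the bound concrete, here is a worked example that I'm adding (it's not in the original post): approximate $e^1$ using the third-degree Taylor polynomial of $e^x$ centered at $a=0$. Every derivative of $e^x$ is $e^x$, whose maximum on $[0,1]$ is $M = e$, so

$\begin{align*} \text{error } \leq \frac{e}{4!} |1-0|^4 = \frac{e}{24} \approx 0.113, \end{align*}$

while the actual error is $e - \left(1 + 1 + \frac{1}{2} + \frac{1}{6}\right) \approx 0.052$, comfortably within the bound.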


Now, you may have done a few numerical examples and seen that Taylor series can indeed be pretty good approximations. You may have even proven the Taylor series formula. I’ve done that too, but I’ve never found that entirely satisfying, because how could somebody even come up with this series in the first place? You don’t just magically think of a series that has all these cool properties without some prior intuition. You need some sort of idea in your mind that guides your thinking on the way to coming up with the Taylor series. And that’s what we’re going to learn about in this post.

But first of all, let’s get ourselves in the right frame of mind. Suppose you’re a mathematician back in the 17th century, and there’s this cool new toy called “calculus” that people are playing with. You can differentiate, and you can integrate, and there’s this cool theorem that links both of those ideas together, called the Fundamental Theorem of Calculus:

$\begin{align*} \int_a^b f'(x) \, dx = f(b) - f(a) \end{align*}$


Being a mathematician, you want to discover something new that nobody has seen before – and one thing that often leads people to new discoveries is thinking about something in a new way. So, let’s think about this Fundamental Theorem of Calculus in a new way.

If we replace the $b$ with an $x$, and solve for $f(x)$, then it’s kind of like we’re saying that $f(x)$ can be approximated by $f(a)$, and there’s some sort of error in that approximation.

$\begin{align*} \int_a^x f'(x) \, dx &= f(x) - f(a) \\ f(x) &= \underbrace{f(a)}_{\text{approximation}} + \underbrace{\int_a^x f'(x) \, dx}_{\text{error}} \end{align*}$
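
As a quick worked instance (my own example, using the same loose notation as the rest of this post), take $f(x) = e^x$ and $a = 0$. The rearranged statement reads

$\begin{align*} e^x = \underbrace{1}_{\text{approximation}} + \underbrace{\int_0^x e^x \, dx}_{\text{error}} \end{align*}$

where the constant approximation is $e^0 = 1$, and the error term is exactly how far $e^x$ has drifted away from $1$ by the time it reaches $x$.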


It makes sense that the error should be related to the derivative. If we are approximating $f(x)$ by a constant, $f(a)$, then we’re saying $f(x)$ is not changing at all. But often, that’s not the case, and the amount by which $f(x)$ is changing at any point is given by the derivative $f'(x)$. It’s like we’re saying $f(x)$ can be approximated by a constant, and then the error given by that approximation comes from integrating the derivative, which represents the extent to which $f(x)$ is not constant.

Now, what if we want to get a better approximation for $f(x)$? Maybe instead of just a constant, we want to approximate $f(x)$ as a line. If we do that, then the error should represent the degree to which $f(x)$ is nonlinear, which would be represented by the second derivative, $f''(x)$.

We write down something similar to the Fundamental Theorem of Calculus, but this time, instead of integrating the first derivative, we double-integrate the second derivative.

$\begin{align*} \int_a^x \underbrace{\int_a^x f''(x) \, dx}_{\text{inner integral}} \, dx \end{align*}$


This looks a little crazy, but the inner integral is very similar to what we had before. The only difference is that we have an extra prime on the $f$, so the result will get an extra prime as well.

$\begin{align*} \int_a^x f'(x) \, dx &= f(x) - f(a) \\ \int_a^x f''(x) \, dx &= f'(x) - f'(a) \end{align*}$


We substitute this result for the inner integral, and split up the outer integral over the subtraction.

$\begin{align*} \int_a^x \int_a^x f''(x) \, dx \, dx &= \int_a^x [ f'(x) - f'(a) ] \, dx \\ &= \int_a^x f'(x) \, dx - \int_a^x f'(a) \, dx \end{align*}$


The left integral is exactly like we had before, and for the right integral, remember that $f'(a)$ is just a constant, so we can factor it out.

$\begin{align*} \int_a^x \int_a^x f''(x) \, dx \, dx &= f(x)-f(a) - f'(a) \int_a^x 1 \, dx \\ &= f(x)-f(a) - f'(a)(x-a) \end{align*}$


We can solve for $f(x)$ in this equation:

$\begin{align*} \int_a^x \int_a^x f''(x) \, dx \, dx &= f(x)-f(a) - f'(a)(x-a) \\ f(x) &= f(a) + f'(a)(x-a) + \int_a^x \int_a^x f''(x) \, dx \, dx \end{align*}$


This is the result that we were hoping for! There is a linear approximation, and the error in that approximation is represented by the double integral term. The double integral term contains $f''(x)$, which represents the degree to which $f(x)$ is nonlinear.

$\begin{align*} f(x) &= \underbrace{f(a) + f'(a)(x-a)}_{\text{approximation}} + \underbrace{\int_a^x \int_a^x f''(x) \, dx \, dx}_{\text{error}} \end{align*}$
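
Continuing the $e^x$ example from before (again, my own worked instance): with $a = 0$, the linear approximation is $1 + x$, and the double-integral error term works out to

$\begin{align*} \int_0^x \int_0^x e^x \, dx \, dx = \int_0^x \left[ e^x - 1 \right] dx = e^x - 1 - x, \end{align*}$

which is precisely the gap between $e^x$ and its tangent line at $0$.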


If you were to continue this process over and over again, taking the $(n+1)^\text{st}$ derivative and integrating it $n+1$ times, you’d end up with an $n^\text{th}$ degree polynomial approximation for $f(x)$, plus an error term consisting of $n+1$ nested integrals of the $(n+1)^\text{st}$ derivative. The approximation is exactly the $n^\text{th}$ degree Taylor polynomial,

$\begin{align*} f(x) = \underbrace{f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n}_{n^\text{th} \text{ degree Taylor polynomial}} + \underbrace{\int_a^x \cdots \int_a^x}_{n+1 \text{ integrals}} f^{(n+1)}(x) \, dx \cdots dx \end{align*}$


This is the intuition behind where the Taylor polynomial comes from. But what about the Lagrange error bound? The expression

$\begin{align*} \underbrace{\int_a^x \cdots \int_a^x}_{n+1 \text{ integrals}} f^{(n+1)}(x) \, dx \cdots dx \end{align*}$


doesn’t look too familiar at this point, but it’s actually only one step away from the Lagrange error bound.

Let’s write our error term again and see if we can place a bound on its magnitude. The largest this integral can possibly be, in magnitude, is what we get by replacing the $(n+1)^\text{st}$ derivative with the maximum magnitude it attains on the interval. If we define

$\begin{align*} M &= \text{maximum value} \\ &\text{of } |f^{(n+1)}| \text{ on } [a,x] \end{align*}$


then we can place a bound on the magnitude of the error term:

$\begin{align*} \left| \int_a^x \cdots \int_a^x f^{(n+1)}(x) \, dx \cdots dx \right| &\leq \left| \int_a^x \cdots \int_a^x M \, dx \cdots dx \right| \\ &= M \left| \int_a^x \cdots \int_a^x 1 \, dx \cdots dx \right| \end{align*}$


The innermost integral is simply $x-a$.

$\begin{align*} \int_a^x 1 \, dx &= x-a \end{align*}$


If you integrate that again, you get $\frac{(x-a)^2}{2}$,

$\begin{align*} \int_a^x [x-a] \, dx = \frac{(x-a)^2}{2} \end{align*}$


and if you integrate that again, you get $\frac{(x-a)^3}{3 \cdot 2}$, and $3 \cdot 2$ is the same as $3!$.

$\begin{align*} \int_a^x \frac{(x-a)^2}{2} \, dx = \frac{(x-a)^3}{3!} \end{align*}$


If you keep on integrating all of those $n+1$ integrals, you get a result of $\frac{(x-a)^{n+1}}{(n+1)!}$.
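
The pattern holds because each additional integration raises the power by one and divides by the new power:

$\begin{align*} \int_a^x \frac{(x-a)^k}{k!} \, dx = \frac{(x-a)^{k+1}}{(k+1)!} \end{align*}$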

Now we just write in the rest of our expression with the $M$ and the absolute value.

$\begin{align*} \left| \int_a^x \cdots \int_a^x f^{(n+1)}(x) \, dx \cdots dx \right| &\leq M \left|\frac{(x-a)^{n+1}}{(n+1)!} \right| \end{align*}$


We know that $M$ is nonnegative based on how we defined it, and $(n+1)!$ is positive, so we can bring both of those out of the absolute value; it’s only the $x-a$ that could potentially be negative. And there we have it: the Lagrange error bound!

$\begin{align*} \left| \int_a^x \cdots \int_a^x f^{(n+1)}(x) \, dx \cdots dx \right| &\leq \underbrace{M \frac{|x-a|^{n+1}}{(n+1)!}}_{\text{Lagrange error bound}} \end{align*}$
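
As a final sanity check, here are a few lines of Python (my own sketch, not part of the post; the helper name `sin_taylor` is mine) verifying the bound for $f(x) = \sin(x)$ centered at $a = 0$, where every derivative is bounded by $1$, so $M = 1$:

```python
# Compare the actual error of the degree-n Taylor polynomial of sin(x) at a = 0
# against the Lagrange bound M * |x - a|^(n+1) / (n+1)! with M = 1.
import math

def sin_taylor(x, n):
    # Degree-n Taylor polynomial of sin(x) at 0: only odd powers survive.
    return sum((-1) ** (k // 2) * x**k / math.factorial(k)
               for k in range(n + 1) if k % 2 == 1)

x, n = 0.8, 5
actual_error = abs(math.sin(x) - sin_taylor(x, n))
bound = abs(x) ** (n + 1) / math.factorial(n + 1)
print(f"actual error: {actual_error:.2e}   Lagrange bound: {bound:.2e}")
# prints roughly: actual error: 4.12e-05   Lagrange bound: 3.64e-04
```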