Section Power Series
From the first semester of calculus, we know that the best linear (\(1^{st}\) degree polynomial) approximation to the function \(f(x) = e^x\) at the point \(P=(2,e^2)\) is the tangent line to the function at this point. Let's call that approximation \(L\) and note that \(L(2) = f(2)\) and \(L'(2) = f'(2)\text{.}\) That is, \(L\) and \(f\) intersect at \(P\text{,}\) and \(L\) and \(f\) share the same derivative at \(P\text{.}\) What is the best quadratic (\(2^{nd}\) degree polynomial) approximation? If the best first degree approximation agrees with the function at the point and in the first derivative, then the best second degree approximation should agree with the function at the point, in the first derivative, and in the second derivative.
Example 6.73.
Compute the best linear and quadratic approximations to \(f(x) = x^3-16x\) at \(x=1\text{.}\)
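One way to carry out this computation (a sketch of the idea above, not the only approach): the best linear approximation matches \(f\) and \(f'\) at \(x=1\text{,}\) and the best quadratic approximation also matches \(f''\) there. Since \(f'(x) = 3x^2 - 16\) and \(f''(x) = 6x\text{,}\) we have \(f(1) = -15\text{,}\) \(f'(1) = -13\text{,}\) and \(f''(1) = 6\text{,}\) so
\begin{equation*}
L(x) = -15 - 13(x-1) \quad \text{and} \quad Q(x) = -15 - 13(x-1) + 3(x-1)^2\text{.}
\end{equation*}
A quick check shows \(Q(1) = f(1)\text{,}\) \(Q'(1) = f'(1)\text{,}\) and \(Q''(1) = f''(1)\text{.}\)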
Problem 6.74.
Let \(f(x) = e^x.\) Find the best linear approximation, \(L(x) = mx+ b,\) to \(f\) at \((2,e^2).\) Find the best quadratic approximation, \(Q(x) = ax^2 + bx +c,\) to \(f\) at \((2,e^2).\) Graph all three functions on the same pair of coordinate axes.
Problem 6.75.
Let \(f(x) = \cos(x).\) Find the best linear (L, \(1^{st}\) degree), quadratic (Q, \(2^{nd}\) degree), and quartic (C, \(4^{th}\) degree) approximations to \(f\) at \((\pi, \cos(\pi)).\) Sketch the graph of \(f\) and all three approximations on the same pair of coordinate axes. Compute and compare \(\dsp f(\frac{3\pi}{2})\text{,}\) \(\dsp L(\frac{3\pi}{2})\text{,}\) \(\dsp Q(\frac{3\pi}{2})\text{,}\) and \(\dsp C(\frac{3\pi}{2}).\)
From our work on geometric series, we know that
\begin{equation*}
\sum_{n=0}^{\infty} t^n = \frac{1}{1-t}
\end{equation*}
for \(|t| \lt 1\text{.}\) Therefore the function \(\dsp g(t) = \frac{1}{1-t} \mbox{ where } |t| \lt 1\) can be written as a series,
\begin{equation*}
g(t) = \sum_{n=0}^{\infty} t^n = 1 + t + t^2 + t^3 + \cdots\text{.}
\end{equation*}
Replacing \(t\) by \(-x^2\text{,}\) we have
\begin{equation*}
\frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-x^2)^n = \sum_{n=0}^{\infty} (-1)^n x^{2n} = 1 - x^2 + x^4 - x^6 + \cdots \quad \text{for } |x| \lt 1\text{.}
\end{equation*}
Based on these two observations, we see that we can write at least two rational functions as infinite series (or infinite polynomials). And we can write every polynomial as an infinite series, since any polynomial \(p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_Nx^N\) may be written as
\begin{equation*}
p(x) = \sum_{i=0}^{\infty} a_i x^i
\end{equation*}
where \(a_i = 0\) for \(i > N\text{.}\) Our goal is a systematic way to write any differentiable function (like \(\sin\) or \(\cos\) or \(\ln\)) as an infinite series.
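For example, the polynomial \(p(x) = 5x^2 - 3\) is the series \(\dsp \sum_{i=0}^{\infty} a_i x^i\) with \(a_0 = -3\text{,}\) \(a_1 = 0\text{,}\) \(a_2 = 5\text{,}\) and \(a_i = 0\) for all \(i > 2\text{.}\)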
Definition 6.76.
\(0^0 = 1\)
Notation. To keep from confusing the powers of \(f\) with the derivatives of \(f\text{,}\) we use \(f^{(n)}\) to represent \(f\) if \(n=0\) and the \(n^{th}\) derivative of \(f\) if \(n \ge 1.\) Therefore when \(n \geq 1\text{,}\) \(f^{(n)}(c)\) means the \(n^{th}\) derivative of \(f\) evaluated at the number \(c\text{.}\)
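For example, if \(f(x) = x^3\text{,}\) then \(f^{(0)}(x) = x^3\text{,}\) \(f^{(1)}(x) = 3x^2\text{,}\) and \(f^{(2)}(x) = 6x\text{,}\) so \(f^{(2)}(2) = 12\text{;}\) this is not the same as the power \(\big(f(2)\big)^2 = 64\text{.}\)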
Problem 6.77.
Let \(\displaystyle{f(x) = \sum_{i=0}^{\infty} a_ix^i}.\) Write out \(S_6\text{,}\) the \(6^{th}\) partial sum of this series. Compute \(S_6'\) and \(S_6''\text{.}\) Compute \(S_6'(0)\) and \(S_6''(0).\)
Problem 6.78.
Let \(\displaystyle{f(x) = \sum_{i=0}^{\infty} a_ix^i}.\) Compute \(f'\) and \(f''.\) Compute \(f(0), f'(0), f''(0), \dots\text{?}\) If \(n\) is any positive integer, what is the \(n^{th}\) derivative at \(0\text{,}\) \(f^{(n)}(0)\text{?}\)
Definition 6.79.
If \(f\) is a function with \(N\) derivatives, then the \(N^{th}\) degree Taylor polynomial of \(f\) expanded at \(c\) is defined by
\begin{equation*}
T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(c)}{n!}(x-c)^n = f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \cdots + \frac{f^{(N)}(c)}{N!}(x-c)^N\text{.}
\end{equation*}
Example 6.80.
Compute the Taylor polynomials of degree 1, 2, 3, and 4 for \(f(x) = 1/x\) expanded at 1. Then find a general formula for the \(N^{th}\) degree Taylor polynomial of the same function expanded at 1.
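A sketch of one way to proceed: the derivatives of \(f(x) = 1/x\) are \(f'(x) = -x^{-2}\text{,}\) \(f''(x) = 2x^{-3}\text{,}\) \(f'''(x) = -6x^{-4}\text{,}\) and \(f^{(4)}(x) = 24x^{-5}\text{,}\) so \(f^{(n)}(1) = (-1)^n n!\) for \(n = 0, 1, 2, 3, 4\text{.}\) Dividing by \(n!\) in the definition gives
\begin{equation*}
T_1(x) = 1 - (x-1), \qquad T_2(x) = 1 - (x-1) + (x-1)^2, \qquad \ldots, \qquad T_4(x) = \sum_{n=0}^{4} (-1)^n (x-1)^n,
\end{equation*}
and the same pattern suggests \(\dsp T_N(x) = \sum_{n=0}^{N} (-1)^n (x-1)^n\) for every \(N\text{.}\)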
Problem 6.81.
Suppose that \(\dsp T(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n.\) Write out the first few terms of this series and compute \(T'(c)\text{,}\) \(T''(c)\text{,}\) \(T'''(c)\text{,}\) ….
Definition 6.82.
The series in the last problem is called the Taylor Series for \(f\) expanded at \(c\text{.}\) When \(c=0\) this is called the Maclaurin Series for \(f\text{.}\)
Problem 6.83.
Let \(f(x) = e^x.\) Compute the first, second, and third degree Taylor polynomials for \(f\) expanded at 2. Compare to the result of problem 6.74.
Example 6.84.
Compute the Taylor series for \(f(x)=1/x\) at \(c=1\) and use the Ratio Test to determine the interval of convergence. Also check convergence at both endpoints of the interval of convergence.
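One possible route through this example: using the derivative pattern from Example 6.80, the Taylor series for \(f(x) = 1/x\) expanded at \(c=1\) is \(\dsp \sum_{n=0}^{\infty} (-1)^n (x-1)^n\text{.}\) The Ratio Test gives
\begin{equation*}
\lim_{n \to \infty} \left| \frac{(-1)^{n+1}(x-1)^{n+1}}{(-1)^{n}(x-1)^{n}} \right| = |x-1|,
\end{equation*}
so the series converges when \(|x-1| \lt 1\text{,}\) that is, on \((0,2)\text{.}\) At the endpoints \(x=0\) and \(x=2\) the terms are \(\pm 1\) and do not tend to \(0\text{,}\) so the series diverges there and the interval of convergence is exactly \((0,2)\text{.}\)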
Problem 6.85.
For each function, write the infinite degree Taylor polynomial as a series.
\(f(x) = e^x\) expanded at 0
\(f(x) = \sin(x)\) expanded at 0
Problem 6.86.
Compute the Taylor series for \(\dsp f(x) = \frac{\cos(x)}{x}\) expanded at 0 by first computing the Taylor series for \(\cos(x)\) expanded at \(x=0\) and then multiplying this entire series by \(1/x\text{.}\)
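To illustrate the idea of multiplying an entire series by \(1/x\) with a series we already know (not the one asked for in the problem): since \(\dsp \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n\) for \(|x| \lt 1\text{,}\) we have
\begin{equation*}
\frac{1}{x(1-x)} = \frac{1}{x}\sum_{n=0}^{\infty} x^n = \sum_{n=0}^{\infty} x^{n-1} = \frac{1}{x} + 1 + x + x^2 + \cdots \quad \text{for } 0 \lt |x| \lt 1\text{.}
\end{equation*}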
Problem 6.87.
Let \(f(x) = \ln(x).\) Compute the first, second, and third degree Taylor polynomials for \(f\) expanded at 1. Compute \(|f(2.5) - T_N(2.5)|\) for \(N=1,2,3.\)
We can now write certain functions as infinite series, but we know that not every infinite series converges. Therefore, our expressions only make sense for the values of \(x\) for which the series converges. For this reason, every time we write down an infinite series to represent a function, we need to determine the values of \(x\) for which the series converges. This is called the interval of convergence (or domain) of the series.
Definition 6.88.
A power series expanded at \(c\) is any series of the form \(\displaystyle{\sum_{n=0}^{\infty} a_n(x-c)^n}\text{.}\) Taylor and Maclaurin series are special cases of power series. The largest interval for which a power series converges is called its interval of convergence. Half the length of this interval is called the radius of convergence.
Problem 6.89.
Use the ratio test to find the interval and radius of convergence for each power series.
\(\dsp \sum_{n=0}^{\infty} \frac{x^n}{n+1}\)
\(\dsp \sum_{n=0}^{\infty} \frac{x^n}{n!}\)
\(\dsp \sum_{n=0}^{\infty} n!(x+2)^n\)
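As a model of the ratio-test computation, here is a series that is not on the list: for \(\dsp \sum_{n=0}^{\infty} \frac{x^n}{2^n}\text{,}\)
\begin{equation*}
\lim_{n \to \infty} \left| \frac{x^{n+1}/2^{n+1}}{x^{n}/2^{n}} \right| = \frac{|x|}{2},
\end{equation*}
which is less than \(1\) exactly when \(|x| \lt 2\text{.}\) At \(x = \pm 2\) the terms do not tend to \(0\text{,}\) so the interval of convergence is \((-2,2)\) and the radius of convergence is \(2\text{.}\)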
The next theorem says that the interval of convergence of a power series expanded at the point \(c\) is either (i) only the one point, \(c\) (bad, since this means the series is useless), (ii) an interval of radius \(R\) centered at \(c,\) (good, and the endpoints might be contained in the domain), or (iii) all real numbers (very good).
Theorem 6.90.
Power Series Theorem. For the power series \(\displaystyle{\sum_{n=0}^{\infty} a_n(x-c)^n}\text{,}\) one and only one of the following is true.
The series converges only at \(c\text{.}\) (bad)
There exists a positive number \(R\) called the radius of convergence such that \(\displaystyle{\sum_{n=0}^{\infty} a_n(x-c)^n}\) converges absolutely for \(|x - c| \lt R\) and diverges for \(|x - c| \gt R\text{.}\) (good)
The series converges absolutely for all \(x\text{.}\) (great)
When we write the power series for a function (like \(e^x\)) and it converges for all values of \(x\text{,}\) then we simply have a different way to write the function that turns out to be both computationally useful, because of convergence, and theoretically useful. Speaking loosely, \(e^x\) is just an infinite polynomial. How cool is that?!
Problem 6.91.
Find the interval of convergence for each of the following power series.
\(\dsp \sum_{n=0}^{\infty} \frac{(x-3)^n}{\ln(n+2)}\)
\(\dsp \sum_{n=1}^{\infty} \frac{(-x)^n}{n}\)
\(\dsp \sum_{n=0}^{\infty} \frac{\log(n+1)}{(n+1)^4} (x+1)^n\)
\(\dsp \sum_{n=0}^{\infty} \frac{n^3}{3^n} (x+2)^n\)
The following theorem says that if you have a power series, then its derivative is exactly what you want it to be: take the derivative of the series term by term, and the new infinite series you get is the derivative of the original function and converges on the same interval.
Theorem 6.92.
Power Series Derivative Theorem. If \(\displaystyle{f(x) = \sum_{n=0}^{\infty} a_n(x-c)^n}\) for all \(x\) in the interval of convergence \((c-R, c+R)\text{,}\) then
\(f\) is differentiable in \((c-R, c+R)\text{,}\)
\(\displaystyle{f'(x) = \sum_{n=1}^{\infty} na_n(x-c)^{n-1}}\) for all \(x\) in \((c-R, c+R)\text{,}\) and
the interval of convergence of \(f'\) is also \((c-R, c+R)\text{.}\)
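For instance (an illustration with the geometric series, not one of the problems): since \(\dsp \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n\) on \((-1,1)\text{,}\) the theorem gives
\begin{equation*}
\frac{1}{(1-x)^2} = \frac{d}{dx}\left(\frac{1}{1-x}\right) = \sum_{n=1}^{\infty} n x^{n-1} = 1 + 2x + 3x^2 + \cdots \quad \text{for } |x| \lt 1\text{.}
\end{equation*}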
Problem 6.93.
Suppose that \(\displaystyle{f(x)=\sum_{n=0}^{\infty} a_n(x-c)^n}\) with radius of convergence \(r > 0\text{.}\) Show that \(\dsp a_n\) must equal \(\dsp \frac{f^{(n)}(c)}{n!}\) for \(n = 0, 1, 2,\) and \(3.\)
How does the calculator on your phone approximate numbers like \(\sqrt{5}\) or \(\sin(\pi/7)\text{?}\) It uses Taylor series. For example, if we want to approximate \(\sqrt{5}\) accurate to 3 decimal places, we could compute the Taylor series for \(f(x) = \sqrt{x}\) expanded about \(c=4\text{,}\) because 4 is the integer closest to 5 whose square root we actually know. Then we can use this polynomial to approximate \(\sqrt{5}.\) But there's a problem. While the Taylor series approximates the function, it's pretty hard to store an infinite polynomial in a phone. So, how can we determine what degree Taylor polynomial we need to compute in order to assure a given accuracy? We look at the remainder of the series. For any positive integer \(N\text{,}\) \(f\) can be rewritten as the sum of the Taylor polynomial of degree \(N\) plus the remainder of the series by writing
\begin{equation*}
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n = \sum_{n=0}^{N} \frac{f^{(n)}(c)}{n!}(x-c)^n + \sum_{n=N+1}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n
\end{equation*}
or
\begin{equation*}
f(x) = T_N(x) + R_N(x)
\end{equation*}
where
\begin{equation*}
T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(c)}{n!}(x-c)^n
\end{equation*}
and
\begin{equation*}
R_N(x) = \sum_{n=N+1}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n\text{.}
\end{equation*}
Since \(f(x) = T_N(x) + R_N(x),\) for all \(x\) in the interval of convergence, the error for any \(x\) in the interval of convergence is just \(| f(x) - T_N(x)| = |R_N(x)|.\) If we could just approximate the size of the remainder term, we would know how accurate the \(N^{th}\) degree Taylor polynomial was. The next theorem says that the remainder, even though it is an infinite sum, can be written completely in terms of the \((N+1)^{st}\) derivative, so if we can find a bound for the \((N+1)^{st}\) derivative then we can get a bound on the error of the Taylor series.
Theorem 6.94.
Taylor Series Error Theorem. Let \(f\) be a function such that \(f^{(i)}\) is continuous on \([a, b]\) for each \(i=0, 1, 2, \cdots, N\) and \(f^{(N+1)}(x)\) exists for all \(x\) in \((a, b)\text{.}\) Then for any \(x\) and \(c\) in \([a, b]\text{,}\) \(\dsp R_N(x)=\frac{f^{(N+1)}(k)}{(N+1)!}(x-c)^{N+1}\) for some \(k\) between \(x\) and \(c\text{.}\)
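To see how the theorem is used (a model computation, not one of the problems below): approximate \(e = e^1\) by the third degree Maclaurin polynomial of \(f(x) = e^x\text{.}\) Here \(f^{(4)}(x) = e^x\text{,}\) and for any \(k\) between \(0\) and \(1\) we have \(e^k \lt 3\text{,}\) so
\begin{equation*}
|R_3(1)| = \left| \frac{f^{(4)}(k)}{4!}(1-0)^4 \right| = \frac{e^k}{24} \lt \frac{3}{24} = 0.125\text{.}
\end{equation*}
Thus \(T_3(1) = 1 + 1 + \frac{1}{2} + \frac{1}{6} \approx 2.667\) is guaranteed to be within \(0.125\) of \(e \approx 2.718\text{.}\)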
Problem 6.95.
Let \(f(x) = \sqrt{x}\) and compute \(T_3(x)\text{,}\) the third degree Taylor polynomial for \(f\) expanded at 4. Estimate the error \(|f(5)-T_3(5)|\) by using Theorem 6.94 to find the maximum possible value of \(|R_3(5)|\text{.}\) Now use your calculator to compute \(|f(5)-T_3(5)|\) and see how this compares to your error estimate.
Problem 6.96.
Let \(f(x) = \sin(x)\) and compute \(T_3(x)\text{,}\) the third degree Maclaurin polynomial for \(f.\) Estimate the error \(|f(\frac{\pi}{7})-T_3(\frac{\pi}{7})|\) by using Theorem 6.94 to find the maximum possible value of \(|R_3(\frac{\pi}{7})|\text{.}\) Now use your calculator to compute \(|f(\frac{\pi}{7})-T_3(\frac{\pi}{7})|\) and see how this compares to your error estimate.