All of the finite-difference formulas in the previous section based on equally spaced nodes converge as the node spacing $h$ decreases to zero. However, note that to discretize a function over an interval $[a,b]$, we use $h=(b-a)/n$, which implies $n=(b-a)/h=O(h^{-1})$. As $h\to 0$, the total number of nodes needed grows without bound. So we would like to make $h$ as large as possible while still achieving some acceptable accuracy.
Definition 5.5.1 (Truncation error of a finite-difference formula)
For the finite-difference method (5.4.1) with weights $a_{-p},\ldots,a_{q}$, the truncation error is

$$
\tau_f(h) = f'(0) - \frac{1}{h} \sum_{k=-p}^{q} a_k f(kh).
$$

The method is said to be convergent if $\tau_f(h)\to 0$ as $h\to 0$.
Although we are measuring the truncation error only at $x=0$, it could be defined for other $x$ as well. The definition adjusts naturally to use $f''(0)$ for difference formulas targeting the second derivative.
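As an illustration of the definition (this code is not from the text), a minimal Julia sketch that evaluates $\tau_f(h)$ for a given set of weights might look like the following; the name `trunc_error` and its argument list are hypothetical.

# Hypothetical helper: truncation error of a finite-difference formula at x = 0.
# `a` holds the weights a_{-p}, ..., a_q (in that order) and `dfdx0` is the exact f'(0).
function trunc_error(f, dfdx0, a, p, h)
    q = length(a) - p - 1
    return dfdx0 - sum(a[k+p+1] * f(k * h) for k in -p:q) / h
end

# Forward difference (weights a_0 = -1, a_1 = 1) applied to f = exp, for which f'(0) = 1:
trunc_error(exp, 1.0, [-1, 1], 0, 0.01)   # ≈ -0.005, consistent with τ_f(h) ≈ -(h/2) f''(0)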
All of the finite-difference formulas given in Finite differences are convergent.
5.5.1 Order of accuracy
Of major interest is the rate at which $\tau_f\to 0$ in a convergent formula. If $\tau_f(h)=O(h^m)$ as $h\to 0$ for a positive integer $m$, we say the formula has order of accuracy equal to $m$.
Hence the forward-difference formula in Example 5.5.1 has order of accuracy equal to 1; i.e., it is first-order accurate. All else being equal, a higher order of accuracy is preferred, since $O(h^m)$ vanishes more quickly for larger values of $m$. As a rule, including more function values in a finite-difference formula (i.e., increasing the number of weights in (5.4.1)) increases the order of accuracy, as can be seen in Table 5.4.1 and Table 5.4.2.
Order of accuracy is calculated by expanding $\tau_f$ in a Taylor series about $h=0$ and ignoring all but the leading term.[1]
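For instance, for the forward-difference formula $f'(0) \approx [f(h)-f(0)]/h$ referred to above, a single expansion gives

$$
\tau_f(h) = f'(0) - \frac{f(h)-f(0)}{h}
= f'(0) - \frac{1}{h}\Bigl[ h f'(0) + \tfrac{1}{2}h^2 f''(0) + O(h^3) \Bigr]
= -\tfrac{1}{2}h f''(0) + O(h^2) = O(h).
$$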
We compute the truncation error of the centered difference formula (5.4.8) :
$$
\begin{split}
\tau_f(h) &= f'(0) - \frac{ f(h)-f(-h)}{2h}\\
&= f'(0) - (2h)^{-1} \left[ \bigl( f(0) + h f'(0) + \tfrac{1}{2}h^2f''(0)+ \tfrac{1}{6}h^3f'''(0)+ O(h^4) \bigr) \right.\\
&\qquad - \left. \bigl( f(0) - h f'(0) + \tfrac{1}{2}h^2f''(0) - \tfrac{1}{6}h^3f'''(0)+O(h^4) \bigr) \right] \\
&= -(2h)^{-1} \left[ \tfrac{1}{3}h^3f'''(0) + O(h^4) \right] = O(h^2).
\end{split}
$$

Thus, this method has order of accuracy equal to 2.
Convergence of finite differences

Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
f = x -> sin(exp(x + 1))
exact_value = exp(1) * cos(exp(1))
We'll compute the formulas in parallel for a sequence of $h$ values.
h = [5 / 10^n for n in 1:6]
FD1 = [];
FD2 = [];
for h in h
    push!(FD1, (f(h) - f(0)) / h)      # forward difference, first order
    push!(FD2, (f(h) - f(-h)) / 2h)    # centered difference, second order
end
pretty_table([h FD1 FD2], header=["h", "FD1", "FD2"])
All that’s easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
error_FD1 = @. exact_value - FD1
error_FD2 = @. exact_value - FD2
table = [h error_FD1 error_FD2]
pretty_table(table, header=["h", "error in FD1", "error in FD2"])
In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
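As a quick check (not part of the original demo), the observed order can be estimated directly from these errors: since $h$ shrinks by a factor of 10 between rows, the base-10 logarithm of the ratio of successive errors should approach the order of accuracy.

# Observed order ≈ log10( e(h) / e(h/10) ); should approach 1 for FD1 and 2 for FD2.
ratio1 = error_FD1[1:end-1] ./ error_FD1[2:end]
ratio2 = error_FD2[1:end-1] ./ error_FD2[2:end]
[log10.(abs.(ratio1)) log10.(abs.(ratio2))]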
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
plot(h, abs.([error_FD1 error_FD2]), m=:o, label=["FD1" "FD2"],
xflip=true, xaxis=(:log10, L"h"), yaxis=(:log10, "error"),
title="Convergence of finite differences", leg=:bottomleft)
# Add lines for perfect 1st and 2nd order.
plot!(h, [h h .^ 2], l=:dash, label=[L"O(h)" L"O(h^2)"])
Convergence of finite differences

Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
f = @(x) sin(exp(x + 1));
exact_value = exp(1) * cos(exp(1))
We'll compute the formulas in parallel for a sequence of $h$ values.
h = 5 ./ 10.^(1:6)';
FD1 = zeros(size(h));
FD2 = zeros(size(h));
for i = 1:length(h)
    h_i = h(i);
    FD1(i) = (f(h_i) - f(0)) / h_i;         % forward difference, first order
    FD2(i) = (f(h_i) - f(-h_i)) / (2*h_i);  % centered difference, second order
end
disp(table(h, FD1, FD2))
All that’s easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
err1 = abs(exact_value - FD1);
err2 = abs(exact_value - FD2);
disp(table(h, err1, err2, variableNames=["h", "error in FD1", "error in FD2"]))
In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
clf
loglog(h, abs([err1 err2]), "o-")
set(gca, "xdir", "reverse")
order1 = 0.1 * err1(end) * (h / h(end)) .^ 1;
order2 = 0.1 * err2(end) * (h / h(end)) .^ 2;
hold on
loglog(h, order1, "--", h, order2, "--")
xlabel("h"); ylabel("error")
title("Convergence of finite differences")
legend("FD1", "FD2", "O(h)", "O(h^2)")
Convergence of finite differences

Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
f = lambda x: sin(exp(x + 1))
exact_value = exp(1) * cos(exp(1))
We'll compute the formulas in parallel for a sequence of $h$ values.
h_ = array([5 / 10**(n+1) for n in range(6)])
FD = zeros((len(h_), 2))
for (i, h) in enumerate(h_):
    FD[i, 0] = (f(h) - f(0)) / h       # forward difference, first order
    FD[i, 1] = (f(h) - f(-h)) / (2*h)  # centered difference, second order
results = PrettyTable()
results.add_column("h", h_)
results.add_column("FD1", FD[:, 0])
results.add_column("FD2", FD[:, 1])
print(results)
All that’s easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
errors = FD - exact_value
results = PrettyTable()
results.add_column("h", h_)
results.add_column("error in FD1", errors[:, 0])
results.add_column("error in FD2", errors[:, 1])
print(results)
In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
plot(h_, abs(errors), "o-", label=["FD1", "FD2"])
gca().invert_xaxis()
# Add lines for perfect 1st and 2nd order.
loglog(h_, h_, "--", label="$O(h)$")
loglog(h_, h_**2, "--", label="$O(h^2)$")
xlabel("$h$")
ylabel("error")
legend()
5.5.2 Stability
The truncation error $\tau_f(h)$ of a finite-difference formula is dominated by a leading term $O(h^m)$ for an integer $m$. This error decreases as $h\to 0$. However, we have not yet accounted for the effects of roundoff error. To keep matters as simple as possible, let's consider the forward difference
$$
\delta(h) = \frac{f(x+h)-f(x)}{h}.
$$

As $h\to 0$, the numerator approaches zero even though the values $f(x+h)$ and $f(x)$ are not necessarily near zero. This is the recipe for subtractive cancellation error! In fact, finite-difference formulas are inherently ill-conditioned as $h\to 0$. To be precise, recall that the condition number for the problem of computing $f(x+h)-f(x)$ is
$$
\kappa(h) = \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ |f(x+h)-f(x)| },
$$

implying a relative error of size $\kappa(h)\,\epsilon_\text{mach}$ in its computation. Hence the numerical value we actually compute for $\delta$ is
$$
\begin{align*}
\tilde{\delta}(h) &= \frac{f(x+h)-f(x)}{h}\, \bigl(1+\kappa(h)\,\epsilon_\text{mach}\bigr) \\
&= \delta(h) + \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ |f(x+h)-f(x)| }\cdot \frac{f(x+h)-f(x)}{h} \cdot \epsilon_\text{mach}.
\end{align*}
$$

Hence as $h\to 0$,
$$
\bigl| \tilde{\delta}(h) - \delta(h) \bigr| = \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ h}\,\epsilon_\text{mach} \sim |f(x)|\, \epsilon_\text{mach}\cdot h^{-1}.
$$

Combining the truncation error and the roundoff error leads to
$$
\bigl| f'(x) - \tilde{\delta}(h) \bigr| \le \bigl| \tau_f(h) \bigr| + \bigl|f(x)\bigr|\, \epsilon_\text{mach}\, h^{-1}.
$$

Equation (5.5.8) indicates that while the truncation error $\tau$ vanishes as $h$ decreases, the roundoff error actually increases thanks to the subtractive cancellation. At some value of $h$ the two error contributions will be of roughly equal size. This occurs when
$$
\bigl|f(x)\bigr|\, \epsilon_\text{mach}\, h^{-1} \approx C h, \quad \text{or} \quad h \approx K \sqrt{\epsilon_\text{mach}},
$$

for a constant $K$ that depends on $x$ and $f$, but not $h$. In summary, for a first-order finite-difference method, the optimum spacing between nodes is proportional to $\epsilon_\text{mach}^{\,1/2}$. (This observation explains the choice of $\delta$ in Function 4.6.1.)
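To make the cancellation concrete, here is a small illustration that is not part of the text; the function `g` below is just an arbitrary smooth example. The forward-difference numerator is computed in double precision and compared with a high-precision (BigFloat) reference; its relative error grows roughly like $\epsilon_\text{mach}/h$.

# Subtractive cancellation in the numerator f(x+h) - f(x) of the forward difference.
# As h decreases, the two values agree in more leading digits, and those digits are
# lost from the difference computed in double precision.
g = x -> exp(sin(x))
x = 1.0
for h in [1e-4, 1e-8, 1e-12]
    diff_float = g(x + h) - g(x)                           # double precision
    diff_exact = Float64(g(big(x) + big(h)) - g(big(x)))   # high-precision reference
    @show h, abs(diff_float - diff_exact) / abs(diff_exact)
end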
For a method of truncation order $m$, the details of the subtractive cancellation are a bit different, but the conclusion generalizes.
For computing with a finite-difference method of order $m$ in the presence of roundoff, the optimal spacing of nodes satisfies

$$
h_\text{opt} \approx \epsilon_\text{mach}^{\,1/(m+1)},
$$

and the optimum total error is roughly $\epsilon_\text{mach}^{\,m/(m+1)}$.
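The reasoning behind this observation is the same balancing argument as above. In sketch form: with truncation error of size $C h^m$ and roundoff error of size $\epsilon_\text{mach}\, h^{-1}$, the total error bound is smallest where the two contributions balance,

$$
C h^m \approx \epsilon_\text{mach}\, h^{-1}
\quad\Longrightarrow\quad
h_\text{opt} \approx \left(\frac{\epsilon_\text{mach}}{C}\right)^{1/(m+1)},
\qquad
C\, h_\text{opt}^{\,m} = O\bigl(\epsilon_\text{mach}^{\,m/(m+1)}\bigr).
$$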
A different statement of the conclusion is that for a first-order formula, we can expect accuracy in only about half of the available machine digits. As $m$ increases, we get ever closer to using the full accuracy available. Higher-order finite-difference methods are both more efficient and less vulnerable to roundoff than low-order methods.
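In double precision, for instance, these estimates work out as follows (a quick illustration, not part of the text; constants are ignored).

# Optimal spacing and best achievable error for orders m = 1, 2, 4 in double precision.
for m in (1, 2, 4)
    h_opt = eps()^(1 / (m + 1))
    best_err = eps()^(m / (m + 1))
    println("m = $m:  h_opt ≈ $(round(h_opt, sigdigits=2)),  best error ≈ $(round(best_err, sigdigits=2))")
end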
Roundoff error in finite differences

Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = x -> exp(-1.3 * x);
exact = -1.3
h = [1 / 10^n for n in 1:12]
FD1, FD2, FD4 = [], [], []
for h in h
    nodes = h * (-2:2)
    vals = @. f(nodes)
    push!(FD1, dot([0 0 -1 1 0] / h, vals))            # forward difference, order 1
    push!(FD2, dot([0 -1/2 0 1/2 0] / h, vals))        # centered difference, order 2
    push!(FD4, dot([1/12 -2/3 0 2/3 -1/12] / h, vals)) # centered difference, order 4
end
table = [h FD1 FD2 FD4]
pretty_table(table[1:4, :], header=["h", "FD1", "FD2", "FD4"])
They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
err = @. abs([FD1 FD2 FD4] - exact)
plot(h, err, m=:o, label=["FD1" "FD2" "FD4"],
xaxis=(:log10, L"h"), xflip=true, yaxis=(:log10, "error"),
title="FD error with roundoff", legend=:bottomright)
# Add line for perfect 1st order.
plot!(h, 0.1 * eps() ./ h, l=:dash, color=:black, label=L"O(h^{-1})")
Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
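The crossover points can also be read off numerically (a rough check, not part of the original demo); ignoring constants, each should land near the estimate $\epsilon_\text{mach}^{1/(m+1)}$ from above.

# Locate the best h for each formula and compare with the eps()^(1/(m+1)) estimate.
for (j, m) in enumerate((1, 2, 4))
    i = argmin(err[:, j])
    println("order $m: best h in this experiment = $(h[i]),  predicted ≈ $(round(eps()^(1/(m+1)), sigdigits=2))")
end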
Roundoff error in finite differences

Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = @(x) exp(-1.3 * x);
exact = -1.3;
h = 10 .^ (-(1:12))';
FD = zeros(length(h), 3);
for i = 1:length(h)
    h_i = h(i);
    nodes = h_i * (-2:2);
    vals = f(nodes);
    FD(i, 1) = dot([0 0 -1 1 0] / h_i, vals);             % forward difference, order 1
    FD(i, 2) = dot([0 -1/2 0 1/2 0] / h_i, vals);         % centered difference, order 2
    FD(i, 3) = dot([1/12 -2/3 0 2/3 -1/12] / h_i, vals);  % centered difference, order 4
end
format long
disp(table(h, FD(:, 1), FD(:, 2), FD(:, 3), variableNames=["h", "FD1", "FD2", "FD4"]))
They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
err = abs(FD - exact);
clf
loglog(h, err, "o-")
set(gca, "xdir", "reverse")
order1 = 0.1 * err(end, 1) * (h / h(end)) .^ (-1);
hold on
loglog(h, order1, "k--")
xlabel("h"); ylabel("error")
title("FD error with roundoff")
legend("FD1", "FD2", "FD4", "O(1/h)", "location", "northeast")
Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
Roundoff error in finite differences

Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = lambda x: exp(-1.3 * x)
exact = -1.3
h_ = array([1 / 10**(n+1) for n in range(12)])
FD = zeros((len(h_), 3))
for (i, h) in enumerate(h_):
    nodes = h * linspace(-2, 2, 5)
    vals = f(nodes)
    FD[i, 0] = dot(array([0, 0, -1, 1, 0]) / h, vals)             # forward difference, order 1
    FD[i, 1] = dot(array([0, -1/2, 0, 1/2, 0]) / h, vals)         # centered difference, order 2
    FD[i, 2] = dot(array([1/12, -2/3, 0, 2/3, -1/12]) / h, vals)  # centered difference, order 4
results = PrettyTable()
results.add_column("h", h_)
results.add_column("FD1", FD[:, 0])
results.add_column("FD2", FD[:, 1])
results.add_column("FD4", FD[:, 2])
print(results)
They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
loglog(h_, abs(FD[:, 0] + 1.3), "-o", label="FD1")
loglog(h_, abs(FD[:, 1] + 1.3), "-o", label="FD2")
loglog(h_, abs(FD[:, 2] + 1.3), "-o", label="FD4")
gca().invert_xaxis()
plot(h_, 0.1 * 2 ** (-52) / h_, "--", color="k", label="$O(h^{-1})$")
xlabel("$h$")
ylabel("total error")
title("FD error with roundoff")
legend()
Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
5.5.3 Exercises
⌨ Evaluate the centered second-order finite-difference approximation to $f'(4\pi/5)$ for $f(x)=\cos(x^3)$ and $h=2^{-1},2^{-2},\ldots,2^{-8}$. On a log-log graph, plot the error as a function of $h$ and compare it graphically to second-order convergence.
✍ Derive the first two nonzero terms of the Taylor series at $h=0$ of the truncation error $\tau_{f}(h)$ for the formula (5.4.3).
✍ Calculate the first nonzero term in the Taylor series of the truncation error $\tau_{f}(h)$ for the finite-difference formula defined by the second row of Table 5.4.2.
✍ Calculate the first nonzero term in the Taylor series of the truncation error $\tau_{f}(h)$ for the finite-difference formula defined by the third row of Table 5.4.2.
✍ Show that the formula (5.4.12) is second-order accurate.
✍ A different way to derive finite-difference formulas is the method of undetermined coefficients. Starting from (5.4.1),

$$
f'(x) \approx \frac{1}{h}\sum_{k=-p}^q a_k f(x+kh),
$$

let each $f(x+kh)$ be expanded in a series around $h=0$. When the coefficients of powers of $h$ are collected, one obtains
$$
\frac{1}{h} \sum_{k=-p}^q a_k f(x+kh) = \frac{b_0}{h}\, f(x) + b_1 f'(x) + \frac{b_2}{2!}\, f''(x)\, h + \frac{b_3}{3!}\, f'''(x)\, h^2 + \cdots,
$$

where
$$
b_i = \sum_{k=-p}^q k^i a_k.
$$

In order to make the result as close as possible to $f'(x)$, we impose the conditions
$$
b_0 = 0,\quad b_1=1,\quad b_2=0,\quad b_3=0,\quad \ldots,\quad b_{p+q}=0.
$$

This provides a system of linear equations for the weights.
(a) For $p=q=2$, write out the system of equations for $a_{-2}$, $a_{-1}$, $a_0$, $a_1$, $a_2$.
(b) Verify that the coefficients from the appropriate row of Table 5.4.1 satisfy the equations you wrote down in part (a).
(c) Derive the finite-difference formula for $p=1$, $q=2$ using the method of undetermined coefficients.