We now attend to the central problem of this chapter: given a square $n\times n$ matrix $\mathbf{A}$ and an $n$-vector $\mathbf{b}$, find an $n$-vector $\mathbf{x}$ such that $\mathbf{A}\mathbf{x}=\mathbf{b}$. Writing out these equations, we obtain
$$
\begin{split}
A_{11}x_1 + A_{12}x_2 + \cdots + A_{1n}x_n &= b_1, \\
A_{21}x_1 + A_{22}x_2 + \cdots + A_{2n}x_n &= b_2, \\
\vdots \\
A_{n1}x_1 + A_{n2}x_2 + \cdots + A_{nn}x_n &= b_n.
\end{split}
$$

If $\mathbf{A}$ is invertible, then the mathematical expression of the solution is $\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$, because
$$
\mathbf{A}^{-1}\mathbf{b} = \mathbf{A}^{-1} (\mathbf{A} \mathbf{x}) = (\mathbf{A}^{-1}\mathbf{A}) \mathbf{x} = \mathbf{I} \mathbf{x} = \mathbf{x}.
$$

When $\mathbf{A}$ is singular, $\mathbf{A}\mathbf{x}=\mathbf{b}$ may have no solution or infinitely many solutions.
If we define
$$
\mathbf{S} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
$$

then it is easy to check that for any real value of $\alpha$ we have
$$
\mathbf{S} \begin{bmatrix} \alpha \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
$$

Hence the linear system $\mathbf{S}\mathbf{x}=\mathbf{b}$ with $\mathbf{b}=\begin{bmatrix} 1\\0\end{bmatrix}$ has infinitely many solutions. For most other choices of $\mathbf{b}$, the system has no solution.
2.3.1 Don’t use the inverse

Matrix inverses are indispensable for mathematical discussion and derivations. However, as you may remember from a linear algebra course, they are not trivial to compute from the entries of the original matrix. You might be surprised to learn that matrix inverses play almost no role in scientific computing.
In fact, when we encounter an expression such as $\mathbf{x} = \mathbf{A}^{-1} \mathbf{b}$ in computing, we interpret it as “solve the linear system $\mathbf{A} \mathbf{x} = \mathbf{b}$” and apply whatever algorithm is most expedient based on what we know about $\mathbf{A}$.
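To make the point concrete, here is a small NumPy sketch (using the same 3×3 matrix that appears in the demo below) comparing the preferred `solve` call with explicitly forming the inverse. Both give the same answer here, but `solve` is cheaper and generally more accurate:

```python
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [2.0, 2.0, 1.0],
              [-1.0, -3.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])

x_solve = np.linalg.solve(A, b)      # interpret A^{-1} b as "solve A x = b"
x_inv = np.linalg.inv(A) @ b         # form the inverse explicitly (avoid this)

print(np.allclose(x_solve, x_inv))   # True
print(np.allclose(A @ x_solve, b))   # True: the residual is essentially zero
```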
As demonstrated in Demo 2.1.1, the backslash (the `\` symbol, not to be confused with the slash `/` used in web addresses) invokes a linear system solution. In MATLAB, the backslash operator `\` is used to solve linear systems. In Python, the `numpy.linalg.solve` function is used to solve linear systems.
Solving linear systems (Julia)

For a square matrix $\mathbf{A}$, the syntax `A \ b` is mathematically equivalent to $\mathbf{A}^{-1}\mathbf{b}$.

```julia
A = [1 0 -1; 2 2 1; -1 -3 0]
b = [1, 2, 3]
x = A \ b
```

One way to check the answer is to compute a quantity known as the residual. It is (ideally) close to machine precision (relative to the elements in the data).

```julia
residual = b - A*x
```

If the matrix $\mathbf{A}$ is singular, you may get an error.

```julia
A = [0 1; 0 0]
b = [1, -1]
x = A \ b
```

In this case we can check that the rank of $\mathbf{A}$ is less than its number of columns, indicating singularity. The `rank` function computes the rank of a matrix. However, it is numerically unstable for matrices that are nearly singular, in a sense to be defined in a later section.

A linear system with a singular matrix might have no solution or infinitely many solutions, but in either case, backslash will fail. Moreover, detecting singularity is a lot like checking whether two floating-point numbers are exactly equal: because of roundoff, it could be missed. In Conditioning of linear systems we’ll find a robust way to fully describe this situation.
Solving linear systems (MATLAB)

For a square matrix $\mathbf{A}$, the syntax `A \ b` is mathematically equivalent to $\mathbf{A}^{-1}\mathbf{b}$.

```matlab
A = [1 0 -1; 2 2 1; -1 -3 0]
b = [1; 2; 3]
x = A \ b
```

One way to check the answer is to compute a quantity known as the residual. It is (ideally) close to machine precision (relative to the elements in the data).

```matlab
residual = b - A*x
```

If the matrix $\mathbf{A}$ is singular, you may get a warning and a nonsense result.

```matlab
A = [0 1; 0 0]
b = [1; -1]
x = A \ b
```

In this case, we can check that the rank of $\mathbf{A}$ is less than its number of columns, indicating singularity. The `rank` function computes the rank of a matrix. However, it is numerically unstable for matrices that are nearly singular, in a sense to be defined in a later section.

A linear system with a singular matrix might have no solution or infinitely many solutions, but in either case, backslash will fail. Moreover, detecting singularity is a lot like checking whether two floating-point numbers are exactly equal: because of roundoff, it could be missed. In Conditioning of linear systems we’ll find a robust way to fully describe this situation.
Solving linear systems (Python)

For a square matrix $\mathbf{A}$, the command `solve(A, b)` from `numpy.linalg` is mathematically equivalent to $\mathbf{A}^{-1}\mathbf{b}$.

```python
A = array([[1, 0, -1], [2, 2, 1], [-1, -3, 0]])
b = array([1, 2, 3])
x = solve(A, b)
```

One way to check the answer is to compute a quantity known as the residual. It is (ideally) close to machine precision (relative to the elements in the data).

```python
residual = b - A @ x
print(residual)
```

If the matrix $\mathbf{A}$ is singular, you may get an error.

```python
A = array([[0, 1], [0, 0]])
b = array([1, -1])
solve(A, b)  # error, singular matrix
```

A linear system with a singular matrix might have no solution or infinitely many solutions, but in either case, a numerical solution becomes trickier. Detecting singularity is a lot like checking whether two floating-point numbers are exactly equal: because of roundoff, it could be missed. We’re headed toward a more robust way to fully describe this situation.
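The rank check mentioned above can be done in NumPy as well (a sketch; `numpy.linalg.matrix_rank` estimates the rank from the singular values of the matrix):

```python
import numpy as np

# The singular example from the demo.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

r = np.linalg.matrix_rank(A)
print(r)               # 1
print(r < A.shape[1])  # True: rank < number of columns, so A is singular
```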
2.3.2 Triangular systems

The solution process is especially easy to demonstrate for a system with a triangular matrix. For example, consider the lower triangular system
$$
\begin{bmatrix}
4 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 \\
-1 & 0 & 3 & 0 \\
1 & -1 & -1 & 2
\end{bmatrix} \mathbf{x} =
\begin{bmatrix}
8 \\ 5 \\ 0 \\ 1
\end{bmatrix}.
$$

The first row of this system states simply that $4x_1=8$, which is easily solved as $x_1=8/4=2$. Now, the second row states that $3x_1-x_2=5$. As $x_1$ is already known, it can be substituted to find that $x_2 = -(5-3\cdot 2)=1$. Similarly, the third row gives $x_3=(0+1\cdot 2)/3 = 2/3$, and the last row yields $x_4=(1-1\cdot 2 + 1\cdot 1 + 1\cdot 2/3)/2 = 1/3$. Hence the solution is
$$
\mathbf{x} = \begin{bmatrix} 2 \\ 1 \\ 2/3 \\ 1/3 \end{bmatrix}.
$$

The process just described is called forward substitution. In the $4\times 4$ lower triangular case of $\mathbf{L}\mathbf{x}=\mathbf{b}$, it leads to the formulas
$$
\begin{split}
x_1 &= \frac{b_1}{L_{11}}, \\
x_2 &= \frac{b_2 - L_{21}x_1}{L_{22}}, \\
x_3 &= \frac{b_3 - L_{31}x_1 - L_{32}x_2}{L_{33}}, \\
x_4 &= \frac{b_4 - L_{41}x_1 - L_{42}x_2 - L_{43}x_3}{L_{44}}.
\end{split}
$$

For upper triangular systems $\mathbf{U}\mathbf{x}=\mathbf{b}$, an analogous process of backward substitution begins by solving for the last component $x_n=b_n/U_{nn}$ and working backward. For the $4\times 4$ case we have
$$
\begin{bmatrix}
U_{11} & U_{12} & U_{13} & U_{14} \\
0 & U_{22} & U_{23} & U_{24} \\
0 & 0 & U_{33} & U_{34} \\
0 & 0 & 0 & U_{44}
\end{bmatrix} \mathbf{x} =
\begin{bmatrix}
b_1 \\ b_2 \\ b_3 \\ b_4
\end{bmatrix}.
$$

Solving the system backward, starting with $x_4$ and then proceeding in descending order, gives
$$
\begin{split}
x_4 &= \frac{b_4}{U_{44}}, \\
x_3 &= \frac{b_3 - U_{34}x_4}{U_{33}}, \\
x_2 &= \frac{b_2 - U_{23}x_3 - U_{24}x_4}{U_{22}}, \\
x_1 &= \frac{b_1 - U_{12}x_2 - U_{13}x_3 - U_{14}x_4}{U_{11}}.
\end{split}
$$

It should be clear that forward or backward substitution fails if and only if one of the diagonal entries of the system matrix is zero. We have essentially proved the following theorem.
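These formulas can be transcribed directly into code. As a sanity check, the following NumPy sketch applies forward substitution to the $4\times 4$ lower triangular example above and reproduces $\mathbf{x} = [2,\ 1,\ 2/3,\ 1/3]^T$:

```python
import numpy as np

# The lower triangular example system from earlier in this section.
L = np.array([[4.0, 0.0, 0.0, 0.0],
              [3.0, -1.0, 0.0, 0.0],
              [-1.0, 0.0, 3.0, 0.0],
              [1.0, -1.0, -1.0, 2.0]])
b = np.array([8.0, 5.0, 0.0, 1.0])

# Forward substitution: each x[i] uses only previously computed entries.
x = np.zeros(4)
for i in range(4):
    x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]

print(np.allclose(x, [2, 1, 2/3, 1/3]))  # True
```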
2.3.3 Implementation

Consider how to implement the sequential process implied by Equation (2.3.7). It seems clear that we want to loop through the elements of $\mathbf{x}$ in order. Within each iteration of that loop, we have an expression whose length depends on the iteration number. This leads to a nested loop structure.
Function 2.3.1: Forward substitution

About the code: in the Julia version of this function, the `sum` gives an error when `i` equals 1, so that case is taken care of before the loop starts. The NumPy version shown below needs no special case, because a dot product over empty slices simply evaluates to zero.
```python
import numpy as np

def forwardsub(L, b):
    """
    forwardsub(L, b)

    Solve the lower-triangular linear system with matrix L and right-hand side
    vector b.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        s = L[i, :i] @ x[:i]
        x[i] = (b[i] - s) / L[i, i]
    return x
```
The implementation of backward substitution is much like forward substitution and is given in Function 2.3.2 .
Function 2.3.2: Backward substitution
```python
import numpy as np

def backsub(U, b):
    """
    backsub(U, b)

    Solve the upper-triangular linear system with matrix U and right-hand side
    vector b.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        s = U[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / U[i, i]
    return x
```
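A quick smoke test (a sketch; `backsub` is restated here so the snippet runs standalone) compares the function against `numpy.linalg.solve` on a random upper triangular system:

```python
import numpy as np

def backsub(U, b):
    # Backward substitution, as in Function 2.3.2.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

rng = np.random.default_rng(1)
U = np.triu(rng.uniform(1.0, 9.0, (5, 5)))  # nonzero diagonal, so nonsingular
b = np.ones(5)

x = backsub(U, b)
print(np.allclose(x, np.linalg.solve(U, b)))  # True
```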
Triangular systems of equations (Julia)

It’s easy to get just the lower triangular part of any matrix using the `tril` function: it returns a matrix that zeros out everything above the main diagonal. (The `triu` function zeros out below the diagonal.)

```julia
A = rand(1.:9., 5, 5)
L = tril(A)
```

We’ll set up and solve a linear system with this matrix.

```julia
b = ones(5)
x = FNC.forwardsub(L, b)
```

It’s not clear how accurate this answer is. However, the residual should be zero or comparable to $\epsilon_{\text{mach}}$.

Next we’ll engineer a problem to which we know the exact answer. (Use `\alpha` Tab and `\beta` Tab to get the Greek letters.) The notation `0=>ones(5)` creates a `Pair`; in `diagm`, pairs indicate the position of a diagonal and the elements that are to be placed on it.

```julia
α = 0.3;
β = 2.2;
U = diagm( 0=>ones(5), 1=>[-1, -1, -1, -1] )
U[1, [4, 5]] = [ α - β, β ]
U
```

```julia
x_exact = ones(5)
b = [α, 0, 0, 0, 1]
```

Now we use backward substitution to solve for $\mathbf{x}$, and compare to the exact solution we know already.

```julia
x = FNC.backsub(U, b)
err = x - x_exact
```

Everything seems OK here. But another example, with a different value for β, is more troubling.

```julia
α = 0.3;
β = 1e12;
U = diagm( 0=>ones(5), 1=>[-1, -1, -1, -1] )
U[1, [4, 5]] = [ α - β, β ]
b = [α, 0, 0, 0, 1]
x = FNC.backsub(U, b)
err = x - x_exact
```

It’s not so good to get only 4 digits of accuracy after starting with 16! The source of the error is not hard to track down. Solving for $x_1$ performs $(\alpha-\beta)+\beta$ in the first row. Since $|\alpha|$ is so much smaller than $|\beta|$, this is a recipe for losing digits to subtractive cancellation.
Triangular systems of equations (MATLAB)

It’s easy to get just the lower triangular part of any matrix using the `tril` function: it returns a matrix that zeros out everything above the main diagonal. (The `triu` function zeros out below the diagonal.)

```matlab
A = randi(9, 5, 5);
L = tril(A)
```

We’ll set up and solve a linear system with this matrix.

```matlab
b = ones(5, 1);
x = forwardsub(L, b)
```

It’s not clear how accurate this answer is. However, the residual should be zero or comparable to $\epsilon_{\text{mach}}$.

Next, we’ll engineer a problem to which we know the exact answer. The `eye` function creates an identity matrix. The `diag` function uses 0 for the main diagonal, positive integers for superdiagonals, and negative integers for subdiagonals.

```matlab
alpha = 0.3;
beta = 2.2;
U = eye(5) + diag([-1 -1 -1 -1], 1);
U(1, [4, 5]) = [alpha - beta, beta]
x_exact = ones(5, 1);
b = [alpha; 0; 0; 0; 1];
```

Now we use backward substitution to solve for $\mathbf{x}$, and compare to the exact solution we know already.

```matlab
x = backsub(U, b);
err = x - x_exact
```

Everything seems OK here. But another example, with a different value for beta, is more troubling.

```matlab
alpha = 0.3;
beta = 1e12;
U = eye(5) + diag([-1 -1 -1 -1], 1);
U(1, [4, 5]) = [alpha - beta, beta];
b = [alpha; 0; 0; 0; 1];
x = backsub(U, b);
err = x - x_exact
```

It’s not so good to get only 4 digits of accuracy after starting with 16! The source of the error is not hard to track down. Solving for $x_1$ performs $(\alpha-\beta)+\beta$ in the first row. Since $|\alpha|$ is so much smaller than $|\beta|$, this is a recipe for losing digits to subtractive cancellation.
Triangular systems of equations (Python)

It’s easy to get just the lower triangular part of any matrix using the `tril` function.

```python
A = 1 + floor(9 * random.rand(5, 5))
L = tril(A)
print(L)
```

We’ll set up and solve a linear system with this matrix.

```python
b = ones(5)
x = FNC.forwardsub(L, b)
print(x)
```

It’s not clear how accurate this answer is. However, the residual should be zero or comparable to $\epsilon_{\text{mach}}$.

Next we’ll engineer a problem to which we know the exact answer.

```python
alpha = 0.3
beta = 2.2
U = diag(ones(5)) + diag([-1, -1, -1, -1], k=1)
U[0, 3:5] = [alpha - beta, beta]
print(U)
```

```python
x_exact = ones(5)
b = array([alpha, 0, 0, 0, 1])
x = FNC.backsub(U, b)
print("error:", x - x_exact)
```

Everything seems OK here. But another example, with a different value for beta, is more troubling.

```python
alpha = 0.3
beta = 1e12
U = diag(ones(5)) + diag([-1, -1, -1, -1], k=1)
U[0, 3:5] = [alpha - beta, beta]
b = array([alpha, 0, 0, 0, 1])
x = FNC.backsub(U, b)
print("error:", x - x_exact)
```

It’s not so good to get only 4 digits of accuracy after starting with 16! But the source of the error is not hard to track down. Solving for $x_1$ performs $(\alpha-\beta)+\beta$ in the first row. Since $|\alpha|$ is so much smaller than $|\beta|$, this is a recipe for losing digits to subtractive cancellation.
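The loss can be reproduced without any matrices at all. This plain-Python sketch evaluates $(\alpha-\beta)+\beta$, which would equal α exactly in real arithmetic; in floating point, the error grows with $|\beta|$:

```python
alpha = 0.3
for beta in [2.2, 1e6, 1e12]:
    recovered = (alpha - beta) + beta  # exactly alpha in real arithmetic
    print(beta, abs(recovered - alpha))
# The error is negligible for beta = 2.2 but grows with |beta|,
# because alpha - beta must be rounded to the spacing of numbers near beta.
```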
The example in Demo 2.3.3 is our first clue that linear system problems may have large condition numbers, making inaccurate solutions inevitable in floating-point arithmetic. We will learn how to spot such problems in Conditioning of linear systems . Before reaching that point, however, we need to discuss how to solve general linear systems, not just triangular ones.
2.3.4 Exercises

✍ Find a vector $\mathbf{b}$ such that the system $\begin{bmatrix} 0&1\\0&0 \end{bmatrix} \mathbf{x}=\mathbf{b}$ has no solution.
✍ Solve the following triangular systems by hand.
(a)
$$
\begin{aligned}
-2x_1 &= -4 \\
x_1 - x_2 &= 2 \\
3x_1 + 2x_2 + x_3 &= 1
\end{aligned}
$$
(b)
$$
\begin{bmatrix}
4 & 0 & 0 & 0 \\
1 & -2 & 0 & 0 \\
-1 & 4 & 4 & 0 \\
2 & -5 & 5 & 1
\end{bmatrix} \mathbf{x} = \begin{bmatrix}
-4 \\ 1 \\ -3 \\ 5
\end{bmatrix}
$$
(c)
$$
\begin{aligned}
3x_1 + 2x_2 + x_3 &= 1 \\
x_2 - x_3 &= 2 \\
2 x_3 &= -4
\end{aligned}
$$
⌨ Use Function 2.3.1 or Function 2.3.2 to solve each system from the preceding exercise. Verify that the solution is correct by computing $\mathbf{L}\mathbf{x}$ and subtracting $\mathbf{b}$.
⌨ Use Function 2.3.2 to solve the following systems. Verify that the solution is correct by computing $\mathbf{U}\mathbf{x}$ and subtracting $\mathbf{b}$.
(a)
$$
\begin{bmatrix}
3 & 1 & 0 \\
0 & -1 & -2 \\
0 & 0 & 3
\end{bmatrix} \mathbf{x} = \begin{bmatrix}
1 \\ 1 \\ 6
\end{bmatrix}
$$
(b)
$$
\begin{bmatrix}
3 & 1 & 0 & 6 \\
0 & -1 & -2 & 7 \\
0 & 0 & 3 & 4 \\
0 & 0 & 0 & 5
\end{bmatrix} \mathbf{x} = \begin{bmatrix}
4 \\ 1 \\ 1 \\ 5
\end{bmatrix}
$$
Suppose a string is stretched with tension $\tau$ horizontally between two anchors at $x=0$ and $x=1$. At each of the $n-1$ equally spaced positions $x_k=k/n$, $k=1,\ldots,n-1$, we attach a little mass $m_k$ and allow the string to come to equilibrium. This causes vertical displacement of the string. Let $q_k$ be the amount of displacement at $x_k$. If the displacements are not too large, then an approximate force balance equation is

$$
n \tau (q_k - q_{k-1}) + n\tau (q_k - q_{k+1}) = m_k g, \qquad k=1,\ldots,n-1,
$$

where $g=-9.8$ m/s$^2$ is the acceleration due to gravity, and we define $q_0=0$ and $q_n=0$ due to the anchors. This defines a linear system for $q_1,\ldots,q_{n-1}$.
(a) ✍ Show that the force balance equations can be written as a linear system $\mathbf{A}\mathbf{q}=\mathbf{f}$, where $\mathbf{q}$ is a vector of the unknown displacements and $\mathbf{A}$ is a tridiagonal matrix (i.e., $A_{ij}=0$ if $|i-j|>1$) of size $(n-1)\times(n-1)$.
(b) ⌨ Let $\tau=10$ N, and $m_k=(1/10n)$ kg for every $k$. Using backslash, find the displacements when $n=8$ and $n=40$, and superimpose plots of $\mathbf{q}$ over $0\le x \le 1$ for the two cases. (Be sure to include the zero values at $x=0$ and $x=1$ in your plots.)
(c) ⌨ Repeat (b) for the case $m_k = (k/5n^2)$ kg.
⌨ If $\mathbf{B}\in\mathbb{R}^{n \times p}$ has columns $\mathbf{b}_1,\ldots,\mathbf{b}_p$, then we can pose $p$ linear systems at once by writing $\mathbf{A} \mathbf{X} = \mathbf{B}$, where $\mathbf{X}$ is $n\times p$. Specifically, this equation implies $\mathbf{A} \mathbf{x}_j = \mathbf{b}_j$ for $j=1,\ldots,p$.
(a) Modify Function 2.3.1 and Function 2.3.2 so that they solve the case where the second input is $n\times p$ for $p\ge 1$.
(b) If $\mathbf{A} \mathbf{X}=\mathbf{I}$, then $\mathbf{X}=\mathbf{A}^{-1}$. Use this fact to write a function `ltinverse` that uses your modified `forwardsub` to compute the inverse of a lower triangular matrix. Test your function on at least two nontrivial matrices. (We remind you here that this is just an exercise; matrix inverses are rarely a good idea in numerical practice!)
⌨ Demo 2.3.3 showed solutions of $\mathbf{A}\mathbf{x}=\mathbf{b}$, where
$$
\mathbf{A} = \begin{bmatrix} 1 & -1 & 0 & \alpha-\beta & \beta \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad
\mathbf{b} = \begin{bmatrix} \alpha \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.
$$

Use Function 2.3.2 to solve with $\alpha=0.1$ and $\beta=10, 100, 10^3, \ldots, 10^{12}$, tabulating the values of $\beta$ and $|x_1-1|$. (This kind of behavior is explained in Conditioning of linear systems.)