3.3. Chapter 3: Computational linear algebra#

Before Class:

During and After Class:

  • Take notes (on paper or a tablet computer)

  • Complete this notebook and submit your answers via Gradescope

Other References

import scipy.linalg as la
import numpy as np

3.3.1. Throwback: Solving a Linear System by Hand#

We want to solve the following system:

\[3 x_1 + 2 x_2 + x_3 = 6\]
\[-x_1 + 4 x_2 + 5x_3 = 8\]
\[2x_1 -8 x_2 + 10 x_3 = 4\]

In matrix form this looks like

\[\begin{split}\underbrace{\begin{pmatrix} 3 & 2 & 1\\ -1 & 4 & 5\\ 2& -8 & 10\end{pmatrix}}_{\mathbf{A}} ~ \underbrace{\begin{pmatrix} x_1\\x_2\\x_3\end{pmatrix}}_{\mathbf{x}} = \underbrace{\begin{pmatrix}6\\8\\4\end{pmatrix}}_{\mathbf{b}} \end{split}\]

Another way to write this is a notational shorthand called an augmented matrix, where we put the righthand side into the matrix:

\[\begin{split} \left(\begin{array}{ccc|c} 3 & 2 & 1 &6\\ -1 & 4 & 5&8\\ 2& -8 & 10&4\end{array}\right) \end{split}\]

Now let’s apply Gaussian elimination to solve this system of equations with pencil and paper. If you get stuck, look at Gaussian Elimination. Turn in your work via Gradescope.
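If the mechanics feel rusty, here is a sketch of just the *first* elimination step in Python, using the augmented matrix above (the remaining steps, and the backsolve, are left to you):

```python
import numpy as np

# Augmented matrix [A | b] from above, as floats
aug = np.array([[3.0, 2, 1, 6],
                [-1, 4, 5, 8],
                [2, -8, 10, 4]])

# One elimination step: zero out the entry below the first pivot.
# Multiplier m = aug[1,0] / aug[0,0] = -1/3
m = aug[1, 0] / aug[0, 0]
aug[1, :] = aug[1, :] - m * aug[0, :]
print(aug)  # row 1 now starts with 0
```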

3.3.2. Solving a Linear System with Python#

Now let’s solve this system of equations with Python. Tip: scipy is a little more sophisticated than numpy. Read more here.

A = np.array([[3, 2, 1], [-1, 4, 5], [2, -8, 10]])
print("A=\n",A)

# Define b
# Add your solution here
print("b=\n",b)

# Calculate x using la.solve
# Add your solution here
print("x=\n",x)
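If the `la.solve` pattern is unfamiliar, here is a minimal sketch on a different 2x2 system (this matrix is made up for illustration and is not the assignment system):

```python
import numpy as np
import scipy.linalg as la

# Illustrative system (not the assignment system):
#    x + 2y = 5
#   3x + 4y = 6
A_demo = np.array([[1.0, 2], [3, 4]])
b_demo = np.array([5.0, 6])

# la.solve factorizes A_demo and solves in one call
x_demo = la.solve(A_demo, b_demo)
print(x_demo)

# The residual A x - b should be (numerically) zero
print(A_demo @ x_demo - b_demo)
```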

We can also use Python to calculate the reduced row echelon form. First, let’s use sympy, which Savov uses too.

from sympy import Matrix

# Define A
A_ = Matrix([[3, 2, 1], [-1, 4, 5], [2, -8, 10]])
print("A=\n",A_)

# Define b
b_ = Matrix([6,8,4])
print("b=\n",b_)

# Create augmented matrix
aug_ = A_.row_join(b_)
print("aug=\n",aug_)

# Calculate reduced row echelon form
rref_ = aug_.rref()
print(rref_)

Sympy performs symbolic computations, which are great for teaching but often too slow for numerically solving large problems (e.g., computational fluid dynamics simulations). We'll focus on using scipy for most of this class, although you should be comfortable converting the reduced row echelon form of the augmented matrix back into the solution of the linear system.
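As a sketch of that conversion: when the left block of the RREF is the identity, the last column of the augmented matrix is the solution vector itself.

```python
from sympy import Matrix

# Same system as above
A_ = Matrix([[3, 2, 1], [-1, 4, 5], [2, -8, 10]])
b_ = Matrix([6, 8, 4])
aug_ = A_.row_join(b_)

# rref() returns (rref_matrix, pivot_columns)
R, pivots = aug_.rref()

# Left block is the identity, so the last column is the solution x
x_sym = R[:, -1]
print(x_sym)
```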

3.3.3. Extending Gaussian Elimination to Invert a Matrix#

Now let’s “open the black box” to see how scipy/numpy numerically calculates the inverse. Start by studying Gaussian Elimination which converts our by-hand Gaussian elimination calculation into pseudocode and then Python code.

As discussed on pg. 199 of Savov, we can calculate the inverse of a matrix \(\bf A\) by assembling the augmented matrix

\[ \begin{bmatrix} {\bf A} & {\bf I} \end{bmatrix} \]

and applying Gauss-Jordan elimination to obtain:

\[ \begin{bmatrix} {\bf I} & {\bf A}^{-1} \end{bmatrix}\]

Assignment:

  1. Modify the GaussElim and BackSub functions from Gaussian Elimination to perform Gauss-Jordan elimination. As written, these functions first perform Gaussian elimination to produce zeros below the diagonal in the augmented matrix, and then a backsolve is performed to solve the linear system. Modify them to instead convert the augmented matrix into reduced row echelon form, i.e., the left block of the augmented matrix should be the identity matrix.

  2. Test your function by solving the linear system example \({\bf A x} = {\bf b}\) used earlier in this notebook. In other words, you should be able to reproduce the RREF calculated above with sympy.

  3. Use your function to calculate the matrix inverse \({\bf A}^{-1}\).

  4. Tip: You do not need to worry about pivoting. While pivoting is important for numerical stability, it adds complexity.
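Before writing the full function, the \([{\bf A} ~ {\bf I}] \rightarrow [{\bf I} ~ {\bf A}^{-1}]\) recipe can be sanity-checked with sympy's rref on a small example (the 2x2 matrix here is just an illustration, not part of the assignment):

```python
from sympy import Matrix, eye

M = Matrix([[2, 1], [1, 1]])   # small invertible example matrix
aug = M.row_join(eye(2))       # build [M | I]
R, pivots = aug.rref()         # Gauss-Jordan -> [I | M^{-1}]
M_inv = R[:, 2:]               # right block is the inverse
print(M_inv)
print(M * M_inv)               # should be the identity
```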

def reduced_row_eschelon_form(aug,print_level=1):
    """Perform Gauss-Jordan elimination to convert
    an input augmented matrix into reduced row echelon form

    Args:
        aug: augmented matrix with dimensions N x M, M >= N
        print_level: 0 = none, 1 = minimal, 2 = every operation

    Returns:
        augmented matrix in reduced row echelon form
    """
    # Extract dimensions of A
    [Nrow, Ncol] = aug.shape
    
    # Check the dimensions
    assert Nrow <= Ncol, "Augmented matrix has too few columns"

    # Check the data type
    assert type(aug) == np.ndarray, "This function only supports numpy.array, do not use numpy.matrix"

    # Create a copy of the augmented matrix
    aug_matrix = aug.copy()
    
    if print_level >= 2:
        print("Let's get started")
        print("aug=\n",aug)

    # loop over the first Nrow columns
    for column in range(0,Nrow):
        
        # loop over the rows entries to the right of the pivot
        for row in range(column+1, Nrow):

            # Add your solution here

            if print_level >= 2:
                print("\nMultiply row",column,"by",scale,"and subtract from row",row)
                print("aug =\n",aug_matrix)

            

        # end loop over rows
    # end loop over columns

    if print_level >= 1:
        print("\nEliminated below the diagonal:\n",aug_matrix)

    # check augmented matrix is all zeros below the diagonal
    z_tol = 1E-8
    for r in range(Nrow):
        for c in range(0,r):
            assert np.abs(aug_matrix[r,c]) < z_tol, "\nWarning! Nonzero in position "+str(r)+","+str(c)

    # loop over the columns starting at the right (count backwards)
    for column in range(Nrow-1, -1, -1):

        # loop over the rows above the pivot column
        for row in range(0, column):

            # Add your solution here

            if print_level >= 2:
                print("\nMultiply row",column,"by",scale,"and subtract from row",row)
                print("aug =\n",aug_matrix)

        # end loop over rows
    # end loop over columns

    if print_level >= 1:
        print("\nEliminated above the diagonal:\n",aug_matrix)

    # rescale each row
    for row in range(0,Nrow):
        aug_matrix[row,:] = aug_matrix[row,:] / aug_matrix[row,row]

    if print_level >= 1:
        print("\nReduced row echelon form:\n",aug_matrix)

    return aug_matrix



# Create the augmented matrix and convert elements into floats (instead of ints)
aug = np.column_stack((A,b)) * 1.0
print("Augmented Matrix:\n",aug)

rref = reduced_row_eschelon_form(aug,print_level=1)

Let’s also try our example system from the beginning of class:

\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & 1 & 1\\ 2 & 3 & 0 & 3 \\ 0 & 0 & 4 & 4\end{array}\right) \end{split}\]
A2 = np.array([[1.0, 0, 1, 1], [2, 3, 0, 3], [0, 0, 4, 4]])

rref = reduced_row_eschelon_form(A2,print_level=2)

Finally, calculate the inverse of

\[\begin{split} {\bf A}_3 = \left(\begin{array}{ccc} 1 & 0 & 1 \\ 2 & 3 & 0 \\ 0 & 0 & 4 \end{array}\right) \end{split}\]

using your reduced row echelon code.

A3 = np.array([[1.0, 0, 1], [2, 3, 0], [0, 0, 4]])
print("A3 =\n",A3)

# Add your solution here

Let's verify our answer with scipy:

print("A3_inv=\n",la.inv(A3))
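A quick way to confirm the two answers agree is to check that the product of the matrix and its inverse recovers the identity:

```python
import numpy as np
import scipy.linalg as la

A3 = np.array([[1.0, 0, 1], [2, 3, 0], [0, 0, 4]])
A3_inv = la.inv(A3)

# A3 @ A3_inv should be the 3x3 identity (up to floating-point roundoff)
print(np.allclose(A3 @ A3_inv, np.eye(3)))
```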

Now let’s try to invert:

\[\begin{split} {\bf A}_4 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \end{split}\]
A4 = np.array([[0, 1.0], [1, 0]])
aug = np.column_stack((A4, np.eye(2)))
rref = reduced_row_eschelon_form(aug, print_level=2)

What happened? How can we fix it? Hint: Look at Gaussian Elimination for ideas.

Note: You do not need to modify your code to implement the fix; just write a sentence about what feature is missing from our function reduced_row_eschelon_form and why adding it would fix the problem.
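To see concretely why the calculation breaks and how the missing feature rescues it, here is a sketch that performs the repair by hand before eliminating (illustration only; you do not need to add this to your function):

```python
import numpy as np

A4 = np.array([[0, 1.0], [1, 0]])
aug = np.column_stack((A4, np.eye(2)))

# The (0,0) pivot is zero, so dividing by it fails.
# Swapping rows 0 and 1 places a nonzero entry on the pivot:
aug[[0, 1], :] = aug[[1, 0], :]
print(aug)

# After the swap the left block is already the identity, so the
# right block is A4^{-1} (this permutation matrix is its own inverse)
print(aug[:, 2:])
```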

Your Answer:

3.3.4. Understanding the Determinant#

Consider the matrices

\[\begin{split} {\bf A} = \begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix}, \qquad {\bf B} = \begin{bmatrix} 0 & 3 \\ 2 & 1 \end{bmatrix} \end{split}\]

Calculate det(\({\bf A}\)) with paper and pencil.

Use numpy and scipy to numerically verify the following properties of the determinant.

A = np.array([[2, 1], [3, 4]])
B = np.array([[0, 3], [2, 1]])

# Add your solution here

det(\({\bf A}\)) = det(\({\bf A}^T\))

Does the numerical answer match your pencil and paper calculation of det(\({\bf A}\))?

print("det(A) =", la.det(A))

# Add your solution here

Is \({\bf A}\) invertible?

Answer and explain in a sentence:

det(\({\bf I}\)) = 1

# Add your solution here

det(\(\bf A B\)) = det(\(\bf A\))det(\(\bf B\)) = det(\(\bf B\))det(\(\bf A\)) = det(\(\bf B A\))

print("det(AB) =",la.det(A @ B))

# Add your solution here

Is \({\bf B}\) invertible?

Answer and explain in a sentence:

det(\(\alpha \bf A\)) = \(\alpha^n\) det(\(\bf A\)), where \(\alpha\) is a scalar

# Add your solution here

det(\({\bf A}\)) = \(\prod_i \lambda_i\), where \(\lambda_i\) are the eigenvalues of \(\bf A\)

w = la.eigvals(A)

# Add your solution here
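As an illustration of the eigenvalue property on a matrix that is not part of the assignment, the product of the eigenvalues should match the determinant:

```python
import numpy as np
import scipy.linalg as la

M = np.array([[4.0, 1], [2, 3]])   # illustrative matrix, not the assignment's A
w = la.eigvals(M)                  # eigenvalues (returned as complex numbers)

# Product of eigenvalues vs. determinant: both should equal 4*3 - 1*2 = 10
print(np.prod(w).real)
print(la.det(M))
```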

3.3.5. Submission#

Please submit two files:

  1. Submit your pencil and paper calculations and pseudocode as requested to Gradescope (one assignment)

  2. Download this notebook as a .ipynb file and submit it to Gradescope (second assignment)

Reminders:

  • Be sure to upload your pencil and paper calculations and pseudocode to Gradescope as a single PDF for grading.

  • Review the homework formatting guidelines in the syllabus.

  • Review the commenting and pseudocode guidelines.