md"""
# Notebook for 2023-02-13
"""
using LinearAlgebra

md"""
## Pivoting

Row pivoting is necessary for Gaussian elimination in exact arithmetic because of the possibility that a leading submatrix will be singular. For example

$$\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$$

does not admit an unpivoted LU decomposition. If we perturb the (1,1) entry, we have a nearby matrix that *can* be factored without pivoting:

$$\begin{bmatrix} \delta & 1 \\ 1 & 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ \delta^{-1} & 1 \end{bmatrix}
\begin{bmatrix} \delta & 1 \\ 0 & 1-\delta^{-1} \end{bmatrix}.$$

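md"""
As a quick sanity check on that convention: `eps(Float64)` equals the gap between 1.0 and `nextfloat(1.0)`, and the unit roundoff is half of it.
"""

# eps(Float64) is the spacing from 1.0 to the next float;
# the unit roundoff is half that spacing.
eps(Float64), nextfloat(1.0) - 1.0, eps(Float64)/2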
begin
    δ = 1.0e-16
    Abad = [δ 1.0; 1.0 1.0]       # Nearby matrix with a tiny (1,1) pivot
    Lbad = [1.0 0.0; 1.0/δ 1.0]   # Unpivoted L has a huge multiplier
    Ubad = [δ 1.0; 0.0 1.0-1.0/δ]

    # Residual A - LU vs the elementwise bound ~ eps * |L| |U|
    Abad-Lbad*Ubad, eps(Float64)*abs.(Lbad)*abs.(Ubad)
end
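md"""
For contrast, here is a sketch of the same factorization done with row pivoting, using the built-in `lu` from `LinearAlgebra` (which pivots by default). The relative backward error is now on the order of machine epsilon.
"""

let
    # Pivoted LU of the same matrix; F.p is the row permutation vector
    F = lu(Abad)
    norm(Abad[F.p,:] - F.L*F.U)/norm(Abad)
end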

md"""
The usual strategy of *partial pivoting* chooses to eliminate variable $j$ using whichever row has the largest element in column $i$ of the Schur complement. This guarantees that the entries of $L$ are all bounded by 1 in magnitude.
"""
function my_pivoted_lu(A)

    n = size(A)[1]
    A = copy(A)         # Make a local copy to overwrite
    piv = zeros(Int, n) # Space for the pivot vector
    piv[1:n] = 1:n

    for j = 1:n-1

        # Find ipiv >= j to maximize |A(i,j)|
        ipiv = (j-1)+findmax(abs.(A[j:n,j]))[2]

        # Swap row ipiv and row j and record the pivot row
        A[ipiv,:], A[j,:] = A[j,:], A[ipiv,:]
        piv[ipiv], piv[j] = piv[j], piv[ipiv]

        # Compute multipliers and update trailing submatrix
        A[j+1:n,j] /= A[j,j]
        A[j+1:n,j+1:n] -= A[j+1:n,j]*A[j,j+1:n]'
    end
    UnitLowerTriangular(A), UpperTriangular(A), piv
end
begin
    A = rand(10,10)
    b = rand(10)
    L, U, p = my_pivoted_lu(A)

    # Relative residual of PA = LU, and max |L| entry (should be 1)
    norm(A[p,:]-L*U)/norm(A), maximum(abs.(L))
end
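md"""
Once we have the pivoted factorization $PA = LU$, a sketch of solving $Ax = b$: permute $b$, then do forward and backward substitution. The backslash solves below dispatch to substitution because of the `UnitLowerTriangular` and `UpperTriangular` wrappers.
"""

let
    # Solve Ax = b via the pivoted factorization: permute, then two triangular solves
    x = U \ (L \ b[p])
    norm(A*x - b)/norm(b)
end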