Notebook for 2023-02-13
Pivoting
Row pivoting is necessary for Gaussian elimination even in exact arithmetic, because a leading principal submatrix may be singular. For example,
$$\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$$
does not admit an unpivoted LU decomposition. If we perturb the (1,1) entry, we have a nearby matrix that can be factored without pivoting:
$$\begin{bmatrix} \delta & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \delta^{-1} & 1 \end{bmatrix} \begin{bmatrix} \delta & 1 \\ 0 & 1-\delta^{-1} \end{bmatrix}.$$
But this factorization has terrible backward error, which we can compare to the (approximate) bound derived in class: $|\hat{A}-A| \leq n \epsilon_{\mathrm{mach}} |L| |U|$ elementwise. Note that what we refer to as machine epsilon in class is sometimes called the unit roundoff: half the distance between 1.0 and the next largest floating point number. The Julia expression `eps(Float64)` gives the full distance between 1.0 and the next largest floating point number, which is twice the unit roundoff.
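We can see this concretely by forming the factors for a small $\delta$ and measuring the elementwise backward error $|\hat{L}\hat{U} - A|$. The following is a sketch of the computation; the choice $\delta = 10^{-16}$ is an assumption, though it is consistent with the outputs shown below.

```julia
δ = 1.0e-16
A = [δ 1.0; 1.0 1.0]

# Unpivoted LU factors, written out in closed form
L = [1.0  0.0;
     1/δ  1.0]
U = [δ    1.0;
     0.0  1.0-1/δ]

# Elementwise backward error |L̂Û - A|: the (2,2) entry is completely
# wrong, since fl(1 - 1/δ) loses the 1 to roundoff
abs.(L*U - A)
```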
```
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  1.0
```
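Comparing with the bound: with $n = 2$ and $\epsilon_{\mathrm{mach}}$ equal to half of `eps(Float64)`, the prefactor $n \epsilon_{\mathrm{mach}}$ is just `eps(Float64)`, so the elementwise bound can be computed as below. The bound is enormous in the (2,2) position, reflecting the enormous entries of $|L| |U|$.

```julia
# Elementwise bound n*ϵ_mach*|L|*|U|; for n = 2 this is eps(Float64)*|L|*|U|
eps(Float64) * abs.(L) * abs.(U)
```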
```
2×2 Matrix{Float64}:
 2.22045e-32  2.22045e-16
 2.22045e-16  4.44089
```
The usual strategy of partial pivoting eliminates variable $j$ using whichever row has the largest magnitude entry in column $j$ of the Schur complement. This guarantees that the entries of $L$ are all bounded by 1 in magnitude.
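Here is a minimal sketch of such a routine, assuming an interface that returns the triangular factors together with a permutation vector $p$ such that $A[p,:] \approx LU$; the notebook's actual `my_pivoted_lu` may differ in its details.

```julia
using LinearAlgebra

# LU with partial pivoting for a square matrix A.  Returns a unit lower
# triangular L, an upper triangular U, and a permutation vector p such
# that A[p,:] ≈ L*U.
function my_pivoted_lu(A)
    n = size(A, 1)
    A = copy(A)        # Work on a copy so the input is not overwritten
    p = collect(1:n)   # Track the row permutation
    for j = 1:n-1

        # Partial pivoting: swap in the row with the largest magnitude
        # entry in column j of the Schur complement
        jj = j-1 + argmax(abs.(A[j:n, j]))
        A[[j, jj], :] = A[[jj, j], :]
        p[[j, jj]] = p[[jj, j]]

        # Eliminate below the pivot; all multipliers have magnitude ≤ 1
        A[j+1:n, j] /= A[j, j]
        A[j+1:n, j+1:n] -= A[j+1:n, j] * A[j, j+1:n]'
    end
    UnitLowerTriangular(A), UpperTriangular(A), p
end
```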
```
my_pivoted_lu (generic function with 1 method)
```
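A quick sanity check: factoring a test matrix, the relative backward error should be a small multiple of machine epsilon. The random $10 \times 10$ matrix here is a hypothetical stand-in; the notebook's actual test matrix is not shown.

```julia
# Relative backward error ‖A[p,:] - LU‖/‖A‖ for a random test matrix
A = rand(10, 10)
L, U, p = my_pivoted_lu(A)
norm(A[p,:] - L*U) / norm(A)
```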
```
9.35209e-17
```
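The last output presumably checks the partial pivoting guarantee above; a sketch, reusing the factor `L` from the previous cell:

```julia
# Largest magnitude entry of L; the unit diagonal means this is always
# at least 1, and partial pivoting guarantees it is exactly 1
maximum(abs.(L))
```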
```
1.0
```