In the previous topic, we learnt about the polar decomposition and its implementation using S2. In this topic, let us explore various other decompositions and their respective implementations using S2.

## What is a Matrix Decomposition?

Reducing a matrix to its constituent parts is known as a **matrix decomposition**.

Decomposition simplifies more complex operations on matrices: it is usually easier to perform them on the decomposed factors than on the original matrix itself.

Let us take a closer look at various matrix decomposition methods.

## LU (Lower–Upper) Decomposition

Gauss elimination can be implemented more efficiently using LU decomposition, especially when solving multiple systems with the same left-hand side, i.e., solving \(Ax = b\) repeatedly for different values of \(b\) with the same \(A\).

It writes a matrix as the product of a lower triangular matrix and an upper triangular matrix; the product sometimes includes a permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations and to calculate determinants.

This method decomposes a square matrix \(A\) into two matrices, \(L\) and \(U\):

\(A = LU\)

where:

- \(L\) is a lower-triangular matrix and
- \(U\) is an upper-triangular matrix.
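Once \(A = LU\) is known, \(Ax = b\) splits into two triangular systems: \(Ly = b\) (forward substitution) followed by \(Ux = y\) (back substitution), each costing only \(O(n^2)\) operations. The sketch below shows this in plain Kotlin, independent of the S2 library; the function names and the tiny hand-factored example are our own illustration, not S2 API.

```kotlin
// Solving Ax = b from a precomputed A = LU factorization (no pivoting).
// Plain-Kotlin sketch, independent of the S2 library.
fun forwardSubstitution(L: Array<DoubleArray>, b: DoubleArray): DoubleArray {
    val n = b.size
    val y = DoubleArray(n)
    for (i in 0 until n) {
        var s = b[i]
        for (j in 0 until i) s -= L[i][j] * y[j]
        y[i] = s / L[i][i] // solve Ly = b from the top row down
    }
    return y
}

fun backSubstitution(U: Array<DoubleArray>, y: DoubleArray): DoubleArray {
    val n = y.size
    val x = DoubleArray(n)
    for (i in n - 1 downTo 0) {
        var s = y[i]
        for (j in i + 1 until n) s -= U[i][j] * x[j]
        x[i] = s / U[i][i] // solve Ux = y from the bottom row up
    }
    return x
}

// A = [[2, 1], [4, 5]] factors as L = [[1, 0], [2, 1]], U = [[2, 1], [0, 3]]
val L = arrayOf(doubleArrayOf(1.0, 0.0), doubleArrayOf(2.0, 1.0))
val U = arrayOf(doubleArrayOf(2.0, 1.0), doubleArrayOf(0.0, 3.0))
val x = backSubstitution(U, forwardSubstitution(L, doubleArrayOf(3.0, 9.0)))
println(x.toList()) // [1.0, 1.0] solves Ax = b for b = (3, 9)
```

Reusing the same \(L\) and \(U\) for each new \(b\) is what makes LU cheaper than redoing Gaussian elimination from scratch every time.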

**Doolittle Algorithm:**

Doolittle’s method provides an alternative way to factor A into an LU decomposition without going through the hassle of Gaussian Elimination.

A square matrix whose leading principal minors are all non-zero can be decomposed into a lower triangular matrix and an upper triangular matrix, i.e., **\([A] = [L][U]\)**; more generally, a row permutation may be needed first.

- In this algorithm, we assume that, for an \(n \times n\) matrix \(A\), there exists an \(LU\) decomposition and write the form of \(L\) and \(U\) explicitly.
- Now, we solve for \(L\) and \(U\) systematically from the equation \(A = LU \).
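Solving \(A = LU\) systematically gives, at step \(k\), row \(k\) of \(U\) and then column \(k\) of \(L\). A minimal sketch in plain Kotlin (independent of the S2 library; the function name `doolittleLU` is our own, and no pivoting is done, so non-zero pivots are assumed):

```kotlin
// A minimal Doolittle sketch: fill row k of U, then column k of L,
// using only entries computed in earlier steps. No pivoting.
fun doolittleLU(A: Array<DoubleArray>): Pair<Array<DoubleArray>, Array<DoubleArray>> {
    val n = A.size
    val L = Array(n) { DoubleArray(n) }
    val U = Array(n) { DoubleArray(n) }
    for (k in 0 until n) {
        L[k][k] = 1.0 // Doolittle convention: unit diagonal on L
        for (j in k until n) { // row k of U
            var s = 0.0
            for (m in 0 until k) s += L[k][m] * U[m][j]
            U[k][j] = A[k][j] - s
        }
        for (i in k + 1 until n) { // column k of L
            var s = 0.0
            for (m in 0 until k) s += L[i][m] * U[m][k]
            L[i][k] = (A[i][k] - s) / U[k][k]
        }
    }
    return Pair(L, U)
}

// A = [[4, 3], [6, 3]] factors as L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
val (dl, du) = doolittleLU(arrayOf(doubleArrayOf(4.0, 3.0), doubleArrayOf(6.0, 3.0)))
println(dl[1][0]) // 1.5
println(du[1][1]) // -1.5
```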

```kotlin
/*LU decomposition by the Doolittle process*/
%use s2

// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 2.0, 3.0),
    doubleArrayOf(4.0, 5.0, 6.0),
    doubleArrayOf(7.0, 8.0, 9.0)))

// LU decomposition
val LU = Doolittle(A1)
val L = LU.L()
val U = LU.U()
println("doolittle LU decomposition:\nL: $L,\nU: $U\n")

// verification
val A2 = L.multiply(U) // L*U = A1 = A2
val ZERO = A1.minus(A2)
val det: Double = MatrixMeasure.det(ZERO)
println("A1 - A2 = $det")
```

Output:

```
doolittle LU decomposition:
L: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, 0.000000, 0.000000,
[2,] 0.142857, 1.000000, 0.000000,
[3,] 0.571429, 0.500000, 1.000000,
U: 3x3
     [,1] [,2] [,3]
[1,] 7.000000, 8.000000, 9.000000,
[2,] 0.000000, 0.857143, 1.714286,
[3,] 0.000000, 0.000000, 0.000000,

A1 - A2 = 0.0
```

## QR Decomposition

This method decomposes a matrix \(A\) (not limited to square matrices) into two matrices, \(Q\) and \(R\):

\(A = QR\)

where:

- \(A\) is an \(m \times n\) matrix,
- \(Q\) is an \(m \times m\) orthogonal matrix and
- \(R\) is an \(m \times n\) upper-triangular matrix.

**Gram–Schmidt Process:**

The Gram–Schmidt orthonormalization process is a procedure for orthonormalizing a set of vectors in an inner product space, most often the Euclidean space \(R^n\) equipped with the standard inner product. It appears throughout mathematics, notably in linear algebra and numerical analysis.

In this process, we will consider vectors as the columns of a matrix \(A\) whose decomposition we wish to compute:

\(A = [a_1 | a_2 | \dots | a_n] \)

\(u_1 = a_1, \quad e_1=\frac{u_1}{\|u_1\|},\)

\(u_2 = a_2 - (a_2 \cdot e_1)e_1, \quad e_2=\frac{u_2}{\|u_2\|},\)

\(u_{k+1} = a_{k+1} - (a_{k+1} \cdot e_1)e_1 - \dots - (a_{k+1} \cdot e_k)e_k, \quad e_{k+1}=\frac{u_{k+1}}{\|u_{k+1}\|}.\)

The resulting QR Decomposition:

\(A=\begin{bmatrix}a_1 & | & a_2 & | & \dots & | & a_n \end{bmatrix}=\begin{bmatrix}e_1 & | & e_2 & | & \dots & | & e_n \end{bmatrix} \begin{bmatrix}a_1 \cdot e_1 & a_2 \cdot e_1 & \dots & a_n \cdot e_1 \\ 0 & a_2 \cdot e_2 & \dots & a_n \cdot e_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_n \cdot e_n \end{bmatrix}\)

\( \implies A = QR\)
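The recurrence above translates almost line for line into code. Below is a classical Gram–Schmidt sketch in plain Kotlin (independent of the S2 library; the function name `gramSchmidtQR` is our own), run on the same \(2 \times 2\) matrix as the S2 example in this section. It assumes the columns of \(A\) are linearly independent.

```kotlin
import kotlin.math.sqrt

// Classical Gram–Schmidt QR: the columns of Q are the orthonormal vectors e_j,
// and R stores the projections r_ij = a_j · e_i (upper triangular).
fun gramSchmidtQR(A: Array<DoubleArray>): Pair<Array<DoubleArray>, Array<DoubleArray>> {
    val m = A.size
    val n = A[0].size
    val Q = Array(m) { DoubleArray(n) }
    val R = Array(n) { DoubleArray(n) }
    for (j in 0 until n) {
        val u = DoubleArray(m) { A[it][j] } // start from column a_j
        for (i in 0 until j) {
            var dot = 0.0
            for (k in 0 until m) dot += A[k][j] * Q[k][i]
            R[i][j] = dot // r_ij = a_j · e_i
            for (k in 0 until m) u[k] -= dot * Q[k][i] // subtract projection
        }
        var norm = 0.0
        for (k in 0 until m) norm += u[k] * u[k]
        R[j][j] = sqrt(norm) // ||u_j||
        for (k in 0 until m) Q[k][j] = u[k] / R[j][j] // e_j = u_j / ||u_j||
    }
    return Pair(Q, R)
}

// A = [[3, 2], [1, 2]], the matrix used in the S2 example in this section
val (q, r) = gramSchmidtQR(arrayOf(doubleArrayOf(3.0, 2.0), doubleArrayOf(1.0, 2.0)))
println("${r[0][0]} ${r[0][1]} ${r[1][1]}") // ≈ 3.162278 2.529822 1.264911
```

In exact arithmetic this reproduces \(R\) from the S2 output; in floating point, the modified Gram–Schmidt variant (projecting against the running residual) is numerically more stable.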

```kotlin
/*QR decomposition by the Gram-Schmidt process*/
%use s2

// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(3.0, 2.0),
    doubleArrayOf(1.0, 2.0)))

// QR decomposition by the Gram-Schmidt
val GS = GramSchmidt(A1, true, 0.0) // true: parallel computation
val Q = GS.Q()
val R = GS.R()
println("gram-schmidt QR decomposition:\nQ: $Q,\nR: $R\n")

// verification
val A2 = Q.multiply(R) // QR = A1 = A2
val det: Double = MatrixMeasure.det(A1.minus(A2))
println("A1 - A2 = $det")
```

Output:

```
gram-schmidt QR decomposition:
Q: 2x2
     [,1] [,2]
[1,] 0.948683, -0.316228,
[2,] 0.316228, 0.948683,
R: 2x2
     [,1] [,2]
[1,] 3.162278, 2.529822,
[2,] 0.000000, 1.264911,

A1 - A2 = 0.0
```

## Cholesky Decomposition

Cholesky decomposition, or factorization, is a powerful technique widely used in numerical linear algebra. It decomposes a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose; the factors can then be used to perform subsequent algebraic operations efficiently.

This method of decomposition applies to square symmetric matrices whose eigenvalues are all greater than zero (positive-definite matrices). The Cholesky decomposition is defined as follows:

\(A = LL^{T}\)

where:

- \(A\) is the square symmetric matrix being decomposed,
- \(L\) is the lower triangular matrix, and
- \(L^T\) is the transpose of \(L\).

This decomposition can also be written as the product of the upper triangular matrix as below:

\(A = U^TU\)

where \(U\) is the upper triangular matrix.
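The factor \(L\) can be computed directly, one row at a time, from the equation \(A = LL^T\) (the Cholesky–Banachiewicz ordering). A plain-Kotlin sketch, independent of the S2 library (the function name `choleskyL` is our own); it assumes \(A\) is symmetric positive definite, so the square roots are of positive numbers:

```kotlin
import kotlin.math.sqrt

// Cholesky–Banachiewicz sketch: compute L row by row from A = L Lᵀ.
// Assumes A is symmetric positive definite.
fun choleskyL(A: Array<DoubleArray>): Array<DoubleArray> {
    val n = A.size
    val L = Array(n) { DoubleArray(n) }
    for (i in 0 until n) {
        for (j in 0..i) {
            var s = 0.0
            for (k in 0 until j) s += L[i][k] * L[j][k]
            L[i][j] = if (i == j) sqrt(A[i][i] - s) // diagonal entry
                      else (A[i][j] - s) / L[j][j]  // below-diagonal entry
        }
    }
    return L
}

// A = [[4, 2], [2, 3]] gives L = [[2, 0], [1, sqrt(2)]]
val cl = choleskyL(arrayOf(doubleArrayOf(4.0, 2.0), doubleArrayOf(2.0, 3.0)))
println("${cl[0][0]} ${cl[1][0]} ${cl[1][1]}") // 2.0, 1.0, then sqrt(2) ≈ 1.414214
```

Because only the lower triangle is computed, Cholesky costs roughly half the operations of a general LU decomposition.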

```kotlin
/*Cholesky decomposition*/
%use s2

// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(2.0, 1.0, 1.0),
    doubleArrayOf(1.0, 2.0, 1.0),
    doubleArrayOf(1.0, 1.0, 2.0)))

// Cholesky decomposition
val cholesky = Chol(A1)
println("cholesky decomposition: L: ${cholesky.L()}")

// verification
val L = cholesky.L()
val A2 = L.multiply(L.t()) // L * L' = A1 = A2
println("$A1 =\n$A2")
```

Output:

```
cholesky decomposition: L: 3x3
     [,1] [,2] [,3]
[1,] 1.414214, 0.000000, 0.000000,
[2,] 0.707107, 1.224745, 0.000000,
[3,] 0.707107, 0.408248, 1.154701,
3x3
     [,1] [,2] [,3]
[1,] 2.000000, 1.000000, 1.000000,
[2,] 1.000000, 2.000000, 1.000000,
[3,] 1.000000, 1.000000, 2.000000,
 =
3x3
     [,1] [,2] [,3]
[1,] 2.000000, 1.000000, 1.000000,
[2,] 1.000000, 2.000000, 1.000000,
[3,] 1.000000, 1.000000, 2.000000,
```

## Singular Value Decomposition

The singular value decomposition, or SVD, is one of the most powerful ideas in linear algebra. It is obtained by combining the polar decomposition and the spectral theorem we discussed in the previous topics.

**Suggestion:** Revise the polar decomposition and the spectral theorem for a better understanding of the SVD.

The polar decomposition tells us that any square matrix \(A\) is almost the same as some symmetric matrix, and the spectral theorem tells us that a symmetric matrix is almost the same as a simple scaling along the coordinate axes. (In both cases, the phrase “almost the same” disguises a composition with an orthogonal transformation.) We should be able to combine these ideas to conclude that any square matrix is basically the same as a simple scaling along the coordinate axes!

To be more precise, let’s suppose that \(A\) is a square matrix, then the polar decomposition tells that:

\(A = R \sqrt{A^TA}\)

where \(R\) is an orthogonal matrix.

The spectral theorem tells that:

\( \sqrt{A^TA} = VDV^T\)

where:

- \(V\) is an orthogonal matrix, and
- \(D\) is a diagonal matrix with nonnegative diagonal entries.

Combining both of these equations, we get:

\(A = RVDV^T\)

Since a product of orthogonal matrices is orthogonal, we can define \(U = RV\) and obtain the singular value decomposition (SVD) of \(A\):

\(A = UDV^T\)

where, \(U\) and \(V\) are orthogonal matrices.

We can visualize this geometrically by illustrating the successive effects of each map in the composition \(UDV^T\): \(V^T\) rotates, \(D\) scales along the coordinate axes, and \(U\) rotates again. If we draw grid lines just before \(D\) is applied and propagate those grid lines through the other maps, we endow the domain and range of \(A\) with orthogonal sets of grid lines, with \(A\) mapping one set to the other.

**Definition:**

Suppose that \(A\) is an \(m \times n\) matrix with \(m \leq n\). Then there exist matrices \(U\) and \(V\) with orthonormal columns and a diagonal matrix \(D\) such that:

\(A = UDV^T\)

where:

- \(U\) is an \( m \times m\) matrix,
- \(D\) is an \( m \times m\) matrix, and
- \(V^T\) is an \( m \times n\) matrix.

Similarly, if \(n \leq m\), there exist \(U, V\), and \(D\) satisfying the above properties such that:

\(A = UDV^T\)

where:

- \(U\) is an \( m \times n\) matrix,
- \(D\) is an \( n \times n\) matrix, and
- \(V\) is an \( n \times n\) matrix.

We call \(A = UDV^T\) the singular value decomposition (or SVD) of \(A\). The diagonal entries of \(D\) are called the singular values of \(A\).
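One consequence of the definition: since \(A^TA = VD^2V^T\), the singular values are the square roots of the eigenvalues of \(A^TA\). For a \(2 \times 2\) matrix this can be carried out by hand, because a symmetric matrix \(\begin{bmatrix}a & b\\ b & c\end{bmatrix}\) has eigenvalues \(\frac{a+c}{2} \pm \sqrt{(\frac{a-c}{2})^2 + b^2}\). A plain-Kotlin sketch, independent of the S2 library (the function name `singularValues2x2` and the example matrix are our own):

```kotlin
import kotlin.math.sqrt

// Singular values of a 2x2 matrix A: square roots of the eigenvalues of AᵀA,
// using the closed-form eigenvalues of a symmetric 2x2 matrix.
fun singularValues2x2(A: Array<DoubleArray>): Pair<Double, Double> {
    // entries of AᵀA (symmetric, positive semi-definite)
    val a = A[0][0] * A[0][0] + A[1][0] * A[1][0]
    val b = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    val c = A[0][1] * A[0][1] + A[1][1] * A[1][1]
    val mean = (a + c) / 2.0
    val radius = sqrt((a - c) * (a - c) / 4.0 + b * b)
    return Pair(sqrt(mean + radius), sqrt(mean - radius)) // sigma1 >= sigma2
}

// A = [[3, 0], [4, 5]]: AᵀA = [[25, 20], [20, 25]], eigenvalues 45 and 5
val (s1, s2) = singularValues2x2(arrayOf(doubleArrayOf(3.0, 0.0), doubleArrayOf(4.0, 5.0)))
println("$s1 $s2") // sqrt(45) ≈ 6.708204 and sqrt(5) ≈ 2.236068
```

As a sanity check, \(\sigma_1 \sigma_2 = \sqrt{45 \cdot 5} = 15 = |\det A|\), as the SVD predicts. Production code computes the SVD directly (as S2 does below) rather than by forming \(A^TA\), which squares the condition number.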

```kotlin
/*SVD Decomposition*/
%use s2

// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(1.5, 2.0, 4.96, 6.37, 36.01),
    doubleArrayOf(3.0, 4.0, 99.0, 12.24, 369.6),
    doubleArrayOf(73.0, 63.4, 0.19, 24.0, 99.39),
    doubleArrayOf(25.36, 9.31, 66.78, 88.12, 36.03),
    doubleArrayOf(15.51, 63.25, 36.1, 45.68, 111.0),
    doubleArrayOf(55.0, 66.0, 12.3, 79.0, 121.0),
    doubleArrayOf(55.5, 66.9, 12.0, 79.98, 97.001),
    doubleArrayOf(55.0, 66.0, 12.0, 79.0, 97.0)))

// SVD decomposition
val svd = SVD(A1, true) // true: compute also U and V in addition to the singular values
val D = svd.D()
val U = svd.U()
val V = svd.V()
println("SVD decomposition:\nU: $U,\nD: $D,\nV: $V\n")

// verification
val A2 = U.multiply(D).multiply(V.t()) // UDV' = A1 = A2
val ZERO = A1.minus(A2)
println("$ZERO is 0")
```

Output:

```
SVD decomposition:
U: 8x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.075225, -0.034244, 0.007191, -0.008976, -0.191550,
[2,] -0.757550, -0.630250, -0.051853, 0.097192, -0.067278,
[3,] -0.251817, 0.259223, 0.513336, 0.518812, 0.568252,
[4,] -0.154635, 0.262590, -0.849993, 0.326621, 0.263552,
[5,] -0.281704, 0.145986, -0.029208, -0.781017, 0.526654,
[6,] -0.320250, 0.354995, 0.085016, -0.038076, -0.407829,
[7,] -0.275482, 0.402217, 0.039365, -0.042877, -0.247181,
[8,] -0.274460, 0.395411, 0.040333, -0.036741, -0.248243,
D: 5x5
     [,1] [,2] [,3] [,4] [,5]
[1,] 479.129823, 0.000000, 0.000000, 0.000000, 0.000000,
[2,] 0.000000, 193.023289, 0.000000, 0.000000, 0.000000,
[3,] 0.000000, 0.000000, 91.257422, 0.000000, 0.000000,
[4,] 0.000000, 0.000000, 0.000000, 34.952854, 0.000000,
[5,] 0.000000, 0.000000, 0.000000, 0.000000, 26.036639,
V: 5x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.160827, 0.463675, 0.267363, 0.796111, 0.232089,
[2,] -0.200538, 0.528220, 0.367073, -0.597987, 0.434093,
[3,] -0.222179, -0.133515, -0.666413, 0.053482, 0.697024,
[4,] -0.232306, 0.619347, -0.563828, -0.075841, -0.488660,
[5,] -0.911367, -0.323376, 0.178230, -0.002613, -0.181841,

8x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[2,] -0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[3,] 0.000000, -0.000000, 0.000000, 0.000000, -0.000000,
[4,] -0.000000, 0.000000, -0.000000, -0.000000, -0.000000,
[5,] -0.000000, -0.000000, -0.000000, 0.000000, -0.000000,
[6,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[7,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[8,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
is 0
```

## Hessenberg Decomposition

A Hessenberg decomposition factors a square matrix into the product of a unitary matrix \(Q\), a Hessenberg matrix \(H\) (one whose entries below the first subdiagonal are all zero), and the conjugate transpose \(Q^{*}\).

This method of decomposition is for square matrices and is defined as below:

\(A = QHQ^{T}\), equivalently \(H = Q^{T}AQ\)

where:

- \(A\) is a square matrix,
- \(H\) is a Hessenberg matrix, and
- \(Q\) is an orthogonal matrix.

In this case, \(QQ^T = Q^TQ = I\) and so \(Q^{-1} = Q^T\).
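The standard way to compute \(H\) is by Householder reflections: at each step \(k\), a reflector zeroes the part of column \(k\) below the first subdiagonal, and applying it on both sides preserves the eigenvalues. A plain-Kotlin sketch, independent of the S2 library (the function name `hessenbergH` is our own; it returns \(H\) only, not \(Q\)), run on the same matrix as the S2 example below:

```kotlin
import kotlin.math.sqrt

// Householder reduction to upper Hessenberg form H = QᵀAQ.
fun hessenbergH(A: Array<DoubleArray>): Array<DoubleArray> {
    val n = A.size
    val H = Array(n) { A[it].copyOf() }
    for (k in 0 until n - 2) {
        var alpha = 0.0
        for (i in k + 1 until n) alpha += H[i][k] * H[i][k]
        alpha = sqrt(alpha)
        if (alpha == 0.0) continue // column already reduced
        if (H[k + 1][k] > 0) alpha = -alpha // sign choice avoids cancellation
        val v = DoubleArray(n)
        v[k + 1] = H[k + 1][k] - alpha
        for (i in k + 2 until n) v[i] = H[i][k]
        var vv = 0.0
        for (i in k + 1 until n) vv += v[i] * v[i]
        // apply the reflector P = I - 2vvᵀ/(vᵀv) on the left (H <- PH) ...
        for (j in 0 until n) {
            var dot = 0.0
            for (i in k + 1 until n) dot += v[i] * H[i][j]
            for (i in k + 1 until n) H[i][j] -= 2.0 * dot / vv * v[i]
        }
        // ... and on the right (H <- HP), so the eigenvalues are preserved
        for (i in 0 until n) {
            var dot = 0.0
            for (j in k + 1 until n) dot += H[i][j] * v[j]
            for (j in k + 1 until n) H[i][j] -= 2.0 * dot / vv * v[j]
        }
    }
    return H
}

// Same matrix as the S2 example below
val hh = hessenbergH(arrayOf(
    doubleArrayOf(1.0, 5.0, 7.0),
    doubleArrayOf(3.0, 0.0, 6.0),
    doubleArrayOf(4.0, 3.0, 1.0)))
println(hh[1][0]) // -5.0: the reflected subdiagonal entry
println(hh[2][0]) // ~0.0: zeroed below the first subdiagonal
```

For a \(3 \times 3\) matrix a single reflector suffices; for larger matrices the loop runs over \(n-2\) columns. Hessenberg form is the usual first stage of the QR eigenvalue algorithm.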

```kotlin
/*Hessenberg Decomposition*/
%use s2

// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 5.0, 7.0),
    doubleArrayOf(3.0, 0.0, 6.0),
    doubleArrayOf(4.0, 3.0, 1.0)))

// Hessenberg decomposition
val decomp = HessenbergDecomposition(A1)
val H1 = decomp.H()
val Q = decomp.Q()
println("hessenberg decomposition:\nH: $H1,\nQ: $Q\n")

// verification
val H2 = Q.t().multiply(A1).multiply(Q) // Q'*A1*Q = H1 = H2
val ZERO = H1.minus(H2)
val det: Double = MatrixMeasure.det(ZERO)
println("H1 - H2 = $det")
```

Output:

```
hessenberg decomposition:
H: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, -8.600000, 0.200000,
[2,] -5.000000, 4.960000, -0.720000,
[3,] 0.000000, 2.280000, -3.960000,
Q: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, 0.000000, 0.000000,
[2,] 0.000000, -0.600000, -0.800000,
[3,] 0.000000, -0.800000, 0.600000,

H1 - H2 = 0.0
```