
Gaussian elimination, compact scheme for tridiagonal matrices and its modifications



Main author: Alexey Frolov

1 Compact scheme of Gaussian elimination for tri-diagonal matrices and its modifications

The triangular decomposition of a tri-diagonal matrix is based on the formulas of the same decomposition for general matrices [1][2]. Tri-diagonal matrices that occur in applied problems are usually diagonally dominant; consequently, there is no need to use permutations to ensure the numerical stability of the decomposition. By applying, for instance, the Cauchy–Binet formula [2] to the minors of the product of two matrices,

[math] A \begin{pmatrix} \alpha_{1} & \alpha_{2} & \cdots & \alpha_{k} \\ \beta_{1} & \beta_{2} & \cdots & \beta_{k} \\ \end{pmatrix} = \sum_{\begin{smallmatrix} \gamma_{1} \lt \gamma_{2} \lt \cdots \lt \gamma_{k} \end{smallmatrix}} L \begin{pmatrix} \alpha_{1} & \alpha_{2} & \cdots & \alpha_{k} \\ \gamma_{1} & \gamma_{2} & \cdots & \gamma_{k} \\ \end{pmatrix} U\begin{pmatrix} \gamma_{1} & \gamma_{2} & \cdots & \gamma_{k} \\ \beta_{1} & \beta_{2} & \cdots & \beta_{k} \\ \end{pmatrix} [/math]

one can show [1] that the [math]LU[/math]-decomposition of the tri-diagonal matrix

[math] A = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & \cdots & 0 \\ a_{21} & a_{22} & a_{23}& \cdots & \cdots & 0 \\ 0 & a_{32} & a_{33} & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & a_{n-1 n-2} & a_{n-1 n-1} & a_{n-1 n} \\ 0 & \cdots & \cdots & 0 & a_{n n-1} & a_{n n} \\ \end{bmatrix} [/math]

involves two bi-diagonal matrices:

[math] L = \begin{bmatrix} l_{11} & 0 & 0 & \cdots & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & \cdots & 0 \\ 0 & l_{32} & l_{33} & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & l_{n-1 n-2} & l_{n-1 n-1} & 0 \\ 0 & \cdots & \cdots & 0 & l_{n n-1} & l_{n n} \\ \end{bmatrix}, U = \begin{bmatrix} u_{11} & u_{12} & 0 & \cdots & \cdots & 0 \\ 0 & u_{22} & u_{23}& \cdots & \cdots & 0 \\ 0 & 0 & u_{33} & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & u_{n-1 n-1} & u_{n-1 n} \\ 0 & \cdots & \cdots & 0 & 0 & u_{n n} \\ \end{bmatrix} [/math]

As a consequence, the formulas of the compact scheme of Gaussian elimination and the formulas of its modifications are significantly simpler compared to the general case.

1.1 [math]LU[/math]-decomposition

1.1.1 Compact scheme of Gaussian elimination for tri-diagonal matrices

The compact scheme of Gaussian elimination for tri-diagonal matrices calculates the [math]LU[/math]-decomposition in which both factors are bi-diagonal matrices and all the diagonal entries of the lower bi-diagonal matrix [math]L[/math] are equal to 1:

[math] L = \begin{bmatrix} 1 & 0 & 0 & \cdots & \cdots & 0 \\ l_{21} & 1 & 0 & \cdots & \cdots & 0 \\ 0 & l_{32} & 1 & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & l_{n-1 n-2} & 1 & 0 \\ 0 & \cdots & \cdots & 0 & l_{n n-1} & 1 \\ \end{bmatrix} [/math]

The formulas of the algorithm are as follows:

[math]u_{11} = a_{11}[/math]

[math]u_{i i+1} = a_{i i+1}[/math], [math]\quad i \in [1, n-1][/math]

[math]l_{i+1 i} = a_{i+1 i} / u_{i i} [/math], [math]\quad i \in [1, n-1] [/math]

[math]u_{ii} = a_{ii} - l_{i i-1} u_{i-1 i}[/math], [math]\quad i \in [2, n] [/math]
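
As an illustration (not part of the original description), these formulas can be coded directly when the three diagonals of [math]A[/math] are stored as separate arrays. The function and array names below are assumptions made for this sketch; the unit diagonal of [math]L[/math] is kept implicit.

```python
def lu_tridiagonal(a_diag, a_lower, a_upper):
    """Compact LU decomposition of a tridiagonal matrix.

    a_diag  : [a_11, ..., a_nn]        (main diagonal, length n)
    a_lower : [a_21, ..., a_{n,n-1}]   (subdiagonal,   length n-1)
    a_upper : [a_12, ..., a_{n-1,n}]   (superdiagonal, length n-1)

    Returns (l_lower, u_diag, u_upper); the diagonal of L is implicitly 1.
    Assumes no pivoting is needed (e.g. a diagonally dominant matrix).
    """
    n = len(a_diag)
    u_diag = [0.0] * n
    u_upper = list(a_upper)          # u_{i,i+1} = a_{i,i+1}
    l_lower = [0.0] * (n - 1)

    u_diag[0] = a_diag[0]            # u_11 = a_11
    for i in range(n - 1):
        l_lower[i] = a_lower[i] / u_diag[i]                      # l_{i+1,i} = a_{i+1,i} / u_ii
        u_diag[i + 1] = a_diag[i + 1] - l_lower[i] * u_upper[i]  # u_{i+1,i+1} = a_{i+1,i+1} - l_{i+1,i} u_{i,i+1}
    return l_lower, u_diag, u_upper
```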

A detailed description of the properties of this algorithm is given on the corresponding page. The algorithm is purely serial and, if not modified, cannot be parallelized. There exists a block version of the decomposition designed for block tri-diagonal matrices.

The compact scheme of Gaussian elimination can easily be modified so that the diagonal entries of [math]U[/math] rather than [math]L[/math] are ones. This fact is, for instance, used in the [algorithm].
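
As an illustration (these formulas are not written out in the article and are given here only as a sketch), in the variant where [math]L[/math] is lower bi-diagonal with a general diagonal and [math]U[/math] is upper bi-diagonal with a unit diagonal, the analogous formulas are

[math]l_{11} = a_{11}[/math]

[math]l_{i+1 i} = a_{i+1 i}, \quad i \in [1, n-1][/math]

[math]u_{i i+1} = a_{i i+1} / l_{i i}, \quad i \in [1, n-1][/math]

[math]l_{ii} = a_{ii} - l_{i i-1} u_{i-1 i}, \quad i \in [2, n][/math]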

1.1.2 Stone doubling algorithm for the [math]LU[/math] decomposition of tri-diagonal matrices

The Stone doubling algorithm for the LU decomposition of a tridiagonal matrix is part of the Stone algorithm [3] for solving tri-diagonal SLAEs. It is the first parallel algorithm designed for the [math]LU[/math] decomposition of tri-diagonal matrices. The mathematical basis of the algorithm is a linear recursion relation for the leading principal minors of the matrix being factorized. This recursion relation can be written in matrix form, which makes it possible to exploit the associativity of matrix multiplication; for parallelization, Stone used the doubling scheme. However, the region of numerical stability of this algorithm is much narrower than that of the compact scheme of Gaussian elimination, even though both methods compute the same decomposition. Theoretically, a block version of the algorithm for block tri-diagonal matrices could exist; however, such a version was never developed because of the numerical instability of this approach.
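
As an illustration of this recursion relation (a standard property of tri-diagonal matrices, written out here as an assumed form rather than quoted from [3]): the leading principal minors [math]p_i[/math] of [math]A[/math] satisfy

[math]p_i = a_{ii} p_{i-1} - a_{i i-1} a_{i-1 i} p_{i-2}, \quad i \in [2, n], \quad p_0 = 1, \quad p_1 = a_{11},[/math]

which can be rewritten in matrix form as

[math]\begin{pmatrix} p_i \\ p_{i-1} \end{pmatrix} = \begin{pmatrix} a_{ii} & -a_{i i-1} a_{i-1 i} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} p_{i-1} \\ p_{i-2} \end{pmatrix}. [/math]

Since [math]u_{ii} = p_i / p_{i-1}[/math], the associativity of these [math]2 \times 2[/math] matrix products allows the prefix products, and hence the factors, to be computed with a doubling scheme.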

A more detailed discussion of the properties of the Stone algorithm can be found on the corresponding page.

1.1.3 Serial-parallel algorithm for the [math]LU[/math] decomposition of tri-diagonal matrices

The serial-parallel algorithm for the LU decomposition of a tridiagonal matrix was developed in [4] to parallelize the construction of the same [math]LU[/math] decomposition of tri-diagonal matrices that is found by the compact scheme of Gaussian elimination. Like the Stone method, the algorithm exploits the associativity of matrix multiplication, but it also uses normalization within the serial sequences of calculations. As a result, it has a larger region of stability than the Stone method. A block version for block tri-diagonal matrices exists; however, it requires that not only the diagonal blocks but also the blocks on one of the neighboring diagonals be nonsingular.

1.2 [math]LDU[/math] decomposition

Assume that a tri-diagonal matrix [math]A[/math] admits the [math]LU[/math] decomposition into the product of two bi-diagonal matrices such that all the diagonal entries of the lower bi-diagonal matrix [math]L[/math] are ones. Then one can easily find the [math]LDU[/math] decomposition of [math]A[/math] in which the upper bi-diagonal matrix also has ones as its diagonal entries:

[math] U = \begin{bmatrix} 1 & u_{12} & 0 & \cdots & \cdots & 0 \\ 0 & 1 & u_{23}& \cdots & \cdots & 0 \\ 0 & 0 & 1 & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & 1 & u_{n-1 n} \\ 0 & \cdots & \cdots & 0 & 0 & 1 \\ \end{bmatrix} [/math]

The formulas of the compact scheme of Gaussian elimination modified for finding this decomposition are as follows:

[math]d_{11} = a_{11} [/math]

[math]u_{i i+1} = a_{i i+1} / d_{i i}, \quad i \in [1, n-1] [/math]

[math]l_{i+1 i} = a_{i+1 i} / d_{i i} , \quad i \in [1, n-1] [/math]

[math]d_{ii} = a_{ii} - l_{i i-1} a_{i-1 i}, \quad i \in [2, n] [/math]
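
A minimal Python sketch of these modified formulas (same compact storage of the diagonals and the same kind of illustrative names as in the sketch above):

```python
def ldu_tridiagonal(a_diag, a_lower, a_upper):
    """LDU decomposition of a tridiagonal matrix with unit diagonals in L and U.

    Returns (l_lower, d_diag, u_upper); assumes no pivoting is needed.
    """
    n = len(a_diag)
    d_diag = [0.0] * n
    l_lower = [0.0] * (n - 1)
    u_upper = [0.0] * (n - 1)

    d_diag[0] = a_diag[0]                                        # d_11 = a_11
    for i in range(n - 1):
        u_upper[i] = a_upper[i] / d_diag[i]                      # u_{i,i+1} = a_{i,i+1} / d_ii
        l_lower[i] = a_lower[i] / d_diag[i]                      # l_{i+1,i} = a_{i+1,i} / d_ii
        d_diag[i + 1] = a_diag[i + 1] - l_lower[i] * a_upper[i]  # d_{i+1,i+1} = a_{i+1,i+1} - l_{i+1,i} a_{i,i+1}
    return l_lower, d_diag, u_upper
```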

In a similar way, by introducing additional divisions, one can modify both the Stone method and the serial-parallel decomposition algorithm.

The resulting decomposition is better adapted to solving tri-diagonal SLAEs because, in both substitution processes, divisions will not alternate with operations of multiplication, addition, and subtraction.
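
For example, with the [math]LDU[/math] factors from the previous sketch, solving [math]Ax = b[/math] splits into a division-free forward substitution, a single pass of divisions by the diagonal of [math]D[/math], and a division-free back substitution (illustrative Python; the names are assumptions):

```python
def solve_ldu_tridiagonal(l_lower, d_diag, u_upper, b):
    """Solve A x = b given A = L D U with unit bidiagonal L, U and diagonal D."""
    n = len(b)
    # Forward substitution L z = b: no divisions, since L has a unit diagonal.
    z = [0.0] * n
    z[0] = b[0]
    for i in range(1, n):
        z[i] = b[i] - l_lower[i - 1] * z[i - 1]
    # Scaling D w = z: all divisions are grouped into this single pass.
    w = [z[i] / d_diag[i] for i in range(n)]
    # Back substitution U x = w: again no divisions.
    x = [0.0] * n
    x[n - 1] = w[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = w[i] - u_upper[i] * x[i + 1]
    return x
```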

2 Compact scheme of triangular decomposition for tri-diagonal Hermitian matrices and its modifications

If a tri-diagonal matrix is Hermitian, a more symmetric decomposition can be constructed, which reduces the amount of memory required.

2.1 [math]LL^*[/math] decomposition

The natural choice for computing the [math]LL^*[/math] decomposition of a Hermitian matrix is the Cholesky algorithm. Adapted to the case where the original matrix is tri-diagonal and the factor is bi-diagonal, its formulas are as follows:

[math]l_{11} = \sqrt{a_{11}}, [/math]

[math]l_{i+1 i} = \frac{a_{i+1 i}}{l_{ii}}, \quad i \in [1, n-1], [/math]

[math]l_{ii} = \sqrt{a_{ii} - |l_{i i-1}|^2}, \quad i \in [2, n]. [/math]
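
A minimal Python sketch of these formulas, assuming the matrix is positive definite so that the expressions under the square roots are positive (the names are illustrative):

```python
import math

def cholesky_tridiagonal(a_diag, a_lower):
    """LL* decomposition of a Hermitian positive definite tridiagonal matrix.

    a_diag  : main diagonal (real numbers)
    a_lower : subdiagonal a_{i+1,i} (may be complex); the superdiagonal is its
              conjugate and is not needed explicitly.
    Returns (l_diag, l_lower).
    """
    n = len(a_diag)
    l_diag = [0.0] * n
    l_lower = [0.0] * (n - 1)

    l_diag[0] = math.sqrt(a_diag[0])                               # l_11 = sqrt(a_11)
    for i in range(n - 1):
        l_lower[i] = a_lower[i] / l_diag[i]                        # l_{i+1,i} = a_{i+1,i} / l_ii
        l_diag[i + 1] = math.sqrt(a_diag[i + 1] - abs(l_lower[i]) ** 2)  # l_{i+1,i+1}
    return l_diag, l_lower
```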

Similar modifications can be made in the Stone method and in the serial-parallel algorithm for the [math]LU[/math] decomposition. In practice, however, the scheme with square roots is rarely used because the more convenient [math]LDL^*[/math] decomposition is available.

2.2 [math]LDL^*[/math] decomposition

The [math]LDL^*[/math] decomposition is the most economical of all the decompositions of a tri-diagonal Hermitian matrix. Its formulas are as follows:

[math]d_{11} = a_{11} [/math]

[math]l_{i+1 i} = a_{i+1 i} / d_{i i} , \quad i \in [1, n-1] [/math]

[math]d_{ii} = a_{ii} - l_{i i-1} a_{i i-1}^*, \quad i \in [2, n] [/math]
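
A minimal Python sketch of these formulas (illustrative names; the superdiagonal is not needed explicitly because the matrix is Hermitian):

```python
def ldlt_tridiagonal(a_diag, a_lower):
    """LDL* decomposition of a Hermitian tridiagonal matrix.

    a_diag  : main diagonal (real numbers)
    a_lower : subdiagonal a_{i+1,i} (may be complex)
    Returns (l_lower, d_diag); the diagonal of L is implicitly all ones.
    Assumes all d_ii are nonzero, so no pivoting is required.
    """
    n = len(a_diag)
    d_diag = [0.0] * n
    l_lower = [0.0] * (n - 1)

    d_diag[0] = a_diag[0]                                   # d_11 = a_11
    for i in range(n - 1):
        l_lower[i] = a_lower[i] / d_diag[i]                 # l_{i+1,i} = a_{i+1,i} / d_ii
        # d_{i+1,i+1} = a_{i+1,i+1} - l_{i+1,i} * conj(a_{i+1,i}); the result is real
        d_diag[i + 1] = a_diag[i + 1] - (l_lower[i] * a_lower[i].conjugate()).real
    return l_lower, d_diag
```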

Similar modifications can be made in the Stone method and in the serial-parallel algorithm for the [math]LU[/math] decomposition. As with the [math]LDU[/math] decomposition, the [math]LDL^*[/math] decomposition is more convenient for solving SLAEs with tri-diagonal matrices because it is better adapted to solving the resulting bi-diagonal SLAEs.

3 Decompositions of special tri-diagonal matrices

Certain well-known operators of mathematical physics give rise to matrices of special form, including tri-diagonal matrices. For some of these matrices, analytical expressions for their characteristics, including the values of their principal minors, are known. Despite this, researchers dealing with such matrices often waste time applying the methods described above instead of using the readily available decomposition formulas.

The decompositions of special tri-diagonal matrices can be found on a separate page.

4 Use of decompositions

Usually, decompositions of tri-diagonal matrices into products of bi-diagonal matrices (possibly together with a diagonal factor) are used for solving tri-diagonal SLAEs derived from physical models. It is often helpful to evaluate the utility of a particular decomposition method by comparing it not only with other decomposition methods for tri-diagonal matrices but also with other approaches to solving tri-diagonal SLAEs, including those that do not use such decompositions, for instance, the cyclic reduction method or the bordering method.

5 References

  1. Voevodin V.V. Computational Foundations of Linear Algebra. Moscow: Nauka, 1977.
  2. Voevodin V.V., Kuznetsov Yu.A. Matrices and Computations. Moscow: Nauka, 1984.
  3. Stone H.S. An Efficient Parallel Algorithm for the Solution of a Tridiagonal Linear System of Equations // J. ACM, Vol. 20, No. 1 (Jan. 1973), pp. 27-38.
  4. Frolov A.V. Yet another method of parallelizing the sweep (progonka) algorithm using the associativity of operations // Russian Supercomputing Days: Proceedings of the International Conference (September 28-29, 2015, Moscow). Moscow: MSU Publishing, 2015, pp. 151-162. (In Russian.)