Orthogonalization method
Main authors of the description: Инжелевская Дарья Валерьевна (text), А.В. Фролов (editing)
The Gram–Schmidt orthogonalization is a method that constructs a set of orthogonal vectors [math]\mathbf{b}_1,\;\ldots,\;\mathbf{b}_N[/math] or a set of orthonormal vectors [math]\mathbf{e}_1,\;\ldots,\;\mathbf{e}_N[/math] from a given set of linearly independent vectors [math]\mathbf{a}_1,\;\ldots,\;\mathbf{a}_N[/math]. This is done in such a way that each vector [math]\mathbf{b}_j[/math] or [math]\mathbf{e}_j[/math] is a linear combination of the vectors [math]\mathbf{a}_1,\;\ldots,\;\mathbf{a}_j[/math]. The process may be used for obtaining the QR decomposition, where the original vectors are the columns of a given matrix and the columns of Q are the result of their orthogonalization. Thus, unlike the Givens (rotation) and Householder (reflection) methods, which reduce the original matrix to triangular form by left unitary/orthogonal transformations, the orthogonalization method reduces it to a unitary/orthogonal matrix by right non-orthogonal (triangular) transformations.
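In matrix terms, if [math]A[/math] has columns [math]\mathbf{a}_1,\;\ldots,\;\mathbf{a}_N[/math] and [math]Q[/math] has columns [math]\mathbf{e}_1,\;\ldots,\;\mathbf{e}_N[/math], then each [math]\mathbf{a}_j[/math] lies in the span of [math]\mathbf{e}_1,\;\ldots,\;\mathbf{e}_j[/math], so that
- [math]A = QR, \qquad r_{ij} = \langle \mathbf{e}_i, \mathbf{a}_j \rangle \;\; (i \le j), \qquad r_{ij} = 0 \;\; (i > j),[/math]
where [math]R[/math] is the upper triangular factor of the QR decomposition.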
Mathematical foundations of the method
The classical orthogonalization method for the QR decomposition of a square matrix (real version) is fairly simple. However, due to its instability, which manifests itself in the non-orthogonality of resulting systems, the method is very rarely used in practice.
Let [math]\mathbf{a}_1,\;\ldots,\;\mathbf{a}_N[/math] be linearly independent vectors. Define the projection of a vector [math]\mathbf{a}[/math] on (the direction of) a vector [math]\mathbf{b}[/math] by the formula [math]\mathbf{proj}_{\mathbf{b}}\,\mathbf{a} = {\langle \mathbf{a}, \mathbf{b} \rangle \over \langle \mathbf{b}, \mathbf{b}\rangle} \mathbf{b} ,[/math]
where [math]\langle \mathbf{a}, \mathbf{b} \rangle[/math] is the scalar product of the vectors [math]\mathbf{a}[/math] and [math]\mathbf{b}[/math].
In k-dimensional real space, the scalar product of the vectors [math]\mathbf{a} = [a_1, a_2, \ldots, a_k][/math] and [math]\mathbf{b} = [b_1, b_2, \ldots, b_k][/math] is defined as
- [math]\langle \mathbf{a}, \mathbf{b} \rangle=\sum_{i=1}^k a_ib_i=a_1b_1+a_2b_2+\cdots+ a_kb_k[/math].
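As an illustration of these two definitions, here is a minimal sketch in Python (plain lists are used for vectors; the helper names dot and proj are chosen for this example and are not part of the method's description):

 def dot(a, b):
     # Scalar product <a, b> of two real k-dimensional vectors
     return sum(ai * bi for ai, bi in zip(a, b))
 
 def proj(b, a):
     # Projection of a onto (the direction of) b: proj_b a = (<a, b> / <b, b>) * b
     coeff = dot(a, b) / dot(b, b)
     return [coeff * bi for bi in b]
 
 # Example: projecting a = [1, 2] onto b = [1, 0] gives [1.0, 0.0]
 print(proj([1, 0], [1, 2]))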
This operator projects the vector [math]\mathbf{a}[/math] collinearly with the vector [math]\mathbf{b}[/math].
The orthogonality of the vectors [math]\mathbf{b}_1[/math] and [math]\mathbf{b}_2[/math] is achieved at step (2) below.
The classical Gram–Schmidt process is performed as follows:
- [math] \begin{array}{lclr} \mathbf{b}_1 &=& \mathbf{a}_1 & (1)\\ \mathbf{b}_2 &=& \mathbf{a}_2 - \mathbf{proj}_{\mathbf{b}_1}\,\mathbf{a}_2 & (2)\\ \mathbf{b}_3 &=& \mathbf{a}_3 - \mathbf{proj}_{\mathbf{b}_1}\,\mathbf{a}_3 - \mathbf{proj}_{\mathbf{b}_2}\,\mathbf{a}_3 & (3)\\ \mathbf{b}_4 &=& \mathbf{a}_4 - \mathbf{proj}_{\mathbf{b}_1}\,\mathbf{a}_4 - \mathbf{proj}_{\mathbf{b}_2}\,\mathbf{a}_4 - \mathbf{proj}_{\mathbf{b}_3}\,\mathbf{a}_4 & (4)\\ &\vdots &&\\ \mathbf{b}_N &=& \mathbf{a}_N - \displaystyle\sum_{j=1}^{N-1} \mathbf{proj}_{\mathbf{b}_j}\,\mathbf{a}_N & (N) \end{array} [/math]
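A minimal Python sketch of these steps, reusing the dot and proj helpers defined above (the function name classical_gram_schmidt is chosen for this example):

 def classical_gram_schmidt(a_vectors):
     # Classical Gram-Schmidt: build orthogonal vectors b_1, ..., b_N
     # from the linearly independent vectors a_1, ..., a_N.
     b_vectors = []
     for a in a_vectors:
         b = list(a)
         # Subtract from a_j its projections onto all previously built b_i
         for b_prev in b_vectors:
             p = proj(b_prev, a)
             b = [bi - pi for bi, pi in zip(b, p)]
         b_vectors.append(b)
     return b_vectors
 
 # Example: the vectors [1, 1, 0], [1, 0, 1], [0, 1, 1] yield a pairwise orthogonal system
 print(classical_gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]]))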
From each vector [math]\mathbf{b}_j \;(j = 1, \ldots, N)[/math] a normalized vector can be obtained: [math]\mathbf{e}_j = {\mathbf{b}_j \over \| \mathbf{b}_j \|}[/math] (the normalized vector has the same direction as the original one and unit norm). The norm in this formula is the one consistent with the scalar product: [math]\| x \| = \sqrt{\langle x, x \rangle}[/math].
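The normalization step in the same sketch style (the helper name normalize is chosen for this example; dot is the scalar product defined earlier):

 from math import sqrt
 
 def normalize(b):
     # e_j = b_j / ||b_j||, with the norm induced by the scalar product
     norm = sqrt(dot(b, b))
     return [bi / norm for bi in b]
 
 # Orthonormal system obtained from the orthogonal one
 e_vectors = [normalize(b) for b in classical_gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])]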
The results of the Gram–Schmidt process:
[math]\mathbf{b}_1,\;\ldots,\;\mathbf{b}_N[/math] is a system of orthogonal vectors, or
[math]\mathbf{e}_1,\;\ldots,\;\mathbf{e}_N[/math] is a system of orthonormal vectors.
The form of the method most often used in practice is the variant of the orthogonalization method with reorthogonalization.
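The exact reorthogonalization scheme is not spelled out here; as an illustration only, a common variant repeats the projection-subtraction pass a second time (the "twice is enough" heuristic) to restore the orthogonality lost to rounding. The function name gram_schmidt_reorth and the parameter passes are chosen for this sketch, not taken from the description above:

 def gram_schmidt_reorth(a_vectors, passes=2):
     # Gram-Schmidt with reorthogonalization: the projection subtraction is
     # repeated (typically twice) to reduce the loss of orthogonality
     # caused by rounding errors.
     b_vectors = []
     for a in a_vectors:
         b = list(a)
         for _ in range(passes):
             for b_prev in b_vectors:
                 p = proj(b_prev, b)  # project the current b, not the original a
                 b = [bi - pi for bi, pi in zip(b, p)]
         b_vectors.append(b)
     return [normalize(b) for b in b_vectors]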