Elimination method, pointwise version



Primary authors of this description: A.V.Frolov, Vad.V.Voevodin (Section 2.2), A.M.Teplov (Section 2.4)

Contents

1 Properties and structure of the algorithm
  1.1 General description of the algorithm
  1.2 Mathematical description of the algorithm
  1.3 Computational kernel of the algorithm
  1.4 Macro structure of the algorithm
  1.5 Implementation scheme of the serial algorithm
  1.6 Serial complexity of the algorithm
  1.7 Information graph
  1.8 Parallelization resource of the algorithm
  1.9 Input and output data of the algorithm
  1.10 Properties of the algorithm
2 Software implementation of the algorithm
  2.1 Implementation peculiarities of the serial algorithm
  2.2 Locality of data and computations
    2.2.1 Locality of implementation
      2.2.1.1 Structure of memory access and a qualitative estimation of locality
      2.2.1.2 Quantitative estimation of locality
  2.3 Possible methods and considerations for parallel implementation of the algorithm
  2.4 Scalability of the algorithm and its implementations
    2.4.1 Scalability of the algorithm
    2.4.2 Scalability of the algorithm implementation
  2.5 Dynamic characteristics and efficiency of the algorithm implementation
  2.6 Conclusions for different classes of computer architecture
  2.7 Existing implementations of the algorithm
3 References


1 Properties and structure of the algorithm

1.1 General description of the algorithm

The elimination method is a variant of Gaussian elimination used for solving a system of linear algebraic equations (SLAE) [1][2] Ax = b, where

A = \begin{bmatrix}

a_{11} & a_{12}  & 0 &    \cdots & \cdots & 0 \\
a_{21} & a_{22}  & a_{23}&  \cdots & \cdots & 0 \\
0 &  a_{32} & a_{33}  &     \cdots & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & a_{n-1 n-2} & a_{n-1 n-1}  & a_{n-1 n} \\
0 & \cdots & \cdots & 0 & a_{n n-1}  & a_{n n} \\

\end{bmatrix}, x = \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \\ \end{bmatrix}, b = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \\ \end{bmatrix}

However, presentations of the elimination method [3] often use a different notation and numbering for the matrix and right-hand side of the system. For instance, the above SLAE can be written as

\begin{bmatrix}

c_{0} & -b_{0}  & 0      &  \cdots & \cdots & 0 \\
-a_{1} & c_{1}  & -b_{1} &  \cdots & \cdots & 0 \\
0      &  -a_{2} & c_{2} &  \cdots & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & -a_{N-1} & c_{N-1}  & -b_{N-1} \\
0 & \cdots & \cdots & 0 & -a_{N}  & c_{N} \\

\end{bmatrix}\begin{bmatrix} y_{0} \\ y_{1} \\ \vdots \\ y_{N} \\ \end{bmatrix} = \begin{bmatrix} f_{0} \\ f_{1} \\ \vdots \\ f_{N} \\ \end{bmatrix}

(here, N=n-1). If each equation is written separately, then we have

c_{0} y_{0} - b_{0} y_{1} = f_{0},
-a_{i} y_{i-1} + c_{i} y_{i} - b_{i} y_{i+1} = f_{i}, \quad 1 \le i \le N-1,
-a_{N} y_{N-1} + c_{N} y_{N} = f_{N}.

Here, we examine the so-called right-elimination, the variant of the elimination method in which the SLAE is processed from top to bottom and then in the reverse direction: one first goes top-down, eliminating the subdiagonal unknowns, and then bottom-up, eliminating the superdiagonal ones. There is also a variant, called the left-elimination, in which the SLAE is first processed bottom-up and then top-down. There is basically no difference between the two variants; consequently, we do not give a separate description of the left-elimination.
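For completeness, we reproduce the standard formulas of the left-elimination (they can be found in [3] and are given here only for reference). One calculates

\xi_{N} = a_{N}/c_{N}, \quad \eta_{N} = f_{N}/c_{N},
\xi_{i} = a_{i}/(c_{i}-b_{i}\xi_{i+1}), \quad \eta_{i} = (f_{i}+b_{i}\eta_{i+1})/(c_{i}-b_{i}\xi_{i+1}), \quad i = N-1, N-2, \cdots , 1,

and then obtains the solution from

y_{0} = (f_{0}+b_{0}\eta_{1})/(c_{0}-b_{0}\xi_{1}), \quad y_{i} = \xi_{i} y_{i-1} + \eta_{i}, \quad i = 1, 2, \cdots , N.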

Figure 1. Graph of the elimination method for n=8. The input and output data are not shown. The symbol / denotes division, and f denotes the operation a+bc or a-bc.

1.2 Mathematical description of the algorithm

In the notation introduced above, the forward elimination path consists in calculating the elimination coefficients

\alpha_{1} = b_{0}/c_{0}, \quad \beta_{1} = f_{0}/c_{0},
\alpha_{i+1} = b_{i}/(c_{i}-a_{i}\alpha_{i}), \quad i = 1, 2, \cdots , N-1,
\beta_{i+1} = (f_{i}+a_{i}\beta_{i})/(c_{i}-a_{i}\alpha_{i}), \quad i = 1, 2, \cdots , N.

Then, in the backward elimination path, one calculates the solution

y_{N} = \beta_{N+1},
y_{i} = \alpha_{i+1} y_{i+1} + \beta_{i+1}, \quad i = N-1, N-2, \cdots , 1, 0.
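As a small illustration (this example is not part of the original description), take N = 2, c_{0} = c_{1} = c_{2} = 2, a_{1} = a_{2} = b_{0} = b_{1} = 1, and f = (1, 0, 1)^{T}. The forward path gives

\alpha_{1} = 1/2, \quad \beta_{1} = 1/2,
\alpha_{2} = 1/(2 - 1/2) = 2/3, \quad \beta_{2} = (0 + 1/2)/(2 - 1/2) = 1/3,
\beta_{3} = (1 + 1/3)/(2 - 2/3) = 1,

and the backward path yields y_{2} = \beta_{3} = 1, y_{1} = (2/3) \cdot 1 + 1/3 = 1, y_{0} = (1/2) \cdot 1 + 1/2 = 1, which is easily verified by substitution into the system.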

According to the literature (see [3]), these formulas are equivalent to calculating a variant of the LU decomposition of the coefficient matrix combined with solving two bidiagonal systems via forward and back substitutions.
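This equivalence is easy to write out (the expansion below follows from the formulas above and is given here for convenience). Introduce the denominators d_{0} = c_{0} and d_{i} = c_{i}-a_{i}\alpha_{i}, i = 1, \cdots , N. Then

A = \begin{bmatrix}
d_{0}  &        &        &       \\
-a_{1} & d_{1}  &        &       \\
       & \ddots & \ddots &       \\
       &        & -a_{N} & d_{N} \\
\end{bmatrix}
\begin{bmatrix}
1 & -\alpha_{1} &        &             \\
  & 1           & \ddots &             \\
  &             & \ddots & -\alpha_{N} \\
  &             &        & 1           \\
\end{bmatrix} = LU;

the forward elimination path is the forward substitution for the bidiagonal system Lz = f (whose solution is z_{i} = \beta_{i+1}), and the backward path is the back substitution for Uy = z.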

1.3 Computational kernel of the algorithm

The computational kernel of this algorithm can be thought of as composed of two parts, namely, the forward and backward elimination paths. The kernel of the forward path consists of sequences of divisions, multiplications, and additions/subtractions. The kernel of the backward path contains only sequences of multiplications and additions.

Figure 2. Detailed graph of the elimination method for n=8 (each reciprocal is calculated only once). The input and output data are not shown. Each calculation of a reciprocal is marked by inv, and each multiplication is marked by mult. The operations repeated after a replacement of the right-hand side of the SLAE are shown in grey.

1.4 Macro structure of the algorithm

The macro structure of this algorithm can be represented as the combination of the forward and backward elimination paths. In addition, the forward path can also be split into two macro units, namely, the triangular decomposition of the coefficient matrix and the forward substitution for a bidiagonal SLAE. These units are executed simultaneously, that is, in parallel. The process of solving the bidiagonal SLAE uses the results produced by the triangular decomposition.

1.5 Implementation scheme of the serial algorithm

The method is executed as the following sequence of steps:

1. Initialize the forward elimination path:

\alpha_{1} = b_{0}/c_{0}, \beta_{1} = f_{0}/c_{0}

2. For i increasing from 1 to N-1, execute the forward path formulas:

\alpha_{i+1} = b_{i}/(c_{i}-a_{i}\alpha_{i}),
\beta_{i+1} = (f_{i}+a_{i}\beta_{i})/(c_{i}-a_{i}\alpha_{i}).

3. Initialize the backward elimination path:

y_{N} = (f_{N}+a_{N}\beta_{N})/(c_{N}-a_{N}\alpha_{N})

4. For i decreasing from N-1 to 0, execute the backward path formula:

y_{i} = \alpha_{i+1} y_{i+1} + \beta_{i+1}.

The formulas of the forward path contain, at each step, two divisions by the same expression. These divisions can be replaced by the calculation of a reciprocal followed by multiplications by it (this is exactly what the subroutine in Section 2.1 does; a sketch is given below).
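A minimal Fortran sketch of the forward path with this replacement, in the notation of Section 1.2 (the subroutine forwd and its array layout are our illustration, not part of the original text):

     subroutine forwd (a,b,c,f,alpha,beta,N)
     real a(N), b(0:N-1), c(0:N), f(0:N), alpha(N), beta(N+1)
     d = 1./c(0)
     alpha(1) = b(0)*d
     beta(1) = f(0)*d
     do 5 i=1,N-1
       d = 1./(c(i)-a(i)*alpha(i)) ! the reciprocal is computed once
       alpha(i+1) = b(i)*d ! a multiplication replaces a division
       beta(i+1) = (f(i)+a(i)*beta(i))*d ! and another one here
  5  continue
     beta(N+1) = (f(N)+a(N)*beta(N))/(c(N)-a(N)*alpha(N))
     return
     end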

1.6 Serial complexity of the algorithm

Consider a tridiagonal SLAE consisting of n equations with n unknowns. Solving such a SLAE by the elimination method in its (fastest) serial form requires 2n-1 divisions, 3n-3 additions/subtractions, and 3n-3 multiplications: the divisions produce the coefficients \alpha_{i} and \beta_{i}, while one multiply-add pair per step is spent on each of the denominators c_{i}-a_{i}\alpha_{i}, the numerators f_{i}+a_{i}\beta_{i}, and the backward path formulas. Thus, in terms of serial complexity, the elimination method is qualified as a linear complexity algorithm.

1.7 Information graph

The information graph of the elimination method is shown in fig. 1. An analysis indicates that this graph is virtually serial. During the forward path, two branches (namely, the left branch, corresponding to the matrix decomposition, and the central branch, responsible for the solution of the first bidiagonal system) can be executed in parallel. The right branch corresponds to the backward path. It is evident from the figure that not only the mathematical content of this process but also the structure of the algorithm graph and the direction of the corresponding data flows are in complete agreement with the appellation "backward path." The variant with divisions replaced by reciprocal number calculations is illustrated by the graph in fig. 2.

1.8 Parallelization resource of the algorithm

The parallel version of the elimination method as applied to solving a tridiagonal SLAE consisting of n equations with n unknowns requires that the following tiers be executed:

n division tiers (all tiers but one contain two divisions each),

2n - 2 multiplication tiers and the same number of addition/subtraction tiers (n-1 tiers contain two operations of each type, while the other n-1 tiers contain a single operation of each type).

Thus, in terms of the parallel form height, the elimination method is qualified as an algorithm of complexity O(n). In terms of the parallel form width, its complexity is 2.

1.9 Input and output data of the algorithm

Input data: tridiagonal matrix A (with entries a_{ij}), vector b (with components b_{i}).

Output data: vector x (with components x_{i}).

Size of the output data: n.


1.10 Properties of the algorithm

It is clearly seen that the ratio of the serial to parallel complexity is a constant (which is less than 2).

The computational power of the algorithm, understood as the ratio of the number of operations to the total size of the input and output data, is also a constant.

Within the framework of the chosen version, the algorithm is completely deterministic.

The elimination method is routinely used for solving SLAEs with diagonally dominant coefficient matrices. For such systems, the algorithm is guaranteed to be stable.
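For reference, a commonly used sufficient condition of this kind (the inequalities below are standard, see [3], and are not specific to this page) is

|c_{0}| \ge |b_{0}|, \quad |c_{N}| \ge |a_{N}|, \quad |c_{i}| \ge |a_{i}| + |b_{i}|, \quad 1 \le i \le N-1,

with at least one of the inequalities being strict. Under this condition, all the denominators c_{i}-a_{i}\alpha_{i} are nonzero and |\alpha_{i}| \le 1, so the backward path does not amplify rounding errors.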

Suppose that several systems with the same coefficient matrix have to be solved. Then the left branch of calculations (see the figures with the algorithm graph) need not be repeated because the LU decomposition of the coefficient matrix need not be recalculated. In such cases, the variant with divisions replaced by reciprocal calculations is preferable (see the companion sketch at the end of Section 2.1).


2 Software implementation of the algorithm

2.1 Implementation peculiarities of the serial algorithm

Different implementations of the algorithm are possible depending on how the coefficient matrix is stored (as a single array with three rows or three distinct arrays) and how the calculated coefficients are stored (in locations of the already used matrix entries or separately).

Let us give an example of a Fortran subroutine implementing the elimination method. Here, all the matrix entries are stored in a single array such that the three entries of each matrix row occupy adjacent memory locations, and the calculated coefficients are stored in place of the original matrix entries.

     subroutine progm (a,x,N)
     real a(3,0:N), x(0:N)
     a(2,0)=1./a(2,0)
     a(3,0)=-a(3,0)*a(2,0) ! alpha 1
     x(0)=x(0)*a(2,0) ! beta 1
     do 10 i=1,N-1
       a(2,i)=1./(a(2,i)+a(1,i)*a(3,i-1)) ! 1/(c i - a i * alpha i)
       a(3,i)=-a(3,i)*a(2,i) ! alpha i+1
       x(i)=(x(i)-a(1,i)*x(i-1))*a(2,i) ! beta i+1
 10  continue
     a(2,N)=1./(a(2,N)+a(1,N)*a(3,N-1))
     x(N)=(x(N)-a(1,N)*x(N-1))*a(2,N) ! y N
     do 20 i=N-1,0,-1
       x(i)=a(3,i)*x(i+1)+x(i) ! y i
 20  continue
     return
     end 
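If several systems with the same coefficient matrix are to be solved (see Section 1.10), only the grey operations of fig. 2 need to be repeated. A minimal sketch of a hypothetical companion subroutine (the name progm2 and the reuse convention are ours, not part of the original text): after a call to progm, a(2,i) holds the reciprocal of c_{i}-a_{i}\alpha_{i} and a(3,i) holds \alpha_{i+1}, so a new right-hand side stored in x can be processed as follows.

     subroutine progm2 (a,x,N)
     real a(3,0:N), x(0:N)
     x(0)=x(0)*a(2,0) ! beta 1
     do 10 i=1,N
       x(i)=(x(i)-a(1,i)*x(i-1))*a(2,i) ! beta i+1; for i=N this is y N
 10  continue
     do 20 i=N-1,0,-1
       x(i)=a(3,i)*x(i+1)+x(i) ! y i
 20  continue
     return
     end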

There are many simple implementations of the method on the web; see, for instance, Wikibooks - Algorithm implementation - Elimination method


2.2 Locality of data and computations

One can see from the algorithm graph that the spatial locality of the data is fair: all the arguments needed for an operation are calculated "nearby." The temporal locality, however, is not as good. Suppose that the cache cannot hold all the problem data; then repeated cache misses will occur during the calculations at the upper left corner of the SLAE. Thus, one recommendation to application engineers using the elimination method is to arrange the calculations so that all the elimination paths are sufficiently short and their data can be kept in the cache.

The elimination method is an example of a serial algorithm whose locality of computations is so high that it even becomes excessive [4]. Processor cores can hardly exploit their superscalar capabilities because the data needed by each basic operation of the method are produced by the immediately preceding operations. This sharply degrades the performance of the elimination method even on modern single-processor and single-core systems.


2.2.1 Locality of implementation

2.2.1.1 Structure of memory access and a qualitative estimation of locality

Figure 3. Elimination method, pointwise version. The general memory access profile

Figure 3 presents the memory access profile for an implementation of the pointwise version of the elimination method. This profile is formed by accesses to 5 arrays storing the three diagonals of the coefficient matrix, the vector of right-hand sides, and the resulting solution. Judging from the general profile, one can say that the subroutine consists of two stages. At the first stage, the entries of four arrays are traversed sequentially; the second stage is a similar sequential traversal performed in the reverse order.

Figure 4. An isolated fragment of the general profile (set out in green in fig. 3)

However, a closer look shows that the structure of the profile is somewhat more complicated (see fig. 4). At each step, some of the arrays are accessed several times at very close locations, which improves both the spatial and temporal locality. Thus, the general profile has a high spatial locality (because sequential traversals of the entries dominate here) and a medium temporal locality (because certain entries are immediately used again).


2.2.1.2 Quantitative estimation of locality

The basic fragment of the implementation used for obtaining the quantitative estimates (the function Kernel) and its start-up parameters are described on the corresponding Algowiki pages.

The first estimate is based on daps, which characterizes the number of memory accesses (reads and writes) per second. For memory interaction, this characteristic is an analog of the flops estimate; it largely estimates the performance of this interaction rather than locality. Nevertheless, it is a good source of information, particularly in comparison with the next characteristic, cvg.

Figure 5 presents the daps values for implementations of popular algorithms, arranged in increasing order (in general, the larger the daps value, the higher the efficiency). The daps value of the subroutine under discussion is fairly unexpected: according to the above qualitative analysis, its locality of memory access is reasonably high, yet daps estimates the performance of its memory interaction to be worse than that of the RandomAccess test! Such a discrepancy between locality and performance occurs rather rarely.

A detailed inspection of the source code reveals two reasons for this phenomenon. First, the iterations of the main loop are interdependent through memory; that is, each iteration uses data calculated in the preceding iteration. This interdependence seems to significantly deteriorate the performance of memory interaction, although it is rather good from the viewpoint of locality.

The second, unexpected, discovery is that one of the divisions considerably slows down the execution of the subroutine. After this division was replaced by a multiplication, the execution time was reduced by a factor of 2.5, although the assembler codes of the two versions were identical except for the obvious replacement of a single division by a multiplication. These features of the subroutine are not reflected in its locality properties, which explains the substantial gap between the locality estimate and the performance of memory interaction.

Figure 5. Comparison of the values of daps

The second characteristic, cvg, is intended to provide a more machine-independent locality estimate. It determines how often a program needs to fetch data into the cache memory. Accordingly, smaller cvg values correspond to less frequent fetches and better locality.

Figure 6 presents the cvg values for the same set of implementations, arranged in decreasing order (in general, the smaller the cvg value, the higher the locality). Here, cvg confirms our previous conclusions: the locality turns out to be fairly good, as was already noted in the qualitative analysis.

Figure 6. Comparison of the values of cvg


2.3 Possible methods and considerations for parallel implementation of the algorithm

As the analysis of the algorithm graph shows, the method can hardly be parallelized at all unless it is substantially modified. Consequently, there are two ways of adapting eliminations to parallel computers. If a problem involves many eliminations, the first way is to split it so as to obtain a fairly large number of independent eliminations and to assign a separate processor (core) to each of them; see the sketch below. The second way is to replace the elimination method by one of its parallel substitutes (the cyclic reduction method, serial-parallel variants, etc.).
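The first way can be illustrated by the following sketch (the driver subroutine progmm and the OpenMP directive are ours and only indicate the idea): M independent tridiagonal systems, stored in the same packed format as in Section 2.1, are distributed among the cores, and each system is solved by the serial subroutine progm.

     subroutine progmm (aa,xx,N,M)
     real aa(3,0:N,M), xx(0:N,M)
!$omp parallel do
     do 30 k=1,M
       call progm (aa(1,0,k),xx(0,k),N) ! each system is fully independent
 30  continue
     return
     end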


2.4 Scalability of the algorithm and its implementations

2.4.1 Scalability of the algorithm

There is no sense in discussing the scalability of the elimination method proper because it is completely non-parallel. However, we note that the scalability of its parallel substitutes should be analyzed in comparison with a single-processor implementation of the classical method rather than with single-processor implementations of these parallel variants themselves.

2.4.2 Scalability of the algorithm implementation

The scalability of the implementation was analyzed in accordance with the methodology described at http://algowiki-project.org/ru/Scalability_methodology. The experiments were conducted on the Lomonosov supercomputer [5] of the Moscow University Supercomputing Center. The variable launch parameters of the implementation and their ranges were as follows: the number of processors, 1; the domain size, varying from 10240 to 1024000 with the step 10240. The experiments yielded the following range for the efficiency of the implementation: the minimum efficiency is 0.019%, and the maximum efficiency is 0.0255%. The figures below show the performance and efficiency of this implementation as functions of the launch parameters.

Figure 7. Implementation of the algorithm: performance as a function of the vector size.

Figure 8. Implementation of the algorithm: efficiency as a function of the vector size.

The low efficiency is apparently caused by the excessive locality described in the section on the locality of data and computations.


2.5 Dynamic characteristics and efficiency of the algorithm implementation

Because the algorithm is essentially serial and excessively local, an analysis of its dynamic characteristics seems to be of little value and, therefore, was not carried out.


2.6 Conclusions for different classes of computer architecture

The elimination method is designed for computers of the classical von Neumann architecture; it cannot efficiently load even single-core systems supporting superscalar execution. To parallelize the solution of a tridiagonal SLAE, one should use one of the parallel substitutes of the elimination method, for instance, the most widespread cyclic reduction or the new serial-parallel variant, which has a longer critical path than cyclic reduction but a more regular graph structure.


2.7 Existing implementations of the algorithm

The elimination method is so simple that, despite its routine presence in standard linear algebra packages, most application researchers using it often write the corresponding program fragment themselves. Note, however, that packages usually implement the elimination method not in its "pure form" but as the pair "decomposition into two bidiagonal matrices" and "solution of a SLAE with an already factorized tridiagonal matrix." This is the case, for instance, in SCALAPACK and its predecessors.


3 References

[1] Voevodin V.V. Computational Foundations of Linear Algebra. Moscow: Nauka, 1977 (in Russian).

[2] Voevodin V.V., Kuznetsov Yu.A. Matrices and Computations. Moscow: Nauka, 1984 (in Russian).

[3] Samarskii A.A., Nikolaev E.S. Methods for Solving Grid Equations. Moscow: Nauka, 1978 (in Russian).

[4] Frolov A.V., Antonov A.S., Voevodin Vl.V., Teplov A.M. Comparison of different methods for solving the same problem using the Algowiki project methodology // Submitted to the conference "Parallel Computational Technologies (PaVT'2016)" (in Russian).

[5] Voevodin Vl., Zhumatiy S., Sobolev S., Antonov A., Bryzgalov P., Nikitenko D., Stefanov K., Voevodin Vad. The Lomonosov supercomputer in practice // Otkrytye Sistemy (Open Systems), 2012, No. 7, pp. 36-39 (in Russian).
