Research output: Contribution to journal › Article › Peer review
Parallel implementations of randomized vector algorithm for solving large systems of linear equations. / Sabelfeld, Karl K.; Kireev, Sergey; Kireeva, Anastasiya.
In: Journal of Supercomputing, Vol. 79, 07.2023, p. 10555-10569.
TY - JOUR
T1 - Parallel implementations of randomized vector algorithm for solving large systems of linear equations
AU - Sabelfeld, Karl K.
AU - Kireev, Sergey
AU - Kireeva, Anastasiya
N1 - The work is supported by the Russian Science Foundation, Grant 19-11-00019, in the part of randomized algorithms implementation, and by the Russian Foundation for Basic Research, Grant 20-51-18009, in the part of stochastic simulation theory development.
PY - 2023/7
Y1 - 2023/7
N2 - The results of a parallel implementation of a randomized vector algorithm for solving systems of linear equations are presented in the paper. The solution is represented in the form of a Neumann series. The stochastic method computes this series by sampling only random columns of the matrix, avoiding matrix-matrix and matrix-vector multiplications. We consider the case when the matrix is too large to fit in random-access memory (RAM). We use two approaches to solve this problem. In the first approach, the matrix is divided into parts that are distributed among MPI processes and stored in the available RAM of the cluster nodes. In the second approach, the entire matrix is stored on each node's hard drive, loaded into RAM, and processed in parts. Independent Monte Carlo experiments for random column indices are distributed among MPI processes or OpenMP threads for both approaches to matrix storage. The efficiency of parallel implementations is analyzed. Results are given for a system governed by dense matrices of size 10^4 and 10^5.
AB - The results of a parallel implementation of a randomized vector algorithm for solving systems of linear equations are presented in the paper. The solution is represented in the form of a Neumann series. The stochastic method computes this series by sampling only random columns of the matrix, avoiding matrix-matrix and matrix-vector multiplications. We consider the case when the matrix is too large to fit in random-access memory (RAM). We use two approaches to solve this problem. In the first approach, the matrix is divided into parts that are distributed among MPI processes and stored in the available RAM of the cluster nodes. In the second approach, the entire matrix is stored on each node's hard drive, loaded into RAM, and processed in parts. Independent Monte Carlo experiments for random column indices are distributed among MPI processes or OpenMP threads for both approaches to matrix storage. The efficiency of parallel implementations is analyzed. Results are given for a system governed by dense matrices of size 10^4 and 10^5.
KW - Large matrix
KW - MPI
KW - Matrix iterations
KW - OpenMP
KW - Parallel implementation
KW - Randomized algorithm
KW - System of linear equations
UR - https://www.scopus.com/inward/record.url?eid=2-s2.0-85147782288&partnerID=40&md5=7cb092619904571b746aca774ac07a63
UR - https://www.mendeley.com/catalogue/f0b7a42c-510b-35fe-b2d4-9ce1b3a02f7b/
U2 - 10.1007/s11227-023-05079-5
DO - 10.1007/s11227-023-05079-5
M3 - Article
VL - 79
SP - 10555
EP - 10569
JO - Journal of Supercomputing
JF - Journal of Supercomputing
SN - 0920-8542
ER -
ID: 49786641
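
The abstract above describes summing the Neumann series for x = Ax + b term by term, with each matrix-vector product replaced by an unbiased estimate built from randomly sampled columns. The following is a minimal single-node sketch of that general column-sampling idea in Python/NumPy, under the assumption that the spectral radius of A is below 1; it is not the authors' MPI/OpenMP implementation, and the function name, sampling probabilities, and parameters (n_terms, n_samples) are illustrative choices, not details taken from the paper.

import numpy as np

def randomized_neumann_solve(A, b, n_terms=30, n_samples=5000, rng=None):
    # Monte Carlo estimate of x = b + A b + A^2 b + ... (the Neumann series for x = A x + b).
    # Each product A @ y is replaced by an average of rescaled random columns of A:
    # E[A[:, j] * y[j] / p[j]] = A @ y when column index j is drawn with probability p[j].
    # Assumes the spectral radius of A is below 1 so the series converges.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    x = b.astype(float).copy()   # running sum of the series (k = 0 term)
    y = b.astype(float).copy()   # current estimate of A^k b
    for _ in range(1, n_terms):
        p = np.abs(y)
        total = p.sum()
        if total == 0.0:
            break                                     # remaining terms estimated to be zero
        p /= total
        idx = rng.choice(n, size=n_samples, p=p)      # random column indices
        scale = y[idx] / p[idx]                       # importance-sampling weights
        y = (A[:, idx] @ scale) / n_samples           # unbiased estimate of A @ y
        x += y
    return x

# Example: compare against a direct solve of (I - A) x = b on a small dense system.
n = 500
rng = np.random.default_rng(0)
A = rng.uniform(-1.0, 1.0, size=(n, n)) / (2.0 * n)   # keeps the spectral radius well below 1
b = rng.uniform(size=n)
x_mc = randomized_neumann_solve(A, b, rng=rng)
x_ref = np.linalg.solve(np.eye(n) - A, b)
print("relative error:", np.linalg.norm(x_mc - x_ref) / np.linalg.norm(x_ref))

The n_samples column draws within each term are independent, which is the kind of work the abstract says is distributed among MPI processes or OpenMP threads; in the out-of-core setting described there, each process would hold or stream only its own block of columns of A rather than the full matrix used in this sketch.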