Standard

Parallel implementations of randomized vector algorithm for solving large systems of linear equations. / Sabelfeld, Karl K.; Kireev, Sergey; Kireeva, Anastasiya.

In: Journal of Supercomputing, Vol. 79, 07.2023, p. 10555-10569.

Research output: Contribution to journal › Article › peer-review

Harvard

Sabelfeld, KK, Kireev, S & Kireeva, A 2023, 'Parallel implementations of randomized vector algorithm for solving large systems of linear equations', Journal of Supercomputing, vol. 79, pp. 10555-10569. https://doi.org/10.1007/s11227-023-05079-5

APA

Sabelfeld, K. K., Kireev, S., & Kireeva, A. (2023). Parallel implementations of randomized vector algorithm for solving large systems of linear equations. Journal of Supercomputing, 79, 10555-10569. https://doi.org/10.1007/s11227-023-05079-5

Vancouver

Sabelfeld KK, Kireev S, Kireeva A. Parallel implementations of randomized vector algorithm for solving large systems of linear equations. Journal of Supercomputing. 2023 Jul;79:10555-10569. doi: 10.1007/s11227-023-05079-5

Author

Sabelfeld, Karl K. ; Kireev, Sergey ; Kireeva, Anastasiya. / Parallel implementations of randomized vector algorithm for solving large systems of linear equations. In: Journal of Supercomputing. 2023 ; Vol. 79. pp. 10555-10569.

BibTeX

@article{b236c7ede2e945ae9262948993d92030,
title = "Parallel implementations of randomized vector algorithm for solving large systems of linear equations",
abstract = "The results of a parallel implementation of a randomized vector algorithm for solving systems of linear equations are presented in the paper. The solution is represented in the form of a Neumann series. The stochastic method computes this series by sampling only random columns, avoiding multiplication of matrix by matrix and matrix by vector. We consider the case when the matrix is too large to fit in random-access memory (RAM). We use two approaches to solve this problem. In the first approach, the matrix is divided into parts that are distributed among MPI processes and stored in the available RAM of the cluster nodes. In the second approach, the entire matrix is stored on each node{\textquoteright}s hard drive, loaded into RAM, and processed in parts. Independent Monte Carlo experiments for random column indices are distributed among MPI processes or OpenMP threads for both approaches to matrix storage. The efficiency of parallel implementations is analyzed. Results are given for a system governed by dense matrices of size $10^{4}$ and $10^{5}$.",
keywords = "Large matrix, MPI, Matrix iterations, OpenMP, Parallel implementation, Randomized algorithm, System of linear equations",
author = "Sabelfeld, {Karl K.} and Sergey Kireev and Anastasiya Kireeva",
note = "The work is supported by the Russian Science Foundation, Grant 19-11-00019, in the part of randomized algorithms implementation, and the Russian Foundation for Basic Research, Grant 20-51-18009, in the part of stochastic simulation theory development.",
year = "2023",
month = jul,
doi = "10.1007/s11227-023-05079-5",
language = "English",
volume = "79",
pages = "10555--10569",
journal = "Journal of Supercomputing",
issn = "0920-8542",
publisher = "Springer Netherlands",

}

RIS

TY - JOUR

T1 - Parallel implementations of randomized vector algorithm for solving large systems of linear equations

AU - Sabelfeld, Karl K.

AU - Kireev, Sergey

AU - Kireeva, Anastasiya

N1 - The work is supported by the Russian Science Foundation, Grant 19-11-00019, in the part of randomized algorithms implementation, and the Russian Foundation for Basic Research, Grant 20-51-18009, in the part of stochastic simulation theory development.

PY - 2023/7

Y1 - 2023/7

N2 - The results of a parallel implementation of a randomized vector algorithm for solving systems of linear equations are presented in the paper. The solution is represented in the form of a Neumann series. The stochastic method computes this series by sampling only random columns, avoiding multiplication of matrix by matrix and matrix by vector. We consider the case when the matrix is too large to fit in random-access memory (RAM). We use two approaches to solve this problem. In the first approach, the matrix is divided into parts that are distributed among MPI processes and stored in the available RAM of the cluster nodes. In the second approach, the entire matrix is stored on each node’s hard drive, loaded into RAM, and processed in parts. Independent Monte Carlo experiments for random column indices are distributed among MPI processes or OpenMP threads for both approaches to matrix storage. The efficiency of parallel implementations is analyzed. Results are given for a system governed by dense matrices of size 10^4 and 10^5.

AB - The results of a parallel implementation of a randomized vector algorithm for solving systems of linear equations are presented in the paper. The solution is represented in the form of a Neumann series. The stochastic method computes this series by sampling only random columns, avoiding multiplication of matrix by matrix and matrix by vector. We consider the case when the matrix is too large to fit in random-access memory (RAM). We use two approaches to solve this problem. In the first approach, the matrix is divided into parts that are distributed among MPI processes and stored in the available RAM of the cluster nodes. In the second approach, the entire matrix is stored on each node’s hard drive, loaded into RAM, and processed in parts. Independent Monte Carlo experiments for random column indices are distributed among MPI processes or OpenMP threads for both approaches to matrix storage. The efficiency of parallel implementations is analyzed. Results are given for a system governed by dense matrices of size 10^4 and 10^5.

KW - Large matrix

KW - MPI

KW - Matrix iterations

KW - OpenMP

KW - Parallel implementation

KW - Randomized algorithm

KW - System of linear equations

UR - https://www.scopus.com/inward/record.url?eid=2-s2.0-85147782288&partnerID=40&md5=7cb092619904571b746aca774ac07a63

UR - https://www.mendeley.com/catalogue/f0b7a42c-510b-35fe-b2d4-9ce1b3a02f7b/

U2 - 10.1007/s11227-023-05079-5

DO - 10.1007/s11227-023-05079-5

M3 - Article

VL - 79

SP - 10555

EP - 10569

JO - Journal of Supercomputing

JF - Journal of Supercomputing

SN - 0920-8542

ER -

ID: 49786641
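
Illustrative code sketch

The abstract describes the method only at a high level: the solution is written as a Neumann series, the series is evaluated by sampling random columns of the matrix, and independent Monte Carlo chains are distributed over MPI processes or OpenMP threads. The sketch below is not the authors' parallel implementation; it is a minimal serial Python illustration of the column-sampling idea, assuming a uniform column-sampling distribution and a fixed series truncation length (both are simplifying assumptions made here, not details taken from the paper).

import numpy as np

def randomized_neumann_solve(A, b, n_terms=20, n_samples=10_000, seed=0):
    # Estimate x = sum_{k>=0} A^k b, the Neumann series for (I - A)^{-1} b,
    # using only randomly sampled columns of A: no matrix-matrix products
    # and no full matrix-vector products are formed.
    # Uniform column sampling is an illustrative assumption, not necessarily
    # the sampling scheme used in the paper.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p = 1.0 / n                      # uniform probability of picking any column
    acc = np.zeros(n)
    for _ in range(n_samples):
        # One independent Monte Carlo chain of column indices j_1, ..., j_K.
        j = rng.integers(n)
        w = b[j] / p                 # scalar weight: unbiased for the k = 1 term
        total = A[:, j] * w          # estimate of A b
        prev = j
        for _ in range(1, n_terms):
            j = rng.integers(n)
            w *= A[j, prev] / p      # extend the chain with a single matrix entry
            total += A[:, j] * w     # estimate of the next power A^k b
            prev = j
        acc += total
    return b + acc / n_samples       # k = 0 term plus the averaged k >= 1 terms

For a quick check on a small system whose Neumann series converges (for example, a random n-by-n matrix scaled so its spectral radius is below one, with A and b as NumPy arrays), the result can be compared against numpy.linalg.solve(np.eye(n) - A, b). The independent chains in the outer loop are the natural unit of parallel work that the paper distributes over MPI processes or OpenMP threads.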