Research output: Contribution to book/report/conference proceedings › Chapter › scientific › peer-review
Parameter Optimization for Restarted Mixed Precision Iterative Sparse Solver. / Пролубников, Александр Вячеславович.
Lecture Notes in Computer Science. Springer, 2025. pp. 374-388 (Lecture Notes in Computer Science; Vol. 15681 LNCS).
TY - CHAP
T1 - Parameter Optimization for Restarted Mixed Precision Iterative Sparse Solver
AU - Пролубников, Александр Вячеславович
PY - 2025/7/6
Y1 - 2025/7/6
N2 - We consider the problem of optimizing the parameter of a two-stage algorithm for the approximate solution of a system of linear algebraic equations with a sparse n×n matrix, i.e., one in which the number of nonzero elements is m=O(n). The two-stage algorithm uses the conjugate gradient method at both stages. At the 1st stage, an approximate solution with accuracy ε1 is found for the zero initial vector. All numerical values used at this stage are represented as single precision numbers. The obtained solution is used as the initial approximation for the approximate solution with a given accuracy ε2 that we obtain at the 2nd stage, where double precision numbers are used. Based on the values of some matrix parameters, computed in a time not exceeding O(m), we need to determine the value of ε1 which minimizes the total computation time over the two stages. Using single precision numbers for computations at the 1st stage is advantageous, since the execution time of one iteration is approximately half that of one iteration at the 2nd stage. But using machine numbers with half the mantissa length accelerates the growth of rounding errors per iteration at the 1st stage, which entails an increase in the number of iterations performed at the 2nd stage. To determine ε1 for the input matrix, we use n, m, an estimate of the diameter of the graph associated with the matrix, an estimate of the spread of the matrix's eigenvalues, and an estimate of its maximum eigenvalue. The optimal or close-to-optimal value of ε1 can be determined for a matrix with such a vector of parameters using nearest neighbor regression.
AB - We consider the problem of optimizing the parameter of a two-stage algorithm for the approximate solution of a system of linear algebraic equations with a sparse n×n matrix, i.e., one in which the number of nonzero elements is m=O(n). The two-stage algorithm uses the conjugate gradient method at both stages. At the 1st stage, an approximate solution with accuracy ε1 is found for the zero initial vector. All numerical values used at this stage are represented as single precision numbers. The obtained solution is used as the initial approximation for the approximate solution with a given accuracy ε2 that we obtain at the 2nd stage, where double precision numbers are used. Based on the values of some matrix parameters, computed in a time not exceeding O(m), we need to determine the value of ε1 which minimizes the total computation time over the two stages. Using single precision numbers for computations at the 1st stage is advantageous, since the execution time of one iteration is approximately half that of one iteration at the 2nd stage. But using machine numbers with half the mantissa length accelerates the growth of rounding errors per iteration at the 1st stage, which entails an increase in the number of iterations performed at the 2nd stage. To determine ε1 for the input matrix, we use n, m, an estimate of the diameter of the graph associated with the matrix, an estimate of the spread of the matrix's eigenvalues, and an estimate of its maximum eigenvalue. The optimal or close-to-optimal value of ε1 can be determined for a matrix with such a vector of parameters using nearest neighbor regression.
KW - mixed precision
KW - iterative solver
KW - sparse systems of linear equations
UR - https://www.mendeley.com/catalogue/efb66c04-0dbd-3929-8c77-325cb76fd022/
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105010812461&origin=inward
U2 - 10.1007/978-3-031-97077-1_26
DO - 10.1007/978-3-031-97077-1_26
M3 - Chapter
SN - 9783031970764
T3 - Lecture Notes in Computer Science
SP - 374
EP - 388
BT - Lecture Notes in Computer Science
PB - Springer
T2 - 24th International Conference on Mathematical Optimization Theory and Operations Research
Y2 - 7 July 2025 through 11 July 2025
ER -
ID: 68561599
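
The abstract describes a two-stage mixed precision conjugate gradient scheme: a single precision solve to a loose tolerance ε1, followed by a double precision solve warm-started from the stage-1 result and run to the target tolerance ε2. Below is a minimal illustrative sketch of that scheme, not the author's implementation: the plain CG routine, the relative-residual stopping rule, the shifted 1-D Laplacian test matrix, and the hand-picked value of ε1 are all assumptions made for the example, and the feature-based nearest neighbor regression the paper uses to choose ε1 is omitted.

```python
# Sketch of a two-stage mixed precision CG solve (illustrative only).
import numpy as np
import scipy.sparse as sp


def cg(A, b, x0, tol, dtype, maxiter=10000):
    """Plain conjugate gradient for a symmetric positive definite A,
    run entirely in the floating-point type `dtype`; stops when the
    recursively updated residual satisfies ||r|| <= tol * ||b||."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    x = x0.astype(dtype)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    it = 0
    while np.sqrt(rs) > tol * bnorm and it < maxiter:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        it += 1
    return x, it


def two_stage_solve(A, b, eps1, eps2):
    # Stage 1: single precision, zero initial vector, loose tolerance eps1.
    x1, it1 = cg(A, b, np.zeros(b.shape[0]), eps1, np.float32)
    # Stage 2: double precision, warm-started from the stage-1 solution,
    # target tolerance eps2.
    x2, it2 = cg(A, b, x1.astype(np.float64), eps2, np.float64)
    return x2, it1, it2


# Toy sparse SPD test matrix (shifted 1-D Laplacian). Here eps1 is fixed
# by hand rather than predicted from matrix features as in the paper.
n = 1000
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, it1, it2 = two_stage_solve(A, b, eps1=1e-3, eps2=1e-10)
print(it1, it2, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The intended trade-off is visible in this setup: cheaper float32 iterations at stage 1 reduce total work, but a looser (or rounding-error-limited) stage-1 tolerance leaves more iterations for the float64 stage, which is why the choice of ε1 is treated as an optimization problem in the paper.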