Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
Image Processing with Reservoir Neural Network. / Tarkov, Mikhail S.; Ivanova, Victoria V.
Studies in Computational Intelligence. Springer Science and Business Media Deutschland GmbH, 2023. p. 337-345 (Studies in Computational Intelligence; Vol. 1120).
TY - GEN
T1 - Image Processing with Reservoir Neural Network
AU - Tarkov, Mikhail S.
AU - Ivanova, Victoria V.
PY - 2023
Y1 - 2023
N2 - The reservoir neural network (RNN) is a powerful tool for solving complex machine learning problems. The reservoir is the recurrent part of the network; it is large and has sparse internal connections, which are most often set randomly and remain fixed. The idea of the RNN is to train only part of the network with a simple classification/regression technique and to leave most of the network (the reservoir) fixed. All the advantages of the RNN are thereby preserved, while the training time is significantly reduced. This work optimizes and studies methods that improve the reservoir's ability to solve image classification problems. These methods transform the reservoir output data before it is fed to the RNN output layer. Optimal parameter values were obtained for the Infomax and SpaRCe methods that minimize the image classification error. Using classification of images from the MNIST handwritten digit database as an example, it is shown that: 1. Reservoir networks are trained much faster than convolutional networks, although they are inferior to the latter in classification accuracy. 2. An ESN (echo-state network) with a principal component (PCA) projector gives more accurate results than the ESN, Infomax, and SpaRCe networks, but is slower.
AB - The reservoir neural network (RNN) is a powerful tool for solving complex machine learning problems. The reservoir is the recurrent part of the network; it is large and has sparse internal connections, which are most often set randomly and remain fixed. The idea of the RNN is to train only part of the network with a simple classification/regression technique and to leave most of the network (the reservoir) fixed. All the advantages of the RNN are thereby preserved, while the training time is significantly reduced. This work optimizes and studies methods that improve the reservoir's ability to solve image classification problems. These methods transform the reservoir output data before it is fed to the RNN output layer. Optimal parameter values were obtained for the Infomax and SpaRCe methods that minimize the image classification error. Using classification of images from the MNIST handwritten digit database as an example, it is shown that: 1. Reservoir networks are trained much faster than convolutional networks, although they are inferior to the latter in classification accuracy. 2. An ESN (echo-state network) with a principal component (PCA) projector gives more accurate results than the ESN, Infomax, and SpaRCe networks, but is slower.
KW - convolutional networks
KW - image recognition
KW - neural networks
KW - reservoir
UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85175799811&origin=inward&txGid=9502f9629b94ccda846017b2431108c2
UR - https://www.mendeley.com/catalogue/d655f24e-5b30-33b7-b0ad-ee356ee8762a/
U2 - 10.1007/978-3-031-44865-2_36
DO - 10.1007/978-3-031-44865-2_36
M3 - Conference contribution
SN - 9783031448645
T3 - Studies in Computational Intelligence
SP - 337
EP - 345
BT - Studies in Computational Intelligence
PB - Springer Science and Business Media Deutschland GmbH
ER -
ID: 59193431
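
As an illustration of the approach described in the abstract above, the following is a minimal sketch of an echo-state network: a fixed, sparse, random reservoir with only a linear readout trained, plus an optional principal-component (PCA) projection of the reservoir states before the readout. It is not the authors' implementation; all sizes, the synthetic stand-in data, the leak rate, the spectral radius, the number of retained components, and the ridge parameter are illustrative assumptions, not values from the paper.

# Sketch of an ESN readout for image classification (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_pixels_per_row, n_reservoir, n_classes = 28, 500, 10

# Fixed random input and reservoir weights; reservoir connections kept sparse,
# spectral radius scaled below 1 (standard ESN practice, values assumed).
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_pixels_per_row))
W_res = rng.normal(size=(n_reservoir, n_reservoir))
W_res[rng.random(W_res.shape) > 0.05] = 0.0
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def reservoir_state(image, leak=0.5):
    """Feed one 28x28 image to the fixed reservoir row by row; return the final state."""
    x = np.zeros(n_reservoir)
    for row in image:
        x = (1 - leak) * x + leak * np.tanh(W_in @ row + W_res @ x)
    return x

# Synthetic stand-in for MNIST images and labels (real data would be loaded here).
images = rng.random((500, 28, 28))
labels = rng.integers(0, n_classes, size=500)
Y = np.eye(n_classes)[labels]

H = np.stack([reservoir_state(img) for img in images])

# Optional PCA projector on the reservoir states before the readout,
# analogous to the "ESN with principal component projector" variant.
H_c = H - H.mean(axis=0)
_, _, Vt = np.linalg.svd(H_c, full_matrices=False)
H_proj = H_c @ Vt[:100].T          # keep 100 principal components (assumed)

# Train only the readout: ridge regression on the (projected) reservoir states.
ridge = 1e-3
W_out = np.linalg.solve(H_proj.T @ H_proj + ridge * np.eye(H_proj.shape[1]),
                        H_proj.T @ Y)

pred = np.argmax(H_proj @ W_out, axis=1)
print("training accuracy on synthetic data:", (pred == labels).mean())

Because the reservoir stays fixed, training reduces to a single linear solve for the readout weights, which is why reservoir networks train much faster than convolutional networks, as the abstract notes.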