Standard
Iterative Adaptation to Quantization Noise. / Chudakov, Dmitry; Alyamkin, Sergey; Goncharenko, Alexander et al.
Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings. ed. / Ignacio Rojas; Gonzalo Joya; Andreu Catala. Springer Science and Business Media Deutschland GmbH, 2021. p. 303-310 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12861 LNCS).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
Harvard
Chudakov, D, Alyamkin, S, Goncharenko, A & Denisov, A 2021, Iterative Adaptation to Quantization Noise. in I Rojas, G Joya & A Catala (eds), Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12861 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 303-310, 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Virtual, Online, 16.06.2021. https://doi.org/10.1007/978-3-030-85030-2_25
APA
Chudakov, D., Alyamkin, S., Goncharenko, A., & Denisov, A. (2021). Iterative Adaptation to Quantization Noise. In I. Rojas, G. Joya, & A. Catala (Eds.), Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings (pp. 303-310). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12861 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-85030-2_25
Vancouver
Chudakov D, Alyamkin S, Goncharenko A, Denisov A. Iterative Adaptation to Quantization Noise. In Rojas I, Joya G, Catala A, editors, Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings. Springer Science and Business Media Deutschland GmbH. 2021. p. 303-310. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-030-85030-2_25
Author
Chudakov, Dmitry ; Alyamkin, Sergey ; Goncharenko, Alexander et al. / Iterative Adaptation to Quantization Noise. Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings. editor / Ignacio Rojas ; Gonzalo Joya ; Andreu Catala. Springer Science and Business Media Deutschland GmbH, 2021. pp. 303-310 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{4eb58b3fe46c472ba6061ddb366efbe5,
title = "Iterative Adaptation to Quantization Noise",
abstract = "Quantization allows accelerating neural networks significantly, especially on mobile processors. Existing quantization methods either require training the neural network from scratch or cause a significant accuracy drop in the quantized model. Low-bit quantization (e.g., 4- or 6-bit) is a much more resource-intensive problem than 8-bit quantization and requires a significant amount of labeled training data. We propose a new low-bit quantization method for mobile neural network architectures that does not require training from scratch or a large amount of labeled training data, and that avoids a significant accuracy drop.",
keywords = "Distillation, Machine learning, Neural networks, Quantization",
author = "Dmitry Chudakov and Sergey Alyamkin and Alexander Goncharenko and Andrey Denisov",
note = "Publisher Copyright: {\textcopyright} 2021, Springer Nature Switzerland AG.; 16th International Work-Conference on Artificial Neural Networks, IWANN 2021 ; Conference date: 16-06-2021 Through 18-06-2021",
year = "2021",
doi = "10.1007/978-3-030-85030-2_25",
language = "English",
isbn = "9783030850296",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "303--310",
editor = "Ignacio Rojas and Gonzalo Joya and Andreu Catala",
booktitle = "Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings",
address = "Germany",
}
RIS
TY - GEN
T1 - Iterative Adaptation to Quantization Noise
AU - Chudakov, Dmitry
AU - Alyamkin, Sergey
AU - Goncharenko, Alexander
AU - Denisov, Andrey
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Quantization allows accelerating neural networks significantly, especially on mobile processors. Existing quantization methods either require training the neural network from scratch or cause a significant accuracy drop in the quantized model. Low-bit quantization (e.g., 4- or 6-bit) is a much more resource-intensive problem than 8-bit quantization and requires a significant amount of labeled training data. We propose a new low-bit quantization method for mobile neural network architectures that does not require training from scratch or a large amount of labeled training data, and that avoids a significant accuracy drop.
AB - Quantization allows accelerating neural networks significantly, especially on mobile processors. Existing quantization methods either require training the neural network from scratch or cause a significant accuracy drop in the quantized model. Low-bit quantization (e.g., 4- or 6-bit) is a much more resource-intensive problem than 8-bit quantization and requires a significant amount of labeled training data. We propose a new low-bit quantization method for mobile neural network architectures that does not require training from scratch or a large amount of labeled training data, and that avoids a significant accuracy drop.
KW - Distillation
KW - Machine learning
KW - Neural networks
KW - Quantization
UR - http://www.scopus.com/inward/record.url?scp=85115138489&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-85030-2_25
DO - 10.1007/978-3-030-85030-2_25
M3 - Conference contribution
AN - SCOPUS:85115138489
SN - 9783030850296
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 303
EP - 310
BT - Advances in Computational Intelligence - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Proceedings
A2 - Rojas, Ignacio
A2 - Joya, Gonzalo
A2 - Catala, Andreu
PB - Springer Science and Business Media Deutschland GmbH
T2 - 16th International Work-Conference on Artificial Neural Networks, IWANN 2021
Y2 - 16 June 2021 through 18 June 2021
ER -
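The abstract above concerns low-bit (4- or 6-bit) quantization of mobile neural network architectures. As a rough, generic illustration of what "quantization noise" means in this context (this is not the authors' method; the function name and parameters are hypothetical), the following minimal Python sketch simulates uniform symmetric fake quantization of a weight tensor at a configurable bit width and measures the resulting error.

```python
# Illustrative sketch only: generic symmetric "fake" quantization of a weight
# tensor, NOT the method proposed in the cited paper. Names and parameters
# are hypothetical and chosen purely for demonstration.
import numpy as np

def fake_quantize(weights: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Simulate uniform symmetric quantization noise at a given bit width.

    Values are mapped onto the integer grid [-2^(b-1), 2^(b-1) - 1] and back,
    so the returned tensor carries the rounding error ("quantization noise")
    that a real low-bit deployment would introduce.
    """
    qmin = -(2 ** (num_bits - 1))
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax  # per-tensor scale
    if scale == 0:
        return weights.copy()
    q = np.clip(np.round(weights / scale), qmin, qmax)
    return q * scale  # dequantize back to float

# Example: compare quantization noise at the bit widths mentioned in the abstract.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64))
for bits in (4, 6, 8):
    err = np.mean((w - fake_quantize(w, bits)) ** 2)
    print(f"{bits}-bit mean squared quantization error: {err:.2e}")
```

The sketch only shows why 4- and 6-bit quantization introduces noticeably more noise than 8-bit; how that noise is compensated without full retraining is the subject of the paper itself.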