Standard

Winning solution on LPIRC-LL competition. / Goncharenko, Alexander; Alyamkin, Sergey; Denisov, Andrey et al.

Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019. IEEE Computer Society, 2019. pp. 10-16 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 2019-June).

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed

Harvard

Goncharenko, A, Alyamkin, S, Denisov, A & Terentev, E 2019, Winning solution on LPIRC-LL competition. in Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 2019-June, IEEE Computer Society, pp. 10-16, 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019, Long Beach, United States, 16.06.2019.

APA

Goncharenko, A., Alyamkin, S., Denisov, A., & Terentev, E. (2019). Winning solution on LPIRC-LL competition. In Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 (pp. 10-16). (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 2019-June). IEEE Computer Society.

Vancouver

Goncharenko A, Alyamkin S, Denisov A, Terentev E. Winning solution on LPIRC-LL competition. In Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019. IEEE Computer Society. 2019. p. 10-16. (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops).

Author

Goncharenko, Alexander ; Alyamkin, Sergey ; Denisov, Andrey et al. / Winning solution on LPIRC-LL competition. Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019. IEEE Computer Society, 2019. pp. 10-16 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops).

BibTeX

@inproceedings{afca7f54e9bc4fc1893ce78f23c682e2,
title = "Winning solution on LPIRC-LL competition",
abstract = "Neural network quantization is a highly desired procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require simplification and acceleration of the quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present two methods to significantly optimize the training-with-quantization procedure. The first one introduces trained scale factors for the discretization thresholds, separate for each filter. The second one is based on mutual rescaling of consecutive depthwise separable convolution and convolution layers. Using the proposed techniques, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available at: https://github.com/agoncharenko1992/FAT-fastadjustable-threshold.",
author = "Alexander Goncharenko and Sergey Alyamkin and Andrey Denisov and Evgeny Terentev",
note = "Publisher Copyright: {\textcopyright} 2019 IEEE Computer Society. All rights reserved.; 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 ; Conference date: 16-06-2019 Through 20-06-2019",
year = "2019",
month = jun,
language = "English",
series = "IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops",
publisher = "IEEE Computer Society",
pages = "10--16",
booktitle = "Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019",
address = "United States",
}
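
The abstract above names two techniques for fast fine-tuning of quantized networks. Below is a minimal, hypothetical sketch of the first one (trained scale factors for discretization thresholds, one per filter), written in PyTorch with a straight-through estimator. The class name FilterwiseQuant, the log-space threshold parameterization, and the n_bits argument are illustrative assumptions, not taken from the authors' repository.

    import torch
    import torch.nn as nn

    class FilterwiseQuant(nn.Module):
        """Symmetric fake-quantizer with one trainable threshold per filter."""

        def __init__(self, num_filters: int, n_bits: int = 8):
            super().__init__()
            # One clipping threshold per output filter, stored in log space
            # so it stays positive during training.
            self.log_thresh = nn.Parameter(torch.zeros(num_filters))
            self.levels = 2 ** (n_bits - 1) - 1  # 127 for int8

        def forward(self, w: torch.Tensor) -> torch.Tensor:
            # w: convolution weights shaped (out_filters, ...)
            t = self.log_thresh.exp().view(-1, *([1] * (w.dim() - 1)))
            scale = t / self.levels
            ws = torch.clamp(w / scale, -self.levels, self.levels)
            # Straight-through estimator: round() acts as the identity in
            # the backward pass, so gradients reach both the weights and
            # the per-filter thresholds during the short fine-tuning phase.
            wq = (ws.round() - ws).detach() + ws
            return wq * scale

For a convolution with 16 output filters, q = FilterwiseQuant(16); w_q = q(conv.weight) substitutes quantized weights into the forward pass. Because only a handful of thresholds (plus the weights) receive gradient updates, fine-tuning on a small fraction of ImageNet for a few hours is plausible, which matches what the abstract reports.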

RIS

TY - GEN

T1 - Winning solution on LPIRC-LL competition

AU - Goncharenko, Alexander

AU - Alyamkin, Sergey

AU - Denisov, Andrey

AU - Terentev, Evgeny

N1 - Publisher Copyright: © 2019 IEEE Computer Society. All rights reserved.

PY - 2019/6

Y1 - 2019/6

N2 - Neural network quantization is a highly desired procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require simplification and acceleration of the quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present two methods to significantly optimize the training-with-quantization procedure. The first one introduces trained scale factors for the discretization thresholds, separate for each filter. The second one is based on mutual rescaling of consecutive depthwise separable convolution and convolution layers. Using the proposed techniques, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available at: https://github.com/agoncharenko1992/FAT-fastadjustable-threshold.

AB - Neural network quantization is a highly desired procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require simplification and acceleration of the quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present two methods to significantly optimize the training-with-quantization procedure. The first one introduces trained scale factors for the discretization thresholds, separate for each filter. The second one is based on mutual rescaling of consecutive depthwise separable convolution and convolution layers. Using the proposed techniques, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available at: https://github.com/agoncharenko1992/FAT-fastadjustable-threshold.

UR - http://www.scopus.com/inward/record.url?scp=85113856004&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85113856004

T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

SP - 10

EP - 16

BT - Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019

PB - IEEE Computer Society

T2 - 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019

Y2 - 16 June 2019 through 20 June 2019

ER -

ID: 34146219
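
The second technique from the abstract, mutual rescaling of a depthwise separable convolution and the adjacent convolution, can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: it assumes a ReLU (or no) nonlinearity between the two layers, because ReLU commutes with positive per-channel scaling, and the function name rescale_dw_pw is invented.

    import torch

    @torch.no_grad()
    def rescale_dw_pw(dw_weight, dw_bias, pw_weight, eps=1e-8):
        # dw_weight: (C, 1, kH, kW)   depthwise kernel, one filter per channel
        # pw_weight: (C_out, C, 1, 1) pointwise kernel consuming those channels
        # Multiplying channel c of the depthwise output by s[c] and dividing
        # input channel c of the pointwise weights by s[c] leaves the float
        # network unchanged (ReLU(s*x) = s*ReLU(x) for s > 0), but balances
        # per-channel weight ranges so quantization loses less accuracy.
        r_dw = dw_weight.abs().amax(dim=(1, 2, 3))   # per-channel range, depthwise
        r_pw = pw_weight.abs().amax(dim=(0, 2, 3))   # per-channel range, pointwise input
        s = torch.sqrt(r_pw / (r_dw + eps))          # equalize the two ranges
        dw_weight.mul_(s.view(-1, 1, 1, 1))
        if dw_bias is not None:
            dw_bias.mul_(s)                          # bias scales with its channel
        pw_weight.div_(s.view(1, -1, 1, 1))

Since the rescaling is exactly output-preserving in floating point, it can be applied once before quantization; the benefit comes entirely from the better-conditioned weight ranges that the quantizer then sees.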