Standard

Superposition as Data Augmentation using LSTM and HMM in Small Training Sets. / Павловский, Евгений Николаевич; Сивасвами, Акилеш.

Daejeon, South Korea: Cornell University, 2019.

Research output: Working paper

Vancouver

Павловский ЕН, Сивасвами А. Superposition as Data Augmentation using LSTM and HMM in Small Training Sets. Daejeon, South Korea: Cornell University. 2019 Oct 24.

Author

Павловский, Евгений Николаевич; Сивасвами, Акилеш. / Superposition as Data Augmentation using LSTM and HMM in Small Training Sets. Daejeon, South Korea: Cornell University, 2019.

BibTeX

@techreport{8f083659fba341fa8b40139922423c65,
title = "Superposition as Data Augmentation using LSTM and HMM in Small Training Sets",
abstract = "Considering audio and image data as having a quantum nature (data are represented by density matrices), we achieved better results when training architectures such as a 3-layer stacked LSTM and an HMM by mixing training samples with superposition augmentation, compared with plain default training and mix-up augmentation. This augmentation technique originates from the mix-up approach but provides more solid theoretical reasoning based on quantum properties. We achieved a 3% improvement (from 68% to 71%) while using 38% fewer training samples on a Russian audio-digits recognition task, and 7.16% better accuracy than mix-up augmentation when training an HMM on only 500 samples from the same task. We also achieved 1.1% better accuracy than mix-up on the first 900 samples of MNIST using a 3-layer stacked LSTM.",
author = "Павловский, {Евгений Николаевич} and Акилеш Сивасвами",
year = "2019",
month = oct,
day = "24",
language = "English",
publisher = "Cornell University",
address = "United States",
type = "WorkingPaper",
institution = "Cornell University",

}
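The abstract above describes the augmentation only at a high level. As a concrete point of reference, the sketch below contrasts classic mix-up with one plausible reading of the superposition mix suggested by the density-matrix framing: samples are treated as L2-normalised state vectors and combined with amplitude weights sqrt(lam) and sqrt(1 - lam), so the squared amplitudes recover the mix-up weights. The function names, the Beta(alpha, alpha) draw for the superposition variant, and the normalisation step are illustrative assumptions, not the authors' exact procedure.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    # Classic mix-up: convex combination of two samples and their
    # (one-hot) labels, with the weight drawn from Beta(alpha, alpha).
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

def superposition_mix(x1, y1, x2, y2, alpha=0.2):
    # Assumed superposition-style mix: each sample is normalised to a
    # unit "state vector" and the pair is combined with amplitude
    # weights sqrt(lam) and sqrt(1 - lam); labels are mixed convexly,
    # as in mix-up.
    lam = np.random.beta(alpha, alpha)
    psi1 = x1 / (np.linalg.norm(x1) + 1e-12)  # guard against zero norm
    psi2 = x2 / (np.linalg.norm(x2) + 1e-12)
    x = np.sqrt(lam) * psi1 + np.sqrt(1.0 - lam) * psi2
    return x, lam * y1 + (1.0 - lam) * y2

# Example: mix two flattened 28x28 images with one-hot labels.
rng = np.random.default_rng(0)
x_a, x_b = rng.normal(size=784), rng.normal(size=784)
y_a, y_b = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = superposition_mix(x_a, y_a, x_b, y_b)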

RIS

TY - UNPB

T1 - Superposition as Data Augmentation using LSTM and HMM in Small Training Sets

AU - Павловский, Евгений Николаевич

AU - Сивасвами, Акилеш

PY - 2019/10/24

Y1 - 2019/10/24

N2 - Considering audio and image data as having a quantum nature (data are represented by density matrices), we achieved better results when training architectures such as a 3-layer stacked LSTM and an HMM by mixing training samples with superposition augmentation, compared with plain default training and mix-up augmentation. This augmentation technique originates from the mix-up approach but provides more solid theoretical reasoning based on quantum properties. We achieved a 3% improvement (from 68% to 71%) while using 38% fewer training samples on a Russian audio-digits recognition task, and 7.16% better accuracy than mix-up augmentation when training an HMM on only 500 samples from the same task. We also achieved 1.1% better accuracy than mix-up on the first 900 samples of MNIST using a 3-layer stacked LSTM.

AB - Considering audio and image data as having a quantum nature (data are represented by density matrices), we achieved better results when training architectures such as a 3-layer stacked LSTM and an HMM by mixing training samples with superposition augmentation, compared with plain default training and mix-up augmentation. This augmentation technique originates from the mix-up approach but provides more solid theoretical reasoning based on quantum properties. We achieved a 3% improvement (from 68% to 71%) while using 38% fewer training samples on a Russian audio-digits recognition task, and 7.16% better accuracy than mix-up augmentation when training an HMM on only 500 samples from the same task. We also achieved 1.1% better accuracy than mix-up on the first 900 samples of MNIST using a 3-layer stacked LSTM.

M3 - Working paper

BT - Superposition as Data Augmentation using LSTM and HMM in Small Training Sets

PB - Cornell University

CY - Daejeon, South Korea

ER -

ID: 23058953