Standard

Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language. / Gorbacheva, T. E.; Bondarenko, I. Y.

In: Doklady Mathematics, Vol. 108, No. Suppl 2, 12.2023, p. S494-S502.

Research output: Contribution to journal › Article › Peer review

Harvard

Gorbacheva, TE & Bondarenko, IY 2023, 'Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language', Doklady Mathematics, vol. 108, no. Suppl 2, pp. S494-S502. https://doi.org/10.1134/S1064562423701636

APA

Gorbacheva, T. E., & Bondarenko, I. Y. (2023). Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language. Doklady Mathematics, 108(Suppl 2), S494-S502. https://doi.org/10.1134/S1064562423701636

Vancouver

Gorbacheva TE, Bondarenko IY. Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language. Doklady Mathematics. 2023 Dec;108(Suppl 2):S494-S502. doi: 10.1134/S1064562423701636

Author

Gorbacheva, T. E. ; Bondarenko, I. Y. / Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language. In: Doklady Mathematics. 2023 ; Vol. 108, No. Suppl 2. pp. S494-S502.

BibTeX

@article{2a27b82750ef4905a43a2a19095b26ee,
title = "Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language",
abstract = "This paper compares the pretraining of a transformer on natural language texts and on sentences of a synthetic pseudo-language. The artificial texts are generated automatically according to rules written in a context-free grammar. The results of fine-tuning on tasks from the RussianSuperGLUE project showed, with statistical reliability, that the models achieved the same scores. That is, the use of artificial texts facilitates AI safety, because it allows complete control over the composition of the dataset. In addition, at the pretraining stage of a RoBERTa-like model, it is enough to learn to recognize only the syntactic and morphological patterns of the language, which can be created successfully in a fairly simple way, such as with a context-free grammar.",
keywords = "AI safety, automatic text generation, deep learning methods, language models, pretraining, synthetic data, transformers",
author = "Gorbacheva, {T. E.} and Bondarenko, {I. Y.}",
note = "Publication for correction.",
year = "2023",
month = dec,
doi = "10.1134/S1064562423701636",
language = "English",
volume = "108",
pages = "S494--S502",
journal = "Doklady Mathematics",
issn = "1064-5624",
publisher = "Maik Nauka-Interperiodica Publishing",
number = "Suppl 2",

}
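
The abstract above describes generating artificial pretraining texts from rules written in a context-free grammar. As a rough illustration only, the sketch below randomly expands a toy grammar; the nonterminals and invented vocabulary here are hypothetical and are not taken from the paper, but every generated token is fully determined by the rules, which is the dataset-control property the abstract emphasizes.

```python
# Minimal sketch (not the authors' actual grammar): randomly expanding a tiny
# context-free grammar to produce pseudo-language sentences for pretraining.
import random

# Hypothetical toy grammar; a real pretraining grammar would be far richer.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["ta"], ["mo"]],
    "Adj": [["zul"], ["ker"]],
    "N":   [["brik"], ["flon"], ["gred"]],
    "V":   [["sarpit"], ["dulem"]],
}

def expand(symbol: str, rng: random.Random) -> list[str]:
    """Recursively expand a symbol by picking one of its productions at random."""
    if symbol not in GRAMMAR:  # terminal symbol: emit it as a word
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    words: list[str] = []
    for sym in production:
        words.extend(expand(sym, rng))
    return words

def generate_corpus(n_sentences: int, seed: int = 0) -> list[str]:
    """Generate n synthetic sentences whose composition is fully controlled by the grammar."""
    rng = random.Random(seed)
    return [" ".join(expand("S", rng)) for _ in range(n_sentences)]

if __name__ == "__main__":
    for sentence in generate_corpus(5):
        print(sentence)
```

Sentences produced this way carry only the syntactic and morphological regularities encoded in the rules, which is what the paper argues is sufficient to learn at the pretraining stage of a RoBERTa-like model.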

RIS

TY - JOUR

T1 - Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language

AU - Gorbacheva, T. E.

AU - Bondarenko, I. Y.

N1 - Publication for correction.

PY - 2023/12

Y1 - 2023/12

N2 - This paper compares the pretraining of a transformer on natural language texts and on sentences of a synthetic pseudo-language. The artificial texts are generated automatically according to rules written in a context-free grammar. The results of fine-tuning on tasks from the RussianSuperGLUE project showed, with statistical reliability, that the models achieved the same scores. That is, the use of artificial texts facilitates AI safety, because it allows complete control over the composition of the dataset. In addition, at the pretraining stage of a RoBERTa-like model, it is enough to learn to recognize only the syntactic and morphological patterns of the language, which can be created successfully in a fairly simple way, such as with a context-free grammar.

AB - This paper compares the pretraining of a transformer on natural language texts and on sentences of a synthetic pseudo-language. The artificial texts are generated automatically according to rules written in a context-free grammar. The results of fine-tuning on tasks from the RussianSuperGLUE project showed, with statistical reliability, that the models achieved the same scores. That is, the use of artificial texts facilitates AI safety, because it allows complete control over the composition of the dataset. In addition, at the pretraining stage of a RoBERTa-like model, it is enough to learn to recognize only the syntactic and morphological patterns of the language, which can be created successfully in a fairly simple way, such as with a context-free grammar.

KW - AI safety

KW - automatic text generation

KW - deep learning methods

KW - language models

KW - pretraining

KW - synthetic data

KW - transformers

UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85188630382&origin=inward&txGid=ebb9146e28f17e79250610804d416061

UR - https://www.mendeley.com/catalogue/285574ff-fbcd-362f-87d9-7b41874493d3/

U2 - 10.1134/S1064562423701636

DO - 10.1134/S1064562423701636

M3 - Article

VL - 108

SP - S494-S502

JO - Doklady Mathematics

JF - Doklady Mathematics

SN - 1064-5624

IS - Suppl 2

ER -

ID: 59887909