Standard

Applying Transformer-Based Text Summarization for Keyphrase Generation. / Glazkova, A. V.; Morozov, D. A.

In: Lobachevskii Journal of Mathematics, Vol. 44, No. 1, 01.2023, pp. 123-136.

Research output: Contribution to journal › Article › Peer-review

Harvard

Glazkova, AV & Morozov, DA 2023, 'Applying Transformer-Based Text Summarization for Keyphrase Generation', Lobachevskii Journal of Mathematics, vol. 44, no. 1, pp. 123-136. https://doi.org/10.1134/S1995080223010134

APA

Glazkova, A. V., & Morozov, D. A. (2023). Applying Transformer-Based Text Summarization for Keyphrase Generation. Lobachevskii Journal of Mathematics, 44(1), 123-136. https://doi.org/10.1134/S1995080223010134

Vancouver

Glazkova AV, Morozov DA. Applying Transformer-Based Text Summarization for Keyphrase Generation. Lobachevskii Journal of Mathematics. 2023 Jan;44(1):123-136. doi: 10.1134/S1995080223010134

Author

Glazkova, A. V. ; Morozov, D. A. / Applying Transformer-Based Text Summarization for Keyphrase Generation. In: Lobachevskii Journal of Mathematics. 2023 ; Vol. 44, No. 1. pp. 123-136.

BibTeX

@article{d5152f41f5234e96a527f25d4b506f5a,
title = "Applying Transformer-Based Text Summarization for Keyphrase Generation",
abstract = "Keyphrases are crucial for searching and systematizing scholarly documents. Most current methods for keyphrase extraction are aimed at the extraction of the most significant words in the text. But in practice, the list of keyphrases often includes words that do not appear in the text explicitly. In this case, the list of keyphrases represents an abstractive summary of the source text. In this paper, we experiment with popular transformer-based models for abstractive text summarization using four benchmark datasets for keyphrase extraction. We compare the results obtained with the results of common unsupervised and supervised methods for keyphrase extraction. Our evaluation shows that summarization models are quite effective in generating keyphrases in terms of the full-match F1-score and BERTScore. However, they produce a lot of words that are absent in the author{\textquoteright}s list of keyphrases, which makes summarization models ineffective in terms of ROUGE-1. We also investigate several ordering strategies to concatenate target keyphrases. The results showed that the choice of strategy affects the performance of keyphrase generation.",
keywords = "BART, T5, Transformer, keyphrase extraction, natural language processing, scholarly document, text summarization",
author = "Glazkova, {A. V.} and Morozov, {D. A.}",
note = "This work was supported by the grant of the President of the Russian Federation no. MK-3118.2022.4.",
year = "2023",
month = jan,
doi = "10.1134/S1995080223010134",
language = "English",
volume = "44",
pages = "123--136",
journal = "Lobachevskii Journal of Mathematics",
issn = "1995-0802",
publisher = "Maik Nauka Publishing / Springer SBM",
number = "1",
}
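
The abstract mentions several ordering strategies for concatenating the target keyphrases into a single training string for the summarization model. A minimal sketch of two such strategies in plain Python; the function name, strategy labels, and separator are illustrative assumptions, not details taken from the paper:

```python
def concat_keyphrases(text, keyphrases, strategy="appearance", sep="; "):
    """Concatenate target keyphrases into one target string.

    strategy:
      "appearance"   - keyphrases present in the text first, ordered by
                       their position; absent (abstractive) ones appended
      "alphabetical" - simple case-insensitive lexicographic order
    """
    lower = text.lower()
    if strategy == "alphabetical":
        ordered = sorted(keyphrases, key=str.lower)
    elif strategy == "appearance":
        present = [k for k in keyphrases if k.lower() in lower]
        absent = [k for k in keyphrases if k.lower() not in lower]
        ordered = sorted(present, key=lambda k: lower.index(k.lower())) + absent
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return sep.join(ordered)
```

For example, with the text "Transformer models summarize scholarly documents." and keyphrases ["text summarization", "Transformer", "scholarly document"], appearance ordering yields "Transformer; scholarly document; text summarization", with the absent phrase appended last.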

RIS

TY - JOUR

T1 - Applying Transformer-Based Text Summarization for Keyphrase Generation

AU - Glazkova, A. V.

AU - Morozov, D. A.

N1 - This work was supported by the grant of the President of the Russian Federation no. MK-3118.2022.4.

PY - 2023/1

Y1 - 2023/1

N2 - Keyphrases are crucial for searching and systematizing scholarly documents. Most current methods for keyphrase extraction are aimed at the extraction of the most significant words in the text. But in practice, the list of keyphrases often includes words that do not appear in the text explicitly. In this case, the list of keyphrases represents an abstractive summary of the source text. In this paper, we experiment with popular transformer-based models for abstractive text summarization using four benchmark datasets for keyphrase extraction. We compare the results obtained with the results of common unsupervised and supervised methods for keyphrase extraction. Our evaluation shows that summarization models are quite effective in generating keyphrases in terms of the full-match F1-score and BERTScore. However, they produce a lot of words that are absent in the author’s list of keyphrases, which makes summarization models ineffective in terms of ROUGE-1. We also investigate several ordering strategies to concatenate target keyphrases. The results showed that the choice of strategy affects the performance of keyphrase generation.

AB - Keyphrases are crucial for searching and systematizing scholarly documents. Most current methods for keyphrase extraction are aimed at the extraction of the most significant words in the text. But in practice, the list of keyphrases often includes words that do not appear in the text explicitly. In this case, the list of keyphrases represents an abstractive summary of the source text. In this paper, we experiment with popular transformer-based models for abstractive text summarization using four benchmark datasets for keyphrase extraction. We compare the results obtained with the results of common unsupervised and supervised methods for keyphrase extraction. Our evaluation shows that summarization models are quite effective in generating keyphrases in terms of the full-match F1-score and BERTScore. However, they produce a lot of words that are absent in the author’s list of keyphrases, which makes summarization models ineffective in terms of ROUGE-1. We also investigate several ordering strategies to concatenate target keyphrases. The results showed that the choice of strategy affects the performance of keyphrase generation.

KW - BART

KW - T5

KW - Transformer

KW - keyphrase extraction

KW - natural language processing

KW - scholarly document

KW - text summarization

UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85159950047&origin=inward&txGid=38f6bc022dac85f03459566ee2318fb9

UR - https://www.mendeley.com/catalogue/92f75db9-0d75-33fb-9d60-9274fcddee0f/

U2 - 10.1134/S1995080223010134

DO - 10.1134/S1995080223010134

M3 - Article

VL - 44

SP - 123

EP - 136

JO - Lobachevskii Journal of Mathematics

JF - Lobachevskii Journal of Mathematics

SN - 1995-0802

IS - 1

ER -

ID: 56548170