Research output: Contribution to journal › Article › peer-review
Applying Transformer-Based Text Summarization for Keyphrase Generation. / Glazkova, A. V.; Morozov, D. A.
In: Lobachevskii Journal of Mathematics, Vol. 44, No. 1, 01.2023, p. 123-136.
TY - JOUR
T1 - Applying Transformer-Based Text Summarization for Keyphrase Generation
AU - Glazkova, A. V.
AU - Morozov, D. A.
N1 - This work was supported by the grant of the President of the Russian Federation no. MK-3118.2022.4.
PY - 2023/1
Y1 - 2023/1
N2 - Keyphrases are crucial for searching and systematizing scholarly documents. Most current methods for keyphrase extraction aim to extract the most significant words from the text. In practice, however, the list of keyphrases often includes words that do not appear in the text explicitly; in this case, the list of keyphrases represents an abstractive summary of the source text. In this paper, we experiment with popular transformer-based models for abstractive text summarization using four benchmark datasets for keyphrase extraction. We compare the results obtained with those of common unsupervised and supervised methods for keyphrase extraction. Our evaluation shows that summarization models are quite effective at generating keyphrases in terms of the full-match F1-score and BERTScore. However, they produce many words that are absent from the author’s list of keyphrases, which makes summarization models ineffective in terms of ROUGE-1. We also investigate several ordering strategies for concatenating target keyphrases. The results show that the choice of strategy affects the performance of keyphrase generation.
AB - Keyphrases are crucial for searching and systematizing scholarly documents. Most current methods for keyphrase extraction aim to extract the most significant words from the text. In practice, however, the list of keyphrases often includes words that do not appear in the text explicitly; in this case, the list of keyphrases represents an abstractive summary of the source text. In this paper, we experiment with popular transformer-based models for abstractive text summarization using four benchmark datasets for keyphrase extraction. We compare the results obtained with those of common unsupervised and supervised methods for keyphrase extraction. Our evaluation shows that summarization models are quite effective at generating keyphrases in terms of the full-match F1-score and BERTScore. However, they produce many words that are absent from the author’s list of keyphrases, which makes summarization models ineffective in terms of ROUGE-1. We also investigate several ordering strategies for concatenating target keyphrases. The results show that the choice of strategy affects the performance of keyphrase generation.
KW - BART
KW - T5
KW - Transformer
KW - keyphrase extraction
KW - natural language processing
KW - scholarly document
KW - text summarization
UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85159950047&origin=inward&txGid=38f6bc022dac85f03459566ee2318fb9
UR - https://www.mendeley.com/catalogue/92f75db9-0d75-33fb-9d60-9274fcddee0f/
U2 - 10.1134/S1995080223010134
DO - 10.1134/S1995080223010134
M3 - Article
VL - 44
SP - 123
EP - 136
JO - Lobachevskii Journal of Mathematics
JF - Lobachevskii Journal of Mathematics
SN - 1995-0802
IS - 1
ER -