Research output: Publications in books, reports, collections, conference proceedings › article in conference proceedings › scientific › peer-reviewed
An Explanation Method for Semantic Segmentation Enhance Brain Tumor Classification. / Kenzhin, Roman; Luu, Minh Sao Khue; Pavlovskiy, Evgeniy et al.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer, 2025. pp. 319-330, Chapter 23 (Lecture Notes in Computer Science; Vol. 15406 LNCS).
TY - GEN
T1 - An Explanation Method for Semantic Segmentation Enhance Brain Tumor Classification
AU - Kenzhin, Roman
AU - Luu, Minh Sao Khue
AU - Pavlovskiy, Evgeniy
AU - Tuchinov, Bair
N1 - Conference code: 10
PY - 2025
Y1 - 2025
AB - Deep learning algorithms for the analysis of magnetic resonance image data are often used to support the decisions of medical staff. However, most deep learning models are considered “black boxes”, so it is often difficult to interpret their results. In this study, we used methods of explainable artificial intelligence to interpret the decision-making of a deep learning algorithm for brain tumor classification. An open dataset containing three kinds of brain tumors and the Siberian Brain Tumor Dataset, comprising brain MRI scans of Russian patients with four types of tumors, were used for training. To improve both the classification and the interpretability of the neural network models, we provide the classifier with segmentation masks as semantic features. Using post-hoc interpretation methods from the Captum library, we show that including semantic features (tumor segmentation masks) in the model significantly improves not only the quality of the models but also the interpretability of their results.
KW - Brain tumor
KW - Deep learning
KW - Explainable AI
KW - MRI
KW - Semantic Segmentation
UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85219205107&origin=inward&txGid=ad30732afee255e58f51faef3ca12c1d
UR - https://www.mendeley.com/catalogue/fe077aaa-437d-3bfc-8e53-f9a669747de0/
U2 - 10.1007/978-3-031-78459-0_23
DO - 10.1007/978-3-031-78459-0_23
M3 - Conference contribution
SN - 9783031784583
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 319
EP - 330
BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
PB - Springer
T2 - 10th Russian Supercomputing Days Conference
Y2 - 23 September 2024 through 24 September 2024
ER -
ID: 64991205
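
For readers who want to experiment with the approach the abstract describes, below is a minimal sketch (not the authors' code) of the two ingredients it names: a classifier that receives the tumor segmentation mask as an extra semantic input channel, and a post-hoc attribution query through the Captum library. The MaskAwareClassifier architecture, the 128x128 input size, and the choice of IntegratedGradients (one of several Captum attribution methods) are all illustrative assumptions.

```python
# Illustrative sketch, not the authors' implementation: a toy classifier
# that takes the tumor segmentation mask as a second input channel, probed
# post hoc with Captum's IntegratedGradients (the paper's exact model and
# attribution method are assumptions here).
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


class MaskAwareClassifier(nn.Module):
    """Toy CNN; channel 0 = MRI slice, channel 1 = tumor segmentation mask."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


model = MaskAwareClassifier().eval()

# Dummy batch: one 128x128 MRI slice stacked with its segmentation mask.
mri = torch.randn(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.9).float()
inputs = torch.cat([mri, mask], dim=1)

# Post-hoc attribution: how much each input pixel, in both the image and
# the mask channel, contributed to the predicted class.
pred = model(inputs).argmax(dim=1)
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, target=pred.item())
print(attributions.shape)  # torch.Size([1, 2, 128, 128])
```

Comparing the attribution maps of such a mask-aware model with those of a mask-free baseline is the kind of analysis the abstract reports: the mask channel gives the explanation method a semantically meaningful feature to attribute the prediction to.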