Standard

An Explanation Method for Semantic Segmentation Enhance Brain Tumor Classification. / Kenzhin, Roman; Luu, Minh Sao Khue; Pavlovskiy, Evgeniy et al.

2025. 319-330.

Research output: Conference proceedings › paper › peer-review


BibTeX

@conference{00d001137d2d43609ac3a8270a4f0c94,
title = "An Explanation Method for Semantic Segmentation Enhance Brain Tumor Classification",
abstract = "Deep learning algorithms for the analysis of magnetic resonance image data are often used to support the decisions of medical staff. However, most deep learning models are considered “black boxes”. Thus, it is often difficult to interpret the results of the applied deep learning methods. In this study, by using methods of explainable artificial intelligence, we attempted to interpret the decision-making of a deep learning algorithm in the case of brain tumor classification. An open dataset with three kinds of brain tumors and the Siberian Brain Tumor Dataset of Russian people{\textquoteright}s brains with four types of tumors were used for algorithm training. In order to improve the classification and interpretation of the neural network models, we provide a classifier with segmentations as semantic features. Using posterior methods of interpretation provided by the Captum library, it is shown that the inclusion of semantic features (tumor segmentation masks) in the model leads to a significant improvement not only in the quality of the models but also in the greater interpretability of the results.",
author = "Roman Kenzhin and Luu, {Minh Sao Khue} and Evgeniy Pavlovskiy and Bair Tuchinov",
year = "2025",
month = jan,
day = "31",
doi = "https://doi.org/10.1007/978-3-031-78459-0_23",
language = "русский",
pages = "319--330",

}

RIS

TY - CONF

T1 - An Explanation Method for Semantic Segmentation Enhance Brain Tumor Classification

AU - Kenzhin, Roman

AU - Luu, Minh Sao Khue

AU - Pavlovskiy, Evgeniy

AU - Tuchinov, Bair

PY - 2025/1/31

Y1 - 2025/1/31

N2 - Deep learning algorithms for the analysis of magnetic resonance image data are often used to support the decisions of medical staff. However, most deep learning models are considered “black boxes”. Thus, it is often difficult to interpret the results of the applied deep learning methods. In this study, by using methods of explainable artificial intelligence, we attempted to interpret the decision-making of a deep learning algorithm in the case of brain tumor classification. An open dataset with three kinds of brain tumors and the Siberian Brain Tumor Dataset of Russian people’s brains with four types of tumors were used for algorithm training. In order to improve the classification and interpretation of the neural network models, we provide a classifier with segmentations as semantic features. Using posterior methods of interpretation provided by the Captum library, it is shown that the inclusion of semantic features (tumor segmentation masks) in the model leads to a significant improvement not only in the quality of the models but also in the greater interpretability of the results.

AB - Deep learning algorithms for the analysis of magnetic resonance image data are often used to support the decisions of medical staff. However, most deep learning models are considered “black boxes”. Thus, it is often difficult to interpret the results of the applied deep learning methods. In this study, by using methods of explainable artificial intelligence, we attempted to interpret the decision-making of a deep learning algorithm in the case of brain tumor classification. An open dataset with three kinds of brain tumors and the Siberian Brain Tumor Dataset of Russian people’s brains with four types of tumors were used for algorithm training. In order to improve the classification and interpretation of the neural network models, we provide a classifier with segmentations as semantic features. Using posterior methods of interpretation provided by the Captum library, it is shown that the inclusion of semantic features (tumor segmentation masks) in the model leads to a significant improvement not only in the quality of the models but also in the greater interpretability of the results.

U2 - 10.1007/978-3-031-78459-0_23

DO - 10.1007/978-3-031-78459-0_23

M3 - paper

SP - 319

EP - 330

ER -

ID: 64814230
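
The abstract pairs two concrete techniques: feeding tumor segmentation masks to the classifier as extra semantic input features, and applying post-hoc attribution methods from the Captum library. The sketch below is a hypothetical illustration of that combination, not the authors' implementation: the toy CNN, the two-channel input layout, and all tensor shapes are assumptions of this sketch, and the paper does not state which Captum methods were used; only the captum.attr.IntegratedGradients calls are real library API.

# Hypothetical sketch, not the paper's code: the toy model and shapes are assumptions.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class ToyTumorClassifier(nn.Module):
    """Tiny CNN over an MRI slice stacked with its tumor segmentation mask."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # ch 0: image, ch 1: mask
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = ToyTumorClassifier().eval()

# Dummy batch: one 128x128 slice plus a binary tumor mask as a second channel.
image = torch.randn(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.95).float()
inputs = torch.cat([image, mask], dim=1)

# Post-hoc attribution with Integrated Gradients from Captum.
ig = IntegratedGradients(model)
target = model(inputs).argmax(dim=1)        # explain the predicted class
attr = ig.attribute(inputs, target=target)  # attributions, same shape as inputs

# Split attribution mass between the raw image and the semantic mask channel.
per_channel = attr.abs().sum(dim=(0, 2, 3))
print({"image": float(per_channel[0]), "mask": float(per_channel[1])})

Comparing attribution mass between the image channel and the mask channel is one simple way to check whether the added semantic feature actually drives the prediction, which is the kind of interpretability gain the abstract reports.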