Standard

Interpretable logical-probabilistic approximation of neural networks. / Vityaev, Evgenii; Korolev, Alexey.

In: Cognitive Systems Research, Vol. 88, 101301, 12.2024.

Research output: Contribution to journal › Article › peer-review

Vancouver

Vityaev E, Korolev A. Interpretable logical-probabilistic approximation of neural networks. Cognitive Systems Research. 2024 Dec;88:101301. doi: 10.1016/j.cogsys.2024.101301

Author

Vityaev, Evgenii ; Korolev, Alexey. / Interpretable logical-probabilistic approximation of neural networks. In: Cognitive Systems Research. 2024 ; Vol. 88.

BibTeX

@article{7d006bbd2f224e92952539624c6bd268,
title = "Interpretable logical-probabilistic approximation of neural networks",
abstract = "The paper proposes approximating DNNs by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the original neurons to incoming signals and discover all logical-probabilistic causal relationships between the inputs and the output. These causal relationships are, in a certain sense, maximally precise: previous works proved that, theoretically (when the probability is known), they can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the original neurons, with their signals replaced by true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN that also allows tracing the DNN by following its excitations through the causal relationships. This approximation of the DNN is a distillation method of the Model Translation type, which trains an alternative, smaller, interpretable model that mimics the overall input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exposes the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with a statistical estimate of their significance. Experimental results on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN are presented, as well as the accuracy of the obtained logical-probabilistic neural network. From a practical point of view, the interpretable transformation of neural networks is very important for hybrid artificial intelligence systems, in which neural networks are integrated with symbolic AI methods. As a practical application, we consider a smart city.",
keywords = "Interpretation, Logical-probabilistic approach, NN approximation, Neural networks",
author = "Evgenii Vityaev and Alexey Korolev",
note = "The authors thank Huawei for the financial and computational support in conducting the experiments. This work was supported by a grant for research centers, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730324P540002) and the agreement with the Novosibirsk State University dated December 27, 2023 No. 70-2023-001318.",
year = "2024",
month = dec,
doi = "10.1016/j.cogsys.2024.101301",
language = "English",
volume = "88",
journal = "Cognitive Systems Research",
issn = "1389-0417",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - Interpretable logical-probabilistic approximation of neural networks

AU - Vityaev, Evgenii

AU - Korolev, Alexey

N1 - The authors thank Huawei for the financial and computational support in conducting the experiments. This work was supported by a grant for research centers, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730324P540002) and the agreement with the Novosibirsk State University dated December 27, 2023 No. 70-2023-001318.

PY - 2024/12

Y1 - 2024/12

N2 - The paper proposes approximating DNNs by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the original neurons to incoming signals and discover all logical-probabilistic causal relationships between the inputs and the output. These causal relationships are, in a certain sense, maximally precise: previous works proved that, theoretically (when the probability is known), they can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the original neurons, with their signals replaced by true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN that also allows tracing the DNN by following its excitations through the causal relationships. This approximation of the DNN is a distillation method of the Model Translation type, which trains an alternative, smaller, interpretable model that mimics the overall input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exposes the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with a statistical estimate of their significance. Experimental results on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN are presented, as well as the accuracy of the obtained logical-probabilistic neural network. From a practical point of view, the interpretable transformation of neural networks is very important for hybrid artificial intelligence systems, in which neural networks are integrated with symbolic AI methods. As a practical application, we consider a smart city.

AB - The paper proposes approximating DNNs by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the original neurons to incoming signals and discover all logical-probabilistic causal relationships between the inputs and the output. These causal relationships are, in a certain sense, maximally precise: previous works proved that, theoretically (when the probability is known), they can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the original neurons, with their signals replaced by true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN that also allows tracing the DNN by following its excitations through the causal relationships. This approximation of the DNN is a distillation method of the Model Translation type, which trains an alternative, smaller, interpretable model that mimics the overall input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exposes the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with a statistical estimate of their significance. Experimental results on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN are presented, as well as the accuracy of the obtained logical-probabilistic neural network. From a practical point of view, the interpretable transformation of neural networks is very important for hybrid artificial intelligence systems, in which neural networks are integrated with symbolic AI methods. As a practical application, we consider a smart city.

KW - Interpretation

KW - Logical-probabilistic approach

KW - NN approximation

KW - Neural networks

UR - https://www.scopus.com/record/display.uri?eid=2-s2.0-85206111624&origin=inward&txGid=158591f2b747faa4a1380683af689016

UR - https://www.mendeley.com/catalogue/133814f1-b553-34e5-9bca-2afb48ce2c53/

U2 - 10.1016/j.cogsys.2024.101301

DO - 10.1016/j.cogsys.2024.101301

M3 - Article

VL - 88

JO - Cognitive Systems Research

JF - Cognitive Systems Research

SN - 1389-0417

M1 - 101301

ER -

ID: 60753572
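
Illustration

For readers of this record, the Python sketch below illustrates, in rough outline only, the idea described in the abstract: binarize each neuron's activations to true/false and learn short probabilistic if-then rules that predict the binarized output of a neuron from the binarized signals feeding it. It is a minimal sketch under assumed simplifications (threshold binarization, confidence-ranked conjunction rules, rules mined only for the "true" outcome), not the authors' implementation; the names binarize, mine_rules, and LPNeuron are hypothetical.

# Minimal illustrative sketch (not the paper's code) of a logical-probabilistic
# surrogate for one neuron: threshold activations to booleans, then mine short
# conjunction rules whose conditional probability of the target is high.
import numpy as np
from itertools import combinations

def binarize(activations, threshold=0.0):
    # Map real-valued activations to True/False by thresholding.
    return activations > threshold

def mine_rules(X_bin, y_bin, max_len=2, min_conf=0.9, min_support=5):
    # Enumerate conjunctions of up to max_len input literals and keep those
    # whose confidence P(target | body) is at least min_conf.
    n_samples, n_inputs = X_bin.shape
    literals = [(j, v) for j in range(n_inputs) for v in (True, False)]
    rules = []
    for length in range(1, max_len + 1):
        for body in combinations(literals, length):
            mask = np.ones(n_samples, dtype=bool)
            for j, v in body:
                mask &= (X_bin[:, j] == v)
            if mask.sum() < min_support:
                continue
            conf = y_bin[mask].mean()
            if conf >= min_conf:
                rules.append((body, conf))
    return rules

class LPNeuron:
    # Surrogate neuron: predicts its binarized output using the highest-confidence
    # rule that fires on the given binary input, and returns that rule as the explanation.
    def __init__(self, rules):
        self.rules = sorted(rules, key=lambda r: -r[1])

    def predict(self, x_bin):
        for body, conf in self.rules:
            if all(x_bin[j] == v for j, v in body):
                return True, (body, conf)
        return False, None

# Usage on synthetic data standing in for one hidden neuron of a trained DNN.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # responses of the neurons feeding this one
w = np.array([1.0, -0.5, 0.0, 2.0])
y = X @ w                                  # the neuron's pre-activation response
X_bin, y_bin = binarize(X), binarize(y)
neuron = LPNeuron(mine_rules(X_bin, y_bin))
pred, explanation = neuron.predict(X_bin[0])
print(pred, explanation)

The rule returned alongside each prediction is what makes such a surrogate locally interpretable: it names the input literals and the estimated conditional probability that justify the predicted output.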