Automatic evaluation of interpretability methods in text categorization
Zapiski Nauchnykh Seminarov POMI, Investigations on applied mathematics and informatics. Part II–2, Vol. 530 (2023), pp. 68-79
See the article record from the source Math-Net.Ru
Neural networks play an ever-larger role in everyday life, and their complexity keeps growing. A model can show quite decent performance on a collected test set yet produce completely unexpected results under real-world conditions. To determine the cause of such errors, it is important to understand how the model makes its decisions. In this work, we consider various methods for interpreting the BERT model in classification tasks, and we also propose a method for automatically evaluating interpretation methods using fastText and GloVe vector representations.
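The abstract does not spell out the procedure, so the following is only a minimal sketch of the general idea: a toy bag-of-words scorer stands in for the BERT classifier, occlusion (leave-one-token-out) stands in for an interpretability method, and tiny hand-made vectors stand in for the fastText/GloVe embeddings; the class label "sports", the weights, and all vectors are invented for illustration.

```python
import math

# Toy per-word weights standing in for a BERT classifier's behaviour
# on one class ("sports" is an assumed label, not from the paper).
WEIGHTS = {"match": 2.0, "team": 1.5, "won": 1.0, "the": 0.0, "yesterday": 0.1}

def class_score(tokens):
    """Stand-in for the classifier's logit for the target class."""
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def occlusion_importance(tokens):
    """One simple interpretability method: a token's importance is the
    score drop when all its occurrences are removed from the input."""
    base = class_score(tokens)
    return {t: base - class_score([u for u in tokens if u != t])
            for t in set(tokens)}

# Tiny 2-d vectors standing in for fastText/GloVe embeddings.
EMB = {"match": (0.9, 0.1), "team": (0.8, 0.2), "won": (0.7, 0.3),
       "the": (0.1, 0.9), "yesterday": (0.2, 0.8), "sports": (1.0, 0.0)}

def cos(a, b):
    """Cosine similarity of two 2-d vectors."""
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

def evaluate(importance, label="sports", k=2):
    """Automatic-evaluation idea: check whether the top-k tokens picked
    by the interpretation method lie close to the class label in
    embedding space (higher average similarity = better explanation)."""
    top = sorted(importance, key=importance.get, reverse=True)[:k]
    return sum(cos(EMB[t], EMB[label]) for t in top) / k

tokens = ["the", "team", "won", "the", "match", "yesterday"]
imp = occlusion_importance(tokens)
print(sorted(imp, key=imp.get, reverse=True)[:2])  # most important tokens
print(round(evaluate(imp), 3))                     # embedding-based score
```

The same evaluation loop could compare several attribution methods (gradients, attention weights, occlusion) by scoring each one's top-k tokens against the class label's embedding, without any human annotation.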
@article{ZNSL_2023_530_a5,
author = {A. Rogov and N. Lukashevich},
title = {Automatic evaluation of interpretability methods in text categorization},
journal = {Zapiski Nauchnykh Seminarov POMI},
pages = {68--79},
publisher = {mathdoc},
volume = {530},
year = {2023},
language = {en},
url = {http://geodesic.mathdoc.fr/item/ZNSL_2023_530_a5/}
}
A. Rogov; N. Lukashevich. Automatic evaluation of interpretability methods in text categorization. Zapiski Nauchnykh Seminarov POMI, Investigations on applied mathematics and informatics. Part II–2, Vol. 530 (2023), pp. 68-79. http://geodesic.mathdoc.fr/item/ZNSL_2023_530_a5/