Interpretability methods for neural networks are tools for understanding the decision-making processes inside these black-box models. They aim to explain why a network produces a specific prediction or classification, shedding light on its inner workings. Approaches such as feature visualization, saliency maps, and attribution methods relate input features to output predictions, making neural networks more transparent, accountable, and trustworthy.
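As a minimal illustration of one such technique, the sketch below computes a gradient-based saliency map in PyTorch; the model and input are placeholders, not taken from any specific project.

```python
# Minimal sketch of a gradient-based saliency map in PyTorch.
# The model and input are placeholders; any differentiable classifier works.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()              # placeholder, untrained model
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input image

scores = model(image)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()  # backpropagate the predicted class score to the pixels

# Pixels with large gradient magnitude influence the prediction the most.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) saliency map
print(saliency.shape)
```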
In particular, as part of my PhD thesis I developed Transformational Measures, which quantify the relationship between transformations of a neural network's inputs and its outputs or intermediate representations in terms of invariance and same-equivariance.
Measures of invariance to transformations such as rotation, scaling, and translation for Deep Neural Networks
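One simple way to quantify invariance, sketched below, is to compare how much a layer's activations vary across transformed copies of the same inputs with how much they vary across different inputs. This is only an illustration of the general idea, not the exact measure definitions from the thesis; the helper `invariance_score` and its details are assumptions.

```python
# Sketch of a variance-ratio invariance measure: the ratio of activation variance
# across rotated copies of each input to activation variance across inputs.
# Illustration of the general idea only, not the exact definitions from the thesis.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def invariance_score(feature_fn, batch, angles=(0.0, 90.0, 180.0, 270.0)):
    # feature_fn maps a batch of images (N, C, H, W) to activations (N, F).
    # Stack the activations of each rotated copy of the batch: shape (T, N, F).
    feats = torch.stack([feature_fn(TF.rotate(batch, a)) for a in angles])
    var_transforms = feats.var(dim=0).mean()           # variation caused by the transformation
    var_samples = feats.mean(dim=0).var(dim=0).mean()  # variation across different inputs
    return (var_transforms / (var_samples + 1e-8)).item()  # lower = more invariant

# Example: rotational invariance of global-average-pooled convolutional features.
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
images = torch.randn(16, 3, 32, 32)  # placeholder data
print(invariance_score(features, images))
```

A ratio close to zero indicates that the transformation barely affects the features relative to how much they differ between samples.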
TMEASURES: A library to easily compute Transformational Measures for PyTorch models; a TensorFlow version is in progress. The library provides various invariance and same-equivariance measures ready to be applied to any neural network model, and it also makes implementing new transformational measures straightforward and efficient.
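The sketch below shows how a measure like the one above could be applied to an intermediate layer of an arbitrary PyTorch model using a forward hook. It is illustrative only and does not use the actual tmeasures API; the model, layer choice, and inline measure are placeholders.

```python
# Sketch of applying an invariance measure to an intermediate layer of an
# arbitrary PyTorch model via a forward hook. Not the tmeasures API.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)  # placeholder model

captured = {}
layer = model[3]  # the ReLU after the second convolution
handle = layer.register_forward_hook(lambda m, i, o: captured.update(act=o.detach()))

def layer_features(batch):
    model(batch)                                 # forward pass triggers the hook
    return captured["act"].flatten(start_dim=1)  # (N, F) activations of the chosen layer

batch = torch.randn(8, 3, 32, 32)  # placeholder data
feats = torch.stack([layer_features(TF.rotate(batch, a)) for a in (0.0, 90.0, 180.0, 270.0)])
ratio = (feats.var(dim=0).mean() / (feats.mean(dim=0).var(dim=0).mean() + 1e-8)).item()
print(f"invariance ratio of the chosen layer: {ratio:.3f}")
handle.remove()
```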
FGR-Net is a neural network that assesses and interprets fundus image quality by combining an autoencoder with a classifier network.
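The sketch below shows the general autoencoder-plus-classifier pattern in PyTorch; it is a toy illustration under assumed layer sizes, not the actual FGR-Net architecture or training setup.

```python
# Toy sketch of an autoencoder combined with a classifier head.
# Illustration of the general pattern only, not the actual FGR-Net architecture.
import torch
import torch.nn as nn

class QualityNet(nn.Module):  # hypothetical name
    def __init__(self):
        super().__init__()
        # The encoder compresses the fundus image; the decoder's reconstruction
        # can be inspected to help interpret why an image was judged low quality.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # The classifier head predicts image quality from the encoded features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

x = torch.randn(4, 3, 128, 128)  # placeholder batch of fundus images
reconstruction, logits = QualityNet()(x)
print(reconstruction.shape, logits.shape)
```

Training such a model would typically combine a reconstruction loss with a classification loss, so the reconstructions can be inspected alongside the quality predictions.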