NVIV

Although deep neural networks (DNNs) achieve strong performance across a wide variety of tasks, they remain opaque and hard to trust, which seriously hinders their adoption in decision-making, particularly in high-risk domains. This work presents Node Visualization and Interpretability Verification (NVIV), a technique for verifying DNN interpretability, to address these challenges. The method offers two distinct benefits: 1) it replicates the DNN decision-making process with a decision tree and visualizes each node's output, exposing the basis for a decision; 2) it introduces an interpretability verification method based on the correlation degree of convolution kernel units to assess the model's confidence. NVIV achieves interpretability that is intuitive and easy to understand while also maintaining high accuracy. In our experiments, the method outperforms comparable approaches on the attention force ratio under the positioning evaluation. Furthermore, rather than being limited to the visualized region, the experiments show that the method accurately locates the most responsive region of the target object and explains the model's internal decision basis. Compared with other similar techniques, the proposed model more accurately characterizes the decision-making basis of DNNs.
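To make the first benefit concrete, below is a minimal sketch of a decision-tree surrogate that mimics a trained DNN's predictions and prints per-node decision rules. This is an illustration of the general surrogate-tree idea, not the repository's exact implementation; `model` and `loader` are assumed to be any trained PyTorch classifier and its data loader.

```python
# Sketch: fit a decision-tree surrogate that mimics a trained DNN's
# decisions, then print the node splits as a human-readable decision basis.
# Assumption: `model` is a trained torch classifier, `loader` yields (x, y).
import numpy as np
import torch
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_surrogate_tree(model, loader, max_depth=4):
    model.eval()
    feats, labels = [], []
    with torch.no_grad():
        for x, _ in loader:
            logits = model(x)                         # DNN forward pass
            labels.append(logits.argmax(1).numpy())   # mimic DNN predictions
            feats.append(x.flatten(1).numpy())        # flattened inputs as features
    X = np.concatenate(feats)
    y = np.concatenate(labels)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    print(export_text(tree))  # visualize the per-node decision rules
    return tree
```

For the second benefit, the sketch below shows one plausible reading of a confidence score derived from the "correlation degree of convolution kernel units": the mean absolute pairwise correlation between kernel activations. The input `conv_acts` is assumed to be an (N, C, H, W) activation tensor captured with a forward hook; the exact scoring used by NVIV may differ.

```python
# Sketch: estimate a confidence score from correlations between
# convolution-kernel activations (an assumed interpretation of the
# "correlation degree of convolution kernel units").
import torch

def kernel_correlation_score(conv_acts: torch.Tensor) -> float:
    # Average each kernel's activation map over space: (N, C, H, W) -> (N, C)
    per_kernel = conv_acts.mean(dim=(2, 3))
    # Pearson correlation matrix across the C kernel units (rows = variables)
    corr = torch.corrcoef(per_kernel.T)               # (C, C)
    # Mean absolute off-diagonal correlation as a scalar "correlation degree"
    off_diag = corr - torch.diag(torch.diag(corr))
    c = conv_acts.shape[1]
    return (off_diag.abs().sum() / (c * (c - 1))).item()
```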
