- (Zhou Bolei) Interpretable Representation Learning for Visual Intelligence [thesis]
- (Been Kim) Interactive and Interpretable Machine Learning Models for Human Machine Collaboration [thesis]
- Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European conference on computer vision. Springer, Cham, 2014. [paper]
- Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. "Deep inside convolutional networks: Visualising image classification models and saliency maps." arXiv preprint arXiv:1312.6034 (2013). [paper]
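The core idea in Simonyan et al. is that a saliency map is just the magnitude of the gradient of a class score with respect to the input pixels. A minimal NumPy sketch of that idea, using a tiny hand-written two-layer network so the gradient can be computed analytically (the model, its sizes, and all names here are illustrative, not the paper's setup):

```python
import numpy as np

def saliency(x, W1, w2):
    """Vanilla gradient saliency for a toy score s = w2 . relu(W1 @ x).

    Returns |ds/dx|: one importance value per input "pixel", in the
    spirit of Simonyan et al. (2013). Purely illustrative toy model.
    """
    h = W1 @ x                           # pre-activations
    relu_mask = (h > 0).astype(x.dtype)  # relu'(h)
    # Chain rule: ds/dx = W1^T @ (w2 * relu'(h))
    grad = W1.T @ (w2 * relu_mask)
    return np.abs(grad)

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # stand-in for a flattened image
W1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=8)
s_map = saliency(x, W1, w2)              # one value per input dimension
```

With a real CNN the gradient would come from backpropagation (e.g. autograd) rather than this hand-derived chain rule, but the saliency map is the same quantity.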
4.3. Simplify the input image & Visualize the receptive fields & Emergence of objects as the internal representation
- Zhou, Bolei, et al. "Object detectors emerge in deep scene CNNs." arXiv preprint arXiv:1412.6856 (2014). [paper] [related dataset: Places]
- Kendall, Alex, and Yarin Gal. "What uncertainties do we need in Bayesian deep learning for computer vision?" Advances in neural information processing systems. 2017. [paper] [related datasets: CamVid, NYU v2, Make3D]
- (For background on Bayesian neural networks, see Blundell, Charles, et al. "Weight uncertainty in neural networks." arXiv preprint arXiv:1505.05424 (2015). [paper] See also the GitHub repository Bayesian-Neural-Networks.)
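Both Zeiler & Fergus and Zhou et al. probe which image regions drive a unit's response by sliding an occluder over the input and recording the score drop, which also gives an empirical estimate of a unit's receptive field. A minimal NumPy sketch of that occlusion procedure, with a toy scoring function standing in for a CNN (all names and sizes here are illustrative):

```python
import numpy as np

def occlusion_map(img, score_fn, patch=4, fill=0.5):
    """Occlusion sensitivity in the spirit of Zeiler & Fergus (2014):
    slide a gray patch over the image and record how much the score
    drops at each location. `score_fn` is a stand-in for a CNN unit.
    """
    H, W = img.shape
    base = score_fn(img)
    heat = np.zeros((H - patch + 1, W - patch + 1))
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i, j] = base - score_fn(occluded)  # large drop = important
    return heat

# Toy "unit": responds only to brightness in the top-left 8x8 corner.
score_fn = lambda im: im[:8, :8].sum()
img = np.zeros((16, 16))
img[:8, :8] = 1.0
heat = occlusion_map(img, score_fn)      # peaks where occlusion hurts most
```

The region where `heat` is nonzero is exactly the toy unit's receptive field, which is the intuition behind the discrepancy-map estimates in Zhou et al.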