# Estimating Certainty in Deep Learning

Project in Computational Science, Uppsala University, 2019
The project compares different methods and metrics for obtaining well-calibrated uncertainty estimates in deep learning classification tasks, contrasting fully Bayesian models with approximate Bayesian ones.
We found that the best-calibrated uncertainty for deep CNNs was obtained by combining label smoothing with temperature scaling. These two methods yielded better calibration than both Monte Carlo Dropout and variational inference based methods.
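As a minimal sketch of the two winning techniques and the usual calibration metric: temperature scaling divides the logits by a scalar T > 1 to soften overconfident softmax outputs, label smoothing spreads a small mass eps over the non-target classes, and expected calibration error (ECE) measures the gap between confidence and accuracy. The function names and the pure-Python style below are illustrative assumptions, not the repository's actual implementation in `framework.py`.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature scaling: divide logits by T before normalizing.
    # T > 1 softens the distribution (lower peak confidence).
    z = [x / temperature for x in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def smooth_labels(label, num_classes, eps=0.1):
    # Label smoothing: (1 - eps) on the true class,
    # eps / (K - 1) spread uniformly over the other classes.
    return [(1.0 - eps) if k == label else eps / (num_classes - 1)
            for k in range(num_classes)]

def expected_calibration_error(all_probs, labels, n_bins=10):
    # ECE: bin predictions by their confidence (max probability) and
    # average |accuracy - confidence| weighted by bin occupancy.
    n = len(labels)
    bins = [[] for _ in range(n_bins)]
    for probs, label in zip(all_probs, labels):
        conf = max(probs)
        pred = probs.index(conf)
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, float(pred == label)))
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            avg_acc = sum(a for _, a in b) / len(b)
            ece += len(b) / n * abs(avg_acc - avg_conf)
    return ece
```

In practice the temperature T is fitted on a held-out validation set (e.g. by minimizing negative log-likelihood) after training, so it changes confidence without changing the predicted class.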
The code is forked from the repository Noodles-321/Certainty by Jiahao Lu.
## Installation

Set up the environment with one of the following commands.

For running on GPU:

```shell
conda create --name tftorch --file requirements.txt
```

or

```shell
conda install --yes --file requirements.txt
```

or (for local win-64):

```shell
conda env create -f tftorch.yml
```

For running on CPU:

```shell
conda create --name certainty_venv python=3.6 --file requirements_CPU.txt -y && conda activate certainty_venv
```
## Usage

To then run the code:

```shell
python framework.py
```
## Links

- Code Repo
- Report (Change link)
- Poster (Change link)
## Authors

- Jianbo Li - Github, Mail
- Jiahao Lu - Github, Mail
- Markus Sagen - Github, Mail