This repository contains the source code for a thesis submitted in partial fulfilment of the requirements for the MSc in Computational Statistics and Machine Learning at University College London.
Performance benchmarks on MNIST, CIFAR-10, SVHN, and CIFAR-100 are listed below (and here).
Instructions for running the experiments are here.
**MNIST** (test error, %)

100 labels | 1000 labels | All labels | Method | Year |
---|---|---|---|---|
0.93 (±0.065) | N/A | N/A | Improved Techniques for Training GANs | 2016 |
1.002 (±0.038) | 0.979 (±0.025) | 0.578 (±0.013) | Deconstructing the Ladder Network Architecture (Ladder w/ AMLP[2,2,2]) | 2015 |
1.072 (±0.015) | 0.974 (±0.021) | 0.598 (±0.014) | Deconstructing the Ladder Network Architecture (Ladder w/ AMLP[4]) | 2015 |
1.072 (±0.015) | 1.193 (±0.039) | 0.569 (±0.010) | Deconstructing the Ladder Network Architecture (Ladder w/ AMLP[2,2]) | 2015 |
1.06 (±0.37) | 0.84 (±0.08) | 0.57 (±0.02) | Semi-Supervised Learning with Ladder Networks | 2015 |
1.36 | 1.27 | 0.64 | Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning | 2017 |
2.33 | 1.36 | 0.637 (±0.046) | Distributional Smoothing with Virtual Adversarial Training | 2016 (ICLR) |
**CIFAR-10** (test error, %)

4k labels | All labels | Method | Year |
---|---|---|---|
10.55 | N/A | Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning [Conv-Large w/ EntMin, w/ augmentation] | 2017 |
12.16 (±0.24) | 5.60 (±0.10) | Temporal Ensembling for Semi-Supervised Learning [w/ augmentation] | 2016 |
13.15 | N/A | Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning [Conv-Large w/ EntMin, no augmentation] | 2017 |
20.40 | N/A | Semi-Supervised Learning with Ladder Networks [Conv-Large, Gamma model, no augmentation] | 2015 |
**SVHN** (test error, %)

500 labels | 1000 labels | All labels | Method | Year |
---|---|---|---|---|
N/A | 3.86 | N/A | Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning [Conv-Large w/ EntMin, w/ augmentation] | 2017 |
N/A | 4.28 | N/A | Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning [Conv-Large w/ EntMin, no augmentation] | 2017 |
5.12 (±0.13) | 4.42 (±0.16) | 2.74 (±0.06) | Temporal Ensembling for Semi-Supervised Learning [w/ augmentation] | 2016 |
6.65 (±0.53) | 4.82 (±0.17) | 2.54 (±0.04) | Temporal Ensembling for Semi-Supervised Learning [Pi model w/ augmentation] | 2016 |
N/A | 24.63 | N/A | Distributional Smoothing with Virtual Adversarial Training | 2016 (ICLR) |
**CIFAR-100** (test error, %)

10k labels | All labels | Random 500k Tiny Images | Restricted 237k Tiny Images | Method | Year |
---|---|---|---|---|---|
38.65 (±0.51) | 26.30 (±0.15) | 23.62 (±0.23) | 23.79 (±0.24) | Temporal Ensembling for Semi-Supervised Learning | 2016 |