This repository is an unofficial implementation of active learning (AL) baseline algorithms for image classification.
To install requirements:

```
pip install -r requirements.txt
```
To train a model, run a command like the following (random selection on CIFAR-10):

```
python main.py -AL random -D CIFAR10 --data_dir <path_to_data> --batch_size 128 --lr 0.1 --lrscheduler multistep --milestone 160 --num_epoch 200 --resize 32 --init kaiming
```
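For context on the `--lrscheduler multistep --milestone 160` flags above: a multistep schedule holds the initial `--lr` fixed until each milestone epoch and then decays it by a factor. A minimal sketch, where the decay factor `gamma=0.1` is an assumption for illustration and not necessarily this repository's default:

```python
def multistep_lr(epoch, base_lr=0.1, milestones=(160,), gamma=0.1):
    """Return the learning rate at a given epoch under a multistep schedule.

    The rate is multiplied by `gamma` once for each milestone already passed.
    `gamma=0.1` is an assumed value, not taken from this repository.
    """
    decays = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** decays)

# With --lr 0.1, --milestone 160, --num_epoch 200:
# epochs 0-159 train at 0.1, epochs 160-199 at the decayed rate.
print(multistep_lr(0), multistep_lr(160))
```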
- `-AL`: implemented AL method
  - `fixed`: AL with fixed indices
  - `random`: random selection
  - `coreset`: core set selection [paper]
  - `vaal`: VAAL [paper]
    - `num_epoch_vaal` (default 100)
  - `learningloss`: learning loss [paper]
    - `subset_size` (default 10000)
    - `epoch_loss` (default 120)
    - `margin` (default 1.0)
    - `weight` (default 1.0)
  - `ws`: weight decay scheduling [paper]
    - `ws_sampling_type`
  - `badge`: BADGE selection [paper]
  - `seqgcn`: sequential GCN [paper]
    - `subset_size` (default 10000)
    - `lambda_loss` (default 1.2)
    - `s_margin` (default 0.1)
    - `hidden_units` (default 128)
    - `dropout_rate` (default 0.3)
    - `lr_gcn` (default 0.001)
  - `tavaal`: TA-VAAL [paper]
    - `num_epoch_vaal` (default 100)
    - `weight` (default 1.0)
    - `subset_size` (default 10000)
  - `bait`: BAIT [paper]
  - `alfamix`: ALFA-Mix [paper]
  - `gradnorm`: AL GradNorm [paper]
    - `subset_size` (default 10000)
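For intuition, the `random` baseline used in the training command above simply grows the labeled set by sampling uniformly from the unlabeled pool each round. A minimal, framework-free sketch (pool size, budget, and the `random_query` helper are illustrative, not this repository's API):

```python
import random

def random_query(unlabeled_indices, budget, seed=0):
    # Uniformly sample `budget` indices from the unlabeled pool.
    rng = random.Random(seed)
    return rng.sample(sorted(unlabeled_indices), budget)

# Toy AL loop: start with 100 labeled points, query 100 more per round.
pool = set(range(1000))
labeled = set(random_query(pool, 100))
for round_idx in range(3):
    unlabeled = pool - labeled
    queried = random_query(unlabeled, 100, seed=round_idx)
    labeled.update(queried)
    # ...retrain the classifier on `labeled` here...

print(len(labeled))  # 400
```

The other `-AL` choices plug different acquisition functions into the same loop; only the query step changes.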
Our model achieves the following performance:

| Model name \ Accuracy over labeled set | 600 | 800 | 1000 |
|---|---|---|---|
| Ours | . | . | . |
MIT License
This implementation references the following repositories:
- https://github.com/ej0cl6/deep-active-learning
- https://github.com/Mephisto405/Learning-Loss-for-Active-Learning
- https://github.com/JordanAsh/badge
- https://github.com/sinhasam/vaal
- https://github.com/cubeyoung/TA-VAAL
- https://github.com/razvancaramalau/Sequential-GCN-for-Active-Learning
- https://github.com/AminParvaneh/alpha_mix_active_learning
- https://github.com/xulabs/aitom/tree/master/aitom/ml/active_learning/al_gradnorm