imirzadeh/Teacher-Assistant-Knowledge-Distillation
Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf
Python · MIT license
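The paper linked above trains a mid-sized "teacher assistant" between the teacher and the student, with each stage distilled using the standard temperature-scaled knowledge-distillation loss. As a minimal, dependency-free sketch of that loss (the function names and the `T`/`alpha` defaults here are illustrative, not this repository's actual API):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, labels_onehot, T=4.0, alpha=0.9):
    # Weighted sum of the soft-target cross-entropy against the teacher's
    # temperature-softened distribution (scaled by T^2, as is conventional)
    # and the ordinary hard-label cross-entropy.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)) * T * T
    p_hard = softmax(student_logits)
    hard = -sum(y * math.log(p) for y, p in zip(labels_onehot, p_hard))
    return alpha * soft + (1.0 - alpha) * hard
```

In the TA setup, the same loss is applied twice: first the TA distills from the teacher, then the student distills from the TA.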
Issues
Baseline training
#11 opened by peterbj95 - 0
The performance of plain10 and plain2 on CIFAR-100
#23 opened by cotyyang - 0
I want to know the nni version
#22 opened by TaiseiYamana - 0
nni issue
#21 opened by arthur0219 - 0
Implementation of DML as a benchmark
#20 opened by aryanasadianuoit - 1
Three main issues: seed optimization, parameter tuning on the test set, and wrong normalization for CIFAR-10
#19 opened by tknovisky - 0
ImageNet - ResNet experiment
#18 opened by wonchulSon - 2
TA training parameters for CIFAR-100 experiment
#17 opened by alberthky - 2
maybe one mistake
#16 opened by jiangmijiangmi - 3
Tensorboard web UI
#14 opened by mlleo - 3
Baseline training
#15 opened by ming666-wum - 5
Experimental results
#13 opened by MaorunZhang - 6
CIFAR-10 training - resnet110 to resnet8
#10 opened by karanchahal - 0
Excuse me, how can I solve this problem?
#12 opened by storm-zhuo - 14
Teacher (resnet26) best accuracy
#8 opened by TeerathChandani - 12
Optimized values
#7 opened by TeerathChandani - 2
command not found: nnictl
#6 opened by TeerathChandani - 1
Error while running the code
#5 opened by TeerathChandani - 2
Choice of Loss Function
#4 opened by adrianloy - 3
Training Issue (problem with nni)
#2 opened by userb2020 - 1
Student's performance on resnet8
#3 opened by aminshabani - 5
'resnet110' as teacher, 'resnet20' as TA, 'resnet8' as student on CIFAR-100
#1 opened by InstantWindy