Error when using train_duq_fm.py
rginpan opened this issue · 4 comments
File "/deterministic-uncertainty-quantification-master/utils/evaluate_ood.py", line 44, in loop_over_dataloader
kernel_distance, pred = output.max(1)
AttributeError: 'tuple' object has no attribute 'max'
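This error suggests the model's `forward` returns a tuple, so the tuple must be unpacked before calling `.max(1)` on the tensor of interest. A minimal sketch with a hypothetical stand-in model (the class and tuple layout are assumptions for illustration, not the repo's actual model):

```python
import torch

class ToyDUQ(torch.nn.Module):
    """Hypothetical model whose forward returns a tuple, as in the
    traceback above; calling .max(1) on the raw return value raises
    AttributeError: 'tuple' object has no attribute 'max'."""
    def forward(self, x):
        kernel_out = torch.softmax(x, dim=1)
        return x, kernel_out  # (embedding, kernel distances) — assumed layout

model = ToyDUQ()
batch = torch.randn(4, 10)

# Unpack the tuple first, then take the per-row maximum.
_, output = model(batch)
kernel_distance, pred = output.max(1)
```

Here `kernel_distance` holds the largest kernel value per sample and `pred` the corresponding class index.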
Thanks for the commit, but there is a new error:
File "/deterministic-uncertainty-quantification-master/train_duq_fm.py", line 75, in calc_gradient_penalty
gradients = gradients.flatten(start_dim=1)
AttributeError: 'NoneType' object has no attribute 'flatten'
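A `None` result from `torch.autograd.grad` usually means the input tensor was not tracking gradients when the penalty was computed. A hedged sketch of a gradient-penalty function that avoids this (the function name mirrors the traceback, but the body is an illustrative assumption, not the repo's exact code):

```python
import torch

def calc_gradient_penalty(model, x):
    # The input must require grad, otherwise torch.autograd.grad
    # returns None for it — the likely source of the error above.
    x = x.clone().requires_grad_(True)
    y = model(x)

    gradients = torch.autograd.grad(
        outputs=y,
        inputs=x,
        grad_outputs=torch.ones_like(y),
        create_graph=True,  # keep the graph so the penalty is differentiable
    )[0]

    # The line from the traceback now receives a Tensor, not None.
    gradients = gradients.flatten(start_dim=1)
    grad_norm = gradients.norm(2, dim=1)

    # Two-sided penalty pushing the gradient norm toward 1.
    return ((grad_norm - 1) ** 2).mean()
```

With this setup the penalty can be added to the training loss and backpropagated through `create_graph=True`.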
Sorry, the above-mentioned problem happened only when I trained on the CPU. For duq_cifar I also trained on the CPU, and there was no problem.
Update:
When I use the GPU, training completes but then crashes with: Segmentation fault (core dumped)
(I only trained with sigma={0.05}) :(
NEW MODEL
Validation Results - Epoch: 5 Acc: 0.8962 BCE: 0.06 GP: 0.544238 AUROC MNIST: 0.92 AUROC NotMNIST: 0.95
Sigma: 0.05
Validation Results - Epoch: 10 Acc: 0.9170 BCE: 0.05 GP: 0.532214 AUROC MNIST: 0.94 AUROC NotMNIST: 0.95
Sigma: 0.05
Validation Results - Epoch: 15 Acc: 0.9232 BCE: 0.04 GP: 0.498158 AUROC MNIST: 0.94 AUROC NotMNIST: 0.96
Sigma: 0.05
Validation Results - Epoch: 20 Acc: 0.9234 BCE: 0.04 GP: 0.489612 AUROC MNIST: 0.93 AUROC NotMNIST: 0.96
Sigma: 0.05
Validation Results - Epoch: 25 Acc: 0.9240 BCE: 0.04 GP: 0.499154 AUROC MNIST: 0.93 AUROC NotMNIST: 0.96
Sigma: 0.05
Validation Results - Epoch: 30 Acc: 0.9234 BCE: 0.04 GP: 0.505931 AUROC MNIST: 0.93 AUROC NotMNIST: 0.95
Sigma: 0.05
[(0.9234, 0.0), (0.9218, 0.0), (0.930930225, 0.0), (0.9545501361888487, 0.0)]
{'lgp0.0_ls0.05': [(0.9234, 0.0), (0.9218, 0.0), (0.930930225, 0.0), (0.9545501361888487, 0.0)]}
Segmentation fault (core dumped)
- Yes, the code only supports running with a GPU.
- I cannot reproduce your second problem. Perhaps you are running out of memory? Try a batch size smaller than 500 in utils/evaluate_ood.py.
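For reference, lowering the evaluation batch size only requires changing the `batch_size` argument where the loader is built. A minimal sketch with a hypothetical dataset standing in for the OOD evaluation data (the data and the value 128 are assumptions, not values from the repo):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset standing in for the OOD evaluation data.
ds = TensorDataset(torch.randn(1000, 10), torch.randint(0, 10, (1000,)))

# If a batch size of 500 exhausts GPU memory, a smaller value such
# as 128 trades evaluation speed for a lower peak memory footprint.
loader = DataLoader(ds, batch_size=128, shuffle=False)
```

Only peak memory changes; the computed metrics are unaffected since evaluation accumulates over all batches.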