mboudiaf/RePRI-for-Few-Shot-Segmentation

The oracle setting

zhiheLu opened this issue · 3 comments

Hi, I tried to reproduce the oracle results. I set FB_param_type: oracle and then ran the evaluation, but the results show a large gap from the reported ones, and I am not sure whether I missed something. The configs and results are shown below; this is the evaluation for split 0 on the Pascal dataset.

Configs:

FB_param_noise: 0
FB_param_type: oracle
FB_param_update: [10]
adapt_iter: 50
arch: resnet
augmentations: ['hor_flip', 'vert_flip', 'resize']
batch_size: 12
batch_size_val: 50
bins: [1, 2, 3, 6]
bottleneck_dim: 512
ckpt_path: checkpoints/
ckpt_used: best
cls_lr: 0.025
cls_visdom_freq: 5
data_root: data/pascal/
debug: False
distance: cos
distributed: False
dropout: 0.1
episodic: True
epochs: 50
gamma: 0.1
gpus: [0]
image_size: 417
layers: 50
log_freq: 50
lr: 0.0025
lr_stepsize: 30
m_scale: False
main_optim: SGD
manual_seed: 2020
mean: [0.485, 0.456, 0.406]
milestones: [40, 70]
mixup: False
momentum: 0.9
n_runs: 1
nesterov: True
norm_feat: True
num_classes_tr: 16
num_classes_val: 5
padding_label: 255
port: 41565
pretrained: True
random_shot: False
rot_max: 10
rot_min: -10
save_models: True
save_oracle: False
scale_lr: 1.0
scale_max: 2.0
scale_min: 0.5
scheduler: cosine
shot: 1
smoothing: True
std: [0.229, 0.224, 0.225]
temperature: 20.0
test_name: default
test_num: 1000
test_split: default
train_list: lists/pascal/train.txt
train_name: pascal
train_split: 0
use_split_coco: False
val_list: lists/pascal/val.txt
visdom_port: -1
weight_decay: 0.0001
weights: [1.0, 'auto', 'auto']
workers: 2

Results:

loading weight 'model_ckpt/pascal/split=0/model/pspnet_resnet50/smoothing=True/mixup=False/best.pth'
=> loaded weight 'model_ckpt/pascal/split=0/model/pspnet_resnet50/smoothing=True/mixup=False/best.pth'
INFO: pascal -> pascal
INFO: 0 -> 0
Start Filtering classes
Removed classes = []
Kept classes = ['airplane', 'bicycle', 'bird', 'boat', 'bottle']
Processing data for [1, 2, 3, 4, 5]
==> Start testing
Test: [200/1000] mIoU 0.5896 Loss 0.2439 (0.2205)
Test: [400/1000] mIoU 0.5998 Loss 0.1735 (0.1959)
Test: [600/1000] mIoU 0.5908 Loss 0.2507 (0.2111)
Test: [800/1000] mIoU 0.5969 Loss 0.2431 (0.2067)
Test: [1000/1000] mIoU 0.5981 Loss 0.2006 (0.2036)
mIoU---Val result: mIoU 0.5981.
Class 1 : 0.7976
Class 4 : 0.6099
Class 2 : 0.1969
Class 5 : 0.5643
Class 3 : 0.8221
Test: [200/1000] mIoU 0.6224 Loss 0.2126 (0.1886)
Test: [400/1000] mIoU 0.6115 Loss 0.2031 (0.2012)
Test: [600/1000] mIoU 0.6164 Loss 0.2035 (0.2002)
Test: [800/1000] mIoU 0.6124 Loss 0.2239 (0.1972)
Test: [1000/1000] mIoU 0.6120 Loss 0.1726 (0.2009)
mIoU---Val result: mIoU 0.6120.
Class 1 : 0.7903
Class 3 : 0.8333
Class 2 : 0.1915
Class 5 : 0.6145
Class 4 : 0.6307
Test: [200/1000] mIoU 0.5921 Loss 0.1996 (0.2134)
Test: [400/1000] mIoU 0.5944 Loss 0.2304 (0.2220)
Test: [600/1000] mIoU 0.6082 Loss 0.2099 (0.2091)
Test: [800/1000] mIoU 0.6092 Loss 0.1999 (0.2107)
Test: [1000/1000] mIoU 0.6077 Loss 0.2175 (0.2081)
mIoU---Val result: mIoU 0.6077.
Class 5 : 0.5949
Class 3 : 0.8324
Class 4 : 0.6531
Class 1 : 0.7896
Class 2 : 0.1688
Test: [200/1000] mIoU 0.5977 Loss 0.2175 (0.2164)
Test: [400/1000] mIoU 0.5995 Loss 0.2068 (0.2146)
Test: [600/1000] mIoU 0.6099 Loss 0.1955 (0.2092)
Test: [800/1000] mIoU 0.6122 Loss 0.1857 (0.2072)
Test: [1000/1000] mIoU 0.6091 Loss 0.1860 (0.2027)
mIoU---Val result: mIoU 0.6091.
Class 5 : 0.5856
Class 4 : 0.6328
Class 2 : 0.2274
Class 3 : 0.8108
Class 1 : 0.7888
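For reference, the logged mIoU appears to be the plain mean of the per-class IoUs printed for each run; a quick standalone check against the first evaluation above (a sketch, not code from the repo):

from statistics import mean

# Per-class IoUs copied from the first evaluation above (classes 1-5).
class_iou = {1: 0.7976, 2: 0.1969, 3: 0.8221, 4: 0.6099, 5: 0.5643}
print(f"mIoU = {mean(class_iou.values()):.4f}")  # ~0.5982, matching the logged 0.5981 up to rounding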

Hi,

Yes, please refer to the oracle.sh script for the oracle results. The main differences are that inference is run for longer (300 iterations) and that the F/B parameter and loss-weighting terms are kept fixed throughout inference (sketched below). Hope this helps.

Malik
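In config terms, the reply amounts roughly to the following overrides relative to the setup posted above; only the two stated differences are grounded here, and the authoritative values are whatever oracle.sh passes:

# Hypothetical override dict mirroring the config keys listed in the issue;
# treat anything not stated in the reply as an assumption and defer to oracle.sh.
oracle_overrides = {
    "FB_param_type": "oracle",  # oracle foreground/background parameter, as in the posted config
    "adapt_iter": 300,          # inference run for longer (300 iterations)
    # The F/B parameter and the loss-weighting terms ("weights" above) are kept
    # fixed throughout inference; see oracle.sh for the exact settings.
}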

Thanks for your prompt reply. I just noticed the oracle.sh script; sorry about that.

No worries, I will update the README.md to make this clearer. Thanks for the questions.

Malik