lizhaoliu-Lec/CPCM

Cannot reproduce the results.

Closed this issue · 16 comments

Thank you for your code. I found that two_stream_mask_self_loss in default.yaml is set to "False", which means the mask loss between feats_aux and feats_masked_aux (the Z2, Zm loss in your paper) is not computed. I also found that two_stream_mask_mode is set to "random", while it should be "grid". I corrected both settings and tested under the 0.1% setting on S3DIS, but the mIoU is only 63.5, not 66.3.
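(For reference, the two corrections described above can also be passed as command-line overrides instead of editing default.yaml, assuming the dotted TRAINER.* keys used by the exp scripts later in this thread map to the same entries in config/default.yaml; this is only a sketch, with the remaining overrides omitted.)

# Sketch: override the two settings mentioned above on the command line.
CUDA_VISIBLE_DEVICES=0 python launch.py ddp_train.py --config config/default.yaml \
TRAINER.two_stream_mask_self_loss True \
TRAINER.two_stream_mask_mode grid
# ...plus the remaining overrides from the full 0.1% S3DIS script shown later in the thread.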

Additionally, I ran another experiment under the 0.01% setting on S3DIS, and the mIoU was 50.9 within 180 epochs.

Sorry for the inconvenience. I have fixed the incorrect exp script for S3DIS 0.01%. Please rerun the experiment to update your results. Let me know if you need further assistance to reproduce our results.

Moreover, I have also fixed the incorrect exp script for S3DIS 0.1%. Let me know if you need further assistance.

I have updated the exp scripts for both ScanNet and S3DIS. Let me know if you need further assistance.

Thanks for your replies, Lizhao. I'm now conducting experiments to try to reproduce your work.

I used the 0.1% S3DIS exp script, and the mIoU was 58.0.
[screenshot: 2023-08-14 15:36:31]

Could you provide your running script? Also, I have updated config/default.yaml.

Here is the running script:
CUDA_VISIBLE_DEVICES=0 python launch.py ddp_train.py --config config/default.yaml \
GENERAL.exp_name 1e-3_percentage_consis_weight2_maskGrid075GridSize4_weight5 \
TRAINER.name TwoStreamTrainer \
MODEL.out_channels 13 \
DATA.name StanfordDataLoader \
DATA.dataset StanfordArea5Dataset \
DATA.voxel_size 0.05 \
DATA.batch_size 2 \
DATA.train_limit_numpoints 1000000 \
OPTIMIZER.lr 0.01 \
OPTIMIZER.weight_decay 0.001 \
SCHEDULER.name PolyLR \
TRAINER.epochs 180 \
EVALUATOR.iou_num_class 13 \
DATA.stanford3d_path /dataset_share_ssd/S3DIS_processed \
DATA.stanford3d_sampled_inds /dataset_share_ssd/S3DIS_processed/points/percentage0.001evenc \
DATA.sparse_label False \
DATA.two_stream True \
MODEL.two_stream_model_apply True \
TRAINER.two_stream_feats_key semantic_scores \
TRAINER.two_stream_loss_mode js_divergence_v2 \
TRAINER.two_stream_seg_both True \
TRAINER.two_stream_loss_weight 2.0 \
AUGMENTATION.use_color_jitter False \
TRAINER.two_stream_mask_grid_size 4 \
TRAINER.two_stream_loss_mask_mode js_divergence_v2 \
TRAINER.two_stream_mask_ratio 0.75 \
TRAINER.two_stream_mask_mode grid \
TRAINER.two_stream_mask_extra_stream True \
TRAINER.two_stream_mask_feats_key semantic_scores \
TRAINER.two_stream_mask_corr_loss True \
TRAINER.two_stream_mask_self_loss True \
TRAINER.two_stream_loss_mask_weight 5. \
TRAINER.two_stream_mask_loss_threshold -1.0 \
TRAINER.empty_cache_every 1

I did not notice the change to config/default.yaml; I will try it soon. Thanks for your help.
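(A quick way to see what changed upstream in config/default.yaml is plain git; the commands below assume the default branch is named main and may need master instead.)

# Fetch the latest upstream commits and show the change history of the default config.
git fetch origin
git log -p origin/main -- config/default.yaml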

What about the results of the "# 0.1% baseline" or the "# 0.1% consistency baseline, consis weight 1" runs? The results of these two baselines would help me understand what is going on in your experiment.
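(For readers of this thread: below is a rough, hypothetical sketch of what the consistency-baseline command might look like, obtained by taking the full 0.1% script above, setting the consistency weight to 1.0, and turning off the masked branch; the exp name is made up, and the actual baseline script in the repo may differ.)

# Hypothetical consistency-baseline sketch: only the overrides that differ from the
# full 0.1% script above are shown; for a real run, add the DATA/OPTIMIZER/SCHEDULER
# overrides (dataset paths, lr, epochs, etc.) from that script as well.
CUDA_VISIBLE_DEVICES=0 python launch.py ddp_train.py --config config/default.yaml \
GENERAL.exp_name 1e-3_percentage_consis_weight1_baseline \
TRAINER.name TwoStreamTrainer \
DATA.two_stream True \
MODEL.two_stream_model_apply True \
TRAINER.two_stream_feats_key semantic_scores \
TRAINER.two_stream_loss_mode js_divergence_v2 \
TRAINER.two_stream_seg_both True \
TRAINER.two_stream_loss_weight 1.0 \
TRAINER.two_stream_mask_extra_stream False \
TRAINER.two_stream_mask_self_loss False \
TRAINER.two_stream_mask_corr_loss False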

Things turned out fine in the end. I finally got an mIoU of 65.6 under the 0.1% S3DIS setting. Thanks for your help.

Hello, I am currently running into the same problem: my 0.1% S3DIS mIoU is only 55. Could you tell me your solution?

I remember that their scripts differ from their paper in some places. You need to carefully check every hyper-parameter to align with their settings.

Thank you for your reply. I use MinkowskiEngine 0.5.4; I don't know whether it has any impact. I will check these parameters carefully.
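(For what it's worth, the installed MinkowskiEngine version can be confirmed with a one-liner; MinkowskiEngine exposes a standard __version__ attribute.)

# Print the MinkowskiEngine version currently installed in the environment.
python -c "import MinkowskiEngine as ME; print(ME.__version__)"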

Sorry to bother you again. My script settings are the same as yours. Could you share your config/default.yaml file? I suspect the problem lies there.

CUDA_VISIBLE_DEVICES=0 python launch.py ddp_train.py --config config/default.yaml \
GENERAL.exp_name 1e-3_percentage_consis_weight2_maskGrid075GridSize4_weight5 \
TRAINER.name TwoStreamTrainer \
MODEL.out_channels 13 \
DATA.name StanfordDataLoader \
DATA.dataset StanfordArea5Dataset \
DATA.voxel_size 0.05 \
DATA.batch_size 2 \
DATA.train_limit_numpoints 1000000 \
OPTIMIZER.lr 0.01 \
OPTIMIZER.weight_decay 0.001 \
SCHEDULER.name PolyLR \
TRAINER.epochs 180 \
EVALUATOR.iou_num_class 13 \
DATA.stanford3d_path /dataset_share_ssd/S3DIS_processed \
DATA.stanford3d_sampled_inds /dataset_share_ssd/S3DIS_processed/points/percentage0.001evenc \
DATA.sparse_label False \
DATA.two_stream True \
MODEL.two_stream_model_apply True \
TRAINER.two_stream_feats_key semantic_scores \
TRAINER.two_stream_loss_mode js_divergence_v2 \
TRAINER.two_stream_seg_both True \
TRAINER.two_stream_loss_weight 2.0 \
AUGMENTATION.use_color_jitter False \
TRAINER.two_stream_mask_grid_size 4 \
TRAINER.two_stream_loss_mask_mode js_divergence_v2 \
TRAINER.two_stream_mask_ratio 0.75 \
TRAINER.two_stream_mask_mode grid \
TRAINER.two_stream_mask_extra_stream True \
TRAINER.two_stream_mask_feats_key semantic_scores \
TRAINER.two_stream_mask_corr_loss True \
TRAINER.two_stream_mask_self_loss True \
TRAINER.two_stream_loss_mask_weight 5. \
TRAINER.two_stream_mask_loss_threshold -1.0 \
TRAINER.empty_cache_every 1

Here is my script. I ran this repo last year, and I've forgotten whether I made any changes to their code. You can give my script a try.