Confusezius/ICCV2019_MIC

Results are worse than noted in your paper

SikaStar opened this issue · 15 comments

Hi,

I ran 'Result_Runs.sh' to train on the CUB200 dataset, but the results are worse than those reported in your paper: NMI = 0.6116, F1 = 0.2391, R@1 = 0.5610, R@2 = 0.6750, R@4 = 0.7841.

That's weird... Let me check that and get back to you :). Could you tell me the hardware and setup you used?

Hi,
I ran your code on a Titan Xp. The setup is the same as in your 'Result_Runs.sh'. These are my results on the CUB200 dataset:
[attached: screenshot of CUB200 results]

The values you reported are the best test values, correct?

Ok I'm checking the runs again.

Could you send me the InfoPlot_Class.svg summary, so I know what the training looked like?

Also what PyTorch version are you using?

I think you forgot to attach it?

Hi, here is the InfoPlot_Class.svg for CUB200:

[attached: InfoPlot_Class.svg plots for CUB200]

Hey @SikaStar,

I reran everything, and this is the performance plot I currently get:

[attached: InfoPlot_Class performance plot]

This produces values similar to those reported in our paper. There are usually slight differences depending on the seed, since it influences the initial clustering quality.

Could you tell me the versions of faiss and CUDA you are using? Maybe that is causing the issue. Our results were generated with CUDA 8 and haven't been validated on CUDA 10.

Specifically, we used the PyTorch build py3.6_cuda8.0.61_cudnn7.1.2_2.
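
In case it helps, here is a minimal sketch for checking which CUDA builds both libraries are using (it only relies on standard torch/faiss attributes; faiss.get_num_gpus is only available in GPU builds of faiss):

```python
# Minimal sanity check of the PyTorch / faiss CUDA setup.
import torch
import faiss

print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("faiss GPUs visible:", faiss.get_num_gpus())  # requires faiss-gpu
```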

Hi,
I am using faiss-gpu 1.5.1 with CUDA 10, and its performance is amazing! I will test with CUDA 8 and find out what is causing the problem. Thanks for your reply!

Hi, I tried CUDA 8, but it doesn't seem to help; the results are still bad. What parameter settings did you use for the CUB200 dataset? I don't think such a large gap in results comes from the CUDA version.

Hey @SikaStar ,
I have now tried the latest versions of PyTorch (1.2) and faiss (1.6), both with CUDA 10 support, and get results similar to those in the paper.

Since you are using the parameters provided in Result_Runs.sh, there shouldn't be any significant difference. I will append mine below regardless, so you can cross-check.

The only thing I can imagine causing this is faiss, since all metrics are computed with this library (nearest-neighbour search, cluster computation). Did you install both faiss and PyTorch with CUDA 10 support?
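
For reference, the retrieval metrics all go through faiss nearest-neighbour search. A minimal sketch of how Recall@K can be computed this way (variable names are illustrative, not the exact ones in the repo):

```python
import numpy as np
import faiss

def recall_at_k(embeddings, labels, k_vals=(1, 2, 4, 8)):
    """Recall@K via faiss nearest-neighbour search (illustrative sketch)."""
    embeddings = np.ascontiguousarray(embeddings, dtype='float32')
    index = faiss.IndexFlatL2(embeddings.shape[1])
    index.add(embeddings)
    # Retrieve max(k)+1 neighbours; the closest match is the query itself.
    _, nn_idx = index.search(embeddings, max(k_vals) + 1)
    nn_labels = labels[nn_idx[:, 1:]]  # drop the self-match
    return {k: float((nn_labels[:, :k] == labels[:, None]).any(axis=1).mean())
            for k in k_vals}
```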

My setup (which is stored in Parameter_Info.txt):

dataset
	cub200

arch
	resnet50

not_pretrained
	False

k_vals
	[1, 2, 4, 8]

n_epochs
	100

kernels
	8

seed
	0

scheduler
	step

gamma
	0.3

decay
	0.0004

tau
	[55]

task_p
	[1.0, 0.8]

lr
	1e-05

bs
	112

cs_per_bs
	[4, 4]

embed_sizes
	[128, 128]

losses
	['marginloss', 'marginloss']

sampling
	['distance', 'distance']

proxy_lr
	[1e-05, 1e-05]

beta
	[1.2, 1.2]

beta_lr
	[0.0005, 0.0005]

nu
	[0, 0]

margin
	[0.2, 0.2]

adversarial
	['Class-Shared']

adv_weights
	[2500.0]

adv_dim
	512

shared_num_classes
	30

cluster_update_freq
	3

cluster_mode
	mean

random_cluster_pick_p
	0.2

gpu
	7

savename
	result_run_cub200_checkup

make_graph
	False

source_path
	<path_to_data>

save_path
	<path_to_save_folder>

tasks
	['Class', 'Shared']

device
	cuda

all_num_classes
	[100, 30]

samples_per_class
	4

mean
	[0.485, 0.456, 0.406]

std
	[0.229, 0.224, 0.225]

input_space
	RGB

input_range
	[0, 1]
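
Regarding the cluster parameters above (shared_num_classes, cluster_update_freq, cluster_mode): the pseudo-labels for the auxiliary task are refreshed by re-clustering the embeddings every few epochs, which is also why the seed affects the initial clustering quality. A rough sketch of such an update with faiss k-means (function and variable names are illustrative, not the repository's actual code):

```python
import numpy as np
import faiss

def update_shared_pseudo_labels(embeddings, num_clusters=30, seed=0):
    """Re-cluster embeddings into pseudo-classes for the auxiliary task
    (illustrative sketch using faiss k-means)."""
    embeddings = np.ascontiguousarray(embeddings, dtype='float32')
    kmeans = faiss.Kmeans(embeddings.shape[1], num_clusters, niter=20, seed=seed)
    kmeans.train(embeddings)
    # Assign each sample to its nearest centroid -> pseudo-label.
    _, assignments = kmeans.index.search(embeddings, 1)
    return assignments.ravel()
```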

Hi,
It is my fault, and I am sorry. I tested your code on the CUB200-2010 dataset instead of the CUB200-2011 dataset, which is the setting used in your paper. The code is correct. Thanks for your quick reply!