seed is not working
shekhovt commented
Hi,
I am failing to get reproducible behavior by fixing the seed parameter. Here is what I tried:
import numpy as np
import torch
import ffcv.loader
from ffcv.loader import OrderOption

# train_set, train_pipelines() and o.data_seed come from my setup
num_workers = 0
batch_size = 10
batches_ahead = 1

# first run
torch.manual_seed(0)
np.random.seed(0)
train_loader = ffcv.loader.Loader(train_set, batch_size=batch_size, os_cache=True,
                                  batches_ahead=batches_ahead, num_workers=num_workers,
                                  order=OrderOption.QUASI_RANDOM, seed=o.data_seed,
                                  pipelines=train_pipelines(), drop_last=True)
(data, target) = next(iter(train_loader))
print(data.mean())
print(target)
del train_loader

# second run, same seeds
torch.manual_seed(0)
np.random.seed(0)
train_loader = ffcv.loader.Loader(train_set, batch_size=batch_size, os_cache=True,
                                  batches_ahead=batches_ahead, num_workers=num_workers,
                                  order=OrderOption.QUASI_RANDOM, seed=o.data_seed,
                                  pipelines=train_pipelines(), drop_last=True)
(data, target) = next(iter(train_loader))
print(target)
print(data.mean())
It prints different results:
tensor(0.0761)
tensor(-0.0380)
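For context, train_pipelines() builds fairly standard FFCV pipelines. A simplified sketch of what it returns (normalization and device transfer omitted; the 'image'/'label' field names and the crop size are just illustrative):

from ffcv.fields.decoders import IntDecoder, RandomResizedCropRGBImageDecoder
from ffcv.transforms import Squeeze, ToTensor, ToTorchImage

def train_pipelines():
    # image pipeline: the random decoder below is where the non-determinism seems to come from
    image_pipeline = [
        RandomResizedCropRGBImageDecoder((224, 224)),
        ToTensor(),
        ToTorchImage(),
    ]
    # label pipeline: plain integer decode
    label_pipeline = [IntDecoder(), ToTensor(), Squeeze()]
    # keys must match the field names used when writing the .beton file
    return {'image': image_pipeline, 'label': label_pipeline}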
I expect both runs to print the same values. As far as I can tell, the problem is with RandomResizedCropRGBImageDecoder in the pipeline. The original images selected for the batch appear to be the same and in the same order on both runs, but the decoder seems to be applied in a multiprocessing setup, possibly not in the same order or not respecting the seed, so the random crops differ between runs. If I instead use CenterCropRGBImageDecoder together with RandomHorizontalFlip, the behavior is reproducible, as sketched below.
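For comparison, the variant that does behave reproducibly is the same image pipeline with only the decoder swapped out; a minimal sketch (the ratio value is just illustrative):

from ffcv.fields.decoders import CenterCropRGBImageDecoder
from ffcv.transforms import RandomHorizontalFlip, ToTensor, ToTorchImage

# deterministic crop; with the fixed seeds above, both loader runs then print the same values
image_pipeline = [
    CenterCropRGBImageDecoder((224, 224), ratio=224 / 256),
    RandomHorizontalFlip(),
    ToTensor(),
    ToTorchImage(),
]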
Would it be possible to address this?