Using own training dataset
pwrstudio opened this issue · 0 comments
pwrstudio commented
Hi!
I feel like I am missing something basic here. I am trying to train this on the Omniglot dataset with this command:
floyd run \
--gpu \
--env pytorch-0.2 \
--data feiqinyu/datasets/omniglot/1:omni \
"python main.py --dataset omni --dataroot /omni --outf trained_models --cuda --ngpu 1 --niter 20"
I run into the following problem:
2018-11-22 01:12:25 PST Run Output:
2018-11-22 01:12:25 PST Starting services.
2018-11-22 01:12:25 PST supervisor: unrecognized service
2018-11-22 01:12:26 PST Namespace(batchSize=64, beta1=0.5, cuda=True, dataroot='/omni', dataset='omni', imageSize=64, lr=0.0002, manualSeed=None, ndf=64, netD='', netG='', ngf=64, ngpu=1, niter=20, nz=100, outf='trained_models', workers=2)
2018-11-22 01:12:26 PST Random Seed: 1734
2018-11-22 01:12:26 PST Traceback (most recent call last):
2018-11-22 01:12:26 PST   File "main.py", line 85, in <module>
2018-11-22 01:12:26 PST     assert dataset
2018-11-22 01:26 PST NameError: name 'dataset' is not defined
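From the traceback it looks like main.py only assigns dataset for the dataset names it already knows about, so passing --dataset omni leaves the variable undefined by the time assert dataset runs. For reference, this is a minimal sketch of the kind of extra branch I would imagine is needed (the 'omni' name, the ImageFolder loading, and the transform choices are my own assumptions, not something in the repo; opt is the argparse Namespace the script already builds):

import torchvision.datasets as dset
import torchvision.transforms as transforms

# Hypothetical extra branch in main.py's dataset selection for an 'omni'
# dataset name; without a matching branch, `dataset` is never assigned and
# `assert dataset` raises NameError instead of a normal assertion failure.
if opt.dataset == 'omni':
    dataset = dset.ImageFolder(
        root=opt.dataroot,  # /omni, as mounted by the floyd --data flag
        transform=transforms.Compose([
            transforms.Scale(opt.imageSize),      # Scale rather than Resize, since torchvision on pytorch-0.2 is old
            transforms.CenterCrop(opt.imageSize),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ]))

Alternatively, if main.py follows the standard PyTorch DCGAN example, Omniglot on disk is just folders of images, so running with --dataset folder --dataroot /omni might already hit the existing ImageFolder branch without any code change.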
Any pointers on where I'm going wrong would be much appreciated!