torch/qtlua

"qlua" does not fullly work with torch.CmdLine()

hnanhtuan opened this issue · 1 comment

Hi,

When I execute the following script with th on the command line:

cmd = torch.CmdLine()
cmd:text()
cmd:text('CIFAR Training')
cmd:text()
cmd:text('Options:')
fname = arg and arg[0] or 'main.lua'  -- script filename; undefined in the original snippet
cmd:option('-save', fname:gsub('.lua',''), 'subdirectory to save/log experiments in')
cmd:option('-full', 0, 'use full dataset (50,000 samples)')
cmd:option('-visualize', false, 'visualize input data and weights during training')
cmd:option('-seed', 1, 'fixed input seed for repeatable experiments')
cmd:option('-optimization', 'ADAM', 'optimization method: SGD | ADAM')
cmd:option('-learningRate', 1e-3, 'learning rate at t=0')
cmd:option('-lrnRateDecay', 5e-7, 'learning rate decay')
cmd:option('-beta1', 0.9, 'beta 1')
cmd:option('-beta2', 0.99, 'beta 2')
cmd:option('-eps', 1e-5, 'for numerical stability (ADAM)')
cmd:option('-threads', 2, 'nb of threads to use')
cmd:option('-batchSize', 500, 'mini-batch size (1 = pure stochastic)')
cmd:option('-dtype', 'torch.CudaTensor')
cmd:text()
opt = cmd:parse(arg)
print(opt)

the output is:

{
  seed : 1
  beta1 : 0.92
  learningRate : 0.001
  full : 1
  batchSize : 500
  visualize : false
  eps : 1e-05
  threads : 2
  lrnRateDecay : 5e-07
  save : "main"
  beta2 : 0.99
  optimization : "ADAM"
  dtype : "torch.CudaTensor"
}

However, when using qlua to execute the same script, the output is:
table: 0x41b73b90

This is not a big issue, since the values in opt are parsed correctly. Still, it would be better if it could be fixed.

Thanks

Run the script with the env package preloaded:

qlua -lenv [your script]

Or, at the top of your script, add a line:
require 'env';

Pretty-printing is not part of Lua by default (but th provides it), so the env package should cover you: https://github.com/torch/env
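If pulling in the env package is not an option, a one-off pretty-printer for a flat options table is easy to sketch in plain Lua. This is a minimal sketch, and printOptions is a hypothetical helper (not part of torch or env); it works identically under th and qlua:

```lua
-- Minimal fallback: pretty-print a flat key/value table by hand.
-- printOptions is a hypothetical helper, not part of torch.
local function printOptions(t)
  print('{')
  for k, v in pairs(t) do
    -- quote strings so output resembles th's table printer
    local s = (type(v) == 'string') and ('"' .. v .. '"') or tostring(v)
    print('  ' .. k .. ' : ' .. s)
  end
  print('}')
end

-- example usage with a small table like the parsed opt
printOptions({ seed = 1, optimization = 'ADAM', visualize = false })
```

Note that pairs() iterates in an unspecified order, so the keys may print in a different order than th's built-in printer.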