nitishgupta/nmn-drop

gc_params problem

wenhuchen opened this issue · 1 comment

Hi, I'm using the allennlp version you recommended. When I ran your code, I hit the following error:

Traceback (most recent call last):
  File "/home/hustchenwenhu/anaconda3/envs/pytorch1.4/bin/allennlp", line 11, in <module>
    load_entry_point('allennlp', 'console_scripts', 'allennlp')()
  File "/data/wenhu/allennlp/allennlp/run.py", line 18, in run
    main(prog="allennlp")
  File "/data/wenhu/allennlp/allennlp/commands/__init__.py", line 102, in main
    args.func(args)
  File "/data/wenhu/allennlp/allennlp/commands/train.py", line 124, in train_model_from_args
    args.cache_prefix)
  File "/data/wenhu/allennlp/allennlp/commands/train.py", line 168, in train_model_from_file
    cache_directory, cache_prefix)
  File "/data/wenhu/allennlp/allennlp/commands/train.py", line 234, in train_model
    validation_iterator=pieces.validation_iterator)
  File "/data/wenhu/allennlp/allennlp/training/trainer.py", line 726, in from_params
    params.assert_empty(cls.__name__)
  File "/data/wenhu/allennlp/allennlp/common/params.py", line 433, in assert_empty
    raise ConfigurationError("Extra parameters passed to {}: {}".format(class_name, self.params))
allennlp.common.checks.ConfigurationError: "Extra parameters passed to Trainer: {'gc_freq': 500}"

Any idea how to fix this problem?

Are you trying to train the model?

If so, you should use this fork of allennlp. I made minor changes so that garbage collection runs periodically to free memory during training.
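The idea behind the fork's change is simple: every gc_freq training batches, call Python's gc.collect() to reclaim cyclic garbage and cap memory growth. A minimal sketch of that pattern (the function and parameter names here are illustrative, not the fork's actual code):

```python
import gc

def run_training_loop(batches, gc_freq=500):
    """Illustrative training loop that collects garbage every `gc_freq` steps."""
    collections = 0
    for step, batch in enumerate(batches, start=1):
        # ... forward/backward pass and optimizer step would happen here ...
        if step % gc_freq == 0:
            gc.collect()  # reclaim cyclic garbage to keep memory bounded
            collections += 1
    return collections
```

With 1000 batches and gc_freq=500, collection runs twice (at steps 500 and 1000).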

https://github.com/nitishgupta/allennlp/tree/my-edits-0.9

You can install it locally by running pip install --editable . from the repository root.
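For context, the traceback ends in allennlp's Params.assert_empty: from_params pops the parameters it recognizes, and anything left over raises a ConfigurationError. A stock Trainer never pops gc_freq, so it fails exactly as shown. A simplified sketch of that pattern (not allennlp's actual implementation; build_trainer and the popped keys are illustrative):

```python
class ConfigurationError(Exception):
    """Stand-in for allennlp.common.checks.ConfigurationError."""
    pass

def build_trainer(params):
    # Pop the keys this version of the Trainer understands.
    num_epochs = params.pop("num_epochs", 20)
    # A stock Trainer never pops "gc_freq", so it survives to the check below.
    if params:  # assert_empty: any leftover key is an unrecognized parameter
        raise ConfigurationError(
            "Extra parameters passed to Trainer: {}".format(params))
    return {"num_epochs": num_epochs}
```

The fork avoids the error simply by popping gc_freq before this check runs.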