BUG/DOC: Tutorial eval and predict configs missing sections with model names
harshidapancholi opened this issue · 8 comments
Hi, I'm trying to run the vak tutorial and am running into an error during the evaluation step. I've double-checked that I named all the directories as indicated, and here is the error I get.
(vak-env) C:\Users\RobertsLab\Desktop\vak-demo\gy6or6>vak eval gy6or6_eval.toml
2024-01-18 16:18:42,122 - vak.cli.eval - INFO - vak version: 1.0.0a3
2024-01-18 16:18:42,122 - vak.cli.eval - INFO - Logging results to C:\Users\RobertsLab\Desktop\vak-demo\gy6or6\results\eval
Traceback (most recent call last):
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\model.py", line 44, in config_from_toml_dict
model_config = toml_dict[model_name]
KeyError: 'TweetyNet'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\Scripts\vak-script.py", line 9, in
sys.exit(main())
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak_main_.py", line 48, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\cli.py", line 54, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\cli.py", line 4, in eval
eval(toml_path=toml_path)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\eval.py", line 41, in eval
model_config = config.model.config_from_toml_path(toml_path, model_name)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\model.py", line 90, in config_from_toml_path
return config_from_toml_dict(config_dict, model_name)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\model.py", line 46, in config_from_toml_dict
raise ValueError(
ValueError: A config section specifies the model name 'TweetyNet', but there is no section named 'TweetyNet' in the config.
Any help is appreciated, thank you!
Hey @harshidapancholi sorry you're running into this issue.
This is our fault 😳
There should be a table in the file with the name "TweetyNet", like the error message says.
Something like this:
[TweetyNet.optimizer]
lr = 0.001
Can you please try with the attached config and tell me if it works?
(After you make the changes so the paths point to the right place on your system)
gy6or6_eval.zip
My fault, thank you for catching this!
I will fix in the tutorial (or you could if you would like, I'm happy to walk you through how you'd do that).
I think it might also work if you literally just wrote
[TweetyNet.optimizer]
(with no key-value pairs underneath the table name)
It's a limitation of the current config file format that we have to add a "dummy" table for the network even if we don't change any options. I'll make a note that we need to fix this.
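For anyone else hitting this, here's a rough sketch of how the relevant parts of the eval config could look once the table is added. Treat the option names and paths as placeholders; the exact keys come from the config generated for your system, and the attached gy6or6_eval.zip is the authoritative version:

# section that names the model (checkpoint/output path options also go here)
[EVAL]
model = "TweetyNet"    # assumption: the option name used by this vak version

# the table the error message is asking for; as noted above,
# a bare [TweetyNet.optimizer] header with no key-value pairs may also work
[TweetyNet.optimizer]
lr = 0.001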
A different error:
(vak-env) C:\Users\RobertsLab\Desktop\vak-demo\gy6or6>vak eval gy6or6_eval.toml
Traceback (most recent call last):
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\parse.py", line 169, in _load_toml_from_path
config_toml = toml.load(fp)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\toml\decoder.py", line 156, in load
return loads(f.read(), _dict, decoder)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\toml\decoder.py", line 213, in loads
raise TomlDecodeError("Key name found without value."
toml.decoder.TomlDecodeError: Key name found without value. Reached end of line. (line 77 column 2 char 3803)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\Scripts\vak-script.py", line 9, in
sys.exit(main())
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak_main_.py", line 48, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\cli.py", line 54, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\cli.py", line 4, in eval
eval(toml_path=toml_path)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\cli\eval.py", line 25, in eval
cfg = config.parse.from_toml_path(toml_path)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\parse.py", line 201, in from_toml_path
config_toml = _load_toml_from_path(toml_path)
File "C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\vak\config\parse.py", line 171, in _load_toml_from_path
raise Exception(
Exception: Error when parsing .toml config file: gy6or6_eval.toml
Oh whoops, that's due to a typo: I accidentally typed an "s" at the end of the file 🤦
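For anyone else who hits that TomlDecodeError: TOML requires every key to have a value, so a stray bare key at the end of the file, for example a leftover line like

s

makes the parser stop with "Key name found without value. Reached end of line." Deleting the stray character fixes it.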
Try this one, please
gy6or6_eval.zip
I deleted the s, and it's running right now. It did throw up this message, not sure if I should just ignore it:
(vak-env) C:\Users\RobertsLab\Desktop\vak-demo\gy6or6>vak eval gy6or6_eval.toml
2024-01-18 17:32:28,133 - vak.cli.eval - INFO - vak version: 1.0.0a3
2024-01-18 17:32:28,133 - vak.cli.eval - INFO - Logging results to C:\Users\RobertsLab\Desktop\vak-demo\gy6or6\results\eval
2024-01-18 17:32:28,133 - vak.eval.frame_classification - INFO - Duration of a frame in dataset, in seconds: 0.002
2024-01-18 17:32:28,133 - vak.eval.frame_classification - INFO - loading spect scaler from path: C:\Users\RobertsLab\Desktop\vak-demo\gy6or6\results\train\results_240118_165912\StandardizeSpect
2024-01-18 17:32:28,149 - vak.eval.frame_classification - INFO - loading labelmap from path: C:\Users\RobertsLab\Desktop\vak-demo\gy6or6\results\train\results_240118_165912\labelmap.json
C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\torch\utils\data\dataloader.py:557: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8 (cpuset
is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
2024-01-18 17:32:28,212 - vak.eval.frame_classification - INFO - running evaluation for model: TweetyNet
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: C:\Users\RobertsLab\Desktop\vak-demo\gy6or6\results\eval\lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
C:\Users\RobertsLab\anaconda3\envs\vak-env\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:436: Consider setting persistent_workers=True
in 'val_dataloader' to speed up the dataloader worker initialization.
Great!
I haven't seen that warning before.
I don't think it's the end of the world--please let me know if you do see slowness/freezing.
There's not much of a downside to lowering that option, like setting it to num_workers=8
as suggested, since we usually run with smaller batch sizes anyway. (So we don't need to load a ton of things in parallel)
And if you use a larger window size (which I would recommend) then there will be a limit on your batch size.
Are you working with zebra finch data?
FWIW I helped a student from the Lois lab and found that window_size=2000 gave pretty good performance.
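If you do want to change those options, both live in the config file. A rough sketch, assuming the section and option names used by this vak version; double-check against the config generated for your system:

[EVAL]
num_workers = 8        # match the suggested max from the DataLoader warning
batch_size = 4         # placeholder; smaller batches pair with larger windows

# where window_size goes depends on the vak version; something along these lines
# (section name is an assumption, check your generated config)
[EVAL.transform_params]
window_size = 2000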
Please let me know how things go; I'm happy to hop on Zoom to troubleshoot or provide tech support.
We are about to add AVA too; it would be great to get your feedback on that.
@NickleDave same error as before with the predict file.
raise ValueError(
ValueError: A config section specifies the model name 'TweetyNet', but there is no section named 'TweetyNet' in the config.
Thank you @harshidapancholi for catching that.
That config also needs a [TweetyNet] table.
Guessing you already figured that out but replying just in case anybody else missed it.
Correct config attached: gy6or6_predict.zip
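For reference, the fix is the same shape as for the eval config: the model named in the [PREDICT] section also needs its own table. A sketch (same caveats as above; the attached zip is the authoritative version):

[PREDICT]
model = "TweetyNet"    # assumption: the option name used by this vak version
# ...checkpoint path, output options, etc. go here...

[TweetyNet.optimizer]
lr = 0.001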
I will double check that these work and then fix them in the tutorial.
This should be fixed now, thanks @harshidapancholi