nlp-uoregon/trankit

Impossible to train on CPU

n2oblife opened this issue · 1 comment

Hi!
When I try to train a customized pipeline on CPU, the console returns an error. Here is the core of the code:

```python
from trankit import TPipeline

training_config = {
    'category': 'customized',
    'task': 'posdep',
    'save_dir': './save_dir',
    'gpu': False,
    'train_conllu_fpath': 'my-path/train.conllu',  # annotations file in CONLLU format for training
    'dev_conllu_fpath': 'my-path/dev.conllu'  # annotations file in CONLLU format for development
}

trainer = TPipeline(training_config)
trainer.train()
```

and here is the output:

```
  File "/.../trankit_build/trankit/models/classifiers.py", line 130, in forward
    diag = torch.eye(batch.head_idxs.size(-1) + 1, dtype=torch.bool).cuda().unsqueeze(0)
  File "/.../lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```
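
To be clear, the failure does not seem to depend on my data or config: the line shown above moves the mask to CUDA unconditionally, so the same pattern fails on any machine without a visible GPU. A minimal standalone snippet (just the pattern, not Trankit code) reproduces it:

```python
import torch

# Same pattern as line 130 of classifiers.py: the boolean identity mask is
# built on CPU and then moved to CUDA without checking that a GPU exists.
# On a machine with no visible CUDA device, the .cuda() call triggers CUDA
# lazy initialization and raises the RuntimeError shown above.
diag = torch.eye(8, dtype=torch.bool).cuda().unsqueeze(0)
```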

I cloned the repo and changed that line to enable training on CPU, but I wanted to flag it just in case. Even if training on CPU is not efficient, some users might not have the proper hardware, or, like me, might want to test training on CPU before launching the scripts on dedicated servers.
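
For reference, here is a minimal sketch of the kind of change I made locally: take the device from the input tensor instead of hard-coding `.cuda()`. The function and variable names below (`build_diag_mask`, `head_idxs`) are only illustrative, following the traceback above; the actual fix in Trankit may look different.

```python
import torch

def build_diag_mask(head_idxs: torch.Tensor) -> torch.Tensor:
    """Build the boolean identity mask on the same device as head_idxs,
    so the same code runs on CPU and on GPU."""
    n = head_idxs.size(-1) + 1
    # device=head_idxs.device keeps the mask on CPU for CPU training and on
    # the GPU for GPU training, without ever forcing CUDA initialization.
    return torch.eye(n, dtype=torch.bool, device=head_idxs.device).unsqueeze(0)
```

Taking the device from an input tensor is the usual way to keep a PyTorch module device-agnostic, and it also makes the `'gpu': False` setting behave as expected as long as nothing else in the forward pass forces CUDA.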

Hi @n2oblife,
Thanks for letting us know.
We have updated Trankit to resolve the issue.
For more information, you can refer to this commit:

2aef4a5

For now, you can apply the update by installing Trankit from source.
We will also make this change available in the next release.
Thanks