Can't make predictions following the example
xegulon opened this issue · 8 comments
Hi, thanks for the code!
I've tried to run prediction mode following this: https://github.com/lavis-nlp/spert#examples.
I have downloaded the spacy model, but I get the following error:
OSError: Can't load tokenizer for 'data/models/conll04'. Make sure that: ...
How can I solve it?
Did you fetch the preprocessed datasets and models first? There should be a conll04 model under data/models.
If not, try this:
bash ./scripts/fetch_datasets.sh
bash ./scripts/fetch_models.sh
It works for me. Did you navigate into the root SpERT folder (the one containing spert.py) and execute 'python ./spert.py predict --config configs/example_predict.conf'? Which transformers version are you using?
It works now. My mistake was that I ran the script from a directory other than the root folder. Thanks!
However, I can't make predictions from raw .txt files. I get a JSON error.
Yes, the code currently only supports JSON input files in one of the three formats shown in 'conll04_prediction_example.json'.
For example, you can convert the sentences of your .txt file into the following format (a list of sentences) and save it as a JSON file:
[ "In 1822, the 18th president of the United States, Ulysses S. Grant, was born in Point Pleasant, Ohio.", "Sentence 2", "Sentence 3", ...]