How to load a custom-trained model and run inference?
Thank you for releasing the training code. I have trained a model and I am wondering how to load it at the inference stage.
Could you please give me some example scripts or advice?
The checkpoint structure:
exps/..../000100/
custom_checkpoint_0.pkl model.safetensors optimizer.bin random_states_0.pkl
Hi there,
Thanks for your interest!
Before running the inference script, you should first convert the training checkpoint to a huggingface-compatible model by running python scripts/convert_hf.py --config <YOUR_EXACT_TRAINING_CONFIG>. The converted checkpoint will be saved under exps/releases.
Please feel free to comment if you still have any trouble running inference.
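For concreteness, the round trip looks roughly like this (a sketch; the config path is a placeholder for your own training YAML, and the exact inference setup depends on your config):

    # Convert the training checkpoint into a huggingface-compatible model,
    # using the exact config file the model was trained with.
    python scripts/convert_hf.py --config <YOUR_EXACT_TRAINING_CONFIG>

    # The converted model is written under exps/releases; point your
    # inference config at that directory instead of a hub model name.
    ls exps/releases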
The proxy setup causes an error. Wondering how to solve it...
Traceback (most recent call last):
File "OpenLRM/scripts/convert_hf.py", line 80, in <module>
loaded_step = auto_load_model(cfg, hf_model)
File "OpenLRM/openlrm/utils/proxy.py", line 38, in wrapper
os.environ['HTTP_PROXY'] = HTTP_PROXY
File "/usr/local/lib/python3.10/os.py", line 685, in __setitem__
value = self.encodevalue(value)
File "/usr/local/lib/python3.10/os.py", line 757, in encode
raise TypeError("str expected, not %s" % type(value).__name__)
TypeError: str expected, not NoneType
Hi,
Please comment out this line as a workaround here.
I'll try to fix this problem later by detecting special environment variables and avoiding calling no_proxy by default.
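For reference, the guard could look something like the following minimal sketch (hypothetical, not the actual commit; it just illustrates the pattern of restoring only the proxy variables that were actually set):

    import os

    def no_proxy(func):
        # Hypothetical sketch: temporarily drop the proxy variables around
        # the wrapped call, then restore only the ones that held real
        # strings. Writing a None back into os.environ is what raised
        # "TypeError: str expected, not NoneType" in the traceback above.
        def wrapper(*args, **kwargs):
            saved = {key: os.environ.pop(key, None)
                     for key in ('HTTP_PROXY', 'HTTPS_PROXY')}
            try:
                return func(*args, **kwargs)
            finally:
                for key, value in saved.items():
                    if value is not None:  # skip vars that were never set
                        os.environ[key] = value
        return wrapper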
Hi,
Fixed in this commit here (OpenLRM/openlrm/utils/proxy.py, line 18 in c2260e0).
It should work now without manually commenting out no_proxy.
Where did you configure the dataset path? What was the folder structure of the dataset?