mistralai/mistral-finetune

Add model format flexibility of AutoModel.from_pretrained()

DavidFarago opened this issue · 1 comment

Since I cannot load models from the Hugging Face Hub (see #27), I am downloading models to a local directory instead. However, they are either in the format

```
adapter
added_tokens.json
config.json
generation_config.json
pytorch_model-00001-of-00003.bin
pytorch_model-00002-of-00003.bin
pytorch_model-00003-of-00003.bin
pytorch_model.bin.index.json
special_tokens_map.json
tokenizer.model
tokenizer_config.json
```

or in the format

```
config.json
model-00001-of-00006.safetensors
model-00002-of-00006.safetensors
model-00003-of-00006.safetensors
model-00004-of-00006.safetensors
model-00005-of-00006.safetensors
model-00006-of-00006.safetensors
model.safetensors.index.json
```
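
For what it's worth, both layouts ship an index JSON that maps each tensor name to the shard file containing it, so merging the shards back into one state dict is mechanical. A minimal sketch (the function name is mine, nothing here comes from mistral-finetune):

```python
import json
from pathlib import Path

import torch
from safetensors.torch import load_file


def load_sharded_state_dict(model_dir: str) -> dict:
    """Merge all weight shards in `model_dir` into a single state dict."""
    root = Path(model_dir)
    # Each layout ships an index JSON mapping tensor names to shard files.
    candidates = [
        ("model.safetensors.index.json", load_file),
        ("pytorch_model.bin.index.json",
         lambda p: torch.load(p, map_location="cpu")),
    ]
    for index_name, load_shard in candidates:
        index_path = root / index_name
        if not index_path.exists():
            continue
        weight_map = json.loads(index_path.read_text())["weight_map"]
        state_dict = {}
        for shard_name in sorted(set(weight_map.values())):
            state_dict.update(load_shard(root / shard_name))
        return state_dict
    raise FileNotFoundError(f"no shard index JSON found in {model_dir}")
```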

Could you either add the format flexibility of AutoModel.from_pretrained() to wrapped_model.py, or explain how I can store my Hugging Face models locally in a format that wrapped_model.py can digest?
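
For reference, the closest workaround I can think of is letting transformers itself normalize either layout into a single safetensors file. Sketch only, with placeholder paths; note that it does not rename tensor keys, so if wrapped_model.py expects Mistral's native (non-HF) key names, this output will presumably still not be digestible, which is exactly the gap this issue is about:

```python
from transformers import AutoModelForCausalLM

# "path/to/hf-model" and "path/to/consolidated" are placeholder paths.
model = AutoModelForCausalLM.from_pretrained("path/to/hf-model")
model.save_pretrained(
    "path/to/consolidated",
    safe_serialization=True,   # write safetensors rather than pickled .bin
    max_shard_size="100GB",    # large enough that a 7B model stays in one file
)
```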

The same applies to save_pretrained(): I did not see that functionality in the current code base, so I cannot hand trained weights off to other open-source packages.
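
To make the saving direction concrete, here is a bare-bones stand-in for the missing functionality. Assumptions: the trained model is a regular torch.nn.Module, and a weights-only export is acceptable; unlike save_pretrained(), this writes no config.json or tokenizer files:

```python
import torch
from safetensors.torch import save_file


def export_safetensors(model: torch.nn.Module, out_path: str) -> None:
    """Dump a model's weights to a single safetensors file.

    Only a stand-in for save_pretrained(): it writes weights alone,
    so it is not a complete Hugging Face export.
    """
    # safetensors requires contiguous CPU tensors; note that save_file
    # also rejects tied/shared tensors (e.g. tied embeddings), which
    # would need to be cloned first.
    state_dict = {k: v.detach().contiguous().cpu()
                  for k, v in model.state_dict().items()}
    save_file(state_dict, out_path)
```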