zjunlp/Mol-Instructions

Cannot find config.json

Closed this issue · 17 comments

Hello, I'm not sure which code you were running when you encountered the error. It might be due to an incorrect path. Based on the screenshot, it appears to involve the config.json file.

Do you have any further questions?

Ah, I've got a new question. Where can I get the LLaMA foundation model? When I run the code and it loads LLaMA, it throws this error: `ValueError: Converting into mixed 8-bit weights from tf/flax weights is currently not supported, please make sure the weights are in PyTorch format.` I got my llama-2-7b-hf from ModelScope; does it fit your model?
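
For reference, one way to see what weight files the downloaded checkpoint directory actually contains is a quick sketch like the one below; 8-bit loading needs PyTorch-format weights (`*.bin` or `*.safetensors`), and TF/Flax files are what trigger the ValueError above. The directory path is just an assumption.

```python
# Hedged sketch: inspect a local checkpoint directory for PyTorch vs TF/Flax weights.
from pathlib import Path

model_dir = Path("path/to/llama-2-7b-hf")  # hypothetical ModelScope download directory

pytorch_files = sorted(model_dir.glob("*.bin")) + sorted(model_dir.glob("*.safetensors"))
tf_flax_files = sorted(model_dir.glob("tf_model.h5")) + sorted(model_dir.glob("flax_model.msgpack"))

print("PyTorch weight files:", [p.name for p in pytorch_files])
print("TF/Flax weight files:", [p.name for p in tf_flax_files])
print("config.json present:", (model_dir / "config.json").exists())
```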

If you don't mind, could you add me on WeChat and give me some recommendations? +86 13021115937

Hi,

If you're looking to perform tasks related to proteins, you should currently use llama instead of llama2 for protein-related tasks. For your reference, the weights for llama can be found here: https://huggingface.co/baffo32/decapoda-research-llama-7B-hf.
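
In case it helps, a minimal sketch of loading those weights in 8-bit with transformers might look like this (assuming `bitsandbytes` and `accelerate` are installed; the exact arguments in the repository's own scripts may differ):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "baffo32/decapoda-research-llama-7B-hf"

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,        # requires bitsandbytes and PyTorch-format weights
    torch_dtype=torch.float16,
    device_map="auto",
)
```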

Actually, I am going to run the molecule model (see screenshot), so I think the problem might come from Llama 2?

Have you met a problem like this? (see the attached screenshot)

Did you download the model following the process we provided? I noticed that the model you downloaded is incorrect; we are using Llama-2-7b-chat.

OK, I will try it.

I still cannot execute it. Is this model OK: https://www.modelscope.cn/models/shakechen/Llama-2-7b-chat/files, or must I load the model from Hugging Face?

We haven't tried downloading models from ModelScope; we always download from Hugging Face.
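
For example, a hedged sketch of pulling the model straight from Hugging Face with `huggingface_hub` (the Llama-2 repos are gated, so an access token is assumed; the exact repo id you need may differ):

```python
from huggingface_hub import snapshot_download

# "meta-llama/Llama-2-7b-chat-hf" is the transformers-format (HF-converted) repo;
# the token below is a placeholder for your own Hugging Face access token.
local_dir = snapshot_download(
    repo_id="meta-llama/Llama-2-7b-chat-hf",
    token="hf_xxx",
)
print(local_dir)  # should contain config.json, tokenizer files, and the weights
```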

OK, thank you.

I notice that Llama-2-7b-chat on Hugging Face also doesn't have a config.json (screenshots attached).

Do you guys use this model? (see screenshot)

It should be this one.
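
For what it's worth, the missing config.json is likely because the plain `meta-llama/Llama-2-7b-chat` repo ships Meta's native checkpoint, while the `-hf` conversion includes the transformers files. A quick sketch to verify which repo has it (both repos are gated, so an access token is assumed):

```python
from huggingface_hub import list_repo_files

token = "hf_xxx"  # placeholder Hugging Face access token
for repo_id in ("meta-llama/Llama-2-7b-chat", "meta-llama/Llama-2-7b-chat-hf"):
    files = list_repo_files(repo_id, token=token)
    print(repo_id, "->", "config.json" in files)
```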

Hi, do you have any other questions?

I ran it successfully. Thank you for those details and the amazing work.