Assisted generation doesn't seem to be working for Meta-Llama-3-8B
jivanph opened this issue · 2 comments
jivanph commented
System Info
- transformers version: 4.41.0.dev0
- Platform: Linux-5.10.214-180.855.amzn2int.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.1
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

prompt = "Alice and Bob"
checkpoint = "meta-llama/Meta-Llama-3-8B"
assistant_checkpoint = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
device = "cpu"

model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    assistant_model=assistant_model,
    num_assistant_tokens=3,
    num_assistant_tokens_schedule="constant",
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
The error I get is:
IndexError: index out of range in self
Expected behavior
I would expect a decoded sequence to be printed, but instead the call fails with the index-out-of-range error above.
ArthurZucker commented
In [3]: assistant_model.config
Out[3]:
LlamaConfig {
"_name_or_path": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 5632,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 22,
"num_key_value_heads": 4,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.41.0.dev0",
"use_cache": true,
"vocab_size": 32000
}
In [4]: model.config
Out[4]:
LlamaConfig {
"_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128001,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.41.0.dev0",
"use_cache": true,
"vocab_size": 128256
}
The vocab sizes are very different 😉 Llama 3 uses 128256 tokens while TinyLlama only has 32000, so the assistant's embedding lookup receives target token ids it has no rows for, which is exactly the IndexError you see. You need to resize the embedding or choose an assistant that was trained with the same tokenizer and vocab size!
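For reference, here is a minimal sketch of the compatibility check (the config-only loading pattern and the error message text are just illustrative, not something generate() requires):

from transformers import AutoConfig

# Sketch: verify that target and assistant share a vocabulary before generating.
# Only the configs are fetched, so neither set of weights is downloaded.
target_cfg = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B")
assistant_cfg = AutoConfig.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Assisted generation feeds the target's token ids straight into the assistant,
# so with 128256 vs. 32000 any id beyond 31999 hits a missing embedding row,
# which is the IndexError reported above.
if target_cfg.vocab_size != assistant_cfg.vocab_size:
    raise ValueError(
        f"Incompatible assistant: target vocab_size={target_cfg.vocab_size}, "
        f"assistant vocab_size={assistant_cfg.vocab_size}"
    )

Note that calling assistant_model.resize_token_embeddings(target_cfg.vocab_size) would make the shapes line up and avoid the crash, but the new embedding rows are untrained, so in practice choosing an assistant that shares the target's tokenizer is the better fix.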
Closing, as the root cause is simply an incompatible assistant model!
jivanph commented
Thanks, that makes a lot of sense!