Model doesn't support task text-classification for the neuron backend
ldoff-tech42 opened this issue · 0 comments
We are adding a classification head in our fine-tuning pipeline as follows:
```python
# dataset import and preparation
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# bnb_config shown here as an illustrative 4-bit setup; our actual config may differ
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    use_cache=True,
    device_map="auto",
    use_flash_attention_2=True,
    quantization_config=bnb_config,
    problem_type="single_label_classification",
    num_labels=2,
)
# Training code
# Compile pipeline
```
We can fine-tune a model with a classification head and deploy/test it on a GPU. However, when we compile the fine-tuned model for Inferentia, the export fails with "text-classification not supported". The base LLaMA checkpoint fails the same way:
```
optimum-cli export neuron --model meta-llama/Llama-2-7b-hf --task text-classification --batch_size 1 --sequence_length 16 /home/export_neuron/meta-llama/Meta-Llama-3-8B
```

which fails with:

```
llama doesn't support task text-classification for the neuron backend. Supported tasks are: text-generation.
```
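For reference, the supported task list in the error can also be queried programmatically. This is a minimal sketch, assuming optimum is installed and that `TasksManager` (which backs the CLI's task check) exposes `get_supported_tasks_for_model_type`; the exact API may differ across optimum versions:

```python
# Sketch: ask optimum which tasks the neuron exporter supports for a model type.
# Assumption: optimum.exporters.tasks.TasksManager is available; API may vary by version.
def neuron_supported_tasks(model_type):
    """Return the task names the neuron exporter supports for `model_type`,
    or None if optimum (or its neuron backend) is unavailable."""
    try:
        from optimum.exporters.tasks import TasksManager
        tasks = TasksManager.get_supported_tasks_for_model_type(
            model_type, exporter="neuron"
        )
        return sorted(tasks)
    except Exception:  # optimum not installed or neuron backend not registered
        return None

print(neuron_supported_tasks("llama"))
```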
Does Inferentia support any LLMs with an added classification head?