Hi, when I run `preprocess_data_supervised.py` with the llama-7b-hf tokenizer, it prints two warnings: "Using pad_token, but it is not set yet" and "Truncation was not explicitly activated but `max_length` is provided a specific value, ...".
Is this OK, or do I need to set a pad token explicitly?
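For what it's worth, the first warning appears because LLaMA tokenizers ship without a pad token, and the usual workaround is to reuse the EOS token as the pad token (padded positions are masked out during supervised fine-tuning anyway). The second warning is benign and goes away if you pass `truncation=True` together with `max_length` when calling the tokenizer. A minimal sketch of the pad-token fix; the `ensure_pad_token` helper and the stand-in object are my own illustration, not part of the script:

```python
from types import SimpleNamespace

def ensure_pad_token(tokenizer):
    """Give the tokenizer a pad token if it lacks one.

    Silences the "Using pad_token, but it is not set yet" warning by
    reusing the EOS token as the pad token when none is configured.
    """
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer

# Stand-in object so this sketch runs without downloading a checkpoint.
# With transformers installed you would instead pass the real tokenizer,
# e.g. tokenizer = AutoTokenizer.from_pretrained(<your llama-7b-hf path>)
tok = SimpleNamespace(pad_token=None, eos_token="</s>")
ensure_pad_token(tok)
print(tok.pad_token)  # → </s>
```

If the tokenizer already has a pad token, the helper leaves it untouched.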