alan-turing-institute/reginald

Add loading of Huggingface token

rchan26 opened this issue

When using some models like Llama 2 or Gemma, you need to request access on Hugging Face first and then either pass in a token argument or sign in via the huggingface-cli.

For us, it's probably easiest to require the user to set a HUGGINGFACE_TOKEN environment variable and then pass it through to wherever it's needed (e.g. when we load the Llama 2 tokenizer with transformers.AutoTokenizer).
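
A minimal sketch of what this could look like (the helper name `get_huggingface_token` and the model name are illustrative; `token` is the current `transformers` keyword for gated models, previously `use_auth_token`):

```python
import logging
import os

from transformers import AutoTokenizer


def get_huggingface_token() -> str | None:
    """Read the Hugging Face token from the HUGGINGFACE_TOKEN environment variable."""
    token = os.environ.get("HUGGINGFACE_TOKEN")
    if token is None:
        logging.warning(
            "HUGGINGFACE_TOKEN is not set; loading gated models (e.g. Llama 2, Gemma) will fail."
        )
    return token


# Pass the token through when loading a gated model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    token=get_huggingface_token(),
)
```

Keeping the lookup in one helper means any other place that needs the token (e.g. model loading) can reuse it rather than reading the environment variable directly.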