Tokenizer for Triton Inference Server
geraldstanje opened this issue · 0 comments
geraldstanje commented
Hi,
Can this library be used with Triton Inference Server for the Hugging Face sentence-transformers model all-MiniLM-L6-v2 (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), e.g. as used by SetFit?
Here is what I currently do in Python:
```python
from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Tokenize a batch of sentences into padded, truncated PyTorch tensors
sentences = ["This is an example sentence.", "Each sentence is converted."]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
```
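For context, here is a minimal sketch (not part of my current code) of what `padding=True, truncation=True` produces: every sequence in the batch is padded to a common length and paired with an attention mask, which is the dense `input_ids`/`attention_mask` tensor pair a Triton model would expect as inputs. The `pad_batch` helper and the example token IDs below are illustrative assumptions, not a real API:

```python
import numpy as np

def pad_batch(token_id_lists, pad_id=0, max_len=None):
    # Mimic tokenizer(..., padding=True, truncation=True): pad each
    # sequence to the longest in the batch (or to max_len) and build
    # the matching attention mask as dense int64 tensors.
    if max_len is None:
        max_len = max(len(ids) for ids in token_id_lists)
    input_ids = np.full((len(token_id_lists), max_len), pad_id, dtype=np.int64)
    attention_mask = np.zeros_like(input_ids)
    for i, ids in enumerate(token_id_lists):
        ids = ids[:max_len]  # truncation
        input_ids[i, :len(ids)] = ids
        attention_mask[i, :len(ids)] = 1
    return input_ids, attention_mask

# Hypothetical token IDs for a two-sentence batch
input_ids, attention_mask = pad_batch([[101, 2023, 102], [101, 102]])
```

These two arrays are what would be shipped to the server as the model's named inputs.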
Thanks,
Gerald