Max_new_token
sheetalmathur opened this issue · 1 comments
sheetalmathur commented
How can I change the max_new_tokens value when fine-tuning the Whisper model?
def transcribe(audio):
    with torch.cuda.amp.autocast():
        # max_new_tokens caps how many tokens the decoder may generate
        text = pipe(audio, generate_kwargs={"forced_decoder_ids": forced_decoder_ids}, max_new_tokens=255)["text"]
    return text
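For reference, a minimal sketch of how max_new_tokens can be wired into a transformers ASR pipeline (the checkpoint name "openai/whisper-small" and the helper build_transcriber are illustrative assumptions, not from this thread):

```python
from transformers import pipeline


def build_transcriber(model_id="openai/whisper-small", max_new_tokens=128):
    """Return a transcribe function whose decoder output length is capped.

    model_id is a hypothetical example checkpoint; max_new_tokens can also be
    passed inside generate_kwargs instead of as a top-level pipeline argument.
    """
    pipe = pipeline("automatic-speech-recognition", model=model_id)

    def transcribe(audio):
        # generate_kwargs are forwarded to model.generate(), so the cap
        # applies per decoding pass
        return pipe(audio, generate_kwargs={"max_new_tokens": max_new_tokens})["text"]

    return transcribe
```

Changing the cap is then just a matter of constructing the transcriber with a different max_new_tokens value, e.g. build_transcriber(max_new_tokens=255).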
Vaibhavs10 commented
Hi @sheetalmathur - What's your use case here? I'm not sure what exactly you're trying to do.