aurelio-labs/semantic-router

Mistral embedding model excessive tokens error

Opened this issue · 2 comments

When creating embeddings via MistralAI with the mistral-embed model:

MistralAPIException: Status: 400. Message: {"object":"error","message":"Too many tokens in batch. Max is 16384 got 17555","type":"invalid_request_error","param":null,"code":null}

We should add handling for this error.
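One way such handling might look is to pack documents greedily into sub-batches that stay under the 16384-token limit reported in the error, then embed each sub-batch separately. A minimal sketch, assuming the limit from the error message above; `approx_tokens` is a hypothetical character-based heuristic, not Mistral's real tokenizer, so a safety margin (or the actual tokenizer) would be needed in practice:

```python
MAX_BATCH_TOKENS = 16384  # per-batch limit reported in the 400 error


def approx_tokens(text: str) -> int:
    # Rough heuristic: ~1 token per 4 characters. Real handling should
    # use the model's tokenizer for an exact count, or leave headroom.
    return max(1, len(text) // 4)


def split_into_batches(docs: list[str], max_tokens: int = MAX_BATCH_TOKENS) -> list[list[str]]:
    """Greedily pack docs into batches whose summed token estimate
    stays at or below max_tokens."""
    batches: list[list[str]] = []
    current: list[str] = []
    current_tokens = 0
    for doc in docs:
        n = approx_tokens(doc)
        # Start a new batch if adding this doc would exceed the limit.
        if current and current_tokens + n > max_tokens:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(doc)
        current_tokens += n
    if current:
        batches.append(current)
    return batches
```

The embedding call would then loop over `split_into_batches(docs)` and concatenate the results, instead of sending all documents in one request.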

I have the same bug... following this issue.

I've started looking into this. Close to some sort of solution.