Issues
[Bug]: raise validation error for anthropic message input on `.completion` call
#6565 opened by markokraemer - 2
[Bug]: Tool_choice not supported with watsonx.ai.
#6562 opened by yobens5 - 0
[Feature]: Error details as json
#6578 opened by julian-hartl - 0
[Bug]: Usage cannot be used
#6612 opened by cai417 - 1
[Bug]: Can't use Anthropic's new text_editor_20241022/str_replace_editor tool
#6608 opened by paul-gauthier - 5
[Bug]: tpm/rpm of llm model not managed by litellm in memory/redis cluster
#6602 opened by salman-lilly - 0
[Bug]: 2 vulnerabilities found
#6596 opened by Jacobh2 - 0
[Bug]: Wrong value for budget duration
#6601 opened by superpoussin22 - 5
[Bug]: ensure Anthropic prometheus token tracking
#6599 opened by krrishdholakia - 0
[Feature]: Add support for SAP AI Core
#6600 opened by aantn - 7
[Bug]: Can't add new models, getting wrong error
#6558 opened by Lo10Th - 3
[Bug]: LiteLLM API Connection Error Issue (Aider, Amazon Bedrock, Claude 3.5 Sonnet v2)
#6559 opened by ldavis9000aws - 2
[Bug]: New llama 3.2 vision models from groq not working: prompting with images is incompatible with system messages
#6569 opened by pradhyumna85 - 0
[Bug]: Cannot get past 50 RPS
#6592 opened by vutrung96 - 3
[Feature]: Add GitHub Copilot as model provider
#6564 opened by Jasmin68k - 3
[Bug]: `The model is repeating the same chunk`
#6549 opened by Manouchehri - 1
[Bug]: AzureException
#6581 opened by nathanlara - 1
[Bug]: File uploads with purpose=batch error out with 'Invalid value for purpose.'
#6582 opened by siddhantgawsane - 0
22:13:10 - LiteLLM Router:ERROR: router.py:2811 - litellm.router.py::async_function_with_fallbacks() - Error occurred while trying to do fallbacks - litellm.APIConnectionError: Cannot connect to host localhost:11434 ssl:default [Connect call failed ('127.0.0.1', 11434)]
#6580 opened by shuvo881 - 1
[Feature]: NVIDIA NIM support
#6548 opened by win4r - 0
[Bug]: Incorrect cost calculation when writing cached prompt tokens in Anthropic
#6575 opened by Gullesnuffs - 0
2024-11-04 17:23:16 ERROR rasa.engine.graph - [error ] graph.node.error_running_component node_name=train_SingleStepLLMCommandGenerator0 ProviderClientAPIException: ProviderClientAPIException: Failed to embed documents Original error: litellm.BadRequestError: litellm.ContextWindowExceededError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 13915 tokens (13915 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
#6574 opened by prasanthv123 - 0
[Feature]: Return 'trace_id' for failed requests
#6568 opened by krrishdholakia - 0
[Bug]: httpx.ConnectError: All connection attempts failed -> prisma-client-py
#6567 opened by HGladiator - 5
having this error while using crawl4ai
#6560 opened by Trail87 - 0
AutoAgents / llm
#6557 opened by HanlinJo - 0
[Bug]: When a user is deleted, the user is not removed from the teams they are a member of
#6556 opened by superpoussin22 - 0
[Feature]: Add WatsonX models to the litellm model prices and context window JSON.
#6552 opened by lorenzejay - 1
[Bug]: When calling cohere-embed-v3 models via Azure AI, the model name in response is always `Cohere-embed-v3-multilingual`
#6540 opened by taralika - 1
[Bug]: `cohere-embed-v3` models fail via litellm proxy while they work fine directly via Azure AI
#6541 opened by taralika - 2
[Bug]: litellm proxy generates a very different embedding vector for images using cohere-embed-v3 models than directly using the same model via Azure AI
#6542 opened by taralika - 1
[Bug]: Bedrock with function calling: The number of toolResult blocks at messages.2.content exceeds the number of toolUse blocks of previous turn.
#6535 opened by xingyaoww - 0
[Feature]: Define an exception for temporary 502 errors
#6550 opened by enyst - 3
[Bug]: No fallback when streaming; litellm.InternalServerError: AnthropicException - Overloaded. Handle with `litellm.InternalServerError`.
#6532 opened by kodemonk - 4
[Bug]: `OpenAI Embedding` does not support `modality` parameter in `extra_body`.
#6525 opened by S1LV3RJ1NX - 0
[Feature]: Enable Customer Usage Tracking for Assistant API and Token Usage Display in UI
#6531 opened by Venukiran004 - 0
[Bug]: Assistant Listing Limited to 20 with Duplications – Limit Parameter Ignored
#6530 opened by Venukiran004 - 0
[Feature]: File Upload Not Working with Virtual Key
#6529 opened by Venukiran004 - 0
[Feature]: Enable Endpoint Tracing for Image, Audio, and Assistant Operations
#6527 opened by Venukiran004