Issues
[Bug]: Async streaming responses using custom callbacks sometimes have wrong kwargs
#3352 opened by joch - 1
[Bug]: Nextcloud -> LiteLLM -> Ollama -> async_generator object is not iterable
#3357 opened by gitwittidbit - 1
gemini doesn't connect
#3356 opened by danilo26 - 0
[Bug]: Not catching bedrock rate limit error
#3355 opened by dirkpetersen - 1
[Bug]: When custom templates are defined via register_prompt_template with ollama, the whole template is not replaced.
#3350 opened by japanvik - 0
the haunted house
#3347 opened by Tomasgq - 1
[Bug]: vllm 'System prompt not supported'
#3325 opened by krrishdholakia - 4
[Feature]: Support tasks in embeddings
#3322 opened by demux79 - 3
[Feature]: Adding support for Volcano Engine
#3342 opened by Jeffwan - 0
[Feature]: Add support for the dashscope API for Qwen models
#3343 opened by denverdino - 0
[Bug]: trim_messages cuts between tool function call and tool function response
#3329 opened by Znunu - 0
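The trim_messages bug above (#3329) reflects a real invariant: an assistant message that issues a tool call and the tool message answering it must be kept or dropped together, or the resulting conversation is invalid. A minimal sketch of that pairing rule in plain Python — the helper name and message shapes are illustrative, not LiteLLM's actual implementation:

```python
def trim_keeping_tool_pairs(messages, max_messages):
    """Drop oldest messages first, but never separate an assistant
    tool call from the tool response that answers it (illustrative)."""
    kept = list(messages)
    while len(kept) > max_messages:
        first = kept.pop(0)
        # If the dropped assistant message issued tool calls, also drop
        # the orphaned tool responses that reference those call ids.
        if first.get("role") == "assistant" and first.get("tool_calls"):
            ids = {c["id"] for c in first["tool_calls"]}
            kept = [m for m in kept
                    if not (m.get("role") == "tool"
                            and m.get("tool_call_id") in ids)]
    return kept

msgs = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1",
                                          "function": {"name": "get_weather"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "Sunny"},
    {"role": "assistant", "content": "It's sunny."},
]
trimmed = trim_keeping_tool_pairs(msgs, 3)  # keeps the call/response pair intact
```

Trimming to 2 messages would drop the tool-call message and therefore also its tool response, never one without the other.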
[Feature]: Proxy - Support passing through OpenAI, anthropic API Keys from request headers
#3332 opened by ishaan-jaff - 1
Restricted to 100 threads — can we let users decide how many threads/requests they want to send to OpenAI in parallel?
#3321 opened by vivek-hounddog - 3
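The thread-limit request above (#3321) amounts to making the concurrency cap a user-facing parameter. A minimal illustration with `asyncio.Semaphore` — the function names and the hard-coded cap of 100 are assumptions about the reported behavior, not LiteLLM internals:

```python
import asyncio

async def send_all(requests, max_concurrency=100):
    """Await the given request coroutines with at most `max_concurrency`
    in flight; exposing this parameter lets callers raise or lower the cap."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(coro):
        async with sem:
            return await coro

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(c) for c in requests))

async def fake_request(i):
    await asyncio.sleep(0)  # stand-in for an actual OpenAI call
    return i

results = asyncio.run(send_all([fake_request(i) for i in range(5)],
                               max_concurrency=2))
```

The semaphore only bounds how many coroutines run concurrently; all requests still complete, just in batches no wider than the cap.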
[Feature]: ability to see how much capacity is remaining before hitting quota
#3323 opened by amit10may - 2
[Bug]: Problem calling functions with Mistral Large
#3315 opened by sebderhy - 1
[Bug]: Vertex AI async not applying `safety_settings`?
#3318 opened by Manouchehri - 2
[Bug]: huggingface embeddings broken
#3261 opened by dhruv-anand-aintech - 11
[Bug]: NameError: name 'GenericAPILogger' is not defined when applying Custom Callback APIs (generic) with proxy server
#3290 opened by hiep-dinh - 2
[Bug]: `chat.completion.chunk`'s index is invalid when using `n>=2` and `stream=True`
#3276 opened by Manouchehri - 0
[Bug]: Gibberish output of Llama-3 models on AWS Bedrock
#3297 opened by aswny - 0
[Bug]: Broken s3 cache creation with streaming?
#3268 opened by Manouchehri - 1
[Bug]: With a key with access to "All Team Models", list models returns just "all-team-models"
#3275 opened by tylerbrandt - 0
[Bug]: anthropic stop_sequences: each stop sequence must contain non-whitespace
#3286 opened by PrinceBaghel258025 - 0
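The stop_sequences error above (#3286) comes from Anthropic rejecting stop sequences that are empty or whitespace-only, so a client-side filter can avoid the failed request. A hedged sketch in plain Python — an illustrative helper, not LiteLLM's actual fix:

```python
def sanitize_stop_sequences(stop):
    """Drop stop sequences Anthropic would reject: empty strings or
    strings containing only whitespace (illustrative helper)."""
    if stop is None:
        return None
    if isinstance(stop, str):
        stop = [stop]
    cleaned = [s for s in stop if s.strip()]
    # Return None rather than an empty list so the parameter is omitted
    return cleaned or None

sanitize_stop_sequences(["\n", "END", "  ", ""])  # keeps only "END"
```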
[Bug]: Langfuse logging only logs the first choice (i.e. nothing beyond `n>1`)
#3273 opened by Manouchehri - 0
[Bug]: Improve `n` caching logic
#3272 opened by Manouchehri - 0
[Feature]: track gemini image tokens
#3269 opened by krrishdholakia - 1
[Bug]: logprobs missing from langfuse
#3254 opened by Manouchehri - 0
[Feature]: filter by team on `/spend/logs`
#3263 opened by krrishdholakia - 9
[Bug]: `logprobs=True` with `stream=True` is broken (OpenAI and Azure OpenAI)
#3253 opened by Manouchehri - 1
[Feature]: OpenAI Batches Endpoint Support
#3251 opened by krrishdholakia - 3
[Bug]: admin.litellm.ai shows full-screen error after 3-4 seconds, meaning it's fully broken
#3248 opened by AshSourceTable - 0
[Feature]: Support upstream batching on OpenAI
#3247 opened by Manouchehri