[Bug]: LiteLLM checks for non-existing Langfuse versions
What happened?
There is an ongoing discussion here that partly describes one of the issues coming from this:
https://github.com/orgs/langfuse/discussions/3780
So not all Langfuse functionality works with LiteLLM, because LiteLLM checks for non-existing Langfuse versions:
https://github.com/BerriAI/litellm/blob/main/litellm/integrations/langfuse/langfuse.py#L399
The latest version of Langfuse is 2.53.2, yet LiteLLM checks for versions up to 2.7.3. The confusion is probably because their main (non-Python) repo is already up to 2.86. I would assume this can easily be solved by lowering the version to a recent version of the actual Langfuse Python package.
To summarize, this would solve the following (a usage sketch follows the list):
- Support adding Langfuse tags to LiteLLM calls
- Support attaching a Langfuse prompt to LiteLLM calls, to track usage of prompts managed with Langfuse prompt management
- Support adding costs to LiteLLM calls
- Support adding the generation start time
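For context, a minimal sketch of the kind of call this affects. The model name, prompt name, and tag are placeholders; the metadata keys are the ones LiteLLM's Langfuse integration reads:

```python
import litellm
from langfuse import Langfuse

langfuse = Langfuse()
# "my_prompt" is a placeholder for a prompt managed in Langfuse
langfuse_prompt = langfuse.get_prompt("my_prompt")

response = litellm.completion(
    model="gpt-4o",  # placeholder; any model routed through LiteLLM
    messages=[{"role": "user", "content": langfuse_prompt.compile()}],
    metadata={
        "tags": ["my-tag"],         # Langfuse tags for the resulting trace
        "prompt": langfuse_prompt,  # links the generation to the managed prompt
                                    # (the thread below finds that LiteLLM
                                    # actually expects a dict here, not the
                                    # client object)
    },
)
```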
Relevant log output
No response
The check is for >= 2.7.3, so wouldn't 2.53.2 evaluate to true? I can see it working, since cost tracking etc. does work as expected.
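As a quick sanity check of that comparison, using the packaging library:

```python
from packaging.version import Version

# Versions compare component-wise, so 53 > 7 and the check passes.
# Misreading 2.7.3 as "2.73" is what makes it look like it would fail.
print(Version("2.53.2") >= Version("2.7.3"))  # True
```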
I'm an idiot; I misread 7.3 as 73. I did find the actual problem though: the documentation says to pass the ChatPromptClient object as the prompt, but the LiteLLM logic expects a dict.
It throws an exception here:
https://github.com/BerriAI/litellm/blob/main/litellm/integrations/langfuse/langfuse.py#L379
And then removes the prompt object here:
https://github.com/BerriAI/litellm/blob/main/litellm/integrations/langfuse/langfuse.py#L386
Everything is fixed, though, by not passing the ChatPromptClient directly, but ChatPromptClient.__dict__ instead. Not sure if it should stay like this, but it works for me for now.
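Put differently, a minimal sketch of the workaround, where `langfuse_prompt` is whatever `langfuse.get_prompt(...)` returned:

```python
# What the Langfuse docs suggest; raises inside LiteLLM's integration:
metadata = {"prompt": langfuse_prompt}

# The workaround: pass the underlying dict instead of the client object.
metadata = {"prompt": langfuse_prompt.__dict__}
```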
That sounded like a good solution, but unfortunately, it didn't work for me and raised an error.
Did you pass it as part of the metadata in litellm?
Did you check that Langfuse actually tracked the prompt?
My error:

```
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for Prompt_Text
labels
  field required (type=value_error.missing)
tags
  field required (type=value_error.missing)
```
(Although I do see they are available in .dict.)
Like so:
```python
from langfuse import Langfuse
from pydantic import BaseModel

langfuse = Langfuse()
langfuse_prompt = langfuse.get_prompt("litellm_test_prompt", label="latest")

# <code to load litellm client, I use Router, together with Instructor for structured outputs>

class UserInfo(BaseModel):
    name: str
    age: int

user_message = "John Doe is 30 years old."

user_info, completion = llm.chat.completions.create_with_completion(
    model="claude-sonnet-3.5",
    max_tokens=4096,
    temperature=0.1,
    response_model=UserInfo,
    max_retries=2,
    messages=langfuse_prompt.compile(text=user_message),
    metadata={"prompt": langfuse_prompt.__dict__},
)
```
I'm using:

```
litellm==1.48.10
langfuse==2.53.1
instructor==1.5.0
```
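Regarding the ValidationError above, one untested guess at a workaround, assuming labels and tags exist on the prompt object but are missing from its __dict__: add them explicitly before handing the dict to LiteLLM.

```python
# Hypothetical fix: copy the prompt's __dict__ and make sure the fields
# the Prompt_Text pydantic model requires are present.
prompt_dict = dict(langfuse_prompt.__dict__)
prompt_dict.setdefault("labels", getattr(langfuse_prompt, "labels", []))
prompt_dict.setdefault("tags", getattr(langfuse_prompt, "tags", []))

metadata = {"prompt": prompt_dict}
```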
@JohanBekker thank you for the solution!