AgentOps-AI/agentops

LLM call latency increased due to agentops

rupav opened this issue · 2 comments

๐Ÿ› Bug Report

🔎 Describe the Bug
I have a FastAPI/uvicorn server that serves multiple concurrent requests, each of which makes an LLM call. To monitor this, I create a new agentops session per request and patch autogen's outermost initiate_chat method with it. With this setup, LLM call latency has increased by 5x.

🔄 Reproduction Steps

  • Init agentops before any LLM calls (on server startup)
  • Create a session on every API call to the server (which has internal autogen agents integrated)
  • Use the created session's patch method on autogen's initiate_chat method
  • End the session before serving the API response to the client (see the sketch after this list)
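For reference, a minimal sketch of this setup, assuming agentops 0.3.x's session API (init / start_session / session.patch / end_session) as described in this report; the /chat endpoint, agent configuration, and response shape are illustrative placeholders, and llm_config/API keys are omitted:

```python
from fastapi import FastAPI
import agentops
from autogen import AssistantAgent, UserProxyAgent

app = FastAPI()

@app.on_event("startup")
def init_agentops() -> None:
    # Initialize once, before any LLM calls; don't auto-start a global
    # session since each request creates its own.
    agentops.init(auto_start_session=False)

@app.post("/chat")
def chat(prompt: str) -> dict:
    # One agentops session per incoming request.
    session = agentops.start_session()
    # llm_config / model API keys omitted for brevity (placeholders).
    assistant = AssistantAgent("assistant")
    user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER")
    try:
        # Patch the outermost initiate_chat call so the LLM calls it
        # triggers are attributed to this request's session under concurrency.
        result = session.patch(user_proxy.initiate_chat)(assistant, message=prompt)
        session.end_session(end_state="Success")
    except Exception:
        session.end_session(end_state="Fail")
        raise
    return {"summary": str(result.summary)}
```

The per-request session keeps traces from concurrent requests isolated, so the patched initiate_chat is the only agentops touchpoint on the request's hot path.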

🙏 Expected Behavior

  • agentops instrumentation should add minimal overhead to LLM call latency

🔍 Additional Context
Python: 3.11
agentops: 0.3.6
pyautogen: 0.2.32

Hey @rupav, thanks for reporting. We are discussing this internally and think we might just need to deploy a server in India to bring down the latency. Will report back when we arrive at a decision. Thanks!

Cheap ones available in New Delhi :)