AnthropicVertex stream chat generation is taking too much time
DhruvThu opened this issue · 7 comments
Recently, I started using AnthropicVertex instead of the direct Anthropic client. When I generate data through the AnthropicVertex client, it takes around 2s to start streaming, whereas the direct Anthropic client does not take this long. The 2s figure also varies: sometimes it is much larger, around 6-10s, and in the worst case up to 20s. Is there some kind of queue involved? I am using the same code given in the Vertex AI Anthropic notebook to generate responses. Is there any workaround I need to apply to get responses as fast as with the direct Anthropic client? If someone could guide me on this, it would be really helpful.
Thanks !!
Hey @DhruvThu, can you share a few ids from the responses you get back on Vertex requests? Or share a few request ids? This will help us debug.
Thanks for responding. I am using streaming responses from Vertex Anthropic, and these are some of the message ids I got in the first chunk: msg_01D2jNpu4rUZMXUvwtpipMnx, msg_01CMjRdPAhDQaWELbrgSirS8
Hmm, message ids from Vertex should look like msg_vrtx_.... The ids you shared are from the direct (1P) Anthropic API.
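For anyone checking their own logs, a quick way to tell the two APIs apart is the message id prefix. A small illustrative helper (the function name and the example ids are made up for this sketch, not part of the SDK):

```python
def message_source(message_id: str) -> str:
    """Classify a message id by its prefix (illustrative helper, not part of the SDK)."""
    if message_id.startswith("msg_vrtx_"):
        return "vertex"
    if message_id.startswith("msg_"):
        return "direct"
    return "unknown"

print(message_source("msg_vrtx_0123example"))  # vertex
print(message_source("msg_0123example"))       # direct
```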
Could you check this one? msg_vrtx_01AaDL52fwpTrqFftLMxxQ1e. Sorry for the previous ones. For this message, it took around 2.4s to start streaming.
The streaming response through the direct Anthropic API took around 0.89s. The message id for that one is msg_01AWgnspZ2w5NhzE92uL7VZ9.
The code I am using is as follows:
import time

import google.auth
from anthropic import Anthropic, AnthropicVertex


class AnthropicLLM:
    def __init__(self, anthropic_client: Anthropic, anthropic_vertex_client: AnthropicVertex) -> None:
        self.anthropic = anthropic_client
        credentials, project_id = google.auth.load_credentials_from_dict(
            google_credentials_info,
            scopes=["https://www.googleapis.com/auth/cloud-platform"],
        )
        anthropic_vertex_client._credentials = credentials
        self.vertex_anthropic = anthropic_vertex_client
        self.messages = Messages(self)


class Messages:
    def __init__(self, client: AnthropicLLM) -> None:
        self.client = client

    def create(self, model: str, messages: list[Message], temperature: float, system: str,
               stream: bool, max_tokens: int, tool_choice: str, tools: list[dict]):
        model = model.replace("@", "-")
        start = time.time()
        if tools == []:
            response = self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=True,
                max_tokens=max_tokens,
            )
            print(response)
            print(time.time() - start)
            return response
        else:
            return self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=stream,
                max_tokens=max_tokens,
                tool_choice=tool_choice,
                tools=tools,
            )
Hey @DhruvThu, we've identified the root cause of this issue. While we work on a fix, you can work around it by explicitly passing an access_token, e.g. AnthropicVertex(access_token=access_token).
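A minimal sketch of what that workaround can look like, assuming Application Default Credentials are configured; the region value here is illustrative:

```python
import google.auth
import google.auth.transport.requests
from anthropic import AnthropicVertex

# Fetch and refresh credentials once, up front, so the token round-trip
# does not happen inside the request path.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

# Pass the token explicitly; the client then uses it directly instead of
# resolving credentials itself.
client = AnthropicVertex(
    region="us-east5",  # illustrative region
    project_id=project_id,
    access_token=credentials.token,
)
```

Note that an access token obtained this way expires (typically after an hour), so a long-lived process would need to refresh and recreate periodically.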
Hey, thanks for the response. I'll try with the access token.
This will be fixed in the next release, v0.30.2, #573.
Note that you will still see a delay on the very first request made with an AnthropicVertex instance, since we need to fetch the access token, but subsequent requests will use the cached token.
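As a toy illustration of why only the first request pays the cost, here is a sketch of token caching with a stand-in fetch function (all names are made up; the real SDK internals differ):

```python
import time
from functools import lru_cache

def _fetch_access_token() -> str:
    """Stand-in for the real credential refresh round-trip."""
    time.sleep(0.1)  # simulate the slow network call
    return "ya29.fake-token"

@lru_cache(maxsize=1)
def get_access_token() -> str:
    # First call does the slow fetch; later calls return the cached value.
    return _fetch_access_token()

t0 = time.monotonic(); get_access_token(); first = time.monotonic() - t0
t1 = time.monotonic(); get_access_token(); second = time.monotonic() - t1
print(f"first call: {first:.3f}s, second call: {second:.3f}s")
```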