How to await a tool's feedback message?
Opened this issue · 3 comments
In the examples for the voice-pipeline-agent, function_calling_weather.py shows how to add tools to the agent. At line 43 it executes a say() call to give feedback to the user, telling them that the tool is about to be used: `await call_ctx.agent.say(message)`. The speech itself is not awaited, so the agent returns the response before delivering the feedback message. Does anyone know how to make the function wait for this call, so the message plays before the rest of the function executes?
I'm not sure what I mean... could you give an example for "This function is not awaited and the agent returns the response before delivering the feedback message"?
It first completes the function call and its associated sound, then plays the filler sound. Instead, it should play the filler sound first.
```python
import logging
import random
from typing import Annotated

import aiohttp
from livekit.agents import llm
from livekit.agents.pipeline import AgentCallContext

logger = logging.getLogger("weather-demo")


class AssistantFnc(llm.FunctionContext):
    """
    The class defines a set of LLM functions that the assistant can execute.
    """

    @llm.ai_callable()
    async def get_weather(
        self,
        location: Annotated[
            str, llm.TypeInfo(description="The location to get the weather for")
        ],
    ):
        """Called when the user asks about the weather. This function will return the weather for the given location."""
        # Example of a filler message while waiting for the function call to complete.
        # NOTE: This message illustrates how the agent can engage users by using the `say()` method
        # while awaiting the completion of the function call. To create a more dynamic and engaging
        # interaction, consider varying the responses based on context or user input.
        call_ctx = AgentCallContext.get_current()
        filler_messages = [
            "Let me check the weather in {location} for you.",
            "Let me see what the weather is like in {location} right now.",
            # LLM will complete this sentence if it is added to the end of the chat context
            "The current weather in {location} is ",
        ]
        message = random.choice(filler_messages).format(location=location)

        # NOTE: set add_to_chat_ctx=True will add the message to the end
        # of the chat context of the function call for answer synthesis
        speech_handle = await call_ctx.agent.say(message, add_to_chat_ctx=True)  # noqa: F841
        # To wait for the speech to finish
        # await speech_handle.join()

        logger.info(f"getting weather for {location}")
        url = f"https://wttr.in/{location}?format=%C+%t"
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    weather_data = await response.text()
                    # response from the function call is returned to the LLM
                    return f"The weather in {location} is {weather_data}."
                else:
                    raise Exception(
                        f"Failed to get weather data, status code: {response.status}"
                    )
```
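The commented-out `await speech_handle.join()` in the snippet is the relevant knob: `say()` only schedules playback and returns a handle, so without joining it, the function's return value reaches the LLM before the filler finishes. A minimal sketch of that ordering difference, using plain asyncio with stub classes (the `StubAgent`/`SpeechHandle` names here are illustrative stand-ins, not the real LiveKit API):

```python
import asyncio


class SpeechHandle:
    """Stub handle: playback runs as a background task (illustrative only)."""

    def __init__(self, text: str, log: list):
        self._task = asyncio.ensure_future(self._play(text, log))

    async def _play(self, text: str, log: list) -> None:
        await asyncio.sleep(0.05)  # simulate TTS playback time
        log.append(f"spoke: {text}")

    async def join(self) -> None:
        await self._task  # block until playback finishes


class StubAgent:
    def __init__(self, log: list):
        self.log = log

    async def say(self, text: str) -> SpeechHandle:
        # say() only *schedules* playback and returns immediately
        return SpeechHandle(text, self.log)


async def tool_without_join(agent: StubAgent) -> None:
    await agent.say("Let me check the weather...")
    agent.log.append("returned tool result")  # runs before playback ends


async def tool_with_join(agent: StubAgent) -> None:
    handle = await agent.say("Let me check the weather...")
    await handle.join()  # wait for the filler speech to finish first
    agent.log.append("returned tool result")


async def main():
    log1, log2 = [], []
    await tool_without_join(StubAgent(log1))
    await asyncio.sleep(0.1)  # let the orphaned playback task finish
    await tool_with_join(StubAgent(log2))
    return log1, log2


log1, log2 = asyncio.run(main())
print(log1)  # result first, speech second
print(log2)  # speech first, result second
```

In the real example this corresponds to un-commenting `await speech_handle.join()` before the `aiohttp` request, so the filler message is fully delivered before the weather result is returned.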
@m-aliabbas how do you add filler messages when using the MultimodalAgent?