openai/openai-agents-python

Agents SDK adds response_format although I don't set output_type

Closed this issue · 2 comments

Please read this first

  • Have you read the docs? Agents SDK docs
    • yes
  • Have you searched for related issues? Others may have faced similar issues.
    • yes

Describe the bug

I am using the Qwen model, which requires passing {"response_format": {"type": "json_object"}} to enable structured output. I have not set output_type, but I still get the following error. It seems the SDK internally sets a default value automatically.
--> openai.resources.chat.completions.completions.AsyncCompletions.create() got multiple values for keyword argument 'response_format'

Debug information

  • Agents SDK version: v0.2.10
  • Python version: Python 3.12

Repro steps

Add extra_args like this, and the following error is reported: "openai.resources.chat.completions.completions.AsyncCompletions.create() got multiple values for keyword argument 'response_format'"

model_settings=ModelSettings(
    extra_args={"response_format": {"type": "json_object"}}
)
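The error itself is a plain Python keyword collision. A minimal standalone sketch (the `create` function below is a stand-in with illustrative names, not the real SDK code): the SDK passes `response_format` as an explicit keyword argument and then also expands `extra_args` as `**kwargs`, so the same keyword arrives twice.

```python
# Stand-in for AsyncCompletions.create(); names are illustrative only.
def create(*, model, response_format=None, **kwargs):
    return {"model": model, "response_format": response_format, **kwargs}

sdk_default = {"type": "text"}  # value the SDK fills in internally
extra_args = {"response_format": {"type": "json_object"}}  # user's ModelSettings.extra_args

try:
    # Explicit keyword plus the same key inside **extra_args -> TypeError
    create(model="qwen-plus", response_format=sdk_default, **extra_args)
except TypeError as exc:
    print(exc)  # ... got multiple values for keyword argument 'response_format'
```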

Expected behavior

If the user has not set output_type, they should be allowed to set response_format through ModelSettings. The SDK should not internally set a default response_format, because that prevents users from setting it manually.
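One way the requested behavior could look (a hypothetical sketch, not the SDK's actual implementation): only attach response_format when output_type was explicitly set, and otherwise leave the key absent so extra_args can supply it without a collision.

```python
def build_request_kwargs(output_type=None, extra_args=None):
    """Hypothetical merge logic: response_format is only set when the user
    asked for structured output via output_type; otherwise the key is simply
    absent and extra_args can provide it without a keyword collision."""
    kwargs = {}
    if output_type is not None:
        kwargs["response_format"] = {"type": "json_schema"}  # simplified
    kwargs.update(extra_args or {})
    return kwargs

# No output_type: the user's response_format passes through untouched.
print(build_request_kwargs(extra_args={"response_format": {"type": "json_object"}}))
```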

Hi, thanks for writing in.

We've been receiving feedback on Qwen model support, but our understanding is that the model does not support structured output the way the Chat Completions API expects. More importantly, even if this SDK provided a way to customize the response-format part, you wouldn't be able to use tools along with structured outputs as long as you use that model.

This is why we've closed #1595, which is a similar issue. If the model supports structured outputs along with tool calling, we may consider doing something extra for the model support, but for now, we don't plan to adjust our LiteLLM support for it.

I think I didn't express my point clearly. This change is not meant to cater to a specific model provider; it concerns the SDK's design logic. In my opinion, when creating an agent, parameters the user has not explicitly specified, such as output_type, should not cause the SDK to set a default value like response_format=None. Applying such defaults should be the responsibility of the backend service. Otherwise, the user's ability to configure these parameters manually is restricted, regardless of whether they use LiteLLM or the OpenAI client. This issue is not limited to the Qwen model: any scenario where certain fields need to be customized but the SDK predefines default values for those fields will hit the same problem.
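The principle argued here can be sketched in a few lines (illustrative code, not from the SDK): a client that omits unset parameters entirely, instead of sending explicit None defaults, leaves the backend free to apply its own defaults and the user free to override any field.

```python
def build_payload(model, **user_settings):
    # Drop keys the user never set (None) instead of sending them as defaults;
    # the backend service then applies its own defaults for absent fields.
    payload = {"model": model}
    payload.update({k: v for k, v in user_settings.items() if v is not None})
    return payload

# response_format is included; temperature is omitted rather than sent as None.
print(build_payload("qwen-plus",
                    response_format={"type": "json_object"},
                    temperature=None))
```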