openai/openai-agents-python

TypeError: 'NoneType' object is not subscriptable in OpenAIChatCompletionsModel when Runner.run receives orchestrator_result.to_input_list() with Gemini API

Closed · 1 comment


Description:
I consistently hit a TypeError: 'NoneType' object is not subscriptable inside the agents library's OpenAIChatCompletionsModel when I pass the output of orchestrator_result.to_input_list() as input to a subsequent Runner.run call for a synthesizer_agent. This happens even though both the synthesizer_agent and the run_config are explicitly configured to use the gemini-2.5-flash model via an AsyncOpenAI client pointed at the Google Generative Language API.

The error points to a situation where response.choices[0].message is None after an API call to Gemini, indicating that the LLM is not returning an expected message object when provided with the structured conversation history from to_input_list().

Steps to Reproduce:

  1. Environment Setup:

    • Ensure GEMINI_API_KEY is set in your .env file.
    • Install necessary dependencies (e.g., pip install openai-agents openai python-dotenv).
  2. Code (examples/agent_patterns/agents_as_tools.py):

    import asyncio
    import os
    
    from agents import Agent, ItemHelpers, MessageOutputItem, OpenAIChatCompletionsModel, RunConfig, Runner, trace
    from dotenv import load_dotenv
    from openai import AsyncOpenAI
    
    load_dotenv()
    gemini_api_key = os.getenv("GEMINI_API_KEY")
    
    if not gemini_api_key:
        raise ValueError("GEMINI_API_KEY not found in .env file. Please create a .env file and add your key.")
    
    # Enable verbose logging for debugging (Optional)
    
    external_client = AsyncOpenAI(
        api_key=gemini_api_key,
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    model = OpenAIChatCompletionsModel(
        model="gemini-2.5-flash",
        openai_client=external_client
    )
    run_config = RunConfig(
        model=model,
    )
    spanish_agent = Agent(
        name="spanish_agent",
        instructions="You translate the user's message to Spanish",
        handoff_description="An English-to-Spanish translator",
        model=model
    )
    
    french_agent = Agent(
        name="french_agent",
        instructions="You translate the user's message to French",
        handoff_description="An English-to-French translator",
        model=model
    )
    
    italian_agent = Agent(
        name="italian_agent",
        instructions="You translate the user's message to Italian",
        handoff_description="An English-to-Italian translator",
        model=model
    )
    
    orchestrator_agent = Agent(
        name="orchestrator_agent",
        instructions=(
            "You are a translation agent. You use the tools given to you to translate. "
            "If asked for multiple translations, you call the relevant tools in order. "
            "You never translate on your own; you always use the provided tools."
        ),
        tools=[
            spanish_agent.as_tool(
                tool_name="translate_to_spanish",
                tool_description="Translate the user's message to Spanish",
            ),
            french_agent.as_tool(
                tool_name="translate_to_french",
                tool_description="Translate the user's message to French",
            ),
            italian_agent.as_tool(
                tool_name="translate_to_italian",
                tool_description="Translate the user's message to Italian",
            ),
        ],
        model=model
    )
    
    synthesizer_agent = Agent(
        name="synthesizer_agent",
        instructions="You inspect translations, correct them if needed, and produce a final concatenated response.",
        model=model
    )
    
    
    async def main():
        msg = input("Hi! What would you like translated, and to which languages? ")
    
        # Run the entire orchestration in a single trace
        with trace("Orchestrator evaluator"):
            orchestrator_result = await Runner.run(orchestrator_agent, msg, run_config=run_config)
    
            for item in orchestrator_result.new_items:
                if isinstance(item, MessageOutputItem):
                    text = ItemHelpers.text_message_output(item)
                    if text:
                        print(f"  - Translation step: {text}")
            print(orchestrator_result.to_input_list())
            synthesizer_result = await Runner.run(
                synthesizer_agent, orchestrator_result.to_input_list(), run_config=run_config
            )
    
        print(f"\n\nFinal response:\n{synthesizer_result.final_output}")
    
    
    if __name__ == "__main__":
        asyncio.run(main())

Input:
Hi! What would you like translated, and to which languages? hi in spanish

Expected Behavior:
The synthesizer_agent should successfully process the list of messages provided by orchestrator_result.to_input_list() and synthesize a final response (e.g., "Hola"). The OpenAIChatCompletionsModel should correctly translate the structured input (including function_call and function_call_output message types) into a format consistently understood by the Gemini API, leading to a valid LLM response.

Observed Behavior:
The program crashes with a TypeError: 'NoneType' object is not subscriptable when Runner.run attempts to process orchestrator_result.to_input_list() for the synthesizer_agent. This indicates that the response.choices[0].message returned from the Gemini API call within OpenAIChatCompletionsModel.get_response is None, suggesting the API either returned an empty response or a response that could not be parsed into the expected structure containing a message.
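The failure mode can be reproduced in isolation: when an OpenAI-compatible endpoint returns a body without a usable choices array, the parsed response's choices attribute ends up None, and indexing it raises exactly this TypeError. The sketch below uses SimpleNamespace stand-ins for the API response (illustrative only, not the SDK's actual types) to show both the crash and a defensive guard that would surface a clearer error:

```python
from types import SimpleNamespace

def extract_message(response):
    """Return the first choice's message, or raise a descriptive error.

    Guards against responses where `choices` is missing or None, which is
    what `response.choices[0]` would otherwise turn into
    `TypeError: 'NoneType' object is not subscriptable`.
    """
    choices = getattr(response, "choices", None)
    if not choices:
        raise RuntimeError(f"Model returned no choices: {response!r}")
    return choices[0].message

# Simulate a well-formed response and a degenerate one.
ok = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content="Hola"))])
bad = SimpleNamespace(choices=None)

print(extract_message(ok).content)  # Hola
try:
    bad.choices[0]  # what the failing code path effectively does
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```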

Additional Context:
When orchestrator_result.final_output (a plain string) is passed directly to the synthesizer_agent's Runner.run call, the code executes without error. This suggests the issue is specific to the richer, list-based conversation history containing tool calls (function_call and function_call_output items): the OpenAIChatCompletionsModel apparently serializes these into a request shape the Gemini endpoint does not handle, or fails to parse the response Gemini sends back. It looks like a compatibility gap between the openai-agents library's OpenAIChatCompletionsModel and Gemini's OpenAI-compatible API for structured tool-call messages.
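As an interim workaround along the same lines, the structured history can be flattened into a single plain-text message before the second Runner.run call. The dict shapes below ("type", "role", "content", etc.) are assumptions based on what to_input_list() printed, not a guaranteed schema:

```python
def flatten_history(items):
    """Collapse a to_input_list()-style history into one plain-text transcript.

    Tool-call items ('function_call' / 'function_call_output') are folded into
    bracketed text lines, sidestepping endpoints that choke on structured
    tool-call history while still preserving the tool-call trail.
    """
    lines = []
    for item in items:
        kind = item.get("type", "message")
        if kind == "function_call":
            lines.append(f"[tool call] {item.get('name', '?')}({item.get('arguments', '')})")
        elif kind == "function_call_output":
            lines.append(f"[tool output] {item.get('output', '')}")
        else:
            content = item.get("content", "")
            if isinstance(content, list):  # content may be a list of text parts
                content = " ".join(p.get("text", "") for p in content if isinstance(p, dict))
            lines.append(f"[{item.get('role', 'user')}] {content}")
    return "\n".join(lines)

# Shapes below mirror the printed history from the repro (assumed, not exact).
history = [
    {"role": "user", "content": "hi in spanish"},
    {"type": "function_call", "name": "translate_to_spanish", "arguments": '{"input": "hi"}'},
    {"type": "function_call_output", "output": "Hola"},
]
print(flatten_history(history))
```

The synthesizer call would then become Runner.run(synthesizer_agent, flatten_history(orchestrator_result.to_input_list()), run_config=run_config) — effectively the same workaround as passing final_output, but keeping the tool-call context.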

Hi, thanks for sharing the details. Gemini's Chat Completions API is not fully compatible with the Agents SDK's use cases, so please use the LiteLLM adapter for this model instead: https://openai.github.io/openai-agents-python/models/litellm/
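For reference, the LiteLLM route from the linked docs looks roughly like this (a sketch, assuming the extra is installed via pip install "openai-agents[litellm]"; the "gemini/" prefix is LiteLLM's provider-prefixed model id):

```python
# LiteLLM addresses Gemini with a provider-prefixed model id.
GEMINI_MODEL = "gemini/gemini-2.5-flash"

def build_gemini_model(api_key: str):
    # Imported lazily so this module loads even without the litellm extra installed.
    from agents.extensions.models.litellm_model import LitellmModel
    return LitellmModel(model=GEMINI_MODEL, api_key=api_key)

# model = build_gemini_model(gemini_api_key)
# Every Agent(..., model=model) in the repro above can then stay unchanged,
# and the AsyncOpenAI client / OpenAIChatCompletionsModel setup is dropped.
```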