DeepSeek R1 `reasoning_content` not accessible
Closed this issue · 2 comments
Checked other resources
- This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Example Code
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, MessagesState
# Minimal reproducible example
llm = ChatOpenAI(
    model="deepseek-r1-250528",
    openai_api_key="your-key",
    openai_api_base="https://api.deepseek.com/v1",
)
# Test with LangGraph workflow
workflow = StateGraph(MessagesState)
workflow.add_node("chat", lambda state: {"messages": state["messages"] + [llm.invoke(state["messages"])]})
workflow.add_edge("chat", "__end__")
workflow.set_entry_point("chat")
app = workflow.compile()
# This should expose reasoning_content but doesn't
result = app.invoke({"messages": [HumanMessage(content="What is 2+2?")]})
print("Content:", result["messages"][-1].content)
print("Additional kwargs:", result["messages"][-1].additional_kwargs)
# Expected: reasoning_content should be in additional_kwargs
# Actual: reasoning_content is missing
Error Message and Stack Trace (if applicable)
No error message; the `reasoning_content` field is simply not exposed on the AIMessage object.
Description
What I'm trying to do: Use LangChain OpenAI integration with DeepSeek R1 to access both the final response and the detailed reasoning process.
What I expect to happen: The reasoning_content field from DeepSeek R1 should be accessible through AIMessage.additional_kwargs.
What is currently happening: The reasoning_content field is present in the raw API response but is not exposed through the LangChain interface.
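For reference, the relevant slice of a raw DeepSeek R1 response looks roughly like this (an illustrative, abridged dict; field names follow DeepSeek's API docs, values are made up):

```python
# Illustrative, abridged choice from a raw DeepSeek R1 chat completion;
# only the fields relevant to this issue are shown.
raw_choice = {
    "message": {
        "role": "assistant",
        "content": "2 + 2 = 4.",
        "reasoning_content": "The user asks for 2+2; adding 2 and 2 gives 4.",
    },
    "finish_reason": "stop",
}

# The field is present at the top level of the message object...
print("reasoning_content" in raw_choice["message"])  # → True
# ...but the LangChain parser drops it when building the AIMessage.
```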
Root Cause
The issue is in the response parsing logic in langchain_openai/chat_models/base.py:
- `_convert_dict_to_message` (lines 133-202) does not extract the `reasoning_content` field from the response dictionary
- `_create_chat_result` (lines 1234-1298) does not include the reasoning content in the message's `additional_kwargs`
Proposed Fix
The fix requires modifying the response parsing to include the reasoning_content field:
File: env/lib/python3.12/site-packages/langchain_openai/chat_models/base.py
Location: _convert_dict_to_message function, around lines 165-175
Change needed:
elif role == "assistant":
    content = _dict.get("content", "") or ""
    # Add support for reasoning_content
    reasoning_content = _dict.get("reasoning_content", "")
    additional_kwargs: dict = {}
    if reasoning_content:
        additional_kwargs["reasoning_content"] = reasoning_content
    # ... rest of the function
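In isolation, the patched branch behaves like this. The sketch below is a standalone stand-in for the real `_convert_dict_to_message` (the function name and return shape here are simplified for illustration, not the actual library code):

```python
def convert_assistant_dict(_dict: dict) -> dict:
    """Mimic the patched assistant branch: keep reasoning_content."""
    content = _dict.get("content", "") or ""
    additional_kwargs: dict = {}
    reasoning_content = _dict.get("reasoning_content", "")
    if reasoning_content:
        additional_kwargs["reasoning_content"] = reasoning_content
    return {"content": content, "additional_kwargs": additional_kwargs}

msg = convert_assistant_dict(
    {"role": "assistant", "content": "4", "reasoning_content": "2 plus 2 is 4."}
)
print(msg["additional_kwargs"]["reasoning_content"])  # → 2 plus 2 is 4.
```

With the same input today, the unpatched parser returns an AIMessage whose `additional_kwargs` is empty, which is exactly the gap the repro above demonstrates.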
Additional Context
- DeepSeek R1 API: returns `reasoning_content` as a top-level field in the message object
- Use case: educational applications, debugging, transparency in AI reasoning
- Impact: affects all LangChain integrations with DeepSeek R1, including LangGraph workflows
System Info
langchain-core==0.2.0
langchain-openai==0.1.8
openai==1.30.1
Use ChatDeepSeek
In addition to using `ChatDeepSeek`, LangChain v1 should handle this better via `.content_blocks`.
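A minimal sketch of the suggested route. The live call (commented out, since it needs `langchain-deepseek` installed and a valid API key) surfaces the reasoning field without patching `langchain-openai`; the helper below shows how reasoning could be pulled out of a `content_blocks`-style list. The block keys (`type`, `reasoning`) are my assumption about the v1 shape, and the dict at the bottom is illustrative:

```python
# Live usage sketch (requires langchain-deepseek and DEEPSEEK_API_KEY):
#
#   from langchain_deepseek import ChatDeepSeek
#   llm = ChatDeepSeek(model="deepseek-reasoner")
#   msg = llm.invoke("What is 2+2?")
#   msg.additional_kwargs.get("reasoning_content")

def extract_reasoning(blocks: list) -> str:
    """Concatenate the text of all reasoning-type content blocks."""
    return "".join(
        b.get("reasoning", "") for b in blocks if b.get("type") == "reasoning"
    )

# Illustrative list mimicking what msg.content_blocks might hold.
blocks = [
    {"type": "reasoning", "reasoning": "2 plus 2 is 4."},
    {"type": "text", "text": "The answer is 4."},
]
print(extract_reasoning(blocks))  # → 2 plus 2 is 4.
```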