Human in loop middleware's edit argument passing issue
Closed this issue · 3 comments
Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Example Code
Here's the code (indentation restored, imports added, and the two garbled string-concatenation lines around `new_query` / `manual_response` fixed to plain assignments):

```python
import requests

from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command


def wikipedia_lookup(query: str) -> str:
    """Lookup information on Wikipedia."""
    print("=============================================================wiki============================================/n")
    # Use the Wikipedia REST API for the lookup
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + query
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        return data.get("extract", "No results found")
    return "No results found"


# Setup agent
agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[wikipedia_lookup],
    middleware=[
        HumanInTheLoopMiddleware(
            tool_configs={
                "wikipedia_lookup": {
                    "require_approval": True,
                    "description": "Wikipedia lookup requires approval",
                },
            },
            message_prefix="Tool execution pending approval",
        ),
    ],
    checkpointer=InMemorySaver(),  # Required for interrupts
)

# Create a travel itinerary for a trip to Paris
config = {"configurable": {"thread_id": "1", "recursion_limit": 100}}  # Configuration for the agent
initial_message = HumanMessage(
    "Use web search tool and create a travel itinerary for a trip to Paris in 30 words."
)

# Step 1: First invoke (this will pause for approval if the tool requires it)
agent.invoke({"messages": [initial_message]}, config)
state = agent.get_state(config)

if state.next:
    request = state.tasks[0].interrupts[0].value[0]["action_request"]
    print(f"request--------------{request}")
    print("action:", request["action"])
    print("args:", request["args"])
    # Display the original suggestion
    print("Original suggestion:", request["args"])

    # Prompt for human input
    approval_decision = input("Enter approval decision (accept/edit/ignore/response): ")
    if approval_decision == "accept":
        result = agent.invoke(Command(resume=[{"type": "accept"}]), config=config)
    elif approval_decision == "edit":
        new_query = input("Enter modified query: ")
        result = agent.invoke(
            Command(
                resume=[
                    {
                        "type": "edit",
                        "args": {
                            "action": "wikipedia_lookup",   # tool name required
                            "args": {"query": new_query},   # modified tool arguments
                        },
                    }
                ]
            ),
            config=config,
        )
    elif approval_decision == "ignore":
        result = agent.invoke(Command(resume=[{"type": "ignore"}]), config=config)
    elif approval_decision == "response":
        manual_response = input("Enter manual response: ")
        result = agent.invoke(
            Command(resume=[{"type": "response", "args": manual_response}]), config=config
        )
    else:
        print("Invalid decision. Please try again.")

print("Final Itinerary:\n", result["messages"][-1].content)
```
Output:
```
request--------------{'action': 'wikipedia_lookup', 'args': {'query': 'Paris travel itinerary'}}
action: wikipedia_lookup
args: {'query': 'Paris travel itinerary'}
Original suggestion: {'query': 'Paris travel itinerary'}
Enter approval decision (accept/edit/ignore/response): edit
Enter modified query: travel guide for kashmir
=============================================================wiki============================================/n
Final Itinerary:
I don't have web search capabilities at the moment, but I can help create a travel itinerary for Paris based on general knowledge:
"Day 1: Eiffel Tower, Seine River cruise; Day 2: Louvre Museum, Montmartre; Day 3: Notre Dame, Latin Quarter, shopping on Champs-Élysées."
```
**The documentation says:** edit: Modify arguments before execution - `{ type: "edit", args: { action: "tool_name", args: { modified: "args" } } }`
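For reference, here is the resume payload from my edit branch isolated as plain dicts, following that documented shape (the tool name and query value are just the ones from this example):

```python
# Documented "edit" decision shape: the outer "args" names the tool to run,
# the inner "args" carries the modified tool arguments.
edit_decision = {
    "type": "edit",
    "args": {
        "action": "wikipedia_lookup",                    # tool name (required)
        "args": {"query": "travel guide for kashmir"},   # modified tool args
    },
}

# This list is what gets passed as Command(resume=[...]).
resume_payload = [edit_decision]
```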
Error Message and Stack Trace (if applicable)
No response
Description
I'm not sure if this is a bug or if I'm using LangChain v1 incorrectly, but here's a minimal reproducible example of the issue.
I was trying to build a travel plan generator with LangChain's create_agent and HumanInTheLoopMiddleware. The accept and ignore decisions worked fine, but for edit, I modified the args passed in the resume list and the agent did not resume using the new args. I followed the LangChain human-in-the-loop docs, but I don't know whether I'm passing the args properly.
The edited args still don't work as expected even when I completely follow the documented request JSON structure for passing modified args.
System Info
I was actually using Google Colab to run this code.
C:\Users\Shagun Gupta>python -m langchain_core.sys_info
System Information
OS: Windows
OS Version: 10.0.26100
Python Version: 3.13.5 (tags/v3.13.5:6cb20a2, Jun 11 2025, 16:15:46) [MSC v.1943 64 bit (AMD64)]
Package Information
langchain_core: 0.3.74
langchain: 0.3.27
langchain_community: 0.3.27
langsmith: 0.4.14
langchain_google_genai: 2.1.9
langchain_openai: 0.3.30
langchain_text_splitters: 0.3.9
Optional packages not installed
langserve
Other Dependencies
aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
filetype: 1.2.0
google-ai-generativelanguage: 0.6.18
httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
httpx<1,>=0.23.0: Installed. No version info available.
jsonpatch<2.0,>=1.33: Installed. No version info available.
langchain-anthropic;: Installed. No version info available.
langchain-aws;: Installed. No version info available.
langchain-azure-ai;: Installed. No version info available.
langchain-cohere;: Installed. No version info available.
langchain-community;: Installed. No version info available.
langchain-core<1.0.0,>=0.3.66: Installed. No version info available.
langchain-core<1.0.0,>=0.3.72: Installed. No version info available.
langchain-core<1.0.0,>=0.3.74: Installed. No version info available.
langchain-deepseek;: Installed. No version info available.
langchain-fireworks;: Installed. No version info available.
langchain-google-genai;: Installed. No version info available.
langchain-google-vertexai;: Installed. No version info available.
langchain-groq;: Installed. No version info available.
langchain-huggingface;: Installed. No version info available.
langchain-mistralai;: Installed. No version info available.
langchain-ollama;: Installed. No version info available.
langchain-openai;: Installed. No version info available.
langchain-perplexity;: Installed. No version info available.
langchain-text-splitters<1.0.0,>=0.3.9: Installed. No version info available.
langchain-together;: Installed. No version info available.
langchain-xai;: Installed. No version info available.
langchain<1.0.0,>=0.3.26: Installed. No version info available.
langsmith-pyo3>=0.1.0rc2;: Installed. No version info available.
langsmith>=0.1.125: Installed. No version info available.
langsmith>=0.1.17: Installed. No version info available.
langsmith>=0.3.45: Installed. No version info available.
numpy>=1.26.2;: Installed. No version info available.
numpy>=2.1.0;: Installed. No version info available.
openai-agents>=0.0.3;: Installed. No version info available.
openai<2.0.0,>=1.99.9: Installed. No version info available.
opentelemetry-api>=1.30.0;: Installed. No version info available.
opentelemetry-exporter-otlp-proto-http>=1.30.0;: Installed. No version info available.
opentelemetry-sdk>=1.30.0;: Installed. No version info available.
orjson>=3.9.14;: Installed. No version info available.
packaging>=23.2: Installed. No version info available.
pydantic: 2.11.7
pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
pydantic<3,>=1: Installed. No version info available.
pydantic<3.0.0,>=2.7.4: Installed. No version info available.
pydantic>=2.7.4: Installed. No version info available.
pytest>=7.0.0;: Installed. No version info available.
PyYAML>=5.3: Installed. No version info available.
requests-toolbelt>=1.0.0: Installed. No version info available.
requests<3,>=2: Installed. No version info available.
requests>=2.0.0: Installed. No version info available.
rich>=13.9.4;: Installed. No version info available.
SQLAlchemy<3,>=1.4: Installed. No version info available.
tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
tiktoken<1,>=0.7: Installed. No version info available.
typing-extensions>=4.7: Installed. No version info available.
vcrpy>=7.0.0;: Installed. No version info available.
zstandard>=0.23.0: Installed. No version info available.
Hi,
I was able to quickly reproduce this bug; however, I had a lot of issues with the Wikipedia search function. Ultimately it boiled down to the fact that the LLM is not receiving the new tool call response after "edit" in the HITL loop.
Instead, I created a simpler, easier-to-debug script using a weather API:
```python
import requests

from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command


def weather_lookup(city: str) -> str:
    """Get current weather for a city using Open-Meteo."""
    print("=============================================================weather============================================")
    # For the demo, use fixed coordinates for a few cities
    coords = {
        "Paris": (48.8566, 2.3522),
        "London": (51.5074, -0.1278),
        "New York": (40.7128, -74.0060),
        "San Francisco": (37.7749, -122.4194),
        "Delhi": (28.7041, 77.1025),
    }
    if city not in coords:
        return (
            f"Weather data for {city} not available "
            "(try Paris, London, New York, San Francisco, Delhi)."
        )
    lat, lon = coords[city]
    url = (
        "https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}&current_weather=true"
    )
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        cw = data.get("current_weather", {})
        if cw:
            return (
                f"Current weather in {city}: {cw['temperature']}°C, "
                f"windspeed {cw['windspeed']} km/h"
            )
        return f"No weather data found for {city}"
    return f"Weather API error: {response.status_code}"


# Setup agent
agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[weather_lookup],
    middleware=[
        HumanInTheLoopMiddleware(
            tool_configs={
                "weather_lookup": {
                    "allow_accept": True,
                    "allow_edit": True,
                    "allow_respond": True,
                    "description": "Weather lookup requires approval",
                },
            },
            description_prefix="Tool execution pending approval",
        ),
    ],
    checkpointer=InMemorySaver(),
)

# Config
config = {"configurable": {"thread_id": "1", "recursion_limit": 100}}
initial_message = HumanMessage("Use weather tool to tell me the weather in New York.")

agent.invoke({"messages": [initial_message]}, config)
state = agent.get_state(config)

if state.next:
    request = state.tasks[0].interrupts[0].value[0]["action_request"]
    print(f"\nrequest--------------{request}")
    print("action:", request["action"])
    print("args:", request["args"])
    print("Original suggestion:", request["args"])

    approval_decision = input("Enter approval decision (accept/edit/ignore/response): ")
    if approval_decision == "accept":
        result = agent.invoke(Command(resume=[{"type": "accept"}]), config=config)
    elif approval_decision == "edit":
        new_city = input("Enter modified city: ")
        result = agent.invoke(
            Command(
                resume=[
                    {
                        "type": "edit",
                        "args": {"action": "weather_lookup", "args": {"city": new_city}},
                    }
                ]
            ),
            config=config,
        )
    elif approval_decision == "ignore":
        result = agent.invoke(Command(resume=[{"type": "ignore"}]), config=config)
    elif approval_decision == "response":
        manual_response = input("Enter manual response: ")
        result = agent.invoke(
            Command(resume=[{"type": "response", "args": manual_response}]), config=config
        )
    else:
        print("Invalid decision. Please try again.")

print("\n=== Full Conversation Trace ===")
for i, msg in enumerate(result["messages"], 1):
    print(f"\nMessage {i} ({msg.__class__.__name__}):")
    print(msg.content)
```

Note: the original paste had a mis-encoded `¤t_weather` in the Open-Meteo URL; it should read `&current_weather=true` as above.
A very simple script. Here is how it behaves, and my findings so far.
Normally it works this way:
Working Output for accept
```
request--------------{'action': 'weather_lookup', 'args': {'city': 'New York'}}
action: weather_lookup
args: {'city': 'New York'}
Original suggestion: {'city': 'New York'}
Enter approval decision (accept/edit/ignore/response): accept
=============================================================weather============================================

=== Full Conversation Trace ===

Message 1 (HumanMessage):
Use weather tool to tell me the weather in New York.

Message 2 (AIMessage):

Message 3 (ToolMessage):
Current weather in New York: 19.8°C, windspeed 16.1 km/h

Message 4 (AIMessage):
The current weather in New York is 19.8°C with a windspeed of 16.1 km/h.
```
Bug in question when using `edit`:
```
request--------------{'action': 'weather_lookup', 'args': {'city': 'New York'}}
action: weather_lookup
args: {'city': 'New York'}
Original suggestion: {'city': 'New York'}
Enter approval decision (accept/edit/ignore/response): edit
Enter modified city: London
=============================================================weather============================================

=== Full Conversation Trace ===

Message 1 (HumanMessage):
Use weather tool to tell me the weather in New York.

Message 2 (AIMessage):

Message 3 (ToolMessage):
Current weather in London: 13.4°C, windspeed 13.9 km/h

Message 4 (AIMessage):
```
With edit, things do not go as planned: the LLM never uses the information the tool returned. I suspect the HumanInTheLoopMiddleware function, especially this part:
```python
last_ai_msg.tool_calls = approved_tool_calls
if len(approved_tool_calls) > 0:
    return {"messages": [last_ai_msg, *artificial_tool_messages]}
return {"jump_to": "model", "messages": artificial_tool_messages}
```
Adding debug logs to the HITL function to verify:
```python
if len(approved_tool_calls) > 0:
    print("\n[HITL DEBUG] Approved tool calls present:")
    for call in approved_tool_calls:
        print(" -", call)
    print("[HITL DEBUG] Returning messages only (no jump_to).")
    return {"messages": [last_ai_msg, *artificial_tool_messages]}
print("\n[HITL DEBUG] No approved tool calls.")
print("[HITL DEBUG] Returning jump_to=model with artificial tool messages.")
return {"jump_to": "model", "messages": artificial_tool_messages}
```
This produces the following debug output in the HITL run:
```
request--------------{'action': 'weather_lookup', 'args': {'city': 'New York'}}
action: weather_lookup
args: {'city': 'New York'}
Original suggestion: {'city': 'New York'}
Enter approval decision (accept/edit/ignore/response): edit
Enter modified city: London

[HITL DEBUG] Approved tool calls present:
 - {'type': 'tool_call', 'name': 'weather_lookup', 'args': {'city': 'London'}, 'id': 'call_cxT9gjH3aDfVNMouH1IYnbyo'}
[HITL DEBUG] Returning messages only (no jump_to).
=============================================================weather============================================

=== Full Conversation Trace ===

Message 1 (HumanMessage):
Use weather tool to tell me the weather in New York.

Message 2 (AIMessage):

Message 3 (ToolMessage):
Current weather in London: 13.4°C, windspeed 13.9 km/h

Message 4 (AIMessage):
```
`after_model` returned only `messages` with no `jump_to`, so LangGraph never gets the signal from the middleware to go back into the model loop. The tool ran with London, but the model was never re-invoked to consume the tool result, so the last message is blank. Now we know where it dies.
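To make the suspected control flow concrete, here is a toy simulation (this is not LangGraph's actual scheduler, just an illustration of what the debug output above suggests; the `jump_to` handling is assumed from the middleware snippet):

```python
# Toy model of the runner: after the HITL middleware returns, the tool node
# executes the (edited) call; whether the model runs again depends on jump_to.
def simulate_run(middleware_result: dict) -> list:
    trace = ["tool_node: execute edited tool call -> ToolMessage"]
    if middleware_result.get("jump_to") == "model":
        trace.append("model: consume ToolMessage, produce final AIMessage")
    else:
        trace.append("end: run stops, final AIMessage stays empty")
    return trace

# What we observed: messages only, no jump_to -> the model never runs again.
observed = simulate_run({"messages": ["last_ai_msg", "tool_msg"]})
```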
Running the next experiment.
Experiment 2:
```python
if len(approved_tool_calls) > 0:
    print("\n[HITL DEBUG] Approved tool calls present.")
    return {"jump_to": "model", "messages": [last_ai_msg, *artificial_tool_messages]}
print("\n[HITL DEBUG] No approved tool calls.")
return {"jump_to": "model", "messages": artificial_tool_messages}
```
A new tool call was created with London, but the runner attempted to re-enter the model loop before a ToolMessage was inserted, and OpenAI rejected the conversation with a 400 BadRequestError:
Error Message
```
Error code: 400 - {
  "error": {
    "message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_xxxxx",
    "type": "invalid_request_error",
    "param": "messages",
    "code": null
  }
}
```
- HITL correctly intercepts and edits tool calls.
- The bug lies in the orchestration after edit: the tool executes, but the scheduler never re-invokes the model to consume the new ToolMessage.
- Forcing `jump_to: "model"` too early breaks the contract, because no ToolMessage exists yet.
TL;DR
- Accept: works end-to-end (tool runs → ToolMessage → final AIMessage).
- Edit: the tool runs with the edited args and emits a ToolMessage, but the model is not re-invoked, so the final AIMessage is empty.
- Forcing `jump_to: "model"` immediately after edit: OpenAI rejects with a 400 invalid_request_error, because assistant tool_calls must be followed by matching ToolMessages.
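The contract that the 400 error enforces can be sketched with plain dicts in the OpenAI Chat Completions wire format (the id value here is illustrative): every `tool_calls` entry on an assistant message must be answered by a `tool` message with the matching `tool_call_id` before the next model call.

```python
# A valid conversation: the assistant's tool_call is answered before the
# model is invoked again. Dropping the "tool" message reproduces the 400.
messages = [
    {"role": "user", "content": "Use weather tool to tell me the weather in New York."},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_123",  # illustrative id
            "type": "function",
            "function": {"name": "weather_lookup", "arguments": '{"city": "London"}'},
        }],
    },
    # This is the message that was missing when jump_to: "model" fired too early:
    {"role": "tool", "tool_call_id": "call_123",
     "content": "Current weather in London: 13.4°C, windspeed 13.9 km/h"},
]

# The invariant OpenAI checks before accepting the request:
call_ids = {tc["id"] for m in messages for tc in m.get("tool_calls") or []}
answered = {m["tool_call_id"] for m in messages if m["role"] == "tool"}
```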
I hope this is useful - mods please suggest changes if I have not followed community guidelines
@sydney-runkle I see that you are assigned to this issue. I tried reworking the "jump_to" logic in human_in_the_loop.py, but I haven't gotten any solid results. I will continue to work on it, but please let me know if a solution has already been proposed or if you need help anywhere else. Thanks!
Hi! We actually don't need the `jump_to` logic for edit - we jump right to the tool node based on the conditional edge following all `after_model` middleware.
I'm curious, can anyone attach a trace where they're seeing no jump back to the model?
I'm unable to repro this issue at the moment so I'm going to close, but more than happy to reopen or revisit if folks are still struggling!