redis/agent-memory-server

MCP stdio: search_long_term_memory() missing required positional argument 'background_tasks'

Environment

  • agent-memory-server version: 0.12.3
  • agent-memory-client version: 0.13.0
  • Python version: 3.12
  • OS: Ubuntu 24.04 LTS
  • Transport: MCP stdio over SSH
  • Deployment: Systemd service (REST API on port 8765)

Problem Description

The MCP stdio interface's search_long_term_memory and create_long_term_memories tools fail with a missing positional argument error, while the equivalent REST API endpoints work correctly.

Error Message

2025-11-08 01:05:20,520 agent_memory_server.mcp ERROR Error in search_long_term_memory tool: search_long_term_memory() missing 1 required positional argument: 'background_tasks'

What Works

✅ REST API: POST http://localhost:8765/v1/long-term-memory/search - Works perfectly
✅ REST API: POST http://localhost:8765/v1/long-term-memory/ - Works perfectly
✅ MCP initialization: ListToolsRequest, ListPromptsRequest, ListResourcesRequest - All succeed

What Fails

❌ MCP: search_long_term_memory tool call - Fails with background_tasks error
❌ MCP: create_long_term_memories tool call - Fails with background_tasks error

Steps to Reproduce

  1. Start agent-memory MCP server via stdio:
cd /opt/rams
source credentials/env.systemd  # Sets OPENAI_API_KEY and REDIS_URL
venv/bin/agent-memory mcp --no-worker
  2. Connect an MCP client (e.g., Claude Code) via stdio transport

  3. Call the search_long_term_memory tool with any parameters

  4. Observe the error in the stderr logs

Debugging Performed

Attempted Fix #1: --no-worker Flag

Based on web research, we tried adding the --no-worker flag to run tasks inline instead of using a background worker.

Result: Same error persists

Attempted Fix #2: Verify Environment

Confirmed both required environment variables are set:

OPENAI_API_KEY=sk-proj-...
REDIS_URL=redis://:PASSWORD@localhost:6380

Attempted Fix #3: Check Version Mismatch

Noticed client/server version mismatch:

  • agent-memory-client: 0.13.0
  • agent-memory-server: 0.12.3

Attempted an upgrade, but 0.12.3 is the latest available server version.

Evidence from Logs

Full debug log showing the error:

++ date
+ echo '=== RAMS MCP Debug Session Sat Nov  8 01:05:00 AM UTC 2025 ==='
+ cd /opt/rams
+ set -a
+ source credentials/env.systemd
+ set +a
+ exec venv/bin/agent-memory mcp --no-worker
2025-11-08 01:05:00,011 mcp.server.lowlevel.server INFO Processing request of type ListToolsRequest
2025-11-08 01:05:00,013 mcp.server.lowlevel.server INFO Processing request of type ListPromptsRequest
2025-11-08 01:05:00,014 mcp.server.lowlevel.server INFO Processing request of type ListResourcesRequest
2025-11-08 01:05:20,520 mcp.server.lowlevel.server INFO Processing request of type CallToolRequest
2025-11-08 01:05:20,520 agent_memory_server.mcp ERROR Error in search_long_term_memory tool: search_long_term_memory() missing 1 required positional argument: 'background_tasks'

Root Cause Analysis

The MCP wrapper functions in agent_memory_server/mcp.py are calling the FastAPI endpoint functions directly, but FastAPI's dependency injection (which provides background_tasks: BackgroundTasks) doesn't work outside the HTTP request context.

Why REST API works: FastAPI automatically injects BackgroundTasks dependency
Why MCP stdio fails: No FastAPI request context, so dependency injection fails
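
To make the failure concrete, here's a minimal sketch (simplified stand-ins, not the project's actual signatures) of how a directly called FastAPI endpoint function loses its injected dependency:

from fastapi import BackgroundTasks

# Simplified stand-in for the endpoint function in agent_memory_server/api.py
async def search_long_term_memory(payload: dict, background_tasks: BackgroundTasks):
    background_tasks.add_task(print, "deferred indexing work would go here")
    return {"memories": []}

# Over HTTP, FastAPI constructs and injects the BackgroundTasks instance.
# Called directly, as the MCP wrapper does, nothing injects it:
#
#   await search_long_term_memory({"text": "query"})
#   TypeError: search_long_term_memory() missing 1 required positional
#   argument: 'background_tasks'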

Web Research Findings

Found similar issues reported for other MCP servers.

Impact

Severity: High - MCP interface is completely non-functional for long-term memory operations

Workaround: Use the REST API directly via HTTP calls (which defeats the purpose of MCP integration)

Suggested Fix

The MCP wrapper functions should either:

  1. Create their own BackgroundTasks instance when calling the API functions (sketched after this list)
  2. Refactor the core functions to not require BackgroundTasks as a parameter
  3. Use --no-worker mode properly to run tasks inline without dependency injection
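
A minimal sketch of option 1 (hypothetical glue code; core_search_long_term_memory stands in for the endpoint function imported in mcp.py): build a BackgroundTasks instance by hand, and because no HTTP response cycle will run it, await it explicitly afterwards:

from fastapi import BackgroundTasks

async def mcp_search_wrapper(payload):
    # Hand-built instance replaces the one FastAPI would inject
    bg = BackgroundTasks()
    results = await core_search_long_term_memory(payload, background_tasks=bg)
    # Starlette normally runs queued tasks after the HTTP response is sent;
    # in stdio mode we must run them ourselves or they never execute
    await bg()
    return results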

Additional Context

We're using this for Claude Code integration to provide long-term memory across sessions. The REST API works beautifully, but MCP stdio integration is essential for seamless AI agent memory.

Happy to provide additional logs, code snippets, or testing assistance to help resolve this!


Testing Environment Available: We have a full test environment and can validate any fixes quickly.

✅ SOLUTION FOUND - Root Cause + Complete Fix

After extensive debugging, we've identified TWO separate bugs that combine to make MCP stdio completely non-functional for memory operations:


Bug #1: Missing background_tasks Parameter (Partial Fix)

Root Cause: Lines 524-526 of agent_memory_server/mcp.py, in search_long_term_memory(), don't pass the background_tasks parameter to the core function.

Fix: Add the parameter to match the pattern used in create_long_term_memory()

# BEFORE (Line 524-526)
results = await core_search_long_term_memory(
    payload, optimize_query=optimize_query
)

# AFTER
results = await core_search_long_term_memory(
    payload,
    background_tasks=get_background_tasks(),
    optimize_query=optimize_query
)

Status: This fixes the immediate error, but memories still don't get indexed! ⚠️


Bug #2: stdio Mode Hardcodes use_docket = False (Critical)

Root Cause: agent_memory_server/cli.py line 155 hardcodes settings.use_docket = False in stdio mode, ignoring the USE_DOCKET environment variable.

# cli.py line 153-158 (CURRENT CODE - BROKEN)
if mode == "stdio":
    # Don't run a task worker in stdio mode by default
    settings.use_docket = False
elif no_worker:
    # Use --no-worker flag for SSE mode
    settings.use_docket = False

Why This Breaks Everything:

  1. Without use_docket = True, background tasks go to FastAPI's BackgroundTasks
  2. FastAPI background tasks never execute in stdio mode (no HTTP request cycle; see the demonstration below)
  3. Memories get created with {"status":"ok"} but are never indexed
  4. Searches always return empty results
  5. Even with a separate task-worker service running, tasks never reach the Docket queue
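
The silent failure in point 2 is easy to demonstrate in isolation: Starlette's BackgroundTasks is just a task queue that something must await, and in stdio mode nothing ever does (minimal sketch):

import asyncio
from fastapi import BackgroundTasks

async def main():
    bg = BackgroundTasks()
    bg.add_task(print, "indexing memory...")  # queued, never executed
    # Over HTTP, Starlette awaits bg() after sending the response.
    # In stdio mode nothing awaits it, so the task is silently dropped.

asyncio.run(main())  # prints nothing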

Complete Fix

Step 1: Patch cli.py to Respect USE_DOCKET Environment Variable

# cli.py line 153-158 (FIXED)
if mode == "stdio":
    # Respect USE_DOCKET environment variable if set, otherwise disable
    if os.getenv('USE_DOCKET', '').lower() not in ('true', '1', 'yes'):
        settings.use_docket = False
elif no_worker:
    # Use --no-worker flag for SSE mode
    settings.use_docket = False

Also add: import os at the top of cli.py if not already present.

Step 2: Set Environment Variable

# In your environment file or systemd EnvironmentFile
USE_DOCKET=true

Step 3: Deploy Separate task-worker Service

# /etc/systemd/system/rams-worker.service
[Unit]
Description=RAMS Docket Task Worker
After=network.target rams.service

[Service]
Type=simple
User=youruser
WorkingDirectory=/path/to/rams
EnvironmentFile=/path/to/rams/credentials/env
ExecStart=/path/to/rams/venv/bin/agent-memory mcp task-worker
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
Then reload systemd and enable the worker:

sudo systemctl daemon-reload
sudo systemctl enable rams-worker.service
sudo systemctl start rams-worker.service

Step 4: Start MCP Server WITHOUT --no-worker

cd /path/to/rams
source credentials/env  # Must include USE_DOCKET=true
venv/bin/agent-memory mcp  # Remove --no-worker flag
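
For reference, a sketch of how the stdio server might be registered in a Claude Code .mcp.json (the server name, wrapper script path, and env values are placeholders for our setup; run-mcp.sh is a hypothetical script that sources credentials and execs venv/bin/agent-memory mcp):

{
  "mcpServers": {
    "agent-memory": {
      "command": "/opt/rams/run-mcp.sh",
      "args": [],
      "env": {
        "USE_DOCKET": "true"
      }
    }
  }
}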

Verification

After applying the fix:

# 1. Create memory
curl -s "http://localhost:8765/v1/long-term-memory/" \
  -X POST -H "Content-Type: application/json" \
  -d '{"memories":[{
    "id":"test_verification",
    "text":"Test memory",
    "topics":["test"],
    "memory_type":"semantic",
    "user_id":"test"
  }]}' | jq '.'

# Expected: {"status": "ok"}

# 2. Wait 30 seconds for embedding + indexing

# 3. Search
curl -s "http://localhost:8765/v1/long-term-memory/search" \
  -X POST -H "Content-Type: application/json" \
  -d '{"text":"test","user_id":{"eq":"test"},"limit":5}' | \
  jq '.memories[].text'

# Expected: Memory text returned ✅
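
If you'd rather script the check than wait a fixed 30 seconds, here's a small polling sketch against the same REST endpoints (endpoint paths and payloads taken from the curl calls above; the timeout is an assumption):

import time
import requests

BASE = "http://localhost:8765/v1/long-term-memory"

def wait_for_memory(query: str, user_id: str, timeout: float = 60.0) -> bool:
    # Poll the search endpoint until indexing catches up or we time out
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.post(
            f"{BASE}/search",
            json={"text": query, "user_id": {"eq": user_id}, "limit": 5},
            timeout=10,
        )
        resp.raise_for_status()
        if resp.json().get("memories"):
            return True
        time.sleep(2)
    return False

print("indexed" if wait_for_memory("test", "test") else "still empty")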

Check that the worker is processing tasks:

journalctl -u rams-worker.service -f
# Should see: "Indexed memory" messages

Why The Original --no-worker Assumption Was Wrong

The comment in cli.py says: "Don't run a task worker in stdio mode by default"

This was probably intended to prevent stdio mode from requiring a worker process. But it breaks the use case where you WANT stdio + external worker (exactly our deployment scenario).

The fix allows both:

  • Default behavior (USE_DOCKET not set): stdio works without worker (same as before)
  • External worker scenario (USE_DOCKET=true): stdio uses Docket queue with separate worker

Impact

Before Fix:

  • MCP stdio mode: Memories created but never indexed → always empty searches
  • Only workaround: Use HTTP API directly (defeats MCP purpose)

After Fix:

  • MCP stdio mode: Full end-to-end memory storage, indexing, and retrieval ✅
  • Suitable for production deployments with external workers

Testing

We've tested this fix in production for several hours with:

  • OpenAI embeddings (text-embedding-3-small)
  • Redis Stack 8.2 with vector search
  • Systemd services for both API and worker
  • SSH stdio transport for MCP

Result: 100% functional. Memories are created, indexed, and searchable via MCP.


Recommendation

  1. Fix mcp.py - Add background_tasks parameter to search function
  2. Fix cli.py - Respect USE_DOCKET environment variable in stdio mode
  3. Document - Add note about USE_DOCKET for stdio + external worker deployments
  4. Consider - Making USE_DOCKET=true the default for stdio mode when a worker is available

We're happy to submit a PR if that would be helpful!

@abrookins Noticed this issue regarding Docket, and it may be helpful to know that we've got support for an in-memory docket as of 0.12.0! The URL is memory:// (or memory://<any-string> to emulate multiple separate docket servers).