modelcontextprotocol/java-sdk

Support for accessing and customizing request handlers in McpServerSession

Opened this issue · 3 comments

Expected Behavior

As a developer extending the MCP Java SDK, I should be able to access and customize the request handlers after the
McpServerSession is initialized. This would enable dynamic replacement of default handlers with custom
implementations.

// Example use case: Replacing default handlers with custom implementations
McpServerSession session = // ... initialized session
Map<String, McpRequestHandler<?>> handlers = session.getRequestHandlers();
handlers.put(McpSchema.METHOD_TOOLS_LIST, new MyCustomListToolsRequestHandler());
handlers.put(McpSchema.METHOD_TOOLS_CALL, new MyCustomToolsCallRequestHandler());

Current Behavior

Currently, the McpServerSession class does not provide any way to access or modify the requestHandlers map after
initialization. The request handlers are set in the constructor and remain private with no getter method.

Without a proper getter method, developers are forced to use reflection-based workarounds. For example, using
Spring's ReflectionUtils:

// Workaround: Using reflection to access private requestHandlers field
Field field = ReflectionUtils.findField(McpServerSession.class, "requestHandlers");
ReflectionUtils.makeAccessible(field);
@SuppressWarnings("unchecked")
Map<String, McpRequestHandler<?>> handlers =
    (Map<String, McpRequestHandler<?>>) ReflectionUtils.getField(field, session);

// Replace with custom handlers
handlers.put(McpSchema.METHOD_TOOLS_LIST, new MyCustomListToolsRequestHandler());
handlers.put(McpSchema.METHOD_TOOLS_CALL, new MyCustomToolsCallRequestHandler());

Or to replace the entire map:

// Workaround: Using reflection to replace the requestHandlers field
Field field = ReflectionUtils.findField(McpServerSession.class, "requestHandlers");
ReflectionUtils.makeAccessible(field);

Map<String, McpRequestHandler<?>> customHandlers = new ConcurrentHashMap<>();
customHandlers.put(McpSchema.METHOD_TOOLS_LIST, new MyCustomListToolsRequestHandler());
customHandlers.put(McpSchema.METHOD_TOOLS_CALL, new MyCustomToolsCallRequestHandler());
// ... add other default handlers

ReflectionUtils.setField(field, session, customHandlers);

This approach is fragile, breaks encapsulation, and makes the code harder to maintain.

Context

How has this issue affected you?
When building custom MCP server implementations, I need to replace default tool handlers with custom logic.
Currently, I'm forced to use reflection hacks to access and modify the private requestHandlers field, which is not
a recommended practice.

What are you trying to accomplish?
I want to implement custom McpRequestHandler instances for standard MCP methods (like tools/list and tools/call)
and register them with an existing McpServerSession without using reflection.

My proposed solution:

  1. Make the requestHandlers field a ConcurrentHashMap (for thread-safety during modifications)
  2. Add a public getter getRequestHandlers() to expose the handlers map

This allows for safe runtime modification while maintaining backward compatibility and eliminating the need for
reflection-based workarounds.
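
To make the proposal concrete, here is a minimal sketch of what the change could look like. It is only an illustration of the two points above (the constructor signature and the rest of the session logic are elided), not the actual SDK implementation:

  // Hypothetical sketch of the proposed McpServerSession change (illustration only, not the actual SDK code)
  public class McpServerSession {

      // 1. Copy the handlers into a ConcurrentHashMap so entries can be replaced safely at runtime
      private final Map<String, McpRequestHandler<?>> requestHandlers;

      public McpServerSession(Map<String, McpRequestHandler<?>> requestHandlers /* , other constructor args elided */) {
          this.requestHandlers = new ConcurrentHashMap<>(requestHandlers);
          // ... rest of the existing constructor unchanged
      }

      // 2. Expose the handlers so callers can register or replace them without reflection
      public Map<String, McpRequestHandler<?>> getRequestHandlers() {
          return this.requestHandlers;
      }
  }

If exposing the whole map is considered too risky, a narrower method such as putRequestHandler(String method, McpRequestHandler<?> handler) would also remove the need for reflection while keeping the map itself private.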

What other alternatives have you considered?

  • Creating a wrapper class around McpServerSession - not feasible due to private field access
  • Rebuilding the session with new handlers - inefficient and loses session state
  • Using reflection (current workaround) - fragile and breaks encapsulation

Are you aware of any workarounds?
Yes, using Spring's ReflectionUtils to access and modify the private field (as shown above), but this is not a
proper solution.

Hey, thanks for describing the issue you're facing. We have a similar request in #578, which would be a bit higher level but would allow controlling the resources, prompts, and tools via a custom implementation of a repository of these capabilities. The description you provide focuses on a proposed solution rather than on the problem you are trying to solve. Can you please describe the need behind this suggestion? I do believe we must ensure the internal consistency of how we dispatch the handler code and keep edge-case handling consistent - exposing internal state to runtime modification is a huge risk. Please, let's take a step back and focus on the business logic that you'd like to incorporate. These concerns can be addressed in various ways, including #578 for part of them. Thank you.

Thank you for your feedback and for pointing me to #578. I appreciate your perspective on focusing on the business problem rather than jumping to implementation details.

Let me clarify the real-world use case I'm trying to solve:

Business Problem

I'm building an MCP server that dynamically exposes Dify workflows as MCP tools. Each workflow endpoint needs to:

  1. Dynamically discover tool schemas - When a client requests tools/list, the server needs to call Dify's API to fetch the workflow's parameter schema and return it as an MCP tool definition
  2. Route tool calls to external workflows - When a client invokes tools/call, the server needs to forward the request to the appropriate Dify workflow endpoint and stream back the results

For background on Dify workflows, see: https://dify.ai/blog/dify-ai-workflow

The challenge is that the tool definitions and implementations aren't known at server initialization time - they depend on runtime context (specifically, which workflow ID is requested via URL
path parameters like /mcp/{workflowId}).

Current Implementation (Using Reflection)

I've successfully implemented this functionality, but it requires reflection to work around the current SDK limitations. Here's how:

  1. Custom Transport Provider

I copied WebMvcSseServerTransportProvider.java to my project as QuickWebMvcSseServerTransportProvider.java and modified the handleSseConnection method to inject custom handlers:



  private ToolRequestHandlerFactory toolRequestHandlerFactory;

  protected ServerResponse handleSseConnection(ServerRequest request) {
      if (isClosing) {
          return ServerResponse.status(HttpStatus.SERVICE_UNAVAILABLE).body("Server is shutting down");
      }

      String sessionId = UUID.randomUUID().toString();
      logger.debug("Creating new SSE connection for session: {}", sessionId);
      Map<String, String> requestPathVariables = request.pathVariables();
      MultiValueMap<String, String> requestParams = request.params();
      // .......

      if (toolRequestHandlerFactory != null) {
          logger.info("Replacing default handlers with custom implementations: {}", toolRequestHandlerFactory);

          // Use reflection to access the private requestHandlers field
          Field requestHandlersField = ReflectionUtils.findField(McpServerSession.class, "requestHandlers");
          requestHandlersField.setAccessible(true);
          Object requestHandlers = ReflectionUtils.getField(requestHandlersField, session);

          if (requestHandlers instanceof Map rhMap) {
              // Convert to ConcurrentHashMap for session-level isolation
              if (!(requestHandlers instanceof ConcurrentHashMap)) {
                  logger.info("Converting global map to session-level ConcurrentHashMap");
                  rhMap = new ConcurrentHashMap<>(rhMap);
                  ReflectionUtils.setField(requestHandlersField, session, rhMap);
              }

              // Replace the tools/call handler with a context-aware wrapper
              Object toolCallRequestHandlerExist = rhMap.get(McpSchema.METHOD_TOOLS_CALL);
              if (toolCallRequestHandlerExist instanceof McpRequestHandler tcrhe) {
                  AbstractRequestHandler<?> toolCallRequestHandler = toolRequestHandlerFactory.createToolCallRequestHandler();
                  toolCallRequestHandler.setRequestParams(requestParams);
                  toolCallRequestHandler.setRequestPathVariables(requestPathVariables);
                  toolCallRequestHandler.setDelegate(tcrhe);
                  rhMap.put(McpSchema.METHOD_TOOLS_CALL, toolCallRequestHandler);
              }

              // Replace the tools/list handler with a context-aware wrapper
              Object toolListRequestHandlerExist = rhMap.get(McpSchema.METHOD_TOOLS_LIST);
              if (toolListRequestHandlerExist instanceof McpRequestHandler tlrhe) {
                  AbstractRequestHandler<?> toolListRequestHandler = toolRequestHandlerFactory.createToolListRequestHandler();
                  toolListRequestHandler.setRequestParams(requestParams);
                  toolListRequestHandler.setRequestPathVariables(requestPathVariables);
                  toolListRequestHandler.setDelegate(tlrhe);
                  rhMap.put(McpSchema.METHOD_TOOLS_LIST, toolListRequestHandler);
              }
          }
      }
      // ... remainder of handleSseConnection unchanged
  }
  2. Bridge Pattern for Handler Wrapping

I created an AbstractRequestHandler base class that wraps the original handlers while providing access to HTTP request context:

  public abstract class AbstractRequestHandler<T> implements McpRequestHandler<T> {
      static final Logger LOG = LoggerFactory.getLogger(AbstractRequestHandler.class);

      private Map<String, String> requestPathVariables;
      private MultiValueMap<String, String> requestParams;
      private McpRequestHandler<T> delegate;

      public Map<String, String> getRequestPathVariables() {
          return requestPathVariables;
      }

      public void setRequestPathVariables(Map<String, String> requestPathVariables) {
          if (requestPathVariables != null) {
              this.requestPathVariables = new LinkedHashMap<>(requestPathVariables);
          }
      }

      public MultiValueMap<String, String> getRequestParams() {
          return requestParams;
      }

      public void setRequestParams(MultiValueMap<String, String> requestParams) {
          if (requestParams != null) {
              MultiValueMap<String, String> requestParamsTmp = new LinkedMultiValueMap<>();
              requestParamsTmp.addAll(requestParams);
              this.requestParams = requestParamsTmp;
          }
      }

      public void setDelegate(McpRequestHandler<T> delegate) {
          this.delegate = delegate;
      }

      public McpRequestHandler<T> getDelegate() {
          return delegate;
      }

      @Override
      public Mono<T> handle(McpAsyncServerExchange exchange, Object params) {
          LOG.info("Handling request - Exchange: {}, Params: {}", exchange, params);

          return delegate.handle(exchange, params)
              .doOnSuccess(response -> LOG.info("Request handled successfully - Response: {}", response))
              .doOnError(error -> LOG.error("Request handling failed - Error: ", error));
      }

      public interface ToolRequestHandlerFactory {
          AbstractRequestHandler<?> createToolListRequestHandler();
          AbstractRequestHandler<?> createToolCallRequestHandler();
      }
  }
  3. Dynamic Tool List Handler

This handler queries Dify's API to dynamically build the tool schema based on the workflow ID from the URL path:

  public class ToolListForDifyWorkflowRequestHandler extends AbstractRequestHandler<McpSchema.ListToolsResult> {
      static final Logger LOG = LoggerFactory.getLogger(ToolListForDifyWorkflowRequestHandler.class);

      @Override
      public Mono<McpSchema.ListToolsResult> handle(McpAsyncServerExchange exchange, Object params) {
          LOG.info("Handling tools/list request - Exchange: {}, Params: {}", exchange, params);

          List<Tool> tools = new ArrayList<>();

          // Extract workflow ID from URL path variable
          String difyWorkFlowId = getRequestPathVariables().get("workflowId");

          try {
              // Fetch Dify workflow configuration
              DifyApiInfo apiInfo = DifyWorkFlowUtil.getDifyApiInfo(difyWorkFlowId);
              String apiKey = apiInfo.getDifyApiKey();
              String baseUrl = apiInfo.getDifyApiUrl();

              // Get workflow info
              DifyCallConfig difyCallConfig4Info = new DifyCallConfig();
              difyCallConfig4Info.setApiKey(apiKey);
              difyCallConfig4Info.setUrl(DifyCallConfig.getDifyUrl(baseUrl, DifyCallConfig.URL_V1_INFO));
              DifyInfoResponse difyInfoResponse = DifyInfoUtil.difyInfo(difyCallConfig4Info);

              // Get workflow parameter schema
              DifyCallConfig difyCallConfig = new DifyCallConfig();
              difyCallConfig.setApiKey(apiKey);
              difyCallConfig.setUrl(DifyCallConfig.getDifyUrl(baseUrl, DifyCallConfig.URI_V1_PARAMETERS));
              DifyWorkflowParameterInfoResponse paramResponse = DifyWorkFlowUtil.getDifyWorkflowParameterInfoResponse(difyCallConfig);

              // Convert Dify schema to MCP JsonSchema
              JsonSchema inputSchema = McpJsonSchemaUtil.toJsonSchema(paramResponse.getUserInputForm());

              // Build MCP tool definition
              Tool tool = Tool.builder()
                  .name("difyWorkFlowCall_" + difyWorkFlowId)
                  .title(difyInfoResponse.getData().getName())
                  .description(difyInfoResponse.getData().getDescription())
                  .inputSchema(inputSchema)
                  .build();

              tools.add(tool);
              return Mono.just(new ListToolsResult(tools, null));

          } catch (Exception e) {
              LOG.error("Failed to fetch Dify workflow schema", e);
              return Mono.error(e);
          }
      }
  }
  4. Dynamic Tool Call Handler

This handler forwards tool invocations to the appropriate Dify workflow:

  public class ToolCallForDifyWorkflowRequestHandler extends AbstractRequestHandler<McpSchema.CallToolResult> {
      static final Logger LOG = LoggerFactory.getLogger(ToolCallForDifyWorkflowRequestHandler.class);

      @Override
      public Mono<CallToolResult> handle(McpAsyncServerExchange exchange, Object params) {
          McpSchema.CallToolRequest callToolRequest = JSONUtil.convertValue(params,
              new TypeReference<McpSchema.CallToolRequest>() {});

          // Extract workflow ID from URL path variable
          String difyWorkFlowId = getRequestPathVariables().get("workflowId");

          try {
              // Fetch Dify workflow configuration
              DifyApiInfo apiInfo = DifyWorkFlowUtil.getDifyApiInfo(difyWorkFlowId);
              String apiKey = apiInfo.getDifyApiKey();
              String baseUrl = apiInfo.getDifyApiUrl();

              // Configure Dify API call
              DifyCallConfig difyCallConfig = new DifyCallConfig();
              difyCallConfig.setApiKey(apiKey);
              difyCallConfig.setUrl(DifyCallConfig.getDifyUrl(baseUrl, DifyCallConfig.URI_V1_WORKFLOWS_RUN));

              String callUserId = "user_id_from_context";
              Map<String, Object> arguments = new TreeMap<>(callToolRequest.arguments());

              // Execute Dify workflow
              Object objectResp = DifyWorkFlowUtil.runDifyWorkflow(difyCallConfig, callUserId, arguments, Object.class);

              // Convert response to MCP format
              List<Content> content = new ArrayList<>();
              TextContent textContent = new TextContent(JSONUtil.toDenseJsonStr(objectResp));
              content.add(textContent);

              return Mono.just(new CallToolResult(content, false));

          } catch (Exception e) {
              LOG.error("Failed to execute Dify workflow", e);
              return Mono.error(e);
          }
      }
  }
  5. Spring Bean Configuration

The transport provider is configured with dynamic URL paths:

  @Bean
  @Qualifier("difyWorkFlowWebMvcSseServerTransportProvider")
  public QuickWebMvcSseServerTransportProvider difyWorkFlowWebMvcSseServerTransportProvider(
          DifyWorkflowToolRequestHandlerFactory difyWorkflowToolRequestHandlerFactory) {

      // Base URL with path variable for workflow ID
      String baseUrl = "/difyWorkFlowmcp/{workflowId}";
      String messageEndpoint = MCPServerConfigurationUtil.DEFAULT_MESSAGE_ENDPOINT;
      String sseEndpoint = MCPServerConfigurationUtil.DEFAULT_SSE_ENDPOINT;

      QuickWebMvcSseServerTransportProvider provider = new QuickWebMvcSseServerTransportProvider(
          JSONUtil.getFullMapperForHttpMessageConverter(),
          baseUrl, messageEndpoint, sseEndpoint
      );

      provider.setToolRequestHandlerFactory(difyWorkflowToolRequestHandlerFactory);

      return provider;
  }

@Service
public class DifyWorkflowToolRequestHandlerFactory implements ToolRequestHandlerFactory {

    private static final Logger LOG = LoggerFactory.getLogger(DifyWorkflowToolRequestHandlerFactory.class);
    @Override
    public AbstractRequestHandler<?> createToolListRequestHandler() {
        return new ToolListForDifyWorkflowRequestHandler();
    }

    @Override
    public AbstractRequestHandler<?> createToolCallRequestHandler() {
        return new ToolCallForDifyWorkflowRequestHandler();
    }

}
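
For completeness, the provider's endpoints still need to be registered with Spring WebMVC. Assuming the copied class keeps the getRouterFunction() method of the SDK's WebMvcSseServerTransportProvider it was based on, the wiring would look roughly like this (the bean name is illustrative):

  // Assumed wiring: relies on the copied provider keeping the SDK's getRouterFunction() method
  @Bean
  public RouterFunction<ServerResponse> difyWorkFlowMcpRouterFunction(
          @Qualifier("difyWorkFlowWebMvcSseServerTransportProvider")
          QuickWebMvcSseServerTransportProvider provider) {
      // Exposes the SSE endpoint (including the {workflowId} path variable) and the message endpoint
      return provider.getRouterFunction();
  }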

Why This Matters

This implementation enables a single MCP server instance to dynamically serve hundreds of different Dify workflows, each exposed as a separate tool with its own schema. For example:

  • GET /difyWorkFlowmcp/workflow-123/sse exposes workflow-123's tools
  • GET /difyWorkFlowmcp/workflow-456/sse exposes workflow-456's tools

Without access to request context (path variables, query parameters) in handlers, this pattern would be impossible to implement.

Questions About #578

Looking at the ToolsRepository enhancement mentioned in #578, I'd like to understand:

  1. Would it support dynamic tool discovery based on runtime context? (e.g., different tools for different URL paths)
  2. Can handlers access HTTP request context? (path variables, query parameters needed to determine which external workflow to call)
  3. Can tools be registered/modified after session initialization? (for scenarios where tool availability changes dynamically)

If the ToolsRepository approach can address these requirements, I'd be happy to migrate to that pattern. Otherwise, I'm open to discussing alternative architectural approaches that align with
the SDK's design principles while enabling this kind of dynamic, context-aware tool serving.
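
To make these requirements concrete, here is a purely illustrative sketch of the kind of context-aware repository that would cover all three questions. The names (ContextAwareToolsRepository, RequestContext, addTool) are mine, not the API proposed in #578:

  // Illustrative sketch only, not the #578 ToolsRepository API
  public interface ContextAwareToolsRepository {

      // Question 1: tool discovery driven by per-request context (e.g. the {workflowId} path variable)
      Mono<List<McpSchema.Tool>> listTools(RequestContext context);

      // Question 2: tool invocation with access to the same HTTP request context
      Mono<McpSchema.CallToolResult> callTool(RequestContext context, McpSchema.CallToolRequest request);

      // Question 3: registration after session initialization, for tool sets that change at runtime
      void addTool(McpSchema.Tool tool,
              BiFunction<McpAsyncServerExchange, McpSchema.CallToolRequest, Mono<McpSchema.CallToolResult>> handler);

      // Minimal carrier for the HTTP request context (path variables and query parameters)
      record RequestContext(Map<String, String> pathVariables, MultiValueMap<String, String> queryParams) {
      }
  }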

Thank you for taking the time to ensure we get this right architecturally!

Thanks for adding more context! Let's try discussing this in #578. My view is that we should enable the flow you just described and give the repository enough contextual insight to make the right inferences and handle the dynamic nature of the tools as well.