@samchenatti samchenatti commented Oct 25, 2025

Context

First of all, congratulations on the excellent work!

This PR is a proposal for how users can define a tool in the MCP server that receives parameters which the LLM should not be aware of.

The pseudo-code below illustrates how it works:

# client.py
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient(
    ...
)

tools = await client.get_tools()
agent = create_react_agent("openai:gpt-4.1", tools)
response = await agent.ainvoke(
    {
        "messages": "What is my dog's name?",
        "user_id": "user_0",
    }
)
# server.py
from typing import Annotated

from langchain_core.tools import InjectedToolArg, tool as lc_tool
from langchain_mcp_adapters.tools import to_fastmcp
from mcp.server.fastmcp import FastMCP

...

data = {
    "user_0": "Spike"
}

@lc_tool
async def get_user_pet_name(user_id: Annotated[str, InjectedToolArg]) -> str:
    """Returns the user's pet name"""
    return data[user_id]

fastmcp_tool = to_fastmcp(get_user_pet_name)
mcp = FastMCP("Pets", tools=[fastmcp_tool])
mcp.run(transport="stdio")
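The core idea above is that the injected parameter is stripped from the schema the LLM sees while remaining available at invocation time. A minimal sketch of that split, using plain dictionaries rather than the adapter's actual internals (the helper name `split_schema` is hypothetical):

```python
# Sketch (not the adapter's real code): given a tool's full JSON schema and the
# set of injected parameter names, split it into the LLM-visible inputSchema
# and the hidden schema that would be advertised under the tool's meta.

def split_schema(full_schema: dict, injected: set[str]) -> tuple[dict, dict]:
    """Return (llm_visible_schema, injected_args_schema)."""
    visible = {
        "type": "object",
        "properties": {
            k: v for k, v in full_schema["properties"].items() if k not in injected
        },
        "required": [r for r in full_schema.get("required", []) if r not in injected],
    }
    hidden = {k: v for k, v in full_schema["properties"].items() if k in injected}
    return visible, hidden

full = {
    "type": "object",
    "properties": {"user_id": {"type": "string"}},
    "required": ["user_id"],
}
visible, hidden = split_schema(full, {"user_id"})
# visible has no properties; hidden carries the user_id schema
```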

Implementation details

Now that MCP's Python SDK exposes _meta in the call_tool interface, we can rely on it to hide some tool parameters from the LLM.

On the MCP side, tools converted with to_fastmcp omit their InjectedToolArg parameters from inputSchema and instead advertise them under langchain/injectedArgsSchema in the tool meta returned by tools/list:

{
    "meta": null,
    "nextCursor": null,
    "tools": [
        {
            "name": "get_user_pet_name",
            "title": null,
            "description": "Returns the user's pet name",
            "inputSchema": {
                "description": "Returns the user's pet name",
                "properties": {},
                "required": [],
                "title": "get_user_pet_name",
                "type": "object"
            },
            "outputSchema": null,
            "icons": null,
            "annotations": null,
            "meta": {
                "langchain/injectedArgsSchema": {
                    "user_id": {
                       "type": "string"
                    }
                }
            }
        }
    ]
}
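On the client side, the full argument set can be rebuilt from such an entry by merging the advertised injected-args schema back into the visible one. A sketch under the assumption that the entry has the exact shape shown above (the helper name `full_input_schema` is hypothetical):

```python
# Sketch: rebuild the full input schema for the local LC tool from a tools/list
# entry, combining the LLM-visible properties with the injected ones found
# under meta["langchain/injectedArgsSchema"].

def full_input_schema(tool_entry: dict) -> dict:
    injected = (tool_entry.get("meta") or {}).get("langchain/injectedArgsSchema", {})
    input_schema = tool_entry["inputSchema"]
    return {
        "type": "object",
        "properties": {**input_schema.get("properties", {}), **injected},
        # Treat injected args as required at invoke time, even though the LLM
        # never sees them.
        "required": input_schema.get("required", []) + sorted(injected),
    }

entry = {
    "inputSchema": {"properties": {}, "required": []},
    "meta": {"langchain/injectedArgsSchema": {"user_id": {"type": "string"}}},
}
schema = full_input_schema(entry)
```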

On the client side, LC tools created with load_mcp_tools detect langchain/injectedArgsSchema and pass any matching inputs via _meta in tools/call requests:

{
    "method": "tools/call",
    "params": {
        "meta": {
            "progressToken": null,
            "langchain/injectedArgsValue": {
                "user_id": "user_0"
            }
        },
        "name": "get_user_pet_name",
        "arguments": {
        }
    }
}
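Building such a request amounts to routing each resolved argument either into the regular `arguments` payload or into the hidden `_meta` entry. A sketch of that split (the helper name `build_call_params` is hypothetical; `injected` is the set of names found in langchain/injectedArgsSchema):

```python
# Sketch: split resolved tool arguments into the visible `arguments` payload
# and the hidden values sent under meta["langchain/injectedArgsValue"],
# matching the tools/call request shape shown above.

def build_call_params(name: str, args: dict, injected: set[str]) -> dict:
    hidden = {k: v for k, v in args.items() if k in injected}
    visible = {k: v for k, v in args.items() if k not in injected}
    return {
        "name": name,
        "arguments": visible,
        "meta": {"langchain/injectedArgsValue": hidden},
    }

params = build_call_params("get_user_pet_name", {"user_id": "user_0"}, {"user_id"})
# arguments is empty; user_id travels only in meta
```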

The MCP tool will extract the values from langchain/injectedArgsValue and inject them into the arguments passed to the underlying LC tool's invoke method.

Considerations

From my understanding, LangChain is discouraging the use of InjectedToolArg in favor of ToolRuntime. However, InjectedToolArg does not appear to be fully deprecated yet.

@samchenatti force-pushed the support-injected-arguments branch from ee5002d to d7fd6e3 on October 26, 2025.
