# langchainjs-mcp-adapters

Adapters for integrating Model Context Protocol (MCP) tools with LangChain.js applications, supporting both stdio and SSE transports.
This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain.js and LangGraph.js.
## Features

- 🔌 **Transport Options**: Supports both stdio and SSE transports
- 🔄 **Multi-Server Management**: Connect to multiple MCP servers simultaneously
- 🧩 **Agent Integration**: Compatible with LangChain.js and LangGraph.js
- 🛠️ **Development Features**: Flexible configuration options
## Installation

```bash
npm install @langchain/mcp-adapters
```

For SSE connections with custom headers in Node.js:

```bash
npm install eventsource
```

For enhanced SSE header support:

```bash
npm install extended-eventsource
```

Here is a simple example of using the MCP tools with a LangGraph agent.
```bash
npm install @langchain/mcp-adapters @langchain/langgraph @langchain/core @langchain/openai

export OPENAI_API_KEY=<your_api_key>
```
First, let's create an MCP server that can add and multiply numbers.
```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { loadMcpTools } from '@langchain/mcp-adapters';

// Initialize the ChatOpenAI model
const model = new ChatOpenAI({ modelName: 'gpt-4' });

// Create transport for stdio connection
const transport = new StdioClientTransport({
  command: 'python',
  args: ['math_server.py'],
});

// Initialize the client
const client = new Client({
  name: 'math-client',
  version: '1.0.0',
});

try {
  // Connect to the transport
  await client.connect(transport);

  // Get tools
  const tools = await loadMcpTools('math', client);

  // Create and run the agent
  const agent = createReactAgent({ llm: model, tools });
  const agentResponse = await agent.invoke({
    messages: [{ role: 'user', content: "what's (3 + 5) x 12?" }],
  });
  console.log(agentResponse);
} catch (e) {
  console.error(e);
} finally {
  // Clean up connection
  await client.close();
}
```
The library also allows you to connect to multiple MCP servers and load tools from them:
```python
# math_server.py
...
```

```python
# weather_server.py
from mcp.server.fastmcp import FastMCP

# Create a server
mcp = FastMCP(name="Weather")

@mcp.tool()
def get_temperature(city: str) -> str:
    """Get the current temperature for a city."""
    # Mock implementation
    temperatures = {
        "new york": "72°F",
        "london": "65°F",
        "tokyo": "25°C",
    }
    city_lower = city.lower()
    if city_lower in temperatures:
        return f"The current temperature in {city} is {temperatures[city_lower]}."
    else:
        return "Temperature data not available for this city"

# Run the server with SSE transport
if __name__ == "__main__":
    mcp.run(transport="sse")
```
```typescript
import { MultiServerMCPClient } from '@langchain/mcp-adapters';
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';

// Create client and connect to servers
const client = new MultiServerMCPClient();
await client.connectToServerViaStdio('math-server', 'python', ['math_server.py']);
await client.connectToServerViaSSE('weather-server', 'http://localhost:8000/sse');
const tools = client.getTools();

// Create an OpenAI model
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0,
});

// Create the React agent
const agent = createReactAgent({
  llm: model,
  tools,
});

// Run the agent
const mathResponse = await agent.invoke({
  messages: [{ role: 'user', content: "what's (3 + 5) x 12?" }],
});
const weatherResponse = await agent.invoke({
  messages: [{ role: 'user', content: 'what is the weather in nyc?' }],
});

await client.close();
```
Below are more detailed examples of how to configure `MultiServerMCPClient`.
```typescript
import { MultiServerMCPClient } from '@langchain/mcp-adapters';

// Create a client
const client = new MultiServerMCPClient();

// Connect to a local server via stdio
await client.connectToServerViaStdio(
  'math-server', // Server name
  'python', // Command to run
  ['./math_server.py'] // Command arguments
);

// Connect to a remote server via SSE
await client.connectToServerViaSSE(
  'weather-server', // Server name
  'http://localhost:8000/sse' // SSE endpoint URL
);

// Get all tools from all servers as a flattened array
const tools = client.getTools();

// Get tools from specific servers
const mathTools = client.getTools(['math-server']);

// Get tools grouped by server name
const toolsByServer = client.getToolsByServer();

// Close all connections when done
await client.close();
```
> **Note**: For stdio connections, the `transport` field is optional. If not specified, it defaults to `'stdio'`.
```typescript
// Connect to a server with authentication
await client.connectToServerViaSSE(
  'auth-server',
  'https://api.example.com/mcp/sse',
  {
    Authorization: 'Bearer token',
    'X-API-Key': 'your-api-key',
  },
  true // Use Node.js EventSource for header support
);
```
Define your server connections in a JSON file:
```json
{
  "servers": {
    "math": {
      "command": "python",
      "args": ["./math_server.py"]
    },
    "weather": {
      "transport": "sse",
      "url": "http://localhost:8000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "useNodeEventSource": true
    }
  }
}
```
Then load it in your code:
```typescript
import { MultiServerMCPClient } from '@langchain/mcp-adapters';

// Load from default location (./mcp.json)
const client = MultiServerMCPClient.fromConfigFile();

// Or specify a custom path
// const client = MultiServerMCPClient.fromConfigFile('./config/mcp.json');

await client.initializeConnections();
const tools = client.getTools();
```
LangChainJS-MCP-Adapters provides flexible and powerful configuration management. The client automatically looks for and loads an `mcp.json` file from the current working directory if no explicit configuration is provided:
```typescript
// This will automatically load from ./mcp.json if it exists
const client = new MultiServerMCPClient();
await client.initializeConnections();
```
There are multiple ways to load configurations:
```typescript
// Method 1: Automatic default loading
const client1 = new MultiServerMCPClient(); // Automatically checks for mcp.json

// Method 2: From specified config file
const client2 = MultiServerMCPClient.fromConfigFile('./config/custom-mcp.json');
```
You can combine configurations from multiple sources - they will be merged rather than replaced:
```typescript
// Start with default configuration or empty if no mcp.json exists
const client = new MultiServerMCPClient();

// Add another configuration file
client.addConfigFromFile('./team1-servers.json');

// Add yet another configuration file
client.addConfigFromFile('./team2-servers.json');

// Add configurations directly in code
client.addConnections({
  'custom-server': {
    transport: 'stdio',
    command: 'python',
    args: ['./special_server.py'],
  },
});

// Initialize all connections from all sources
await client.initializeConnections();
```
Configurations are processed in the order they are added:

1. The default `mcp.json` (if present)
2. Each `addConfigFromFile()` call, in sequence
3. Each `addConnections()` call, in sequence

If the same server name appears in multiple configurations, the later configuration takes precedence, allowing settings to be overridden.
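This precedence behaves like a plain object spread, where later entries win for the same key. The sketch below illustrates that merge behavior only; the `ServerConfig` shape is an assumption for the example, not the library's actual type:

```typescript
// Illustrative sketch of merge-with-override precedence for server configs.
// The ServerConfig shape is assumed for the example, not taken from the library.
type ServerConfig = { command: string; args: string[] };

// Earlier configuration (e.g. a default mcp.json)
const base: Record<string, ServerConfig> = {
  'custom-server': { command: 'python', args: ['./special_server.py'] },
};

// Later configuration (e.g. a subsequent addConfigFromFile() call)
const later: Record<string, ServerConfig> = {
  'custom-server': { command: 'python3', args: ['./special_server.py'] },
};

// For the same server name, the later configuration wins
const merged = { ...base, ...later };
console.log(merged['custom-server'].command); // python3
```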
For simple use cases, you can bypass configuration files entirely and connect to servers directly using the provided connection methods:
```typescript
const client = new MultiServerMCPClient();

// Add a stdio connection
await client.connectToServerViaStdio(
  'math-server',
  'python',
  ['./math_server.py'],
  // Optional environment variables
  { PYTHONPATH: './lib' },
  // Optional restart configuration
  { enabled: true, maxAttempts: 3, delayMs: 2000 }
);

// Add an SSE connection
await client.connectToServerViaSSE(
  'remote-server',
  'https://api.example.com/mcp/sse',
  // Optional headers
  { Authorization: 'Bearer token' },
  // Optional Node.js EventSource flag
  true,
  // Optional reconnection configuration
  { enabled: true, maxAttempts: 5, delayMs: 1000 }
);
```
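The `restart` and `reconnect` options follow a standard bounded-retry pattern: try up to `maxAttempts` times, waiting `delayMs` between attempts. The helper below is a generic sketch of that pattern, not the library's internal implementation:

```typescript
// Generic bounded-retry helper mirroring { maxAttempts, delayMs } semantics.
// Illustrative sketch only, not the adapter library's internal code.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts: number,
  delayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(); // succeed on any attempt
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Wait before the next attempt
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError; // all attempts failed
}
```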
Configuration files support environment variable substitution using `${ENV_VAR}` syntax in both string values and environment variable objects:
```json
{
  "servers": {
    "api-server": {
      "transport": "sse",
      "url": "https://${API_DOMAIN}/sse",
      "headers": {
        "Authorization": "Bearer ${API_TOKEN}"
      }
    },
    "local-server": {
      "transport": "stdio",
      "command": "python",
      "args": ["./server.py"],
      "env": {
        "OPENAI_API_KEY": "${OPENAI_API_KEY}",
        "DEBUG_LEVEL": "info"
      }
    }
  }
}
```
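The substitution itself can be pictured as a regex replace over string values. The function below is only a sketch of the general technique; the library's exact implementation, including its behavior for unset variables, may differ:

```typescript
// Sketch of ${ENV_VAR} substitution over a string value.
// Illustrative only; the library's handling of unset variables is an assumption here.
function substituteEnvVars(value: string): string {
  return value.replace(/\$\{(\w+)\}/g, (placeholder, name) =>
    process.env[name] ?? placeholder // keep the placeholder if the variable is unset
  );
}

process.env.API_DOMAIN = 'api.example.com';
console.log(substituteEnvVars('https://${API_DOMAIN}/sse')); // https://api.example.com/sse
```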
Below is the complete schema for the configuration file:
```json
{
  "servers": {
    "server-name": {
      // For stdio transport (transport field is optional for stdio)
      "transport": "stdio", // Optional for stdio, defaults to "stdio" if command and args are present
      "command": "python",
      "args": ["./server.py"],
      "env": {
        "ENV_VAR": "value"
      },
      "encoding": "utf-8",
      "encodingErrorHandler": "strict",
      "restart": {
        "enabled": true,
        "maxAttempts": 3,
        "delayMs": 1000
      },

      // For SSE transport (transport field is required)
      "transport": "sse",
      "url": "http://localhost:8000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "useNodeEventSource": true,
      "reconnect": {
        "enabled": true,
        "maxAttempts": 3,
        "delayMs": 1000
      }
    }
  }
}
```
> **Note**: For stdio connections, the `transport` field is optional. If not specified, it defaults to `'stdio'` when `command` and `args` are present.
## Troubleshooting

- **Browser usage**: the native browser `EventSource` API does not support custom headers
- **Connection failures**: verify that the MCP server is running and that the command, arguments, or URL are correct
- **Tool execution errors**: examine server logs for error messages
- **Headers not applied**: install the recommended `extended-eventsource` package, or set `useNodeEventSource: true` in SSE connections

## Debug Logging

This package uses the `debug` package for debug logging. Logging is disabled by default, and can be enabled by setting the `DEBUG` environment variable as per the instructions in the `debug` package.
To output all debug logs from this package:

```bash
DEBUG='@langchain/mcp-adapters:*'
```

To output debug logs only from the `client` module:

```bash
DEBUG='@langchain/mcp-adapters:client'
```

To output debug logs only from the `tools` module:

```bash
DEBUG='@langchain/mcp-adapters:tools'
```
## License

MIT

Big thanks to @vrknetha and @cawstudios for the initial implementation!

Contributions are welcome! Please check out our contributing guidelines for more information.