A bridge connecting Model Context Protocol (MCP) servers to OpenAI-compatible LLMs like Ollama.
The Ollama to MCP Server Bridge connects Model Context Protocol (MCP) servers with OpenAI-compatible Large Language Models (LLMs) such as Ollama, letting a model call MCP tools through a single, familiar API.
To get started with the Ollama to MCP Server Bridge, follow these steps:
```bash
# Install uv (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and set up the bridge
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .
```
Note: if you add API keys to `.env`, reactivate the environment so they are picked up:

```bash
source .venv/bin/activate
```
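For reference, `.env` holds the variables read by the OpenAI configuration shown below; a minimal sketch, assuming those variable names:

```bash
# .env — example; variable names match the os.getenv() calls in main.py
OPENAI_API_KEY=your-key-here
OPENAI_MODEL=gpt-4o
```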
Then configure the bridge in `src/mcp_llm_bridge/main.py`:
```python
mcp_server_params=StdioServerParameters(
    command="uv",
    # CHANGE THIS: it must be an absolute path to the mcp fetch server
    # (the ~ below is a placeholder and is not expanded; clone the server
    # from https://github.com/modelcontextprotocol/servers/)
    args=["--directory", "~/llms/mcp/mc-server-fetch/servers/src/fetch", "run", "mcp-server-fetch"],
    env=None,
),
# To use an OpenAI endpoint instead, uncomment this block and set the keys in .env:
# llm_config=LLMConfig(
#     api_key=os.getenv("OPENAI_API_KEY"),
#     model=os.getenv("OPENAI_MODEL", "gpt-4o"),
#     base_url=None,
# ),
llm_config=LLMConfig(
    api_key="ollama",                      # can be any string for local testing
    model="llama3.2",
    base_url="http://localhost:11434/v1",  # point to your local model's endpoint
),
```
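If you point the bridge at Ollama as above, make sure the model referenced in `LLMConfig` is available locally first (standard Ollama CLI commands):

```bash
# Pull the model named in LLMConfig, then make sure Ollama is running
ollama pull llama3.2
ollama serve   # skip if Ollama is already running as a service
```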
The bridge also works with any endpoint implementing the OpenAI API specification:
```python
llm_config=LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)
```
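To confirm an endpoint actually speaks the OpenAI chat API before wiring it into the bridge, a quick request against `/v1/chat/completions` is enough; a sketch assuming a local Ollama instance and that the model has already been pulled:

```bash
# Send a minimal OpenAI-style chat request to the endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```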
This project is licensed under the MIT License.
Contributions are welcome; feel free to submit a pull request.