OmniLLM is a Model Context Protocol (MCP) server that allows Claude to query and integrate responses from multiple large language models (LLMs), including ChatGPT, Azure OpenAI, and Google Gemini. This creates a unified access point for all your AI needs.
```bash
# Clone or download this repository
git clone https://github.com/yourusername/omnillm-mcp.git
cd omnillm-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies (quote "mcp[cli]" so zsh doesn't expand the brackets)
pip install "mcp[cli]" httpx python-dotenv
```
Create a `.env` file in the project root with your API keys:

```
OPENAI_API_KEY=your_openai_key_here
AZURE_OPENAI_API_KEY=your_azure_key_here
AZURE_OPENAI_ENDPOINT=your_azure_endpoint_here
GOOGLE_API_KEY=your_google_api_key_here
```
You only need to add the keys for the services you want to use.
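Since only the configured services are usable, the server needs to detect which keys are present at startup. A minimal sketch of that detection, assuming the `.env` values have been loaded into the environment (the helper name `available_providers` is illustrative, not the server's actual API):

```python
import os

# Map each provider to the environment variables it requires.
# Azure needs both a key and an endpoint; the others need a single key.
REQUIRED_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "azure": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
    "gemini": ["GOOGLE_API_KEY"],
}

def available_providers() -> list[str]:
    """Return the providers whose required variables are all set and non-empty."""
    return [
        name
        for name, keys in REQUIRED_VARS.items()
        if all(os.environ.get(k) for k in keys)
    ]
```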
Add the server to your `claude_desktop_config.json` file:

```json
{
  "mcpServers": {
    "omnillm": {
      "command": "python",
      "args": [
        "path/to/server.py"
      ],
      "env": {
        "PYTHONPATH": "path/to/omnillm-mcp"
      }
    }
  }
}
```
Replace `"path/to/server.py"` with the actual path to your `server.py` file.
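A malformed config file will silently prevent Claude Desktop from loading the server, so it can be worth sanity-checking the JSON before restarting the app. A quick check using only the standard library (the config string here is just the example above inlined):

```python
import json

config_text = """
{
  "mcpServers": {
    "omnillm": {
      "command": "python",
      "args": ["path/to/server.py"],
      "env": {"PYTHONPATH": "path/to/omnillm-mcp"}
    }
  }
}
"""

# json.loads raises an error on malformed JSON (e.g. trailing commas),
# which is a common reason a server entry fails to load.
config = json.loads(config_text)
server_names = list(config["mcpServers"])
print(server_names)  # ['omnillm']
```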
Once connected to Claude Desktop, you can simply ask Claude to consult another model in natural language, for example by asking what ChatGPT thinks about a topic or asking it to compare answers from all available LLMs. Claude will automatically detect when to use the Multi-LLM Proxy tools to enhance its responses.
OmniLLM exposes the following tools:

- `query_chatgpt` - Query OpenAI's ChatGPT with a custom prompt
- `query_azure_chatgpt` - Query Azure OpenAI's ChatGPT with a custom prompt
- `query_gemini` - Query Google's Gemini with a custom prompt
- `query_all_llms` - Query all available LLMs and get all responses together
- `check_available_models` - Check which LLM APIs are properly configured
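The `query_all_llms` tool fans a single prompt out to every configured backend and returns the responses together. A simplified sketch of that fan-out pattern using `asyncio.gather` (the stub query functions here stand in for the real API calls, which the server would make with `httpx`):

```python
import asyncio

# Stubs standing in for the real provider calls; each would normally
# issue an async HTTP request to the corresponding API.
async def query_chatgpt(prompt: str) -> str:
    return f"ChatGPT says: {prompt}"

async def query_gemini(prompt: str) -> str:
    return f"Gemini says: {prompt}"

async def query_all_llms(prompt: str) -> dict[str, str]:
    """Query every available backend concurrently and label each answer."""
    backends = {"chatgpt": query_chatgpt, "gemini": query_gemini}
    results = await asyncio.gather(*(fn(prompt) for fn in backends.values()))
    return dict(zip(backends, results))

answers = asyncio.run(query_all_llms("hello"))
```

Running the queries concurrently means the total latency is roughly that of the slowest backend rather than the sum of all of them.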