The Multi-Model Advisor MCP Server is a Model Context Protocol (MCP) server designed to query multiple Ollama models and combine their responses. This creates a "council of advisors" approach where Claude can synthesize multiple viewpoints alongside its own to provide more comprehensive answers. The server integrates seamlessly with Claude for Desktop, offering diverse AI perspectives on a single question.
To install `multi-ai-advisor-mcp` for Claude Desktop automatically via Smithery:

```shell
npx -y @smithery/cli install @YuChenSSR/multi-ai-advisor-mcp --client claude
```
To install manually, first clone the repository and enter the project directory:

```shell
git clone https://github.com/yourusername/multi-model-advisor.git
cd multi-model-advisor
```
Install dependencies:

```shell
npm install
```
Build the project:

```shell
npm run build
```
Pull the required Ollama models:

```shell
ollama pull gemma3:1b
ollama pull llama3.2:1b
ollama pull deepseek-r1:1.5b
```
Create a `.env` file in the project root with your desired configuration:
```
# Server configuration
SERVER_NAME=multi-model-advisor
SERVER_VERSION=1.0.0
DEBUG=true

# Ollama configuration
OLLAMA_API_URL=http://localhost:11434
DEFAULT_MODELS=gemma3:1b,llama3.2:1b,deepseek-r1:1.5b

# System prompts for each model
GEMMA_SYSTEM_PROMPT=You are a supportive and empathetic AI assistant focused on human well-being. Provide considerate and balanced advice.
LLAMA_SYSTEM_PROMPT=You are a logical and analytical AI assistant. Think step-by-step and explain your reasoning clearly.
DEEPSEEK_SYSTEM_PROMPT=You are a creative and innovative AI assistant. Think outside the box and offer novel perspectives.
```
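For orientation, here is a minimal sketch of how a Node/TypeScript server could load this configuration. It assumes the `dotenv` package and the variable names from the `.env` above; it is not necessarily this project's exact code:

```typescript
// config.ts - illustrative sketch of reading the .env settings above
import "dotenv/config"; // assumes the dotenv package is installed

export const config = {
  serverName: process.env.SERVER_NAME ?? "multi-model-advisor",
  serverVersion: process.env.SERVER_VERSION ?? "1.0.0",
  debug: process.env.DEBUG === "true",
  ollamaApiUrl: process.env.OLLAMA_API_URL ?? "http://localhost:11434",
  // DEFAULT_MODELS is a comma-separated list, e.g. "gemma3:1b,llama3.2:1b,deepseek-r1:1.5b"
  defaultModels: (process.env.DEFAULT_MODELS ?? "")
    .split(",")
    .map((m) => m.trim())
    .filter(Boolean),
  // Persona prompts keyed by model name (mapping is an assumption for illustration)
  systemPrompts: {
    "gemma3:1b": process.env.GEMMA_SYSTEM_PROMPT,
    "llama3.2:1b": process.env.LLAMA_SYSTEM_PROMPT,
    "deepseek-r1:1.5b": process.env.DEEPSEEK_SYSTEM_PROMPT,
  },
};
```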
Locate your Claude for Desktop configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Edit the file to add the Multi-Model Advisor MCP server:
```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-model-advisor/build/index.js"]
    }
  }
}
```
Replace `/absolute/path/to/` with the actual path to your project directory.
Restart Claude for Desktop.
Once connected to Claude for Desktop, you can use the Multi-Model Advisor in several ways:
You can see all available models on your system:

```
Show me which Ollama models are available on my system
```
Simply ask Claude to use the multi-model advisor:

```
What are the most important skills for success in today's job market? You can use gemma3:1b, llama3.2:1b, deepseek-r1:1.5b to help you.
```
Claude will query all default models and provide a synthesized response based on their different perspectives.
The server provides two tools:
- `list-available-models`: Shows all Ollama models on your system
- `query-models`: Queries multiple models with a question
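As a rough illustration of how such tools are typically exposed over MCP, here is a hedged sketch using the official TypeScript SDK (`@modelcontextprotocol/sdk`). The tool name matches the list above, but the parameter names (`question`, `models`) and the helper function are assumptions, not this project's actual source:

```typescript
// server-sketch.ts - illustrative only, not the project's real implementation
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "multi-model-advisor", version: "1.0.0" });

// Hypothetical stand-in for the real logic that queries each Ollama model
async function queryAllModels(question: string, models: string[]): Promise<string[]> {
  return models.map((m) => `[${m}] (response would be fetched from Ollama here)`);
}

// Register a query-models tool: Claude supplies a question and, optionally, model names
server.tool(
  "query-models",
  {
    question: z.string(),
    models: z.array(z.string()).optional(), // falls back to DEFAULT_MODELS when omitted
  },
  async ({ question, models }) => {
    const replies = await queryAllModels(question, models ?? []);
    return { content: [{ type: "text", text: replies.join("\n\n---\n\n") }] };
  }
);

// MCP servers launched by Claude for Desktop communicate over stdio
await server.connect(new StdioServerTransport());
```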
When you ask Claude a question referring to the multi-model advisor:
1. Claude calls the `query-models` tool
2. The server queries each configured Ollama model with its persona
3. Claude receives all responses and synthesizes a comprehensive answer
Each model can have a different "persona" or role assigned, encouraging diverse perspectives.
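To make that flow concrete, the sketch below shows one plausible way to query a single advisor model with its persona via Ollama's standard REST API (`POST /api/generate` with `model`, `prompt`, `system`, and `stream` fields). The function name and wiring are illustrative assumptions:

```typescript
// ollama-sketch.ts - querying one advisor model with its assigned persona
const OLLAMA_API_URL = process.env.OLLAMA_API_URL ?? "http://localhost:11434";

export async function askAdvisor(
  model: string,         // e.g. "gemma3:1b"
  question: string,      // the user's question forwarded by Claude
  systemPrompt?: string  // the persona, e.g. GEMMA_SYSTEM_PROMPT
): Promise<string> {
  const res = await fetch(`${OLLAMA_API_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: question,
      system: systemPrompt,
      stream: false, // request one complete JSON reply instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status} for model ${model}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

Running this for each configured model (for example with `Promise.all`) yields the set of perspectives that Claude then synthesizes.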
If the server can't connect to Ollama:
- Ensure Ollama is running (`ollama serve`)
- Check that the `OLLAMA_API_URL` is correct in your `.env` file
- Try accessing http://localhost:11434 in your browser to verify Ollama is responding (a scripted alternative is sketched below)
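If you prefer scripting the last check over opening a browser, a small Node/TypeScript one-off against Ollama's standard `/api/tags` endpoint (the same endpoint that lists local models) also works. The file name and runner are just examples:

```typescript
// check-ollama.ts - run with a TypeScript runner such as `npx tsx check-ollama.ts`
const url = process.env.OLLAMA_API_URL ?? "http://localhost:11434";

try {
  const res = await fetch(`${url}/api/tags`); // lists locally available models
  const data = (await res.json()) as { models: { name: string }[] };
  console.log(`Ollama reachable at ${url}. Models:`, data.models.map((m) => m.name).join(", "));
} catch (err) {
  console.error(`Could not reach Ollama at ${url}:`, err);
}
```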
If a model is reported as unavailable:
- Check that you've pulled the model using `ollama pull <model-name>`
- Verify the exact model name using `ollama list`
- Use the `list-available-models` tool to see all available models
If the tools don't appear in Claude:
- Ensure you've restarted Claude after updating the configuration
- Check that the absolute path in `claude_desktop_config.json` is correct
- Look at Claude's logs for error messages
If you've configured larger advisor models but your machine doesn't have enough memory to run them, queries may fail. Try specifying smaller models (see Basic Usage above) or upgrading your memory.
MIT License. For more details, please see the LICENSE file in this project repository.
Contributions are welcome! Please feel free to submit a Pull Request.
A council of models for better decisions.