A custom MCP server designed to perform code reviews using Repomix and Large Language Models (LLMs). This server provides structured code reviews with specific issues and recommendations, supporting multiple LLM providers such as OpenAI, Anthropic, and Gemini.
```bash
# Clone the repository
git clone https://github.com/yourusername/code-review-server.git
cd code-review-server

# Install dependencies
npm install

# Build the server
npm run build
```
Create a `.env` file in the root directory based on the `.env.example` template:

```bash
cp .env.example .env
```
Edit the `.env` file to set up your preferred LLM provider and API key:

```
# LLM Provider Configuration
LLM_PROVIDER=OPEN_AI
OPENAI_API_KEY=your_openai_api_key_here
```
The code review server implements the Model Context Protocol (MCP) and can be used with any MCP client:
```bash
# Start the server
node build/index.js
```
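As one possible setup, an MCP client that launches servers over stdio (such as Claude Desktop) could register this server with a configuration along these lines; the `code-review` key name and the absolute path are illustrative placeholders:

```json
{
  "mcpServers": {
    "code-review": {
      "command": "node",
      "args": ["/absolute/path/to/code-review-server/build/index.js"],
      "env": {
        "LLM_PROVIDER": "OPEN_AI",
        "OPENAI_API_KEY": "your_openai_api_key_here"
      }
    }
  }
}
```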
The server exposes two main tools:

1. `analyze_repo`: Flattens a codebase using Repomix
2. `code_review`: Performs a code review using an LLM
Use the `analyze_repo` tool when you need to:
- Get a high-level overview of a codebase's structure and organization
- Flatten a repository into a textual representation for initial analysis
- Understand the directory structure and file contents without detailed review
- Prepare for a more in-depth code review
- Quickly scan a codebase to identify relevant files for further analysis
Use the `code_review` tool when you need to:
- Perform a comprehensive code quality assessment
- Identify specific security vulnerabilities, performance bottlenecks, or code quality issues
- Get actionable recommendations for improving code
- Conduct a detailed review with severity ratings for issues
- Evaluate a codebase against best practices
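To illustrate, an MCP client invokes a tool through the protocol's standard `tools/call` request. The argument names below (`repoPath`, `detailLevel`, `focusAreas`) are assumptions for the sake of the example; the actual names come from the tool schema the server advertises:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "code_review",
    "arguments": {
      "repoPath": "./my-project",
      "detailLevel": "detailed",
      "focusAreas": ["security", "quality"]
    }
  }
}
```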
For testing purposes, you can use the included CLI tool:
```bash
node build/cli.js <repo_path> [options]
```

Options:

- `--files <file1,file2>`: Specific files to review
- `--types <.js,.ts>`: File types to include in the review
- `--detail <basic|detailed>`: Level of detail (default: detailed)
- `--focus <areas>`: Areas to focus on (security, performance, quality, maintainability)
Example:

```bash
node build/cli.js ./my-project --types .js,.ts --detail detailed --focus security,quality
```
```bash
# Run tests
npm test

# Watch mode for development
npm run watch

# Run the MCP inspector tool
npm run inspector
```
The code review server integrates directly with multiple LLM provider APIs:
- OpenAI (default: gpt-4o)
- Anthropic (default: claude-3-opus-20240229)
- Gemini (default: gemini-1.5-pro)
Configure your preferred LLM provider in the `.env` file:

```
# Set which provider to use
LLM_PROVIDER=OPEN_AI # Options: OPEN_AI, ANTHROPIC, or GEMINI

# Provider API Keys (add your key for the chosen provider)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GEMINI_API_KEY=your-gemini-api-key
```
You can optionally specify which model to use for each provider:
```
# Optional: Override the default models
OPENAI_MODEL=gpt-4-turbo
ANTHROPIC_MODEL=claude-3-sonnet-20240229
GEMINI_MODEL=gemini-1.5-flash
```
The `code_review` tool processes code using Repomix to flatten the repository structure. The code review is returned in a structured JSON format:
```json
{
  "summary": "Brief summary of the code and its purpose",
  "issues": [
    {
      "type": "SECURITY|PERFORMANCE|QUALITY|MAINTAINABILITY",
      "severity": "HIGH|MEDIUM|LOW",
      "description": "Description of the issue",
      "line_numbers": [12, 15],
      "recommendation": "Recommended fix"
    }
  ],
  "strengths": ["List of code strengths"],
  "recommendations": ["List of overall recommendations"]
}
```
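Because the response is plain JSON, clients can post-process it directly. The sketch below filters a review down to its high-severity issues; the types mirror the schema above, while the sample payload itself is hypothetical:

```typescript
// Types mirroring the review schema shown above
interface ReviewIssue {
  type: "SECURITY" | "PERFORMANCE" | "QUALITY" | "MAINTAINABILITY";
  severity: "HIGH" | "MEDIUM" | "LOW";
  description: string;
  line_numbers: number[];
  recommendation: string;
}

interface CodeReview {
  summary: string;
  issues: ReviewIssue[];
  strengths: string[];
  recommendations: string[];
}

// Hypothetical review payload, as it might come back from the server
const raw = `{
  "summary": "Small Express API with a few input-handling gaps",
  "issues": [
    {
      "type": "SECURITY",
      "severity": "HIGH",
      "description": "User input passed to a shell command without sanitization",
      "line_numbers": [42],
      "recommendation": "Validate and escape input, or avoid shelling out"
    },
    {
      "type": "QUALITY",
      "severity": "LOW",
      "description": "Handler function exceeds 80 lines",
      "line_numbers": [10],
      "recommendation": "Extract helper functions"
    }
  ],
  "strengths": ["Clear routing structure"],
  "recommendations": ["Add integration tests"]
}`;

const review: CodeReview = JSON.parse(raw);

// Triage: keep only high-severity findings
const highSeverity = review.issues.filter((issue) => issue.severity === "HIGH");

for (const issue of highSeverity) {
  console.log(`[${issue.type}] line(s) ${issue.line_numbers.join(", ")}: ${issue.description}`);
}
```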
MIT