This Node.js-based Model Context Protocol (MCP) server demonstrates how to build and integrate custom tools into AI-assisted development environments such as Cursor AI. It provides tools for basic arithmetic operations and environment variable retrieval, using Zod for schema validation and StdioServerTransport for communication over standard I/O. The project simplifies adding custom functionality to AI-powered IDEs.
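For context, registering an arithmetic tool with the TypeScript MCP SDK typically looks like the sketch below; this shows the general pattern, not the project's actual source, and the server name and metadata are illustrative.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "calculator", version: "1.0.0" });

// Zod validates the arguments before the handler runs.
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// Stdio transport lets an IDE like Cursor spawn the server as a subprocess.
await server.connect(new StdioServerTransport());
```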
This project implements a Model Context Protocol (MCP) server that extends AI assistants with custom tools and resources. It includes web crawling via crawl4ai, dynamic greeting generation, and integration with the Google Search API. The server supports AI assistants such as Claude and provides tools for searching LangChain, LlamaIndex, and OpenAI documentation. It is designed for easy integration with the Cursor IDE and includes fixes for Windows-specific issues.
This MCP server is a backend service that provides file access, database connections, API integration, and vector database operations. It is designed specifically to integrate with the Qwen large language model and ships with Docker deployment configurations and Qwen usage examples. The server uses MongoDB for database operations, supports external API integrations, and relies on FAISS for vector storage and similarity search.
This MCP server facilitates direct access to OpenAI's ChatGPT API from Claude Desktop, allowing for dynamic conversations between Claude and ChatGPT. It supports customizable parameters like model versions and temperature, integrates web search for up-to-date information, and utilizes OpenAI's Responses API for automatic conversation state management. This setup enhances Claude's capabilities by leveraging ChatGPT's advanced conversational features.
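As a sketch of the state-management idea (assuming the official `openai` Node package; model and values are illustrative): the Responses API threads a conversation via `previous_response_id`, so a bridge server does not have to resend message history itself.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// First turn: model and temperature are the kind of parameters the
// server exposes as configuration.
const first = await openai.responses.create({
  model: "gpt-4o",
  temperature: 0.7,
  input: "Summarize the Model Context Protocol in one sentence.",
});

// Follow-up turn: previous_response_id carries the conversation state
// on OpenAI's side instead of the caller re-sending the history.
const followUp = await openai.responses.create({
  model: "gpt-4o",
  previous_response_id: first.id,
  input: "Now give a concrete example of a tool it could expose.",
});

console.log(followUp.output_text);
```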
The Docs Fetch MCP Server is designed to enable LLMs to autonomously explore web pages and documentation. It fetches clean, readable content from web pages and allows recursive exploration of linked pages up to a specified depth. This tool is particularly useful for gathering comprehensive information on specific topics by exploring documentation or web content. It features content extraction, link analysis, recursive exploration, parallel processing, robust error handling, and a dual-strategy approach for efficient web crawling.
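The depth-limited recursion can be pictured with a simplified sketch; naive regex link extraction stands in here for the server's real content extraction and link analysis.

```typescript
// Simplified depth-limited crawl: fetch a page, then explore its links
// in parallel until the depth budget is exhausted.
async function explore(
  url: string,
  depth: number,
  seen: Set<string> = new Set()
): Promise<string[]> {
  if (depth < 0 || seen.has(url)) return [];
  seen.add(url);
  const html = await (await fetch(url)).text();
  // Naive link extraction for illustration; the real server does proper
  // parsing and readability-style content cleanup.
  const links = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map((m) => m[1]);
  const children = await Promise.all(links.map((l) => explore(l, depth - 1, seen)));
  return [html, ...children.flat()];
}
```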
The MCP-Repo2LLM server is designed to bridge the gap between traditional codebases and modern AI language models by converting repositories into LLM-friendly formats. It addresses challenges such as processing large codebases, maintaining context, and handling multiple programming languages efficiently. Key features include smart repository scanning, context preservation, multi-language support, metadata enhancement, and optimized processing for large repositories.
The SingleStore MCP Server facilitates integration between SingleStore's Management API and external systems using the Model Context Protocol (MCP). It enables users to interact with SingleStore through natural language, simplifying complex operations. The server supports various tools for workspace management, SQL execution, and notebook creation, making it ideal for developers and data engineers.
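A hedged sketch of what a SQL-execution tool could look like, assuming the TypeScript MCP SDK and the `mysql2` client (SingleStore speaks the MySQL wire protocol); the real server works through the Management API and exposes richer workspace and notebook tools.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import mysql from "mysql2/promise";

const server = new McpServer({ name: "singlestore", version: "0.1.0" });

// Hypothetical tool shape for running a query against a workspace.
server.tool("run_sql", { query: z.string() }, async ({ query }) => {
  const conn = await mysql.createConnection({
    host: process.env.SINGLESTORE_HOST,
    user: process.env.SINGLESTORE_USER,
    password: process.env.SINGLESTORE_PASSWORD,
  });
  const [rows] = await conn.execute(query);
  await conn.end();
  return { content: [{ type: "text", text: JSON.stringify(rows, null, 2) }] };
});
```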
This repository provides a barebones foundation for building Model Context Protocol (MCP) servers for macOS applications and command-line tools. It demonstrates basic MCP integration using the `mcp-swift-sdk`, offering a library template and a command-line example. The project serves as a reference implementation and a starting point for custom MCP server development.
The Kokoro Text to Speech MCP Server is a Python-based implementation that converts text into .mp3 files using the Kokoro TTS model. It supports customizable voice, speed, and language settings, and includes features for local file storage and optional S3 uploads. The server can be configured with environment variables and integrates with AWS S3 for cloud storage. It also provides a client script for sending TTS requests and managing MP3 files.
The FastDomainCheck MCP Server is a Model Context Protocol (MCP) implementation for checking domain name registration status in bulk. It provides a secure, two-way connection between AI tools such as Claude and domain availability data. Key features include bulk domain checking, dual verification via WHOIS and DNS, support for Internationalized Domain Names (IDN), and built-in input validation and error handling.
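To illustrate the dual-verification idea, here is a sketch of the DNS half using Node's standard library; an unregistered domain typically fails to resolve, and a WHOIS query (not shown) confirms ambiguous cases.

```typescript
import { resolve } from "node:dns/promises";

// DNS probe: a domain with NS records is almost certainly registered.
async function dnsSaysRegistered(domain: string): Promise<boolean> {
  try {
    const ns = await resolve(domain, "NS");
    return ns.length > 0;
  } catch {
    return false; // inconclusive: fall back to WHOIS for confirmation
  }
}

// Bulk checking is just running the probes concurrently.
const domains = ["example.com", "example.org"];
const results = await Promise.all(domains.map(dnsSaysRegistered));
console.log(Object.fromEntries(domains.map((d, i) => [d, results[i]])));
```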
The IMDB Data Access MCP Server provides a structured way to access and manage IMDB data using the Model Context Protocol (MCP). It includes features like a custom note storage system, a prompt for summarizing notes, and a tool for adding new notes. The server is configured to work with Claude Desktop and supports development and debugging through the MCP Inspector.
The Postman MCP Server is a Cloudflare Worker implementation that provides API access to Postman collections and environments through the Model Context Protocol (MCP). It allows Claude AI to retrieve, create, and manage Postman collections and environments, facilitating API testing, documentation, and workflow automation. The server supports operations like adding requests, running collections, and managing environments, making it a powerful tool for integrating AI into API development workflows.
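A skeleton of the Worker pattern involved is shown below; the `/collections` endpoint and `X-Api-Key` header belong to Postman's public API, while the MCP protocol layer the real server adds on top is omitted.

```typescript
// Minimal Cloudflare Worker that proxies a Postman API call; the API key
// is stored as a Worker secret rather than in code.
interface Env {
  POSTMAN_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const upstream = await fetch("https://api.getpostman.com/collections", {
      headers: { "X-Api-Key": env.POSTMAN_API_KEY },
    });
    return new Response(await upstream.text(), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```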
The Docker MCP Server integrates with the Model Context Protocol (MCP) to provide AI-powered automation for Docker container management. It enables users to create, monitor, and control containers using natural language commands. Key features include real-time status monitoring, Docker API integration via Dockerode, and an extensible MCP tool ecosystem. The server simplifies container lifecycle management and streamlines deployment workflows.
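The Dockerode calls an MCP tool handler would wrap look roughly like this sketch, assuming the default Unix-socket connection to the Docker daemon.

```typescript
import Docker from "dockerode";

const docker = new Docker(); // defaults to /var/run/docker.sock

// Real-time status monitoring boils down to calls like this one.
const containers = await docker.listContainers({ all: true });
for (const c of containers) {
  console.log(`${c.Names[0]}: ${c.State} (${c.Status})`);
}

// Lifecycle control is a similarly thin layer over the Docker API,
// e.g. await docker.getContainer(containers[0].Id).restart();
```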
The Structured Thinking MCP Server is a TypeScript-based implementation of the Model Context Protocol (MCP) designed to allow Large Language Models (LLMs) to programmatically construct mind maps. It enforces metacognitive self-reflection by assigning quality scores and stages to thoughts, guiding the LLM's thinking process. The server supports thought branching, memory management, and provides feedback to steer the LLM's reasoning.
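One way to picture the server's core record (field names and stage labels here are guesses for illustration): each thought carries a stage and a quality score, and branches point back to a parent, so the reasoning forms a tree.

```typescript
// Hypothetical shape of a stored thought; the real server's fields differ.
interface Thought {
  id: string;
  content: string;
  stage: "problem-definition" | "analysis" | "synthesis" | "conclusion";
  qualityScore: number; // used as feedback to steer the model
  parentId?: string; // set when this thought branches off another
}

const thoughts = new Map<string, Thought>();

// Recording a thought returns feedback the LLM sees on its next turn.
function addThought(t: Thought): string {
  thoughts.set(t.id, t);
  return t.qualityScore < 50
    ? "Low-quality thought recorded; consider revising before branching."
    : "Thought recorded.";
}
```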