The Vibe Eyes Client is a lightweight library designed to integrate browser games with the Vibe Eyes MCP debug server. It enables real-time debugging, visualization, and analysis by capturing canvas screenshots, collecting console logs, and displaying SVG visualizations in a dedicated debug window. Features include automatic canvas capture, error handling, and minimal performance impact, making it ideal for developers working on AI-driven browser games.
This project is a collection of MCP (Model Context Protocol) servers written in .NET, specifically designed for NuGet package management. It includes functionalities like searching NuGet packages with custom queries, retrieving package contents, and accessing specific files within packages. The server integrates seamlessly with NuGet.org and supports optional parameters for advanced search and package retrieval.
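The custom-query search described above targets NuGet.org's public V3 search endpoint (`SearchQueryService`). A minimal sketch of how such a query URL is composed, using the documented `q`, `skip`, `take`, and `prerelease` parameters (the function name is illustrative, not part of the project):

```python
from urllib.parse import urlencode

# NuGet.org's public V3 search endpoint (SearchQueryService)
SEARCH_ENDPOINT = "https://azuresearch-usnc.nuget.org/query"

def build_search_url(query: str, skip: int = 0, take: int = 20,
                     prerelease: bool = False) -> str:
    """Compose a package-search URL with the optional paging parameters."""
    params = {
        "q": query,
        "skip": skip,
        "take": take,
        "prerelease": str(prerelease).lower(),
    }
    return f"{SEARCH_ENDPOINT}?{urlencode(params)}"
```

Fetching the resulting URL returns a JSON document whose `data` array lists matching packages with their versions and metadata.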
This project provides an MCP (Model Context Protocol) server for PipeCD, enabling integration with and management of PipeCD applications and deployments. It supports configuration via environment variables and includes features for resource handling and URI formatting.

This MCP server facilitates memory management for Claude by interacting with a memory text file. It allows Claude to add, search, delete, and list memories, ensuring consistent context across chats. The server uses a simple text file structure inspired by ChatGPT's memory system, making it easy to manage and retrieve memories during conversations.
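The add/search/delete/list operations on a one-memory-per-line text file can be sketched as below (the file name and line-based format are assumptions for illustration; the actual server's layout may differ):

```python
from pathlib import Path

MEMORY_FILE = Path("memories.txt")  # hypothetical location; the real server may differ

def add_memory(text: str) -> None:
    """Append a single memory as one line of the text file."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(text.strip() + "\n")

def list_memories() -> list[str]:
    """Return all stored memories, skipping blank lines."""
    if not MEMORY_FILE.exists():
        return []
    lines = MEMORY_FILE.read_text(encoding="utf-8").splitlines()
    return [line.strip() for line in lines if line.strip()]

def search_memories(query: str) -> list[str]:
    """Case-insensitive substring search over stored memories."""
    q = query.lower()
    return [m for m in list_memories() if q in m.lower()]

def delete_memory(text: str) -> bool:
    """Remove an exact memory; return False if it was not stored."""
    memories = list_memories()
    if text not in memories:
        return False
    memories.remove(text)
    MEMORY_FILE.write_text(
        "\n".join(memories) + ("\n" if memories else ""), encoding="utf-8"
    )
    return True
```

Because the store is a plain text file, it can also be inspected or edited by hand between conversations.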
The Model Context Protocol (MCP) Development Framework provides a comprehensive toolkit for creating custom tools that interact with large language models. It supports advanced functionalities such as web content retrieval, file processing (PDF, Word, Excel), and integration with Cursor IDE. The framework is designed for extensibility, offering modular components and efficient document processing capabilities.
This project combines NVIDIA AgentIQ and a NIM Inference Microservice with Anthropic's MCP and the GPT-4o-mini model to build a context-aware conversational AI. It features reasoning with NVIDIA's Llama3 Nemotron Super 49B model, tool-using agents via LangGraph, and an interactive Streamlit frontend. Setup is streamlined with the `aiq` CLI, and real-time monitoring is provided by LangFuse.
This MCP server adds URL screenshot capture to AI applications such as Claude Desktop, simplifying the generation and sharing of webpage snapshots for automation workflows. Built with Python and the MCP SDK, it is easy to set up and works in real time.
This repository focuses on rapid research and development using AI, AI Agents, and Claude MCP Servers. It includes tools for local and cloud-based AI development, such as Ollama for LLM backend, Open-WebUI for multi-LLM frontend, and Docker for easy hosting. The goal is to inspire and enable the building of advanced AI features and applications.
The LSPD Interrogation MCP Server is a specialized implementation of the Model Context Protocol (MCP) designed to simulate police interrogations. It integrates OpenAI's GPT-3.5-turbo model to generate dynamic interrogation strategies, simulate suspect responses, and create realistic dialogue flows. Key features include officer profile management, smart interrogation mechanics, and crime type integration, making it a powerful tool for training or simulation purposes.
The Filesystem Model Context Protocol Server is a lightweight Python implementation that securely exposes file contents and metadata from a preconfigured directory. It leverages FastMCP to provide endpoints for listing files and reading their contents, ensuring safe file access by validating paths and preventing directory traversal attacks. This server is designed to integrate with LLMs, enabling them to interact with local file systems in a controlled and secure manner.
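The path-validation step that prevents directory traversal can be sketched as follows (a simplified illustration; `BASE_DIR` and the function name are assumptions, not the server's actual API):

```python
from pathlib import Path

BASE_DIR = Path("/srv/shared").resolve()  # hypothetical preconfigured directory

def safe_resolve(relative: str) -> Path:
    """Resolve a user-supplied path and reject anything escaping BASE_DIR."""
    candidate = (BASE_DIR / relative).resolve()
    # Path.is_relative_to (Python 3.9+) catches ../ traversal after resolution
    if not candidate.is_relative_to(BASE_DIR):
        raise PermissionError(f"path escapes base directory: {relative}")
    return candidate
```

Resolving the path *before* the containment check is the crucial step: it normalizes `..` segments and symlinks so a crafted path such as `../etc/passwd` cannot slip past a naive string-prefix comparison.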
The BigQuery Analysis MCP Server is designed to execute SQL queries on Google BigQuery while ensuring safety and efficiency. It provides query validation (dry run) to verify query validity and estimate processing size, and it only executes SELECT queries under 1TB to prevent data modifications. The server returns query results in structured JSON format, making it ideal for integrating with tools like Claude Desktop.
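The guard logic described above can be sketched as follows. This is a simplified illustration: in the real flow the byte estimate comes from BigQuery's dry run, while here `estimated_bytes` stands in for that result and the function name is hypothetical:

```python
MAX_BYTES = 1_000_000_000_000  # the 1 TB processing cap described above

def validate_query(sql: str, estimated_bytes: int) -> None:
    """Reject non-SELECT statements and queries over the processing cap."""
    stripped = sql.strip()
    first_word = stripped.split(None, 1)[0].upper() if stripped else ""
    if first_word != "SELECT":
        raise ValueError("only SELECT queries are allowed")
    if estimated_bytes > MAX_BYTES:
        raise ValueError(
            f"query would process {estimated_bytes} bytes, over the 1 TB limit"
        )

# With the google-cloud-bigquery client, the estimate would come from a dry run:
#   job = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
#   estimated_bytes = job.total_bytes_processed
```

Running the dry run first means an oversized or invalid query is rejected before any billable processing occurs.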
This MCP server allows users to generate and execute AWS CLI commands directly from Claude, providing full access to AWS CLI capabilities. It supports listing AWS services, retrieving detailed service information, and executing commands with optional parameters. The server integrates with Claude Desktop, offering a powerful tool for managing AWS resources through conversational AI.
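The generate-then-execute flow can be sketched as below. The allow-list is a hypothetical safety addition for illustration (the server itself exposes full AWS CLI access), and the function names are assumptions:

```python
import subprocess

ALLOWED_SERVICES = {"s3", "ec2", "iam"}  # hypothetical allow-list for the sketch

def build_command(service: str, operation: str, *args: str) -> list[str]:
    """Assemble an AWS CLI argument vector from structured inputs."""
    if service not in ALLOWED_SERVICES:
        raise ValueError(f"service not allowed: {service}")
    return ["aws", service, operation, *args]

def run_aws(service: str, operation: str, *args: str) -> str:
    """Execute the assembled command and return its stdout."""
    result = subprocess.run(
        build_command(service, operation, *args),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Passing the command as an argument list (rather than a shell string) avoids shell-injection issues when parameters originate from model output.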
The Puppeteer-Extra MCP Server provides advanced browser automation capabilities by leveraging Puppeteer-Extra and the Stealth Plugin. It interacts with web pages while emulating human behavior to evade bot detection. Features include screenshot capture, console logging, JavaScript execution, and a full suite of interaction methods such as click, fill, and hover.
This repository contains a collection of server implementations designed to handle multimodal data indexing and querying, including audio, video, images, and documents. The services are orchestrated using Docker for local development, providing capabilities such as semantic search, object detection, and retrieval-augmented generation (RAG). Each server is accessible via specific endpoints and can be configured through Dockerfiles or environment variables.