LYRAI is a Model Context Protocol (MCP) operating system for multi-AI agents, designed to extend AI applications by enabling them to interact with financial networks and public blockchains. The server offers a range of advanced AI assistants, including blockchain operations (Solana, Ethereum, etc.), fintech market analysis, and learning systems for the education sector.
Check out the demo of our LYRA MCP-OS!
LYRAIOS aims to create the next-generation AI agent operating system, with technological breakthroughs in three dimensions:
For detailed architecture information, see the Architecture Documentation.
LYRAIOS adopts a layered architecture design, including the user interface layer, core OS layer, MCP integration layer, and external services layer.
Provides multiple interaction modes:
Implements basic functions of the AI operating system:
Achieves seamless integration with external services through the Model Context Protocol:
Includes various services integrated through the MCP protocol:
The Tool Integration Protocol is a key component of LYRAIOS's Open Protocol Architecture.
For examples and detailed documentation, see the Tool Integration Guide.
The Model Context Protocol (MCP) is a client-server protocol for connecting LLM applications with external integrations.
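To make the client-server model concrete, here is a minimal sketch of how an MCP client frames a request. MCP messages follow JSON-RPC 2.0; the `solana_get_balance` tool name and its arguments below are hypothetical examples, not part of the LYRAIOS API.

```python
import json

def build_mcp_request(request_id: int, method: str, params: dict) -> str:
    """Serialize an MCP request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Example: ask an MCP server to invoke a (hypothetical) Solana balance tool.
message = build_mcp_request(
    request_id=1,
    method="tools/call",
    params={"name": "solana_get_balance", "arguments": {"address": "..."}},
)
print(message)
```

In a real session this message would be sent over the server's transport (stdio or HTTP) after the usual MCP initialization handshake.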
# Clone the repo
git clone https://github.com/GalaxyLLMCI/lyraios
cd lyraios
# Create + activate a virtual env
python3 -m venv aienv
source aienv/bin/activate
# Install phidata
pip install 'phidata[aws]'
# Setup workspace
phi ws setup
# Copy example secrets
cp workspace/example_secrets workspace/secrets
# Create .env file
cp example.env .env
# Run Lyraios locally
phi ws up
# Open http://localhost:8501 in a browser to view the Streamlit App.
# Stop Lyraios locally
phi ws down
export OPENAI_API_KEY=sk-***
# To use Exa for research, export your EXA_API_KEY (get it from https://dashboard.exa.ai/api-keys)
export EXA_API_KEY=xxx
# To use Gemini for research, export your GOOGLE_API_KEY (get it from https://console.cloud.google.com/apis/api/generativelanguage.googleapis.com/overview?project=lyraios)
export GOOGLE_API_KEY=xxx
# OR set them in the `.env` file
OPENAI_API_KEY=xxx
EXA_API_KEY=xxx
GOOGLE_API_KEY=xxx
# Start the workspace using:
phi ws up
# Open http://localhost:8501 in a browser to view the Streamlit App.
# Stop the workspace using:
phi ws down
POST /api/v1/assistant/chat: Process chat messages with the AI assistant.
GET /api/v1/health: Monitor system health status.
Interactive API documentation is available at /docs, /redoc, and /openapi.json.
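A quick sketch of calling the chat endpoint from Python, using only the standard library. The request body shape (`{"message": ...}`) is an assumption for illustration; check /docs for the actual schema, and note the base URL assumes the default API_SERVER_PORT of 8000.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # default API_SERVER_PORT

def build_chat_payload(message: str) -> dict:
    """Build a request body for POST /api/v1/assistant/chat.

    NOTE: this schema is an assumption; consult /docs for the real one.
    """
    return {"message": message}

def chat(message: str) -> dict:
    """POST a chat message to the assistant and return the JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/api/v1/assistant/chat",
        data=json.dumps(build_chat_payload(message)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(chat("What is the current SOL balance of my wallet?"))
```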
lyraios/
├── ai/ # AI core functionality
│ ├── assistants.py # Assistant implementations
│ ├── llm/ # LLM integration
│ └── tools/ # AI tools implementations
├── app/ # Main application
│ ├── components/ # UI components
│ ├── config/ # Application configuration
│ ├── db/ # Database models and storage
│ ├── styles/ # UI styling
│ ├── utils/ # Utility functions
│ └── main.py # Main application entry point
├── assets/ # Static assets like images
├── data/ # Data storage
├── tests/ # Test suite
├── workspace/ # Workspace configuration
│ ├── dev_resources/ # Development resources
│ ├── settings.py # Workspace settings
│ └── secrets/ # Secret configuration (gitignored)
├── docker/ # Docker configuration
├── scripts/ # Utility scripts
├── .env # Environment variables
├── requirements.txt # Python dependencies
└── README.md # Project documentation
# Copy the example .env file
cp example.env .env
# Required environment variables
EXA_API_KEY=your_exa_api_key_here # Get from https://dashboard.exa.ai/api-keys
OPENAI_API_KEY=your_openai_api_key_here # Get from OpenAI dashboard
OPENAI_BASE_URL=your_openai_base_url # Optional: Custom OpenAI API endpoint
# OpenAI Model Configuration
OPENAI_CHAT_MODEL=gpt-4-turbo-preview # Default chat model
OPENAI_VISION_MODEL=gpt-4-vision-preview # Model for vision tasks
OPENAI_EMBEDDING_MODEL=text-embedding-3-small # Model for embeddings
# Optional configuration
STREAMLIT_SERVER_PORT=8501 # Default Streamlit port
API_SERVER_PORT=8000 # Default FastAPI port
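The variables above can be read with standard-library tooling; a minimal sketch of a settings loader that applies the documented defaults (the function name and dict shape are illustrative, not the project's actual config module):

```python
import os

def load_settings() -> dict:
    """Read LYRAIOS settings from the environment, applying the documented defaults."""
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        "openai_base_url": os.getenv("OPENAI_BASE_URL"),  # optional custom endpoint
        "chat_model": os.getenv("OPENAI_CHAT_MODEL", "gpt-4-turbo-preview"),
        "vision_model": os.getenv("OPENAI_VISION_MODEL", "gpt-4-vision-preview"),
        "embedding_model": os.getenv("OPENAI_EMBEDDING_MODEL", "text-embedding-3-small"),
        "streamlit_port": int(os.getenv("STREAMLIT_SERVER_PORT", "8501")),
        "api_port": int(os.getenv("API_SERVER_PORT", "8000")),
    }
```

In practice the project may use a dedicated config module (e.g. pydantic settings); this just shows which variables matter and what their defaults are.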
# Create Streamlit config directory
mkdir -p ~/.streamlit
# Create config.toml to disable usage statistics (optional)
cat > ~/.streamlit/config.toml << EOL
[browser]
gatherUsageStats = false
EOL
# Run both frontend and backend
python -m scripts.dev run
# Run only frontend
python -m scripts.dev run --no-backend
# Run only backend
python -m scripts.dev run --no-frontend
# Run with custom ports
python -m scripts.dev run --frontend-port 8502 --backend-port 8001
# Start Streamlit frontend
streamlit run app/app.py
# Start FastAPI backend
uvicorn api.main:app --reload
# Install production dependencies
pip install -r requirements.txt
# Install development dependencies
pip install -r requirements-dev.txt
# Install the project in editable mode
pip install -e .
# Install python-dotenv for environment management
pip install python-dotenv
# Install development tools
pip install black isort mypy pytest
Code Style
- Follow PEP 8 guidelines.
- Format code with black and sort imports with isort.
Testing
# Run tests
pytest
# Run tests with coverage
pytest --cov=app tests/
# Install pre-commit hooks
pre-commit install
# Run manually
pre-commit run --all-files
# Build development image
docker build -f docker/Dockerfile.dev -t lyraios:dev .
# Run development container
docker-compose -f docker-compose.dev.yml up
# Build production image
docker build -f docker/Dockerfile.prod -t lyraios:prod .
# Run production container
docker-compose -f docker-compose.prod.yml up -d
# Application Settings
DEBUG=false
LOG_LEVEL=INFO
ALLOWED_HOSTS=example.com,api.example.com
# AI Settings
AI_MODEL=gpt-4
AI_TEMPERATURE=0.7
AI_MAX_TOKENS=1000
# Database Settings
DATABASE_URL=postgresql://user:pass@localhost:5432/dbname
Scaling Options
- Configure worker processes via GUNICORN_WORKERS.
- Set memory limits via MEMORY_LIMIT.
- Cap concurrency via MAX_CONCURRENT_REQUESTS.
Health Checks
- Monitor the /health endpoint.
- Review logs in /var/log/lyraios/.
Backup and Recovery
# Backup database
python scripts/backup_db.py
# Restore from backup
python scripts/restore_db.py --backup-file backup.sql
Troubleshooting
Check application logs.
The system supports both SQLite and PostgreSQL databases:
# SQLite Configuration
DATABASE_TYPE=sqlite
DATABASE_PATH=data/lyraios.db
# PostgreSQL Configuration
DATABASE_TYPE=postgres
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=lyraios
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_password
The system will automatically use SQLite if no PostgreSQL configuration is provided.
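The fallback behavior described above can be sketched as a small helper that builds a connection URL from the environment. The function and URL format below are illustrative assumptions, not the project's actual implementation:

```python
import os

def build_database_url(env=os.environ) -> str:
    """Use PostgreSQL when configured; otherwise fall back to SQLite,
    mirroring the behavior described in the docs."""
    if env.get("DATABASE_TYPE") == "postgres" and env.get("POSTGRES_HOST"):
        return (
            f"postgresql://{env['POSTGRES_USER']}:{env['POSTGRES_PASSWORD']}"
            f"@{env['POSTGRES_HOST']}:{env.get('POSTGRES_PORT', '5432')}"
            f"/{env['POSTGRES_DB']}"
        )
    return f"sqlite:///{env.get('DATABASE_PATH', 'data/lyraios.db')}"
```

With no PostgreSQL variables set, this yields `sqlite:///data/lyraios.db`, matching the default SQLite configuration shown above.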
We welcome contributions! Please see the CONTRIBUTING.md file for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.