MCP and MCP SSE server that automatically generates APIs based on database schema and data. Supports PostgreSQL, ClickHouse, MySQL, Snowflake, BigQuery, and Supabase.

Publisher: gateway
Submitted: 4/13/2025

CentralMind Gateway: Supercharge Your LLMs with Instant Data Access


Unleash the Power of Your Data for AI Agents

CentralMind Gateway is designed to bridge the gap between your structured data and the world of AI agents and LLM-powered applications. It provides a streamlined, secure, and optimized pathway for AI to access and leverage your valuable data assets. Instead of wrestling with complex API configurations and potential security vulnerabilities, CentralMind Gateway lets you stand up a robust API or Model Context Protocol (MCP) server in minutes.

🚀 Interactive Demo: Experience the power firsthand with our GitHub Codespaces demo:

Deploy with GitHub Codespaces

The Challenge: Data Access for AI

AI agents thrive on data, but traditional APIs and direct database access pose significant challenges:

  • Security Risks: Exposing databases directly to AI agents creates vulnerabilities and potential data breaches.
  • Compliance Concerns: Handling sensitive data (PII) requires meticulous attention to GDPR, CPRA, and other regulations.
  • Performance Bottlenecks: Traditional APIs are not optimized for the unique demands of AI workloads.
  • Lack of Context: AI agents need rich metadata and context to effectively understand and utilize API endpoints.

The CentralMind Gateway Solution: Intelligent Data Exposure

CentralMind Gateway addresses these challenges head-on by providing an intelligent API layer that automates the creation of secure, LLM-optimized APIs for your structured data.

Key Benefits:

  • Simplified Data Access: Expose your databases to AI agents with minimal configuration and code.
  • Enhanced Security: Protect sensitive data with built-in PII filtering and redaction capabilities.
  • Compliance Assurance: Ensure adherence to data privacy regulations with comprehensive auditing and traceability features.
  • Optimized Performance: Leverage caching and other optimizations to accelerate AI workloads.
  • Context-Aware APIs: Utilize the Model Context Protocol (MCP) to provide AI agents with the necessary context for effective API interaction.

Getting Started: One-Line Deployment

Deploy CentralMind Gateway with a single Docker command:

docker run --platform linux/amd64 -p 9090:9090 \
  ghcr.io/centralmind/gateway:v0.2.6 start \
  --connection-string "postgres://db-user:db-password@db-host/db-name?sslmode=require"

This command instantly creates an API endpoint and an MCP server, ready for integration with your AI agents:

INFO Gateway server started successfully!
INFO MCP SSE server for AI agents is running at: http://localhost:9090/sse
INFO REST API with Swagger UI is available at: http://localhost:9090/

Core Features: A Deep Dive

CentralMind Gateway is packed with features designed to empower your AI initiatives:

  • ⚡ Automatic API Generation: Leverage LLMs to automatically generate APIs based on your database schema and data samples.
  • 🗄️ Broad Database Support: Connect to a wide range of databases, including PostgreSQL, MySQL, ClickHouse, Snowflake, MSSQL, BigQuery, Oracle Database, SQLite, and Elasticsearch.
  • 🌍 Multi-Protocol Support: Expose APIs via REST or MCP, including SSE mode for real-time updates.
  • 📜 Auto-Generated Documentation: Benefit from Swagger documentation and OpenAPI 3.1.0 specifications for easy API discovery and integration.
  • 🔒 Robust PII Protection: Implement regex-based or Microsoft Presidio-powered plugins for PII and sensitive data redaction.
  • ⚡ Flexible Configuration: Customize Gateway with YAML configuration and a powerful plugin system.
  • 🐳 Versatile Deployment: Deploy as a binary or Docker container, with a ready-to-use Helm chart for Kubernetes environments.
  • 🤖 Multiple AI Provider Support: Seamlessly integrate with OpenAI, Anthropic, Amazon Bedrock, Google Gemini, and Google Vertex AI.
  • 📦 Local & On-Premises LLMs: Utilize self-hosted LLMs through configurable AI endpoints and models.
  • 🔑 Row-Level Security (RLS): Implement fine-grained data access control using Lua scripts.
  • 🔐 Authentication Options: Secure your APIs with built-in API key and OAuth support.
  • 👀 Comprehensive Monitoring: Integrate with OpenTelemetry (OTel) for request tracking and audit trails.
  • 🏎️ Performance Optimization: Implement time-based and LRU caching strategies to enhance API performance.
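
To make the regex-based PII protection concrete, here is a minimal sketch of the idea in Python. This is not Gateway's actual plugin code, and the patterns are deliberately simplified assumptions; production setups (e.g. the Presidio-powered plugin) use far more robust detection:

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder like <EMAIL>."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

row = "Contact jane.doe@example.com, SSN 123-45-6789"
print(redact(row))  # Contact <EMAIL>, SSN <SSN>
```

The same shape generalizes to any column value flowing through the API: detect, then substitute a typed placeholder so downstream LLMs never see the raw value.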

How CentralMind Gateway Works: A Visual Overview


  1. Connect & Discover: Gateway connects to your database and analyzes the schema and data samples to generate an optimized API structure. LLMs are used only during the discovery phase to produce the API configuration.
  2. Deploy: Deploy Gateway using various options, including standalone binary, Docker, or Kubernetes.
  3. Use & Integrate: Access your data through REST APIs or the Model Context Protocol (MCP), seamlessly integrating with AI models and applications like LangChain, OpenAI, Claude Desktop, and Cursor.
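
The MCP SSE mode mentioned in step 3 streams messages as standard Server-Sent Events. A minimal sketch of parsing that wire format is shown below; this is not Gateway's own client code, and the sample payloads (session ID, event bodies) are hypothetical:

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse a Server-Sent Events stream into {"event": ..., "data": ...} dicts.
    Events are separated by blank lines; each field line is "name: value"."""
    events, current = [], {}
    for line in stream.splitlines():
        if not line:              # blank line terminates the current event
            if current:
                events.append(current)
                current = {}
            continue
        field, _, value = line.partition(":")
        value = value.removeprefix(" ")
        if field == "data" and "data" in current:
            current["data"] += "\n" + value   # multi-line data joins with \n
        else:
            current[field] = value
    if current:
        events.append(current)
    return events

sample = (
    "event: endpoint\n"
    "data: /message?sessionId=abc123\n"
    "\n"
    "event: message\n"
    'data: {"jsonrpc": "2.0"}\n'
    "\n"
)
for ev in parse_sse(sample):
    print(ev["event"], "->", ev["data"])
# endpoint -> /message?sessionId=abc123
# message -> {"jsonrpc": "2.0"}
```

In practice an MCP-aware client library handles this framing for you; the sketch only shows what travels over the `/sse` endpoint.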

Deep Dive: API Generation with LLMs

CentralMind Gateway leverages the power of LLMs to automate API configuration. Here's how:

  1. Choose an AI Provider: Select from supported providers like OpenAI, Anthropic, Amazon Bedrock, Google Gemini, or Google Vertex AI. Google Gemini offers a generous free tier for development and testing.

  2. Configure Authentication: Set up the necessary API keys or credentials for your chosen provider.

  3. Run the Discovery Command: Execute the gateway discover command, specifying the AI provider, connection string, and a prompt describing the desired API.

    ./gateway discover \
      --ai-provider gemini \
      --connection-string "postgresql://neondb_owner:MY_PASSWORD@MY_HOST.neon.tech/neondb?sslmode=require" \
      --prompt "Generate for me awesome readonly API"
  4. Review the Generated Configuration: Examine the gateway.yaml file to understand the generated API endpoints, parameters, and queries.
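
The generated file follows a structure along these lines. The exact schema should be taken from your own generated gateway.yaml; every field and endpoint name below is an illustrative assumption, not the authoritative format:

```yaml
# Hypothetical sketch of a generated gateway.yaml; verify against your own output.
api:
  name: Awesome Readonly API
database:
  type: postgres
  connection: "postgresql://user:pass@host/db?sslmode=require"
  endpoints:
    - http_method: GET
      http_path: /orders
      summary: List orders with pagination
      query: SELECT id, status, total FROM orders LIMIT :limit OFFSET :offset
      params:
        - name: limit
          type: integer
          default: 20
```

Reviewing (and hand-editing) this file before deployment is the main control point for which queries the API exposes.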

Running the API: Flexible Deployment Options

  • Locally:

    ./gateway start --config gateway.yaml rest
  • Docker Compose:

    docker compose -f ./example/simple/docker-compose.yml up
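
A compose file for this setup typically looks like the sketch below; the repository's example file is authoritative, and the volume path and command shown here are assumptions:

```yaml
# Illustrative docker-compose.yml; compare with ./example/simple/docker-compose.yml.
services:
  gateway:
    image: ghcr.io/centralmind/gateway:v0.2.6
    ports:
      - "9090:9090"
    volumes:
      - ./gateway.yaml:/app/gateway.yaml
    command: ["start", "--config", "/app/gateway.yaml"]
```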

MCP Protocol Integration: Seamless AI Agent Connectivity

CentralMind Gateway fully supports the Model Context Protocol (MCP) for seamless integration with AI agents like Claude.

  1. Build the Gateway Binary:

    go build .
  2. Configure Claude Desktop:

    {
      "mcpServers": {
        "gateway": {
          "command": "PATH_TO_GATEWAY_BINARY",
          "args": ["start", "--config", "PATH_TO_GATEWAY_YAML_CONFIG", "mcp-stdio"]
        }
      }
    }

Roadmap: Future Enhancements

CentralMind Gateway is constantly evolving. Planned features include:

  • Expanded Database Support: Redshift, S3 (Iceberg and Parquet), Oracle DB, Microsoft SQL Server, Elasticsearch.
  • SSH Tunneling: Secure connections through jump hosts or SSH bastions.
  • Advanced Query Capabilities: Complex filtering syntax and aggregation functions as parameters.
  • Enhanced MCP Security: API key and OAuth authentication for MCP.
  • Schema Management: Automated schema evolution and API versioning.
  • Advanced Traffic Management: Intelligent rate limiting and request throttling.
  • Write Operations Support: Insert and update operations.

CentralMind Gateway is your key to unlocking the power of your data for AI. Start building intelligent applications today!
