
mcp-snowflake-server

Snowflake integration implementing read and (optional) write operations as well as insight tracking

Publisher: mcp-snowflake-server
Submitted: 4/13/2025

Bridging LLMs and Snowflake: An Expert's Guide to the Model Context Protocol Server



Unveiling the Power of Context: Integrating LLMs with Snowflake

This document details the implementation of a Model Context Protocol (MCP) server designed to seamlessly connect Large Language Models (LLMs) with Snowflake, a leading cloud data platform. By leveraging MCP, we empower LLMs with direct access to Snowflake's vast data resources, enabling sophisticated data analysis, intelligent decision-making, and the creation of AI-driven applications with unparalleled context awareness. This server acts as a crucial bridge, transforming raw data into actionable insights for LLMs.


Core Components: Resources and Tools for Enhanced LLM Interaction

This MCP server exposes key resources and tools, providing LLMs with the necessary capabilities to interact effectively with Snowflake.

Resources: Contextual Data at Your Fingertips

  • memo://insights: A dynamic, continuously updated repository of data insights derived from Snowflake. This memo acts as a central knowledge base, aggregating findings discovered through the append_insight tool. The real-time update mechanism ensures that LLMs always have access to the latest discoveries. A client-side read of this resource is sketched just after this list.

  • context://table/{table_name}: (Optional, requires prefetch) Exposes detailed schema summaries for each table, including column definitions, data types, and descriptive comments. This resource provides LLMs with a deep understanding of the data structure, enabling more accurate and contextually relevant queries and analyses.
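
To make this concrete, here is a minimal, illustrative sketch of reading the insights memo from Python with the official MCP client SDK (the mcp package). The connection flags are placeholders; substitute your own values.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from pydantic import AnyUrl

# Launch the server over stdio; the flags below are placeholders.
server_params = StdioServerParameters(
    command="uvx",
    args=[
        "mcp_snowflake_server",
        "--account", "your_account",
        "--user", "your_user",
        "--password", "your_password",
    ],
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Fetch the continuously updated insights memo.
            memo = await session.read_resource(AnyUrl("memo://insights"))
            for item in memo.contents:
                print(getattr(item, "text", item))

asyncio.run(main())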

Tools: Empowering LLMs with Direct Database Interaction

The server provides a comprehensive suite of tools, categorized for clarity, that enable LLMs to perform a wide range of operations within Snowflake.

Query Tools: Data Retrieval and Manipulation

  • read_query: Executes SELECT statements, allowing LLMs to retrieve data from Snowflake. The tool returns query results as an array of objects, facilitating easy parsing and utilization by the LLM. A client-side invocation is sketched after this list.

    • Input:
      • query (string): The SELECT SQL query to be executed.
    • Returns: Query results as an array of objects.
  • write_query (Requires --allow-write): Executes INSERT, UPDATE, and DELETE statements, enabling LLMs to modify data within Snowflake. This tool is disabled by default for security reasons and requires explicit activation.

    • Input:
      • query (string): The SQL modification query.
    • Returns: The number of affected rows or a confirmation message.
  • create_table (Requires --allow-write): Executes CREATE TABLE statements, allowing LLMs to create new tables within Snowflake. This tool is also disabled by default and requires explicit activation.

    • Input:
      • query (string): The CREATE TABLE SQL statement.
    • Returns: Confirmation of table creation.
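
As a hedged illustration of how a client might invoke the query tools, the sketch below reuses the session scaffold shown after the Resources section and queries a hypothetical orders table:

from mcp import ClientSession

async def demo_query_tools(session: ClientSession) -> None:
    # Read-only query; available by default.
    result = await session.call_tool(
        "read_query",
        arguments={"query": "SELECT status, COUNT(*) AS n FROM orders GROUP BY status"},
    )
    print(result.content)

    # Writes succeed only if the server was started with --allow-write.
    await session.call_tool(
        "write_query",
        arguments={"query": "UPDATE orders SET status = 'archived' WHERE status = 'closed'"},
    )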

Schema Tools: Database Structure Discovery

  • list_databases: Retrieves a list of all databases within the Snowflake instance. This tool allows LLMs to discover the available data sources. A walkthrough combining the schema tools is sketched after this list.

    • Returns: An array of database names.
  • list_schemas: Retrieves a list of all schemas within a specified database. This tool enables LLMs to navigate the database structure and identify relevant schemas.

    • Input:
      • database (string): The name of the database.
    • Returns: An array of schema names.
  • list_tables: Retrieves a list of all tables within a specified database and schema. This tool provides LLMs with a comprehensive view of the available tables.

    • Input:
      • database (string): The name of the database.
      • schema (string): The name of the schema.
    • Returns: An array of table metadata.
  • describe_table: Retrieves detailed column information for a specified table, including names, data types, nullability, defaults, and comments. This tool provides LLMs with a deep understanding of the table structure, enabling more accurate and contextually relevant queries.

    • Input:
      • table_name (string): The fully qualified table name (database.schema.table).
    • Returns: An array of column definitions.
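
The schema tools compose into a natural discovery walkthrough. The sketch below is illustrative, reusing the session scaffold from earlier with placeholder database, schema, and table names:

from mcp import ClientSession

async def explore_schema(session: ClientSession) -> None:
    # Discover databases, then drill down into schemas and tables.
    dbs = await session.call_tool("list_databases")
    schemas = await session.call_tool("list_schemas", arguments={"database": "your_database"})
    tables = await session.call_tool(
        "list_tables",
        arguments={"database": "your_database", "schema": "your_schema"},
    )
    # Inspect one table's columns via its fully qualified name.
    columns = await session.call_tool(
        "describe_table",
        arguments={"table_name": "your_database.your_schema.your_table"},
    )
    print(dbs.content, schemas.content, tables.content, columns.content)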

Analysis Tools: Knowledge Extraction and Aggregation

  • append_insight: Adds new data insights to the memo://insights resource. This tool allows LLMs to contribute to the collective knowledge base, enriching the context available to other LLMs and applications.
    • Input:
      • insight (string): A data insight discovered from analysis.
    • Returns: Confirmation of insight addition.
    • Effect: Triggers an update of the memo://insights resource.
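
The insight workflow then looks roughly as follows. This is an illustrative sketch reusing the session scaffold from earlier; the insight text is a made-up example:

from mcp import ClientSession
from pydantic import AnyUrl

async def record_insight(session: ClientSession) -> None:
    # Contribute a finding; the server appends it to the shared memo.
    await session.call_tool(
        "append_insight",
        arguments={"insight": "EU order volume doubled quarter over quarter."},
    )
    # Re-reading the resource now includes the new insight.
    memo = await session.read_resource(AnyUrl("memo://insights"))
    print(memo.contents)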

Seamless Integration: Connecting to Claude Desktop

This section provides detailed instructions for integrating the Snowflake MCP server with the Claude Desktop application.

Method 1: Streamlined Installation via Smithery

Leverage the Smithery CLI for a simplified installation process:

npx -y @smithery/cli install mcp_snowflake_server --client claude

This command automates the installation and configuration of the Snowflake Server for Claude Desktop.

Method 2: Advanced Configuration with UVX

This method provides greater control over the installation and configuration process.

  1. Configure claude_desktop_config.json: Add the following configuration block to your claude_desktop_config.json file:
"mcpServers": { "snowflake_pip": { "command": "uvx", "args": [ "--python=3.12", // Optional: specify Python version <=3.12 "mcp_snowflake_server", "--account", "your_account", "--warehouse", "your_warehouse", "--user", "your_user", "--password", "your_password", "--role", "your_role", "--database", "your_database", "--schema", "your_schema" // Optionally: "--allow_write" // Optionally: "--log_dir", "/absolute/path/to/logs" // Optionally: "--log_level", "DEBUG"/"INFO"/"WARNING"/"ERROR"/"CRITICAL" // Optionally: "--exclude_tools", "{tool_name}", ["{other_tool_name}"] ] } }
  • command: Specifies the command to execute the server. In this case, uvx manages the Python environment.
  • args: An array of arguments passed to the command.
    • --python=3.12: (Optional) Specifies the Python version to use. Ensure it is <=3.12.
    • mcp_snowflake_server: The name of the MCP server package.
    • --account, --warehouse, --user, --password, --role, --database, --schema: Snowflake connection parameters. Replace these with your actual credentials.
    • --allow_write: (Optional) Enables write operations. Use with caution.
    • --log_dir: (Optional) Specifies the directory for log files.
    • --log_level: (Optional) Sets the logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL).
    • --exclude_tools: (Optional) Excludes specific tools from being exposed to the LLM.
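
Note that the location of claude_desktop_config.json depends on your platform: on macOS it is typically ~/Library/Application Support/Claude/claude_desktop_config.json, and on Windows %APPDATA%\Claude\claude_desktop_config.json.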

Method 3: Local Installation for Development and Testing

This method allows for local development and testing of the Snowflake MCP server.

  1. Install Claude AI Desktop App: Download and install the Claude AI Desktop App from https://claude.ai/download.

  2. Install uv: Install the uv package manager using the following command:

curl -LsSf https://astral.sh/uv/install.sh | sh

  3. Create a .env file: Create a .env file in the root directory of the project and add your Snowflake credentials:

SNOWFLAKE_USER="xxx@your_email.com"
SNOWFLAKE_ACCOUNT="xxx"
SNOWFLAKE_ROLE="xxx"
SNOWFLAKE_DATABASE="xxx"
SNOWFLAKE_SCHEMA="xxx"
SNOWFLAKE_WAREHOUSE="xxx"
SNOWFLAKE_PASSWORD="xxx"

# Alternatively, use external browser authentication:
# SNOWFLAKE_AUTHENTICATOR="externalbrowser"
  • Replace the placeholder values with your actual Snowflake credentials.
  • Alternatively, use external browser authentication by setting the SNOWFLAKE_AUTHENTICATOR variable to externalbrowser.
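
Before wiring the server into Claude Desktop, you can sanity-check these credentials with a short script. This is a sketch assuming the python-dotenv and snowflake-connector-python packages are installed:

import os

import snowflake.connector
from dotenv import load_dotenv

load_dotenv()  # reads the .env file created above

# Open a connection using the same variables the server will use.
conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    password=os.environ.get("SNOWFLAKE_PASSWORD"),
    role=os.environ["SNOWFLAKE_ROLE"],
    warehouse=os.environ["SNOWFLAKE_WAREHOUSE"],
    database=os.environ["SNOWFLAKE_DATABASE"],
    schema=os.environ["SNOWFLAKE_SCHEMA"],
)
print(conn.cursor().execute("SELECT CURRENT_VERSION()").fetchone())
conn.close()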

  4. [Optional] Modify runtime_config.json: Customize the server's behavior by modifying the runtime_config.json file. This file allows you to set exclusion patterns for databases, schemas, or tables, preventing them from being exposed to the LLM.
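
As a purely hypothetical illustration of what such exclusion patterns could look like (the key names below are invented for this sketch; consult the runtime_config.json shipped with the repository for the actual schema):

{
  "exclude_patterns": {
    "databases": ["^temp_"],
    "schemas": ["^information_schema$"],
    "tables": ["_staging$"]
  }
}

Regex-style patterns are assumed here; the actual matching semantics are defined by the server.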

  5. Test Locally: Run the server locally using the following command:

uv --directory /absolute/path/to/mcp_snowflake_server run mcp_snowflake_server
  • Replace /absolute/path/to/mcp_snowflake_server with the actual path to the server's directory.

  6. Add the server to your claude_desktop_config.json: Add the following configuration block to your claude_desktop_config.json file:

"mcpServers": { "snowflake_local": { "command": "/absolute/path/to/uv", "args": [ "--python=3.12", // Optional "--directory", "/absolute/path/to/mcp_snowflake_server", "run", "mcp_snowflake_server" // Optionally: "--allow_write" // Optionally: "--log_dir", "/absolute/path/to/logs" // Optionally: "--log_level", "DEBUG"/"INFO"/"WARNING"/"ERROR"/"CRITICAL" // Optionally: "--exclude_tools", "{tool_name}", ["{other_tool_name}"] ] } }
  • command: Specifies the command to execute the server.
  • args: An array of arguments passed to the command.
    • --python=3.12: (Optional) Specifies the Python version to use.
    • --directory: Specifies the directory containing the server's code.
    • run mcp_snowflake_server: Executes the server.
    • --allow_write: (Optional) Enables write operations. Use with caution.
    • --log_dir: (Optional) Specifies the directory for log files.
    • --log_level: (Optional) Sets the logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL).
    • --exclude_tools: (Optional) Excludes specific tools from being exposed to the LLM.

Important Considerations: Security and Customization

  • Write Operations: By default, write operations are disabled for security reasons. To enable them, you must explicitly use the --allow-write flag. Exercise caution when enabling write operations, as they can potentially modify data within your Snowflake instance.

  • Data Filtering: The server supports filtering out specific databases, schemas, or tables via exclusion patterns defined in the runtime_config.json file. This allows you to control the data that is exposed to the LLM, ensuring data privacy and security.

  • Prefetching: Enabling prefetching exposes additional per-table context resources, providing LLMs with more detailed information about the data structure. This can improve the accuracy and relevance of LLM-generated queries and analyses.

  • Dynamic Insights: The append_insight tool dynamically updates the memo://insights resource, ensuring that LLMs always have access to the latest discoveries and insights.

