BigQuery database integration with schema inspection and query capabilities
This document details the BigQuery Model Context Protocol (MCP) server, a crucial component for enabling Large Language Models (LLMs) to interact with and leverage the power of Google BigQuery. By implementing the MCP, this server provides a standardized interface for LLMs to access database schemas, execute queries, and ultimately gain a deeper understanding of the data landscape.
The BigQuery MCP server acts as a translator, allowing LLMs to integrate seamlessly with BigQuery's vast data warehousing capabilities. This integration unlocks possibilities such as AI-powered data analysis, intelligent data exploration, and automated report generation.
The server exposes the following tools, providing LLMs with the necessary functionalities to interact with BigQuery:
- `execute-query`: Allows LLMs to execute arbitrary SQL queries against the BigQuery database. The server supports the standard BigQuery SQL dialect, ensuring compatibility with existing queries and data structures (see the example request after this list).
- `list-tables`: Provides LLMs with a comprehensive list of all tables available within the specified BigQuery datasets, so they can discover and understand the available data assets.
- `describe-table`: Retrieves detailed schema information for a specific table, including column names, data types, and descriptions. This information is crucial for understanding the structure of the data and formulating effective queries.
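As a concrete illustration, a tool invocation reaches the server as a JSON-RPC `tools/call` request. The sketch below assumes the SQL text is passed under a `query` argument; the actual argument name is defined by the server's tool schema and may differ:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute-query",
    "arguments": {
      "query": "SELECT name, SUM(number) AS total FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name ORDER BY total DESC LIMIT 10"
    }
  }
}
```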
The BigQuery MCP server offers flexible configuration options to adapt to different environments and use cases:

- `--project` (required): Specifies the Google Cloud Platform (GCP) project ID, which is essential for authenticating and authorizing access to BigQuery resources.
- `--location` (required): Defines the GCP location (e.g., `europe-west9`) where the BigQuery dataset resides, ensuring that the server connects to the correct regional endpoint.
- `--dataset` (optional): Restricts the server's scope to specific BigQuery datasets. By specifying one or more datasets (e.g., `--dataset my_dataset_1 --dataset my_dataset_2`), you can improve performance and security by limiting the server's access to only the necessary data. If this argument is omitted, the server considers all datasets within the specified project (see the sample invocation below).
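For example, scoping the server to two datasets simply repeats the flag. A sketch of a direct invocation of the published server (the project ID and dataset names are placeholders):

```bash
# Run the published server via uvx, restricted to two datasets
uvx mcp-server-bigquery \
  --project my-gcp-project \
  --location europe-west9 \
  --dataset my_dataset_1 \
  --dataset my_dataset_2
```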
This section provides a step-by-step guide to quickly set up and run the BigQuery MCP server.

For a streamlined installation experience, you can leverage Smithery to automatically install the BigQuery Server for Claude Desktop:
```bash
npx -y @smithery/cli install mcp-server-bigquery --client claude
```
To manually configure the server for Claude Desktop, follow these steps:
Locate the Configuration File:

- macOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%/Claude/claude_desktop_config.json`
Edit the `mcpServers` Section:

Add or modify the `mcpServers` section in the `claude_desktop_config.json` file to include the BigQuery server configuration.
Development/Unpublished Servers Configuration:
"mcpServers": { "bigquery": { "command": "uv", "args": [ "--directory", "{{PATH_TO_REPO}}", "run", "mcp-server-bigquery", "--project", "{{GCP_PROJECT_ID}}", "--location", "{{GCP_LOCATION}}" ] } }
Published Servers Configuration:
"mcpServers": { "bigquery": { "command": "uvx", "args": [ "mcp-server-bigquery", "--project", "{{GCP_PROJECT_ID}}", "--location", "{{GCP_LOCATION}}" ] } }
Replace Placeholders:
Replace the following placeholders with the appropriate values:
- `{{PATH_TO_REPO}}`: The path to the BigQuery MCP server repository on your local machine.
- `{{GCP_PROJECT_ID}}`: Your Google Cloud project ID.
- `{{GCP_LOCATION}}`: The GCP location of your BigQuery dataset (e.g., `europe-west9`).
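After substituting the placeholders, a complete `claude_desktop_config.json` for the published server might look like the sketch below (the project ID is hypothetical; append `--dataset` flags as described earlier if you want to limit the server's scope):

```json
{
  "mcpServers": {
    "bigquery": {
      "command": "uvx",
      "args": [
        "mcp-server-bigquery",
        "--project", "my-gcp-project",
        "--location", "europe-west9"
      ]
    }
  }
}
```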
This section outlines the development workflow for contributing to the BigQuery MCP server.

To prepare the package for distribution, follow these steps:
Synchronize Dependencies and Update Lockfile:
```bash
uv sync
```
Build Package Distributions:
```bash
uv build
```
This command will generate source and wheel distributions in the `dist/` directory.
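For instance, after a successful build the directory should contain an sdist and a wheel along these lines (the version number is hypothetical):

```bash
ls dist/
# mcp_server_bigquery-0.1.0.tar.gz
# mcp_server_bigquery-0.1.0-py3-none-any.whl
```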
Publish to PyPI:
```bash
uv publish
```
Note: You will need to configure your PyPI credentials using environment variables or command-line flags (see the example below):

- `--token` or `UV_PUBLISH_TOKEN`
- `--username` / `UV_PUBLISH_USERNAME` and `--password` / `UV_PUBLISH_PASSWORD`
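A minimal sketch of a token-based publish, assuming you have created a PyPI API token (the token value below is a placeholder):

```bash
# Supply the PyPI API token via the environment variable recognized by uv
export UV_PUBLISH_TOKEN="pypi-REPLACE_WITH_YOUR_TOKEN"
uv publish
```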
Debugging MCP servers that communicate over standard input/output (stdio) can be challenging. To simplify the debugging process, we highly recommend using the MCP Inspector.
You can launch the MCP Inspector using `npm`:

```bash
npx @modelcontextprotocol/inspector uv --directory {{PATH_TO_REPO}} run mcp-server-bigquery
```
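If you are debugging the published package rather than a local checkout, the same pattern should work with `uvx`; this variant is an assumption based on the command above, and the project and location values are placeholders:

```bash
npx @modelcontextprotocol/inspector uvx mcp-server-bigquery \
  --project my-gcp-project --location europe-west9
```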
After launching the Inspector, it will display a URL that you can access in your browser to start debugging.
The BigQuery MCP server provides a powerful and standardized way to connect LLMs with the vast data resources of BigQuery. By enabling LLMs to access database schemas, execute queries, and understand data structures, this server unlocks a wide range of possibilities for AI-powered data analysis, intelligent data exploration, and automated report generation. This document provides a comprehensive guide to understanding, configuring, and developing with the BigQuery MCP server, empowering you to build innovative and data-driven applications.