
k8m

Provides MCP multi-cluster Kubernetes management and operations, featuring a management interface, logging, and nearly 50 built-in tools covering common DevOps and development scenarios. Supports both standard and CRD resources.

Publisher: k8m
Submitted: 4/13/2025
# ✨ k8m: AI-Powered Mini Kubernetes Dashboard - Simplify Cluster Management with AI and MCP Integration ✨

k8m is an AI-driven, lightweight Kubernetes dashboard designed to simplify cluster management. Built on AMIS and leveraging kom as a Kubernetes API client, k8m integrates Qwen2.5-Coder-7B and supports deepseek-ai/DeepSeek-R1-Distill-Qwen-7B models for enhanced interaction. It also allows integration with your own private large language models (LLMs).

## 🚀 Demo

Explore the live demo here with the credentials demo/demo.

## 📚 Documentation

Refer to the documentation for detailed configuration and usage instructions.

## 🌟 Key Features

  • Miniature Design: A single executable file for easy deployment and simple usage.
  • User-Friendly: Intuitive interface and streamlined workflows for effortless Kubernetes management.
  • High Performance: Golang backend and Baidu AMIS frontend ensure efficient resource usage and fast response times.
  • AI-Driven Integration:
    • ChatGPT-powered features: select-to-explain word highlighting, resource guides, YAML attribute translation, Describe output interpretation, AI log diagnostics, and command recommendations.
    • Integration with k8s-gpt for intelligent support in managing Kubernetes.
  • Model Context Protocol (MCP) Integration:
    • Visualize and manage MCP for LLM-driven tool execution.
    • Includes 49 built-in Kubernetes multi-cluster MCP tools, combinable for hundreds of cluster operations.
    • Acts as an MCP Server for other LLM applications, enabling LLM-driven Kubernetes management (a client-side configuration sketch follows this list).
    • Supports mainstream mcp.so services.
  • MCP Permission Integration: Seamlessly integrates multi-cluster management permissions with MCP LLM call permissions, ensuring secure usage and preventing unauthorized operations.
  • Multi-Cluster Management: Automatically identifies in-cluster configurations and scans for kubeconfig files in the same directory to manage multiple clusters.
  • Multi-Cluster Permission Management: Supports authorization for users and user groups, with cluster-specific permissions including read-only, Exec commands, and cluster administrator roles.
  • Pod File Management: Browse, edit, upload, download, and delete files within Pods.
  • Pod Runtime Management: Real-time Pod log viewing, log downloading, and direct Shell command execution within Pods.
  • CRD Management: Automatically discovers and manages CRD resources.
  • Helm Marketplace: Freely add Helm repositories and install, uninstall, and upgrade Helm applications with one click.
  • Cross-Platform Support: Compatible with Linux, macOS, and Windows, supporting x86, ARM, and other architectures.
  • Fully Open Source: All source code is open and unrestricted, allowing for customization, extension, and commercial use.
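
Because k8m can act as an MCP Server (listening on port 3619 by default), other LLM applications can register it as a tool provider. The snippet below is a minimal sketch of such a client entry; the `mcpServers` key and the `/sse` endpoint path follow common MCP client conventions and are assumptions here, so check the k8m documentation and your client's documentation for the exact format.

```jsonc
{
  "mcpServers": {
    "k8m": {
      // Assumption: SSE endpoint on k8m's default MCP Server port 3619
      "url": "http://127.0.0.1:3619/sse"
    }
  }
}
```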

k8m is designed with the philosophy of "AI-driven, lightweight, efficient, and simplified," empowering developers and operations personnel to quickly and easily manage Kubernetes clusters.

## 🏃 Running k8m

  1. Download: Get the latest version from GitHub.
  2. Run: Execute ./k8m to start the dashboard and access it at http://127.0.0.1:3618.
  3. Parameters:
```
Usage of ./k8m:
      --admin-password string        Administrator password, used when the temporary admin account is enabled (default "123456")
      --admin-username string        Administrator username, used when the temporary admin account is enabled (default "admin")
      --any-select                   Enable select-any-word highlighting for explanations, enabled by default (default true)
      --print-config                 Print configuration information (default false)
  -k, --chatgpt-key string           Custom API key for the large language model (default "sk-xxxxxxx")
  -m, --chatgpt-model string         Custom model name for the large language model (default "Qwen/Qwen2.5-7B-Instruct")
  -u, --chatgpt-url string           Custom API URL for the large language model (default "https://api.siliconflow.cn/v1")
      --connect-cluster              Automatically connect to existing clusters at startup, disabled by default
  -d, --debug                        Debug mode
      --enable-ai                    Enable AI features, enabled by default (default true)
      --enable-temp-admin            Enable the temporary administrator account, disabled by default
      --in-cluster                   Automatically register and manage the host cluster, enabled by default
      --jwt-token-secret string      Secret used to generate JWT tokens after login (default "your-secret-key")
  -c, --kubeconfig string            Path to the kubeconfig file (default "/root/.kube/config")
      --kubectl-shell-image string   Kubectl Shell image; must include the kubectl command (default "bitnami/kubectl:latest")
      --log-v int                    klog log level, e.g. klog.V(2) (default 2)
      --login-type string            Login method: password, oauth, token, etc. (default "password")
      --node-shell-image string      NodeShell image; must include the nsenter command (default "alpine:latest")
  -p, --port int                     Listening port (default 3618)
      --sqlite-path string           Path to the sqlite database file (default "./data/k8m.db")
  -s, --mcp-server-port int          MCP Server listening port (default 3619)
      --use-builtin-model            Use built-in large language model parameters, enabled by default (default true)
  -v, --v Level                      Log level for klog (default 2)
```
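
As an illustration, a typical invocation that overrides a few of the flags above might look like this (the values are placeholders, not recommended settings):

```sh
# Listen on a custom port, point at a specific kubeconfig, enable the temporary
# admin account, and use a custom LLM endpoint (all flags are from the list above).
./k8m -p 8080 \
  -c ~/.kube/config \
  --enable-temp-admin \
  --admin-username admin \
  --admin-password 'change-me' \
  -k sk-XXXXX \
  -u https://api.siliconflow.cn/v1 \
  -m Qwen/Qwen2.5-7B-Instruct
```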

Alternatively, use Docker Compose (recommended):

```yaml
services:
  k8m:
    container_name: k8m
    image: registry.cn-hangzhou.aliyuncs.com/minik8m/k8m
    restart: always
    ports:
      - "3618:3618"
      - "3619:3619"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - ./data:/app/data
```
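
With the file above saved as docker-compose.yml, the service can be started and inspected with the standard commands:

```sh
# Start k8m in the background and follow its logs
docker compose up -d
docker compose logs -f k8m
```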

After starting, access port 3618 with the default user admin and password 123456.

For a quick online experience, visit: k8m.

## 🤖 ChatGPT Configuration Guide

### Built-in GPT

From version v0.0.8 onwards, GPT is built-in and requires no configuration. If you need to use your own GPT, please refer to the following steps.

### Environment Variable Configuration

Set the following environment variables to enable ChatGPT:

```sh
export OPENAI_API_KEY="sk-XXXXX"
export OPENAI_API_URL="https://api.siliconflow.cn/v1"
export OPENAI_MODEL="Qwen/Qwen2.5-7B-Instruct"
```
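
If k8m runs under Docker Compose, the same variables can be passed through the service's environment block instead. The sketch below reuses the compose file shown earlier; only the environment section is shown, and the key/URL/model values are placeholders:

```yaml
services:
  k8m:
    # ... image, ports, and volumes as in the compose example above ...
    environment:
      TZ: Asia/Shanghai
      OPENAI_API_KEY: "sk-XXXXX"
      OPENAI_API_URL: "https://api.siliconflow.cn/v1"
      OPENAI_MODEL: "Qwen/Qwen2.5-7B-Instruct"
```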

### ChatGPT Status Debugging

If the settings do not take effect, try using ./k8m -v 6 to get more debugging information. Check the logs to confirm whether ChatGPT is enabled.

```
ChatGPT enabled: true
ChatGPT key: sk-hl**********************************************, url: https://api.siliconflow.cn/v1
ChatGPT using the model set in environment variables: Qwen/Qwen2.5-7B-Instruct
```

### ChatGPT Account

This project integrates the github.com/sashabaranov/go-openai SDK. For access within China, the SiliconFlow service is recommended; after logging in, create an API key at https://cloud.siliconflow.cn/account/ak.

## ⚙️ k8m Environment Variable Settings

The following table lists the environment variables supported by k8m and their functions:

| Environment Variable | Default Value | Description |
| --- | --- | --- |
