Klama is a CLI tool that helps diagnose and troubleshoot Kubernetes-related issues using AI-powered assistance. It interacts with language models to interpret user queries, validate and execute safe Kubernetes commands, and provide insights based on the results.
- Klama sends your query to the main AI model.
- The AI interprets the query and may suggest Kubernetes commands to gather more information.
- If a command is suggested, Klama validates it for safety using either:
  - a separate AI model (if one is provided in the configuration), or
  - user approval (if no validation model is configured).
- If the command is deemed safe, it is executed and its output is sent back to the main AI for further analysis.
- This process repeats until the AI has enough information to provide a final answer.
- Klama presents the AI's findings and any relevant Kubernetes information.
This approach allows for flexibility in model selection. A more capable model can be used for the main logic, while a faster, lighter model can optionally be used for command validation, potentially saving costs and increasing speed. If no validation model is provided, Klama will ask the user to approve each command before execution, ensuring safety and giving users full control over the commands run in their Kubernetes environment.
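To make the loop concrete, the commands the agent suggests are typically read-only `kubectl` invocations such as the following (illustrative only; the actual commands depend on your query and the model, and the pod and namespace names here are placeholders):

```sh
kubectl get pods --all-namespaces                  # survey pod status across the cluster
kubectl describe pod my-app-pod -n my-namespace    # inspect events and conditions for one pod
kubectl logs my-app-pod -n my-namespace --tail=50  # check recent container logs
```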
- Go 1.22 or higher
- Access to a Kubernetes cluster (for actual command execution)
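You can quickly verify both prerequisites from a shell before installing (assuming `kubectl` is installed and pointed at your cluster):

```sh
go version            # should report go1.22 or newer
kubectl cluster-info  # should print your cluster's control plane address
```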
You can install Klama directly from GitHub:
```sh
go install github.com/eliran89c/klama@latest
```
This will download the source code, compile it, and install the `klama` binary in your `$GOPATH/bin` directory. Make sure `$GOPATH/bin` is in your system's `PATH`.
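If the `klama` command is not found after installation, `$GOPATH/bin` is most likely missing from your `PATH`. In a bash-like shell you can add it for the current session like so:

```sh
export PATH="$PATH:$(go env GOPATH)/bin"
command -v klama  # should print the path to the installed binary
```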
Klama requires a YAML configuration file to set up the AI models. The configuration file is searched for in the following order:
1. A custom location specified by the `--config` flag
2. `$HOME/.klama.yaml`
3. `.klama.yaml` in the current directory
A valid configuration file with at least the required fields must be present for Klama to function properly.
The following fields are required in your configuration:
- `agent.model.name`: The name of the main AI model
- `agent.model.base_url`: The API endpoint for the main AI model
Klama will not run if these required fields are missing from the configuration file.
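As a quick start, a minimal configuration containing only the required fields might look like this (the model name and endpoint below are placeholders; substitute the values for your own model provider):

```sh
cat > "$HOME/.klama.yaml" <<'EOF'
agent:
  model:
    name: "gpt-4o"                        # placeholder model name
    base_url: "https://api.openai.com/v1" # placeholder endpoint
EOF
```

With no validation block configured, Klama will ask you to approve each command before it runs.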
Klama requires an OpenAI or OpenAI-compatible server to function. The application has been tested with the following frameworks and services:
- OpenAI models
- Self-hosted models using vLLM
- Amazon Bedrock models via Bedrock Access Gateway
While these have been specifically tested, any server that implements the OpenAI API should be compatible with Klama.
Create a file named `.klama.yaml` in your home directory or in the directory where you run Klama. Here's an example of what the file should contain:
```yaml
agent:
  model:
    name: "anthropic.claude-3-5-sonnet-20240620-v1:0" # Required
    base_url: "https://bedrock-gateway.example.com/api/v1" # Required
    auth_token: "" # Set via KLAMA_AGENT_TOKEN environment variable
    pricing: # Optional, will be used to calculate session price
      input: 0.003 # Price per 1K input tokens (optional)
      output: 0.015 # Price per 1K output tokens (optional)

validation: # Comment this block out to manually approve the agent commands
  model:
    name: "meta-llama/Meta-Llama-3-8B"
    base_url: "https://vllm.example.com/v1"
    auth_token: "" # Set via KLAMA_VALIDATION_TOKEN environment variable
    # pricing:
    #   input: 0
    #   output: 0
```
If the validation model is not specified, Klama will prompt the user to approve each command before execution.
You can set the authentication tokens using environment variables:
- `KLAMA_AGENT_TOKEN`: Sets the authentication token for the agent model
- `KLAMA_VALIDATION_TOKEN`: Sets the authentication token for the validation model
Example:
```sh
export KLAMA_AGENT_TOKEN="your-agent-token-here"
export KLAMA_VALIDATION_TOKEN="your-validation-model-token-here"
```
You can specify a custom configuration file location using the `--config` flag:
```sh
klama --config /path/to/your/config.yaml "Your Kubernetes query here"
```
Run Klama with your Kubernetes-related query:
```sh
klama [flags] <prompt>
```
For example:
klama "Why is my pod not starting?"
- `--config`: Specify a custom configuration file location
- `--debug`: Enable debug mode
- `--show-usage`: Show usage information
Example with flags:
```sh
klama --debug --config /path/to/config.yaml "Check the status of all pods"
```
If Klama fails to start due to missing or invalid configuration, it will provide an error message indicating the issue. Ensure that your configuration file is properly formatted and contains all required fields before running Klama.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.