LLxprt Code is a powerful fork of Google's Gemini CLI, enhanced with multi-provider support and improved theming. We thank Google for their excellent foundation and will continue to track and merge upstream changes as long as practical.
- Multi-Provider Support: Direct access to OpenAI (o3), Anthropic (Claude), Google Gemini, plus OpenRouter, Fireworks, and local models
- Enhanced Theme Support: Beautiful themes applied consistently across the entire tool
- Full Gemini CLI Compatibility: All original features work seamlessly, including Google authentication via /auth
- Local Model Support: Run models locally with LM Studio, llama.cpp, or any OpenAI-compatible server
- Flexible Configuration: Switch providers, models, and API keys on the fly
- Advanced Settings & Profiles: Fine-tune model parameters, manage ephemeral settings, and save configurations for reuse. Learn more →
With LLxprt Code you can:
- Query and edit large codebases with any LLM provider
- Generate new apps from PDFs or sketches, using multimodal capabilities
- Use local models for privacy-sensitive work
- Switch between providers seamlessly within a session
- Leverage all the powerful tools and MCP servers from Gemini CLI
- Use tools and MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
- Ground your queries with the Google Search tool when using Gemini
- Enjoy a beautifully themed interface across all commands
You have two options to install LLxprt Code.
- Prerequisites: Ensure you have Node.js version 20 or higher installed.
- Install LLxprt Code:
  npm install -g @vybestack/llxprt-code
  Or run directly with npx:
  npx https://github.com/acoliver/llxprt-code
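The Node.js prerequisite above can be checked with a short shell snippet before installing (a sketch; it only parses the output of node --version):

```shell
# Check that the installed Node.js major version meets the 20+ requirement.
if command -v node >/dev/null 2>&1; then
  major=$(node --version | cut -c2- | cut -d. -f1)
  if [ "$major" -ge 20 ]; then
    echo "OK: Node.js major version $major"
  else
    echo "WARNING: Node.js $major found, but 20 or higher is required"
  fi
else
  echo "WARNING: Node.js not found on PATH"
fi
```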
- Prerequisites: Ensure you have Homebrew installed.
- Install the CLI:
  brew install llxprt-code
  Then, run the CLI from anywhere:
  llxprt
- Run and configure:
  llxprt
- Pick a beautiful theme
- Choose your provider with /provider (defaults to Gemini)
- Set up authentication as needed
Direct access to o3, o1, GPT-4.1, and other OpenAI models:
- Get your API key from OpenAI
- Configure LLxprt Code:
/provider openai
/key sk-your-openai-key-here
/model o3-mini
Access Claude Sonnet 4, Claude Opus 4, and other Anthropic models:
- Get your API key from Anthropic
- Configure:
/provider anthropic
/key sk-ant-your-key-here
/model claude-sonnet-4-20250115
Run models locally for complete privacy and control. LLxprt Code works with any OpenAI-compatible server.
Example with LM Studio:
- Start LM Studio and load a model (e.g., Gemma 3B)
- In LLxprt Code:
/provider openai
/baseurl http://127.0.0.1:1234/v1/
/model gemma-3b-it
Example with llama.cpp:
- Start the llama.cpp server:
  ./server -m model.gguf -c 2048
- In LLxprt Code:
  /provider openai
  /baseurl http://localhost:8080/v1/
  /model local-model
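Before pointing LLxprt Code at a local server, you can confirm the endpoint is reachable. This sketch assumes the server exposes the standard OpenAI-compatible /v1/models route (llama.cpp's server and LM Studio both do):

```shell
# Probe the local OpenAI-compatible endpoint; prints a fallback
# message if nothing is listening yet.
curl -s --max-time 2 http://localhost:8080/v1/models \
  || echo "server not reachable yet - start it first"
```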
List available models:
/model
This shows all models available from your current provider.
Access 100+ models through OpenRouter:
- Get your API key from OpenRouter
- Configure LLxprt Code:
/provider openai
/baseurl https://openrouter.ai/api/v1/
/keyfile ~/.openrouter_key
/model qwen/qwen3-coder
/profile save qwen3-coder
For fast inference with popular open models:
- Get your API key from Fireworks
- Configure:
/provider openai
/baseurl https://api.fireworks.ai/inference/v1/
/key fw_your-key-here
/model accounts/fireworks/models/llama-v3p3-70b-instruct
Access Grok models through xAI's API:
- Get your API key from xAI
- Configure using the command line:
  llxprt --provider openai --baseurl https://api.x.ai/v1/ --model grok-3 --keyfile ~/.mh_key
  Or configure interactively:
  /provider openai
  /baseurl https://api.x.ai/v1/
  /model grok-3
  /keyfile ~/.mh_key
- List available Grok models:
  /model
You can still use Google's services:
- With Google Account: Use /auth to sign in
- With API Key:
  export GEMINI_API_KEY="YOUR_API_KEY"
  Or use /key YOUR_API_KEY after selecting the gemini provider
- Set key for current session: /key your-api-key
- Load key from file: /keyfile ~/.keys/openai.txt
- Environment variables: Still supported for all providers
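For the /keyfile option above, the key file can be created with restricted permissions so only your user can read it (the path matches the example above; the key value is a placeholder):

```shell
# Store an API key in a file readable only by the current user.
mkdir -p ~/.keys
printf '%s' "sk-your-openai-key-here" > ~/.keys/openai.txt
chmod 600 ~/.keys/openai.txt
```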
Start a new project:
cd new-project/
llxprt
> Create a Discord bot that answers questions using a FAQ.md file I will provide
Work with existing code:
git clone https://github.com/acoliver/llxprt-code
cd llxprt-code
llxprt
> Give me a summary of all the changes that went in yesterday
Use a local model for sensitive code:
llxprt
/provider openai
/baseurl http://localhost:1234/v1/
/model codellama-7b
> Review this code for security vulnerabilities
LLxprt Code provides powerful configuration options through model parameters and profiles:
# Fine-tune model behavior
/set modelparam temperature 0.8
/set modelparam max_tokens 4096
# Configure context handling
/set context-limit 100000
/set compression-threshold 0.7
# Save your configuration
/profile save my-assistant
# Load it later
llxprt --profile-load my-assistant
See the complete settings documentation for all configuration options.
LLxprt Code features a sophisticated prompt configuration system that allows you to customize the AI's behavior for different providers, models, and use cases. You can:
- Create custom system prompts for specific tasks
- Override provider-specific behaviors
- Add environment-aware instructions
- Customize tool usage guidelines
Learn more in the Prompt Configuration Guide.
- Learn how to contribute to or build from the source.
- Explore the available CLI Commands.
- If you encounter any issues, review the troubleshooting guide.
- For more comprehensive documentation, see the full documentation.
- Take a look at some popular tasks for more inspiration.
- Check out our Official Roadmap
- /provider - List available providers or switch provider
- /model - List available models or switch model
- /baseurl - Set custom API endpoint
- /key - Set API key for current session
- /keyfile - Load API key from file
- /auth - Authenticate with Google (for Gemini provider)
See the troubleshooting guide if you encounter issues.
Start by cd-ing into an existing or newly cloned repository and running llxprt.
> Describe the main pieces of this system's architecture.
> What security mechanisms are in place?
> Provide a step-by-step dev onboarding doc for developers new to the codebase.
> Summarize this codebase and highlight the most interesting patterns or techniques I could learn from.
> Identify potential areas for improvement or refactoring in this codebase, highlighting parts that appear fragile, complex, or hard to maintain.
> Which parts of this codebase might be challenging to scale or debug?
> Generate a README section for the [module name] module explaining what it does and how to use it.
> What kind of error handling and logging strategies does the project use?
> Which tools, libraries, and dependencies are used in this project?
> Implement a first draft for GitHub issue #123.
> Help me migrate this codebase to the latest version of Java. Start with a plan.
Use MCP servers to integrate your local system tools with your enterprise collaboration suite.
> Make me a slide deck showing the git history from the last 7 days, grouped by feature and team member.
> Make a full-screen web app for a wall display to show our most interacted-with GitHub issues.
> Convert all the images in this directory to png, and rename them to use dates from the exif data.
> Organize my PDF invoices by month of expenditure.
Head over to the Uninstall guide for uninstallation instructions.
LLxprt Code does not collect telemetry by default. Your privacy is important to us.
When using Google's services through LLxprt Code, you are bound by Google's Terms of Service and Privacy Notice. Other providers have their own terms that apply when using their services.
