Tired of the so-called "free" Copilot alternatives that are filled with paywalls and signups? Look no further, developer friend!
Twinny is your definitive, no-nonsense AI code completion plugin for Visual Studio Code and compatible editors such as VSCodium. It is designed to integrate seamlessly with a range of tools and frameworks.
Like GitHub Copilot, but 100% free!
Get AI-based suggestions in real time. Let Twinny autocomplete your code as you type.
Discuss your code via the sidebar: get function explanations, generate tests, request refactoring, and more.
- Operates online or offline
- Highly customizable API endpoints for FIM and chat
- Chat conversations are preserved
- Conforms to the OpenAI API standard
- Supports single or multiline fill-in-middle completions
- Customizable prompt templates
- Generate git commit messages from staged changes (`CTRL+SHIFT+T CTRL+SHIFT+G`)
- Easy installation via the Visual Studio Code extensions marketplace
- Customizable settings for API provider, model name, port number, and path
- Compatible with Ollama, llama.cpp, oobabooga, and LM Studio APIs
- Accepts code solutions directly in the editor
- Creates new documents from code blocks
- Copies generated code solution blocks
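Because Twinny conforms to the OpenAI API standard, any backend that exposes a compatible endpoint can serve it. As a rough illustration only (the hostname, port, and model name below are placeholders for whatever backend you actually run, not Twinny defaults), a chat request to such a backend looks like:

```shell
# Illustrative only: endpoint, port, and model are placeholders for
# your local OpenAI-compatible backend (Ollama shown as an example).
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "codellama:7b-instruct",
    "messages": [{"role": "user", "content": "Explain this function"}]
  }'
```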
- Install the Twinny extension for VS Code or VSCodium from your editor's extensions marketplace.
- Twinny uses Ollama as its default backend: install Ollama first.
- Select your model from the Ollama library (e.g., `codellama:7b-instruct` for chat and `codellama:7b-code` for autocomplete):

```shell
ollama run codellama:7b-instruct
ollama run codellama:7b-code
```
- Open VS Code (if it is already open, a restart may be needed) and press `CTRL+SHIFT+T` to open the side panel.

You should see the 🤖 icon, indicating that Twinny is ready to use.
- See the keyboard shortcuts below to start using Twinny while coding 🎉
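As a quick sanity check for the default Ollama setup above, you can pull both models up front and confirm they are available locally before opening the editor:

```shell
# Fetch the chat and autocomplete models ahead of first use,
# then list what Ollama has available locally.
ollama pull codellama:7b-instruct
ollama pull codellama:7b-code
ollama list
```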
For setups with llama.cpp, LM Studio, Oobabooga, LiteLLM, or any other provider, see providers.md for details on provider configuration and functionality.
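As one sketch of such a setup, a llama.cpp build might serve a local model like this (the binary name, model path, and port are illustrative assumptions and depend on your build and configuration):

```shell
# Illustrative llama.cpp invocation: model path and port are placeholders.
# Recent llama.cpp builds ship the HTTP server binary as `llama-server`.
./llama-server -m ./models/codellama-7b-code.Q5_K_M.gguf --port 8080
# Point Twinny's FIM provider at http://localhost:8080 afterwards.
```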
- Install the VS Code extension here.
- Obtain and run your chosen model locally using the provider's setup instructions.
- Restart VS Code if necessary and press `CTRL+SHIFT+T` to open the side panel.
- At the top of the extension, click the 🔌 (plug) icon to configure your FIM and chat endpoints in the providers tab.
- It is recommended to use separate models for FIM and chat as they are optimized for different tasks.
- Update the provider settings for chat, including provider, port, and hostname to correctly connect to your chat model.
- After setup, the 🤖 icon should appear in the sidebar, indicating that Twinny is ready for use.
- Results may vary from provider to provider, especially if you use the same model for chat and FIM interchangeably.
Twinny supports OpenAI API-compliant providers.
- Use LiteLLM as your local proxy for the best compatibility.
- If there are any issues, please open an issue on GitHub with details.
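As a sketch of the LiteLLM route (the backend and model name below are assumptions; adapt them to whatever you actually run), the proxy can be installed and started from the command line:

```shell
# Illustrative: start a local LiteLLM proxy in front of an Ollama model.
# The proxy exposes an OpenAI-compatible API (port 4000 by default).
pip install 'litellm[proxy]'
litellm --model ollama/codellama:7b-instruct
```

Twinny can then be pointed at the proxy's endpoint like any other OpenAI API-compliant provider.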
Models for Chat:
- For powerful machines: `deepseek-coder:6.7b-base-q5_K_M` or `codellama:7b-instruct`.
- For less powerful setups, choose a smaller instruct model for quicker responses, albeit with less accuracy.

Models for FIM Completions:
- High performance: `deepseek-coder:base` or `codellama:7b-code`.
- Lower performance: `deepseek-coder:1.3b-base-q4_1` for CPU-only setups.
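If you are using Ollama as the backend, the recommendations above can be fetched ahead of time; pick the pair that matches your hardware:

```shell
# Chat + FIM pair for a powerful machine
ollama pull codellama:7b-instruct
ollama pull codellama:7b-code

# Lighter FIM model for CPU-only setups
ollama pull deepseek-coder:1.3b-base-q4_1
```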
| Shortcut | Description |
|---|---|
| `ALT+\` | Trigger inline code completion |
| `CTRL+SHIFT+/` | Stop the inline code generation |
| `Tab` | Accept the inline code generated |
| `CTRL+SHIFT+T` | Open the Twinny sidebar |
| `CTRL+SHIFT+T CTRL+SHIFT+G` | Generate commit messages from staged changes |
Enable `useFileContext` in settings to improve completion quality by tracking sessions and file access patterns. This is off by default to preserve performance.
Visit the GitHub issues page for known problems and troubleshooting.
Interested in contributing? Reach out on Twitter, describe your changes in an issue, and submit a PR when ready. Twinny is open-source under the MIT license. See the LICENSE for more details.
Twinny is actively developed and provided "as is". Functionality may vary between updates.