Maverick is a code completion tool powered by AI. Built at Yurts, Maverick focuses on delivering the best code completion on your local machine without reaching out to any APIs or knowledge bases. Best of all? It's free.
Maverick can be installed on the following platforms:
- `darwin-x64` (macOS 64-bit Intel)
- `darwin-arm64` (macOS M-series)
- `linux-x64`
- `linux-arm64`

November 4, 2022: We have also released Maverick on `windows-x64` (Windows 64-bit), but have noticed issues with installation across various systems. We are actively working to address these bugs, so please be patient. For now, we omit `windows-x64` from our recommended and supported platforms.
During installation, the packaged Maverick application as well as the code completion model are downloaded. Installation can take ~10 minutes and may vary based on internet speed and compute resources.
DISCLAIMER: Inference speeds may be slow, but we are actively working to further optimize them. Please be patient and see Advanced Settings to configure Maverick to best suit your computer's architecture.
Have questions or issues with the install? Join our Discord server or file a GitHub Issue.
- Port is not available. If this is the case, hit `cmd/ctrl + shift + p` and type "Settings". Then, select Open Settings (UI) and search "Maverick port". By default, the Maverick model runs on port `9401`, but you can change this to whichever port you prefer.
- When pressing Run debugger, it shows different target options (nodejs, edge, etc.). Your VSCode root directory might be incorrect. Make sure your root directory is the folder in which the `package.json` file is.
- Error message `module "node-fetch" not found...`. You need to run `npm install`.
- You haven't enabled the inline completion feature. To enable it, set the VSCode config `"editor.inlineSuggest.enabled": true`.
- It might conflict with some other plugins. You might need to disable plugins to check.
If none of the above works, open a thread or join our Discord channel and have a chat.
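If you prefer to edit configuration by hand, the settings mentioned above can also be set directly in your `settings.json`. Note that `editor.inlineSuggest.enabled` is a standard VSCode key, but the Maverick port key shown below is an assumption — confirm the exact key name by searching "Maverick port" in Open Settings (UI):

```json
{
  "editor.inlineSuggest.enabled": true,
  "maverick.port": 9401
}
```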
If latency is an issue, try decreasing `maxTokensToGenerate` or `numLinesForContext` in your VSCode settings. To access these settings, hit `cmd/ctrl + shift + p` and type "Settings". Then, select Open Settings (UI) and search "Maverick". For more information, see Advanced Settings.
AI inline completion triggers on the key command `cmd + shift + m` (macOS) or `ctrl + shift + m` (Windows/Linux).
For example, if the following was typed into your editor:
class LinkedList:
Hitting `cmd + shift + m` or `ctrl + shift + m` would then send the prediction request to Maverick. (You can tell a prediction is in progress if the status message in the bottom left of VSCode reads "Maverick generating code...") The prediction will then render as an inline suggestion!
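To make this concrete, here is a hypothetical completion — the actual suggestion depends on the model and on the surrounding context, so treat the body below as illustrative rather than a guaranteed output:

```python
class LinkedList:
    # Everything after `class LinkedList:` is the kind of inline
    # suggestion Maverick might render after cmd/ctrl + shift + m
    # (hypothetical example, not a recorded model output):
    def __init__(self):
        self.head = None  # empty list to start; nodes are appended later
```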
Maverick comes equipped with three tunable settings depending on a user's desired workflow:
- `port`: Defaults to `9401`, the port where the Maverick model will be hosted.
- `maxTokensToGenerate`: Defaults to `32`, the number of tokens you would like to generate for every Maverick prediction. A token is similar to a word, but sometimes may be smaller.
- `numLinesForContext`: Defaults to `10`, the number of previous lines of code to send as context for every Maverick prediction.
WARNING: Latency is positively correlated with `maxTokensToGenerate` and `numLinesForContext`, i.e., increasing these values may increase latency and vice versa.
Feel free to modify these settings to best fit your workflow.
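For example, a lower-latency configuration might halve the defaults in `settings.json`. The `maverick.*` key names below are assumptions — check the Maverick entries in Open Settings (UI) for the precise names in your installed version:

```json
{
  "maverick.port": 9401,
  "maverick.maxTokensToGenerate": 16,
  "maverick.numLinesForContext": 5
}
```

Per the warning above, dropping `maxTokensToGenerate` and `numLinesForContext` below their defaults (32 and 10) trades completion length and context for speed.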
- Nov 03, 2022 - Add deload model logic
- Nov 02, 2022 - Publish the initial version
Love Maverick? Please drop us a star :) and expand the yurt.