Generate text using LLMs with customizable prompts
- Ollama with an appropriate model, e.g. `mistral:instruct` or `zephyr` (customizable)
Use the command `Gen` to generate text based on predefined and customizable prompts.
Example key maps:
```lua
vim.keymap.set('v', '<leader>]', ':Gen<CR>')
vim.keymap.set('n', '<leader>]', ':Gen<CR>')
```
You can also directly invoke it with one of the predefined prompts:
```lua
vim.keymap.set('v', '<leader>]', ':Gen Enhance_Grammar_Spelling<CR>')
```
All prompts are defined in `require('gen').prompts`; you can enhance or modify them.
Example:
```lua
require('gen').prompts['Elaborate_Text'] = {
  prompt = "Elaborate the following text:\n$text",
  replace = true
}
require('gen').prompts['Fix_Code'] = {
  prompt = "Fix the following code. Only output the result in format ```$filetype\n...\n```:\n```$filetype\n$text\n```",
  replace = true,
  extract = "```$filetype\n(.-)```"
}
```
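Note that `(.-)` in the `Fix_Code` example is Lua's shortest-match capture, so `extract` patterns appear to be matched as Lua patterns rather than PCRE-style regular expressions; write them accordingly.

To see which prompt names you can pass to `:Gen`, you can print the keys of the prompts table. A minimal sketch, assuming only that `require('gen').prompts` is a plain Lua table as described above:

```lua
-- Print the name of every prompt currently registered with gen.nvim.
-- Assumes require('gen').prompts is a plain Lua table (as documented).
for name, _ in pairs(require('gen').prompts) do
  print(name)
end
```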
You can use the following properties per prompt:
- `prompt`: (string | function) Prompt either as a string or a function which should return a string (see the sketch after this list). The result can use the following placeholders:
  - `$text`: Visually selected text
  - `$filetype`: Filetype of the buffer (e.g. `javascript`)
  - `$input`: Additional user input
  - `$register`: Value of the unnamed register (yanked text)
- `replace`: `true` if the selected text shall be replaced with the generated output
- `extract`: Regular expression used to extract the generated result
- `model`: The model to use, e.g. `zephyr`; default: `mistral:instruct`
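As a sketch of the function form of `prompt`: the description above only says the function should return a string, so the zero-argument form and the `vim.g.gen_target_lang` variable below are assumptions for illustration, not part of the plugin's documented API.

```lua
-- A prompt computed at invocation time. Placeholders like $text are
-- still substituted in the returned string, and the per-prompt `model`
-- field overrides the default model for this prompt only.
-- vim.g.gen_target_lang is a hypothetical user-defined setting.
require('gen').prompts['Translate'] = {
  prompt = function()
    local lang = vim.g.gen_target_lang or 'English'
    return 'Translate the following text to ' .. lang .. ':\n$text'
  end,
  replace = true,
  model = 'zephyr'
}
```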
You can change the default model by setting `require('gen').model = 'your_model'`, e.g.

```lua
require('gen').model = 'zephyr' -- default 'mistral:instruct'
```
Here are [all available models](https://ollama.ai/library).
You can also change the complete command with

```lua
require('gen').command = 'your command' -- default 'ollama run $model $prompt'
```
You can use the placeholders `$model` and `$prompt`.
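As one example, here is a minimal sketch that keeps the default invocation but also appends the raw model output to a log file for debugging. It assumes the command is executed through a shell (so the pipe works) and that gen.nvim reads the command's stdout; `/tmp/gen.nvim.log` is just an illustrative path:

```lua
-- Default command plus `tee -a` so the raw output is also appended to
-- a log file. Assumes a shell executes the command and gen.nvim reads
-- its stdout; /tmp/gen.nvim.log is an arbitrary example path.
require('gen').command = 'ollama run $model $prompt | tee -a /tmp/gen.nvim.log'
```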