prompts
mertkarayakuplu opened this issue · 5 comments
I've been using GPT for a bit.
Over time, my workflow has evolved into using pre-crafted pieces of prompts.
As an example, one case was targeting an environment that didn't have some otherwise common standard library functions available. Some of those had replacements, and others could be avoided by using an alternative that was worse, but still the best option when the better one isn't available.
These would become prompts like: write ..., don't use 'x', use 'y' instead, ...
There was also: ..., if you need 'x', you can use the variable 'y', ...
One I just made up is this:
write a javascript prototype. include prototypes for assert and log as well. the assert should verify types and parameters. log should print the parameters. this prototype is named
And this would be followed by, for example:
user. it should have an age, always an integer greater than 0, a name, which is nullable or a string, when it is not null, it should be longer than 10 characters.
This creates the corresponding output. It's made up, so it's not really useful, but I hope people can see the use case nevertheless.
Once you have a high-quality prompt template, it really helps a lot, even just for the boilerplate.
Anyway, in practice these weren't always at the start of a prompt. Sometimes you'd get a better result when your prompt ended with certain parameters, and other times a piece in the middle was useful, though the latter was rare.
There's a certain art to it: somewhere between being too complicated and too simple, GPT can produce really good results at times.
I'd like to suggest a configuration option that does string replacement, such as defining 'x' and then being able to write 'x' in a prompt, which would insert your pre-configured prompt text.
I wonder what people think of something along these lines?
I have been playing around a bit. It seems useful to be able to make these kinds of definitions. A few basic ones are things like 'use 4 spaces for indentation', since the prompt doesn't care about context and GPT tends to return 2 or 4 spaces of indentation, depending on how it's feeling. Providing an example is another useful one.
Most of the style prompts are probably better off being handled by an LSP/linter/parser, or as a function in the user config, instead of a longer prompt, but while playing around it was just so much easier to define a global post prompt than to write functions for these.
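As a minimal sketch of that 'global post prompt' idea, assuming a made-up g:neural_post_prompt variable and helper function (neither is part of Neural):

```vim
" Hypothetical: one piece of text appended to every prompt before it is sent
" to the model. The variable and function names are invented for this sketch.
let g:neural_post_prompt = 'Use 4 spaces for indentation.'

function! s:WithPostPrompt(prompt) abort
    return a:prompt . ' ' . g:neural_post_prompt
endfunction
```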
Machine-learning-generated code will be unpredictable and unlikely to produce results in the exact style you need. It's probably best to use something like ALE to fix the style of the results generated by Neural.
@Angelchev If you trigger a User autocmd event with doautocmd when all of the text has been written, people will be able to write something like:
autocmd User NeuralWritePost ALEFix!
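Roughly, the two halves of that suggestion might look like the following sketch; the plugin-side call and the exact event name are assumptions here, not Neural's actual code:

```vim
" Plugin side (sketch): fire a User autocmd once all generated text has been
" written to the buffer. 'silent!' avoids an error when no autocmd matches.
silent! doautocmd User NeuralWritePost

" User side: run ALE's fixers automatically whenever Neural finishes writing.
augroup neural_write_post
    autocmd!
    autocmd User NeuralWritePost ALEFix!
augroup END
```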
That makes sense to me, and it's possibly a useful piece of the puzzle. Though I've been unintentionally confusing in my text.
I was thinking of something along the lines of being able to define $nostd = 'when you need to use rand(), use x() instead,' and then writing $nostd write a function that generates an uuid as the prompt: a middle way between a finely tuned model and a well-engineered prompt.
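A minimal sketch of that kind of substitution, assuming a hypothetical g:neural_prompt_snippets option and wrapper function (neither exists in the plugin today), could look like this:

```vim
" Hypothetical: named snippets that get expanded wherever they appear in the
" prompt text. The option and function names are invented for illustration.
let g:neural_prompt_snippets = {
\   '$nostd': "when you need to use rand(), use x() instead,",
\}

function! s:ExpandSnippets(prompt) abort
    let l:result = a:prompt
    for [l:name, l:text] in items(g:neural_prompt_snippets)
        " \V (very nomagic) so the snippet name is matched literally.
        let l:result = substitute(l:result, '\V' . l:name, l:text, 'g')
    endfor
    return l:result
endfunction
```

With that in place, the prompt '$nostd write a function that generates an uuid' would expand to 'when you need to use rand(), use x() instead, write a function that generates an uuid' before being sent.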
It could be that I'm alone in the 'start every prompt the same way when working on [x] project; when using [y], make sure to mention [z]' camp, I don't know.
@w0rp Great idea. I have now added this at 90fe519 with documentation at Events.
@mertkarayakuplu Try it out and let me know if it works well for you.
It's been more fruitful than I thought, to be honest.
For js files, I use something along the lines of 'Write JSDoc comments. Don't include any comments other than JSDoc.' on all prompts, and there's the optional prototype stuff and so on, as well as prefixes. It's hacked together, so it's just 3 lines of a loop that does string replacement based on options.prompt (which I made up for this), nothing much to show.
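For what it's worth, a rough reconstruction of that kind of hack, with a made-up per-filetype table (the options.prompt name above is the commenter's own invention, not a plugin option), might be:

```vim
" Hypothetical: a per-filetype suffix appended to every prompt, e.g. the JSDoc
" instruction for JavaScript. All names here are invented for this sketch.
let s:prompt_suffixes = {
\   'javascript': "Write JSDoc comments. Don't include any comments other than JSDoc.",
\}

function! s:PreparePrompt(prompt) abort
    let l:suffix = get(s:prompt_suffixes, &filetype, '')
    return empty(l:suffix) ? a:prompt : a:prompt . ' ' . l:suffix
endfunction
```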
It really helps a lot that any output I get comes with documentation (that I need to fix). GPT tends to comment extensively, more than necessary, so I was usually deleting most of those comments before adding this.
At this point I'm using it for stuff that I know it can't write correctly, just so I get the documentation, definitions and everything, and then I replace the code with my own.
I'm not using the fixing and completion features though, just prompts; maybe those are a better fit for such a use case, I don't know.
Anyway, I'm closing the issue, as it doesn't look like this kind of workflow is common at all.