[Various] Proposed Functionalities / Roadmap (from some guy)
zudsniper opened this issue · 4 comments
Hello `kurtosis` tech team!
I have various thoughts about this project, all of which are simply ways to improve -- at least as I see it -- the usability, execution speed, functional application space, or simply the quality of life. This is to say, I think the current state of this project is already great.
BUT...
Here is my vision for how this package could be used which would, for me at least, supercharge usage of this project.
💬 Suggestions
These are not provided in any particular order, and it may well be that some should definitely be implemented before others. I leave that up to the more knowledgeable.
- 🚀 Implement `gpt4all` support (thanks!)
- 🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
- 📦 Adding `autogpt` Branch/Version Specification at Runtime
- 📖 Creating a Wiki / Nested Documentation
- ⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
- 🏗 Constructing a "`build-config`" System for Configuration / Local Caching / Pausable Instances / etc

(Kinda dumb, lol -- I guess you can't have `scrollTo` id links to headers in issues, or I'm being dumb... Either way, sorry: the links above are, save the first, bogus.)
🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
I think it would be a very useful feature to allow users who are actively developing, working with a proprietary codebase, or for any other reason deploying `autogpt` with currently unpublished forks of `Auto-GPT-Plugin-Template`, to attach their local code to an instance of `autogpt-package`.
As for the implementation, I am less clear on how it would work, as I don't fully understand `kurtosis` under the hood. However, I do think it is possible. (I also always think that... but I digress.)
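To make the idea concrete: Auto-GPT loads plugins as zip archives placed in its `plugins/` directory, so one possible bridge for local/private plugins would be to zip a working tree and hand that archive to the package. This is only a sketch -- the directory and file names below are made up for illustration, and how the archive would actually be attached to an `autogpt-package` instance is exactly the open question.

```python
# Sketch: package a local plugin working tree into an Auto-GPT-style zip
# archive. All paths here are invented for the example.
import os
import zipfile

def package_plugin(src_dir: str, out_zip: str) -> str:
    """Zip a local plugin tree, storing paths relative to the plugin root."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, src_dir))
    return out_zip

# build a throwaway plugin tree and package it
os.makedirs("my_plugin/src", exist_ok=True)
with open("my_plugin/src/__init__.py", "w") as f:
    f.write('print("plugin stub")\n')
package_plugin("my_plugin", "my_plugin.zip")
print(zipfile.ZipFile("my_plugin.zip").namelist())  # -> ['src/__init__.py']
```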
📦 Adding `autogpt` Branch/Version Specification at Runtime
A method with which we could specify the version of `autogpt` we want to use would be very helpful, especially given the drastic changes from version to version -- such as the scrapping of the `MEMORY_BACKEND` system in Auto-GPT `v0.4.0`.
Currently, I would like to be able to use `autogpt` version `0.3.1`, which can be located under the branch `stable-0.3.1` in the `Auto-GPT` repository.
📖 Creating a Wiki / Nested Documentation
While you guys have definitely documented things well enough to use your project through just the `README.md`, I think it's time you had a full-on wiki. I think the easiest, most user-friendly way to do this would be to utilize GitHub repository Wikis, but there are various possible ways. I would be happy to help create such documentation, if you would like.
⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
Finding a way to save the state of your `autogpt-package` instance, write it to a file or files, and then return to it at a later date would make utilizing this tool much nicer in terms of quality of life (in my opinion, of course).

This could also be a benefit for fault tolerance / error recovery, which would be a great thing to support -- both as something implemented by you guys, and as a way for us to write our own conditions for "failure" and hook in our own subsequent handling systems.
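The "hook in our own failure conditions" idea could look something like the following sketch. None of this is real `autogpt-package` API -- every name is invented purely to illustrate a user-pluggable failure predicate plus handler:

```python
# Illustrative only: run a step, let a user-supplied predicate decide what
# counts as failure, and call a user-supplied handler before retrying.
from typing import Callable

def run_with_recovery(step: Callable[[], str],
                      is_failure: Callable[[str], bool],
                      on_failure: Callable[[str], None],
                      max_retries: int = 3) -> str:
    for _attempt in range(max_retries):
        output = step()
        if not is_failure(output):
            return output
        on_failure(output)  # user-defined recovery hook
    raise RuntimeError(f"step kept failing after {max_retries} attempts")

# toy usage: "fail" until the third attempt
attempts = []
def step():
    attempts.append(1)
    return "error" if len(attempts) < 3 else "ok"

result = run_with_recovery(step,
                           is_failure=lambda out: "error" in out,
                           on_failure=lambda out: print("recovering from:", out))
print(result)  # -> ok
```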
🏗 Constructing a "`build`" System for Configuration / Local Caching / Pausable Instances / etc
This one is by far the biggest ask. (Save the best for last, or something.)

I am personally tired of having to provide all my arguments to the project as a single CLI string. Escaping special characters aside, this constraint adds what is, in my opinion, unnecessary overhead as a configuration grows.
As an alternative, I propose a `build` system not unlike that of `npm` or `setuptools` for Node.js and Python respectively, involving a `json`[^1] file, maybe by convention called `agpkg_config.json`[^2], which would facilitate multiple executions of the same instance -- especially if pausing is supported. (Somehow.)
Here's my suggestion, in terms of file examples.

`agpkg_config.jsonc`[^3]
```jsonc
// =============================================================================== //
/*
   __ __ __
   ____ ___ __/ /_____ ____ _____ / /_ ____ ____ ______/ /______ _____ ____
   / __ `/ / / / __/ __ \/ __ `/ __ \/ __/_____/ __ \/ __ `/ ___/ //_/ __ `/ __ `/ _ \
   / /_/ / /_/ / /_/ /_/ / /_/ / /_/ / /_/_____/ /_/ / /_/ / /__/ ,< / /_/ / /_/ / __/
   \__,_/\__,_/\__/\____/\__, / .___/\__/ / .___/\__,_/\___/_/|_|\__,_/\__, /\___/
   /____/_/ /_/ /____/
*/
// =============================================================================== //
// VERSION: ${VERSION}
// <insert project credits, authors, disclaimers, whatever else>
{
  "log_level": "DEBUG", // simple stuff like log level can go here
  "model_type": "GPT4ALL", // "OpenAI", "GPT4ALL", "llama.cpp", etc (potentially C:)
  "autogpt": {
    "branch": "stable-0.3.1",
    /* https://github.com/Significant-Gravitas/Auto-GPT/blob/master/.env.template
       This file (.env) includes all necessary configuration for autogpt, even removing the need to pass
       OpenAI API keys to `kurtosis` in plain text. */
    "env": "path/to/a/.env",
    "settings": "path/to/ai_settings.yaml"
    /* ...etc */
  },
  // if "model_type" is GPT4ALL... (or if you have some other want from the config process)
  "gpt4all": {
    "local": false, // if true, "model" should be a local filepath instead of a URL.
    /* if "local": true & "model" typeof URL, then a download of the model could be performed to a
       standard default location -- probably just ${CWD}/models/... -- wherein, after the download is
       finished, the URL value of "model" is replaced with the local filepath to the newly downloaded
       model. (this is a bit much to ask, but would be very, very cool!) */
    "model": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin",
    "temperature": 0.28
    /* other LLM settings */
  },
  "network": {
    "proxies": [
      {
        "url": "http://localhost",
        "port": 80
      }
    ]
    // idk what else, VPN via OpenVPN support? or user-agent swapping? seems like it would be
    // specified in the autogpt specifications
  }
}
```
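A config file like the one above could be precompiled into plain `json` (as footnote 3 suggests) with a small comment-stripper. Here is a naive sketch of one -- a line scanner that skips `//` and `/* */` comments while leaving string contents (such as URLs containing `//`) untouched:

```python
# Minimal JSONC -> JSON precompiler sketch: strip comments, then parse with
# the stock json module.
import json

def strip_jsonc(text: str) -> str:
    out, i, n = [], 0, len(text)
    in_str = False
    while i < n:
        c = text[i]
        if in_str:
            out.append(c)
            if c == "\\" and i + 1 < n:       # keep escaped chars verbatim
                out.append(text[i + 1])
                i += 1
            elif c == '"':
                in_str = False
        elif c == '"':
            in_str = True
            out.append(c)
        elif c == "/" and i + 1 < n and text[i + 1] == "/":
            while i < n and text[i] != "\n":   # line comment: skip to EOL
                i += 1
            continue
        elif c == "/" and i + 1 < n and text[i + 1] == "*":
            end = text.find("*/", i + 2)       # block comment: skip past */
            i = n if end == -1 else end + 2
            continue
        else:
            out.append(c)
        i += 1
    return "".join(out)

sample = '{ "log_level": "DEBUG", /* as above */ "gpt4all": { "local": false } // tail\n}'
config = json.loads(strip_jsonc(sample))
print(config["log_level"])  # -> DEBUG
```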
With this concept, a `.env` file in exactly the same format as that used by standard `autogpt` instances can be utilized off-the-shelf by a new user intending to try out `autogpt-package`. Same idea with `ai_settings.yaml`.
Along with that, all the specification for other things -- such as `gpt4all` configuration, networking, or any other custom stuff you decide to put on top of `autogpt` in this package to increase its usefulness -- could be done from one central file, and we wouldn't need to parse `.env` files, merge them into `json` data, and then escape it. It just seems like a more scalable approach in my eyes, but I could be wrong.
I also understand this would NOT be as simple as I have made it out to be -- it would be a somewhat major undertaking for the codebase -- but I wanted to put my suggestion out here. Feel free to use it as you see fit; I will not be offended. C:
📝 Summary
Here is the current `README.md`'s method of initiating `autogpt-package` with a `redis` memory backend.[^4]

```shell
$ kurtosis run github.com/kurtosis-tech/autogpt-package '{"GPT_4ALL": true, "MODEL_URL": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"}'
```
Here is how I would have things work, in a perfect world for me -- which is obviously not everyone's perfect world.

```shell
$ kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt /path/to/agpkg_config.json
```
Alright, this is insanely long, and I apologize in advance! Hopefully something in here will inspire somebody. Thank you to everyone at `kurtosis-tech` again for making this package. It was the first way that I got Auto-GPT to work at all when I was first playing with it, and it was as seamless as it purported to be.
But then I wanted more... hah.
Cheers,
Jason, or zod
Footnotes

[^1]: Or `yaml`, or even just `xml` if you don't like us very much. Format doesn't really matter here.

[^2]: This should be able to be overridden by a CLI flag, perhaps `--build-configuration <filename.json>`.

[^3]: This is just a simple extension of `json` which supports multi-line, single-line, and inline comments using `js` syntax. There is a parser from Microsoft written for Node.js. This is not required by any means; simple `json` could be used, and developers can precompile our `jsonc` or `jsonl` into appropriate `json`-conforming structures on our own time.

[^4]: I realize that the `README.md` document, while replete with information about the actual use of the repository, is a bit scattered. I think you guys should add either just a `docs/` directory with various `.md` files, or a full-on GitHub Pages Wiki -- but I will make a separate issue for that.
Hey! Thank you for being so thorough with the ticket & list of features :) Addressing a few of these for now -
📦 Adding autogpt Branch/Version Specification at Runtime
Kurtosis runs off Docker images that are published to DockerHub (or that are available locally in your Docker). I have added support for changing the version of AutoGPT we spin up using the `AUTOGPT_IMAGE` flag. Last code snippet in this section.
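Based on the comment above, picking a version would then amount to putting the `AUTOGPT_IMAGE` flag into the JSON args. A sketch that assembles the invocation -- note the image name/tag here is a made-up example, not a confirmed published tag:

```python
# Sketch: build the kurtosis command line with the AUTOGPT_IMAGE flag
# mentioned above. The image reference is illustrative only.
import json
import shlex

args = {
    "AUTOGPT_IMAGE": "significantgravitas/auto-gpt:0.3.1",  # hypothetical tag
    "OPENAI_API_KEY": "<your key>",
}
command = ("kurtosis run github.com/kurtosis-tech/autogpt-package "
           + shlex.quote(json.dumps(args)))
print(command)
```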
📖 Creating a Wiki / Nested Documentation
Would love for you to start doing this! You should be able to do so now!
🏗 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc
This I am trying to understand a bit more! Say you have a file called `args.json` in your current working directory and say it looks like

```json
{"GPT_4ALL": true, "MODEL_URL": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"}
```

You should be able to pass it to Kurtosis using

```shell
kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt "$(cat args.json)"
```
Does that work for you?
🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
I can think of some ways of supporting this. I can definitely add support for repositories that aren't in the `main.star` but are still public and available on GitHub.
⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
This we are working on as a team as an addition to core Kurtosis where users should be able to have persistent environments that they can stop and restart. This might take a while!
> 📦 Adding autogpt Branch/Version Specification at Runtime
> Kurtosis runs off Docker images that are published to DockerHub (or that are available locally in your Docker). I have added support for changing the version of AutoGPT we spin up using the `AUTOGPT_IMAGE` flag. Last code snippet in this section.

Sweet, and thank you for pointing me to this.

> 📖 Creating a Wiki / Nested Documentation
> Would love for you to start doing this! You should be able to do so now!

Sure, though it might be sporadic due to my obligations at work and at home. I do write markdown really, really hard though... whatever that means.

> 🏗 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc
> This I am trying to understand a bit more! Say you have a file called `args.json` in your current working directory and say it looks like
> [etc]

Yes... I could definitely just do this... don't mind me, I was tired or something, oops!

> 🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
> I can think of some ways of supporting this. I can definitely add support for repositories that aren't in the `main.star` but are still public and available on GitHub.

Adding support for public repositories is definitely the right direction to go in, but ideally I think being able to develop proprietary plugins, or just develop offline, would make this project usable as the only development tool necessary for `autogpt`. Whether you guys think it's the right thing to do or not: that's up to you! haha

> ⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
> This we are working on as a team as an addition to core Kurtosis, where users should be able to have persistent environments that they can stop and restart. This might take a while!

That's ok! It's cool to know I stumbled upon a feature under active development. It's certainly a difficult problem, especially when addressed at the scale of the entire `kurtosis` project. I hope to see it developed one day -- I'll be waiting C:

Thanks for the thorough response to my thorough suggestions. I will be sharing this project as much as possible with other `autogpt` enthusiasts, especially with the `GPT4ALL` addition -- I would say that is a game changer.
Hey @zudsniper, catching up on this thread! As I understand, the two potential outstanding pieces of work are:

1. Supporting Local/Unpublished/Private Auto-GPT Plugins
2. "Pausing" & "Unpausing" Instances for Prolonged Use / Execution

For no. 2, we're tracking something sort of similar here: kurtosis-tech/kurtosis#705 (though I think we'll also need the persistent data that I mentioned in #84).

For no. 1, would you mind opening a new ticket for just that piece of work, and what you had in mind, so we can track it individually? That way we can close this issue and track the individual followups.
> Hey @zudsniper, catching up on this thread! As I understand, the two potential outstanding pieces of work are:
> 1. Supporting Local/Unpublished/Private Auto-GPT Plugins
> 2. "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
> For no. 2, we're tracking something sort of similar here: kurtosis-tech/kurtosis#705 (though I think we'll also need the persistent data that I mentioned in #84).
> For no. 1, would you mind opening a new ticket for just that piece of work, and what you had in mind, so we can track it individually? That way we can close this issue and track the individual followups.
Sure, I will make an issue just for it @mieubrisse