nanoAGI is an implementation of popular autonomous task-management systems such as BabyAGI and Auto-GPT, and it takes a lot of inspiration from the LangChain Agent examples. It offers nothing innovative compared to existing solutions and is far less capable in terms of features. The main reason for building it was to learn, and to structure it in a way that makes it comfortable to research and try new features.
For a more detailed overview of the approach, please refer to BabyAGI's "generated paper" here.
- This project is not intended to be used in production. It was built for learning purposes and should not be used for anything else.
- Still under development, so it might not work as expected.
- Tested with GPT-3.5, I still don't have GPT-4 API access 😥
- Please beware of costs when using OpenAI's APIs. While Pinecone has a free tier that should be enough for testing this tool, this is not the case for OpenAI's APIs.
- Python 3.8+
- Pinecone API Key
- OpenAI API Key
- SERPER API Key
Set up a virtual environment and install the requirements:

```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Set up the environment variables:

```shell
export PINECONE_API_KEY=<your_pinecone_api_key>
export OPENAI_API_KEY=<your_openai_api_key>
export SERPER_API_KEY=<your_serper_api_key>
```
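Since all three keys are required, a startup check can fail fast with a clear message instead of producing an opaque API error mid-run. A minimal sketch (the variable names follow the exports above; `check_env` is a hypothetical helper, not part of the current code):

```python
import os

# Keys required by nanoAGI; names match the exports above.
REQUIRED_KEYS = ["PINECONE_API_KEY", "OPENAI_API_KEY", "SERPER_API_KEY"]

def check_env() -> None:
    """Exit with a clear message if any required API key is missing or empty."""
    missing = [k for k in REQUIRED_KEYS if not os.environ.get(k)]
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```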
Run the main module and pass the objective as an argument:

```shell
python main.py "I want to build a Hello World Flask app."
```
- It never stops. Even with straightforward objectives and tasks, and after arriving at the correct answer, it keeps adding new tasks to the todo list. When asked "How much is Joe Biden's age raised to the power of 3?", it initially replies without doing a search, so the answer is wrong and based on its knowledge cutoff of 2021. On the second iteration, it does the search and answers correctly.
- It tends to keep adding tasks to the todo list, changing priorities, and "rambling" over small details. The algorithm needs to be improved to avoid this behavior.
- The execution between iterations takes longer than expected; this needs to be investigated and improved.
- Context-length errors such as: `openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 5483 tokens (5227 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.` This happened when I removed the Calculator tool and got raw HTML back from a calculator website :) Maybe I can trim the HTML to get the answer. Still, it's not a good solution. With GPT-4 this shouldn't be a problem due to the increased context length.
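One pragmatic mitigation is to truncate oversized tool output (such as raw HTML) before it is inserted into the prompt. A minimal sketch, assuming a rough heuristic of ~4 characters per token for English text; `trim_observation` is a hypothetical helper, not part of the current code:

```python
# Rough heuristic for English text: about 4 characters per token.
CHARS_PER_TOKEN = 4

def trim_observation(text: str, max_tokens: int = 1000) -> str:
    """Truncate an oversized tool result before it goes into the prompt,
    keeping roughly max_tokens worth of text and marking the cut."""
    limit = max_tokens * CHARS_PER_TOKEN
    if len(text) <= limit:
        return text
    return text[:limit] + "\n...[truncated]"
```

This is lossy by design; a better long-term fix is extracting only the relevant content from the HTML, or using a model with a larger context window.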
- Clean up the code
- Add more tools. Currently, it only has `Search`, `Todo`, `Wikipedia`, `Calculator` and `Requests`. Consider including LlamaIndex
- Improve the algorithm to avoid "rambling" behavior
- Improve the execution time between iterations
- Add new agent types and iterate with different prompts
- Add a way for user feedback during execution to improve the agent's behavior
- When the algorithm is stable, consider adding a reviewer Agent to improve the quality of the answers
Contributions, issues and feature requests are welcome!
MIT License (MIT) - see LICENSE for more details.