This project gives a large language model (LLM) control of a Linux machine.
In the example below, we start with the prompt:
> You now have control of an Ubuntu Linux server. Your goal is to run a Minecraft server. Do not respond with any judgement, questions or explanations. You will give commands and I will respond with current terminal output.
>
> Respond with a linux command to give to the server.
The AI first runs `sudo apt-get update`, then installs `openjdk-8-jre-headless`. Each time it runs a command, we return the result to OpenAI and ask for a summary of what happened, then use that summary as part of the next prompt.
Inspired by xkcd.com/350 and *Optimality is the tiger, agents are its teeth*
```shell
docker network create aquarium
docker build -t aquarium .
go build
```
Pass your prompt in the form of a goal, for example `--goal "Your goal is to run a minecraft server."`

```shell
OPENAI_API_KEY=$OPENAI_API_KEY ./aquarium --goal "Your goal is to run a Minecraft server."
```
Arguments

```
./aquarium -h
Usage of ./aquarium:
  -debug
        Enable logging of AI prompts to debug.log
  -goal string
        Goal to give the AI. This will be injected within the following statement:
        > You now have control of an Ubuntu Linux server.
        > [YOUR GOAL WILL BE INSERTED HERE]
        > Do not respond with any judgement, questions or explanations. You will give commands and I will respond with current terminal output.
        >
        > Respond with a linux command to give to the server.
        (default "Your goal is to execute a verbose port scan of amazon.com.")
  -limit int
        Maximum number of commands the AI should run. (default 30)
  -preserve-container
        Persist docker container after program exits.
  -split-limit int
        When parsing long responses, we split up the response into chunks and ask the AI to summarize each chunk. split-limit is the maximum number of times we will split the response. (default 3)
```
The left side of the screen contains general information about the state of the program. The right side contains the terminal, as seen by the AI.
These are written to `aquarium.log` and `terminal.log`. Calls to OpenAI are not logged unless you add the `--debug` flag; API requests and responses will then be appended to `debug.log`.
- Send the OpenAI API the list of commands (and their outcomes) executed so far, asking it what command should run next
- Execute the command in the Docker container
- Read the output of the previous command, then send it to OpenAI and ask text-davinci-003 for a summary of what happened
- If the output was too long, the OpenAI API will return a 400 error
  - Recursively break the output into chunks and ask for a summary of each chunk
  - Ask OpenAI for a summary-of-summaries to get a final answer about what the command did
Prompt: Your goal is to execute a verbose port scan of amazon.com.
The bot replies with `nmap -v amazon.com`. `nmap` is not installed; we return the failure to the AI, which then installs it and continues.
portscan.mp4
Prompt: Your goal is to install a ngircd server.
(ngircd is IRC server software)
Installs the software, helpfully allows port 6667 through the firewall, then tries to run `sudo -i` and gets stuck.
- There are no success criteria: the program doesn't know when to stop. The `-limit` flag controls how many commands are run (default 30).
- The AI cannot give input to running programs. For example, if you ask it to SSH into a server using a password, it will hang at the password prompt. For `apt-get`, I've hacked around this issue by injecting `-y` to prevent it from asking the user for input.
- I don't have a perfect way to detect when a command completes; right now I take the number of running processes beforehand, run the command, then poll the process count until it returns to the original value. This is a brittle solution.
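The process-count heuristic can be sketched in Go. This is Linux-specific (it counts numeric entries under `/proc`), and the function names are illustrative rather than the project's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// countProcs counts numeric entries under /proc, which on Linux
// corresponds to the number of running processes.
func countProcs() (int, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return 0, err
	}
	n := 0
	for _, e := range entries {
		if _, err := strconv.Atoi(e.Name()); err == nil {
			n++
		}
	}
	return n, nil
}

// waitForBaseline polls until the process count drops back to the
// baseline recorded before the command started, or until timeout.
// As noted above, this heuristic is brittle.
func waitForBaseline(baseline int, interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if n, err := countProcs(); err == nil && n <= baseline {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	n, err := countProcs()
	if err != nil {
		fmt.Println("not on Linux:", err)
		return
	}
	fmt.Println("processes:", n, "settled:", waitForBaseline(n, 10*time.Millisecond, time.Second))
}
```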
- The terminal output handling is imperfect. Some commands, like `wget`, use `\r` to draw the progress bar; I rewrite that as `\n` instead. I also don't have any support for terminal colors, which I'm suppressing with `ansi2txt`.
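A minimal sketch of the `\r`-to-`\n` rewriting described above (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeCR rewrites carriage returns used for in-place progress
// bars (e.g. by wget) as newlines, so each redraw becomes its own
// line in the captured output. "\r\n" pairs are collapsed to "\n"
// first so they don't produce doubled blank lines.
func normalizeCR(s string) string {
	s = strings.ReplaceAll(s, "\r\n", "\n")
	return strings.ReplaceAll(s, "\r", "\n")
}

func main() {
	raw := "downloading...  10%\rdownloading...  55%\rdownloading... 100%\r\ndone"
	fmt.Print(normalizeCR(raw))
}
```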
- I haven't tried this with GPT-3 or GPT-4 yet, only `text-davinci-003`. OpenAI doesn't yet support text completion with gpt-4 (only conversational chat), so it would require restructuring the prompt.