Take ChatGPT to the command line.
Setup:
- Clone this repo
- `pip3 install -U -r requirements.txt`
- Get your `OPENAI_API_KEY` and put it in `config.json`
```
$ ./gptcli.py -h
usage: gptcli.py [-h] [-c CONFIG]

options:
  -h, --help  show this help message and exit
  -c CONFIG   path to config.json (default: config.json)
```
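The two options above map naturally onto Python's `argparse`. A minimal sketch, assuming the usage text reflects the real interface (the actual `gptcli.py` source may differ):

```python
# Sketch of the CLI shown above; option names and defaults are taken
# from the usage text, not from the actual gptcli.py implementation.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="gptcli.py")
    parser.add_argument("-c", dest="config", default="config.json",
                        help="path to config.json (default: config.json)")
    return parser

args = build_parser().parse_args(["-c", "my-config.json"])
```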
Sample `config.json`:

```jsonc
{
    "key": "",                // your API key; read from the OPENAI_API_KEY environment variable if empty
    "model": "gpt-3.5-turbo", // GPT model
    "stream": true,           // stream mode
    "response": true,         // attach responses to the prompt; consumes more tokens for better results
    "proxy": "",              // http/https/socks4a/socks5 proxy for requests to api.openai.com
    "prompt": [               // customize your prompt
        { "role": "system", "content": "Show your response in Markdown format with syntax highlight if it contains code, or just plaintext" },
        { "role": "assistant", "content": "OK" }
    ]
}
```
Console help (with tab-complete):

```
$ ./gptcli.py
Input: -h
usage: Input [-help] [-reset] [-exit] [-multiline]

options:
  -help       show this help message
  -reset      reset the session, i.e. clear chat history
  -exit       exit the console
  -multiline  input multiple lines; end with Ctrl-D (Linux/macOS) or Ctrl-Z (Windows), cancel with Ctrl-C
```
Run in Docker:

```sh
# build
$ docker build -t gptcli:latest .

# run
$ docker run -it --rm -v $PWD/.key:/gptcli/.key gptcli:latest -h

# for host proxy access:
$ docker run --rm -it -v $PWD/.key:/gptcli/.key --network host gptcli:latest -rp socks5://127.0.0.1:1080
```
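A proxy string like `socks5://127.0.0.1:1080` is typically applied to every request to `api.openai.com`. As a hedged sketch only, assuming the `requests` library (a plausible choice, not confirmed from this README; SOCKS proxies additionally require `requests[socks]`):

```python
# Sketch: route an HTTP session's traffic through a proxy from config.json.
# Illustrative; gptcli's actual HTTP layer may differ.
import requests

def make_session(proxy: str) -> requests.Session:
    session = requests.Session()
    if proxy:
        # requests takes a scheme-to-proxy mapping; use one proxy for both.
        session.proxies = {"http": proxy, "https": proxy}
    return session
```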
Features:
- Session based
- Markdown support
- Syntax highlighting
- Proxy support
- Multiline input
- Streamed output
- Save and load sessions from file