what

summarize.py
receives a long text via stdin, feeds it to gpt chunk by chunk, and maintains a reducer that folds each new chunk into the state built from prior chunks (a minimal sketch follows this list).
ask.py
sends stdin to gpt and writes the response to stdout; the temperature can be passed as an argument (also sketched after this list).
concat.py
receives a list of file paths over stdin and concatenates their contents into a format gpt can work with, ready to be piped into ask.py (sketched after the pipeline under usage).
extract.py
extracts code blocks from a markdown gpt response (also sketched under usage).
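
the core loop of summarize.py can be pictured roughly as below. this is a minimal sketch, assuming the official openai python client, a placeholder model name and an arbitrary chunk size; the real script also persists its state to the cache file passed on the command line (see usage), which is left out here.

    # minimal sketch of the chunk-and-reduce idea; chunk size, prompts, model name
    # and client usage are assumptions, not summarize.py's actual values
    import sys
    from openai import OpenAI   # assumes the official openai python client

    client = OpenAI()
    CHUNK_CHARS = 8000           # hypothetical chunk size

    def reduce_step(summary, chunk):
        # fold the next chunk into the running summary (the "reducer" state)
        prompt = (
            "Summary so far:\n" + summary
            + "\n\nNew text:\n" + chunk
            + "\n\nUpdate the summary so it also covers the new text."
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

    text = sys.stdin.read()
    summary = ""
    for start in range(0, len(text), CHUNK_CHARS):
        summary = reduce_step(summary, text[start:start + CHUNK_CHARS])
    print(summary)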
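
ask.py is essentially that single gpt call on its own; a minimal sketch, again assuming the openai client and a placeholder model:

    # rough shape of ask.py: stdin in, completion out, temperature via argv
    # (model name and argument handling are assumptions)
    import sys
    from openai import OpenAI   # assumes the official openai python client

    temperature = float(sys.argv[1]) if len(sys.argv) > 1 else 1.0
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",      # placeholder model name
        messages=[{"role": "user", "content": sys.stdin.read()}],
        temperature=temperature,
    )
    sys.stdout.write(reply.choices[0].message.content + "\n")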

why

summarize.py
tries to work around the context length (token) limit of LLMs.
ask.py, concat.py, extract.py
are an exploratory attempt to introduce gpt into the author’s CLI-based development cycle.

usage

long text summarization

cat long-text.txt | summarize.py long-text-cache.json

code generation

here is what a typical development scenario might look like.

 T=chunking-of-concat-ask-flow ;
 (echo tasks/$T.org ; git ls-files) \
	| concat.py "Follow the directives in tasks/$T.org" \
	| ask.py \
	| tee "$T.log" \
	| extract.py "$T.1.diff" "$T.2.py"
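
the "gpt-understood format" produced by concat.py is not spelled out above; one plausible shape puts the instruction argument first and labels each file by its path. the labelling scheme below is an assumption, not the script's actual format.

    # plausible shape of concat.py: the instruction argument first, then each file
    # from stdin labelled by its path (the labelling is an assumption)
    import sys

    parts = [sys.argv[1]] if len(sys.argv) > 1 else []
    for line in sys.stdin:
        path = line.strip()
        if not path:
            continue
        with open(path) as f:
            parts.append("--- " + path + " ---\n" + f.read())
    print("\n\n".join(parts))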
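
extract.py closes the loop by pulling the fenced code blocks out of the reply; since the pipeline hands it the output filenames as arguments, a sketch that writes the n-th block to the n-th filename could look like this (the block-to-filename mapping is an assumption):

    # sketch of extract.py: pull fenced code blocks from a markdown reply and
    # write the n-th block to the n-th filename argument (mapping is an assumption)
    import re
    import sys

    blocks = re.findall(r"```[^\n]*\n(.*?)```", sys.stdin.read(), flags=re.DOTALL)
    for path, block in zip(sys.argv[1:], blocks):
        with open(path, "w") as f:
            f.write(block)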

how

pip install -r requirements.txt

development directives for GPT

  1. all code must be provided in the form of valid diffs that can be extracted with extract.py and applied on the spot.
  2. when applicable, reply with the specific concat.py | ask.py command pipeline including only the files necessary for the given task.
  3. tasks and text must be output in org-mode format.
  4. testing must come ahead of implementation.