A random collection of tools for interacting with LLMs.
- memocache.py caches the results of function calls on disk, which is useful for projects that make bulk API calls.
- parallel_processor.py is for making parallel API calls.
- equivalent_model_wrapper.py stores OpenAI model names that currently point to the same underlying model, which is useful for spreading parallel calls across them.
TODO: add demo.
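Until a demo lands, here is a minimal sketch of the two core ideas: disk-backed memoization of API calls and fanning requests out over a thread pool. The `memocache` decorator and `fake_api_call` below are hypothetical stand-ins; the actual modules' APIs may differ.

```python
import concurrent.futures
import functools
import hashlib
import json
import os
import pickle
import tempfile

def memocache(cache_dir):
    """Cache a function's results on disk, keyed by its name and arguments.

    Sketch of the idea behind memocache.py, not its real API.
    """
    os.makedirs(cache_dir, exist_ok=True)

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Hash the function name and arguments into a stable cache key.
            raw = json.dumps([fn.__name__, args, kwargs],
                             sort_keys=True, default=str)
            key = hashlib.sha256(raw.encode()).hexdigest()
            path = os.path.join(cache_dir, key + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)  # cache hit: skip the call entirely
            result = fn(*args, **kwargs)
            with open(path, "wb") as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator

call_count = 0

@memocache(tempfile.mkdtemp())
def fake_api_call(prompt):
    """Stand-in for a real LLM API call."""
    global call_count
    call_count += 1
    return f"response to {prompt!r}"

fake_api_call("hello")
fake_api_call("hello")  # second call is served from the on-disk cache

# Fan out over distinct prompts, in the spirit of parallel_processor.py.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_api_call, ["a", "b", "c"]))
```

Because results are keyed by arguments and persisted to disk, re-running a bulk job only pays for prompts it has not seen before.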