Chatify is a Python package that adds IPython magic commands to Jupyter notebooks, providing LLM-driven enhancements to markdown and code cells. This package is currently in the alpha stage: expect broken things, crashes, bad (wrong, misleading) answers, and other serious issues. That said, we think Chatify is pretty neat even in this early form, and we're excited about its future!
This tool was originally created to supplement the Neuromatch Academy materials. A "Chatify-enhanced" version of the Neuromatch computational neuroscience course may be found here, and an enhanced version of the deep learning course may be found here.
To install and enable Chatify in any Jupyter (IPython) notebook, add the following two cells to the top of your notebook (and run them):
```python
%pip install davos
```

```python
import davos
davos.config.suppress_stdout = True
smuggle chatify  # pip: git+https://github.com/ContextLab/chatify.git
%load_ext chatify
```
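If you'd rather not use davos, you should be able to install Chatify directly with pip instead (a minimal sketch; the GitHub URL is taken from the smuggle comment above, and this route hasn't been verified in every environment):

```python
%pip install git+https://github.com/ContextLab/chatify.git
%load_ext chatify
```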
No further setup is required. To interact with Chatify about any code in the notebook, simply insert the `%%explain` magic command at the top of the code cell and then run it (shift + enter) to access the Chatify interface. To disable Chatify and run the code block as usual, just delete the `%%explain` command and re-run the cell (e.g., by pressing shift + enter again).
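For example, a code cell like the following would open the Chatify interface when run (the NumPy snippet is just an arbitrary placeholder for your own code):

```python
%%explain
# With %%explain at the top, running this cell opens the Chatify
# interface instead of executing the code below.
import numpy as np

x = np.random.randn(100)
print(x.mean())
```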
Chatify is designed to work by default in the free tiers of Colaboratory and Kaggle notebooks, and to operate without requiring any additional costs or setup beyond installing and enabling Chatify itself.
Chatify is designed to work on a variety of systems and setups, including the "free" tiers of Google Colaboratory and Kaggle. For setups with additional resources, it is possible to switch to better-performing or lower-cost models. Chatify works in CPU-only environments, but is GPU-friendly (on both CUDA-enabled and Metal-enabled systems). We support any text-generation model on Hugging Face, Meta's Llama 2 models, and OpenAI's ChatGPT models (both GPT-3.5 and GPT-4). Models that run on Hugging Face's or OpenAI's servers require a Hugging Face API key or an OpenAI API key, respectively.
Once you have your API key(s), if needed, create a `config.yaml` file in the directory where you launch your notebook. For the OpenAI configuration, replace `<OPENAI API KEY>` with your actual OpenAI API key (with no quotes). If you have an OpenAI API key, adding the following `config.yaml` file to your local directory (after filling in your key) will substantially improve your experience:
```yaml
cache_config:
  cache: False
  caching_strategy: exact  # alternative: similarity
  cache_db_version: 0.1
  url: <URL>  # ignore this

feedback: False

model_config:
  open_ai_key: <OPENAI API KEY>
  model: open_ai_model
  model_name: gpt-3.5-turbo
  max_tokens: 2500

chain_config:
  chain_type: default

prompts_config:
  prompts_to_use: [tutor, tester, inventer, experimenter]
```
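As a quick sanity check (a minimal sketch, assuming PyYAML is installed), you can verify that the file parses and that the placeholder key has been replaced before launching your notebook:

```python
import yaml

# Load the config from the directory where the notebook will be launched.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Make sure the <OPENAI API KEY> placeholder was replaced with a real key.
key = config["model_config"]["open_ai_key"]
assert not str(key).startswith("<"), "Replace <OPENAI API KEY> with your actual key"
print(f"Using model: {config['model_config']['model_name']}")
```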
If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! The 7B and 13B variants of Llama 2 both run on the free tiers of Google Colaboratory and Kaggle, but the 13B variant is substantially slower (hence we use the 7B variant by default). Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
```yaml
cache_config:
  cache: False
  caching_strategy: exact  # alternative: similarity
  cache_db_version: 0.1
  url: <URL>  # ignore this

feedback: False

model_config:
  model: llama_model
  model_name: TheBloke/Llama-2-70B-Chat-GGML  # can also replace "70B" with either "7B" or "13B" on this line and the next
  weights_fname: llama-2-70b-chat.ggmlv3.q5_1.bin
  max_tokens: 2500
  n_gpu_layers: 40
  n_batch: 512

chain_config:
  chain_type: default

prompts_config:
  prompts_to_use: [tutor, tester, inventer, experimenter]
```
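The `n_gpu_layers` setting only helps if a GPU is actually visible to the runtime. One way to check (an illustrative sketch assuming PyTorch is installed; PyTorch is not a Chatify requirement) is:

```python
import torch

# n_gpu_layers offloads model layers to the GPU; with no GPU visible,
# you may want n_gpu_layers: 0 for CPU-only inference.
if torch.cuda.is_available():
    print(f"CUDA device available: {torch.cuda.get_device_name(0)}")
elif torch.backends.mps.is_available():
    print("Metal (MPS) device available")
else:
    print("No GPU detected; consider setting n_gpu_layers: 0")
```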
If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! This will likely require lots of RAM, but it's a nice way to explore a wide variety of models. Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
```yaml
cache_config:
  cache: False
  caching_strategy: exact  # alternative: similarity
  cache_db_version: 0.1
  url: <URL>  # ignore this

feedback: False

model_config:
  model: huggingface_model
  model_name: TheBloke/Llama-2-70B-Chat-GGML  # replace with any text-generation model on Hugging Face!
  max_tokens: 2500
  n_gpu_layers: 40
  n_batch: 512

chain_config:
  chain_type: default

prompts_config:
  prompts_to_use: [tutor, tester, inventer, experimenter]
```
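If you want to browse candidate replacements for the `model_name` field (a sketch assuming the `huggingface_hub` package is installed; the filter value is the standard Hugging Face `text-generation` tag):

```python
from huggingface_hub import list_models

# List a few text-generation models whose IDs could be dropped into
# the model_name field of config.yaml.
for model in list_models(filter="text-generation", limit=5):
    print(model.id)
```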
After saving your `config.yaml` file, follow the "Installing and enabling Chatify" instructions above.
We'd love to hear from you! Please consider filling out our feedback survey or submitting an issue.
Yay-- welcome 🎉! This is a new project (in the "concept" phase) and we're looking for all the help we can get! If you're new around here and want to explore/contribute, here's how:
- Fork this repository so that you can work with your own "copy" of the code base
- Take a look at our Project Board and/or the list of open issues to get a sense of the current project status, todo list, etc.
- Feel free to add your own issues/tasks, comment on existing issues, etc.
In general, we've broken down tasks into "coding" tasks (which require some amount of coding, likely in Python) and "non-coding" tasks (which do not require coding).
If you have questions, ideas, etc., also please check out the discussion board!