🦋 Build complex chat apps using LLMs in 4 clicks ⚡️ Try it out here
ChainFury is a powerful tool that simplifies the creation and management of chains of prompts, making it easier to build complex chat applications using LLMs.
With a simple GUI inspired by LangFlow, ChainFury enables you to chain components of LangChain together, allowing you to embed more complex chat applications with a simple JS snippet.
ChainFury supports a range of features, including but not limited to:
- Recording all prompts and responses and storing them in a database
- Collecting metrics like response latency
- Querying OpenAI's API to obtain a rating for the response, which it stores in the database
- Separate scoring mechanism per ChatBot to easily view performance in a dashboard
- Plugins to extend the functionality of ChainFury with callbacks
ChainFury can be installed in two ways: with Docker (recommended) or manually. The easiest way to run ChainFury is with Docker, using the following commands:
```bash
docker build . -f Dockerfile -t chainfury:latest
docker run --env OPENAI_API_KEY=<your_key_here> -p 8000:8000 chainfury:latest
```
Now you can access the app on localhost:8000.
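For a quick sanity check that the container is serving requests, you can probe the port; any HTTP response means the app is up:

```bash
# minimal liveness check: any HTTP response means the server is listening
curl -I http://localhost:8000
```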
Optional environment variable for the database
You can also pass a database URL to the Docker container using the `DATABASE_URL` environment variable. If you do not pass one, ChainFury will fall back to a SQLite database. Example:
```bash
docker run -it -e DATABASE_URL="mysql+pymysql://<user>:<password>@127.0.0.1:3306/<database>" -p 8000:8000 chainfury:latest
```
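Environment variables compose in the usual Docker way, so passing both the OpenAI key and a database URL in one command looks like this:

```bash
docker run -e OPENAI_API_KEY=<your_key_here> \
  -e DATABASE_URL="mysql+pymysql://<user>:<password>@127.0.0.1:3306/<database>" \
  -p 8000:8000 chainfury:latest
```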
To install ChainFury manually, you will need to build the frontend and then run the backend. The frontend can be built using the following commands:
```bash
cd client
yarn install
yarn build
```
To copy the built frontend into the backend, run the following commands:
```bash
cd ..
cp -r client/dist/ server/static/
mkdir -p ./server/templates
cp ./client/dist/index.html ./server/templates/index.html
```
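As a quick check that the copy landed where the backend expects it (based on the paths above), you can list the target directories:

```bash
ls server/static      # should contain the built assets from client/dist
ls server/templates   # should contain index.html
```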
Now you can install the backend dependencies and run the server. We recommend using a Python 3.9 virtual environment for this:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cd server
python3 -m uvicorn app:app --log-level=debug --host 0.0.0.0 --port 8000 --workers 1
```
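As with the Docker setup, the server presumably reads your OpenAI key from the environment; assuming that is the case, export it before starting uvicorn:

```bash
# assumption: the server reads OPENAI_API_KEY from the environment,
# mirroring the --env flag in the Docker invocation above
export OPENAI_API_KEY=<your_key_here>
```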
How to run
Assuming you are in the `server` directory, you can run the server using the following command:
```bash
python3 server.py --port 8000 --config_plugins='["echo"]'
```
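Since `--config_plugins` takes a JSON list, loading several plugins presumably means extending that list; a sketch, where `my_plugin` is a hypothetical plugin name (`echo` is the only plugin named here):

```bash
# hypothetical: "my_plugin" stands in for any additional plugin you have installed
python3 server.py --port 8000 --config_plugins='["echo", "my_plugin"]'
```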
Now you can access the app on localhost:8000.
- Start the server using the Dockerfile provided or the manual method.
- Log into ChainFury by entering username = "admin" and password = "admin".
- Click on "Create Chatbot".
- Use one of the pre-configured chatbots or use the elements to create a custom chatbot.
- Save and create your chatbot, then start chatting with it by clicking the chat icon on the bottom-right. You can see chatbot statistics and feedback metrics in your ChainFury dashboard.
There are six main areas that LangChain is designed to help with. ChainFury builds LLM chatbots from the same concepts. The components are, in increasing order of complexity:
| Glossary | LangChain | ChainFury |
|---|---|---|
| 📃 LLMs and Prompts | Prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs | Easy prompt management with GUI elements |
| 🔗 Chains | Chains are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications | Easy chain management with a GUI |
| 📚 Data Augmented Generation | Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question answering over specific data sources | Coming soon |
| 🤖 Agents | Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents | Easy agent management with a GUI |
| 🧠 Memory | Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory | Memory modules are supported; persistent memory coming soon |
| 🧐 Evaluation | [BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this | Auto evaluation of all prompts through OpenAI APIs |
ChainFury is an open-source project currently in the alpha stage. We welcome contributions to the project in the form of features, infrastructure, or documentation.
- To contribute to this project, please follow a "fork and pull request" workflow. Please do not try to push directly to this repo unless you are a maintainer.
- Our issues page is kept up to date with bugs, improvements, and feature requests.
- If you're looking for help with your code, consider posting a question on the GitHub Discussions board so that more people can benefit from it.
- Describing your issue: try to provide as many details as possible. What exactly goes wrong? How is it failing? Is there an error? "XY doesn't work" usually isn't that helpful for tracking down problems. Always include the code you ran and, if possible, extract only the relevant parts rather than dumping your entire script. This will make it easier for us to reproduce the error.
- Sharing long blocks of code or logs: if you need to include long code, logs, or tracebacks, you can wrap them in `<details>` and `</details>`. This collapses the content so it only becomes visible on click, making the issue easier to read and follow.
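For example, a long traceback in an issue body might be wrapped like this (the `<summary>` line is optional but makes the collapsed block easier to scan):

```markdown
<details>
<summary>Full traceback</summary>

Paste the long traceback or log output here.

</details>
```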