DB-GPT is an experimental open-source project that uses local large language models to interact with your data and environment. With this solution, there is no risk of data leakage: your data stays 100% private and secure.

Demo (running on a single RTX 4090 GPU): dbgpt_demo.mp4
Currently, we have released several key features, listed below to demonstrate our current capabilities:

- SQL language capabilities
  - SQL generation
  - SQL diagnosis
- Private domain Q&A and data processing
  - Knowledge management (we currently support many document formats: txt, pdf, md, html, doc, ppt, and url)
  - Database knowledge Q&A
  - Knowledge embedding
- ChatDB
- ChatDashboard
- Plugins
  - Support for custom plugin execution tasks, with native support for Auto-GPT plugins, such as:
    - Automatic execution of SQL and retrieval of query results
    - Automatic crawling and learning of knowledge
- Unified vector storage/indexing of the knowledge base
  - Support for unstructured data such as PDF, TXT, Markdown, CSV, DOC, PPT, and WebURL
- Multi-LLM support; currently supported models:
  - 🔥 Vicuna-v1.5 (7b, 13b)
  - 🔥 Llama-2 (7b, 13b, 70b)
  - WizardLM-v1.2 (13b)
  - Vicuna (7b, 13b)
  - ChatGLM-6B (int4, int8)
  - ChatGLM2-6B (int4, int8)
  - Guanaco (7b, 13b, 33b)
  - Gorilla (7b, 13b)
  - Baichuan (7b, 13b)
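The model in use is typically selected through configuration. As an illustrative sketch (the parameter name `LLM_MODEL` and the value format are assumptions; check your own `.env.template` for the exact names your version supports):

```env
# Hypothetical .env fragment: choose which local LLM to serve.
LLM_MODEL=vicuna-13b-v1.5
```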
DB-GPT builds a large model operating environment on top of FastChat and offers a large language model powered by Vicuna. In addition, we provide private domain knowledge base question-answering capability, support for additional plugins, and a design that natively supports the Auto-GPT plugin. Our vision is to make it easier and more convenient to build applications around databases and LLMs.
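A typical first step behind the knowledge embedding capability mentioned above is splitting documents into overlapping chunks before computing vectors. The sketch below is only illustrative of that idea; the function name and parameters are hypothetical, not DB-GPT's actual API.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size overlapping chunks for embedding.

    Overlap preserves context across chunk boundaries, which tends to
    improve retrieval quality for question answering.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and written to the vector store (Chroma or Milvus, per the acknowledgements below).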
The architecture of DB-GPT is shown in the following figure:
The core capabilities mainly consist of the following parts:
- Knowledge base capability: Supports private domain knowledge base question-answering capability.
- Large-scale model management capability: Provides a large model operating environment based on FastChat.
- Unified data vector storage and indexing: Provides a uniform way to store and index various data types.
- Connection module: Used to connect different modules and data sources to achieve data flow and interaction.
- Agent and plugins: Provides Agent and plugin mechanisms, allowing users to customize and enhance the system's behavior.
- Prompt generation and optimization: Automatically generates high-quality prompts and optimizes them to improve system response efficiency.
- Multi-platform product interface: Supports various client products, such as web, mobile applications, and desktop applications.
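To make the "prompt generation and optimization" capability concrete, here is a minimal sketch of how a text-to-SQL prompt might be assembled from a schema and a user question. The template and function name are hypothetical illustrations, not DB-GPT's actual implementation.

```python
# Hypothetical template for a text-to-SQL prompt.
SQL_PROMPT_TEMPLATE = (
    "You are a SQL expert. Given the database schema below, "
    "write a single SQL query that answers the question.\n\n"
    "Schema:\n{schema}\n\n"
    "Question: {question}\n"
    "SQL:"
)

def build_sql_prompt(schema: str, question: str) -> str:
    """Fill the template with a schema description and a user question."""
    return SQL_PROMPT_TEMPLATE.format(schema=schema.strip(),
                                      question=question.strip())

if __name__ == "__main__":
    schema = "CREATE TABLE users (id INT, name TEXT, created_at DATE);"
    print(build_sql_prompt(schema, "How many users signed up in 2023?"))
```

The resulting string is what gets sent to the serving layer (FastChat) for completion.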
- DB-GPT-Hub: Text-to-SQL parsing with LLMs
- DB-GPT-Plugins: DB-GPT plugins that can run Auto-GPT plugins directly
- DB-GPT-Web: ChatUI for DB-GPT
In the .env configuration file, modify the LANGUAGE parameter to switch between languages. The default is English (zh for Chinese, en for English; other languages will be added later).
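For example, a minimal .env fragment switching the interface to Chinese:

```env
# en (default) or zh; other languages to be added later.
LANGUAGE=zh
```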
If nltk-related errors occur while using the knowledge base, you need to install the nltk toolkit's data. For more details, please refer to the nltk documentation. Run the Python interpreter and type:

```python
>>> import nltk
>>> nltk.download()
```
This project is standing on the shoulders of giants and is not going to work without the open-source communities. Special thanks to the following projects for their excellent contribution to the AI industry:
- FastChat for providing chat services
- vicuna-13b as the base model
- langchain tool chain
- Auto-GPT universal plugin template
- Hugging Face for big model management
- Chroma for vector storage
- Milvus for distributed vector storage
- ChatGLM as the base model
- llama_index for enhancing database-related knowledge using in-context learning based on existing knowledge bases.
Please run

```shell
black .
```

before submitting code. See the contributing guidelines for how to contribute.
The MIT License (MIT)
We are working on building a community. If you have any ideas about building it, feel free to contact us.