Weaviate is an open source vector database that is robust, scalable, cloud-native, and fast.
If you just want to get started, great! Try:
- the quickstart tutorial if you are looking to use Weaviate, or
- the contributor guide if you are looking to contribute to the project.
And you can find our documentation here.
If you have a bit more time, stick around and check out our summary below 😉
With Weaviate, you can turn your text, images and more into a searchable vector database using state-of-the-art ML models.
Some of its highlights are:
Weaviate typically performs a 10-nearest-neighbor (10-NN) search over millions of objects in single-digit milliseconds. See benchmarks.
You can use Weaviate to conveniently vectorize your data at import time, or you can upload your own vectors.
These vectorization options are enabled by Weaviate modules. Modules enable the use of popular services and model hubs such as OpenAI, Cohere, and Hugging Face, as well as local and custom models.
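As a rough illustration, here is a minimal sketch of both options using the Python client (v3-style syntax); the `Article` class and its properties are assumptions for the example:

```python
import weaviate

# Connect to a local Weaviate instance (adjust the URL for your setup).
client = weaviate.Client("http://localhost:8080")

# Option 1: let a configured vectorizer module (e.g. text2vec-openai)
# create the vector at import time.
client.data_object.create(
    {"title": "Vector databases", "body": "Weaviate stores objects and vectors."},
    class_name="Article",  # illustrative class, assumed to exist in the schema
)

# Option 2: bring your own vector instead.
client.data_object.create(
    {"title": "Custom embeddings"},
    class_name="Article",
    vector=[0.12, 0.34, 0.56],  # your own embedding (illustrative values)
)
```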
Weaviate is designed to take you from rapid prototyping all the way to production at scale.
To this end, Weaviate is built with scaling, replication, and security in mind.
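For instance, replication can be configured per class in the schema; a minimal sketch with the Python client (the `Article` class and the factor of 3 are illustrative, and a multi-node cluster is assumed):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Illustrative class definition: a replication factor of 3 keeps three
# copies of each shard, which assumes a multi-node cluster.
client.schema.create_class({
    "class": "Article",
    "replicationConfig": {"factor": 3},
})
```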
Weaviate powers lightning-fast vector searches, but it is capable of much more. Some of its other superpowers include recommendation, summarization, and integrations with neural search frameworks.
For starters, you can build vector databases with text, images, or a combination of both.
You can also build question-answer extraction, summarization, and classification systems.
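For example, a semantic (near-text) search with the Python client might look like this; the `Article` class and query text are assumptions for illustration:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Semantic search: fetch the 5 articles closest in meaning to the query.
result = (
    client.query
    .get("Article", ["title", "body"])
    .with_near_text({"concepts": ["how do vector databases work"]})
    .with_limit(5)
    .do()
)
print(result["data"]["Get"]["Article"])
```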
You can see code examples here, and you might find these blog posts useful:
- The ChatGPT Retrieval Plugin - Weaviate as a Long-term Memory Store for Generative AI
- Giving Auto-GPT Long-Term Memory with Weaviate
- Combining LangChain and Weaviate
- How to build an Image Search Application with Weaviate
- Cohere Multilingual ML Models with Weaviate
- Weaviate Podcast Search
- The Sphere Dataset in Weaviate
Examples and documentation of Weaviate integrations (a-z):
- Auto-GPT (blogpost) - Use Weaviate as a memory backend for Auto-GPT.
- Cohere (blogpost) - Use Cohere embeddings with Weaviate.
- DocArray - Use Weaviate as a document store in DocArray.
- Haystack (blogpost) - Use Weaviate as a document store in Haystack.
- Hugging Face - Use Hugging Face models with Weaviate.
- LangChain (blogpost) - Use Weaviate as a memory backend for LangChain.
- LlamaIndex (blogpost) - Use Weaviate as a memory backend for LlamaIndex.
- OpenAI - ChatGPT retrieval plugin - Use Weaviate as a memory backend for ChatGPT.
- OpenAI - Use OpenAI embeddings with Weaviate.
- yours?
Speaking of content - we love connecting with our community through it. We love helping amazing people build cool things with Weaviate, getting to know them, and talking with them about their passions.
To this end, our team does an amazing job with our blog and podcast.
Some of our past favorites include:
- What to expect from Weaviate in 2023
- Why is vector search so fast?
- Cohere Multilingual ML Models with Weaviate
- Vamana vs. HNSW - Exploring ANN algorithms Part 1
- HNSW+PQ - Exploring ANN algorithms Part 2.1
- The Tile Encoder - Exploring ANN algorithms Part 2.2
- How GPT4.0 and other Large Language Models Work
- Monitoring Weaviate in Production
Subscribe to our 🗞️ newsletter to keep up to date with new releases, meetup news, and of course all of this content.
We invite you to join our community and say hi!
Weaviate is built for:
- Software Engineers, who use Weaviate as an ML-first database for their applications.
  - Out-of-the-box modules for AI-powered searches, Q&A, integrating LLMs with your data, and automatic classification.
  - Full CRUD support, like you're used to from other OSS databases (see the sketch after this list).
  - Cloud-native and distributed; runs well on Kubernetes and scales with your workloads.
- Data Engineers, who use Weaviate as a fast, flexible vector database.
  - Use your own ML model or out-of-the-box models, locally or with an inference service.
  - Weaviate takes care of the scalability, so that you don't have to.
- Data Scientists, who use Weaviate for a seamless handover of their machine-learning models to MLOps.
  - Deploy and maintain your ML models in production reliably and efficiently.
  - Easily package any custom-trained model you want.
  - Smooth and accelerated handover of your ML models to engineers.
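As a rough sketch of the CRUD flow mentioned above, assuming the same illustrative `Article` class and a v3-style Python client:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Create: returns the UUID of the new object.
uuid = client.data_object.create({"title": "Hello"}, class_name="Article")

# Read it back by ID.
obj = client.data_object.get_by_id(uuid, class_name="Article")

# Update: merges the given properties into the stored object.
client.data_object.update({"title": "Hello, Weaviate"}, class_name="Article", uuid=uuid)

# Delete the object again.
client.data_object.delete(uuid, class_name="Article")
```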
Read more in our documentation.
You can use Weaviate with any of these clients: Python, JavaScript/TypeScript, Go, and Java.
You can also use its GraphQL API to retrieve objects and properties.
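To illustrate, here is a raw GraphQL query sent through the Python client's `query.raw` method (the `Article` class and query concept are again assumptions, and `nearText` presumes a configured vectorizer module):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Raw GraphQL: titles and distances of the 3 nearest articles to a concept.
gql = """
{
  Get {
    Article(nearText: {concepts: ["vector search"]}, limit: 3) {
      title
      _additional { distance }
    }
  }
}
"""
print(client.query.raw(gql))
```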