Weaviate is an open source vector database that is robust, scalable, cloud-native, and fast.
If you just want to get started, great! Try:
- the quickstart tutorial if you are looking to use Weaviate, or
- the contributor guide if you are looking to contribute to the project.
And you can find our documentation here.
If you have a bit more time, stick around and check out our summary below.
With Weaviate, you can turn your text, images and more into a searchable vector database using state-of-the-art ML models.
Some of its highlights are:
Weaviate typically performs a 10-NN (nearest-neighbor) search across millions of objects in single-digit milliseconds. See benchmarks.
You can use Weaviate to conveniently vectorize your data at import time, or you can upload your own vectors.
These vectorization options are enabled by Weaviate modules. Modules enable the use of popular services and model hubs such as OpenAI, Cohere, and Hugging Face, as well as local and custom models.
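To make this concrete, here is a minimal sketch using the v3 Python client, assuming a local Weaviate instance with the `text2vec-openai` module enabled (the `Article` class and `content` property are illustrative names):

```python
import weaviate

# Connect to a local Weaviate instance (assumed to run on the default port).
client = weaviate.Client("http://localhost:8080")

# Define a class whose objects are vectorized at import time by a module.
client.schema.create_class({
    "class": "Article",
    "vectorizer": "text2vec-openai",  # assumes this module is enabled
    "properties": [{"name": "content", "dataType": ["text"]}],
})

# Option 1: let Weaviate vectorize the data for you at import time.
client.data_object.create({"content": "Weaviate is a vector database."}, "Article")

# Option 2: bring your own vector instead.
client.data_object.create(
    {"content": "Text embedded elsewhere."},
    "Article",
    vector=[0.1, 0.2, 0.3],  # your own embedding (dimensions shortened here)
)
```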
Weaviate is designed to take you from rapid prototyping all the way to production at scale.
To this end, Weaviate is built with scaling, replication, and security in mind.
Weaviate powers lightning-fast vector searches, but it is capable of much more. Some of its other superpowers include recommendation, summarization, and integrations with neural search frameworks.
For starters, you can build vector databases with text, images, or a combination of both.
You can also build question answering, summarization, and classification systems.
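For example, a semantic search with the v3 Python client might look like the sketch below (it reuses the assumed `Article` class from above; `with_near_text` requires a text vectorizer module to be enabled):

```python
# Semantic search: find the objects closest in meaning to the query.
response = (
    client.query
    .get("Article", ["content"])
    .with_near_text({"concepts": ["how do vector databases work?"]})
    .with_limit(3)
    .do()
)
print(response["data"]["Get"]["Article"])
```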
You can find code examples here, and you might find these blog posts useful:
- The ChatGPT Retrieval Plugin - Weaviate as a Long-term Memory Store for Generative AI
- Giving Auto-GPT Long-Term Memory with Weaviate
- Combining LangChain and Weaviate
- How to build an Image Search Application with Weaviate
- Cohere Multilingual ML Models with Weaviate
- Weaviate Podcast Search
- The Sphere Dataset in Weaviate
Examples and/or documentation of Weaviate integrations (a-z).
- Auto-GPT (blogpost) - Use Weaviate as a memory backend for Auto-GPT.
- Cohere (blogpost) - Use Cohere embeddings with Weaviate.
- DocArray - Use Weaviate as a document store in DocArray.
- Haystack (blogpost) - Use Weaviate as a document store in Haystack.
- Hugging Face - Use Hugging Face models with Weaviate.
- LangChain (blogpost) - Use Weaviate as a memory backend for LangChain.
- LlamaIndex - Use Weaviate as a memory backend for LlamaIndex.
- OpenAI - ChatGPT retrieval plugin - Use Weaviate as a memory backend for ChatGPT.
- OpenAI - Use OpenAI embeddings with Weaviate.
- yours?
Speaking of content - we love connecting with our community through it. We love helping amazing people build cool things with Weaviate, and we love getting to know them and talking with them about their passions.
To this end, our team does an amazing job with our blog and podcast.
Some of our past favorites include:
- What to expect from Weaviate in 2023
- Why is vector search so fast?
- Cohere Multilingual ML Models with Weaviate
- Vamana vs. HNSW - Exploring ANN algorithms Part 1
- HNSW+PQ - Exploring ANN algorithms Part 2.1
- The Tile Encoder - Exploring ANN algorithms Part 2.2
- How GPT4.0 and other Large Language Models Work
- Monitoring Weaviate in Production
Both our blogs and podcasts are updated regularly. To keep up to date with all things Weaviate, including new software releases, meetup news, and of course all of the content, you can subscribe to our newsletter.
Also, we invite you to join our Slack community. There, you can meet other Weaviate users and members of the Weaviate team to talk all things Weaviate and AI (and other topics!).
Weaviate helps:
- Software Engineers (docs) - who use Weaviate as an ML-first database for their applications.
  - Out-of-the-box modules for NLP/semantic search, automatic classification, and image similarity search.
  - Easy to integrate into your current architecture, with full CRUD support like you're used to from other OSS databases (see the sketch after this list).
  - Cloud-native, distributed, runs well on Kubernetes, and scales with your workloads.
- Data Engineers (docs) - who use Weaviate as a vector database built from the ground up with ANN at its core, and with the same UX they love from Lucene-based search engines.
  - Weaviate has a modular setup that allows you to use your own ML models inside Weaviate, or out-of-the-box ML models (e.g., SBERT, ResNet, fastText).
  - Weaviate takes care of the scalability, so that you don't have to.
  - Deploy and maintain ML models in production reliably and efficiently.
- Data Scientists (docs) - who use Weaviate for a seamless handover of their Machine Learning models to MLOps.
  - Deploy and maintain your ML models in production reliably and efficiently.
  - Weaviate's modular design allows you to easily package any custom-trained model you want.
  - Smooth and accelerated handover of your Machine Learning models to engineers.
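As a sketch of the CRUD support mentioned above (again the v3 Python client, with the illustrative `Article` class and `content` property from earlier):

```python
# Create: returns the UUID of the new object.
uuid = client.data_object.create({"content": "hello"}, "Article")

# Read: fetch the object back by its UUID.
obj = client.data_object.get_by_id(uuid, class_name="Article")

# Update: merge new property values into the object.
client.data_object.update({"content": "hello, world"}, "Article", uuid)

# Delete: remove the object.
client.data_object.delete(uuid, class_name="Article")
```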
You can use Weaviate with any of its client libraries, available for Python, JavaScript/TypeScript, Go, and Java.
You can also use its GraphQL API to retrieve objects and properties.
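For instance, a raw GraphQL query can be sent through the Python client's raw-query helper (a sketch reusing the assumed `Article` class; the query follows Weaviate's `Get` syntax):

```python
# Send a raw GraphQL query; nearText again requires a text vectorizer module.
result = client.query.raw("""
{
  Get {
    Article(limit: 3, nearText: {concepts: ["vector databases"]}) {
      content
    }
  }
}
""")
print(result)
```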