AI with MongoDB!
SuperDuperDB is an open-source environment to deploy, train and operate AI models and APIs in MongoDB using Python.
Easily integrate AI with your data: from LLMs and public AI APIs to bespoke machine learning models and custom use-cases.
No data duplication, no pipelines, no duplicate infrastructure — just Python.
- Explore the docs!
- Check out example use cases!
- Quickstart with Google Colab!
Installation | Quickstart | Contributing | Feedback | License
Introduction 🔰
🔮 What can you do with SuperDuperDB?
- Deploy all your AI models to automatically compute outputs (inference) in the database in a single environment with simple Python commands.
- Train models on your data in your database simply by querying without additional ingestion and pre-processing.
- Integrate AI APIs (such as OpenAI) to work together with other models on your data effortlessly.
- Search your data with vector-search, including model management and serving.
⁉️ Why choose SuperDuperDB?
- Avoid data duplication, pipelines and duplicate infrastructure with a single scalable deployment.
- Your deployment stays up to date: new data is handled automatically and immediately (see the sketch after this list).
- A simple and familiar Python interface that can handle even the most complex AI use-cases.
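For example, keeping a deployment up to date does not require a separate pipeline. The following is a minimal sketch (not a verbatim recipe) reusing the API from the How-to section below; ``Collection`` is SuperDuperDB's MongoDB query wrapper, and the model, collection and column names are placeholders:
import pymongo
from sklearn.svm import SVC
from superduperdb import superduper

# Make the database "superduper" and wrap a model (placeholder names).
db = superduper(pymongo.MongoClient().my_db)
model = superduper(SVC())
db.add(model)

# With ``listen=True`` the model's outputs are recomputed as new documents arrive,
# so the deployment stays current without extra infrastructure.
model.predict(
    X='input_col',
    db=db,
    select=Collection(name='test_documents').find(),
    listen=True,
)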
👨💻🧑🔬👷 Who is SuperDuperDB for?
- Python developers using MongoDB who want to build AI into their applications easily.
- Data scientists & ML engineers who want to develop AI models using their favourite tools, with minimum infrastructural overhead.
- Infrastructure engineers who want a single scalable setup that supports local, on-prem and cloud deployment.
🪄 SuperDuperDB transforms your MongoDB into:
- An end-to-end live AI deployment, including a model repository and registry, model training and computation of outputs (inference).
- A feature store in which model outputs are stored alongside their inputs, in any data format (see the sketch after this list).
- A fully functional vector database: generate vector embeddings of your data with your favorite models and APIs and connect them with MongoDB or LanceDB vector search.
- (Coming soon) A model performance monitor, so that model quality and degradation can be tracked as new data is inserted.
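To make the feature-store point concrete, here is a minimal sketch (``upstream_model``, ``downstream_model``, the collection name and the model identifier are placeholders) of how outputs stored by one model can feed another via ``.featurize``, as in the How-to section below:
# 1. The upstream model's outputs are computed and stored alongside the input documents.
upstream_model.predict(X='input_col', db=db, select=Collection(name='test_documents').find(), listen=True)

# 2. The downstream model consumes those stored outputs as its features.
downstream_model.predict(
    X='input_col',
    db=db,
    select=Collection(name='test_documents').find().featurize({'X': '<upstream-model-id>'}),
    listen=True,
)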
How to 🤷
The following are examples of how you use SuperDuperDB with Python (find all how-tos and examples in the docs here):
- Add a ML/AI model into your database (read more in the docs here):
import pymongo
from sklearn.svm import SVC
from superduperdb import superduper
# Make your db superduper!
db = superduper(pymongo.MongoClient().my_db)
# Models can be converted to SuperDuperDB objects with a simple wrapper.
model = superduper(SVC())
# Add the model into the database
db.add(model)
# Predict on the selected data (``Collection`` is SuperDuperDB's wrapper around a MongoDB collection).
model.predict(X='input_col', db=db, select=Collection(name='test_documents').find({'_fold': 'valid'}))
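The outputs are written back into the documents themselves, so they can be read with an ordinary query. A minimal sketch (the exact field layout of the stored outputs is an assumption and may differ between versions):
# Read back the documents the model predicted on; the stored outputs live inside
# the documents themselves (the exact output field layout is an assumption).
cur = db.execute(Collection(name='test_documents').find({'_fold': 'valid'}))
for r in cur:
    print(r)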
- Train/fine-tune a model (read more in the docs here):
import pymongo
from sklearn.svm import SVC
from superduperdb import superduper
# Make your db superduper!
db = superduper(pymongo.MongoClient().my_db)
# Models can be converted to SuperDuperDB objects with a simple wrapper.
model = superduper(SVC())
# Fit the model on the selected data ('target_col' is a placeholder for your label field).
model.fit(X='input_col', y='target_col', db=db, select=Collection(name='test_documents').find())
- Use MongoDB as your vector search database (read more in the docs here):
# ``collection`` below is a SuperDuperDB ``Collection`` query object, e.g. ``Collection(name='documents')``.
# First a "Watcher" makes sure vectors stay up-to-date
indexing_watcher = Watcher(model=OpenAIEmbedding(), key='text', select=collection.find())
# This "Watcher" is linked with a "VectorIndex"
db.add(VectorIndex('my-index', indexing_watcher=indexing_watcher))
# The "VectorIndex" may be used to search data. Items to be searched against are passed
# to the registered model and vectorized. No additional app layer is required.
# By default, SuperDuperDB uses LanceDB for vector comparison operations
db.execute(collection.like({'text': 'clothing item'}, 'my-index').find({'brand': 'Nike'}))
- Use an OpenAI, PyTorch or Hugging Face model as an embedding model for vector search (read more in the docs here):
# Create a ``VectorIndex`` with an ``OpenAIEmbedding`` indexing watcher and add it to the database.
db.add(
    VectorIndex(
        identifier='my-index',
        indexing_watcher=Watcher(
            model=OpenAIEmbedding(identifier='text-embedding-ada-002'),
            key='abstract',
            select=Collection(name='wikipedia').find(),
        ),
    )
)

# The above also executes the embedding model (OpenAI) with the select query on the key.
# Now we can use the vector-index to search by meaning through the Wikipedia abstracts.
cur = db.execute(
    Collection(name='wikipedia')
        .like({'abstract': 'philosophers'}, n=10, vector_index='my-index')
)
- Add a Llama 2 model directly into your database (read more in the docs here):
import torch
import transformers
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# ``Pipeline`` is SuperDuperDB's wrapper around a transformers pipeline.
model = Pipeline(
    identifier='my-sentiment-analysis',
    task='text-generation',
    preprocess=tokenizer,
    object=pipeline,
    torch_dtype=torch.float16,
    device_map="auto",
)

# You can easily predict on your collection documents.
model.predict(
    X=Collection(name='test_documents').find(),
    db=db,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
- Use model outputs as inputs to downstream models (read more in the docs here):
# ``coll`` is a ``Collection`` query object, e.g. ``Collection(name='test_documents')``.
model.predict(
    X='input_col',
    db=db,
    select=coll.find().featurize({'X': '<upstream-model-id>'}),  # an already-registered upstream model-id
    listen=True,
)
Installation 🔌
1. Install SuperDuperDB via pip (~1 minute):
pip install superduperdb
2. MongoDB Installation (~10-15 minutes):
- You already have MongoDB installed? Let's go!
- You need to install MongoDB? See the docs here.
3. Try one of our example use cases/notebooks found here (~as many minutes as you enjoy)!
Quickstart 🚀
Try SuperDuperDB in Google Colab
This will set up a playground environment that includes:
- an installation of SuperDuperDB
- an installation of a MongoDB instance containing image data and torch models
Have fun!
Community & Getting Help 🙋
If you have any problems, questions, comments or ideas:
- Join our Slack (we look forward to seeing you there).
- Search through our GitHub Discussions, or add a new question.
- Comment on an existing issue or create a new one.
- Feel free to contact a maintainer or community volunteer directly!
Contributing 🌱
There are many ways to contribute, and they are not limited to writing code. We welcome all contributions such as:
- Bug reports
- Documentation improvements
- Enhancement suggestions
- Feature requests
- Expanding the tutorials and use case examples
Please see our Contributing Guide for details.
Feedback 💡
Help us to improve SuperDuperDB by providing your valuable feedback here!
License 📜
SuperDuperDB is open-source and intended to be a community effort, and it won't be possible without your support and enthusiasm. It is distributed under the terms of the AGPL (GNU Affero General Public License v3). Any contribution made to this project will be subject to the same provisions.