This repository contains tools for benchmarking and evaluating AI models on MongoDB-related tasks.
The repository is organized into three main packages:
- benchmarks/: Benchmarking suite for evaluating AI models on MongoDB tasks (text-to-driver code generation, natural language queries, etc.)
- datasets/: Utilities for importing, processing, and managing datasets used for training and evaluation
- mongodb-rag-core/: Core shared utilities, types, and functions used by other packages
To get started contributing to the project, see the Contributor Guide.
This project is licensed under the Apache 2.0 License.