Pinned Repositories
ailuminate
AILuminate v1.1 is an AI risk assessment benchmark suite developed with broad involvement from leading AI companies, academia, and civil society.
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
ck
Collective Knowledge (CK), Collective Mind (CM/CMX), and MLPerf automations: community-driven projects that facilitate collaborative, reproducible research and show how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware, using MLPerf methodology and benchmarks.
croissant
Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset metadata, resource descriptions, data structure, and default ML semantics.
inference
Reference implementations of MLPerf® inference benchmarks
inference_results_v5.1
This repository contains the results and code for the MLPerf® Inference v5.1 benchmark.
modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
tiny
MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
training
Reference implementations of MLPerf® training benchmarks
training_results_v5.0
This repository contains the results and code for the MLPerf® Training v5.0 benchmark.
MLCommons's Repositories
mlcommons/training
Reference implementations of MLPerf® training benchmarks
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
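For orientation, here is a minimal sketch of how a Python harness typically drives the LoadGen library built from this repository's loadgen directory. The scenario, sample counts, and the placeholder model output are illustrative assumptions, not part of any reference benchmark; the exact binding signatures can vary across LoadGen versions.

```python
# Minimal sketch of driving MLPerf LoadGen from Python.
# Assumes the mlperf_loadgen package built from this repo; the
# sample counts and model output below are placeholders.
import array
import mlperf_loadgen as lg

def load_samples(sample_indices):
    pass  # load these QSL samples into memory (placeholder)

def unload_samples(sample_indices):
    pass  # evict these QSL samples from memory (placeholder)

def issue_queries(query_samples):
    # Run inference for each sample and report completion to LoadGen.
    responses = []
    for qs in query_samples:
        result = array.array("B", [0])  # placeholder model output bytes
        addr, length = result.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, length * result.itemsize))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass  # nothing is buffered in this sketch

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 128, load_samples, unload_samples)  # placeholder counts
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```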
mlcommons/croissant
Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset metadata, resource descriptions, data structure, and default ML semantics.
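As a rough illustration, a Croissant dataset can be read with the mlcroissant Python package from this repository; the JSON-LD URL and record-set name below are placeholders.

```python
# Minimal sketch of reading a Croissant dataset with mlcroissant.
import mlcroissant as mlc

# Placeholder URL: point this at a real Croissant JSON-LD file.
ds = mlc.Dataset(jsonld="https://example.org/dataset/croissant.json")

# "default" is a placeholder record-set name; use one declared
# in the dataset's metadata.
for i, record in enumerate(ds.records(record_set="default")):
    print(record)
    if i >= 2:
        break
```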
mlcommons/ck
Collective Knowledge (CK), Collective Mind (CM/CMX), and MLPerf automations: community-driven projects that facilitate collaborative, reproducible research and show how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware, using MLPerf methodology and benchmarks.
mlcommons/algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
mlcommons/storage
MLPerf® Storage Benchmark Suite
mlcommons/medperf
An open benchmarking platform for medical artificial intelligence using Federated Evaluation.
mlcommons/chakra
Repository for MLCommons Chakra schema and tools
mlcommons/modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
mlcommons/training_policies
Issues related to MLPerf® Training policies, including rules and suggested changes
mlcommons/inference_policies
Issues related to MLPerf® Inference policies, including rules and suggested changes
mlcommons/mobile_app_open
Open-source version of the MLPerf® mobile benchmark app
mlcommons/mlperf_client
MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios.
mlcommons/logging
MLPerf® logging library
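For orientation, a minimal sketch of emitting compliance-checkable log lines with this library's mllog module; the log filename and benchmark value are placeholders, and the keys shown are standard mllog constants.

```python
# Minimal sketch of MLPerf-style logging with mlperf_logging.mllog.
from mlperf_logging import mllog

mllog.config(filename="mlperf_result.log")  # placeholder filename
mllogger = mllog.get_mllogger()

# Identify the benchmark (placeholder value), then bracket the
# initialization and run phases with start/stop events.
mllogger.event(key=mllog.constants.SUBMISSION_BENCHMARK, value="resnet")
mllogger.start(key=mllog.constants.INIT_START)
# ... framework and model initialization ...
mllogger.end(key=mllog.constants.INIT_STOP)
mllogger.start(key=mllog.constants.RUN_START)
# ... training loop ...
mllogger.end(key=mllog.constants.RUN_STOP, metadata={"status": "success"})
```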
mlcommons/policies
General policies for MLPerf®, including submission rules, coding standards, etc.
mlcommons/mobile_models
MLPerf® Mobile models
mlcommons/dynabench
mlcommons/power-dev
Dev repo for power measurement for the MLPerf® benchmarks
mlcommons/mlcflow
MLCFlow: Simplifying MLPerf Automations
mlcommons/inference_results_v5.0
This repository contains the results and code for the MLPerf® Inference v5.0 benchmark.
mlcommons/mlperf-automations
Automation scripts for running MLPerf® Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, the scripts have been adapted to the MLC automation framework maintained by the MLCommons Benchmark Infrastructure Working Group.
mlcommons/mlperf_automotive
mlcommons/submissions_algorithms
mlcommons/r2-downloader
Cloudflare Access + R2 Object Storage Dataset Download Script
mlcommons/modelplane
mlcommons/r2-infra
Infrastructure management for MLC dataset distribution via Cloudflare R2
mlcommons/inference_results_v5.1
This repository contains the results and code for the MLPerf® Inference v5.1 benchmark.
mlcommons/common-crawl-dmlr
mlcommons/mlperf_inference_test_submissions_v5.0
mlcommons/tiny_results_v1.3
This repository contains the results and code for the MLPerf® Tiny Inference v1.3 benchmark.