Pinned Repositories
ailuminate
The AILuminate v1.1 benchmark is an AI risk-assessment suite developed with broad involvement from leading AI companies, academia, and civil society.
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
ck
Collective Knowledge (CK), Collective Mind (CM/CMX), and MLPerf automations: community-driven projects to facilitate collaborative, reproducible research and to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using the MLPerf methodology and benchmarks.
croissant
Croissant is a high-level, JSON-LD-based format for machine learning datasets that brings together four layers: dataset-level metadata, resource descriptions, data structure, and ML semantics.
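As a rough illustration of the format's shape, the sketch below builds a minimal Croissant-style JSON-LD document as a Python dict. The layer roles follow the published Croissant format, but the dataset, file, and field names here are invented for this example, and this is not a validated Croissant file.

```python
import json

# Hypothetical minimal Croissant-style metadata (illustrative, not validated).
croissant = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "Dataset",                 # dataset-level metadata layer
    "name": "example-dataset",
    "distribution": [{                  # resources layer: the raw files
        "@type": "FileObject",
        "name": "data.csv",
        "contentUrl": "https://example.com/data.csv",
        "encodingFormat": "text/csv",
    }],
    "recordSet": [{                     # structure layer: records and fields
        "@type": "RecordSet",
        "name": "examples",
        "field": [{"@type": "Field", "name": "label"}],
    }],
}

print(json.dumps(croissant, indent=2))
```

ML-specific semantics (e.g. marking a field as a label or defining train/test splits) layer on top of the same record-set structure.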
inference
Reference implementations of MLPerf™ inference benchmarks
inference_results_v5.0
This repository contains the results and code for the MLPerf™ Inference v5.0 benchmark.
modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
tiny
MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
training
Reference implementations of MLPerf® training benchmarks
training_results_v5.0
This repository contains the results and code for the MLPerf™ Training v5.0 benchmark.
MLCommons's Repositories
mlcommons/mlcube
MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.
mlcommons/hpc
Reference implementations of MLPerf™ HPC training benchmarks
mlcommons/modelgauge
Makes it easy to automatically and uniformly measure the behavior of many AI systems.
mlcommons/mobile_models
MLPerf™ Mobile models
mlcommons/power-dev
Dev repo for power measurement for the MLPerf™ benchmarks
mlcommons/dataperf
Data Benchmarking
mlcommons/inference_results_v3.0
This repository contains the results and code for the MLPerf™ Inference v3.0 benchmark.
mlcommons/ck-mlops
A collection of portable workflows, automation recipes, and components for MLOps in a unified CK format. Note that this repository is outdated; please see the second generation of the CK workflow automation meta-framework with portable MLOps and DevOps components.
mlcommons/training_results_v3.1
This repository contains the results and code for the MLPerf™ Training v3.1 benchmark.
mlcommons/training_results_v4.0
This repository contains the results and code for the MLPerf™ Training v4.0 benchmark.
mlcommons/inference_results_v3.1
This repository contains the results and code for the MLPerf™ Inference v3.1 benchmark.
mlcommons/inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
mlcommons/mobile_open
MLPerf Mobile benchmarks
mlcommons/mlperf_client
MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios.
mlcommons/algorithms_results_v0.5
This repository contains the results and code for the AlgoPerf v0.5 benchmark.
mlcommons/training_results_v4.1
This repository contains the results and code for the MLPerf™ Training v4.1 benchmark.
mlcommons/inference_results_v4.1
This repository contains the results and code for the MLPerf™ Inference v4.1 benchmark.
mlcommons/storage_results_v1.0
This repository contains the results and code for the MLPerf™ Storage v1.0 benchmark.
mlcommons/cm4abtf
CM interface and automation recipes for ABTF
mlcommons/abtf-ssd-pytorch
mlcommons/cm4mlperf-inference
mlcommons/medperf-website
mlcommons/mobile_results_v4.1
This repository contains the results and code for the MLPerf™ Mobile Inference v4.1 benchmark.
mlcommons/.github
mlcommons/cla-bot
mlcommons/common-crawl-dmlr
mlcommons/mlperf-automations_archived
This repository contains the automations and scripts used to run MLPerf benchmarks (mainly MLPerf Inference for now).
mlcommons/mlperf_inference_unofficial_submissions_v5.0
Automated test submissions for validating the MLPerf Inference workflows.
mlcommons/mobile_results_v4.0
This repository contains the results and code for the MLPerf™ Mobile Inference v4.0 benchmark.
mlcommons/template