Pinned Repositories
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition that measures neural-network training speedups achieved through improvements to training algorithms and models.
ck
Collective Mind (CM) is a small, modular, cross-platform, decentralized workflow-automation framework with a human-friendly interface and reusable automation recipes. It makes it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse, continuously changing models, data, software, and hardware.
croissant
Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset-level metadata, resource file descriptions, data structure, and default ML semantics.
inference
Reference implementations of MLPerf™ inference benchmarks
inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
policies
General policies for MLPerf™ including submission rules, coding standards, etc.
tiny
MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers.
training
Reference implementations of MLPerf™ training benchmarks
training_results_v3.1
This repository contains the results and code for the MLPerf™ Training v3.1 benchmark.
MLCommons's Repositories
mlcommons/training
Reference implementations of MLPerf™ training benchmarks
mlcommons/inference
Reference implementations of MLPerf™ inference benchmarks
mlcommons/ck
Collective Mind (CM) is a small, modular, cross-platform, decentralized workflow-automation framework with a human-friendly interface and reusable automation recipes. It makes it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse, continuously changing models, data, software, and hardware.
mlcommons/croissant
Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset-level metadata, resource file descriptions, data structure, and default ML semantics.
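To make the layered structure concrete, here is a minimal illustrative sketch of a Croissant-style dataset description, written as a plain Python dict mirroring the JSON-LD shape. The field names follow the public Croissant format only at a high level; treat the exact vocabulary and values here as assumptions for illustration, not the normative spec.

```python
# Illustrative sketch of a Croissant-style dataset description as a dict.
# The exact vocabulary is an assumption based on the public format; consult
# the Croissant spec in mlcommons/croissant for the authoritative schema.
croissant_metadata = {
    "@context": {"@vocab": "https://schema.org/"},
    # Layer 1: dataset-level metadata
    "@type": "Dataset",
    "name": "toy_dataset",
    "description": "A tiny example dataset for illustration.",
    # Layer 2: resource file descriptions
    "distribution": [
        {"@type": "FileObject", "name": "data.csv", "encodingFormat": "text/csv"}
    ],
    # Layers 3-4: data structure and default ML semantics
    "recordSet": [
        {"name": "examples", "field": [{"name": "label", "dataType": "Integer"}]}
    ],
}
```

In practice such a description is serialized as a single JSON-LD file that tools can consume to locate the dataset's resources and interpret its records.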
mlcommons/tiny
MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers.
mlcommons/algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition that measures neural-network training speedups achieved through improvements to training algorithms and models.
mlcommons/mlcube
MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.
mlcommons/GaNDLF
A generalizable application framework for segmentation, regression, and classification using PyTorch
mlcommons/medperf
An open benchmarking platform for medical artificial intelligence using Federated Evaluation.
mlcommons/training_policies
Issues related to MLPerf™ training policies, including rules and suggested changes
mlcommons/storage
MLPerf™ Storage Benchmark Suite
mlcommons/chakra
Repository for MLCommons Chakra schema and tools
mlcommons/mobile_app_open
Mobile App Open
mlcommons/modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
mlcommons/hpc
Reference implementations of MLPerf™ HPC training benchmarks
mlcommons/logging
MLPerf™ logging library
mlcommons/policies
General policies for MLPerf™ including submission rules, coding standards, etc.
mlcommons/mobile_models
MLPerf™ Mobile models
mlcommons/modelgauge
Makes it easy to automatically and uniformly measure the behavior of many AI systems.
mlcommons/dynabench
mlcommons/dataperf
Data Benchmarking
mlcommons/inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
mlcommons/cm4mlops
A collection of reusable, cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies. They make it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse, continuously changing models, datasets, software, and hardware (cloud/edge).
mlcommons/cm4mlperf-results
CM interface and automation recipes for analyzing MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as performance/watt or performance/$.
mlcommons/cm-mlops
mlcommons/cm4abtf
CM interface and automation recipes for ABTF
mlcommons/.github
mlcommons/mobile_results_v4.0
This repository contains the results and code for the MLPerf™ Mobile Inference v4.0 benchmark.
mlcommons/modeltune
mlcommons/tiny_results_v1.2
This repository contains the results and code for the MLPerf™ Tiny Inference v1.2 benchmark.