mlperf
There are 17 repositories under the mlperf topic.
mlcommons/ck
Collective Knowledge (CK), Collective Mind (CM), and Common Metadata eXchange (CMX): community-driven projects that facilitate collaborative, reproducible research and help run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf.
mlcommons/mlcube
MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.
mlcommons/cm4mlops
Legacy CM repository with a collection of portable, reusable, and cross-platform CM automations for MLOps and MLPerf, intended to simplify building, benchmarking, and optimizing AI systems across diverse models, datasets, software, and hardware.
STMicroelectronics/stm32ai-perf
MLPerf™ Tiny Deep Learning Benchmarks for STM32 devices
mlcommons/cm4mlperf-results
CM interface and automation recipes to analyze MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as Performance/Watt or Performance/$.
hls4ml-finn-mlperftiny/CIFAR10
CIFAR10 training repo for MLPerf Tiny Benchmark v0.7
freedomtan/coreml_models_for_mlperf
Converting models used by the MLPerf Mobile working group to Core ML format
mlcommons/mlcflow
MLCFlow: Simplifying MLPerf Automations
Adlik/mlperf_benchmark
A benchmark suite used to compare the performance of various models optimized by Adlik.
ivotron/mlperf-workflows
Popperized MLPerf benchmark workflows
mlcommons/mlperf-automations
This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
AICoE/mlperf-tekton
Tekton Pipelines to run MLPerf benchmarks on OpenShift
code-reef/ck-tensorflow-codereef
Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available at the CodeReef portal.
ctuning/q2a-mlperf-visualizer
MLPerf explorer beta
huygnguyen04/MLPerf-Benchmark-Suite-Replication
Replication of MLPerf benchmarks on UVA GPU servers
mlcommons/mlperf-automations_archived
This repository contains the automation scripts used to run MLPerf benchmarks (mainly MLPerf Inference for now).
mlcommons/mlperf_inference_unofficial_submissions_v5.0
These are automated test submissions for validating the MLPerf Inference workflows.