mlperf

There are 17 repositories under the mlperf topic.

  • mlcommons/ck

    Collective Knowledge (CK), Collective Mind (CM) and Common Metadata eXchange (CMX): community-driven projects to facilitate collaborative and reproducible research and to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf.
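    (A minimal Python sketch showing how these CM automations can be invoked appears after this list.)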

    Language: Python
  • mlcommons/mlcube

    MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.

    Language: Python
  • mlcommons/cm4mlops

    Legacy CM repository with a collection of portable, reusable and cross-platform CM automations for MLOps and MLPerf to simplify the process of building, benchmarking and optimizing AI systems across diverse models, data sets, software and hardware

    Language: Python
  • STMicroelectronics/stm32ai-perf

    MLPerf™ Tiny Deep Learning Benchmarks for STM32 devices

    Language: C
  • mlcommons/cm4mlperf-results

    CM interface and automation recipes to analyze MLPerf Inference, Tiny and Training results. The goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$

    Language: Python
  • hls4ml-finn-mlperftiny/CIFAR10

    CIFAR10 training repo for MLPerf Tiny Benchmark v0.7

    Language: Python
  • freedomtan/coreml_models_for_mlperf

    Converting models used by MLPerf Mobile working group to Core ML format

    Language: Python
  • mlcommons/mlcflow

    MLCFlow: Simplifying MLPerf Automations

    Language: Python
  • Adlik/mlperf_benchmark

    A benchmark suite used to compare the performance of various models optimized by Adlik.

    Language: Python
  • ivotron/mlperf-workflows

    Popperized MLPerf benchmark workflows

    Language: Python
  • mlcommons/mlperf-automations

    This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.

    Language: Python
  • AICoE/mlperf-tekton

    Tekton Pipelines to run MLPerf benchmarks on OpenShift

    Language: Shell
  • code-reef/ck-tensorflow-codereef

    Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available at the CodeReef portal:

    Language: C++
  • ctuning/q2a-mlperf-visualizer

    MLPerf explorer beta

    Language: PHP
  • huygnguyen04/MLPerf-Benchmark-Suite-Replication

    Replication of MLPerf on UVA-GPU Servers

  • mlcommons/mlperf-automations_archived

    This repository contains the automations and scripts used to automate MLPerf benchmarks (mainly MLPerf inference for now)

    Language: Python
  • mlcommons/mlperf_inference_unofficial_submissions_v5.0

    These are automated test submissions for validating the MLPerf inference workflows

    Language: Mermaid
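Several of the MLCommons entries above (mlcommons/ck, mlcommons/cm4mlops, mlcommons/mlcflow, mlcommons/mlperf-automations) expose their automation recipes through the CM/MLC command line. The sketch below shows one plausible way to drive such an automation from Python. The `cm pull repo` and `cm run script --tags=...` commands follow the documented CM interface, but the repository name and script tags here are illustrative assumptions rather than a verified MLPerf recipe; consult each repository's documentation for the exact invocation.

```python
# Minimal sketch, assuming the CM framework from mlcommons/ck is installed
# (e.g. `pip install cmind`) and its dependencies are available on the host.
# The repository name and script tags below are illustrative; real MLPerf
# Inference runs take many more options (model, backend, scenario, device, ...).
import subprocess

def cm(*args: str) -> None:
    """Run a single `cm` CLI command and fail loudly if it errors."""
    cmd = ["cm", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Fetch the automation recipes (the legacy collection listed above).
    cm("pull", "repo", "mlcommons@cm4mlops")
    # Run a small, self-contained CM script by its tags as a smoke test.
    cm("run", "script", "--tags=detect,os")
```

The newer MLC-based entries (mlcflow, mlperf-automations) use the same tag-driven script model, so the structure above should carry over even where the command names differ.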