cTuning foundation (founding member of MLCommons)
We develop open-source tools to help researchers and engineers improve their productivity and focus on innovation.
Paris, France
Pinned Repositories
artifact-evaluation
Collective Knowledge repository to support artifact evaluation and reproducibility initiatives
ck-analytics
Collective Knowledge repository with actions that unify access to different predictive-analytics engines (SciPy, R, DNN) from software, the command line, and web services via the CK JSON API
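Since every CK action goes through the same JSON API, calling ck-analytics (or any other CK repository) from Python is a single dictionary-in, dictionary-out call. A minimal sketch, assuming the standard ck.kernel entry point; treat the exact request keys as an assumption if your CK version differs:

    import ck.kernel as ck

    # Pull the repository first (CLI equivalent: `ck pull repo:ck-analytics`).
    r = ck.access({'action': 'pull',
                   'module_uoa': 'repo',
                   'data_uoa': 'ck-analytics'})
    if r['return'] > 0:
        print('CK error:', r.get('error'))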
ck-autotuning
CK automation actions that let users implement portable, customizable, and reusable program workflows for reproducible, collaborative, and multi-objective benchmarking, optimization, and SW/HW co-design
ck-env
CK repository with components and automation actions that enable portable workflows across diverse platforms, including Linux, Windows, macOS, and Android. It includes software-detection plugins and meta-packages (code, data sets, models, scripts, etc.) that allow multiple versions to coexist in a user or system environment.
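A sketch of how the software-detection plugins are typically invoked from Python, mirroring the `ck detect soft` and `ck show env` CLI commands; the plugin name (compiler.gcc) is just an example, and the request keys are an assumption for your CK version:

    import ck.kernel as ck

    # Detect GCC on the host and register it as a CK environment
    # (CLI equivalent: `ck detect soft:compiler.gcc`).
    r = ck.access({'action': 'detect',
                   'module_uoa': 'soft',
                   'data_uoa': 'compiler.gcc'})
    if r['return'] > 0:
        print('CK error:', r.get('error'))

    # List all registered environments (CLI equivalent: `ck show env`).
    ck.access({'action': 'show', 'module_uoa': 'env', 'out': 'con'})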
ck-math
Collective Knowledge packages for various math libraries that can be plugged into portable, customizable CK research workflows
ck-tensorflow
Collective Knowledge components for TensorFlow (code, data sets, models, packages, workflows)
ck-tensorrt
Collective Knowledge repository for NVIDIA's TensorRT
cm4research
CM interface and automation recipes to access, manage, prepare, run and reproduce research projects from AI, ML and Systems conferences
ctuning-programs
Collective Knowledge extension with unified, customizable benchmarks (with extensible JSON meta-information) that integrate into portable CK workflows. These benchmarks can be compiled and run with different compilers, environments, hardware, and operating systems (Linux, macOS, Windows, Android).
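A sketch of that compile-and-run flow from Python, using one of the cBench programs shipped with this repository (cbench-automotive-susan); the 'speed' key mirrors the `--speed` CLI option and should be treated as an assumption for your CK version:

    import ck.kernel as ck

    for request in (
        # CLI: `ck pull repo:ctuning-programs`
        {'action': 'pull', 'module_uoa': 'repo',
         'data_uoa': 'ctuning-programs'},
        # CLI: `ck compile program:cbench-automotive-susan --speed`
        {'action': 'compile', 'module_uoa': 'program',
         'data_uoa': 'cbench-automotive-susan', 'speed': 'yes', 'out': 'con'},
        # CLI: `ck run program:cbench-automotive-susan`
        {'action': 'run', 'module_uoa': 'program',
         'data_uoa': 'cbench-automotive-susan', 'out': 'con'},
    ):
        r = ck.access(request)
        if r['return'] > 0:
            print('CK error:', r.get('error'))
            break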
reproduce-milepost-project
Collective Knowledge workflow for MILEPOST GCC (a machine-learning-based compiler). See how it is used in the collaborative project with the Raspberry Pi Foundation to support research on multi-objective autotuning and machine-learning techniques and to prototype reproducible papers with portable workflows
Repositories of the cTuning foundation (founding member of MLCommons)
ctuning/artifact-evaluation
Collective Knowledge repository to support artifact evaluation and reproducibility initiatives
ctuning/ck-quantum
Miscellaneous resources for Quantum Collective Knowledge
ctuning/cm4research
CM interface and automation recipes to access, manage, prepare, run and reproduce research projects from AI, ML and Systems conferences
ctuning/mlcommons-ck
Collective Knowledge (CK) and Common Metadata eXchange (CMX): community-driven projects to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf automations, the CK playground, and open reproducibility and optimization challenges
ctuning/ck_mlperf_results
Outdated
ctuning/cm4mlops
A collection of portable, reusable and cross-platform automation recipes (CM scripts) to make it easier to build and benchmark AI systems across diverse models, data sets, software and hardware
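A sketch of driving these recipes from Python through the `cmind` package; the CLI equivalents are `cm pull repo mlcommons@cm4mlops` and `cm run script --tags=detect,os`. The upstream repository name and the `detect,os` tags are illustrative assumptions:

    import cmind

    # Register the recipes locally (CLI: `cm pull repo mlcommons@cm4mlops`).
    r = cmind.access({'action': 'pull', 'automation': 'repo',
                      'artifact': 'mlcommons@cm4mlops'})
    assert r['return'] == 0, r.get('error')

    # Run a simple CM script by its tags
    # (CLI: `cm run script --tags=detect,os`).
    r = cmind.access({'action': 'run', 'automation': 'script',
                      'tags': 'detect,os', 'out': 'con'})
    assert r['return'] == 0, r.get('error')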
ctuning/cm4abtf-20241204
CM interface and automation recipes for ABTF
ctuning/cm4mlops-20241204
A collection of portable, reusable and cross-platform automation recipes (CM scripts) to make it easier to build and benchmark AI systems across diverse models, data sets, software and hardware
ctuning/cm4mlperf-inference-20241204
ctuning/cm4mlperf-results
CM interface and automation recipes to analyze MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as Performance/Watt or Performance/$.
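Those derived metrics are plain ratios over submitted results; a toy sketch for illustration (the record fields below are hypothetical placeholders, not this repository's actual schema):

    # Hypothetical records; real MLPerf results use a different schema.
    results = [
        {'system': 'edge-box-a', 'samples_per_second': 1200.0,
         'avg_power_watts': 60.0, 'system_cost_usd': 1500.0},
        {'system': 'server-b', 'samples_per_second': 54000.0,
         'avg_power_watts': 900.0, 'system_cost_usd': 80000.0},
    ]

    for r in results:
        perf_per_watt = r['samples_per_second'] / r['avg_power_watts']
        perf_per_dollar = r['samples_per_second'] / r['system_cost_usd']
        print(f"{r['system']}: {perf_per_watt:.1f} samples/s/W, "
              f"{perf_per_dollar:.2f} samples/s/$")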
ctuning/go-cm-20241204
Collective Mind
ctuning/go-cm4mlops-20241204
A collection of reusable and cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies to make it easier to build, run, benchmark and optimize AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware (cloud/edge)
ctuning/go-cm4mlperf-inference-20241204
ctuning/go-inference_results_visualization_template-20241204
ctuning/go-mlperf-automations-20241204
This repository contains the automations and scripts used to automate MLPerf benchmarks (mainly MLPerf inference for now)
ctuning/go-mlperf_inference_unofficial_submissions_v5.0-20241204
These are automated test submissions for validating the MLPerf inference workflows
ctuning/inference_results_v3.1
This repository contains the results and code for the MLPerf™ Inference v3.1 benchmark.
ctuning/inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
ctuning/inference_results_visualization_template-20241206
ctuning/mlcflow
MLCFlow: Simplifying MLPerf Automations
ctuning/mlcommons-ck-20241204
Collective Knowledge (CK) and Collective Mind (CM): educational community projects to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf and CM automations
ctuning/mlcommons-cm4abtf-20241205
CM interface and automation recipes for ABTF
ctuning/mlcommons-inference-20241204
Reference implementations of MLPerf™ inference benchmarks
ctuning/mlperf-automations
ctuning/mlperf-automations-20241204
This repository contains the automations and scripts used to automate MLPerf benchmarks (mainly MLPerf inference for now)
ctuning/mlperf-loadgen
Minimal copy of the MLPerf loadgen
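LoadGen drives a system under test (SUT) through an MLPerf scenario. A minimal smoke test with the `mlperf_loadgen` Python bindings, using a dummy SUT that completes every query immediately; this is not a valid benchmark run, and recent bindings are assumed (older ones had a different ConstructSUT signature):

    import mlperf_loadgen as lg

    def issue_query(query_samples):
        # A real SUT would run inference here; this dummy completes
        # each query at once with an empty response.
        lg.QuerySamplesComplete(
            [lg.QuerySampleResponse(s.id, 0, 0) for s in query_samples])

    def flush_queries():
        pass

    settings = lg.TestSettings()
    settings.scenario = lg.TestScenario.Offline
    settings.mode = lg.TestMode.PerformanceOnly

    sut = lg.ConstructSUT(issue_query, flush_queries)
    # QSL with 1024 total samples, 128 resident in memory; loading is a no-op.
    qsl = lg.ConstructQSL(1024, 128, lambda s: None, lambda s: None)

    lg.StartTest(sut, qsl, settings)
    lg.DestroyQSL(qsl)
    lg.DestroySUT(sut)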
ctuning/MLPerf-Power-HPCA-2025
ctuning/mlperf_inference_test_submissions_v5.0_20241204
ctuning/q2a-mlperf-visualizer
MLPerf explorer beta
ctuning/test_mlperf_inference_submissions