Pinned Repositories
sigs
Repository for ONNX SIG artifacts
compiler-explorer
Run compilers interactively from your web browser and interact with the assembly
cpyke
Easy integrated Python scripting embedded in C++
LLM-Viewer
Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and the hardware roofline model, presented in a user-friendly interface.
mbw
Memory Bandwidth Benchmark
nanoBench
A tool for running small microbenchmarks on recent Intel and AMD x86 CPUs.
onnx-model-analyzer
A parser, editor, and profiler for ONNX models.
onnx-opcounter
Count the number of parameters / MACs / FLOPs for ONNX models.
onnxruntime
ONNX Runtime: a cross-platform, high-performance ML inferencing and training accelerator
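The "hardware roofline model" mentioned in the LLM-Viewer description bounds a kernel's attainable throughput by either peak compute or memory bandwidth, whichever is lower at the kernel's arithmetic intensity. A minimal sketch of that calculation, using hypothetical hardware numbers (100 GFLOP/s peak, 50 GB/s bandwidth) that are not taken from any of these repositories:

```python
def roofline_gflops(peak_gflops: float, bandwidth_gbs: float,
                    arithmetic_intensity: float) -> float:
    """Attainable GFLOP/s under the roofline model.

    arithmetic_intensity: FLOPs performed per byte moved to/from memory.
    Below the ridge point (peak / bandwidth) a kernel is memory-bound;
    above it, compute-bound.
    """
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)


# Hypothetical hardware: 100 GFLOP/s peak compute, 50 GB/s memory bandwidth.
PEAK, BW = 100.0, 50.0

# Ridge point: 2.0 FLOPs/byte for this hypothetical machine.
ridge = PEAK / BW

# A GEMV-like kernel (~0.25 FLOPs/byte) is memory-bound ...
print(roofline_gflops(PEAK, BW, 0.25))   # 12.5 GFLOP/s
# ... while a large GEMM (~10 FLOPs/byte) hits the compute roof.
print(roofline_gflops(PEAK, BW, 10.0))   # 100.0 GFLOP/s
```

This is why LLM decode (dominated by matrix-vector products) tends to sit on the bandwidth slope of the roofline while prefill can reach the compute roof.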
saurabhtangri's Repositories
saurabhtangri/compiler-explorer
Run compilers interactively from your web browser and interact with the assembly
saurabhtangri/cpyke
Easy integrated Python scripting embedded in C++
saurabhtangri/LLM-Viewer
Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and the hardware roofline model, presented in a user-friendly interface.
saurabhtangri/mbw
Memory Bandwidth Benchmark
saurabhtangri/nanoBench
A tool for running small microbenchmarks on recent Intel and AMD x86 CPUs.
saurabhtangri/onnx-model-analyzer
A parser, editor, and profiler for ONNX models.
saurabhtangri/onnx-opcounter
Count the number of parameters / MACs / FLOPs for ONNX models.
saurabhtangri/onnxruntime
ONNX Runtime: a cross-platform, high-performance ML inferencing and training accelerator
saurabhtangri/profile
saurabhtangri/wiki
saurabhtangri/working-groups
Repository for ONNX working group artifacts
saurabhtangri/pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
saurabhtangri/tutorials
Tutorials for creating and using ONNX models
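The mbw entry above is a memory-bandwidth benchmark in the spirit of a timed memcpy. A rough, hedged sketch of the same idea in pure Python (not mbw's actual implementation, which is C): copy a large buffer a few times, keep the best time, and report MB/s of data copied.

```python
import time


def copy_bandwidth_mb_s(size_mb: int = 64, runs: int = 3) -> float:
    """Rough memcpy-style bandwidth estimate: best-of-N timed buffer copies.

    Reports MB/s of copied data (the copied buffer size over the fastest
    wall-clock copy time). Python adds overhead, so this understates what
    a C benchmark like mbw would measure on the same machine.
    """
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        dst = bytes(buf)  # one full copy of the buffer
        best = min(best, time.perf_counter() - t0)
        del dst
    return size_mb / best


if __name__ == "__main__":
    print(f"~{copy_bandwidth_mb_s():.0f} MB/s (best of 3 copies)")
```

Taking the best of several runs, as mbw's description implies, reduces noise from cache warm-up and scheduler jitter.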