Pinned Repositories
iree-amd-aie
IREE plugin repository for the AMD AIE accelerator
llama
Inference code for LLaMA models
pandas-mlir
Bridging Pandas and MLIR ecosystems
PI
A lightweight MLIR Python frontend with support for PyTorch
SHARK
SHARK - High Performance Machine Learning Distribution
SHARK-Turbine
Unified compiler/runtime for interfacing with PyTorch Dynamo.
sharktank
SHARK Inference Modeling and Serving
SRT
Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository for mainline development. This repository houses branches and configurations that aren't ready to be committed upstream.
techtalks
transformer-benchmarks
Benchmarking some transformer deployments
nod.ai's Repositories
nod-ai/SHARK
SHARK - High Performance Machine Learning Distribution
nod-ai/SRT
Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository for mainline development. This repository houses branches and configurations that aren't ready to be committed upstream.
nod-ai/SHARK-Turbine
Unified compiler/runtime for interfacing with PyTorch Dynamo.
nod-ai/iree-amd-aie
IREE plugin repository for the AMD AIE accelerator
nod-ai/PI
A lightweight MLIR Python frontend with support for PyTorch
nod-ai/pandas-mlir
Bridging Pandas and MLIR ecosystems
nod-ai/techtalks
nod-ai/sharktank
SHARK Inference Modeling and Serving
nod-ai/TheRock
The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm
nod-ai/mlir-aie
An MLIR-based toolchain for AMD AI Engine-enabled devices.
nod-ai/torch-mlir
The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem.
nod-ai/ROCR-Runtime
(fork) ROCm Platform Runtime: ROCr, an HSA-based runtime enhanced for the HPC market
nod-ai/sdxl-scripts
nod-ai/SHARK-TestSuite
Temporary home of a test suite we are evaluating
nod-ai/convperf
nod-ai/mlir-air
Fork of the upstream mlir-air repository for dependency management.
nod-ai/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
nod-ai/prototype-aie-toolchain
AIE-RT + Python
nod-ai/rocm-gemm-benchmark
nod-ai/llm-dev
Temporary repo for llm development.
nod-ai/base-docker-images
Utility repository for publishing docker images that we depend on.
nod-ai/diffusers
🤗 Diffusers for SHARK
nod-ai/e2eshark-reports
nod-ai/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
nod-ai/ROCm
Shared Fork of ROCm repository for development
nod-ai/SHARK-devtools
Development tools for managing SHARK projects.
nod-ai/stablehlo
Backward compatible ML compute opset inspired by HLO/MHLO
nod-ai/vllm_backend
Triton Inference Server vLLM Backend
nod-ai/xla
A machine learning compiler for GPUs, CPUs, and ML accelerators
nod-ai/XRT
(fork) Run Time for AIE and FPGA based platforms