tvm
There are 139 repositories under the tvm topic.
mlc-ai/mlc-llm
Universal LLM Deployment Engine with ML Compilation
apache/tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
mlc-ai/web-stable-diffusion
Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
hyperai/tvm-cn
TVM documentation in Simplified Chinese / TVM 中文文档
OAID/AutoKernel
AutoKernel is a simple, easy-to-use, low-barrier tool for automatic operator optimization that improves the deployment efficiency of deep learning algorithms.
flashinfer-ai/flashinfer
FlashInfer: Kernel Library for LLM Serving
zhiqwang/yolort
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM, and NCNN.
Ryan-yang125/ChatLLM-Web
🗣️ Chat with LLMs like Vicuna entirely in your browser with WebGPU, safely, privately, and with no server. Powered by WebLLM.
Zhen-Dong/HAWQ
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware deployment through TVM.
apache/tvm-vta
Open, Modular, Deep Learning Accelerator
ton-society/grants-and-bounties
TON Foundation invites talent to imagine and realize projects that have the potential to integrate with the daily lives of users.
JackonYang/paper-reading
Understands engineering deployment better than algorithm researchers, and understands algorithm models better than engineers.
merrymercy/tvm-mali
Optimizing Mobile Deep Learning on ARM GPU with TVM
everx-labs/TVM-Solidity-Compiler
Solidity compiler for TVM
tonkeeper/tongo
Go primitives to work with TON
apache/tvm-rfcs
A home for the final text of all TVM RFCs.
traveller59/torch2trt
Convert a PyTorch module to a TensorRT network or a TVM function
YongtaoGe/RetinaFace
A reimplementation of RetinaFace in PyTorch.
tlc-pack/TLCBench
Benchmark scripts for TVM
andersy005/tvm-in-action
TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together
whitelok/tvm-lesson
A hands-on tutorial on TVM core principles
xiayouran/VisuTVM
TVM Relay IR Visualization Tool
UofT-EcoSystem/DietCode
DietCode Code Release
markson14/FaceRecognitionCpp
Real-time face detector in C++ supporting large input sizes. Also supports face verification using MobileFaceNet + ArcFace with real-time inference: over 30 FPS on a CPU at 480p.
l1nkr/DL-Compiler-Navigation
A road map for machine learning compilers
Yulv-git/Model-Inference-Deployment
A curated list of awesome inference deployment framework of artificial intelligence (AI) models. OpenVINO, TensorRT, MediaPipe, TensorFlow Lite, TensorFlow Serving, ONNX Runtime, LibTorch, NCNN, TNN, MNN, TVM, MACE, Paddle Lite, MegEngine Lite, OpenPPL, Bolt, ExecuTorch.
ehsanmok/tvm-rust
(MERGED) Rust bindings for TVM runtime
tum-ei-eda/utvm_staticrt_codegen
This project contains a code generator that produces static C inference deployment code for neural networks targeting tiny microcontrollers (TinyML), as a replacement for other µTVM runtimes. The generated runtime executes the compiled model statically, which reduces code-size and execution-time overhead compared to a dynamic on-device runtime.
everscale-org/docs
Docs of Everscale
LCAI-TIHU/SW
LCAI-TIHU SW is the software stack for a RISC-V-based AI inference processor