Pinned Repositories
tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
wgpu
A cross-platform, safe, pure-Rust graphics API.
ArchProbe
A profiler to disclose and quantify hardware features on GPUs.
graphi-t
Handy tools & graphics API abstraction for blazing-fast prototyping
inline-spirv-rs
Compile GLSL/HLSL/WGSL and inline SPIR-V right inside your crate.
Minimalist-TaichiAOT
A minimal Taichi AOT project.
rtaichi
Taichi Frontend for Rust
spirq-rs
Lightweight SPIR-V reflection library
taichi
Productive, portable, and performant GPU programming in Python.
ncnn
ncnn is a high-performance neural network inference framework optimized for mobile platforms
PENGUINLIONG's Repositories
PENGUINLIONG/sairc
PENGUINLIONG/bolt
Bolt is a deep learning library with high performance and heterogeneous flexibility.
PENGUINLIONG/chisel3
Chisel 3: A Modern Hardware Design Language
PENGUINLIONG/clpeak
A tool which profiles OpenCL devices to find their peak capacities
PENGUINLIONG/clspv
Clspv is a prototype compiler for a subset of OpenCL C to Vulkan compute shaders
PENGUINLIONG/expr-simp
PENGUINLIONG/glfw
A multi-platform library for OpenGL, OpenGL ES, Vulkan, window and input
PENGUINLIONG/imnodes
A small, dependency-free node editor for dear imgui
PENGUINLIONG/libpng
LIBPNG: Portable Network Graphics support, official libpng repository
PENGUINLIONG/LigharS
PENGUINLIONG/LuisaCompute
Multi-Backend Heterogeneous Computing Framework
PENGUINLIONG/mace
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
PENGUINLIONG/macos-virtualbox
Push-button installer of macOS Catalina, Mojave, and High Sierra guests in Virtualbox for Windows, Linux, and macOS
PENGUINLIONG/MegEngine
MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation
PENGUINLIONG/MNN
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
PENGUINLIONG/ncnn
ncnn is a high-performance neural network inference framework optimized for mobile platforms
PENGUINLIONG/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
PENGUINLIONG/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
PENGUINLIONG/Paddle-Lite
Multi-platform, high-performance deep learning inference engine for PaddlePaddle (『飞桨』)
PENGUINLIONG/Real-Time-Rendering-4th-Bibliography-Collection
Collection of the Real-Time Rendering 4th (RTR4) bibliography
PENGUINLIONG/riscv-assembler
RISC-V Assembly code assembler package for Python.
PENGUINLIONG/rust-gamedev.github.io
The repository for rust-gamedev.github.io
PENGUINLIONG/spvgen
Library to Generate SPIR-V Binary
PENGUINLIONG/Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
PENGUINLIONG/tensorflow
An Open Source Machine Learning Framework for Everyone
PENGUINLIONG/TNN
TNN: a lightweight, high-performance deep learning framework for mobile inference, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and also draws on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome; collaborate with us to make TNN a better framework.
PENGUINLIONG/tvm-1
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
PENGUINLIONG/wgpu
Native WebGPU implementation based on gfx-hal
PENGUINLIONG/wgpu-android
wgpu Hello Triangle on Android
PENGUINLIONG/wgpu-rs
Rust bindings to wgpu native library