Pinned Repositories
tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
mlc-llm
Enables everyone to develop, optimize, and deploy AI models natively on their own devices.
pytorch-ssd
MobileNetV1, MobileNetV2, and VGG-based SSD/SSD-Lite implementations in PyTorch 1.0 / PyTorch 0.4. Out-of-the-box support for retraining on the Open Images dataset. ONNX and Caffe2 support. Experiments with ideas like CoordConv.
Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
Tengine-Convert-Tools
Tengine Convert Tool supports converting models from multiple frameworks into the tmfile format used by the Tengine-Lite AI framework.
tensorflow
An Open Source Machine Learning Framework for Everyone
test
Test
cccxinli's Repositories
cccxinli/mlc-llm
Enables everyone to develop, optimize, and deploy AI models natively on their own devices.
cccxinli/pytorch-ssd
MobileNetV1, MobileNetV2, and VGG-based SSD/SSD-Lite implementations in PyTorch 1.0 / PyTorch 0.4. Out-of-the-box support for retraining on the Open Images dataset. ONNX and Caffe2 support. Experiments with ideas like CoordConv.
cccxinli/Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
cccxinli/Tengine-Convert-Tools
Tengine Convert Tool supports converting models from multiple frameworks into the tmfile format used by the Tengine-Lite AI framework.
cccxinli/tensorflow
An Open Source Machine Learning Framework for Everyone
cccxinli/test
Test
cccxinli/tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators