Pinned Repositories
ai_inference_tools
algorithm-pattern
Algorithm templates: the most systematic way to practice coding problems and the fastest path through them. You deserve it~
BNN-PYNQ
caffe
Caffe: a fast open framework for deep learning.
caffe-windows
Configure Caffe in one hour for Windows users.
CHaiDNN
HLS-based Deep Neural Network Accelerator Library for Xilinx UltraScale+ MPSoCs
cmake-examples
Useful CMake Examples
CodingInterviewChinese2
Source code for the second edition of 《剑指Offer》 (Coding Interviews)
Deep-Compression-AlexNet
Deep Compression on AlexNet
PYNQ
Python Productivity for Zynq
jxhekang's Repositories
jxhekang/ai_inference_tools
jxhekang/algorithm-pattern
Algorithm templates: the most systematic way to practice coding problems and the fastest path through them. You deserve it~
jxhekang/caffe
Caffe: a fast open framework for deep learning.
jxhekang/caffe-windows
Configure Caffe in one hour for Windows users.
jxhekang/cmake-examples
Useful CMake Examples
jxhekang/CodingInterviewChinese2
Source code for the second edition of 《剑指Offer》 (Coding Interviews)
jxhekang/Deep-Compression-AlexNet
Deep Compression on AlexNet
jxhekang/DeepLearning-500-questions
500 Deep Learning Questions: a Q&A-style treatment of common topics in probability, linear algebra, machine learning, deep learning, computer vision, and other hot areas, written to help the author and any readers who need it. The book has 18 chapters and more than 500,000 characters. Given the author's limited expertise, readers are kindly asked to point out any errors. To be continued... For collaboration, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
jxhekang/DL_tensorflow
Tensorflow Basic Sample Code
jxhekang/Edge-AI-Platform-Tutorials
Tutorials for the Edge AI Platform
jxhekang/Efficient-Neural-Network-Bilibili
Companion code for the Efficient-Neural-Network study series shared on Bilibili
jxhekang/gputil
A Python module for getting the status of NVIDIA GPUs programmatically via nvidia-smi.
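A minimal usage sketch, assuming the repository tracks the upstream GPUtil package's API (getGPUs() and its GPU attributes):

```python
# Minimal sketch of querying GPU status, assuming the upstream GPUtil API
# (getGPUs() wrapping nvidia-smi); attribute names follow that package.
import GPUtil

for gpu in GPUtil.getGPUs():
    print(f"GPU {gpu.id} ({gpu.name}): "
          f"load {gpu.load * 100:.0f}%, "
          f"memory {gpu.memoryUsed:.0f}/{gpu.memoryTotal:.0f} MiB")
```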
jxhekang/Hands-On-GPU-Accelerated-Computer-Vision-with-OpenCV-and-CUDA
Hands-On GPU Accelerated Computer Vision with OpenCV and CUDA, published by Packt
jxhekang/how-to-optimize-gemm
jxhekang/micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ) to 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), operator adaptation (upsample), and dynamic shapes.
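As a rough illustration of the quantization-aware-training idea listed above (not micronet's actual API), a symmetric fake-quantizer with a straight-through estimator might look like this in PyTorch:

```python
# Hedged sketch of symmetric fake quantization with a straight-through
# estimator, illustrating the QAT idea only; not micronet's actual code.
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    # Straight-through estimator: forward uses q, backward sees x's gradient.
    return x + (q - x).detach()

w = torch.randn(16, 16, requires_grad=True)
w_q = fake_quantize(w, num_bits=4)
w_q.sum().backward()                               # gradients still flow to w
```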
jxhekang/MLIR-TVM
jxhekang/NN-CUDA-Example
Several simple examples for popular neural network toolkits calling custom CUDA operators.
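For context, the usual pattern for calling a custom CUDA operator from PyTorch is JIT compilation via torch.utils.cpp_extension.load; the source file names and the exposed `forward` binding below are placeholders, not files from this repository:

```python
# Sketch of JIT-compiling a custom CUDA op for PyTorch; "my_op.cpp" and
# "my_op_kernel.cu" are hypothetical source files, not from this repo.
import torch
from torch.utils.cpp_extension import load

my_op = load(
    name="my_op",
    sources=["my_op.cpp", "my_op_kernel.cu"],  # C++ binding + CUDA kernel
    verbose=True,
)

x = torch.randn(1024, device="cuda")
y = my_op.forward(x)   # assumes the extension exposes a `forward` binding
```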
jxhekang/OpenVINO-Custom-Layers
Tutorial for Using Custom Layers with OpenVINO (Intel Deep Learning Toolkit)
jxhekang/pytorch-cifar
95.47% on CIFAR10 with PyTorch
jxhekang/pytorch-distributed
A quickstart and benchmark for PyTorch distributed training.
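A minimal DistributedDataParallel sketch (generic PyTorch, assuming launch with torchrun rather than this repository's scripts):

```python
# Minimal DDP sketch, assuming launch via `torchrun --nproc_per_node=N train.py`;
# generic PyTorch, not this repository's benchmark code.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")          # env:// init from torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):
        x = torch.randn(32, 128, device=f"cuda:{local_rank}")
        loss = model(x).sum()
        opt.zero_grad()
        loss.backward()                              # gradients all-reduced by DDP
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```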
jxhekang/roofline
Roofline prototype for Arm
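The roofline model itself is just a minimum of the compute and bandwidth ceilings; a tiny sketch with illustrative machine numbers:

```python
# Roofline model sketch: attainable GFLOP/s is capped either by peak compute
# or by memory bandwidth times arithmetic intensity. Numbers are illustrative.
def attainable_gflops(arith_intensity_flops_per_byte: float,
                      peak_gflops: float = 2000.0,
                      peak_bw_gbs: float = 200.0) -> float:
    return min(peak_gflops, peak_bw_gbs * arith_intensity_flops_per_byte)

for ai in (0.5, 2.0, 10.0, 50.0):
    print(f"AI={ai:5.1f} flop/byte -> {attainable_gflops(ai):7.1f} GFLOP/s")
```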
jxhekang/shark-samples
jxhekang/tensorflow
An Open Source Machine Learning Framework for Everyone
jxhekang/tensorflow-predictor-cpp
TensorFlow prediction using the C++ API
jxhekang/Tensorflow-TensorRT
This repository accompanies my YouTube video series on optimizing a TensorFlow deep learning model with TensorRT. We demonstrate optimizing a LeNet-like model and a YOLOv3 model, achieving 3.7x and 1.5x speedups, respectively, over the original models.
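For reference, a hedged TF-TRT conversion sketch using the TensorFlow 2.x SavedModel path (the video series may use a different TensorFlow version or workflow; directory paths are placeholders):

```python
# Hedged TF-TRT sketch for a TF2 SavedModel; the repo's videos may use a
# different TF version/workflow. Directory paths are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model_fp32",        # placeholder path
    conversion_params=params)
converter.convert()
converter.save("saved_model_trt_fp16")               # placeholder path
```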
jxhekang/TensorRT-Program
Running the official TensorFlow ResNet-50 and VGG-19 models with TensorRT
jxhekang/test0620
jxhekang/torch-mlir
The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem.
jxhekang/UVM_template
Generate UVM testbench framework files with Python 3
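As a loose illustration of the idea (not this repository's templates), generating a skeleton UVM component file from a Python string template could look like:

```python
# Loose sketch of template-driven file generation; the SystemVerilog skeleton
# below is a generic placeholder, not this repository's actual template.
from string import Template
from pathlib import Path

DRIVER_TMPL = Template("""\
class ${name}_driver extends uvm_driver #(${name}_item);
  `uvm_component_utils(${name}_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass
""")

def generate_driver(block_name: str, out_dir: str = ".") -> Path:
    path = Path(out_dir) / f"{block_name}_driver.sv"
    path.write_text(DRIVER_TMPL.substitute(name=block_name))
    return path

print(generate_driver("my_block"))
```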
jxhekang/yolov5_cpp_openvino
A C++ implementation of YOLOv5 deployment with OpenVINO.
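The repository itself is C++; a rough sketch of the same inference flow with OpenVINO's Python API (openvino.runtime, 2022.x-style) is shown below, with the model path and input shape as placeholders and YOLOv5-specific pre/post-processing (letterboxing, NMS) omitted:

```python
# Python sketch of the OpenVINO inference flow (the repo itself is C++);
# model path and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5s.xml")                 # placeholder IR path
compiled = core.compile_model(model, "CPU")

image = np.zeros((1, 3, 640, 640), dtype=np.float32)   # placeholder input
results = compiled([image])
detections = results[compiled.output(0)]
print(detections.shape)
```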