Pinned Repositories
ai_learning
ai_papers
AI Papers
AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
DIYRefresh
Pull-to-refresh framework
fisheye
Fisheye image calibration
llama.cpp
LLM inference in C/C++
llama3
The official Meta Llama 3 GitHub site
OpenCV-Python-Tutorials
unet_keras
Image semantic segmentation with U-Net in Keras
yolov4
YOLOv4 implemented in TensorFlow and Keras
HLearning's Repositories
HLearning/fisheye
Fisheye image calibration
HLearning/unet_keras
Image semantic segmentation with U-Net in Keras
HLearning/ai_papers
AI Papers
HLearning/DIYRefresh
Pull-to-refresh framework
HLearning/OpenCV-Python-Tutorials
HLearning/yolov4
YOLOv4 implemented in TensorFlow and Keras
HLearning/ai_learning
HLearning/AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
HLearning/llama
Inference code for LLaMA models
HLearning/llama.cpp
LLM inference in C/C++
HLearning/llama3
The official Meta Llama 3 GitHub site
HLearning/mlc-llm
Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
HLearning/mlx
MLX: An array framework for Apple silicon
HLearning/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
HLearning/triton
Development repository for the Triton language and compiler
HLearning/tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
HLearning/ComputeLibrary-Review
The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
HLearning/cpp_learning
HLearning/fdlibm
Math library