Pinned Repositories
DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
Awesome-GPU
Awesome resources for GPUs
benchmark
A microbenchmark support library
coach
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state-of-the-art Reinforcement Learning algorithms
fvcore
Collection of common code that's shared among different research projects in the FAIR computer vision team.
intel-extension-for-transformers
Extending Hugging Face transformers APIs for Transformer-based models to improve the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve inference efficiency on Intel platforms.
Olive
Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation.
tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
zbench
A simple CM kernel launchpad
Qianshui-Jiang's Repositories
Qianshui-Jiang/Awesome-GPU
Awesome resources for GPUs
Qianshui-Jiang/benchmark
A microbenchmark support library
Qianshui-Jiang/coach
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state-of-the-art Reinforcement Learning algorithms
Qianshui-Jiang/DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
Qianshui-Jiang/fvcore
Collection of common code that's shared among different research projects in the FAIR computer vision team.
Qianshui-Jiang/intel-extension-for-transformers
Extending Hugging Face transformers APIs for Transformer-based models to improve the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve inference efficiency on Intel platforms.
Qianshui-Jiang/Olive
Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation.
Qianshui-Jiang/tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
Qianshui-Jiang/zbench
A simple CM kernel launchpad
Qianshui-Jiang/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Qianshui-Jiang/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
Qianshui-Jiang/tvm-build
A library for building TVM programmatically.