Pinned Repositories
3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
action-recognition-compressed-domain
Code for the NeurIPS 2019 submission "Accelerating Action Recognition in the Compressed Domain"
distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
huggingface-blog
Public repo for HF blog posts
intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡
openvino_notebooks
📚 Jupyter notebook tutorials for OpenVINO™
pytorch-pretrained-BERT
📖 The Big-&-Extending-Repository-of-Transformers: Pretrained PyTorch models for Google's BERT, OpenAI GPT & GPT-2, Google/CMU Transformer-XL.
reference
Reference implementations of MLPerf benchmarks
tnt
An abstraction to train neural networks
haim-barad's Repositories
haim-barad/action-recognition-compressed-domain
Code for the NeurIPS 2019 submission "Accelerating Action Recognition in the Compressed Domain"
haim-barad/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
haim-barad/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
haim-barad/huggingface-blog
Public repo for HF blog posts
haim-barad/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡
haim-barad/openvino_notebooks
📚 Jupyter notebook tutorials for OpenVINO™
haim-barad/pytorch-pretrained-BERT
📖 The Big-&-Extending-Repository-of-Transformers: Pretrained PyTorch models for Google's BERT, OpenAI GPT & GPT-2, Google/CMU Transformer-XL.
haim-barad/reference
Reference implementations of MLPerf benchmarks
haim-barad/tnt
An abstraction to train neural networks