Pinned Repositories
blog
Public repo for HF blog posts
lighteval
neural-compressor
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.
nn_pruning
Prune a model while fine-tuning or training.
notebooks
Notebooks using the Hugging Face libraries 🤗
onnxruntime
ONNX Runtime: cross-platform, high-performance ML inference and training accelerator
optimum
optimum-amd
AMD-related optimizations for transformer models
optimum-intel
Accelerate inference of 🤗 Transformers with Intel optimization tools
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
echarlaix's Repositories
echarlaix/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
echarlaix/blog
Public repo for HF blog posts
echarlaix/lighteval
echarlaix/neural-compressor
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.
echarlaix/nn_pruning
Prune a model while fine-tuning or training.
echarlaix/notebooks
Notebooks using the Hugging Face libraries 🤗
echarlaix/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inference and training accelerator
echarlaix/optimum
echarlaix/optimum-amd
AMD-related optimizations for transformer models
echarlaix/optimum-intel
Accelerate inference of 🤗 Transformers with Intel optimization tools
echarlaix/sentence-transformers
State-of-the-Art Text Embeddings