Pinned Repositories
CAMDM
cz5test
DeepLearning.ai-Summary
distributed
DMCF
fairscale
fairseq
llvm-project
Megatron-DeepSpeed
mesh
hubertlu-tw's Repositories
hubertlu-tw/CAMDM
(SIGGRAPH 2024) Official repository for "Taming Diffusion Probabilistic Models for Character Control"
hubertlu-tw/cz5test
A simple test of z5 wrapped to work with C
hubertlu-tw/DeepLearning.ai-Summary
This repository contains my personal notes and summaries of the DeepLearning.ai specialization courses. I've enjoyed every little bit of the courses, and I hope you enjoy my notes too.
hubertlu-tw/distributed
A distributed task scheduler for Dask
hubertlu-tw/DMCF
Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics (NeurIPS '22)
hubertlu-tw/fairscale
PyTorch extensions for high-performance and large-scale training.
hubertlu-tw/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
hubertlu-tw/llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Note: the repository does not accept GitHub pull requests at this time. Please submit patches at http://reviews.llvm.org.
hubertlu-tw/Megatron-DeepSpeed
Ongoing research on training transformer language models at scale, including BERT and GPT-2.
hubertlu-tw/mesh
Mesh TensorFlow: Model Parallelism Made Easier
hubertlu-tw/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inference and training accelerator.
hubertlu-tw/optimum
🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools.
hubertlu-tw/pvcnn
[NeurIPS 2019, Spotlight] Point-Voxel CNN for Efficient 3D Deep Learning
hubertlu-tw/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
hubertlu-tw/RAD-NeRF
Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition
hubertlu-tw/rccl
ROCm Communication Collectives Library (RCCL)
hubertlu-tw/ROCm
ROCm - Open Source Platform for HPC and Ultrascale GPU Computing
hubertlu-tw/scratch
hubertlu-tw/stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
hubertlu-tw/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
hubertlu-tw/tutorials
PyTorch tutorials.