Pinned Repositories
DATM
ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
Dynamic-Diffusion-Transformer
Dynamic-Tuning
The official implementation of "Dynamic Tuning: Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024)
Enhance-A-Video
Enhance-A-Video: Better Generated Video for Free
InfoBatch
Lossless Training Speed Up by Unbiased Dynamic Data Pruning
LARS-ImageNet-PyTorch
Large-batch training of ResNet on ImageNet with the LARS optimizer in PyTorch (77% accuracy), using Horovod for distributed training. Supports optional gradient accumulation and the NVIDIA DALI data loader.
Neural-Network-Parameter-Diffusion
We introduce a novel approach for parameter generation, named neural network parameter diffusion (p-diff), which employs a standard latent diffusion model to synthesize a new set of parameters.
pytorch-lamb
PyTorch implementation of LAMB for ImageNet/ResNet-50 training
SpeeD
SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
VideoSys
VideoSys: An easy and efficient system for video generation
NUS HPC AI Lab's Repositories
NUS-HPC-AI-Lab/VideoSys
VideoSys: An easy and efficient system for video generation
NUS-HPC-AI-Lab/Neural-Network-Parameter-Diffusion
We introduce a novel approach for parameter generation, named neural network parameter diffusion (p-diff), which employs a standard latent diffusion model to synthesize a new set of parameters.
NUS-HPC-AI-Lab/InfoBatch
Lossless Training Speed Up by Unbiased Dynamic Data Pruning
NUS-HPC-AI-Lab/Enhance-A-Video
Enhance-A-Video: Better Generated Video for Free
NUS-HPC-AI-Lab/SpeeD
SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
NUS-HPC-AI-Lab/DATM
ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
NUS-HPC-AI-Lab/Dynamic-Diffusion-Transformer
NUS-HPC-AI-Lab/Dynamic-Tuning
The official implementation of "Dynamic Tuning: Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024)
NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch
Large-batch training of ResNet on ImageNet with the LARS optimizer in PyTorch (77% accuracy), using Horovod for distributed training. Supports optional gradient accumulation and the NVIDIA DALI data loader.
NUS-HPC-AI-Lab/R-MeeTo
Give us minutes, we give back a faster Mamba. The official implementation of "Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training".
NUS-HPC-AI-Lab/oh-my-server
NUS-HPC-AI-Lab/GEOM
PyTorch implementation of "Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching" (ICML 2024)
NUS-HPC-AI-Lab/PAD
Prioritize Alignment in Dataset Distillation
NUS-HPC-AI-Lab/InfoGrowth
Efficient and Online Dataset Growth Algorithm (with cleanness and diversity awareness) to deal with growing web data
NUS-HPC-AI-Lab/Helen
The official implementation of "Helen: Optimizing CTR Prediction Models with Frequency-wise Hessian Eigenvalue Regularization"
NUS-HPC-AI-Lab/pytorch-lamb
PyTorch implementation of LAMB for ImageNet/ResNet-50 training
NUS-HPC-AI-Lab/EDF
NUS-HPC-AI-Lab/Awesome-Efficient-Video-Generation
A curated list of recent efficient video generation methods.
NUS-HPC-AI-Lab/Multimodal-ICL-Retriever
NUS-HPC-AI-Lab/SGL
NUS-HPC-AI-Lab/CTRL
PyTorch implementation of "Two Trades is not Baffled: Condensing Graph via Crafting Rational Gradient Matching"
NUS-HPC-AI-Lab/ColossalAI
Colossal-AI: A Unified Deep Learning System for Large-Scale Parallel Training
NUS-HPC-AI-Lab/EnergonAI
Large-scale model inference.
NUS-HPC-AI-Lab/FastFold
Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters
NUS-HPC-AI-Lab/PaLM-colossalai
Scalable PaLM implementation in PyTorch
NUS-HPC-AI-Lab/SkyComputing
Sky Computing: a new paradigm for federated learning
NUS-HPC-AI-Lab/TensorNVMe
A Python library that transfers PyTorch tensors between CPU and NVMe storage
NUS-HPC-AI-Lab/.github