Bharat-Runwal
Independent Researcher | Visiting Scholar @OPTML-Group, MSU | Undergrad @ IIT Delhi (2018-22)
East Lansing
Bharat-Runwal's Stars
google-research/google-research
Google Research
eriklindernoren/PyTorch-GAN
PyTorch implementations of Generative Adversarial Networks.
unifyai/ivy
The Unified AI Framework
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNN and transformer: strong performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
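The "trained like a GPT, runs like an RNN" claim rests on a linear recurrence that has both a sequential and a parallel form. A minimal NumPy sketch of that duality (a plain exponential-decay recurrence for illustration only, not RWKV's actual WKV operator; the decay value and shapes are made up):

```python
import numpy as np

T, d = 8, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(T, d))
w = 0.9  # per-step decay (RWKV learns per-channel decays)

# RNN mode: constant memory, one token at a time (how such models run at inference)
h = np.zeros(d)
rnn_out = []
for t in range(T):
    h = w * h + x[t]
    rnn_out.append(h.copy())
rnn_out = np.stack(rnn_out)

# Parallel mode: all timesteps at once (how a GPT-style model trains)
decay = w ** (np.arange(T)[:, None] - np.arange(T)[None, :])  # w^(t-s)
decay = np.tril(decay)                                         # causal mask
par_out = decay @ x

assert np.allclose(rnn_out, par_out)  # both forms compute the same outputs
```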
diff-usion/Awesome-Diffusion-Models
A collection of resources and papers on Diffusion Models
artidoro/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
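For context, the QLoRA recipe is: load the base model with 4-bit NF4 quantization, then train low-rank LoRA adapters on top of the frozen quantized weights. A minimal sketch with Hugging Face transformers + peft (the model name, rank, and target modules below are illustrative choices, not the repo's exact configuration):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, introduced by QLoRA
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # precision for dequantized compute
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # illustrative; the paper adapts all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the small LoRA adapters are trained
model.print_trainable_parameters()
```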
NielsRogge/Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
Uberi/speech_recognition
Speech recognition module for Python, supporting several engines and APIs, online and offline.
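A minimal usage sketch for this package, transcribing a WAV file with the online Google Web Speech engine ("audio.wav" is a placeholder path):

```python
import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("audio.wav") as source:
    audio = r.record(source)           # read the entire file into memory

try:
    print(r.recognize_google(audio))   # online engine; offline engines also supported
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as e:
    print(f"API request failed: {e}")
```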
Lyken17/pytorch-OpCounter
Count the MACs / FLOPs of your PyTorch model.
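The package is installed as `thop`; a quick sketch counting MACs and parameters for a torchvision ResNet-18 (the model and input shape are just examples):

```python
import torch
from thop import profile
from torchvision.models import resnet18

model = resnet18()
dummy = torch.randn(1, 3, 224, 224)           # one 224x224 RGB image
macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e9:.2f} G, params: {params / 1e6:.2f} M")
```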
mosaicml/llm-foundry
LLM training code for Databricks foundation models
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
salesforce/CodeT5
Home of CodeT5: Open Code LLMs for Code Understanding and Generation
he-y/Awesome-Pruning
A curated list of neural network pruning resources.
IST-DASLab/gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
real-stanford/diffusion_policy
[RSS 2023] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion
facebookresearch/multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
vturrisi/solo-learn
solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
lukemelas/PyTorch-Pretrained-ViT
Vision Transformer (ViT) in PyTorch
microsoft/Tutel
Tutel MoE: An Optimized Mixture-of-Experts Implementation
IST-DASLab/sparsegpt
Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
locuslab/wanda
A simple and effective LLM pruning approach.
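Wanda's scoring rule is simple enough to sketch: score each weight by its magnitude times the L2 norm of the corresponding input activation, then drop the lowest-scored weights within each output row. A NumPy toy version of that rule (the function name and shapes are mine, not the repo's API):

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Zero the lowest-scored weights per output row.

    W: (out_features, in_features) weight matrix
    X: (n_samples, in_features) calibration activations
    Score from the paper: |W_ij| * ||X_:,j||_2
    """
    feature_norms = np.linalg.norm(X, axis=0)      # (in_features,)
    scores = np.abs(W) * feature_norms[None, :]    # (out, in)
    k = int(W.shape[1] * sparsity)                 # weights to drop per row
    drop = np.argsort(scores, axis=1)[:, :k]       # k smallest scores in each row
    mask = np.ones_like(W, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return W * mask
```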
cvpr2023-tutorial-diffusion-models/papers
VainF/Diff-Pruning
[NeurIPS 2023] Structural Pruning for Diffusion Models
yilundu/reduce_reuse_recycle
[ICML 2023] Reduce, Reuse, Recycle: Composing Energy-Based Diffusion Models with MCMC
thunlp/MoEfication
Picsart-AI-Research/IPL-Zero-Shot-Generative-Model-Adaptation
[CVPR 2023] Zero-shot Generative Model Adaptation via Image-specific Prompt Learning
cvlab-columbia/ZSRobust4FoundationModel
adapter-hub/efficient-task-transfer
Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021
FrancescoSaverioZuppichini/Loading-huge-PyTorch-models-with-linear-memory-consumption
A short article showing how to load PyTorch models with linear memory consumption
calgaryml/condensed-sparsity
[ICLR 2024] Dynamic Sparse Training with Structured Sparsity