Pinned Repositories
3dgp
3D generation on ImageNet [ICLR 2023]
articulated-animation
Code for Motion Representations for Articulated Animation paper
EfficientFormer
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
HyperHuman
[ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion"
MMVID
[CVPR 2022] Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning
MobileR2L
[CVPR 2023] Real-Time Neural Light Field on Mobile Devices
MoCoGAN-HD
[ICLR 2021 Spotlight] A Good Image Generator Is What You Need for High-Resolution Video Synthesis
NeROIC
Panda-70M
[CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers
R2L
[ECCV 2022] R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis
Snap Research's Repositories
snap-research/articulated-animation
Code for Motion Representations for Articulated Animation paper
snap-research/EfficientFormer
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
snap-research/HyperHuman
[ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion"
snap-research/Panda-70M
[CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers
snap-research/MobileR2L
[CVPR 2023] Real-Time Neural Light Field on Mobile Devices
snap-research/R2L
[ECCV 2022] R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis
snap-research/MyVLM
Official Implementation for "MyVLM: Personalizing VLMs for User-Specific Queries"
snap-research/BitsFusion
snap-research/SnapFusion
snap-research/graphless-neural-networks
[ICLR 2022] Code for Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation (GLNN)
snap-research/AToM
Official implementation of "AToM: Amortized Text-to-Mesh using 2D Diffusion"
snap-research/SF-V
This repository contains the code for SF-V: Single Forward Video Generation Model.
snap-research/unsupervised-volumetric-animation
The repository for the paper "Unsupervised Volumetric Animation"
snap-research/LargeGT
Graph Transformers for Large Graphs
snap-research/linkless-link-prediction
[ICML 2023] Linkless Link Prediction via Relational Distillation
snap-research/locomo
snap-research/hpdm
Hierarchical Patch Diffusion Models for High-Resolution Video Synthesis [CVPR 2024]
snap-research/weights2weights
Official Implementation of weights2weights
snap-research/textcraftor
snap-research/promptable-game-models
snap-research/USE
USE: Dynamic User Modeling with Stateful Sequence Models
snap-research/3D_4D_modeling_tutorial
3D/4D Generation and Modeling with Generative Priors, CVPR 2024 Tutorial
snap-research/4Real
Towards Photorealistic 4D Scene Generation via Video Diffusion Models
snap-research/cv-call-for-interns-2024
snap-research/qfar
Official implementation of MobiCom 2023 paper "QfaR: Location-Guided Scanning of Visual Codes from Long Distances"
snap-research/snapvideo
snap-research/SPAD
Source code for paper "SPAD: Spatially Aware Multi-View Diffusers"
snap-research/GenAU
snap-research/GTR
snap-research/improving-inductive-oov-recsys
Improving Out-of-Vocabulary Handling in Recommendation Systems