liuyyy111's Stars
skyil7/mmcr
PyTorch implementation of Maximum Manifold Capacity Representations (MMCR) loss.
AnswerDotAI/gpu.cpp
A lightweight library for portable low-level GPU computation using WebGPU.
QinYang79/Awesome-Noisy-Correspondence
This is a summary of research on noisy correspondence. There may be omissions. If anything is missing, please get in touch with us. Our emails: linyijie.gm@gmail.com, yangmouxing@gmail.com, qinyang.gm@gmail.com
fanglaosi/Skeleton-in-Context
[CVPR2024] Official implementation of the paper: Skeleton-in-Context: Unified Skeleton Sequence Modeling with In-Context Learning
facebookresearch/EgoCom-Dataset
EgoCom: A Multi-person Multi-modal Egocentric Communications Dataset
stevenlsw/hoi-forecast
[CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos
EgocentricVision/EgocentricVision
🔍 Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-progress; join us!
liuyyy111/Point-RAE
Code for ACM MM 2023 paper - Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning
DayongRen/Spiking-PointNet
Official PyTorch implementation for the following paper: Spiking PointNet: Spiking Neural Networks for Point Clouds.
liuyyy111/ConVSE
PyTorch source code for "Regularizing Visual Semantic Embedding with Contrastive Learning for Image-Text Matching"
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
ZrrSkywalker/I2P-MAE
[CVPR 2023] Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders
facebookresearch/barlowtwins
PyTorch implementation of Barlow Twins.
facebookresearch/mae
PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377
huggingface/pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Includes train, eval, inference, and export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more.
cshizhe/asg2cap
Code accompanying the paper "Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs" (Chen et al., CVPR 2020, Oral).
OUCMachineLearning/OUCML