Pinned Repositories
AMeFu-Net
Repository for the paper "Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition"
CDFSOD-benchmark
A benchmark for cross-domain few-shot object detection (ECCV24 paper: Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector)
Embodied-One-Shot-Video-Recognition
Code for the ACM Multimedia 2019 paper "Embodied One-Shot Video Recognition: Learning from Actions of a Virtual Embodied Agent"
lovelyqian.github.io
ME-D2N_for_CDFSL
Repository for the paper: ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning
Meta-FDMixup
Repository for the paper: Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data
NTIRE2025_CDFSOD
NTIRE 2025: The 1st Challenge on Cross-Domain Few-Shot Object Detection @ CVPR 2025
ObjectRelator
Official repo for "ObjectRelator: Enabling Cross-View Object Relation Understanding in Ego-Centric and Exo-Centric Videos"
StyleAdv-CDFSL
Repository for the CVPR 2023 paper: StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning
wave-SAN-CDFSL
Code for the paper "Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning"
lovelyqian's Repositories
lovelyqian/CDFSOD-benchmark
A benchmark for cross-domain few-shot object detection (ECCV24 paper: Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector)
lovelyqian/Meta-FDMixup
Repository for the paper: Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data
lovelyqian/StyleAdv-CDFSL
Repository for the CVPR 2023 paper: StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning
lovelyqian/AMeFu-Net
Repository for the paper "Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition"
lovelyqian/NTIRE2025_CDFSOD
NTIRE 2025: The 1st Challenge on Cross-Domain Few-Shot Object Detection @ CVPR 2025
lovelyqian/ME-D2N_for_CDFSL
Repository for the paper: ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning
lovelyqian/Embodied-One-Shot-Video-Recognition
Code for the ACM Multimedia 2019 paper "Embodied One-Shot Video Recognition: Learning from Actions of a Virtual Embodied Agent"
lovelyqian/wave-SAN-CDFSL
Code for the paper "Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning"
lovelyqian/lovelyqian.github.io
lovelyqian/ObjectRelator
Official repo for "ObjectRelator: Enabling Cross-View Object Relation Understanding in Ego-Centric and Exo-Centric Videos"
lovelyqian/academicpages.github.io
Github Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
lovelyqian/arn-eccv20-master
Code for the ECCV 2020 paper "Few-shot Action Recognition via Permutation-invariant Attention"
lovelyqian/awesome-action-recognition
A curated list of action recognition and related area resources
lovelyqian/DAMDNet
DAMDNet for 3D face alignment (ICCV 2019 workshop)
lovelyqian/awesome-egocentric-vision
A curated list of egocentric (first-person) vision and related area resources
lovelyqian/Detectron.pytorch
A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.
lovelyqian/detectron2
Detectron2 is FAIR's next-generation research platform for object detection and segmentation.
lovelyqian/hmd
Detailed Human Shape Estimation from a Single Image by Hierarchical Mesh Deformation (CVPR2019 Oral)
lovelyqian/lovelyqian
Config files for my GitHub profile.
lovelyqian/maskrcnn-benchmark
Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
lovelyqian/mixup-cifar10
mixup: Beyond Empirical Risk Minimization
lovelyqian/mmskeleton
Spatial Temporal Graph Convolutional Networks (ST-GCN) for Skeleton-Based Action Recognition in PyTorch
lovelyqian/occupancy_flow
This repository contains the code for the ICCV 2019 paper "Occupancy Flow - 4D Reconstruction by Learning Particle Dynamics"
lovelyqian/OpenMMD
OpenMMD is an OpenPose-based application that converts real-person videos into motion files (.vmd) that can directly drive animated movies of 3D models (e.g. Miku, Anmicius).
lovelyqian/Pixel2Mesh-1
A complete Pixel2Mesh implementation in PyTorch
lovelyqian/pytorch-i3d
lovelyqian/segment-anything
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
lovelyqian/SlowFast
PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.
lovelyqian/VIBE
Official implementation of CVPR2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation"
lovelyqian/vision
Datasets, Transforms and Models specific to Computer Vision