xiongfy's Stars
eric-hang/CGF
Implementation of Counterfactual Generation Framework
iamwangyabin/S-Prompts
Code for the NeurIPS 2022 paper "S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning"
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
KHU-AGI/PriViLege
[CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners
google-research/l2p
Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
wangkiw/TEEN
The code repository for "Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration" (NeurIPS'23) in PyTorch
zhoudw-zdw/CVPR22-Fact
Forward Compatible Few-Shot Class-Incremental Learning (CVPR'22)
zysong0113/SAVC
[CVPR 2023] Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning
muzairkhattak/PromptSRC
[ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without Forgetting".
xialeiliu/Awesome-Incremental-Learning
Awesome Incremental Learning
rokmr/Continual-Learning
NTU-LANTERN/CFST
Compositional Few-Shot Testing with CGQA and COBJ
IMNearth/Curriculum-Learning-For-VLN
Code for NeurIPS 2021 paper "Curriculum Learning for Vision-and-Language Navigation"
CrystalSixone/DSRG
Code for "A Dual Semantic-Aware Recurrent Global-Adaptive Network for Vision-and-Language Navigation"
cshizhe/VLN-HAMT
Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21).
GengzeZhou/NavGPT
[AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
YanyuanQiao/MiC
Code of the ICCV 2023 paper "March in Chat: Interactive Prompting for Remote Embodied Referring Expression"
UMass-Foundation-Model/3D-LLM
Code for 3D-LLM: Injecting the 3D World into Large Language Models
chenjinyubuaa/SEvol
weituo12321/PREVALENT
Large-scale pre-training for the navigation task
google-research/pathdreamer
ericsujw/Matterport3DLayoutAnnotation
Layout annotation on a subset of Matterport3D dataset
HanqingWangAI/Dreamwalker
Implementation of our ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation
facebookresearch/habitat-lab
A modular high-level library to train embodied AI agents across a variety of tasks and environments.
niessner/Matterport
Matterport3D is a pretty awesome dataset for RGB-D machine learning tasks :)
MrZihan/GridMM
Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV'23).
ronghanghu/speaker_follower
Code release for Fried et al., Speaker-Follower Models for Vision-and-Language Navigation (NeurIPS 2018).
zehao-wang/LAD
Official implementation of Layout-aware Dreamer for Embodied Referring Expression Grounding (AAAI'23).
cshizhe/VLN-DUET
Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral).
MarSaKi/VLN-BEVBert
[ICCV 2023] Official repo of "BEVBert: Multimodal Map Pre-training for Language-guided Navigation"