Pinned Repositories
airbert
Codebase for the Airbert paper
awesome-bci
Curated Collection of BCI resources
awesome-embodied-vision
Reading list for research topics in embodied vision
ETPNav
[TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments"
learning_research
Research experience from my PhD studies
LLMsPracticalGuide
nasnet
Reimplementation of "Learning Transferable Architectures for Scalable Image Recognition" on the MNIST dataset, including the controller
NvEM
[ACM MM 2021 Oral] Official repo of "Neighbor-view Enhanced Model for Vision and Language Navigation"
unlocking-the-power-of-llms
Use Prompts and Chains to turn ChatGPT into a magical productivity tool! Unlocking the power of LLMs.
VLN-BEVBert
[ICCV 2023] Official repo of "BEVBert: Multimodal Map Pre-training for Language-guided Navigation"
MarSaKi's Repositories
MarSaKi/ETPNav
[TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments"
MarSaKi/VLN-BEVBert
[ICCV 2023] Official repo of "BEVBert: Multimodal Map Pre-training for Language-guided Navigation"
MarSaKi/NvEM
[ACM MM 2021 Oral] Official repo of "Neighbor-view Enhanced Model for Vision and Language Navigation"
MarSaKi/learning_research
Research experience from my PhD studies
MarSaKi/LLMsPracticalGuide
MarSaKi/unlocking-the-power-of-llms
Use Prompts and Chains to turn ChatGPT into a magical productivity tool! Unlocking the power of LLMs.
MarSaKi/awesome-bci
Curated Collection of BCI resources
MarSaKi/Awesome-LLM
Awesome-LLM: a curated list of Large Language Models
MarSaKi/Awesome-LLM-Robotics
A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites
MarSaKi/Awesome-Visual-Instruction-Tuning
Latest Papers and Datasets on Visual Instruction Tuning
MarSaKi/Awesome_Prompting_Papers_in_Computer_Vision
A curated list of prompt-based papers in computer vision and vision-language learning.
MarSaKi/cvpr-latex-template
Extended LaTeX template for CVPR/ICCV papers
MarSaKi/Grounding-REVERIE-Challenge
Official REVERIE Referring Expression Grounding Model of REVERIE Challenge @ CSIG 2022
MarSaKi/HOP-REVERIE-Challenge
Baseline for REVERIE-Challenge using HOP
MarSaKi/ChatGPT_JCM
An OpenAI management interface that aggregates all OpenAI APIs into UI operations (all models, images, audio, fine-tuning, files), with Markdown support (formulas, charts, tables). The official GPT-4 API is still in the application stage and will be integrated gradually. The WeChat group ID is below; please give the repo a Star in the top right. I will keep updating it so we can all learn and grow together.
MarSaKi/cow
Implementation of CoW appearing at CVPR 2023
MarSaKi/datacomp
DataComp: In search of the next generation of multimodal datasets
MarSaKi/Everything-LLMs-And-Robotics
The world's largest GitHub Repository for LLMs + Robotics
MarSaKi/habitat-matterport-3dresearch
MarSaKi/langchain
⚡ Building applications with LLMs through composability ⚡
MarSaKi/LLM-in-Vision
Recent LLM-based CV and related works. Welcome to comment/contribute!
MarSaKi/LLM-with-RL-papers
A collection of LLM with RL papers
MarSaKi/MarSaKi.github.io
MarSaKi/MiniGPT-4
MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
MarSaKi/NLP_ability
A summary of the knowledge an NLP engineer needs to accumulate, including interview questions, fundamentals of all kinds, and engineering skills, to strengthen core competitiveness
MarSaKi/OpenAGI
OpenAGI: When LLM Meets Domain Experts
MarSaKi/Paper-Picture-Writing-Code
Paper Picture Writing Code
MarSaKi/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
MarSaKi/VIMA
Official Algorithm Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
MarSaKi/waypoint-predictor
Training code for the waypoint predictor in Discrete-to-Continuous VLN.