DingchenYang99's Stars
langchain-ai/langchain
🦜🔗 Build context-aware reasoning applications
run-llama/llama_index
LlamaIndex is a data framework for your LLM applications
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
OpenBMB/ChatDev
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
ml-explore/mlx
MLX: An array framework for Apple silicon
joonspk-research/generative_agents
Generative Agents: Interactive Simulacra of Human Behavior
a16z-infra/ai-town
An MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize.
WooooDyy/LLM-Agent-Paper-List
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
tickstep/aliyunpan
A command-line client for Aliyun Drive (aliyunpan), with support for JavaScript plugins and synchronized backup.
Luodian/Otter
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Alpha-VLLM/LLaMA2-Accessory
An Open-source Toolkit for LLM Development
yinboc/liif
Learning Continuous Image Representation with Local Implicit Image Function, in CVPR 2021 (Oral)
microsoft/SoM
Set-of-Mark Prompting for GPT-4V and LMMs
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
DmitryRyumin/ICCV-2023-Papers
ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included. ⭐ Support visual intelligence development!
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
shikras/shikra
wayveai/Driving-with-LLMs
PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"
GAP-LAB-CUHK-SZ/MVImgNet
CVPR2023 | MVImgNet: A Large-scale Dataset of Multi-view Images
LinWeizheDragon/Retrieval-Augmented-Visual-Question-Answering
This is the official repository for Retrieval Augmented Visual Question Answering
qiantianwen/NuScenes-QA
[AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario.
yuanze-lin/REVIVE
[NeurIPS 2022] Official Code for REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering
Kunlun-Zhu/Awesome-Agents-Research
Understanding-Visual-Datasets/VisDiff
Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral)
KyanChen/OvarNet
Official implementation of the paper "OvarNet: Towards Open-vocabulary Object Attribute Recognition"
isekai-portal/Link-Context-Learning
AndersonStra/MuKEA
MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering
microsoft/UniTAB
UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation)
JoseponLee/IntentQA
Official repository for "IntentQA: Context-aware Video Intent Reasoning" from ICCV 2023.