evison's Stars
sindresorhus/awesome
😎 Awesome lists about all kinds of interesting topics
langchain-ai/opengpts
MarkFzp/mobile-aloha
Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation
agiresearch/AIOS
AIOS: LLM Agent Operating System
MarkFzp/act-plus-plus
Imitation learning algorithms with Co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN
zchoi/Awesome-Embodied-Agent-with-LLMs
A curated list of research on "Embodied AI or robots with Large Language Models". Watch this repository for the latest updates! 🔥
themesberg/landwind
Responsive and clean landing page built with Tailwind CSS and Flowbite
wayveai/Driving-with-LLMs
PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"
agiresearch/WarAgent
WarAgent: LLM-based Multi-Agent Simulation of World Wars
agiresearch/Formal-LLM
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
Luckfort/CD
"Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?"
MingyuJ666/ProLLM
[COLM'24] We propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as language prompts: a signaling pathway is treated as a protein reasoning process that starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins.
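As a rough illustration of the ProCoT idea described above, the sketch below serializes a signaling pathway into a step-by-step reasoning prompt. The pathway data, the `build_procot_prompt` helper, and the prompt wording are hypothetical assumptions for illustration, not code or data from the MingyuJ666/ProLLM repository.

```python
# Hypothetical ProCoT-style sketch: verbalize a signaling pathway as a
# reasoning chain from upstream proteins, through intermediate proteins,
# to a downstream target, then ask the model to reason over it.

def build_procot_prompt(upstream, intermediates, downstream):
    """Serialize one signaling pathway into a chain-of-thought prompt."""
    steps = [f"Step 1: The upstream protein {upstream} initiates the signal."]
    for i, protein in enumerate(intermediates, start=2):
        steps.append(f"Step {i}: The signal is relayed through {protein}.")
    question = (
        f"Question: does the signal ultimately reach {downstream}? "
        "Reason over the pathway step by step before answering."
    )
    return "\n".join(steps + [question])


if __name__ == "__main__":
    # Toy pathway for demonstration only; not taken from the paper's data.
    print(build_procot_prompt("EGFR", ["RAS", "RAF", "MEK", "ERK"], "MYC"))
```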
casmlab/NPHardEval
Repository for NPHardEval, a quantified-dynamic benchmark of LLMs
agiresearch/AutoFlow
agiresearch/CoRE
LLM as Interpreter for Natural Language Programming, Pseudo-code Programming and Flow Programming of AI Agents
agiresearch/IDGenRec
Towards LLM-RecSys Alignment with Textual ID Learning
MingyuJ666/The-Impact-of-Reasoning-Step-Length-on-Large-Language-Models
[ACL'24] Chain of Thought (CoT) prompting significantly improves the reasoning abilities of large language models (LLMs), but how its effectiveness relates to the length of the reasoning steps in prompts remains largely unknown. To shed light on this, we conduct several empirical experiments exploring this relationship.
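To make the studied variable concrete, here is a minimal sketch of building otherwise identical CoT prompts that differ only in how many reasoning steps the worked example contains, so downstream accuracy can be compared across lengths. The example question, step expansion, and function name are illustrative assumptions, not code from the repository.

```python
# Hypothetical sketch: generate CoT prompts whose demonstrations differ only
# in the number of reasoning steps, for comparing prompt lengths.

BASE_QUESTION = "Q: A shop sells 3 boxes of 12 apples each. How many apples in total?"

REASONING_STEPS = [
    "There are 3 boxes.",
    "Each box contains 12 apples.",
    "Multiply the number of boxes by apples per box: 3 * 12.",
    "3 * 12 equals 36.",
]

def make_cot_prompt(num_steps: int) -> str:
    """Return a prompt whose worked example uses the first `num_steps` steps."""
    steps = REASONING_STEPS[:num_steps]
    chain = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    return f"{BASE_QUESTION}\nLet's think step by step.\n{chain}\nA:"

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"--- prompt with {n} reasoning step(s) ---")
        print(make_cot_prompt(n))
```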
agiresearch/TrustAgent
TrustAgent: Towards Safe and Trustworthy LLM-based Agents
agiresearch/BattleAgent
agiresearch/EmojiCrypt
EmojiCrypt: Prompt Encryption for Secure Communication with Large Language Models
GHupppp/MemorySharingLLM
qcznlp/uncertainty_attack
agiresearch/ContextHub
Contextualized Logic
Tizzzzy/Law_LLM
agiresearch/UP5
UP5: Unbiased Foundation Model for Fairness-aware Recommendation
agiresearch/MoralBench
MoralBench: Evaluating the Morality of Large Language Models
yunqi-li/Fairness-Of-ChatGPT
agiresearch/agiresearch.github.io
AGI Research
agiresearch/AIOS-RAG
imrecommender/PGNR