mccandless's Stars
lobehub/lobe-chat
🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports Multi AI Providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), Knowledge Base (file upload / knowledge management / RAG), Multi-Modals (Vision/TTS) and plugin system. One-click FREE deployment of your private ChatGPT/Claude application.
karpathy/LLM101n
LLM101n: Let's build a Storyteller
myshell-ai/OpenVoice
Instant voice cloning by MIT and MyShell.
Mintplex-Labs/anything-llm
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
Doriandarko/claude-engineer
Claude Engineer is an interactive command-line interface (CLI) that leverages the power of Anthropic's Claude-3.5-Sonnet model to assist with software development tasks. This tool combines the capabilities of a large language model with practical file system operations and web search functionality.
huggingface/lerobot
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
modelscope/DiffSynth-Studio
Enjoy the magic of Diffusion models!
decodingml/llm-twin-course
🤖 Learn for free how to build an end-to-end production-ready LLM & RAG system using LLMOps best practices: ~ source code + 12 hands-on lessons
OpenTeleVision/TeleVision
[CoRL 2024] Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
mbreuss/diffusion-literature-for-robotics
Summary of key papers and blog posts for learning about diffusion models, plus a detailed list of all published diffusion robotics papers.
notmahi/dobb-e
Dobb·E: An open-source, general framework for learning household robotic manipulation
MarkFzp/humanplus
[CoRL 2024] HumanPlus: Humanoid Shadowing and Imitation from Humans
ok-robot/ok-robot
An open, modular framework for zero-shot, language-conditioned pick-and-drop tasks in arbitrary homes.
real-stanford/scalingup
[CoRL 2023] This repository contains data generation and training code for Scaling Up & Distilling Down
OpenGVLab/Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
CraftJarvis/MC-Planner
Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents"
Lifelong-Robot-Learning/LIBERO
Benchmarking Knowledge Transfer in Lifelong Robot Learning
nickgkan/3d_diffuser_actor
Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations"
GuanxingLu/ManiGaussian
[ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation
andvg3/Grasp-Anything
Dataset and Code for ICRA 2024 paper "Grasp-Anything: Large-scale Grasp Dataset from Foundation Models."
UMass-Foundation-Model/MultiPLY
Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World
YanjieZe/GNFactor
[CoRL 2023 Oral] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields
ToruOwO/hato
HATO: Learning Visuotactile Skills with Two Multifingered Hands
HybridRobotics/prompt2walk
Code for "Prompt a Robot to Walk with Large Language Models" (https://arxiv.org/abs/2309.09969)
graspnet/graspness_unofficial
Unofficial implementation of ICCV 2021 paper "Graspness Discovery in Clutters for Fast and Accurate Grasp Detection"
rail-berkeley/fmb
xiaoxiaoxh/UniFolding
[CoRL 2023] UniFolding: Towards Sample-efficient, Scalable, and Generalizable Robotic Garment Folding.
dkguo/PhyGrasp
Shengqiang-Zhang/LoHo-Ravens
Official code for the long-horizon language-conditioned robotic manipulation benchmark LoHoRavens.
Eric-nguyen1402/Language-driven-closed-loop-grasping