WillMandil001
PhD student in robotics with the University of Cambridge and University of Lincoln CDT for Agri-Food Robotics. Feel free to contact me at willmandil@yahoo.co.uk
WillMandil001's Stars
OpenInterpreter/open-interpreter
A natural language interface for computers
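As a rough illustration of how this tool is driven from Python, a minimal sketch assuming the `interpreter.chat()` entry point described in the project's README; the import path has changed between releases, so treat the names as assumptions:

```python
# Sketch only: entry-point names follow the Open Interpreter README and may
# differ between versions (older releases used `import interpreter` instead).
from interpreter import interpreter

# Ask for a task in plain English; the tool generates code and, after user
# confirmation, executes it on the local machine.
interpreter.chat("List the five largest files in the current directory")
```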
Vision-CAIR/MiniGPT-4
Open-source code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
karpathy/llama2.c
Inference Llama 2 in one file of pure C
lukemelas/EfficientNet-PyTorch
A PyTorch implementation of EfficientNet
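The package exposes pretrained models through an `EfficientNet` class; a small sketch of the usual pattern (the variant name and tensor shapes below are illustrative):

```python
import torch
from efficientnet_pytorch import EfficientNet

# Load ImageNet-pretrained weights for the smallest (B0) variant.
model = EfficientNet.from_pretrained("efficientnet-b0")
model.eval()

# Classify a dummy 224x224 RGB batch.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)                      # (1, 1000) ImageNet class scores
    features = model.extract_features(x)   # convolutional features, handy as a backbone
```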
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL), the chat and pretrained large vision-language models proposed by Alibaba Cloud.
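A hedged sketch of running the chat model through Hugging Face Transformers; the helper names (`from_list_format`, `chat`) come from the repo's custom remote code as described in its README and may change between releases:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is required because Qwen-VL ships its own modelling/tokenizer code.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# Build a multimodal query: an image (path or URL) plus a text question.
query = tokenizer.from_list_format([
    {"image": "demo.jpeg"},  # placeholder image path
    {"text": "What is shown in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```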
google-research/scenic
Scenic: A Jax Library for Computer Vision Research and Beyond
NVIDIA-Omniverse/IsaacGymEnvs
Isaac Gym Reinforcement Learning Environments
google-deepmind/mujoco_menagerie
A collection of high-quality models for the MuJoCo physics engine, curated by Google DeepMind.
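The menagerie models are plain MJCF files, so they load directly with the official `mujoco` Python bindings; a minimal sketch, assuming the repository is cloned locally and the Panda model sits at the path shown:

```python
import mujoco

# Path assumes a local clone of google-deepmind/mujoco_menagerie.
model = mujoco.MjModel.from_xml_path("mujoco_menagerie/franka_emika_panda/panda.xml")
data = mujoco.MjData(model)

# Step the passive dynamics for one second of simulated time.
while data.time < 1.0:
    mujoco.mj_step(model, data)

print("final joint positions:", data.qpos)
```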
google-research/robotics_transformer
octo-models/octo
Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.
mistralai/megablocks-public
NVIDIA-Omniverse/OmniIsaacGymEnvs
Reinforcement Learning Environments for Omniverse Isaac Gym
graspnet/graspnet-baseline
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
clvrai/furniture
IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks
NVlabs/contact_graspnet
Efficient 6-DoF Grasp Generation in Cluttered Scenes
jconorgrogan/CLARKGPT
The ultimate LLM prompt: extract the best possible answers with the highest fidelity and lowest error rates
kyegomez/RT-X
PyTorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment: Robotic Learning Datasets and RT-X Models"
jsll/pytorch_6dof-graspnet
ac-93/tactile_gym
Suite of PyBullet reinforcement learning environments targeted towards using tactile data as the main form of observation.
JeanElsner/panda-py
Python bindings for real-time control of Franka Emika robots.
XiYe20/VPTR
The repository for the paper "VPTR: Efficient Transformers for Video Prediction"
Farama-Foundation/gym-examples
Example code for the Gym documentation
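The repo accompanies the Gym/Gymnasium documentation; the core interaction loop it illustrates looks like the sketch below, shown here with a standard environment rather than the repo's custom ones:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)

for _ in range(1000):
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:         # episode ended: start a new one
        obs, info = env.reset()

env.close()
```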
ISosnovik/sesn
Code for "Scale-Equivariant Steerable Networks"
vincentmllr/deep-robot-grasping
Teaching a Franka Emika Panda robot to grasp objects using deep reinforcement learning
yich7045/Visuo-Tactile-Transformers-for-Manipulation
Code for Visuo-Tactile Transformers for Manipulation
NMS05/Multimodal-Fusion-with-Attention-Bottlenecks
applied-ai-lab/ramp
A codebase for RAMP: A Benchmark for Evaluating Robotic Assembly Manipulation and Planning
nerovalerius/registration_3d
Point cloud registration of two Intel D435i 3D cameras using the Iterative Closest Point (ICP) algorithm.
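As an illustration of the ICP step only (not necessarily the stack this repository uses), a minimal Open3D sketch with placeholder point-cloud paths:

```python
import numpy as np
import open3d as o3d

# Placeholder clouds exported from the two D435i cameras.
source = o3d.io.read_point_cloud("cam1.ply")
target = o3d.io.read_point_cloud("cam2.ply")

threshold = 0.02  # max correspondence distance (metres)
init = np.eye(4)  # initial guess for the rigid transform

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("fitness:", result.fitness)
print("estimated transform:\n", result.transformation)
```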
imanlab/action_conditioned_tactile_prediction
emlynw/rl_franka