Pinned Repositories
alacarter.github.io
caption-guided-saliency
Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017)
deltaco
Training Code: [ICLR 2023] Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
Dynamic-Programming-Instances
Electric-Vehicle-with-Steering-gh
Nov. 2016 - Feb. 2017. PID controller.
Micromouse-gh
Maze-traveling autonomous vehicle with collision avoidance.
open_clip
An open-source implementation of CLIP.
robosuite
Surreal Robotics Suite: a standardized and accessible robot manipulation benchmark in physics simulation
roboverse-deltaco
Env Code: [ICLR 2023] Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
Semi-Automated-Robot-Arm-gh
Robot arm that moves according to a routine, with checkpoints for user-input fine positional adjustments
Alacarter's Repositories
Alacarter/deltaco
Training Code: [ICLR 2023] Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
Alacarter/roboverse-deltaco
Env Code: [ICLR 2023] Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
Alacarter/Electric-Vehicle-with-Steering-gh
Nov. 2016 - Feb. 2017. PID controller.
Alacarter/alacarter.github.io
Alacarter/caption-guided-saliency
Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017)
Alacarter/Dynamic-Programming-Instances
Alacarter/Micromouse-gh
Maze-traveling autonomous vehicle with collision avoidance.
Alacarter/open_clip
An open-source implementation of CLIP.
Alacarter/robosuite
Surreal Robotics Suite: a standardized and accessible robot manipulation benchmark in physics simulation
Alacarter/Semi-Automated-Robot-Arm-gh
Robot arm that moves according to a routine, with checkpoints for user-input fine positional adjustments
Alacarter/softlearning
Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains.
Alacarter/Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers," a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.