Pinned Repositories
3D-BoundingBox
PyTorch implementation for 3D Bounding Box Estimation Using Deep Learning and Geometry
3D-Deepbox
3D Bounding Box Estimation Using Deep Learning and Geometry (MultiBin)
abb
ROS-Industrial ABB support (http://wiki.ros.org/abb)
ApproxMVBB
Fast algorithms to compute an approximation of the minimal volume oriented bounding box of a point cloud in 3D.
ARIAC
Repository for ARIAC 2020, consisting of kit building in a simulated warehouse with a dual arm robot.
Pursuit-Evasion-Game-with-Deep-Reinforcement-Learning-in-an-environment-with-an-obstacle
This study addresses a multi-agent pursuit-evasion (chase-escape) problem using Deep Q-Learning. The actors are a smart evader and smart pursuers with opposing goals. At the start of the game the agents have homogeneous properties, and neither the evader nor the pursuers have any knowledge of the map. The pursuers aim to catch the evader as quickly as possible, while the evader aims to escape for as long as possible. Games like this, in which one player's gain is balanced by the other players' losses, are called zero-sum games. The end condition, which may differ depending on the approach, is in our study that any pursuer or the evader occupies the same or a neighboring pixel as an obstacle or the map border, or that a pursuer and the evader occupy the same or a neighboring pixel; in other words, an episode ends when the evader is caught by any pursuer, or when the evader or any pursuer hits an obstacle. A new episode starts after each collision or catch, so pursuit-evasion problems also fall into the class of repeated games. The question investigated here is what any pursuer or evader can do to improve its performance across repeated episodes of the game. The method used is Deep Reinforcement Learning: agents receive rewards or penalties based on their moves within an episode and feed this information back into a neural network.
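The learning loop described above (choose a move, receive a reward or penalty, update the learned values, repeat over episodes) can be sketched with tabular Q-learning on a toy one-dimensional chase. This is a simplification: the repository itself uses Deep Q-Learning, where a neural network replaces the table, and the grid size, rewards, and hyperparameters below are illustrative assumptions rather than values taken from the project.

```python
import random

# Toy problem: a pursuer starts at cell 0 on a 1-D strip and must reach
# the (stationary) evader at the last cell. A small step penalty rewards
# catching the evader quickly, mirroring the pursuer's goal in the study.
GRID = 10            # cells 0..9; evader fixed at cell 9 (assumption)
ACTIONS = [-1, +1]   # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # illustrative hyperparameters

# Q-table in place of the neural network used in the repository.
q = {(s, a): 0.0 for s in range(GRID) for a in range(len(ACTIONS))}

def step(s, a):
    """Apply an action; return (next_state, reward, episode_done)."""
    s2 = min(max(s + ACTIONS[a], 0), GRID - 1)
    if s2 == GRID - 1:        # pursuer reached the evader: catch ends the episode
        return s2, 1.0, True
    return s2, -0.01, False   # per-move penalty encourages a fast catch

random.seed(0)
for episode in range(200):    # each catch starts a new episode (repeated game)
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda x: q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r if done else r + GAMMA * max(q[(s2, x)] for x in range(len(ACTIONS)))
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s2

# After training, the greedy policy should step right (toward the evader)
# from every non-terminal state.
policy = [max(range(len(ACTIONS)), key=lambda x: q[(s, x)]) for s in range(GRID - 1)]
```

In the deep variant, the Q-table lookup and update are replaced by a forward pass and a gradient step on a neural network, which is what lets the agents generalize across the much larger state space of a 2-D map with an obstacle.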
Reinforcement-Learning-Tutorial
Sample reinforcement learning tutorial notebooks 🎉
MBaranPeker's Repositories
MBaranPeker/Pursuit-Evasion-Game-with-Deep-Reinforcement-Learning-in-an-environment-with-an-obstacle
A multi-agent pursuit-evasion problem solved with Deep Q-Learning: homogeneous smart pursuers try to catch a smart evader on a map with an obstacle, with neither side knowing the map in advance. An episode ends when the evader is caught or any agent hits an obstacle or the map border, making this a repeated zero-sum game; agents learn from per-move rewards and penalties via a neural network.
MBaranPeker/abb
ROS-Industrial ABB support (http://wiki.ros.org/abb)
MBaranPeker/ApproxMVBB
Fast algorithms to compute an approximation of the minimal volume oriented bounding box of a point cloud in 3D.
MBaranPeker/ARIAC
Repository for ARIAC 2020, consisting of kit building in a simulated warehouse with a dual arm robot.
MBaranPeker/ArUCo-Markers-Pose-Estimation-Generation-Python
Estimating pose using ArUCo Markers
MBaranPeker/Computer-Vision-and-Robotics-Paper-List
Computer Vision and Robot Vision
MBaranPeker/computervision-recipes
Best Practices, code samples, and documentation for Computer Vision.
MBaranPeker/darknet
YOLOv4 - Neural Networks for Object Detection (Windows and Linux version of Darknet)
MBaranPeker/darknet_ros
YOLO ROS: Real-Time Object Detection for ROS
MBaranPeker/Detectron2_ros
A ROS Node for detecting objects using Detectron2.
MBaranPeker/easy_handeye
Automated, hardware-independent Hand-Eye Calibration
MBaranPeker/fanuc_experimental-release
MBaranPeker/gb_visual_detection_3d
MBaranPeker/github-readme-stats
:zap: Dynamically generated stats for your github readmes
MBaranPeker/gpu-voxels
GPU-Voxels is a CUDA-based library that allows high-resolution volumetric collision detection between animated 3D models and live point clouds from 3D sensors of all kinds.
MBaranPeker/kuka-rsi-ros-interface
A ROS node for the manipulation of a KUKA robot arm via RSI 3
MBaranPeker/kuka_experimental
Experimental packages for KUKA manipulators within ROS-Industrial (http://wiki.ros.org/kuka_experimental)
MBaranPeker/Machine-Learning-for-Computer-Vision
This project experiments with various machine learning models on computer vision tasks such as image super-resolution, style transfer, and object detection.
MBaranPeker/moveit
:robot: The MoveIt motion planning framework
MBaranPeker/navigation.ros.org
https://navigation.ros.org/
MBaranPeker/nerfies.github.io
MBaranPeker/Online-3D-BPP-DRL
This repository contains the implementation of paper Online 3D Bin Packing with Constrained Deep Reinforcement Learning.
MBaranPeker/Open3D
Open3D: A Modern Library for 3D Data Processing
MBaranPeker/pick_ik
Inverse Kinematics solver for MoveIt
MBaranPeker/pytorch-YOLOv4
PyTorch, ONNX and TensorRT implementation of YOLOv4
MBaranPeker/ros
Core ROS packages
MBaranPeker/ros2_controllers
Generic robotic controllers to accompany ros2_control
MBaranPeker/yak_ros
Example ROS frontend node for the Yak TSDF package
MBaranPeker/YoloV4
YOLOv4 in PyTorch, TensorFlow and ONNX
MBaranPeker/zed-ros-wrapper
ROS wrapper for the ZED SDK