
ShakingBot: Dynamic Manipulation for Bagging

Gu Ningquan, Zhang Zhizhong

Wuhan Textile University, Wuhan City, Hubei Province, China

Bag manipulation with robots is complex and challenging due to the deformability of the bag. Based on a dynamic manipulation strategy, we propose ShakingBot for bagging tasks. Our approach uses a perception module to identify the key regions of a plastic bag from arbitrary initial configurations. Guided by this segmentation, ShakingBot iteratively executes a set of actions, including Bag Adjustment, Dual-arm Shaking, and One-arm Holding, to open the bag. It then inserts the items and lifts the bag for transport. We run our method on a dual-arm robot and achieve a success rate of 21/33 for inserting at least one item across a variety of initial bag configurations. In this work, we demonstrate the advantage of dynamic shaking actions over quasi-static manipulation in the bagging task. We also show that our method generalizes to variations in the bag's size, pattern, and color.

This repository contains code for training and evaluating ShakingBot in both simulation and real-world settings on a dual-UR5 robot arm setup, targeting Ubuntu 18.04. It has been tested on machines with an Nvidia GeForce RTX 2080 Ti.

Table of Contents

  • Data Collection
  • Network Training

Data Collection

Kinect V2-based data collection code

KinectV2

Explanation

  • The folder get_rgb_depth is used to capture the raw data from the Kinect V2; the RGB image size is 1920×1080 and the depth image size is 512×424.
  • Since the RGB and depth images have different resolutions, they need to be aligned. The folder colorized_depth is used to align them.
  • final_datasets is used to process the images to the target resolution and to generate the .png and .npy files; a sketch of loading these outputs follows this list.
  • all_tools includes the hardware toolkit that MATLAB needs to connect to the Kinect V2.
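
For a quick sanity check, the generated outputs can be loaded as below (a minimal sketch: the file names and array layout are illustrative assumptions, not the repo's documented format):

    import numpy as np
    from PIL import Image

    # Load one processed RGB/depth pair produced by the pipeline above
    # (paths are hypothetical examples).
    rgb = np.array(Image.open("final_datasets/000001.png"))   # RGB image
    depth = np.load("final_datasets/000001.npy")              # depth array

    print(rgb.shape, depth.shape, depth.dtype)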

Installation

Requirements

  • Ubuntu 18.04
  • MATLAB R2020a
  • Kinect V2

Show Datasets

raw_rgb (1920×1080): [raw_rgb image]

raw_depth (512×424): [raw_depth image]

align_rgb (512×424): [align_rgb image]

Get Datasets

  1. Preprocess the images collected from the Kinect V2

    The paint color can be changed to match the pattern color of your bag; the value used here corresponds to the paint on one of our bags.

    cd get_datasets
    python get_rgb_npy.py
    

processed_rgb (243×255): [processed_rgb image]
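
For reference, the core of this preprocessing step might look like the following (a hedged sketch: the input paths are assumptions, 243×255 is read as width×height, and get_rgb_npy.py may differ in detail):

    import cv2
    import numpy as np

    # Resize one aligned RGB/depth pair to the target resolution.
    # cv2.resize takes a (width, height) tuple.
    rgb = cv2.imread("align_rgb/000001.png")
    depth = np.load("align_depth/000001.npy")

    rgb_small = cv2.resize(rgb, (243, 255))
    depth_small = cv2.resize(depth, (243, 255), interpolation=cv2.INTER_NEAREST)

    cv2.imwrite("final_datasets/000001.png", rgb_small)
    np.save("final_datasets/000001.npy", depth_small)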

  2. Generate the dataset labels

    python color_dectect.py
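
The labeling is color-based. The gist of such a script is sketched below (the HSV range is an assumption for a blue paint and must be tuned to your bag's paint color; color_dectect.py may differ in detail):

    import cv2
    import numpy as np

    img = cv2.imread("final_datasets/000001.png")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Example HSV range for blue paint (an assumption; tune per bag).
    lower = np.array([100, 80, 80])
    upper = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)   # 255 where the paint is detected

    # Save a binary label mask alongside the image (path is hypothetical).
    np.save("final_datasets/000001_label.npy", (mask > 0).astype(np.uint8))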
    

Network Training

Installation and Code Usage

  1. Make a new virtualenv or conda env. For example, if you're using conda envs, run this to make and then activate the environment:
    conda create -n shakingbot python=3.6 -y
    conda activate shakingbot
    
  2. Run pip install -r requirements.txt to install dependencies.
    cd network_training
    pip install -r requirements.txt
    

Train Region Perception Model

  1. In the repo's root, prepare the RGB images and depth maps

  2. In the configs folder, modify segmentation.json (an illustrative example follows this list)

  3. Train the Region Perception model

    python train.py
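
For orientation, segmentation.json might contain fields along these lines (a hedged sketch: none of these keys are documented here, so every name and value is an assumption to check against the actual config):

    {
        "dataset_dir": "final_datasets",
        "image_size": [243, 255],
        "batch_size": 8,
        "learning_rate": 0.0001,
        "epochs": 100,
        "log_dir": "train_runs"
    }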
    

Evaluate Region Perception Model

  1. In the repo's root, download the model weights

  2. Then visualize the model's predictions with

    python visualize.py
    
  3. Training details can be viewed in the logs with TensorBoard

    cd network_training
    tensorboard --logdir train_runs/
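
For a quick visual check of the predictions from step 2, one can overlay a predicted region mask on the input image (a minimal sketch; the file names are hypothetical and visualize.py may render things differently):

    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    rgb = np.array(Image.open("final_datasets/000001.png"))
    mask = np.load("predictions/000001_mask.npy")   # predicted binary region mask

    plt.imshow(rgb)
    plt.imshow(mask, alpha=0.4, cmap="Reds")        # semi-transparent overlay
    plt.axis("off")
    plt.show()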