ROS Labs

LAB 3

PART 1

PREREQUISITES: Download and compile the 5 DOF planar robot packages.

sudo apt-get install ros-noetic-ros-control ros-noetic-ros-controllers
sudo apt-get install ros-noetic-gazebo-ros-pkgs ros-noetic-gazebo-ros-control
git clone https://github.com/arebgun/dynamixel_motor
git clone https://github.com/fenixkz/ros_snake_robot.git

After each download, rebuild and source the workspace:

catkin_make
source ~/CATKIN_WORKSPACE/devel/setup.bash 

To launch gazebo:

roslaunch gazebo_robot gazebo.launch  

To see available ROS Topics:

rostopic list 

TASK: Create a ROS node that “listens” for std_msgs/Float64 data and “publishes” it to a joint of the planar robot. The node should send the move command only if the new incoming value is lower than the previous one.
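A minimal sketch of such a node, assuming the topic names below (check rostopic list for the actual controller topics):

#!/usr/bin/env python
# Hypothetical listener node: republishes incoming Float64 values to a joint
# command topic, but only when the new value is lower than the previous one.
import rospy
from std_msgs.msg import Float64

class JointRelay:
    def __init__(self):
        self.prev = None  # last value received so far
        # Assumed topic names -- adjust to the planar robot's controllers.
        self.pub = rospy.Publisher('/snake/joint1_position_controller/command',
                                   Float64, queue_size=10)
        rospy.Subscriber('/joint_input', Float64, self.callback)

    def callback(self, msg):
        # Forward the command only if it is lower than the previous value.
        if self.prev is None or msg.data < self.prev:
            self.pub.publish(msg)
        self.prev = msg.data

if __name__ == '__main__':
    rospy.init_node('joint_relay')
    JointRelay()
    rospy.spin()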

Joint Movement of Planar Robot

new.mov

PART 2

TASK: Get the step response of (you can create a node that sends a square-wave function; a sketch of such a signal generator follows this list):

  1. the joint at the base of the robot
lab3_part2_base_step.mov

base_step

  2. the joint at the end-effector of the robot
lab3_part2_end_step.mov

end_step

Get the sine-wave response of (you can create a node that sends a sine-wave function; see the generator sketch after this list):
  3. the joint at the base of the robot

lab3_part2_base_sin.mov

base_sin

  4. the joint at the end-effector of the robot
lab3_part2_end_sin.mov

end_sin
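A sketch of a signal-generator node covering both the square-wave and sine-wave experiments. The topic name, amplitude, and the private ~wave parameter are assumptions:

#!/usr/bin/env python
# Hypothetical signal generator for the step/sine response experiments.
import math
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node('signal_generator')
    # Point this at the base or end-effector joint controller as needed.
    pub = rospy.Publisher('/snake/joint1_position_controller/command',
                          Float64, queue_size=10)
    wave = rospy.get_param('~wave', 'square')  # 'square' or 'sine'
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        t = rospy.get_time()
        if wave == 'square':
            value = 1.0 if math.sin(t) >= 0.0 else -1.0  # square wave
        else:
            value = math.sin(t)                          # sine wave
        pub.publish(Float64(value))
        rate.sleep()

if __name__ == '__main__':
    main()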

LAB 4

TASK:

  1. Configure the MoveIt library

My MoveIt package is called "lab4".

  2. Create a node that moves the “end” by 1.4 (in RViz units, mm or m) along the X axis

The source file is located in scripts/src/test.cpp; run it with:

rosrun scripts test_test
x.mov
  3. Create a node that moves the “end” to draw a rectangle. The source file is located in scripts/src/test_rectangle.cpp; run it with:
rosrun scripts test_rect
Untitled.mov
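Both nodes are implemented in C++ (scripts/src/test.cpp and scripts/src/test_rectangle.cpp). A rough Python equivalent of the X-axis motion, using moveit_commander with an assumed group name "arm", might look like:

#!/usr/bin/env python
# Hypothetical moveit_commander sketch: shift the current end-effector pose
# by 1.4 along X and execute the motion.
import sys
import rospy
import moveit_commander

rospy.init_node('move_along_x')
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander('arm')  # assumed group name

pose = group.get_current_pose().pose
pose.position.x += 1.4        # 1.4 (RViz units) along the X axis
group.set_pose_target(pose)
group.go(wait=True)           # plan and execute
group.stop()
group.clear_pose_targets()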

LAB 5

TASK: Using rosbag, record the joint angles and the position of the end-effector along the x- and y-axes.
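A possible recording setup (the end-effector topic name is an assumption; check rostopic list for the actual one):

rosbag record -O lab5.bag /joint_states /end_effector_position

The recorded bag can then be exported to csv, e.g.:

rostopic echo -b lab5.bag -p /joint_states > salem.csv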

/scripts/src/salem.csv

Plots of the recorded joint angles and the end-effector trajectory.

LAB 6

TASK: Obtain Forward Kinematics without the robot model

The dataset is located in scripts/src/dict1.csv; it was generated by scripts/src/dataset.py (a hypothetical sketch of such a generator follows).
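A sketch of what such a generator might do. The topic and frame names, the angle range, and the settling delay are all assumptions:

#!/usr/bin/env python
# Hypothetical dataset generator: command random joint angles, read back the
# end-effector position via tf, and append (angles, position) rows to a csv.
import csv
import random
import rospy
import tf
from std_msgs.msg import Float64

rospy.init_node('dataset_generator')
pubs = [rospy.Publisher('/snake/joint%d_position_controller/command' % i,
                        Float64, queue_size=10) for i in range(1, 6)]
listener = tf.TransformListener()

with open('dict1.csv', 'w') as f:
    writer = csv.writer(f)
    for _ in range(10000):  # dataset size reported in the results below
        angles = [random.uniform(-1.5, 1.5) for _ in range(5)]
        for pub, a in zip(pubs, angles):
            pub.publish(Float64(a))
        rospy.sleep(2.0)  # let the robot settle before sampling the pose
        listener.waitForTransform('base_link', 'end', rospy.Time(0),
                                  rospy.Duration(4.0))
        trans, _ = listener.lookupTransform('base_link', 'end', rospy.Time(0))
        writer.writerow([tuple(angles), list(trans)])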

Importing libraries.

import numpy as np
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import pandas as pd
from sklearn.model_selection import train_test_split

Reading the generated csv file.

def main():
    data = pd.read_csv("/home/zhamilya/catkin_ws_zhamilya/dict1.csv", header = None, names = ["Angles", "XY"])
    print(data.head(10))

Screenshot from 2021-11-25 06-04-22

Splitting into train and test sets (note that test_size=0.80 reserves 80% of the samples for testing, leaving 20% for training).

    # Column "Angles" holds strings like "(a1, a2, a3, a4, a5)";
    # column "XY" holds bracketed, space-separated coordinate strings.
    train = data['Angles'].to_numpy()
    labels = data['XY'].to_numpy()

    X = list()
    Y = list()
    for i in range(len(train)):
        # Collapse the variable-width spacing inside the bracketed string.
        labels[i] = labels[i].replace('     ', ' ')
        labels[i] = labels[i].replace('   ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].strip('[ ').strip(' ]')
        train[i] = train[i].strip('(').strip(')')
        # Joint angles are the network inputs...
        result = [float(val) for val in train[i].split(',')]
        X.append(result)
        # ...and the end-effector coordinates are the targets.
        result = [float(val) for val in labels[i].split(' ')]
        Y.append(result)

    X_train, X_test, y_train, y_test = train_test_split(np.asarray(X), np.asarray(Y), test_size=0.80)
    print("TRAIN X SHAPE ", np.shape(X_train))
    print("TRAIN Y SHAPE ", np.shape(y_train))
    print("TEST X SHAPE ", np.shape(X_test))
    print("TEST Y SHAPE ", np.shape(y_test))

Screenshot from 2021-11-27 18-23-17

Loss function: Root Mean Squared Error.

def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

Model

    model = Sequential()
    model.add(Dense(10, input_dim = 5, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))
    print(model.summary())

Screenshot from 2021-11-25 06-13-26

    model.fit(X_train, y_train, epochs = 15)
    scores = model.evaluate(X_test, y_test, verbose=0) 
    print("RMSE: %.2f" % (scores))

RMSE: 0.10

CHANGING THE LOSSES

  1. Mean Squared Logarithmic Error
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))

mean_squared_logarithmic_error 0.0005
  2. Mean Absolute Error
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_absolute_error', optimizer=keras.optimizers.Adam(0.01))

mean_absolute_error 0.0331
  3. Mean Squared Error
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam(0.01))

mean_squared_error 0.0036
  4. Root Mean Squared Error
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))
rmse 0.0508

Mean Squared Logarithmic Error showed the best results.

CHANGING THE NUMBER OF LAYERS

Number of layers -> Mean Squared Logarithmic Error:
2 ------------> 0.000359
3 ------------> 0.088592
4 ------------> 0.088045
5 ------------> 0.088894
6 ------------> 0.000551
7 ------------> 0.000601

2 layers showed the best results.

CHANGING THE ACTIVATION FUNCTIONS

  1. Tanh
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.000190
  2. Sigmoid
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'sigmoid'))
    model.add(Dense(16, activation = 'sigmoid'))
    model.add(Dense(16, activation = 'sigmoid'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.017334
  3. Linear
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'linear'))
    model.add(Dense(16, activation = 'linear'))
    model.add(Dense(16, activation = 'linear'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.002832
  4. Softmax
    model = Sequential()
    model.add(Dense(10, input_dim =5, activation = 'softmax'))
    model.add(Dense(16, activation = 'softmax'))
    model.add(Dense(16, activation = 'softmax'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.019299

Tanh showed the best results.

RESULTS

Screenshot from 2021-11-27 18-19-16

Mean Squared Logarithmic Error: 0.000190

Dataset Size -> 10000
Number of Hidden Layers -> 2
Optimizer -> Adam
Activation Function -> Tanh
Loss -> Mean Squared Logarithmic Error
Epochs -> 15
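Combining the reported best settings (2 hidden layers, tanh, MSLE, Adam, 15 epochs; the 10/16 layer widths follow the earlier snippets), the final Lab 6 model would look roughly like:

    model = Sequential()
    model.add(Dense(10, input_dim=5, activation='tanh'))
    model.add(Dense(16, activation='tanh'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
    model.fit(X_train, y_train, epochs=15)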

LAB 7

TASK: Obtain Inverse Kinematics without the robot model

Importing libraries.

import numpy as np
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import pandas as pd
from sklearn.model_selection import train_test_split

Reading the generated csv file.

def main():
    data = pd.read_csv("/home/zhamilya/catkin_ws_zhamilya/dict1.csv", header = None, names = ["Angles", "XY"])
    print(data.head(10))

Screenshot from 2021-11-25 06-04-22

Splitting into train and test sets. The parsing mirrors Lab 6, but with inputs and targets swapped: the end-effector coordinates become the inputs and the joint angles the targets.

    train = data['Angles'].to_numpy()
    labels = data['XY'].to_numpy()

    X = list()
    Y = list()
    for i in range(len(train)):
        labels[i] = labels[i].replace('     ', ' ')
        labels[i] = labels[i].replace('   ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].strip('[ ').strip(' ]')
        train[i] = train[i].strip('(').strip(')')
        result = [float(val) for val in train[i].split(',')]
        Y.append(result)  # joint angles are now the targets
        result = [float(val) for val in labels[i].split(' ')]
        X.append(result)  # end-effector coordinates are now the inputs

    X_train, X_test, y_train, y_test = train_test_split(np.asarray(X), np.asarray(Y), test_size=0.80)
    print("TRAIN X SHAPE ", np.shape(X_train))
    print("TRAIN Y SHAPE ", np.shape(y_train))
    print("TEST X SHAPE ", np.shape(X_test))
    print("TEST Y SHAPE ", np.shape(y_test))

Screenshot from 2021-11-25 07-09-55

Model (this reuses the rmse loss defined in Lab 6).

    model = Sequential()
    model.add(Dense(10, input_dim =3, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(5, activation='linear'))
    model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))

Model Fitting

    model.fit(X_train, y_train, epochs = 200)
    scores = model.evaluate(X_test, y_test, verbose=0) 
    print("RMSE: %.2f" % (scores))
0.259730

CHANGE THE LOSS FUNCTION TO MSLE

Model

    model = Sequential()
    model.add(Dense(10, input_dim =3, activation = 'relu'))
    model.add(Dense(16, activation = 'relu'))
    model.add(Dense(5, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.01889

INCREASE THE NUMBER OF LAYERS

Number of layers -> Mean Squared Logarithmic Error:

2 ------------> 0.022235
3 ------------> 0.015986
4 ------------> 0.022250
5 ------------> 0.015907
6 ------------> 0.015879
7 ------------> 0.018899
8 ------------> 0.018779
9 ------------> 0.018794
10 -----------> 0.018883

5 hidden layers showed the best results (0.015907).

CHANGE THE ACTIVATION FUNCTION TO TANH

    model = Sequential()
    model.add(Dense(10, input_dim =3, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))
    model.add(Dense(16, activation = 'tanh'))

    model.add(Dense(5, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.015924

tanh_lr_0.001

Figure_tanh_15_lrdefault

CHANGE THE NUMBER OF EPOCHS TO 200

0.015749

Screenshot from 2021-11-27 19-16-20

epoch200

RESULTS

Mean Squared Logarithmic Error: 0.015749

Dataset Size -> 10000
Number of Hidden Layers -> 5
Optimizer -> Adam
Activation Function -> Tanh
Loss -> Mean Squared Logarithmic Error
Epochs -> 200
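Combining the reported best settings for Lab 7 (5 hidden layers, tanh, MSLE, Adam, 200 epochs; the 16-unit hidden width is an assumption based on the snippets above), the final model would look roughly like:

    model = Sequential()
    model.add(Dense(10, input_dim=3, activation='tanh'))
    for _ in range(4):  # four more 16-unit layers -> 5 hidden layers in total
        model.add(Dense(16, activation='tanh'))
    model.add(Dense(5, activation='linear'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
    model.fit(X_train, y_train, epochs=200)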