gnn

Primary Language: Python

Documentation

This repository is used for processing GNN-based scene graphs generated from images.

In the current setup, a complete trial involves the robot navigating while taking pictures of the environment. Once navigation is done, the images are sent to the model (hosted on Google Cloud) to extract relations (in the form of CSV files). All CSV files are then loaded into the SQL database and passed to the reasoner.
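For orientation, here is a minimal sketch of reading one of those relation CSVs with pandas. The column names (subject, predicate, object, score) are my assumptions, not confirmed by this repository:

import pandas as pd

# Hypothetical per-image relation CSV; column names are assumed.
df = pd.read_csv("image_0001.csv")
for _, row in df.iterrows():
    print(row["subject"], row["predicate"], row["object"], row["score"])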

Dependencies

pandas, numpy, torch, torchvision, tkinter

I suggest installing these libraries in a virtual environment.
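For example (the environment name is arbitrary; note that tkinter usually is not installed via pip but ships with Python or a system package such as python3-tk):

$ python3 -m venv ~/gnn_venv
$ source ~/gnn_venv/bin/activate
$ pip install pandas numpy torch torchvision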

How to run the system?

To run the system, first make sure the Segway-based robot is operating and properly localized. The Segway-based repository instructions are here.

Preprocessing parameters update

Then, make sure that the parameters in

~/catkin_ws/src/gnn/launch/main.launch 

are all up to date, including the imaging frequency and the map name.
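For reference, ROS launch files set such values with <param> (or <arg>) tags; the parameter names below are hypothetical, so check main.launch for the real ones:

<launch>
  <!-- hypothetical parameter names; see the actual main.launch -->
  <param name="imaging_frequency" value="0.2"/>
  <param name="map_name" value="your_map"/>
</launch>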

Branch check

Also make sure that the bwi_common repository is on the gnn branch, and run

$ roscd bwi_kr_execution
$ git stash 

to make sure that any previous updates to the facts are removed.

Camera check

Also, make sure the camera is installed and working properly. To test it, run:

$ roslaunch astra_camera astrapro.launch

Then, in a new terminal, check that the depth and RGB streams are publishing:

$ rostopic echo /camera/depth/image_raw
$ rostopic echo /camera/rgb/image_raw

If the RGB topic does not output anything, let me know.
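If you prefer a scripted check over eyeballing rostopic echo, here is a minimal rospy sketch (my own, not part of the repo) that logs each incoming RGB frame:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image

def callback(msg):
    # Log size and encoding of every frame arriving on the RGB topic.
    rospy.loginfo("got %dx%d frame (%s)", msg.width, msg.height, msg.encoding)

rospy.init_node("camera_check")
rospy.Subscriber("/camera/rgb/image_raw", Image, callback)
rospy.spin()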

Taking images and navigation

Now, in a new terminal, run:

$ roslaunch gnn main.launch

Then let the robot navigate, either autonomously or using the joystick:

$ roslaunch bwi_joystick_teleop joystick_teleop.launch

Once you are done navigating, shut down the gnn main.launch process.

Postprocessing parameters update

Then, make sure that the parameters in

~/catkin_ws/src/gnn/src/config.py 

are all up to date.
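As a hypothetical illustration of what to check (the variable names here are assumptions; open config.py for the real ones):

# config.py (illustrative names only)
TRIAL_NAME = "2020-1-10-18-25"            # folder written during navigation
INPUT_DIR = "input_images/" + TRIAL_NAME
RESULTS_DIR = "results/" + TRIAL_NAME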

Relation extraction (Google Cloud)

Now it is time to send the images to Google Cloud. To do that, make sure the Google Cloud Virtual Machine (VM) is turned on (ask me for the ID and password). Once it is on, open a new terminal and SSH into the VM:

$ ssh_gcp    # assumes an ssh_gcp alias is defined in .bashrc

and make sure the data and results folders are empty (to avoid mixing in results from previous runs).
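Assuming the folder layout used in the scp commands below, clearing them would look like:

$ rm -f ~/KERN/data/input_images/*   # In the VM
$ rm -f ~/KERN/results/*.csv         # In the VM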

Next, scp the images to the VM. An example (run from the local machine) would be:

$ roscd gnn/src/input_images/2020-1-10-18-25                    # In local machine
$ gcloud compute scp *.jpg saeid@instance-2:~/KERN/data/input_images   # In local machine

Once the image transfer is done, run the following commands:

$ cd KERN/scripts/            # In Google Cloud VM terminal
$ sh testone_sgdet.sh         # In Google Cloud VM terminal

Once the relation extraction is done, scp all the outputs back to the local results folder. An example would be:

$ roscd gnn/src/results/2020-1-10-18-25/         # In local machine
$ gcloud compute scp saeid@instance-2:~/KERN/results/*.csv .  # In local machine
  • REMEMBER TO SHUT DOWN THE GOOGLE CLOUD VM, OR YOU WILL KEEP BEING CHARGED.
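Stopping the instance from your local machine looks like this (you may need a --zone flag depending on your gcloud configuration):

$ gcloud compute instances stop instance-2   # In local machine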

Now, it's time to create the database. Just run:

$ roscd gnn/src 
$ python db.py
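In case db.py needs adapting, this step amounts to loading every relation CSV into one SQL table. A hedged sketch, with assumed table and column names:

import glob
import sqlite3
import pandas as pd

# Collect every relation CSV of the trial into a single SQLite table.
conn = sqlite3.connect("relations.db")
for path in glob.glob("results/2020-1-10-18-25/*.csv"):
    df = pd.read_csv(path)
    df["source_image"] = path  # remember which image each relation came from
    df.to_sql("relations", conn, if_exists="append", index=False)
conn.close()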

Now, to run the reasoning, go to the reasoning folder and run:

$ roscd gnn/src/reasoning
$ python sql_to_asp.py
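Conceptually, this step turns SQL rows into ASP facts for the reasoner. A minimal sketch, assuming the table and a relation/3 predicate format (both are assumptions):

import sqlite3

conn = sqlite3.connect("relations.db")
with open("facts.asp", "w") as f:
    # One ASP fact per stored relation, e.g. relation(chair,near,table).
    for subj, pred, obj in conn.execute(
            "SELECT subject, predicate, object FROM relations"):
        f.write("relation({},{},{}).\n".format(subj, pred, obj))
conn.close()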

Visualization of single image

Make sure the libraries imported in gui.py are installed in your virtual environment. Then run:

$ roscd gnn/src/
$ python gui.py --image [image_name] --csvfile [csv file of the output]
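For reference, the command above implies gui.py parses its flags roughly like this (a guess at the interface, not the actual source):

import argparse

parser = argparse.ArgumentParser(description="Overlay extracted relations on an image")
parser.add_argument("--image", required=True, help="image to display")
parser.add_argument("--csvfile", required=True, help="relation CSV for that image")
args = parser.parse_args()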

Visualization of TA area

Create a folder called 'data' in the gnn folder and then download the ta_area