lukovicaleksa/autonomous-driving-turtlebot-with-reinforcement-learning

How to run this code?

kim-lux opened this issue · 14 comments

I am interested in this video and want to follow along, but I'm a beginner with ROS, so I can't get started with this Git project. When I run the project, the Gazebo simulation fails (the TurtleBot crashes).
I think this is because I just run each Python file on its own (e.g. `$ python scan_node.py` to run scan_node.py).
Can you explain how to run the whole project (like in the rqt graph)?
I think I need to use the `roslaunch` command, but I don't know how to use it.

Could you share the commands you use to start this simulation in Gazebo?

We have already studied the basics of the ROS architecture, and we run this project with this command:
rosrun master_rad control_node.py
With paths 0 and 2 we succeed, but the other paths fail.
When you worked on this project, did all the paths succeed?
Thank you for your response.
I am honored to study this project.

Sorry for the late reply. We run control_node.py via rosrun, but everything fails except paths 0 and 2, as pictured. I'm not sure what is going wrong. Is there another important command needed to run this program? We just use `rosrun master_rad control_node.py`. Please let us know if we are doing anything wrong. Thanks for the reply.
[screenshot: path 4]
[screenshot: path 1]

Yes, that's how you run the code; nothing else needs to be done. Just choose the path at the top of the .py file and it should work. Which versions of ROS and Linux are you using? By the look of your screen, I guess you are using a newer version of Linux and probably of ROS too. I used Ubuntu 16.04 and ROS Kinetic Kame in a virtual machine. Everything should work fine; the version of the code posted here is the final version that worked for me.
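The "choose the path at the top of the .py file" step described above might look something like this. This is only a minimal sketch: `PATHS`, `PATH_IND`, and `get_goals` are illustrative assumptions, not the repository's actual identifiers or coordinates.

```python
# Hypothetical sketch of a hard-coded path selector near the top of
# control_node.py. Edit PATH_IND, then rerun the node with:
#   rosrun master_rad control_node.py

# Candidate goal waypoint sequences, one list of (x, y) goals per path index.
PATHS = {
    0: [(0.5, 0.0), (1.0, 0.5)],
    1: [(0.0, 0.5), (-0.5, 1.0)],
    2: [(1.0, 0.0), (1.5, -0.5)],
}

PATH_IND = 0  # <-- edit this line to choose the path before running the node

def get_goals(path_ind):
    """Return the goal waypoints for the chosen path index."""
    if path_ind not in PATHS:
        raise ValueError("unknown path index: %d" % path_ind)
    return PATHS[path_ind]

goals = get_goals(PATH_IND)
```

The point is that the path is not a command-line argument: you change the constant in the file and run the node again.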

Thanks for your kind reply. I'll check it out and post back.

I ran it with Ubuntu 16.04 and Kinetic and still got the same result. Could running it in a VM be the problem? Thanks for the response.
[screenshot: path 4]

Then I'm afraid I don't know what the problem is from here. I will try to debug over the next few days when I get time. I hope to solve the problem and answer you as soon as possible.

Fixed everything: the scripts had a few bugs that I had left in while testing. I did one more learning phase and acquired a new Q-table, which is stored in the Log_learning_FINAL folder. Everything works fine now.
One note: beware of the feedback controller parameters and the goal distance and angle thresholds; reaching the goal depends a lot on these.
Please let me know if all paths work for you now, so I can close the issue.

Thanks for your hard work, but unfortunately I can't seem to get Q-learning to work properly with this code. The terminal says Q-learning is being applied, but there are many times when the direction change does not occur. Could you please let me know what the problem is? Now it crashes on all paths. Thank you always.

[screenshot]

In the meantime I have migrated my PC fully to Linux. Now I am using Ubuntu 20.04 with ROS Noetic.
PARAMETERS

  • /rosdistro: noetic
  • /rosversion: 1.15.11
Please try with these versions, and if it still does not work, I really don't have any idea why. Are you sure you downloaded the latest code?

Finally succeeded, all thanks to you. It worked once I matched the versions. Thank you very much.

You're welcome! I'm glad it works now.