goodrobots/vision_landing

vision_landing doesn't work on Raspberry Pi

OscJD opened this issue · 27 comments

OscJD commented

Hi, I finished the installation, but what is the next step? I execute the vision_landing app but I don't receive anything. Apparently there's no problem with the connection, because when I choose a wrong device the app receives no answer. I am using an RPi with a raspicam and a Pixhawk.
I want to see images from the camera to verify that vision_landing is working. How can I do that?

fnoop commented

Hi, as advised in email, vision_landing has not been tested yet with Raspberry Pi and will not work. There are known issues with slow computers, as they incur high latency and a slow update rate when providing the target vectors to Ardupilot PrecLand. Ardupilot is not yet equipped to deal with this and can (almost certainly will) go out of control.
I'll keep this issue open to track getting vision_landing working on Raspberry, which is a good goal to aim for.

fnoop commented

track_targets dies and restarts repeatedly #81

fnoop commented

vision_landing and track_targets now working correctly on raspberry. Next step is to add calibration data for raspicam v2, test visual tracking, and to fix latency handling in arducopter.

fnoop commented

Added calibration data for raspicam v2 in #84 , tested visual tracking works on marker board.

fnoop commented

Added timesync to vision_landing to sync between raspberry (or any other companion) and ardupilot, to correlate camera frames with inertial frames. Initial work done on arducopter precland to consume timestamps and correlate frames.
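For anyone curious what that sync looks like on the wire, below is a minimal sketch of the mavlink TIMESYNC handshake using pymavlink. The connection string and timing details are placeholders of mine; this is an illustration of the protocol, not the actual vision_landing code.

```python
# Minimal sketch of a mavlink TIMESYNC exchange (pymavlink), illustrating
# how a companion computer can estimate the clock offset to the autopilot.
# Connection string is a placeholder; not the vision_landing implementation.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyAMA0', baud=921600)
master.wait_heartbeat()

ts1 = int(time.time() * 1e9)        # local timestamp, nanoseconds
master.mav.timesync_send(0, ts1)    # tc1=0 marks this as a request

msg = master.recv_match(type='TIMESYNC', blocking=True, timeout=2)
if msg is not None and msg.ts1 == ts1:
    now = int(time.time() * 1e9)
    # The autopilot stamped tc1 roughly at the midpoint of the round trip:
    offset_ns = msg.tc1 - (ts1 + now) // 2
    print("round trip %.1f ms, clock offset %.1f ms"
          % ((now - ts1) / 1e6, offset_ns / 1e6))
```

With that offset, a camera frame captured at local time t can be stamped in the autopilot's timebase as t + offset and matched against the corresponding inertial frame.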

fobdy commented

What FPS is to be expected on a Raspberry Pi 3? We get only ~4 fps at 720p and ~8 fps at 480p with track_targets.
I think that's very low for landing.

fnoop commented

Hi @fobdy, work is ongoing to get the raspberry working for precision landing. I'm getting about 15 fps at 480p, but there's still more performance to be had. Theoretically precland should work right down to 1 fps (albeit very sub-optimally), but the real problem is latency. Still working on that; hope to get some flight tests going within the next few days.

fobdy commented

@fnoop Thank you for your feedback! And some other important questions :)
What companion computer did you use for your precision landing initially? Was it successful and durable? Do you recommend the Odroid XU4 instead of the Raspberry Pi 3? Will it be more robust and safer? Or maybe the Intel Joule?

fnoop commented

Yes, I used the Intel Joule for most of the development, because it is so fast. Initially I used the Raspberry and Odroid XU4 but had very poor (dangerous!) results with both, which turned out to be because of processing latency. The arducopter precland is hardcoded for a tiny latency (20ms) to work with the IRLock system, so it performs very erratically with companion computer systems. I am working on a set of patches currently to fix this issue; hopefully they will be submitted as a PR within the next week or so, depending on testing.
The faster the computer and camera combination, the better the result you will get with precision landing. This is because as you get closer to the target, if there is any wind or poor tuning, the faster framerate and lower latency allow the computer/flight controller combination to compensate faster, keep the target in frame and calculate the necessary attitude corrections. If you have high latency and/or a low framerate, it is very difficult for the system to cope with any movement.
I don't recommend the XU4 at all - I had very poor experience with it, but others have better experience. The Joule gave an excellent result, but then Intel went and cancelled it :(.
My focus currently is to get precision landing working with a Raspberry, because it is so cheap and widely available. Also the raspberry camera is surprisingly good quality, is very fast because of the direct CSI connection, and is very well supported in software/firmware. So the results are actually very good indeed, apart from the slow processing.
Once I have it working with the raspberry, I'll finally do a first release of vision_landing (and take a long holiday..).
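To put rough numbers on the latency point above (the 20ms IRLock figure is from the comment; the speeds and pipeline latency are my own illustration):

```python
# Back-of-the-envelope: how far the target appears to move between the
# moment a frame is captured and the moment the correction takes effect.
# 20ms is the IRLock latency arducopter assumes; the other numbers are
# illustrative, not measurements from this thread.
def drift_cm(lateral_speed_ms, latency_ms):
    return lateral_speed_ms * (latency_ms / 1000.0) * 100.0

print(drift_cm(0.5, 20))    # IRLock-class latency: 1.0 cm of drift
print(drift_cm(0.5, 300))   # slow companion pipeline: 15.0 cm of drift,
                            # enough to lose the target from frame up close
```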

This is awesome.

I am working on a set of patches currently to fix this issue; hopefully they will be submitted as a PR within the next week or so, depending on testing.
👍

My focus currently is to get precision landing working with a Raspberry...the raspberry camera is surprisingly good quality, is very fast because of the direct CSI connection, and is very well supported in software/firmware
👍

go @fnoop !!

fobdy commented

@fnoop Thanks for the expanded answer. Good luck, we are looking forward to it :)
Also we've started playing with maverick.

fobdy commented

We've noticed that CPU consumption is only about 30% of a single core when track_targets is running.

fnoop commented

@fobdy artoolkit looks interesting. It would be interesting to try apriltag and other libraries as well. Aruco is not fast; a lot of the processing delay seems to occur in the aruco tag matching routines, but that may just be an inherent limit of the task.
But the first task is just to get it running!
track_targets will basically take as much CPU as it can get. If you have other processes running then they will take CPU away from track_targets. vision_landing itself takes a reasonable amount of CPU - this is not the vision routines but the underlying dronekit, which is quite cpu intensive. Again, there are possibilities to optimise this in the future.
On the Joule I had track_targets running about 380-390% on 4 cores, haven't really looked at consumption on the raspberry yet.
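If you want to see where that CPU is going yourself, here is a quick sketch, assuming psutil is installed (the process-name lookup is illustrative, not part of vision_landing):

```python
# Sketch: measure track_targets CPU the way top does. Assumes psutil is
# installed and a process named track_targets is running.
import psutil

proc = next(p for p in psutil.process_iter(['name'])
            if p.info['name'] == 'track_targets')

# cpu_percent() sums across cores, so a well-threaded process on a
# quad-core machine can read close to 400% (as on the Joule above),
# while a single-threaded one caps near 100% whatever the core count.
print(proc.cpu_percent(interval=1.0))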

fobdy commented

Also there is the hypothetical option to try artoolkit with RTAndroid OS (or Android on an Odroid), due to some artoolkit Android ARM optimizations (I don't know if they apply to linux). But as of now RTAndroid is transforming into a commercial version (emteria.os).

According to these 3-year-old benchmarks, Aruco is quite fast. But only some old artoolkit+ appears in the comparison.

The reason I noticed artoolkit is AR.js, which claims to be very fast on mobile devices ;) (with the js artoolkit emscripten version).

fnoop commented

The thing is, aruco is very well integrated into opencv - it essentially comes as an opencv library. Opencv-contrib itself even has an older implementation of aruco built in. And the track_targets code wasn't written with portability to other toolkits in mind. So any effort to port it to a different toolkit would have to be really worth it performance-wise, otherwise it's not worth the trouble. This is not to say it's not worth it, but that you'd have to really want the extra performance.

Another consideration is that vision_landing is intended to be quite generic/scalable. It works with SITL in a VM, it works on an intel computer, ARM computer, anywhere that python and opencv works basically. I don't think it's worth chasing a few % optimisation to lose that (eg. tying it to a particular OS).

Getting it to work on a raspberry with minimal changes is the first step and a good milestone. If (and when, hopefully!) it works, then optimising it can be looked at.

andrea-nisti commented

Hello everyone! I was reading your conversation and, since I am implementing precision landing (PX4) on a Raspberry Pi 3, I would like to ask if there is any news or improvement on this setup. I have the RaspiCam V2 and I am trying to use only track_targets. I will implement the landing logic myself, but at this point I am trying to achieve visual estimation. What do you suggest in my case? Shall I use track_targets.cpp alone, or the python class that calls it?

Thanks a lot

fnoop commented

Hi @andrea-nisti Do you mean you're going to add px4 firmware support for precision landing? (#76)
At the moment the vision_landing python wrapper is ardupilot specific, but as soon as someone adds px4 support I'll adapt it.

andrea-nisti commented

Maybe in the future I will add px4 support; I am using a px4 vehicle now, where I send position commands through a MavlinkRos interface. What I need at this point is the target position estimation. That is why I will try to use the track_targets program and extract the marker pose. In the future, yes, I will integrate it into px4.

andrea-nisti commented

Alright, so basically the Tvec member stores the pose of the marker with respect to the camera frame, right? Moreover, how do you suggest launching the application? I have a Raspberry Pi 3 with Ubuntu and the camera v2. Is the interface with the camera and image extraction already taken into account inside track_targets?

fnoop commented

Yes, the Tvec is the translation vector of the pose. You'll either have to adapt vision_landing or come up with your own launcher; it depends on how you're talking to the firmware (dronecore, dronekit?). The camera should be accessed automatically through v4l2 /dev/video0.
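In case it helps, here's a minimal sketch of grabbing a frame from /dev/video0 and pulling out the Tvec with the opencv-contrib aruco bindings (track_targets uses the standalone aruco library, so this only approximates its flow; the calibration values are placeholders, take the real ones from your .yml file):

```python
# Sketch: grab a frame and extract the marker Tvec with opencv-contrib's
# aruco module. Calibration values are placeholders; use the ones from
# your calibration .yml. API details vary slightly between OpenCV versions.
import cv2
import numpy as np

camera_matrix = np.array([[500.0, 0.0, 320.0],
                          [0.0, 500.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_size = 0.235  # metres, must match the printed marker

cap = cv2.VideoCapture(0)  # /dev/video0 via v4l2
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)

ok, frame = cap.read()
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_size, camera_matrix, dist_coeffs)
    print("Tvec (marker position in camera frame, metres):", tvecs[0][0])
```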

andrea-nisti commented

I am talking to the firmware through mavros. What data is shared between track_targets and the firmware? Time syncing? My idea is to run the tracker standalone and see the performance; when the relative pose is calculated, I have a program that generates the right commands for the robot.

fnoop commented

Time sync is pretty crucial; if you're writing your own launcher then you'll have to deal with that. For arducopter, the wrapper takes the output from the tracker and sends it through dronekit as a mavlink landing_target message. The tracker also calculates the distance from the Tvec vector norm and outputs the distance measurement, so you don't need a separate rangefinder.

All this will be very different if you're controlling px4 directly through position commands. It will be very interesting to see how you get on!
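Roughly, what the wrapper does with each tracker result looks like this (a pymavlink sketch of mine; the angle math and frame choice are simplified assumptions, and the real vision_landing code handles frames and timestamps more carefully):

```python
# Sketch: turn a tracker pose into a LANDING_TARGET message, conceptually
# like the vision_landing wrapper does. Not the exact production code.
import math
import numpy as np
from pymavlink import mavutil

def send_landing_target(master, tvec, time_usec):
    x, y, z = tvec                          # marker in camera frame (metres)
    distance = float(np.linalg.norm(tvec))  # Tvec norm doubles as rangefinder
    angle_x = math.atan2(x, z)              # angular offset, radians
    angle_y = math.atan2(y, z)
    master.mav.landing_target_send(
        time_usec,          # timestamp synced to the autopilot clock
        0,                  # target_num
        mavutil.mavlink.MAV_FRAME_BODY_NED,
        angle_x, angle_y, distance,
        0.0, 0.0)           # size_x, size_y (unused here)
```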


andrea-nisti commented

Nice, thanks!

Well, I developed an independent piece of software that generates the right position setpoint in order to land on a moving, floating platform. It works under perfect localization (motion capture), and now I am experimenting to find a way to estimate relative poses. This tracker could be one solution (a very nice one if it works!) and I can combine a rangefinder too. If I have good results I may write a px4 driver for precision landing, but we'll see. Can you point me to where specifically time syncing is involved? Is it for the image processing delay?

TSC21 commented

@andrea-nisti you probably should follow this - PX4/PX4-Autopilot#8160

andrea-nisti commented

Thanks, I will give it a look and come back as soon as I have something working ;)

andrea-nisti commented

Hello, sorry, I had some work to do. In the meanwhile I was able to compile the package with aruco on the raspberry, cross-compile opencv, and launch track_targets. It runs at 6-12 Hz on average and it detects targets.

I launch it using:

./track_targets --verbose -d TAG16h5 /dev/video0 ../../calibration/raspicamv2-calibration-640x480.yml 0.235

I am using the ar16h5 marker set with the 4 tags all together, but I have some questions.

  • In the launch command I need to specify the marker width, but which one, since there are 4 markers printed?

  • In the case of the 4-tag figure, will the pose provided be the marker pose or the center of the board?

Thanks in advance