AeroRust/WorkingGroup

Project: Drone demo

elpiel opened this issue · 1 comment

Non-Critical Aeronautics

Drones

AeroRust Drones

Current work

Many of you might already know that we are currently working on Parrot drones and building a Rust SDK called arsdk-rs that lets you connect to, control, and use other features of the drone from Rust.

Proposal

With this in mind, I propose creating a demo project that integrates different components into one system, showing what is currently possible with Rust and open source.

Social part

The other aspect of the project is the opportunity to connect and form bonds with other communities, such as the Rust-ML (machine learning) WG and Rust-CV (Computer Vision).

  • Rust-ML: Looking for PoC project using Video stream from a Drone - rust-ml/discussion#7
  • Rust-CV: What could be built with? (TODO)

Tech stack

Many components are already in place, and we can integrate them to build this project.

Components:

Legend:

  • ✔️ - we already have this component
  • 🛠️ - work in progress
  • ❓ - ideas are welcome

โœ”๏ธ Sphinx Simulation provided by Parrot

๐Ÿ› ๏ธ Arsdk-rs - Rust SDK for sending/receiving commands (i.e. controlling the drone)

โ“ Computer Vision

Rust-ML (machine learning) WG or Rust-CV (Computer Vision) components that we can integrate with the drone.

๐Ÿ› ๏ธ VR - o0Ignition0o/airsim-rs#6

This could be useful for monitoring, or perhaps even controlling, the drone.

๐Ÿ› ๏ธ โ“ Microsoft AirSim - https://github.com/o0Ignition0o/airsim-rs

Check the project https://github.com/microsoft/AirSim for more information.

o0Ignition0o/airsim-rs
Simulation integration for much more complete and advanced simulations.

Objectives

  • An integrated project that serves as a portfolio piece and showcases the Rust community to other communities and domains.
  • Talks surrounding the project, for better exposure.
  • Workshops - for exposure, finding sponsors, and FUN 🎉

@elpiel asked me to write up what capabilities we have today at Rust CV. You can see that now in our goals section.

I think the simplest way you can use Rust CV today to do something interesting with the drone is the technique found here: https://github.com/rust-cv/vslam-sandbox/blob/0a0bd760ceee2da38f0626a8a8678b9e98a657e1/src/main.rs. This code performs some very rudimentary indirect visual odometry. We can make it a bit more robust by allowing it to fall back to older frames in case of failure. This will allow you to approximate how much the drone has moved and rotated on each frame.
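To illustrate what you do with the per-frame output of visual odometry, here is a simplified sketch (not the vslam-sandbox code, which works with full 3D rotations and translations): each frame yields a relative motion estimate in the drone's body frame, and chaining those estimates gives a running pose. The 2D pose type and names below are illustrative.

```rust
// Simplified 2D sketch of chaining per-frame visual odometry
// estimates into a running pose. The real pipeline uses full 3D
// transforms; (x, y, heading) keeps the idea readable.
#[derive(Debug, Clone, Copy)]
struct Pose2 {
    x: f64,
    y: f64,
    theta: f64, // heading in radians
}

impl Pose2 {
    /// Compose this pose with a relative motion expressed in the
    /// current body frame (what indirect VO estimates per frame).
    fn compose(self, rel: Pose2) -> Pose2 {
        let (s, c) = self.theta.sin_cos();
        Pose2 {
            x: self.x + c * rel.x - s * rel.y,
            y: self.y + s * rel.x + c * rel.y,
            theta: self.theta + rel.theta,
        }
    }
}

fn main() {
    // Start at the origin, then apply two per-frame estimates:
    // fly 1 m forward while turning 90°, then 1 m forward again.
    let mut pose = Pose2 { x: 0.0, y: 0.0, theta: 0.0 };
    pose = pose.compose(Pose2 { x: 1.0, y: 0.0, theta: std::f64::consts::FRAC_PI_2 });
    pose = pose.compose(Pose2 { x: 1.0, y: 0.0, theta: 0.0 });
    println!("x={:.2} y={:.2} theta={:.2}", pose.x, pose.y, pose.theta);
}
```

The drawback this makes visible is drift: each relative estimate carries error, and composition accumulates it, which is why falling back to older frames (or holding a snapshot, as below) helps.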

Since the drone probably already has basic equipment onboard to detect motion, this could be used in a different way. You could use a reference image of a particular object with some features on it, such as a large sheet of paper (or, since this is a simulation, just a flat rendering of an image). The drone could then constantly estimate its pose relative to this object. Alternatively, you could trigger the drone to hold steady, and it would remember a snapshot from that location. From that point on, it could hold steady at that location, since it can figure out how its position has changed based on the current frame and that snapshot. This would also require a human to have a trigger to tell the drone "hold position". This would provide interesting value: the onboard sensors (accelerometers and gyroscopes) can't tell you absolute position, but computer vision can, so long as objects in the scene do not move (and even if they do, that likely won't throw it off unless the change is large).
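The "hold position" idea can be sketched as a control loop: estimate the pose offset between the snapshot and the current frame, then command velocities that oppose the drift. The function below is a hypothetical proportional controller, not part of any drone SDK; the gain and the normalized ±1.0 command range are illustrative assumptions.

```rust
// Hypothetical proportional controller for the "hold position"
// concept (not from any drone SDK). Inputs are the drift (in
// metres) between the snapshot pose and the current pose, as
// estimated by computer vision; outputs are velocity commands.
fn hold_position_command(dx: f64, dy: f64, dz: f64, gain: f64) -> (f64, f64, f64) {
    // Command velocities opposing the measured drift, clamped to
    // ±1.0 (assuming normalized stick-style inputs in that range).
    let clamp = |v: f64| v.max(-1.0).min(1.0);
    (clamp(-gain * dx), clamp(-gain * dy), clamp(-gain * dz))
}

fn main() {
    // Drone drifted 0.5 m right and 0.2 m up relative to the snapshot.
    let (vx, vy, vz) = hold_position_command(0.5, 0.0, 0.2, 0.8);
    println!("vx={vx} vy={vy} vz={vz}");
}
```

A real implementation would run this every frame and likely add integral/derivative terms, but the sketch shows why only a snapshot and the current frame are needed: all the feedback comes from the camera.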

There are also other interesting things you could do by using other algorithms, but I think that this is the simplest way to integrate computer vision into this simulation today. The easiest of the above solutions is likely the "hold in place" concept, since no data needs to be prepared in advance, and all the information comes just from the drone camera.

Let me know if you are interested in me writing up a demo of this capability, and I can do that right away.