Processing Images into Tasks
ClayMav opened this issue
ClayMav commented
For each image in the queue, spin up four threads, and in each thread do each of the four things below:
- Possible collisions will be detected
  a. The assigned drone will be identified in the image, and the image will be cropped to a smaller section centered at the assigned drone
  b. An algorithm will be run to spot any potential hazards in this cropped image
  c. For the Overwatch drone, this step will not occur
- Task objects will be identified
  a. Task objects are relevant physical objects that the assigned drone needs to interact with (QR Drones: Bin/iPad/QR Code, Healing Drone: Human player, Overwatch Drone: the 3 friendly drones)
- Calculate the distance vector to the task object (the nearest one in the case of the QR Drones); see the vector sketch after this list
  a. Using a pixel-to-meter conversion, the north-south and east-west components are obtained
  b. Using the depth sensor on the RealSense, the up-down component is calculated
- Send the vector to the assigned drone; see the message sketch after this list
  a. On the drone, speed is calculated based on distance
  b. If the distance is large, the speed should be high; if the distance is small, the speed should be low
  c. In this message, additional information can be sent to the drones (e.g., if a command came in to heal the player, it would be forwarded to the Healing Drone)
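A minimal sketch of the distance-vector step, assuming a downward-facing camera with a fixed pixels-per-meter calibration, that image "up" corresponds to north, and that depth readings are available for both the drone and the task object (e.g. via `depth_frame.get_distance(x, y)` in pyrealsense2); the constant and function names here are placeholders, not existing code:

```python
# Assumed calibration constant; the real value would come from camera calibration.
PIXELS_PER_METER = 250.0


def distance_vector(drone_px, target_px, drone_depth_m, target_depth_m):
    """Return (north_south, east_west, up_down) in meters from the drone to the task object.

    drone_px and target_px are (x, y) pixel coordinates in the frame; the depth values
    would come from the RealSense depth frame (e.g. depth_frame.get_distance(x, y)
    with pyrealsense2). Positive up_down means the target is above the drone.
    """
    dx_px = target_px[0] - drone_px[0]
    dy_px = target_px[1] - drone_px[1]
    # Pixel-to-meter conversion gives the two horizontal components.
    east_west = dx_px / PIXELS_PER_METER
    north_south = -dy_px / PIXELS_PER_METER  # image y grows downward, north is "up" in the frame
    # The depth sensor gives the vertical component: larger depth = farther below the camera.
    up_down = drone_depth_m - target_depth_m
    return north_south, east_west, up_down
```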
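And a sketch of the message step, assuming a plain dictionary as the message format and a simple proportional, clamped speed rule on the drone; the field names and tuning values are placeholders:

```python
# Assumed tuning values for the drone-side speed rule; placeholders, not final numbers.
SPEED_GAIN = 0.5   # m/s of speed per meter of distance
MAX_SPEED = 2.0    # m/s cap
MIN_SPEED = 0.2    # m/s floor so the drone still moves on small corrections


def build_message(vector, extra=None):
    """Message sent to the assigned drone; `extra` carries things like a heal command."""
    north_south, east_west, up_down = vector
    msg = {"north_south": north_south, "east_west": east_west, "up_down": up_down}
    if extra:
        msg.update(extra)  # e.g. {"command": "heal"} forwarded to the Healing Drone
    return msg


def speed_for_distance(vector):
    """On the drone: scale speed with distance, clamped between the floor and the cap."""
    distance = (vector[0] ** 2 + vector[1] ** 2 + vector[2] ** 2) ** 0.5
    return max(MIN_SPEED, min(MAX_SPEED, SPEED_GAIN * distance))
```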
For the pull request addressing this issue, the assumption is that all of the threading and the structure for cropping the images will be in place, so that it is easy to implement the vision and the later calculations on top of it.
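A minimal sketch of that scaffolding, assuming one thread per drone (QR x2, Healing, Overwatch), an OpenCV/NumPy frame, and placeholder hooks for the vision steps to be implemented later; the names and crop size are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical drone identifiers; the real names/IDs would come from the project config.
DRONES = ["qr_drone_1", "qr_drone_2", "healing_drone", "overwatch_drone"]

CROP_HALF_SIZE = 200  # assumed crop half-width/height in pixels


def locate_drone(image, drone):
    """Placeholder for the vision step that finds the assigned drone's pixel position."""
    raise NotImplementedError("vision to be implemented later")


def detect_hazards(cropped):
    """Placeholder for the hazard/collision detection algorithm."""
    raise NotImplementedError("vision to be implemented later")


def crop_around_drone(image, center):
    """Crop a smaller section of the frame centered at the assigned drone."""
    x, y = center
    h, w = image.shape[:2]
    top, bottom = max(0, y - CROP_HALF_SIZE), min(h, y + CROP_HALF_SIZE)
    left, right = max(0, x - CROP_HALF_SIZE), min(w, x + CROP_HALF_SIZE)
    return image[top:bottom, left:right]


def process_for_drone(image, drone):
    """Runs in one thread: collision check (skipped for Overwatch), then the later steps."""
    if drone != "overwatch_drone":
        center = locate_drone(image, drone)
        cropped = crop_around_drone(image, center)
        detect_hazards(cropped)
    # ...identify task objects, compute the distance vector, send it to the drone


def process_image(image):
    """Spin up one worker per drone for each image pulled off the queue."""
    with ThreadPoolExecutor(max_workers=len(DRONES)) as pool:
        for drone in DRONES:
            pool.submit(process_for_drone, image, drone)
```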