I’ve written a lot recently about the use of drones in some areas. From pizza delivery to search and rescue, the art of the possible has expanded. Back in November, I posted a recap of a conversation with a group of future IT professionals at a nearby Technical High School. I talk to the group about once a month. In our last meeting, the team had some questions about my assertion that drones could improve search and rescue.
“Drones only have 15-20 minutes of flight time. How can you effectively search 20 square miles with only 20 minutes per drone flight?” What a great question, and luckily I had the answer. First off, readily available consumer drones have far less battery capacity than the professional drones used for search and rescue. Commercial drones have flight times of between 30 and 45 minutes, depending on payload. Secondly, in my opinion, the critical success factor for using drones in search and rescue is humans reviewing the drone footage. This system would deploy drones in even-numbered sets. The video would be streamed back to a central system, and human beings would review the footage. The mix of humans and drones would also allow for very rapid redirection or redeployment of the drones. For example, if the infrared cameras found a heat source, or if searchers found a section of the area too hard to traverse on foot, the drones could be quickly redeployed to examine that area. The video would then be streamed back to the users reviewing it as part of a grid. The GPS location of each video feed would let the user see the video from above related to the map below.
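To put rough numbers behind that answer, here is a back-of-the-envelope sketch of how many battery cycles it would take to sweep a 20-square-mile area. Everything except the 30-45 minute commercial flight time is an assumption I've picked for illustration (camera swath width, cruise speed), not data from any real drone:

```python
import math

# Back-of-the-envelope coverage estimate. All parameters are assumptions
# for illustration, except the 30-45 minute commercial flight time.
SEARCH_AREA_SQ_MI = 20        # area from the students' question
SWATH_WIDTH_MI = 0.1          # assumed camera footprint width at search altitude
DRONE_SPEED_MPH = 25          # assumed cruise speed while scanning
FLIGHT_TIME_HR = 35 / 60      # mid-range of the 30-45 minute flight time

def flights_needed(area_sq_mi, swath_mi, speed_mph, flight_hr):
    """Estimate how many battery cycles it takes to sweep the area."""
    # Area swept in one flight = swath width * distance flown on one charge.
    area_per_flight = swath_mi * speed_mph * flight_hr
    return math.ceil(area_sq_mi / area_per_flight)

total = flights_needed(SEARCH_AREA_SQ_MI, SWATH_WIDTH_MI,
                       DRONE_SPEED_MPH, FLIGHT_TIME_HR)
print(f"Roughly {total} flights to sweep {SEARCH_AREA_SQ_MI} square miles")
```

Under these assumed numbers, the sweep works out to around 14 flights, so a set of several linked drones could cover the area in only a few battery swaps each. That is why the answer is multiple coordinated drones, not one drone flying longer.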
“How do you coordinate and link the video from three different drones into a cohesive output?” The process for this is much more complex, and yet it is available today. The system would have two distinct components as far as inputs and outputs. The drones would provide streaming video back to a central server system, including their current GPS information. The drones would fly in an established pattern using an electronic grid concept. The electronic grid would then allow users to select a section, or grid square, on the map and see the video for that square. As a grid square’s original flyover is reviewed, the square would turn red on the map. When a grid square is reflown or has new data, it would turn blue. The integration of maps and video data is already available today from Apple, Microsoft, and Google; simply look at the street view function within any of their maps. Live video integration isn’t much different from integrating the video and imagery from a satellite.
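The red/blue grid idea above is really just a small state machine over map squares. Here is a minimal sketch of it; the class, the coordinates, and the color names are my own illustration of the concept, not any real mapping API:

```python
# Review states for the "electronic grid": gray until searched, red once the
# original flyover has been reviewed, blue once the square has new footage.
UNSEARCHED, REVIEWED, REFLOWN = "gray", "red", "blue"

class SearchGrid:
    """Tracks the review state of each square in the search grid."""

    def __init__(self, rows, cols):
        # Every grid square starts unsearched.
        self.cells = {(r, c): UNSEARCHED
                      for r in range(rows) for c in range(cols)}

    def mark_reviewed(self, cell):
        # A reviewer has watched the original flyover: turn the square red.
        self.cells[cell] = REVIEWED

    def mark_reflown(self, cell):
        # The square was reflown and has new data: turn it blue.
        self.cells[cell] = REFLOWN

    def color(self, cell):
        return self.cells[cell]

grid = SearchGrid(4, 5)        # a 4x5 grid laid over the search area
grid.mark_reviewed((0, 0))     # reviewer finishes square (0, 0)
grid.mark_reflown((0, 0))      # a drone is redirected back over it
print(grid.color((0, 0)))      # -> blue
```

In a real system the GPS tag on each video frame would map the frame back to one of these squares, so clicking a square pulls up exactly the footage flown over it.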
“What about doing this for lost people in a forest?” That is the problem of the day, and there is no easy answer. In the winter, a video drone can fly just over the tops of the trees and see the ground. In the spring, summer, and fall, that won’t work. An infrared camera is possible, but again you are limited: tall trees mean you have to fly higher, and infrared works less well the further you are from what you are scanning. That said, you can still leverage the drone system to cover as much ground as possible. The last question was the one I couldn’t answer. I did leave the students with a thought, however: build a drone with anti-collision radar so that it could autonomously operate below the canopy of the forest. Linking the drones would prevent duplication of footage as they maneuvered around trees while searching the forest.
The question going forward would be how many drones we could deploy quickly to search for a missing person. I know many sheriff’s departments have one or even two drones. For such a system to be effective, you would most likely need 6-12 linked drones. That is a huge outlay of capital for many law enforcement agencies.