This project aims to automatically identify nascent wildfires in live imagery using neural networks.
Currently, fire lookout stations are deployed at hundreds of sites around the western US. Each station includes a pan/tilt/zoom camera with near-infrared capability. All stations continuously stream high-resolution imagery at several frames per minute to centralized servers for human monitoring and analysis.
Lookout stations are extremely expensive to deploy because their high power and bandwidth consumption requires a reliable energy source and a communications link, so they are typically co-located with existing communications infrastructure. Remote sites that require dedicated solar power harvesting, battery storage, and radio antennas are 3-5 times more expensive, and are therefore installed only at a limited number of critical locations. This leaves many rural areas without reliable systems for identifying and reporting early-stage fires, resulting in slower emergency response and larger overall burns.
The goal of this project is to develop an AI algorithm capable of operating on low-power embedded systems, allowing lookout stations to autonomously decide when to upload data and when to conserve their resources. If successful, this approach could dramatically reduce the bandwidth and power consumed by each station, lowering the cost of installing and operating individual nodes and enabling deployment to a much larger number of monitoring sites.
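To illustrate the idea, here is a minimal sketch of what the on-device decision logic might look like. This is purely hypothetical: the names (`should_upload`, `run_station`), the thresholding scheme, and the persistence requirement are all assumptions, not part of the project's actual design. The sketch assumes a lightweight classifier has already scored each frame for smoke, and the station uploads only when the score stays high for several consecutive frames, suppressing single-frame noise like glare.

```python
# Hypothetical sketch of an on-device upload decision loop.
# Assumes an upstream classifier produces a smoke probability per frame;
# all names and thresholds here are illustrative placeholders.

from collections import deque

UPLOAD_THRESHOLD = 0.7   # minimum smoke probability to consider uploading
CONSECUTIVE_HITS = 3     # require persistence to suppress one-frame noise


def should_upload(scores, threshold=UPLOAD_THRESHOLD, hits=CONSECUTIVE_HITS):
    """Return True if the last `hits` scores all exceed `threshold`."""
    recent = list(scores)[-hits:]
    return len(recent) == hits and all(s > threshold for s in recent)


def run_station(frame_scores):
    """Simulate the station's duty cycle, deciding per incoming frame."""
    window = deque(maxlen=CONSECUTIVE_HITS)
    decisions = []
    for score in frame_scores:
        window.append(score)
        decisions.append(should_upload(window))
    return decisions


# A transient spike (e.g. lens glare at 0.9) is ignored; only a
# sustained run of high scores triggers an upload.
print(run_station([0.1, 0.9, 0.2, 0.8, 0.85, 0.9]))
# → [False, False, False, False, False, True]
```

The persistence requirement is one plausible way to trade detection latency for bandwidth: the station stays silent through isolated false positives and only wakes its radio when evidence accumulates.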
Data handling is described in DATA.md.
This documentation is a work in progress; more will be added here as the project develops.