
Deep Learning for Active Fire Detection Using Multi-Source Satellite Image Time Series

Time: Thu 2023-06-15 10.00

Location: E53, Osquarsbacke 18, Campus, video conference [MISSING]

Language: English

Subject area: Geodesy and Geoinformatics, Geoinformatics

Doctoral student: Yu Zhao, Geoinformatics

Opponent: Professor Cartalis Constantinos, National and Kapodistrian University of Athens

Supervisor: Professor Yifang Ban, Geoinformatics; Dr Andrea Nascetti, Geoinformatics; Associate Professor Josephine Sullivan, Robotics, Perception and Learning, RPL


QC 20230526

Abstract

In recent years, climate change and human activities have caused increasing numbers of wildfires. Earth observation data with various spatial and temporal resolutions have shown great potential in detecting and monitoring wildfires. The Advanced Baseline Imager (ABI) onboard NOAA's Geostationary Operational Environmental Satellites R Series (GOES-R) weather satellites can acquire images every 15 minutes at 2 km spatial resolution and has been used for early fire detection. The Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS), onboard sun-synchronous satellites, offer twice-daily revisits and are widely used in active fire detection. The VIIRS Active Fire product (VNP14IMG) has 375 m spatial resolution, and the MODIS Active Fire product (MCD14DL) has 1 km spatial resolution. While these products are very useful, the existing solutions have flaws, including many false alarms due to cloud cover or buildings with high-temperature roofs. Moreover, the multi-criteria threshold-based method leverages neither the rich temporal information of each pixel across timestamps nor the rich spatial information between neighbouring pixels. Therefore, advanced processing algorithms are needed to provide reliable detection of active fires.

The main objective of this thesis is to develop deep learning-based methods for improved active fire detection, utilizing multi-sensor earth observation images. The high temporal resolution of the above satellites makes temporal information more valuable than spatial resolution. Therefore, sequential deep learning models such as the Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and the Transformer are promising candidates for exploiting the temporal information encoded in the variation of the thermal band values. In this thesis, a GRU-based early fire detection method is proposed using GOES-R ABI time-series, which detects wildfires earlier than NASA's VIIRS active fire product. In addition, a Transformer-based method is proposed utilizing Suomi National Polar-orbiting Partnership (Suomi-NPP) VIIRS time-series, which achieves better active fire detection accuracy than the VIIRS active fire product.

The GRU-based GOES-R early detection method uses GOES-R ABI time-series composed of the normalized difference between Mid-Infrared (MIR) Band 7 and Long-wave Infrared Band 14, with Long-wave Infrared Band 15 used as the cloud mask. A 5-layer GRU network is proposed to process the time-series of each pixel and classify the active fire pixels at each time step. Over 36 study areas across North America and South America, the proposed method detects 26 wildfires earlier than the VIIRS active fire product. Moreover, the method mitigates the coarse resolution of GOES-R ABI images by upsampling, and the results show more reliable early-stage active fire locations and less noise compared to the GOES-R active fire product.
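As an illustration only (not the thesis implementation), the per-pixel pipeline described above — a normalized-difference index over the two thermal bands, fed to a multi-layer GRU that labels each time step — could be sketched as follows. The hidden size, the single-feature input, and the synthetic brightness values are assumptions for the sketch:

```python
import numpy as np
import torch
import torch.nn as nn

def normalized_difference(b7_mir, b14_lwir):
    """Normalized difference between ABI Band 7 (MIR) and Band 14 (LWIR).
    A fire raises MIR radiance faster than LWIR, pushing the index up."""
    return (b7_mir - b14_lwir) / (b7_mir + b14_lwir + 1e-8)

class PixelGRUClassifier(nn.Module):
    """Illustrative 5-layer GRU labelling each time step of one pixel's
    series as fire / non-fire (hidden size is a placeholder)."""
    def __init__(self, n_features=1, hidden=64, n_layers=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=n_layers,
                          batch_first=True)
        self.head = nn.Linear(hidden, 2)  # per-step fire / non-fire logits

    def forward(self, x):          # x: (batch, time, n_features)
        h, _ = self.gru(x)         # h: (batch, time, hidden)
        return self.head(h)        # logits: (batch, time, 2)

# One synthetic pixel series: 96 steps (~24 h of 15-minute ABI frames),
# with an ignition at step 60 where MIR jumps more than LWIR.
mir = np.full(96, 300.0);  mir[60:] = 330.0
lwir = np.full(96, 295.0); lwir[60:] = 300.0
index = normalized_difference(mir, lwir)

model = PixelGRUClassifier()
logits = model(torch.tensor(index, dtype=torch.float32).view(1, -1, 1))
print(logits.shape)  # torch.Size([1, 96, 2])
```

The per-step output is what allows an *early* detection: the first time step whose fire logit wins can be reported without waiting for the whole series.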

For active fire detection utilizing the VIIRS time-series, a Transformer-based solution is proposed. The VIIRS time-series images are tokenized into vectors of pixel time-series as input to the proposed Transformer model. The attention mechanism of the Transformer helps relate the observations of a pixel at different time steps. By detecting the variation of the pixel values, the proposed model classifies the pixel at each time step as an active fire pixel or a non-fire pixel. Tested over 18 study areas across different regions, the proposed method achieves a 0.804 F1-Score, outperforming NASA's VIIRS active fire product, which has a 0.663 F1-Score. The Transformer model also proves superior for active fire detection to other sequential models such as the GRU (0.647 F1-Score) and LSTM (0.756 F1-Score). Furthermore, both the F1 and IoU scores of all sequential models indicate that they perform much better than spatial ConvNet models, for example, UNet (0.609 F1-Score) and TransUNet (0.641 F1-Score).
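The tokenization step above — flattening an image time-series into one sequence per pixel so that self-attention operates over time — can be sketched as follows. This is a minimal illustration, not the thesis architecture: the band count, model width, head count, and layer count are all placeholders:

```python
import torch
import torch.nn as nn

class PixelSeriesTransformer(nn.Module):
    """Illustrative encoder: each pixel's band vector at each time step
    becomes one token; self-attention relates observations across time."""
    def __init__(self, n_bands=5, d_model=32, n_heads=4, n_layers=2,
                 max_len=20):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)  # band vector -> token
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)  # per-step fire / non-fire logits

    def forward(self, x):                        # x: (batch, time, n_bands)
        t = self.embed(x) + self.pos[:, : x.size(1)]
        return self.head(self.encoder(t))        # (batch, time, 2)

def tokenize(image_ts):
    """Flatten an image time-series (T, H, W, bands) into per-pixel
    sequences (H*W, T, bands), one sequence per spatial location."""
    T, H, W, B = image_ts.shape
    return image_ts.permute(1, 2, 0, 3).reshape(H * W, T, B)

cube = torch.randn(20, 8, 8, 5)    # 20 acquisitions, 8x8 tile, 5 bands
tokens = tokenize(cube)            # (64, 20, 5): 64 pixel sequences
out = PixelSeriesTransformer()(tokens)
print(out.shape)  # torch.Size([64, 20, 2])
```

Because each token attends to every other time step of the same pixel, the model can weigh a sudden thermal jump against the pixel's whole history, which is what the sequential models exploit and the purely spatial ConvNets cannot.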

Future research is planned to explore the potential of both optical and SAR satellite data, such as VIIRS, Sentinel-2, Landsat-8/9, Sentinel-1 C-band SAR, and ALOS Phased Array L-band Synthetic Aperture Radar (PALSAR), for daily wildfire progression mapping. Advanced deep learning models, for example the Swin Transformer and SwinUNETR, will also be investigated to improve multi-sensor exploitation.

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-327380