
Deep Learning for Wildfire Progression Monitoring Using SAR and Optical Satellite Image Time Series

Time: Tue 2021-06-15 14.30

Location: Video link: https://kth-se.zoom.us/j/65483992232. If you do not have a computer or computer experience, or need technical assistance, contact yifang@kth.se. Stockholm (English)

Subject area: Geodesy and Geoinformatics, Geoinformatics

Doctoral student: Puzhao Zhang, Geoinformatics

Opponent: Professor Lorenzo Bruzzone, University of Trento

Supervisor: Professor Yifang Ban, Geoinformatics


Abstract

Wildfires have shaped the Earth's surface and climate for more than 350 million years and have long coexisted with human societies. Across the globe, because of climate change and human activities, wildfires are becoming larger, more frequent, and longer in duration, and tend to be more destructive, both in lives lost and in economic costs. To reduce the damage from such destructive wildfires, it is critical to track wildfire progression in near real time, or even in real time. Satellite remote sensing enables cost-effective, accurate, and timely monitoring of wildfire progression over vast geographic areas. The free availability of globally covering Landsat-8 and Sentinel-1/2 data opens a new era for global land surface monitoring and provides an opportunity to analyze wildfire impacts around the globe. Advances in both cloud computing and deep learning enable automatic interpretation of spatio-temporal remote sensing big data at large scale.

The overall objective of this thesis is to investigate the potential of modern medium-resolution Earth observation data, especially Sentinel-1 C-band synthetic aperture radar (SAR) data, for wildfire monitoring, and to develop operational and effective approaches for real-world applications. The thesis systematically analyzes the physical basis of Earth observation data for wildfire applications and critically reviews existing burned area mapping methods based on SAR, optical, and SAR-optical fused data. Owing to its power in learning useful representations, deep learning is adopted as the main tool for extracting wildfire-induced changes from SAR and optical image time series. On a regional scale, this thesis conducts the following four fundamental studies, which may pave the way toward larger-scale or even global wildfire monitoring applications.

To avoid manual selection of temporal indices and to highlight wildfire-induced changes in burned areas, we proposed an implicit radar convolutional burn index (RCBI), with which we assessed the roles of Sentinel-1 C-band SAR intensity and phase in SAR-based burned area mapping. The experimental results show that RCBI is more effective than the conventional log-ratio differencing approach in detecting burned areas. Though VV intensity alone may perform poorly, its accuracy improves significantly when phase information is integrated through interferometric SAR (InSAR). VV intensity also shows potential to improve VH intensity-based detection with RCBI. By exploiting VH and VV intensity together, the proposed RCBI achieved overall mapping accuracies of 94.68% on the 2017 Thomas Fire and 94.17% on the 2018 Carr Fire.
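As a point of reference for the log-ratio baseline mentioned above, the sketch below (not part of the thesis) illustrates how log-ratio differencing of pre- and post-fire SAR intensity is typically computed and thresholded into a change map; the function names and the simple global threshold are illustrative assumptions, not the RCBI itself.

```python
import numpy as np

def log_ratio(pre_fire: np.ndarray, post_fire: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log-ratio change image (in dB) from pre- and post-fire SAR intensity on a linear scale."""
    return 10.0 * np.log10((post_fire + eps) / (pre_fire + eps))

def threshold_change(change_db: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Binary change map: flag pixels deviating more than k standard deviations from the scene mean."""
    mu, sigma = change_db.mean(), change_db.std()
    return np.abs(change_db - mu) > k * sigma
```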

For near real-time applications, we investigated and demonstrated the potential of Sentinel-1 SAR time series for wildfire progression monitoring with convolutional neural networks (CNNs). In this study, the available pre-fire SAR time series were exploited to compute the temporal average and standard deviation, characterizing SAR backscatter behavior over time and highlighting changes with kMap. Trained with binarized kMap time series in a progression-wise manner, the CNN showed good capability in detecting burned areas and capturing temporal progression, as demonstrated on three large and impactful wildfires under varied topographic conditions. Compared to the pseudo masks (binarized kMap), the CNN-based framework brought a 0.18 improvement in F1 score on the 2018 Camp Fire and 0.23 on the 2019 Chuckegg Creek Fire. The experimental results demonstrate that spaceborne SAR time series combined with deep learning can play a significant role in near real-time wildfire monitoring as data become available at daily or even hourly intervals.
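As a rough illustration of how pre-fire temporal statistics can be turned into a pseudo training mask, the sketch below computes the temporal mean and standard deviation of a pre-fire SAR stack and scores each new acquisition by its deviation from that behavior; the z-score formulation, threshold, and function names are simplifying assumptions for illustration and are not necessarily the kMap computation used in the thesis.

```python
import numpy as np

def prefire_statistics(prefire_stack: np.ndarray):
    """Temporal mean and standard deviation over a pre-fire SAR stack of shape (T, H, W)."""
    return prefire_stack.mean(axis=0), prefire_stack.std(axis=0)

def change_score(new_image: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 eps: float = 1e-6) -> np.ndarray:
    """Per-pixel deviation of a new acquisition from pre-fire backscatter behavior (z-score style)."""
    return np.abs(new_image - mean) / (std + eps)

def pseudo_mask(score: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Binarize the change score into a pseudo burned-area mask usable for CNN training."""
    return (score > threshold).astype(np.uint8)
```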

For continuous wildfire progression mapping, we proposed a novel framework for learning U-Net without forgetting in a near real-time manner. By imposing a temporal consistency constraint on the network response, Learning without Forgetting (LwF) allows the U-Net to learn new capabilities for handling newly incoming data while retaining the capabilities learned before. Unlike continuous joint training (CJT) with all available historical data, LwF removes the dependence of U-Net training on historical data. To improve the quality of the SAR-based pseudo progression masks, we accumulated the burned areas detected from optical data acquired prior to the SAR observations. The experimental results demonstrate that LwF can match CJT in terms of agreement between SAR-based results and optical-based ground truth, achieving an F1 score of 0.8423 on the Sydney Fire (2019-2020) and 0.7807 on the Chuckegg Creek Fire (2019). We also found that the SAR cross-polarization ratio (VH/VV) can be very useful for highlighting burned areas when VH and VV show diverse temporal change behaviors.
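The following is a minimal sketch of a Learning-without-Forgetting style loss for binary segmentation, assuming a PyTorch U-Net: supervision on the new pseudo mask is combined with a distillation term that keeps the updated network's response close to that of a frozen copy of the previous network (the temporal consistency idea). The weighting, temperature, and function names are assumptions for illustration, not the exact formulation in the thesis.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits: torch.Tensor,
             old_logits: torch.Tensor,
             pseudo_mask: torch.Tensor,
             lambda_old: float = 1.0,
             temperature: float = 2.0) -> torch.Tensor:
    """LwF-style loss for binary segmentation.

    new_logits  : output of the U-Net being updated on the latest acquisition
    old_logits  : output of a frozen copy of the previous U-Net on the same batch
    pseudo_mask : binarized SAR-based progression mask for the new acquisition
    """
    # Supervision on the new pseudo mask (learning the new capability).
    new_task = F.binary_cross_entropy_with_logits(new_logits, pseudo_mask.float())

    # Distillation term: the updated network should keep responding like the
    # previous one, with probabilities softened by a temperature.
    old_soft = torch.sigmoid(old_logits.detach() / temperature)
    new_soft = torch.sigmoid(new_logits / temperature)
    distill = F.binary_cross_entropy(new_soft, old_soft)

    return new_task + lambda_old * distill
```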

Because SAR-based change detection often suffers from variability in the surrounding background, we proposed a Total Variation (TV)-regularized U-Net model to mitigate the influence of noisy SAR-based masks. Considering the small amount of labeled wildfire data, transfer learning was adopted to fine-tune the U-Net from weights pre-trained on past wildfire data. We quantified the effect of TV regularization on increasing the connectivity of SAR-based burned areas and found that the TV-regularized U-Net can significantly increase burned area mapping accuracy, bringing an improvement of 0.0338 in F1 score and 0.0386 in IoU on the validation set. With TV regularization, the U-Net trained with noisy SAR masks achieved its highest F1 (0.6904) and IoU (0.5295), while the U-Net trained with optical reference masks achieved its highest F1 (0.7529) and IoU (0.6054) without TV regularization. When applied to wildfire progression mapping, the TV-regularized U-Net also performed significantly better than the vanilla U-Net under the supervision of noisy SAR-based masks, producing results visually comparable to those based on optical masks.
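A minimal sketch of how a total variation penalty can be added to a segmentation loss is shown below, assuming a PyTorch setup with logits of shape (N, 1, H, W); the anisotropic TV formulation and the weight lambda_tv are assumptions chosen for illustration rather than the thesis configuration.

```python
import torch
import torch.nn.functional as F

def total_variation(prob: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a predicted probability map of shape (N, 1, H, W)."""
    dh = torch.abs(prob[..., 1:, :] - prob[..., :-1, :]).mean()
    dw = torch.abs(prob[..., :, 1:] - prob[..., :, :-1]).mean()
    return dh + dw

def tv_regularized_loss(logits: torch.Tensor,
                        noisy_mask: torch.Tensor,
                        lambda_tv: float = 0.1) -> torch.Tensor:
    """Segmentation loss on noisy SAR-based masks plus a smoothness penalty
    that encourages spatially connected burned-area predictions."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, noisy_mask.float())
    return bce + lambda_tv * total_variation(prob)
```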

On a regional scale, we demonstrated the effectiveness of deep learning for SAR-based and SAR-optical fusion-based wildfire progression mapping. To scale up deep learning models and make them globally applicable, large-scale, globally distributed data are needed. Considering the scarcity of labelled data in remote sensing, weakly supervised and self-supervised learning will be our main research directions in the near future.

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295725