This project aims to develop AI-assisted tools to analyze and reconstruct flooding from image and video data captured by surveillance and traffic cameras. The key new technologies to develop are video object segmentation (VOS) techniques and real-time detection and reconstruction techniques for objects with rapid appearance changes under severe weather.
Effectively estimating water levels and constructing flood hydrographs in urban areas in real time during flash floods, hurricanes, and other extreme weather events remains a difficult task. Water in such scenes often has rapidly changing appearance caused by free-form self-deformation, environmental illumination, reflections, waves, ripples, turbulence, sediment concentration, etc. Such rapidly changing appearance often leads to less accurate water detection and segmentation, and consequently, unreliable flood/water level estimation.
We maintain an annotated water database and benchmark to support the training of video water detection and segmentation models.
We have developed cutting-edge deep water segmentation pipelines, along with reference object detection and size estimation networks, for real-time inundation map estimation.
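To illustrate the idea behind reference-object-based water level estimation, the sketch below converts a segmented water mask and a detected reference object of known height (e.g., a stop sign) into a depth estimate. The function name and inputs are hypothetical, a minimal sketch assuming the segmentation and detection stages have already produced a binary water mask and the object's pixel extent; it is not the project's actual pipeline.

```python
import numpy as np

def estimate_water_depth(water_mask, ref_top_row, ref_bottom_row, ref_height_m):
    """Estimate water depth against a vertical reference object.

    water_mask: (H, W) boolean array, True where water was segmented.
    ref_top_row / ref_bottom_row: pixel rows of the reference object's
        top and base (hypothetical outputs of an object detector).
    ref_height_m: known real-world height of the reference object.
    """
    # Metres per pixel, calibrated from the object's known height.
    m_per_px = ref_height_m / float(ref_bottom_row - ref_top_row)
    # Waterline: highest image row (smallest row index) containing water.
    water_rows = np.where(water_mask.any(axis=1))[0]
    if water_rows.size == 0:
        return 0.0  # no water detected in this frame
    waterline = water_rows.min()
    # Depth corresponds to the submerged portion of the reference object.
    submerged_px = max(ref_bottom_row - waterline, 0)
    return submerged_px * m_per_px
```

For example, if a 3 m object spans 60 pixels and the waterline sits 20 pixels above its base, the estimated depth is 1 m. A real system would also need per-camera calibration and robustness to waves along the waterline.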
V-FloodNet: A Video Segmentation System for Urban Flood Detection and Quantification
Yongqing Liang, Xin Li, Brian Tsai, Navid Jafari, Qin Chen
Environmental Modelling &amp; Software, Vol. 163, Article 105586, 2022
Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement
Yongqing Liang, Xin Li, Navid Jafari, Qin Chen
Neural Information Processing Systems (NeurIPS), 2020
[Paper] [Supplementary Doc] [Code]
The WaterV1 dataset contains:
The WaterV2 dataset contains:
A Pretrained WaterNet Model: Trained on the WaterV2 Training Set for 200 Epochs