
Open Access 2024 | OriginalPaper | Book Chapter

4. Intelligent Architectures for Extreme Event Visualisation

Authors: Yang Song, Maurice Pagnucco, Frank Wu, Ali Asadipour, Michael J. Ostwald

Published in: Climate Disaster Preparedness

Publisher: Springer Nature Switzerland


Abstract

Realistic immersive visualisation can provide a valuable method for studying extreme events and enhancing our understanding of their complexity, underlying dynamics and human impacts. However, existing approaches are often limited by their lack of scalability and incapacity to adapt to diverse scenarios. In this chapter, we present a review of existing methodologies in intelligent visualisation of extreme events, focusing on physical modelling, learning-based simulation and graphic visualisation. We then suggest that various methodologies based on deep learning and, particularly, generative artificial intelligence (AI) can be incorporated into this domain to produce more effective outcomes. Using generative AI, extreme events can be simulated, combining past data with support for users to manipulate a range of environmental factors. This approach enables realistic simulation of diverse hypothetical scenarios. In parallel, generative AI methods can be developed for graphic visualisation components to enhance the efficiency of the system. The integration of generative AI with extreme event modelling presents an exciting opportunity for the research community to rapidly develop a deeper understanding of extreme events, as well as the corresponding preparedness, response and management strategies.

4.1 Introduction

Extreme events such as earthquakes, floods and wildfires have a significant impact on both the natural environment and human society. To effectively predict, prepare for and manage the impact of extreme events, researchers have developed a range of physics-based modelling methodologies to understand the underlying dynamics of such events. When these modelling methods are integrated into an immersive visual environment, researchers and domain experts can interact with systems and better understand the complex nature of extreme events and human responses. This increased understanding relies on three factors. The first is the physical presence users feel in immersive environments, and the second is how this type of presence heightens intuitive understanding and spatial cognition. The third factor is associated with a capacity to interact with or shape the environment. In an immersive visualisation, users can specify key environmental factors that would affect the dynamics of extreme events, and the system will then adapt the visualisation accordingly to provide a highly naturalistic depiction of various scenarios. Such intelligent visualisation systems integrating physics- and data-driven modelling and simulation will be highly effective for preparing communities, designing response strategies and training first responders.
The simulation of earthquakes with supercomputers is currently an active research field, and researchers are investing significant effort in developing open-access datasets to facilitate further data-driven research (Kovner, 2022). There has also been significant research on fire and flood modelling using both physics-based and machine learning approaches (Jain et al., 2020; Teng et al., 2017). However, comparatively little research has focused specifically on immersive visualisation of extreme events, especially intelligent visualisation that can adapt dynamically to different environments in simulated scenarios.
In this chapter, we will first provide a review of representative approaches that build towards intelligent visualisation of extreme events. We consider that intelligent visualisation is a computational pipeline that consists of (i) modelling, (ii) simulation and (iii) graphic visualisation. While modelling and simulation focus on data generation, graphic visualisation uses computer graphics algorithms to represent the generated data in a visually immersive and realistic way. Next, motivated by the recent success of deep learning and generative artificial intelligence (AI), we will present suggestions for how generative AI methodologies can be incorporated into the visualisation of extreme events. Finally, we will discuss how different generative AI methods can support the various components required in a visualisation pipeline for extreme events.

4.2 Intelligent Visualisation of Extreme Events

While the noun “visualisation” often refers to the graphic presentation or representation of image-based data, in the present context, we focus on intelligent visualisation, which consists of a complete computational pipeline including modelling, simulation and graphic visualisation (Fig. 4.1). Such an intelligent visualisation system will be able to generate data representations of extreme events based on physical modelling or learning-based modelling and simulations, which are then visualised in high resolution with support for user interaction and immersive experiences. This section discusses examples of each of the three pipeline stages.

4.2.1 Physical Modelling

The objective of using physical modelling for extreme events is to develop mathematical models that replicate the underlying principles and behaviours of the dynamic evolution of these events. For instance, through physical modelling, studies have investigated the effect of wind, slope, fuel moisture, fuel structure and ignition setting on the rate of spread and intensity of bushfires (Sharples & Hilton, 2020).
Fire modelling approaches have evolved from initial one-dimensional (1D) rate of spread (RoS) estimations to more intricate 2D or 3D simulations that depict the expansion of fire perimeters in spatial contexts. Physical fire models follow the same fundamental principles of physics but differ in their choice of governing equations and implementations, as well as in complexity and dimensionality. For example, the classical approach to fire spread modelling (Weber, 1991) was initially a 1D model predicting RoS from the flux of energy and was later extended to a 2D plane. The WFDS model was subsequently developed for 3D simulations that resolve different stages of the physical processes (Mell et al., 2007).
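As a simple illustration of how such RoS models combine environmental drivers, the sketch below multiplies a base spread rate by wind, slope and moisture corrections. The functional forms and coefficients are illustrative placeholders, not those of Weber (1991) or any operational model.

```python
import numpy as np

def rate_of_spread(r0, wind_speed, slope_deg, fuel_moisture,
                   wind_coeff=0.05, slope_coeff=0.03, moisture_coeff=0.1):
    """Toy 1D rate-of-spread estimate (m/min).

    r0 is a base spread rate under no wind or slope; the multiplicative
    wind/slope/moisture corrections below are illustrative placeholders,
    not coefficients from any published model.
    """
    wind_factor = 1.0 + wind_coeff * wind_speed                # faster with wind
    slope_factor = np.exp(slope_coeff * slope_deg)             # upslope acceleration
    moisture_factor = np.exp(-moisture_coeff * fuel_moisture)  # moisture damping
    return r0 * wind_factor * slope_factor * moisture_factor

# Example: base rate 2 m/min, 15 km/h wind, 10-degree slope, 8% fuel moisture
print(rate_of_spread(2.0, 15.0, 10.0, 8.0))
```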
Some physical models have been integrated into software packages for various application domains. For instance, WRF-SFIRE (Mandel et al., 2014) provides a coupled meteorological and fire spread model. This integrated system accounts for dynamic interactions between weather conditions and fire behaviour by solving intricate physical and chemical processes within a high-resolution 3D grid-like domain. Owing to this advanced modelling capability, WRF-SFIRE has been widely used by researchers in fire dynamics. FlamMap (Finney, 2006) “is a fire analysis desktop application that [...] [includes a suite of functions that can simulate] potential fire behaviour characteristics, [such as] fire growth and spread, and conditional burn probabilities under constant environmental conditions” (Finney, 2023). It also encapsulates FARSITE (Finney, 2004), which computes wildfire growth and behaviour over longer time periods under heterogeneous conditions.
In practice, physical models are primarily adopted in behaviour analysis of fire and flood events rather than operational use, mainly due to the challenges associated with validation and computational demands. Moreover, employing physical models necessitates meticulous manual input that requires domain expertise. This input encompasses aspects such as defining initial geometry and domain parameters, specifying fire source characteristics and configuring simulation parameters, among others. These inputs cannot be accurately determined without a rigorous and deep understanding of the underlying physics involved. Consequently, physical models tend to present a steep learning curve for researchers who do not have a background in these disciplines.

4.2.2 Learning-Based Modelling and Simulation

Because the principles of physical modelling are founded on expert knowledge, its capability for modelling complex or new scenarios is inherently limited by that existing knowledge. To overcome this limitation, recent approaches have explored learning-based methodologies that reveal previously hidden patterns in historical or experimental data for fire and flood behaviour analysis. A diverse range of methods is available for this purpose, including both statistical machine learning and neural network-based models, and the choice typically depends on the scale of the available data. For instance, there has been extensive research on flooding due to the intensification of heavy rainfall under climate change (Ho et al., 2023). These approaches typically utilise statistical machine learning models such as linear regression to discover correlations between events and environmental factors: data are collected from historical events and preprocessed before being used to fit the models. Similar approaches have been developed using experimental data to address the limitations of historical records. For example, data from outdoor experimental fires and more intense natural wildfires have been used to fit logistic and non-linear regression models for the rate of fire spread (Cruz et al., 2021). Such models can represent how wildfire behaviour responds to wind speed, fuel structure and various landscape conditions across a broad range of scenarios.
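The sketch below illustrates this general regression workflow, fitting a linear model of RoS against wind, slope and fuel moisture. The data are synthetic stand-ins and the coefficients are hypothetical; it shows the shape of the approach, not any published model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset: each row is one observed fire run with
# environmental covariates and a measured rate of spread (m/min).
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 2], [40, 45, 20], size=(200, 3))  # wind, slope, moisture
y = 1.5 + 0.2 * X[:, 0] + 0.1 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 0.5, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out fires:", model.score(X_test, y_test))
```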
Other types of statistical machine learning models have also been incorporated into extreme event modelling and simulation. In one application to RoS estimation, a Bayesian model was developed from weather variables so that it could effectively accommodate variability in the model inputs and the uncertainty associated with RoS prediction (Storey et al., 2021). Bayesian models have also been adopted in flood modelling to estimate the frequency of extreme flood events from historical records (Parkes & Demeritt, 2016).
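To illustrate how a Bayesian treatment exposes prediction uncertainty, the sketch below uses scikit-learn's BayesianRidge on synthetic RoS data. Storey et al. (2021) use a purpose-built Bayesian model, so this is only a stand-in for the general idea.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 2], [40, 45, 20], size=(200, 3))  # wind, slope, moisture
y = 1.5 + 0.2 * X[:, 0] + 0.1 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 0.5, 200)

# The posterior over regression weights yields a predictive mean and standard
# deviation for each input, making prediction uncertainty explicit.
model = BayesianRidge().fit(X[:150], y[:150])
mean, std = model.predict(X[150:], return_std=True)
print(f"first held-out fire: RoS = {mean[0]:.2f} +/- {std[0]:.2f} m/min")
```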
With the recent development of neural networks and, particularly, deep learning, approaches have emerged that perform modelling or simulation on higher-dimensional data, such as satellite images, for fire spread modelling or simulation. For example, the FireCast method (Radke et al., 2019) is a simple convolutional neural network trained on satellite images and weather data; to address the issue of limited training data, weather interpolation and data augmentation techniques are employed. A similar approach was developed (Yang et al., 2021), incorporating “ground truth labels” obtained from public datasets. In another recent study, various data sources, “including topography, weather [conditions], […], vegetation, and population density” (Huot et al., 2022) as well as satellite images, are combined to create a comprehensive dataset for predicting next-day wildfire spread. Huot et al. formulate the prediction “as an image segmentation [task to] classify each area as either containing fire [or not], given the location [of] the fire of the previous day”. A convolutional autoencoder developed for this segmentation outperforms other machine learning approaches based on logistic regression and random forest algorithms. In another approach, Hodges et al. (2019) consider the challenge of collecting sufficient training data to support a robust machine learning process. In response, they generate synthetic data using Rothermel (Scott & Burgan, 2005) for homogeneous landscapes and FARSITE for heterogeneous spreads. A deep convolutional inverse graphics network is then trained on the synthetic data to predict fire spread.
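A minimal sketch of the kind of convolutional autoencoder used for such segmentation tasks is shown below, assuming stacked environmental channels as input and a per-pixel fire/no-fire output. The architecture, channel count and tile size are schematic assumptions, not the Huot et al. (2022) model.

```python
import torch
import torch.nn as nn

class FireSpreadAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for per-pixel fire/no-fire prediction.

    Input: C environmental channels (e.g. vegetation, wind, previous fire mask)
    on an H x W grid; output: next-day fire logits at the same resolution.
    """
    def __init__(self, in_channels=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # fire logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FireSpreadAutoencoder()
x = torch.randn(4, 12, 64, 64)          # batch of hypothetical map tiles
logits = model(x)                        # (4, 1, 64, 64)
target = torch.rand(4, 1, 64, 64)        # stand-in fire masks
loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
print(logits.shape, loss.item())
```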
While learning-based approaches like these can overcome the problem of limited domain knowledge and represent a more diverse data distribution, the capabilities of existing approaches are still limited. Current applications of learning-based approaches are mainly focused on predicting the frequency of fire or flood events and are typically formulated as regression, classification or segmentation problems. However, such methods are not designed for generating realistic simulations of the dynamic behaviours of extreme events in hypothetical scenarios with diverse environmental conditions, especially when user interaction and adaptive simulation are expected. Moreover, the performance of machine learning models is highly dependent on large-scale, high-quality training data. While researchers have devoted substantial time to creating open-access datasets, these are still quite small scale, which then limits the performance and generalisability of the developed systems. We expect to see more developments in dataset creation and integration of learning-based approaches with knowledge-driven physical modelling, which would effectively address this issue.

4.2.3 Graphic Visualisation

The graphic visualisation component refers to the stage where real, modelled or simulated data is rendered and displayed in 2D or 3D. Typically, 2D visualisation overlays data on maps or displays it as plots, whereas 3D visualisation provides a more comprehensive and realistic representation of an event’s behaviour, often rendered over a 3D map. A comprehensive survey of visualisation systems for wildfires has recently been published (Tirado Cortes et al., 2023). Here, we provide an overview of 3D methods in graphic visualisation that have been utilised in the domain of extreme events, since 3D methods require more complex processing steps.
A representative system for 3D wildfire visualisation is presented by Castrillon et al. (2011), where FARSITE generates the data to be visualised, such as fire perimeters, flame intensity and the velocity of the fire front. A graphical interface is then built in a 3D Multiplayer Geographical Environment, integrating geographical layers and 3D objects over virtual terrain. The fire visualisation module is built on two particle systems modelling flame and smoke, controlled by an emitter that specifies the particles’ behaviour. The propagation of fire is modelled by curve morphing techniques that update the fire perimeters and generate animations. Various optimisation techniques adaptively reduce the number of mesh vertices and particles so that the visualisation remains realistic while graphic complexity is reduced. Other systems introduce richer user interaction to update the rendering and visualisation. For instance, in one system (Wahlqvist et al., 2021), users can change the views, data inputs and timestamps, and the visualisation can give valuable insights into the effect of fire spread on populated areas.
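The sketch below illustrates the emitter-driven particle-system idea in its simplest form: particles spawned at a fire front, advected by wind and buoyancy, and recycled after their lifetime expires. All constants and the emitter geometry are illustrative, not taken from Castrillon et al. (2011).

```python
import numpy as np

class FlameParticles:
    """Toy emitter-driven particle system for flame/smoke visualisation."""

    def __init__(self, n=500, wind=(1.0, 0.0)):
        self.pos = np.zeros((n, 3))                    # x, y, z positions
        self.vel = np.zeros((n, 3))
        self.life = np.random.uniform(0.5, 2.0, n)     # seconds remaining
        self.wind = np.asarray(wind)

    def step(self, dt=0.05):
        self.vel[:, :2] += self.wind * dt              # horizontal wind drift
        self.vel[:, 2] += 2.5 * dt                     # buoyant rise
        self.pos += self.vel * dt
        self.life -= dt
        dead = self.life <= 0                          # recycle expired particles
        self.pos[dead] = 0.0                           # respawn at the emitter
        self.vel[dead] = np.random.normal(0, 0.2, (dead.sum(), 3))
        self.life[dead] = np.random.uniform(0.5, 2.0, dead.sum())

system = FlameParticles()
for _ in range(100):
    system.step()
print("mean particle height:", system.pos[:, 2].mean())
```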
Overall, while advanced graphics techniques can achieve highly realistic visualisation of extreme events, significant advances are still needed to support computational modelling of specific event scenarios. There is very limited support for dynamically updating the rendering of different scenes; changing the environment thus requires extensive effort to redesign the underlying 3D models. 3D computer graphics also demand extensive computational resources. To enhance efficiency, current approaches often resort to approximation algorithms (Byari et al., 2022) that reduce the spatial resolution or realism of the visualisation. While recent advancements in deep learning and generative AI have demonstrated impressive progress in computer graphics (Lefohn, 2023), more work is needed to adapt such methods to the visualisation of extreme events.

4.3 Generative AI in Visualisation

While generative AI has attracted considerable public attention due to the popularity of ChatGPT, we regard the development of generative adversarial networks (GANs) (Goodfellow et al., 2014) as the start of generative AI for images. By training a deep learning model comprising a generator and a discriminator, GANs can generate new images that resemble the original imaging domain. Many improvements to the original GAN structure have since been developed for objectives such as style transfer, image super-resolution and image editing, leading to diverse applications. However, GANs can be difficult to train, and the generated images often lack diversity. Hence, other generative AI models have been proposed, such as variational autoencoders (VAEs) (Kingma & Welling, 2019) and diffusion models (DMs) (Ho et al., 2020), although VAEs tend to produce images of lower quality and DMs can be slow at generating images. More recently, deep learning models have been adapted to computer graphics, such as the neural radiance field (NeRF) (Mildenhall et al., 2020) and its variants, achieving both efficient and realistic graphic rendering and visualisation. Nevertheless, while significant research and industry development have been conducted on generative AI, relatively little work has addressed the visualisation of extreme events specifically. Here, we describe representative studies of generative AI models in related application domains, which are useful precedents for adapting generative AI to the visualisation of extreme events.

4.3.1 Image Generation

A typical GAN model contains two components: a generator that creates synthesised images and a discriminator that distinguishes between real and generated images. During the training process, the aim is to derive a generator that can create highly realistic images so that the discriminator cannot separate them from the real images. As a result, the trained GAN generator can be used to create new, high-quality images during the inference process. Many variants of the standard GAN model have been proposed, some customised for specific applications, while others address fundamental limitations in the GAN model, such as the difficulty of training and problems with mode collapse. A recent survey paper (Wang et al., 2021) presents a comprehensive overview of this field.
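For concreteness, the following minimal PyTorch sketch shows the adversarial training loop on toy 2D data. The architectures are deliberately tiny and the data synthetic; this illustrates the generator/discriminator mechanics, not any published extreme-event model.

```python
import torch
import torch.nn as nn

# Generator maps 16-D noise to 2-D samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) * 0.5 + 3.0       # stand-in "real" data cluster
    fake = G(torch.randn(64, 16))

    # Discriminator: push real scores towards 1 and fake scores towards 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```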
One example of the use of GANs in extreme event visualisation generates photo-realistic images showing how floods can affect the environment (Schmidt et al., 2022). The approach, named ClimateGAN, can generate flooded scenes with a 1-metre flood level from arbitrary street-level scenes such as Google Street View images. The model consists of two modules: a Masker module that predicts the image regions that should be under water and a GAN-based Painter module that generates water textures based on the Masker’s prediction. Training the model would ideally require paired images of scenes before and after flooding; such pairs are rare and cannot easily be collected. Therefore, in ClimateGAN, a virtual world simulating urban, suburban and rural environments is created in the Unity3D engine and flooded with 1 m of water to generate the paired training data. A smaller dataset of real images was also collected to enhance training. While ClimateGAN generates realistic images, extending it to floods of different heights is difficult, mainly due to the difficulty of data collection.
In contrast to GANs, DMs are inspired by non-equilibrium thermodynamics. A DM consists of two “processes, the forward diffusion process [which] defines a [Markov] chain of diffusion steps to slowly add random noise to data, [and] the reverse diffusion process” (Niu et al., 2023), which learns to reverse the forward process to construct desired data outputs from the noise. While DMs are typically much slower than GANs during generation, various techniques have been developed to enhance their efficiency. DMs have thus gained significant interest in the research community and industry, powering popular tools such as DALL·E 2, because of their exceptional capability to create high-quality, realistic and diverse images.
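The forward process can be written in closed form, which the sketch below implements; the reverse (denoising) network and sampler are omitted. The linear noise schedule follows the common convention of Ho et al. (2020), and the stand-in "image" is random data.

```python
import torch

# Closed-form forward diffusion: q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I).
# A trained reverse model would predict the added noise and step back from
# x_T ~ N(0, I); only the forward corruption is shown here.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear schedule (Ho et al., 2020)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    noise = torch.randn_like(x0)
    a = alpha_bar[t].sqrt()
    s = (1.0 - alpha_bar[t]).sqrt()
    return a * x0 + s * noise, noise             # noisy sample + training target

x0 = torch.randn(3, 64, 64)                      # stand-in image
x_t, eps = q_sample(x0, t=500)
print(x_t.std())                                 # roughly unit scale at mid-schedule
```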
DMs have also been used for weather forecasting. For example, a recent approach named SwinRDM (Chen et al., 2023) performs weather forecasting via a Swin-transformer-based recurrent neural network (SwinRNN) and then interpolates the forecast output via a diffusion-based super-resolution module. As a result, SwinRDM can provide global weather forecasting at 0.25-degree resolution without incurring excessively high computational cost.
Based on these precedents, similar models can be developed for the visualisation of extreme events, such as using GANs or DMs to generate scenes showing wildfire spread or changes in landscapes after an extreme event. As with ClimateGAN, the difficulty would lie in data collection, since generative AI models require large-scale training data. Images from certain viewpoints would be easier to obtain, such as aerial images from satellite imagery; in other cases, a simulated environment might be the best way to generate a sufficient amount of training data. Moreover, for extreme events, the realism of generated images is critical, and the images need to adjust to different environmental conditions. To achieve this, it is possible to integrate text prompts into the image generation process via DMs, or image templates as conditional input for GANs. Such information can explicitly guide the image generation process so that the outputs better approximate the expected scenarios under given environmental variables.
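One way to realise such conditioning is sketched below: a hypothetical generator embeds a vector of environmental variables and concatenates it with the noise input, so that sampled scenes respond to the specified scenario. The names, layer sizes and choice of conditioning variables are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of conditioning a generator on environmental variables.

    The condition vector (e.g. wind speed, humidity, fuel load, slope) is
    embedded and concatenated with the noise vector before generation.
    """
    def __init__(self, noise_dim=64, cond_dim=4, out_dim=3 * 32 * 32):
        super().__init__()
        self.embed = nn.Linear(cond_dim, 32)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 32, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, cond):
        h = torch.cat([z, self.embed(cond)], dim=1)
        return self.net(h).view(-1, 3, 32, 32)

g = ConditionalGenerator()
cond = torch.tensor([[25.0, 0.15, 0.8, 1.2]])   # wind, humidity, fuel, slope
img = g(torch.randn(1, 64), cond)               # one 32x32 scene for this scenario
print(img.shape)
```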

4.3.2 Dynamic Simulation

The dynamic evolution of an extreme event, reflected in the rapid motion (direction and velocity) of the fire or flood, is an important aspect that cannot simply be represented by a sequence of images. For example, while the spread of a wildfire recorded in satellite imagery might be captured only once every few hours, a 3D visualisation of fire events from a first-person immersive view requires real-time dynamic updates, and the depicted evolution and motion must also convey causal effects consistent with the environmental conditions. While generative AI models have demonstrated impressive performance for single images, relatively few studies have addressed the generation of dynamic time-lapse data or videos, largely due to the significant computational power required.
In a recent study (Chu et al., 2021), a GAN-based deep learning model is developed for fluid simulation, where the fluid can morph dynamically depending on several “control modalities, including obstacles, physical parameters, kinetic energy and vorticity” (ibid.). Interestingly, the model “explicitly embeds physical quantities into the learned latent space” (ibid.) so that the control parameters can effectively impact the generation of output and enhance their diversity. To train the model, a training dataset is created via simulation to generate pairs of images representing the density and velocity information. The dataset also introduces samples showing different velocities of moving obstacles so that the model training can be exposed to a variety of cases. The evaluation results show that the approach delivers higher performance than the other more standard GAN- or VAE-based models. However, as with all generative AI methods and especially GAN models, the simulation outputs do not always adapt well to user controls.
For dynamic or video generation, most methods, unlike the above approach with its explicit physical modelling, incorporate motion information via more traditional computer vision algorithms, such as optical flow. For instance, the DTVNet framework (Zhang et al., 2020) takes in a single landscape image and generates diversified time-lapse videos based on normalised motion vectors. The network contains two modules: an optical flow encoder that estimates the optical flow between consecutive images and a dynamic video generator that follows a GAN-based architecture and constructs the video frames by learning motion and content information. DTVNet experiments were performed on a dataset “containing dynamic sky scenes, [including a] cloudy sky with moving clouds and […] a starry sky with moving stars” (Xiong et al., 2018). Evaluation of DTVNet in human user studies shows improved performance over other GAN models.
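As a concrete example of the motion representation involved, the sketch below computes dense optical flow between two frames with OpenCV's classical Farneback method. DTVNet's flow encoder is learned rather than classical, so this is only an analogous stand-in, and the frames here are synthetic.

```python
import cv2
import numpy as np

# Dense optical flow between two consecutive frames: the kind of per-pixel
# motion field a flow encoder consumes. Synthetic frames stand in for video.
prev = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
curr = np.roll(prev, shift=2, axis=1)           # simulate rightward motion

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

print(flow.shape)                               # (128, 128, 2): dx, dy per pixel
print("mean horizontal motion:", flow[..., 0].mean())
```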
DMs have also been applied to dynamic scene or video generation, typically showing more impressive results than GAN-based models but requiring text prompts. For example, Imagen Video (Ho et al., 2022) is a text-conditional video generation system. Compared to other approaches, Imagen Video achieves high-definition video generation, producing videos of 128 frames at 1280 × 768 pixels and 24 frames per second. It achieves this with a cascade of DMs comprising a sequence of spatial and temporal super-resolution processes. While the method achieves remarkable performance, it was trained on an “internal dataset [containing] 14 million video–text pairs and 60 million image–text pairs” (Ho et al., 2022) as well as other large-scale public datasets. While such requirements can be prohibitive, a much smaller dataset should suffice to achieve promising results for a domain-specific model, such as one for simulating wildfires.
DMs have thus demonstrated impressive performance for video generation, making them a possible approach for generating dynamic simulations of extreme events. Customisation would be required so that environmental variables can be integrated effectively in place of text prompts. GANs, on the other hand, can be more flexible in introducing environmental variables into the model, though domain-specific customisation will also be required, especially to encourage diversity in the generated data. Overall, as with generative AI for image generation, a key design consideration is the dataset: large-scale datasets that closely represent the real data distribution and its diversity would be valuable for developing such models. To accommodate dataset limitations, other techniques would need to be exploited, such as introducing explicit physical modelling, performing advanced data augmentation or integrating pretrained models with transfer learning, as sketched below.
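As a sketch of the transfer-learning and augmentation route just mentioned, the snippet below adapts an ImageNet-pretrained backbone to a hypothetical small fire-scene dataset. The two-class head, augmentation choices and freezing strategy are assumptions; the dataset wiring and training loop are omitted.

```python
import torch.nn as nn
from torchvision import models, transforms

# Transfer learning: start from a pretrained backbone and replace the head
# for a small domain-specific dataset; augmentation stretches limited data.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new 2-class head

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
])

# Fine-tune only the new head first; optionally unfreeze the backbone later.
for p in backbone.parameters():
    p.requires_grad = False
for p in backbone.fc.parameters():
    p.requires_grad = True
```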

4.4 Conclusion

Extreme climate events such as floods and wildfires pose a particular challenge to society. To better prepare the community and first responders for such unpredictable events, new methods are required. Intelligent visualisation facilitates (i) picturing diverse scenarios, (ii) developing rich and dynamic narratives from them, (iii) communicating the threats they entail and (iv) supporting people to rehearse their responses to these threats. As such, intelligent visualisation is central to both gaining new insights into extreme events and translating this knowledge to stakeholders.
Moreover, human perception of the environment and our ability to adapt to dynamic changes depend on the rapid acquisition of real-time sensory data for processing and swift response. The brain’s remarkable capacity to efficiently allocate computational resources and direct pertinent data streams to the relevant cortical regions for planning somatosensory reactions empowers us to manage these fluctuations. Nevertheless, our capacity to make well-informed decisions and respond appropriately to unfamiliar situations (dealing with uncertainty) remains significantly underexplored. Acknowledging that immersion is a multifaceted experience intricately influenced by various sensory modalities, this chapter places its primary emphasis on the saliency of visual information as a key driver of information acquisition during extreme events.
In this chapter, we provided a review of current methodologies in intelligent visualisation, focusing on physical modelling, learning-based modelling and simulation as well as graphic visualisation components. Then, considering the widespread success of deep learning and particularly generative AI models, we hypothesised that such models can also be adapted for the visualisation of extreme events. We thus presented several representative generative AI approaches in related application domains and discussed various design considerations when developing such approaches for extreme event visualisation. Ultimately, this chapter can be viewed as both a review and position paper for the emerging topic of intelligent visualisation for extreme events.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
Byari, M., Bernoussi, A., Jellouli, O., Ouardouz, M., & Amharref, M. (2022). Multi-scale 3D cellular automata modelling. Chaos, Solitons & Fractals, 164, 112653.
Castrillon, M., Jorge, P., Lopez, I., Macias, A., et al. (2011). Forecasting and visualization of wildfires in a 3D geographical information system. Computers & Geosciences, 37(3), 390–396.
Chen, L., Du, F., Hu, Y., Wang, Z., & Wang, F. (2023). SwinRDM: Integrate SwinRNN with diffusion model towards high-resolution and high-quality weather forecasting. In Y. Chen & J. Neville (Eds.), AAAI conference on AI (pp. 322–330). AAAI.
Chu, M., Thuerey, N., Seidel, H., Theobalt, C., & Zayer, R. (2021). Learning meaningful controls for fluids. ACM Transactions on Graphics, 40(4), 100.
Cruz, M., Cheney, N., Gould, J., McCaw, W., et al. (2021). An empirical-based model for predicting the forward spread rate of wildfires in eucalypt forests. International Journal of Wildland Fire, 31(1), 81–95.
Finney, M. (2004). FARSITE: Fire area simulator—Model development and evaluation (Research paper). US Department of Agriculture & Forest Service.
Finney, M. (2006). An overview of FlamMap fire modelling capabilities. In P. L. Andrews & B. W. Butler (Eds.), Fuels management—How to measure success (pp. 213–220). US Department of Agriculture & Forest Service.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., et al. (2014). Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. Weinberger (Eds.), International conference on neural information processing systems (pp. 2672–2680). NeurIPS.
Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), International conference on neural information processing systems (pp. 1–20). NeurIPS.
Ho, J., Chan, W., Saharia, C., Whang, J., et al. (2022). Imagen Video: High-definition video generation with diffusion models. arXiv: 2210.02303.
Ho, M., Wasko, C., O’Shea, D., Nathan, R., et al. (2023). Changes in flood-associated rainfall losses under climate change. Journal of Hydrology, 625, 129950.
Hodges, J., Lattimer, B., & Hughes, J. (2019). Wildland fire spread modelling using convolutional neural networks. Fire Technology, 55, 2115–2142.
Huot, F., Hu, R., Goyal, N., Sankar, T., et al. (2022). Next day wildfire spread: A ML dataset to predict wildfire spreading from remote-sensing data. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–13.
Jain, P., Coogan, S., Subramanian, S., Crowley, M., et al. (2020). A review of ML applications in wildfire science and management. Environmental Reviews, 28(4), 478–505.
Kingma, D., & Welling, M. (2019). An introduction to variational autoencoders. Now Publishing.
Kovner, A. (2022, September 15). Earthquake safety, one shake simulation at a time. Berkeley Lab.
Lefohn, A. (2023, May 2). Latest NVIDIA graphics research advances generative AI’s next frontier. NVIDIA Blogs.
Mandel, J., Amram, S., Beezley, J., Kelman, G., et al. (2014). Recent advances and applications of WRF-SFIRE. Natural Hazards and Earth System Sciences, 14(10), 2829–2845.
Mell, W., Jenkins, M., Gould, J., & Cheney, P. (2007). A physics-based approach to modelling grassland fires. International Journal of Wildland Fire, 16(1), 1–22.
Mildenhall, B., Srinivasan, P., Tancik, M., Barron, J., et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In A. Vedaldi, H. Bischof, & J.-M. Frahm (Eds.), European conference on computer vision (pp. 405–421). Springer.
Niu, C., Phaneuf, M., & Mojabi, P. (2023). A diffusion model for multi-layered metasurface unit cell synthesis. IEEE Open Journal of Antennas and Propagation, 4, 654–666.
Parkes, B., & Demeritt, D. (2016). Defining the hundred-year flood: A Bayesian approach for using historic data to reduce uncertainty in flood frequency estimates. Journal of Hydrology, 540, 1189–1208.
Radke, D., Hessler, A., & Ellsworth, D. (2019). FireCast: Leveraging deep learning to predict wildfire spread. In S. Kraus (Ed.), IJCAI (pp. 4575–4591). IJCAI.
Schmidt, V., Luccioni, A., Teng, M., Zhang, T., et al. (2022). ClimateGAN: Raising climate change awareness by generating images of floods. In K. Hofmann & A. Rush (Eds.), International conference on learning representations (pp. 1–27). ICLR.
Scott, J., & Burgan, R. (2005). Standard fire behavior fuel models: A comprehensive set for use with Rothermel’s surface fire spread model. Technical report, US Department of Agriculture & Forest Service.
Sharples, J., & Hilton, J. (2020). Modeling vorticity-driven wildfire behaviour using near-field techniques. Frontiers in Mechanical Engineering, 5, 69.
Storey, M., Bedward, M., Price, O., Bradstock, R., & Sharples, J. (2021). Derivation of a Bayesian fire spread model using large-scale wildfire observations. Environmental Modelling & Software, 144, 105127.
Teng, J., Jakeman, A., Vaze, J., Croke, B., et al. (2017). Flood inundation modelling: A review of methods, recent advances and uncertainty analysis. Environmental Modelling & Software, 90, 201–216.
Tirado Cortes, C., Thurow, S., Ong, A., Sharples, J. J., et al. (2023). Analysis of wildfire visualisation systems for research and training. IEEE Transactions on Visualization & Computer Graphics, 1–20.
Wahlqvist, J., Ronchi, E., Gwynne, S., Kinateder, M., et al. (2021). The simulation of wildland-urban interface fire evacuation: The WUI-NITY platform. Safety Science, 136, 105145.
Wang, Z., She, Q., & Ward, T. (2021). Generative adversarial networks in computer vision: A survey and taxonomy. ACM Computing Surveys, 54(2), 37.
Weber, R. (1991). Modelling fire spread through fuel beds. Progress in Energy and Combustion Science, 17(1), 67–82.
Xiong, W., Luo, W., Ma, L., Liu, W., & Luo, J. (2018). Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. Computer Vision & Pattern Recognition. Arxiv.org. https://t1p.de/7nr9x. Accessed 17 Dec 2023.
Yang, S., Lupascu, M., & Meel, K. (2021). Predicting forest fire using remote sensing data and ML. In AAAI conference on AI (pp. 14983–14990). AAAI.
Zhang, J., Xu, C., Liu, L., Wang, M., et al. (2020). DTVNet: Dynamic time-lapse video generation via single still image. In A. Vedaldi, H. Bischof, T. Brox, & J.-M. Frahm (Eds.), European conference on computer vision (pp. 300–315). Springer.
DOI: https://doi.org/10.1007/978-3-031-56114-6_4