Scientific visualization of large-scale vector fields with modern, so-called integration-based methods, which rely on the analysis of particle trajectories, is not currently feasible, since existing algorithms cannot make efficient use of parallel architectures such as clusters and supercomputers. This leaves researchers in science and industry unable to visualize, analyze, and understand the processes described by large vector field data from simulation or measurement. The project aims at developing a novel methodological framework for integration-based visualization that will enable visualization of the largest-scale vector fields arising in current scientific applications. The novel methodology will allow the efficient use of parallel architectures for fast and interactive visualization of very large vector field data sets, which is not possible with current methods. The project's approach will combine techniques from scientific visualization, parallel algorithms, applied mathematics, and software design. The resulting increased ability to study large vector fields will strongly impact fundamental scientific research across a large and interdisciplinary set of scientific and industrial application areas that rely on vector field visualization. This includes research on technologies related to timely problems such as combustion, fusion, and aerodynamics.
This project is supported by the Marie Curie Actions within the EU FP7 Programme under grant #304099.
Abstract: Particle advection is an important vector field visualization technique that is difficult to apply to very large data sets in a distributed setting due to scalability limitations in existing algorithms. In this paper, we report on several experiments using work-requesting dynamic scheduling, which achieves balanced work distribution on arbitrary problems with minimal communication overhead. We present a corresponding prototype implementation, provide and analyze benchmark results, and compare our results to an existing algorithm.
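The work-requesting idea can be illustrated with a minimal serial emulation: an idle worker asks a randomly chosen busy peer, which donates part of its queue. The function names (`work_requesting`, `advect_step`), the donation policy, and the analytic stand-in field below are illustrative assumptions, not the paper's distributed implementation:

```python
import random
from collections import deque

def advect_step(p, dt=0.1):
    """One Euler step of a particle in a simple analytic swirl field
    (a stand-in for interpolated simulation data)."""
    x, y = p
    return (x - y * dt, y + x * dt)

def work_requesting(num_workers, particles, steps_per_task=50, donate_fraction=0.5):
    """Serial emulation of work-requesting dynamic scheduling: an idle
    worker requests work from a random busy peer, which donates a
    fraction of its queue (hypothetical policy)."""
    queues = [deque() for _ in range(num_workers)]
    for i, p in enumerate(particles):          # initial static round-robin partition
        queues[i % num_workers].append(p)
    done = []
    while any(queues):
        for rank in range(num_workers):
            if not queues[rank]:               # idle: issue a work request
                busy = [r for r in range(num_workers) if queues[r]]
                if not busy:
                    continue
                victim = random.choice(busy)
                n = max(1, int(len(queues[victim]) * donate_fraction))
                for _ in range(n):             # victim donates part of its queue
                    queues[rank].append(queues[victim].pop())
            if queues[rank]:                   # advect one particle task to completion
                p = queues[rank].popleft()
                for _ in range(steps_per_task):
                    p = advect_step(p)
                done.append(p)
    return done
```

In an actual distributed-memory setting, the request and donation would be messages between ranks rather than direct queue accesses; the sketch only conveys the load-balancing idea.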
Abstract: Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their output. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and Principal Component Analysis. This allows for the inclusion of uncertainty measures into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.
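As background, the variance of a Lagrangian neighborhood can be sketched as follows: advect a small ring of seeds along pathlines and take the eigenvalues of the covariance of the advected positions (PCA in 2D, here via the closed form for a symmetric 2x2 matrix). The stand-in flow, the integrator, and the function names are simplified illustrative assumptions, not the paper's method:

```python
import math

def velocity(x, y, t):
    """Analytic stand-in flow (a shear plus a time-dependent drift);
    a real ensemble member would supply simulation data."""
    return (y, 0.1 * math.sin(t))

def advect(p, t0, t1, dt=0.01):
    """Integrate one pathline with explicit Euler."""
    x, y = p
    t = t0
    while t < t1:
        u, v = velocity(x, y, t)
        x, y = x + u * dt, y + v * dt
        t += dt
    return (x, y)

def neighborhood_variance(center, radius=0.05, n=16, t0=0.0, t1=2.0):
    """Advect a ring of seeds around `center` and return the principal
    variances: eigenvalues of the 2x2 covariance of advected positions."""
    seeds = [(center[0] + radius * math.cos(2 * math.pi * k / n),
              center[1] + radius * math.sin(2 * math.pi * k / n)) for k in range(n)]
    pts = [advect(s, t0, t1) for s in seeds]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    cxx = sum((p[0] - mx) ** 2 for p in pts) / n
    cyy = sum((p[1] - my) ** 2 for p in pts) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    # closed-form eigenvalues of the symmetric 2x2 covariance matrix
    mean = 0.5 * (cxx + cyy)
    disc = math.sqrt(max(0.0, (0.5 * (cxx - cyy)) ** 2 + cxy ** 2))
    return mean + disc, mean - disc
```

For an ensemble, such variances would be computed per member (individual transport variance) and over the pooled advected neighborhoods of all members (joint transport variance); in the shear flow above, the larger eigenvalue dominates because the neighborhood is stretched along x.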
Abstract: Characterizing the interplay between the vortices and forces acting on a wind turbine's blades in a qualitative and quantitative way holds the potential for significantly improving large wind turbine design. The paper introduces an integrated pipeline for highly effective wind and force field analysis and visualization. We extract vortices induced by a turbine's rotation in a wind field, and characterize vortices in conjunction with numerically simulated forces on the blade surfaces as these vortices strike another turbine's blades downstream. The scientifically relevant issue to be studied is the relationship between the extracted, approximate locations where vortices strike the blades and the forces that arise at those locations. This integrated approach is used to detect and analyze turbulent flow that causes local impact on the wind turbine blade structure. The results that we present are based on analyzing the wind and force field data sets generated by numerical simulations, and allow domain scientists to relate vortex-blade interactions with power output loss in turbines and turbine life-expectancy. Our methods have the potential to improve turbine design in order to save costs related to turbine operation and maintenance.
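As a toy illustration of the vortex extraction stage, high-vorticity regions of a 2D velocity grid can be flagged using central differences; the paper's pipeline operates on 3D simulated wind fields with more sophisticated vortex criteria, so this is only a hypothetical sketch with assumed names:

```python
def vorticity(u, v, dx=1.0, dy=1.0):
    """Z-vorticity dv/dx - du/dy on a 2D grid via central differences.
    `u`, `v` are row-major lists of lists (rows index y, columns index x)."""
    ny, nx = len(u), len(u[0])
    w = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
            w[j][i] = dvdx - dudy
    return w

def vortex_candidates(w, threshold):
    """Grid cells whose absolute vorticity exceeds a threshold -- a crude
    stand-in for vortex region extraction."""
    return [(i, j) for j, row in enumerate(w) for i, x in enumerate(row)
            if abs(x) > threshold]
```

For a rigid-body rotation (u = -y, v = x) the interior vorticity evaluates to 2 everywhere, which makes the helper easy to sanity-check.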
Abstract: A series of large eddy simulations is used to assess the transport properties of multi-scale ocean flows. In particular, we compare scale-dependent measures of Lagrangian relative dispersion and the evolution of passive tracer releases in models containing only submesoscale mixed layer instabilities and those containing mixed layer instabilities modified by deeper, baroclinic mesoscale disturbances. Visualization through 3D finite-time Lyapunov exponents and spectral analysis show that the small scale instabilities of the mixed layer rapidly lose coherence in the presence of larger-scale straining induced by the mesoscale motion. Eddy diffusivities computed from passive tracer evolution increase by an order of magnitude as the flow transitions from small to large scales. During the time period when both instabilities are present, scale-dependent relative Lagrangian dispersion, given by the finite-scale Lyapunov exponent (λ), shows two distinct plateau regions clearly associated with the disparate instability scales. In this case, the maximum value of λ over the submesoscales at the surface flow is three times greater than λ at the mixed layer base, which is only influenced by the deeper baroclinic motions. The results imply that parameterizations of submesoscale transport properties may be needed to accurately predict surface dispersion in models that do not explicitly resolve submesoscale turbulent features.
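For reference, the finite-scale Lyapunov exponent at separation scale δ is commonly estimated as λ(δ) = ln(r)/τ, where τ is the time a particle pair needs to separate from δ to rδ. A minimal sketch on an analytic saddle flow, where λ should recover the strain rate, might look like the following; the flow and names are illustrative assumptions, not the study's setup:

```python
import math

A = 0.5  # strain rate of the analytic saddle flow u = (A*x, -A*y), used as a stand-in

def velocity(x, y):
    return (A * x, -A * y)

def fsle(p, delta, r=2.0, dt=0.001, tmax=100.0):
    """Finite-scale Lyapunov exponent lambda(delta) = ln(r)/tau, where tau
    is the time for a particle pair's separation to grow from delta to
    r*delta, estimated by Euler integration of both pair members."""
    x1, y1 = p
    x2, y2 = x1 + delta, y1          # partner seed offset along x
    t = 0.0
    while t < tmax:
        if math.hypot(x2 - x1, y2 - y1) >= r * delta:
            return math.log(r) / t
        u1, v1 = velocity(x1, y1)
        u2, v2 = velocity(x2, y2)
        x1, y1 = x1 + u1 * dt, y1 + v1 * dt
        x2, y2 = x2 + u2 * dt, y2 + v2 * dt
        t += dt
    return 0.0                       # separation never reached r*delta
```

In the linear saddle flow, x-separations grow as exp(A·t), so the pair doubles in ln(2)/A time units and the estimate returns approximately A; in real ocean data, λ(δ) is averaged over many pairs per scale δ, producing curves such as the plateaus described above.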
Abstract: The finite-time Lyapunov exponent (FTLE) and Lagrangian coherent structures are popular concepts in fluid dynamics for the structural analysis of fluid flows, but the associated computational cost remains a major obstacle to their use in visualization. In this paper, we present a novel technique that allows for the coupled computation and visualization of salient flow structures at interactive frame rates. Our approach is built upon a hierarchical representation of the FTLE field, which is adaptively sampled and rendered to meet the needs of the current visual setting. The performance of our method allows the user to explore large and complex datasets across scales and to inspect their features at arbitrary resolution. The paper discusses an efficient implementation of this strategy on graphics hardware and provides results for an analytical flow and several CFD simulation datasets.
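As background for the FTLE field, a pointwise FTLE value can be estimated from the flow-map gradient F: sigma = (1/T) ln sqrt(lambda_max(F^T F)), with F approximated by central differences of advected seeds. The following is a sketch on an analytic saddle flow, where the FTLE equals the strain rate; it is not the paper's hierarchical GPU implementation, and all names are assumptions:

```python
import math

A = 0.5  # strain rate of the analytic saddle flow (real use would sample data)

def velocity(x, y):
    return (A * x, -A * y)

def flow_map(x, y, T, dt=0.001):
    """Advect a single seed for time T with explicit Euler."""
    t = 0.0
    while t < T:
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
        t += dt
    return x, y

def ftle(x, y, T, h=1e-4):
    """FTLE at (x, y): sigma = (1/T) ln sqrt(lambda_max(F^T F)), where the
    flow-map gradient F is estimated by central differences of four seeds."""
    xr, yr = flow_map(x + h, y, T)
    xl, yl = flow_map(x - h, y, T)
    xu, yu = flow_map(x, y + h, T)
    xd, yd = flow_map(x, y - h, T)
    f11, f21 = (xr - xl) / (2 * h), (yr - yl) / (2 * h)
    f12, f22 = (xu - xd) / (2 * h), (yu - yd) / (2 * h)
    # Cauchy-Green tensor C = F^T F; largest eigenvalue of a symmetric 2x2
    c11 = f11 * f11 + f21 * f21
    c12 = f11 * f12 + f21 * f22
    c22 = f12 * f12 + f22 * f22
    lam = 0.5 * (c11 + c22) + math.sqrt((0.5 * (c11 - c22)) ** 2 + c12 ** 2)
    return math.log(math.sqrt(lam)) / T
```

Evaluating this at every node of a dense grid is what makes FTLE expensive; a hierarchical scheme like the paper's instead refines the sampling only where the current view requires detail.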
Abstract: Although there has been significant research in GPU acceleration, both of parallel simulation codes (i.e., GPGPU) and of single-GPU visualization and analysis algorithms, there has been relatively little research devoted to visualization and analysis algorithms on GPU clusters. This oversight is significant: parallel visualization and analysis algorithms have markedly different characteristics - computational load, memory access pattern, communication, idle time, etc. - than the other two categories. In this paper, we explore the benefits of GPU acceleration for particle advection in a parallel, distributed-memory setting. As performance properties can differ dramatically between particle advection use cases, our study operates over a variety of workloads, designed to reveal insights about underlying trends. This work has a three-fold aim: (1) to map a challenging visualization and analysis algorithm - particle advection - to a complex system (a cluster of GPUs), (2) to characterize its performance, and (3) to evaluate the advantages and disadvantages of using the GPU. In our performance study, we identify which factors are and are not relevant for obtaining a speedup when using GPUs. In short, this study informs the following question: if faced with a parallel particle advection problem, should you implement the solution with CPUs, with GPUs, or does it not matter?