Research

I am a computational scientist. My research is interdisciplinary and lies at the intersection of mathematics, physics, engineering and computational science. It can roughly be divided into contributions that focus on improving the design-to-analysis process and contributions that focus on improving the numerical simulation of physical phenomena. Some topics are more theoretical and some are more applied. However, all have high industrial relevance and contain a strong algorithmic component. The following introduces some of the research directions that I am interested in and outlines ongoing and future work.

High-quality quadrilateral mesh generation

Traditionally, meshing has been regarded as a separate process that converts a design model into an analysis model. This process is in general tedious and labor-intensive and leads to model inaccuracies. I envision that re-parameterization and mesh generation techniques will be embedded within CAD, becoming part of the designer’s toolbox for generating watertight, conforming geometry that is compatible with downstream applications.

I investigated frame-field-based parameterization to convert trimmed NURBS geometry to analysis-suitable representations. While existing frame-field-based integer-grid parameterization methods offer the high quality necessary in this application, they suffer from several robustness issues. We proposed specialized constraints that incorporate properties of an analytical solution, which resolves the poor behavior near singularities. Furthermore, we presented a convenient and efficient solution framework that directly incorporates linear constraints into a reduced basis, simplifying layout extraction. Future work focuses on resolving the remaining robustness issues of the methodology and on developing unstructured splines.

Efficient quadrature, formation, assembly, and solution techniques

At the core of any finite element implementation are the element subroutines that form the local element matrices, which are subsequently assembled into the global system of matrix equations. Early on, it was recognized that this assembly process could be performed in an element-by-element fashion due to the local support of the finite element basis functions, which were of low order and low continuity. This approach worked very well in the early days of serial computation, when floating point operations were expensive and memory was limited. Today, floating point operations are relatively inexpensive and memory is abundant but expensive to move, and serial computation has been replaced by parallel computation. Not only has the hardware changed; the finite element method has evolved too.
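
For context, the classical element loop is simple to sketch. The following is a minimal, self-contained illustration in Python (using a 1D piecewise-linear Poisson problem as a generic stand-in, not any of the specific codes discussed here) of element-by-element formation and scatter-add assembly:

    import numpy as np

    def assemble_1d_stiffness(n_elements, length=1.0):
        # Classical element-by-element assembly of the stiffness matrix
        # for 1D piecewise-linear elements on a uniform grid.
        h = length / n_elements
        # Local 2x2 element stiffness matrix for an element of size h.
        k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K = np.zeros((n_elements + 1, n_elements + 1))
        for e in range(n_elements):
            dofs = (e, e + 1)  # global degrees of freedom of element e
            for a in range(2):
                for b in range(2):
                    K[dofs[a], dofs[b]] += k_local[a, b]  # scatter-add
        return K

    print(assemble_1d_stiffness(4))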

We have started to rethink and redesign the formation and assembly process that is at the core of existing finite element codes. This includes the design of new quadrature rules that work in tandem with row formation and assembly techniques. This has resulted in drastic computational savings compared with classical element-by-element formation and assembly based on Gaussian quadrature.

When it comes to future work, I am particularly interested in developing new solution algorithms that enable the computation of industrially relevant problems that are highly computationally challenging or currently beyond our reach. Good examples are kinetic theory, which has applications in rarefied fluid dynamics, plasma modeling and photolithography, and stochastic problems, which have applications in structural risk or reliability analysis and prediction. Scientific breakthroughs are necessary to make these problems computable. Below follows a recent success story in the computation of the Karhunen-Loève expansion.

Efficient matrix-free solution procedures for the Karhunen-Loève expansion

The Karhunen-Loève (KL) expansion decomposes a random field into an infinite linear combination of L2-orthogonal functions with decreasing energy content. Truncated representations have applications in stochastic finite element analysis (SFEM), proper orthogonal decomposition (POD), and image processing, where the technique is known as principal component analysis (PCA). All these techniques are closely related and widely used in practice. Due to the curse of dimensionality, numerical computation of the Karhunen-Loève expansion is challenging in terms of both memory requirements and computing time.
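
In symbols (a standard form, not specific to our work), with \bar{u} the mean field, (\lambda_i, \phi_i) the eigenpairs of the covariance operator and \xi_i uncorrelated random variables, the truncated expansion reads

    u(x, \omega) \approx \bar{u}(x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\, \phi_i(x)\, \xi_i(\omega),

where the eigenpairs solve the Fredholm integral equation \int_D C(x, y)\, \phi_i(y)\, \mathrm{d}y = \lambda_i\, \phi_i(x), with C the covariance kernel of the field.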

I supervised a PhD student in the development of a matrix-free solution methodology that exploits the separable structure present in the mathematical formulation, in the function spaces and at the quadrature level, making computations in higher dimensions tractable.
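
The key computational idea can be illustrated with a generic Kronecker-structured matrix-vector product (a simplified stand-in for the actual solver, shown here in Python/NumPy): the large operator is never formed explicitly.

    import numpy as np

    def kron_matvec(A, B, x):
        # Apply (A kron B) to x without forming the Kronecker product,
        # using the identity (A kron B) vec(X) = vec(A X B^T) for the
        # row-major vec convention used by NumPy.
        m, n = A.shape
        p, q = B.shape
        X = x.reshape(n, q)          # unflatten x into a matrix
        return (A @ X @ B.T).reshape(m * p)

    # Sanity check against the explicitly formed Kronecker product.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))
    B = rng.standard_normal((3, 6))
    x = rng.standard_normal(4 * 6)
    assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)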

Geometric discretization

Extension of exterior calculus to discrete spaces such as graphs and simplicial and polygonal meshes has led to the development of discrete exterior calculus (DEC) and finite element exterior calculus (FEEC). These discrete frameworks aim to faithfully represent physical quantities and their relationships in the discrete setting. Central to the discrete theory are a de Rham sequence of discrete differential forms, a discrete analogue of the generalized Stokes theorem and a discrete Hodge decomposition. In the past we developed a general recipe to construct higher-order interpolants on quadrilateral and hexahedral grids that satisfy a discrete de Rham sequence. As an illustrative case, we investigated splines in a mixed formulation of incompressible Stokes flow. Coauthors and I also investigated geometric discretization techniques on single and dual grids.
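
For reference, the continuous de Rham sequence in three dimensions, which the discrete spaces are designed to mimic, reads

    H^1(\Omega) \xrightarrow{\;\mathrm{grad}\;} H(\mathrm{curl}, \Omega) \xrightarrow{\;\mathrm{curl}\;} H(\mathrm{div}, \Omega) \xrightarrow{\;\mathrm{div}\;} L^2(\Omega),

with the defining property that the composition of two consecutive operators vanishes (curl grad = 0 and div curl = 0).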

Weak boundary conditions in DEC

Ongoing work includes an extension of DEC in which the Hodge star operators are defined through a discrete inner product. Importantly, this results in discrete co-differential operators that are defined through a discrete analogue of the classical integration by parts formula. Akin to FEEC, the approach gives rise to a straightforward and consistent implementation of natural boundary conditions, which are satisfied weakly.
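
Schematically, and stated here in the continuous setting for a k-form \alpha and a (k+1)-form \beta, the relevant integration by parts formula is

    \langle \mathrm{d}\alpha, \beta \rangle_{\Omega} = \langle \alpha, \delta\beta \rangle_{\Omega} + \int_{\partial\Omega} \operatorname{tr}\alpha \wedge \operatorname{tr}(\star\beta).

Defining the discrete co-differential as the adjoint of the discrete exterior derivative with respect to the discrete inner product corresponds to dropping the boundary term, which is precisely the weak imposition of the natural boundary condition.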

The elasticity complex

Although compatible discretization of the de Rham complex is well established, discretization of the elasticity complex is far more challenging. I am currently working on geometric discretization of symmetric matrix fields in the elasticity complex by utilizing a close connection with the de Rham sequence. In contrast to previous work, I focus on matrix fields that are strongly symmetric. I would like to investigate whether the new spaces can resolve locking in shells and solids, and I am interested in extensions to non-linear problems. I am also very interested in developing robust global parameterization techniques based on these new symmetric matrix fields.
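
For orientation, one common form of the elasticity complex in three dimensions (stated on smooth fields; RM denotes the infinitesimal rigid motions, \varepsilon the symmetric gradient, \mathbb{S} the symmetric 3x3 matrix fields and inc the incompatibility operator) is

    \mathrm{RM} \hookrightarrow C^\infty(\Omega; \mathbb{R}^3) \xrightarrow{\;\varepsilon\;} C^\infty(\Omega; \mathbb{S}) \xrightarrow{\;\mathrm{inc}\;} C^\infty(\Omega; \mathbb{S}) \xrightarrow{\;\mathrm{div}\;} C^\infty(\Omega; \mathbb{R}^3) \rightarrow 0.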

Scientific machine learning applied to cavitation

Cavitation is a complex multiphase phenomenon that occurs when pressure fluctuations in a liquid lead to the formation of vapor-filled bubbles in regions of low pressure. When subjected to higher pressures, the bubbles can collapse, forming shock waves that can deteriorate performance, damage equipment and produce noise. Cavitation is well described by the Navier-Stokes-Korteweg equations. Numerical solution, however, is extremely challenging due to the small time and length scales of the physical problem.
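
For reference, a common isothermal form of the Navier-Stokes-Korteweg equations (with \boldsymbol{\tau} the usual viscous stress, p(\rho) a van der Waals type pressure and \kappa the capillarity coefficient) reads

    \partial_t \rho + \nabla \cdot (\rho \mathbf{u}) = 0,
    \partial_t (\rho \mathbf{u}) + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u}) + \nabla p(\rho) = \nabla \cdot \boldsymbol{\tau} + \nabla \cdot \boldsymbol{\zeta},
    \boldsymbol{\zeta} = \kappa \bigl( \rho \Delta\rho + \tfrac{1}{2} |\nabla\rho|^2 \bigr) \mathbf{I} - \kappa\, \nabla\rho \otimes \nabla\rho.

The divergence of the Korteweg tensor \boldsymbol{\zeta} introduces third-order derivatives of the density, which contributes to the numerical stiffness and the small scales mentioned above.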

I am supervising a PhD student in the development of a scientific machine learning algorithm to better understand and model cavitation. In particular, we are developing variationally consistent reduced-order models (ROMs) from high-fidelity simulations of cavitation. Our approach is based on de Rham compatible geometric discretization of the Navier-Stokes-Korteweg equations, variational multiscale (VMS) modeling of sub-scale fluctuations and proper orthogonal decomposition (POD).
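
As a simplified illustration of the POD ingredient alone (the compatible discretization and the VMS closure are beyond a few lines), a POD basis can be extracted from simulation snapshots via the singular value decomposition; the Python/NumPy sketch below uses synthetic data in place of actual simulation output.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        # snapshots: (n_dofs, n_snapshots) matrix of solution states.
        # Returns the leading POD modes capturing the requested energy
        # fraction, together with the subtracted mean state.
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        cumulative = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cumulative, energy)) + 1
        return U[:, :r], mean

    # Toy usage: 1000 dofs, 50 snapshots of a synthetic low-rank field.
    rng = np.random.default_rng(1)
    snapshots = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 50))
    modes, mean = pod_basis(snapshots)
    print(modes.shape)  # (1000, r) with small r for this low-rank example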