Assyr Abdulle (EPFL, Lausanne) | Saturday 10th July 14:30 |
A posteriori error analysis for numerical homogenization methods |
In this talk we present an a posteriori error
analysis for the numerical homogenization of elliptic problems. The
discretization scheme relies on macro and micro finite elements,
following the framework of the heterogeneous multiscale method. In this
multiscale method, the desired macroscopic solution is obtained by a
suitable averaging procedure based on microsolutions probing the
fine-scale structure of the problem. As the macroscopic data (such as
the macroscopic diffusion tensor) are not available beforehand,
appropriate error indicators have to be defined for designing adaptive
methods. We will show that such indicators based only on the available
macro- and microsolutions (used to compute the actual macrosolution)
can be defined, allowing for
a macroscopic mesh refinement strategy which is both reliable and
efficient. Numerical experiments illustrating the efficiency and
reliability of the adaptive multiscale method will be presented.
References:
[1] A. Abdulle and A. Nonnenmacher, A posteriori error analysis
of the heterogeneous multiscale method for homogenization problems,
C. R. Math. Acad. Sci. Paris 347 (2009), no. 17-18, 1081--1086.
[2] A. Abdulle and A. Nonnenmacher, Adaptive finite element
heterogeneous multiscale method for homogenization problems, to appear
in Comput. Methods Appl. Mech. Engrg. (2010).
[3] A. Abdulle, A priori and a posteriori error analysis for numerical
homogenization: a unified framework, to appear in Contemporary Applied
Mathematics, (2010).
|
|
Ben Adcock (University of Cambridge) | Poster |
Accurate and stable recovery of functions from spectral data |
(Joint work with Anders C. Hansen) We
consider the problem of reconstructing a function (defined on some
bounded domain) to high accuracy from a finite number of its
coefficients with respect to some orthogonal basis. Straightforward
expansion in this basis may converge slowly. Yet, as we prove, it is
always possible to reconstruct the function in another, more rapidly
convergent basis. Such a reconstruction technique is stable, and the
resultant approximation is near-optimal.
A common example of this approach is the reconstruction of an
analytic, nonperiodic function from its Fourier coefficients, with
numerous applications including image and signal processing. Fourier
series converge slowly. Nonetheless, by reconstructing in a polynomial
basis we obtain exponential convergence in terms of n (the polynomial
degree), or root exponential convergence in m (the number of Fourier
coefficients). The procedure can be implemented in O(mn) operations. |
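To make the idea concrete, here is a small numerical sketch (our own, with an illustrative test function; not the authors' code) of the Fourier-to-Legendre reconstruction described above: the m Fourier coefficients of a nonperiodic analytic function are fitted, in the least-squares sense, by a degree-n Legendre expansion.
```python
# Hedged sketch: recover f on [-1,1] from its Fourier coefficients by a
# least-squares fit in a Legendre basis (m ~ n^2 keeps the fit stable).
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

K, n = 32, 8                      # Fourier modes k = -K..K (m = 2K+1), degree n
f = lambda x: np.exp(x)           # analytic, nonperiodic test function
x, w = leggauss(400)              # Gauss-Legendre quadrature on [-1,1]

ks = np.arange(-K, K + 1)
E = np.exp(-1j * np.pi * np.outer(ks, x))     # Fourier kernel exp(-ik*pi*x)
fhat = 0.5 * (E @ (w * f(x)))                 # Fourier coefficients of f

# Columns of A are the Fourier coefficients of the Legendre polynomials
P = np.array([legval(x, [0.0] * j + [1.0]) for j in range(n + 1)])
A = 0.5 * (E @ (w[None, :] * P).T)            # (2K+1) x (n+1) matrix

c, *_ = np.linalg.lstsq(A, fhat, rcond=None)  # least-squares reconstruction
print(np.max(np.abs(legval(x, c.real) - f(x))))   # small uniform error
```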
|
Vivi Andasari (Division of Mathematics, University of Dundee) | Poster |
Mathematical Modelling of Cancer Invasion of Tissue: The Roles of Cell-cell and Cell-matrix Adhesion |
(Joint work with M.A.J. Chaplain)
Adhesion, which includes cell-to-cell and cell-to-extracellular-matrix
adhesion, plays an important role in cancer invasion and metastasis.
After undergoing morphological changes malignant and invasive tumour
cells, i.e., cancer cells, break away from the primary tumour by loss
of cell-cell adhesion, degrade their basement membrane and migrate
through the extracellular matrix by enhancement of cell-matrix
adhesion. These processes require interactions and signalling
cross-talks between proteins and cellular components facilitating the
cell adhesion. Although such processes are very complex, fully
understanding the mechanism of cell adhesion is crucial for
cancer studies and may contribute to improving cancer treatment
strategies.
Cancer cell migration and invasion of the extracellular matrix,
involving adhesive interactions between cells mediated by cadherins and
between cells and the matrix mediated by integrins, are modelled by
employing two types of mathematical models: an individual-based
approach and a continuum approach. In the individual-based approach, we
first develop pathways for cell-cell and cell-matrix adhesion using
Ordinary Differential Equations and later incorporate the pathways in
the Cellular Potts Model for computational multi-scale modelling. In
the continuum approach, we use Partial Differential Equations in which
cell adhesion is treated as non-local and formulated by integral terms.
The computational simulation results from the two different
mathematical models show that we can predict invasive behaviour of
cancer cells from cell adhesion properties. Invasion occurs if we
reduce cell-cell adhesion and increase cell-matrix adhesion and vice
versa. Changing the cell adhesion properties can affect the
spatio-temporal behaviour of cancer cell invasion. These results may
lead to broadening our understanding of cancer cell invasion and
suggest optimal methods of patient treatment. |
|
Todd Arbogast (University of Texas at Austin) | Monday 12th July 10:10 |
Mixed multiscale methods for heterogeneous elliptic problems
|
In this three-part series of lectures, we consider
a second order elliptic problem with a heterogeneous coefficient
written in mixed form (i.e., as a system of two first order equations).
Multiscale methods can be viewed in one of three equivalent frameworks:
as a Galerkin or finite element method with nonpolynomial basis
functions, as a variational multiscale method with standard finite
elements, or as a domain decomposition method with restricted degrees
of freedom on the interfaces. We treat each case, and discuss the
advantages of the approach for devising effective local multiscale
methods. Included is recent work on methods that incorporate
information from homogenization theory and effective domain
decomposition methods.
|
|
Todd Arbogast (University of Texas at Austin) | Tuesday 13th July 10:10 |
Mixed multiscale methods for heterogeneous elliptic problems
|
|
Todd Arbogast (University of Texas at Austin) | Wednesday 14th July 09:15 |
Mixed multiscale methods for heterogeneous elliptic problems
|
|
Anthony Baran (Met Office UK) | Friday 9th July 10:40 |
Electromagnetic and light scattering by atmospheric particulates: How well does theory compare against observation ? |
Ubiquitous cirrus (ice crystal clouds) and
atmospheric dust are usually found towards the upper and lower parts of
the troposphere, respectively. The common feature between these two
types of atmospheric occurrences is that they are composed of irregular
particles of varying sizes and shapes. The interaction between incident
radiation and these two types of particulates has a profound influence
on the degree to which the surface of the Earth is either generally
warmed (cirrus) or cooled (dust) in a warming climate due to increased
carbon dioxide emissions. The radiative importance of cirrus and dust
is clear, yet understanding the magnitude of amplification or
diminution of the greenhouse effect is poorly understood. The reason
for this is that the variability of shapes and sizes is highly
significant and so commonly characterizing them is problematic.
Moreover, in the case of ice crystals and large dust particles,
currently there is no one light scattering method that can be applied
to predict their scattering properties over the observed range of size
parameter space. In this talk the traditional electromagnetic and light
scattering methods that are usually applied to this problem using
single ice crystal and dust shapes will be reviewed. More recent
attempts to model cirrus and dust scattering using ensembles of shapes
will be emphasized. Theoretical and observational requirements needed
to further understand the interaction between incident radiation and
ice crystals/dust, so that the uncertainty in the magnitude of the
warming or cooling is reduced, will also be explored. |
|
Peter Bastian (University of Heidelberg) | Saturday 10th July 09:15 |
Centre for Numerical Algorithms and Intelligent Software Special Lecture:
The Distributed and Unified Numerics Environment (DUNE) |
DUNE (www.dune-project.org) is a C++-based open-source
software framework for the grid-based numerical solution of partial
differential equations (PDEs). Its main design principles are:
(i) separation of data structures and algorithms through abstract
interfaces, (ii) use of generic programming techniques for achieving
performance and (iii) enabling reuse of existing finite element
software through appropriate interface design. DUNE provides
support for many different kinds of grids and a flexible linear solver
package; it is parallel as well as dimension-independent and offers a full
simulation workflow using free software. New discretization schemes
and PDE models can be integrated with relative ease through reuse
of existing components and powerful abstraction mechanisms.
In this talk I will give a short overview of the framework and then
concentrate on the flexible implementation of various finite element schemes
in the "PDELab" module with applications to flow and transport in
porous media. |
|
Timo Betcke (University of Reading) and Euan Spence (University of Bath) | Monday 12th July 18:45 |
Coercivity of boundary integral operators in acoustic scattering |
Much research effort in recent years has been focused on
designing effective numerical methods for high frequency acoustic
scattering. The main difficulty is that, as the frequency increases,
the solution becomes more oscillatory, leading to a rapid increase in the
number of degrees of freedom that conventional methods require to maintain accuracy. One
way around this difficulty is to use the high frequency asymptotics of
the solution of the scattering problem to design approximation spaces
which take into account the high oscillation of the solution. Once
these hybrid asymptotic-numerical methods have been designed, an
interesting question is whether rigorous error bounds can be
established which are explicit in the frequency.
One strategy for proving rigorous error bounds for boundary integral
methods for these high frequency problems is to seek to prove that the
integral operator is coercive. For these high frequency problems one
ideally wants to establish coercivity independent of (or at least
explicit in) the frequency.
Coercivity has so far been established only for the case of the circle
(in 2d) and the sphere (in 3d) using Fourier analysis. This talk will present
some new results on proving coercivity for a much wider class of
domains, and also on investigating coercivity numerically. |
|
Liliana Borcea (Rice University) | Tuesday 6th July 15:30 |
Source localization in random acoustic waveguides |
Mode coupling due to scattering by weak random
inhomogeneities in
waveguides leads to loss of coherence of wave fields at long distances
of propagation. This in turn leads to serious deterioration of
coherent source localization methods. I will show with analysis and
numerical simulations how such deterioration occurs, and introduce a
novel incoherent approach for long range source localization in random
waveguides. It is based on a special form of transport theory for the
incoherent fluctuations of the wave field. I will show with analysis
that the method is statistically stable and will illustrate its
performance with numerical simulations.
I will also show how it can be used to estimate the correlation
function of the random fluctuations of the wave speed.
|
|
Dan Brinkman (University of Cambridge) | Poster |
Numerical modelling of bilayer organic photovoltaic devices
|
We use a finite element scheme with hybrid
discontinuous Galerkin elements implemented in NGSolve to model bilayer
polymer solar cells. The model depends non-trivially on the width of
the polymer-polymer interface, which is two orders of magnitude smaller
than the device itself. We examine how changing the size and shape of
this interface alters the large-scale behaviour of the device. |
|
Phil Browne (University of Bath) | Poster |
Structural Optimization using a SIMP approach |
Structural optimization attempts to solve a
material distribution problem. That is, given some design space and
loading conditions, what is the optimal distribution of the material
within this space to achieve a given objective? These problems arise in
many situations, and techniques to solve these problems are regularly
used in aerospace, automotive and civil engineering industries. This
poster will present the basics of using a relaxation approach known as
the SIMP method to turn a discrete programming problem into a
continuous problem whilst recovering discrete solutions, and also will
show the latest work on trying to make the resulting solutions
resistant to buckling. |
|
Chris Budd (University of Bath) | Wednesday 14th July 15:40 |
Adaptive methods for multi-scale problems |
(Joint work with Emily Walsh (Bath) and JF Williams (SFU))
Many partial differential equations evolve to
have solution structures on very small scales.
Examples are blow-up in combustion problems and
developing weather fronts in meteorology.
Resolving such structures is a difficult numerical
challenge and usually requires some form of
adaptive method. In this talk I will describe an adaptive
numerical method based on ideas in optimal transport, which
aims to equidistribute a numerical mesh in an optimal
manner. I will show that this leads to a very powerful
method of adapting a mesh which can resolve evolving
structures of the solutions of PDEs over an extremely
wide range of length scales. Moreover this method is
relatively easy to implement, in any number of dimensions.
In this talk I will develop the theory and application of this method, with
a special emphasis on the problems of resolving developing
storms in meteorology.
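As a toy illustration of the equidistribution principle underlying such methods (our own sketch, not the talk's optimal transport algorithm), the following places mesh points so that each cell carries an equal share of an arclength-type monitor function, clustering points at a front:
```python
# Toy 1-D equidistribution: choose mesh points so every cell carries the
# same integral of the monitor function M, concentrating them at the front.
import numpy as np

N = 40
xi = np.linspace(0.0, 1.0, 2001)                 # fine background grid
u = np.tanh(50.0 * (xi - 0.5))                   # solution with a sharp front
M = np.sqrt(1.0 + np.gradient(u, xi) ** 2)       # arclength monitor function
cdf = np.cumsum(M)
cdf /= cdf[-1]
mesh = np.interp(np.linspace(0.0, 1.0, N + 1), cdf, xi)
print(np.round(mesh[N // 2 - 3 : N // 2 + 4], 3))   # points cluster near 0.5
```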
|
|
Simon Chandler-Wilde (University of Reading) | Monday 12th July 17:00 |
Numerical methods for high frequency scattering |
(Joint work with Steve Langdon and Ashley Twigger.)
In this talk we review progress on the development, implementation and
numerical analysis of computational methods for high frequency
scattering problems. These problems are at least two-scale, with the
wavelength of the incident wave very much smaller than a typical
dimension of the scattering obstacle. This is a classical problem, and
effective asymptotic methods are available for very high frequencies,
for at least certain classes of problems, while fast multipole methods
make boundary element methods effective for moderate frequencies. In
this talk we discuss aspects of a joint project between Bath and
Reading Universities (for details see http://people.bath.ac.uk/eas25/HF/),
in which the aim is to produce novel boundary integral equation methods
at the interface of classical boundary element methods and high
frequency asymptotics, which build the oscillatory behaviour at high
frequency into the approximation space. This enables, at least for some
classes of scatterer, the development of algorithms which can achieve a
specified error tolerance with a cost which is bounded independently of
the frequency. Proving this rigorously is a challenging exercise in
asymptotics uniform in frequency and discretisation parameter, and in
the conditioning of oscillatory integral operators. |
|
John Chapman (University of Durham) | Poster |
The continuous discontinuous Galerkin finite element method for an advection-diffusion equation |
(Joint work with Max Jensen, Emmanuil Georgoulis and Andrea Cangiani)
When attempting to solve a prototype advection-diffusion equation using
the standard continuous Galerkin finite element method the numerical
solution exhibits non-physical oscillations. One remedy is to use a
discontinuous Galerkin finite element method. This however is more
computationally intensive.
We present the hypothesis that a Galerkin method that is continuous on
the domain away from any boundary or internal layers, and discontinuous
in the vicinity of any layers, is stable. We present a proof that this
is the case for the discontinuous portion and several numerical
experiments, as well as work for the future on the continuous region.
|
|
Zhiming Chen (Chinese Academy of Sciences) | Saturday 10th July 16:55 |
Convergence of the uniaxial perfectly matched layer method for time-harmonic
scattering problems in two-layered media |
We propose a uniaxial perfectly matched layer (PML)
method for solving the time-harmonic scattering problems in
two-layered media. The exterior region of the scatterer is divided
into two half spaces by an infinite plane, on two sides of which the
wave number takes different values. We surround the computational
domain in which the scattered field is of interest by a PML layer with
the uniaxial medium property. By imposing a homogeneous boundary
condition on the outer boundary of the PML layer, we show that the
solution of the PML problem converges exponentially to the solution
of the original scattering problem in the computational domain as
either the PML absorbing coefficient or the thickness of the PML
layer tends to infinity. |
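For orientation, the complex coordinate stretching that underlies uniaxial PML constructions of this kind can be written generically as follows (our notation; the talk's precise medium property may differ):
```latex
\[
  \tilde{x}_j \;=\; \int_0^{x_j} \bigl(1 + \mathrm{i}\,\sigma_j(t)\bigr)\,dt,
  \qquad \sigma_j \ge 0, \quad
  \sigma_j \equiv 0 \ \text{in the computational domain},
\]
```
where enlarging either sigma_j (the absorbing coefficient) or the layer thickness drives the exponential convergence stated above.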
|
Paul Childs (Schlumberger) | Friday 9th July 10:10 |
Challenges in seismic imaging |
Inversion and imaging methods using the full
seismic waveforms are computationally challenging. We will review the
current practice in the seismic imaging industry, and outline several
multiscale challenges where new mathematical algorithms are being
sought. |
|
Andrew Cliffe (University of Nottingham) | Friday 9th July 17:00 |
Deep Geological Disposal of Radioactive Waste |
The talk will discuss various modelling and
computational issues related to the deep geological disposal of
radioactive wastes. Particular attention will be paid to uncertainty
quantification. The major problems will be described together with some
of the outstanding mathematical and computational challenges. |
|
Masoumeh Dashti (University of Warwick) | Poster |
Bayesian approach to an elliptic inverse problem |
(Joint work with Andrew Stuart)
We consider the inverse problem of determining the permeability from
the pressure in a Darcy model of flow in a porous medium.
Mathematically the problem is to find the diffusion coefficient for a
linear uniformly elliptic partial differential equation in divergence
form, in a bounded domain in two or three dimensions, from pointwise
measurements of the solution in the interior.
We adopt a Bayesian approach to the problem. We place a prior Gaussian
random field measure on the log permeability, specified through its two
point correlation function. We study the regularity of functions drawn
from this prior measure, by use of the Karhunen-Loeve expansion. We
also study the Lipschitz properties of the observation operator mapping
the log permeability to the observations. Assuming that the
observations are subject to mean zero noise, and combining the
aforementioned regularity and continuity estimates, we show that the
posterior measure is well-defined. Furthermore the posterior measure is
shown to be Lipschitz continuous with respect to the data in the
Hellinger and total variation metrics, giving rise to a form of
well-posedness of the inverse problem. |
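Schematically, and in our own notation (with Gaussian observational noise taken for concreteness; the abstract assumes only mean zero noise), the problem and its Bayesian formulation read:
```latex
\begin{align*}
  -\nabla\!\cdot\!\bigl(e^{u(x)}\,\nabla p(x)\bigr) &= f(x),
      \qquad x \in D \subset \mathbb{R}^d,\ d = 2,3,\\
  y_j &= p(x_j) + \eta_j, \qquad \eta_j \sim N(0,\gamma^2),\ j = 1,\dots,J,\\
  \frac{d\mu^y}{d\mu_0}(u) &\propto
      \exp\!\Bigl(-\tfrac{1}{2\gamma^2}\textstyle\sum_{j=1}^{J}
      \bigl|y_j - p(x_j;u)\bigr|^2\Bigr),
\end{align*}
```
with u = log k the log permeability and mu_0 the Gaussian prior measure.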
|
Niall Deakin (University of Dundee) | Poster |
Mathematical modelling of cancer growth and spread: the role of enzyme degradation of tissue |
(Joint work with Mark Chaplain, George Lolas and Alastair Thompson)
There are many steps involved in the growth and spread of cancer - the
current work will focus on the local invasion of the host tissue. A
crucial aspect of cancer cell growth and development is the process in
which they invade locally by the secretion of enzymes involved in
proteolysis, namely plasmin and matrix metalloproteinases (MMPs). These
overly expressed proteolytic enzymes then proceed to degrade the host
tissue allowing the cancer cells to spread throughout the region by
active migration and interaction with components of the extracellular
matrix such as collagen. We will consider two approaches for modelling
the invasion on the macro scale (cell population level). The first
mathematical model considers cancer cells and a number of different
matrix degrading enzymes (MDEs) from the MMP family and their
interaction with and effect on the extracellular matrix (ECM). The
second model focuses on the specific role of the urokinase-type
plasminogen activation (uPA) system. Both models consist of a system of
reaction-diffusion-taxis partial differential equations in an attempt
to capture the qualitative dynamics of the migratory response of the
cancer cells. |
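A schematic of the kind of reaction-diffusion-taxis system involved (generic notation of ours; the two models in the poster differ in their enzyme kinetics):
```latex
\begin{align*}
  \partial_t n &= \nabla\!\cdot\!(D_n \nabla n)
                  - \nabla\!\cdot\!(\chi\, n\, \nabla v) + f_n(n,v,m)
      && \text{(cancer cells: diffusion and taxis)},\\
  \partial_t v &= -\delta\, m\, v
      && \text{(ECM degraded by the enzyme)},\\
  \partial_t m &= \nabla\!\cdot\!(D_m \nabla m) + \alpha\, n - \lambda\, m
      && \text{(enzyme: production and decay)}.
\end{align*}
```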
|
Louis Durlofsky (Stanford University) | Saturday 10th July 10:10 |
Uncertainty quantification for subsurface flow problems using coarse-scale models |
Fine-scale features can have a large impact on key
subsurface flow quantities such as injection or production rates.
Because the geological characteristics of subsurface formations are
highly uncertain, multiple realizations are typically simulated in an
attempt to capture the impact of geological uncertainty on flow
behavior. It is, however, expensive to perform flow simulation on
highly resolved models; for this reason a number of upscaling and
multiscale procedures have been devised. Most such techniques aim to
provide coarse models that reproduce the fine-model response on a
realization-by-realization basis. This may not be necessary, however,
when the goal is to replicate the statistics of the flow responses of
multiple realizations. In this talk, I will present an upscaling
approach that entails the statistical assignment of upscaled functions.
This approach is more efficient than traditional treatments as it
greatly reduces the most time-consuming upscaling computations. I will
also describe new procedures for upscaling in the vicinity of injection
and production wells. Numerical results demonstrate that, by combining
near-well upscaling and statistical assignment of coarse-scale flux
functions, coarse models that are well suited for computing ensemble
quantities can be efficiently constructed. |
|
Yalchin Efendiev (Texas A & M University) | Wednesday 7th July 15:30 |
Multiscale simulation techniques for high-contrast subsurface flows |
The development of numerical algorithms for modeling flow processes in large-scale
highly heterogeneous formations is very challenging because the properties of natural geologic
porous formations (e.g., permeability) display high variability levels and complex spatial correlation
structures, which span a rich hierarchy of length scales. Thus, it is usually necessary to resolve a
wide range of length and time scales in order to obtain accurate predictions of the flow, mechanical
deformation, and transport processes under investigation. In practice, however, some type of
coarsening (or upscaling) of the detailed model is usually performed before the model
can be used to simulate complex displacement processes. Many approaches have been developed
and applied successfully when a scale separation adequately describes the spatial variability of
the subsurface properties (e.g., permeability) that have bounded variations. The quality of these
approaches deteriorates for complex heterogeneities, especially when the contrast in the media
properties is large, e.g., in the case of fractured porous media. In this talk, I will describe
coarse-scale spaces that can be used in upscaling flow equations as well as in domain decomposition methods
when media properties have high contrast and are spatially heterogeneous.
Numerical results will be presented that show that one can improve the accuracy of multiscale methods and
obtain contrast-independent preconditioners. |
|
Bjorn Engquist (University of Texas, Austin) | Thursday 8th July 09:15 |
Fast algorithms for high frequency wave propagation |
Boundary integral formulations of high frequency
scattering problems are difficult to handle numerically due to the
oscillatory nature of the kernel. The number of unknowns (N) in the
approximation of the boundary potential or currents must be very large
and the standard fast multi-pole method does not give a reduction in
the computational complexity of the core matrix vector multiplication
in the solution process. A new multi-level method based on directional
decomposition can be proved to have near optimal order of complexity:
O(N log N). A random sampling algorithm to further increase the
efficiency will also be introduced. The principles behind this
technique also apply to preconditioning of numerical approximations of
differential equation formulations. |
|
Oliver Ernst (TU Bergakademie Freiberg) | Saturday 10th July 16:20 |
On the Convergence of Generalized Polynomial Chaos Expansions |
A number of approaches for discretizing partial
differential equations with random data are based on generalized
polynomial chaos expansions of random variables. These constitute
generalizations of the polynomial chaos expansions introduced by
Norbert Wiener to expansions in polynomials orthogonal with respect to
non-Gaussian probability measures. We present conditions on such
measures which imply mean-square convergence of generalized polynomial
chaos expansions to the correct limit and complement these with
illustrative examples.
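For reference, a generalized polynomial chaos expansion of a second-order random variable u takes the generic form below (our notation), and the talk's question is when the partial sums converge to u in mean square:
```latex
\[
  u \;=\; \sum_{k=0}^{\infty} u_k\, \phi_k(\xi), \qquad
  u_k = \mathbb{E}\bigl[u\,\phi_k(\xi)\bigr], \qquad
  \int \phi_j\,\phi_k \, d\mu = \delta_{jk},
\]
```
where mu is the (possibly non-Gaussian) distribution of the basic random variable xi and the phi_k are the associated orthonormal polynomials.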
|
|
Leonardo Figueroa (University of Oxford) | Poster |
Separated representation approximation of a high-dimensional Fokker–Planck PDE for dilute polymers. |
(Joint work with Endre Süli)
The evolution of the configuration of polymer molecules in a viscous
incompressible solvent is naturally modelled by a system of
Langevin-type stochastic differential equations. The associated
probability density function satisfies a high-dimensional Fokker–Planck
equation on the space of all possible polymer chain configurations. One
of the key difficulties, apart from high dimensionality, is the fact
that the nonlinear spring-laws featuring in the model introduce
degeneracies in the coefficients of the Fokker–Planck equation.
We consider an algorithm based on an SVD-like separated
representation strategy, which exploits the tensor-product structure of
the space of polymer chain-configurations and approximates the
probability density function with a sum of products of functions
defined on the low-dimensional space of admissible configurations for a
single spring. We establish the convergence of the proposed algorithm.
|
|
Martin Gander (University of Geneva) | Monday 12th July 18:10 |
Why it is difficult to solve Helmholtz problems
with classical iterative methods |
In contrast to the positive definite Helmholtz equation, the
deceivingly similar looking indefinite Helmholtz equation is difficult
to solve using classical iterative methods; in particular, both
classical domain decomposition and multigrid methods fail to converge.
I will show in my presentation where the problems lie, and present
remedies for domain decomposition methods, and also some new insight
for constructing an efficient multigrid method for the indefinite
Helmholtz equation. |
|
Uduak George (University of Sussex) | Poster |
Modelling and simulation of cell membrane dynamics. |
The study of cell membrane dynamics is part of a more general study of cell
behaviour. Understanding the dynamics of the cell membrane is vital as it
plays a critical role in many biological processes such as wound healing,
embryogenesis, immune response and pathological processes such as formation
of primary and secondary tumours. We will show a model that is able to
describe the shapes, expansions, contractions, protrusions and retraction
motions of the cell.
|
|
Mike Giles (University of Oxford) | Thursday 8th July 17:40 |
Multilevel Monte Carlo for elliptic SPDEs |
(Joint work with Rob Scheichl and Aretha Teckentrup at Bath, and Andrew Cliffe at Nottingham.)
Elliptic SPDEs, in which the log-diffusivity is a stochastic field,
arise in the modelling of oil reservoirs and nuclear waste
repositories. In this talk I will discuss how the multilevel Monte
Carlo method, which was recently developed for financial Monte Carlo
applications, can be used in this context to efficiently estimate the
expected value of various output functionals.
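The structure of the estimator is easy to convey in a few lines. The sketch below (ours, on a toy SDE rather than an elliptic SPDE) shows the telescoping sum over levels, with coarse and fine samples on each level driven by the same random input:
```python
# Minimal multilevel Monte Carlo: E[Q_L] = sum_l E[Q_l - Q_{l-1}], estimated
# level by level with coupled coarse/fine samples.
import numpy as np

def sample_pair(l, rng):
    # Toy problem: Q = X(1) for dX = 0.05*X dt + 0.2*X dW, Euler-Maruyama
    # with 2**l steps; the coarse path reuses the fine Brownian increments.
    M = 2 ** l
    dW = rng.normal(0.0, np.sqrt(1.0 / M), M)
    def euler(steps, incs):
        X, dt = 1.0, 1.0 / steps
        for k in range(steps):
            X += 0.05 * X * dt + 0.2 * X * incs[k]
        return X
    if l == 0:
        return euler(M, dW), 0.0
    return euler(M, dW), euler(M // 2, dW.reshape(-1, 2).sum(axis=1))

rng = np.random.default_rng(0)
L, N = 5, [8000 // 2 ** l + 10 for l in range(6)]  # fewer samples on fine levels
est = sum(np.mean([np.subtract(*sample_pair(l, rng)) for _ in range(N[l])])
          for l in range(L + 1))
print(est, np.exp(0.05))   # estimate vs exact E[X(1)] = e^{0.05}
```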
|
|
Andrew Gordon (University of Manchester) | Poster |
Solving stochastic collocation systems with algebraic multigrid |
Stochastic collocation methods facilitate the
numerical solution of partial differential equations (PDEs) with random
data and give rise to long sequences of similar discrete linear
systems. When elliptic PDEs with random diffusion coefficients are
discretized with mixed finite element methods in the physical domain,
the resulting collocation systems can be solved iteratively with the
minimal residual (MINRES) method, and algebraic multigrid (AMG) can be
used as a key component of a highly robust preconditioner. When
considered individually, the stochastic collocation systems are trivial
to solve; the challenge, however, lies in exploiting the systems'
similarities to recycle information and minimize the cost of solving
the entire sequence.
In this poster, we consider full tensor and sparse grid stochastic
collocation schemes applied to a model stochastic elliptic problem and
discretize in physical space using lowest order Raviart-Thomas mixed
finite elements. We propose an efficient solver for the resulting
sequence of linear systems and show, in particular, that it is feasible
to use finely-tuned AMG preconditioning for each system if key set-up
information is reused. Crucially, this preconditioning strategy is
robust with respect to variations in the discretization and statistical
parameters for both stochastically linear and nonlinear diffusion
coefficients.
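As a rough sketch of the set-up reuse idea (ours, with an illustrative 1-D model matrix rather than a mixed finite element system; assumes the scipy and pyamg packages):
```python
# Build the AMG hierarchy once, then reuse it as a MINRES preconditioner
# across a sequence of nearby symmetric systems.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import pyamg

n = 1000
A0 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
ml = pyamg.ruge_stuben_solver(A0)       # expensive set-up, done once
P = ml.aspreconditioner()
b = np.ones(n)
for eps in (0.0, 0.01, 0.02):           # "similar" systems in the sequence
    A = (A0 + eps * sp.eye(n)).tocsr()
    x, info = spla.minres(A, b, M=P)    # recycled preconditioner
    print(eps, info, np.linalg.norm(A @ x - b))
```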
|
|
Viet Ha Hoang (NTU, Singapore) | Wednesday 14th July 14:30 |
Sparse Tensor Galerkin Approximations for Parametric and Random Hyperbolic PDEs |
(Joint with Christoph Schwab) We consider
stochastic wave equations whose coefficients depend on countably many
random variables on [-1,1]. The problem is cast into the form of a
parametric wave equation which depends on countably many parameters.
This equation is approximated by Galerkin projection onto polynomial
spaces of finite dimensions in the parameter space. We establish
uniform stability, with respect to the support of the polynomial index set,
of the resulting coupled hyperbolic system, and analyticity of the solution with respect to the
countably many parameters. We also establish regularity for the
solution of the parametric deterministic system. |
|
Thomas Hou (Caltech) | Saturday 10th July 15:05 |
Model Reduction via a Multi-scale Random Basis Method |
Uncertainty arises in many complex real-world problems of scientific
and engineering interests. Many of these problems involve multiple
scales in both space and time, which may vary in several orders. The
presence of randomness further complicates multi-scale problems because
its effect may span many scales and grow with time through nonlinear
interactions. Earlier methods, such as worst-case analysis and
sensitivity analysis, tend to be over-pessimistic or
unreliable when the problems become complicated. Wiener Chaos
Expansion methods and their variants proposed in recent years show
some promising features but still suffer from the curse of
dimensionality.
In this talk, we propose a multiscale stochastic method which
consists of two parts, offline and online computations. In the offline
computation, a set of nonlinear stochastic bases is constructed
using the Karhunen-Loeve (KL) expansion and the Monte Carlo simulations.
In the online computation, the stochastic solution is expanded in
terms of the nonlinear stochastic bases constructed offline, resulting
in a sparse representation of the stochastic solution. By solving a
small set of coupled PDEs for the coefficients of the expansion, we
obtain an efficient numerical method to compute the solution of
SPDEs in the online step. We have applied this method to some elliptic
problems with random coefficients. Our numerical results confirm that
the proposed method indeed offers an efficient computational method
for solving stochastic PDEs. It also provides an effective reduced
model. Our method is semi-non-intrusive in
the sense that certified legacy codes can be used with minimal changes
in the offline computation.
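The offline step can be pictured with a small sketch (ours; an illustrative 1-D random field, not the talk's construction): Monte Carlo snapshots of the field yield an empirical covariance, whose leading eigenpairs give the KL modes used to build the stochastic bases.
```python
# Offline KL step: estimate the covariance from Monte Carlo snapshots and
# keep the few eigenmodes that capture most of the variance.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
snaps = np.array([sum(rng.normal() * np.sin((k + 1) * np.pi * x) / (k + 1) ** 2
                      for k in range(20)) for _ in range(500)])
C = np.cov(snaps, rowvar=False)              # empirical covariance matrix
lam, phi = np.linalg.eigh(C)                 # eigenpairs, ascending order
lam, phi = lam[::-1], phi[:, ::-1]
r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
print(f"{r} KL modes capture 99% of the variance")
```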
|
|
Arieh Iserles (University of Cambridge) | Monday 12th July 17:35 |
Asymptotic–numerical multiscale expansions |
(Joint work with Marissa Condon and Alfredo Deaño)
In this talk we present an introduction to a methodology for the
solution of ODEs, DAEs and DDEs with highly oscillatory forcing. The
asymptotic expansion of such equations involves two hierarchies of
scales, which are derived explicitly by solving non-oscillatory
problems. |
|
Patrick Jenny (ETH, Zürich) | Tuesday 6th July 10:10 |
Transported probability density function (PDF) methods for multi-scale and uncertainty problems - part 1 |
An introduction to the basic ideas of PDF modeling with the necessary
mathematical background is provided in this first part of the short
course.
A further objective of the lecture is to identify potential targets
which can benefit from this attractive approach, e.g. turbulent
reactive flow.
|
|
Patrick Jenny (ETH, Zürich) | Wednesday 7th July 10:10 |
Transported probability density function (PDF) methods for multi-scale and uncertainty problems - part 2 |
As an illustrative example it is shown how the PDF approach can be
employed to model non-equilibrium gas flow.
This allows us to emphasize strengths and limitations while, at the
same time, discussing efficient solution methods.
|
|
Patrick Jenny (ETH, Zürich) | Friday 9th July 09:15 |
Transported probability density function (PDF) methods for multi-scale and uncertainty problems - part 3 |
It is shown how PDF modeling can be employed to deal with uncertainty
in sub-surface transport.
First, it is explained how simple stochastic, microscopic
"rules" lead to a closure at the Darcy scale, which is only possible
by honoring arbitrary joint distributions and spatial correlations.
Second, a PDF approach to assess uncertainty of tracer transport is
presented, which, unlike e.g. perturbation methods, is not limited to
small variances.
|
|
Peter Jimack (University of Leeds) | Friday 9th July 16:30 |
Numerical Models for the Simulation of Elastohydrodynamic Lubrication Problems |
This talk will describe joint research undertaken
as part of a collaboration between academia and industry that is funded
as part of the EU FW6 Transfer of Knowledge scheme. The focus of the
research is the efficient, accurate and reliable numerical simulation
of lubricated contacts in which the applied load is sufficiently large
to lead to elastic deformation in the contacting elements (hence
elastohydrodynamic lubrication). One aspect of these problems that is
of significant industrial importance is the behaviour of the lubricant
and the contacting elements when their surfaces are not smooth. This
roughness is often at a much smaller length-scale than the contact
region and so poses significant computational challenges. The talk will
provide an overview of the numerical techniques that may be used and
present a selection of simulation results. |
|
Jesper Karlsson (KAUST, Saudi Arabia) | Poster |
A Computable Weak Error Expansion for the Tau-Leap Method
|
This work develops novel error expansions with computable leading
order terms for the global weak error in the tau-leap discretization
of pure jump processes arising in kinetic Monte Carlo models.
Accurate computable a posteriori error approximations are the basis
for adaptive algorithms; a fundamental tool for numerical simulation
of both deterministic and stochastic dynamical systems. These pure
jump processes are simulated either by the tau-leap method, or by
exact simulation, also referred to as dynamic Monte Carlo, the
Gillespie algorithm or the stochastic simulation algorithm. Two types
of estimates are presented: an a priori estimate for the relative
error that gives a comparison between the work for the two methods
depending on the propensity regime, and an a posteriori estimate with
computable leading order term. |
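To fix ideas, here is a toy comparison (ours, for a pure-death process; not the poster's estimator) of the tau-leap discretization against exact SSA simulation:
```python
# Tau-leaping vs exact (Gillespie/SSA) simulation of S -> S-1 with
# propensity a(S) = c*S; the weak error of tau-leaping is what the
# poster's expansion quantifies.
import numpy as np

def tau_leap(S0, c, T, tau, rng):
    S, t = S0, 0.0
    while t < T and S > 0:
        S -= min(S, rng.poisson(c * S * tau))   # Poisson number of firings
        t += tau
    return S

def ssa(S0, c, T, rng):
    S, t = S0, 0.0
    while S > 0:
        t += rng.exponential(1.0 / (c * S))     # waiting time to next firing
        if t > T:
            break
        S -= 1
    return S

rng = np.random.default_rng(1)
print(np.mean([tau_leap(100, 0.5, 2.0, 0.05, rng) for _ in range(4000)]),
      np.mean([ssa(100, 0.5, 2.0, rng) for _ in range(4000)]),
      100 * np.exp(-1.0))                       # exact mean E[S(T)]
```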
|
Tatiana Kim (University of Bath) | Poster |
Hybrid numerical-asymptotic boundary integral method for solving high-frequency acoustic scattering problems. |
In the paper [1] by Dominguez, Graham and Smyshlyaev, a numerical method is presented for solving high-
frequency acoustic scattering problems in two dimensions, where the incident wave is a plane wave and the
boundary of the scatterer is smooth and convex. The problem is formulated for the surface current, i.e.
the normal derivative of the total wavefield on the boundary of the scatterer, using a combined potential
boundary integral approach, that results in a one-dimensional boundary integral equation. A novel Galerkin
scheme is proposed that incorporates known asymptotic behavior of the solution on the boundary into the
approximation space, and an error estimate for this Galerkin discretization is obtained. The key feature of this hybrid
method is that the number of degrees of freedom need grow only slightly faster than k^(1/9)
in order to maintain accuracy as k grows.
However, the Galerkin discretization of the boundary integral equation leads to a system of linear equations
whose coefficients are highly oscillatory double integrals that, in practice, cannot be computed exactly.
We propose a novel numerical technique for computing these integrals in which the number of quadrature points
required to maintain accuracy is independent of the wavenumber.
References:
[1] V. Dominguez, I. G. Graham and V. P. Smyshlyaev, A hybrid numerical-asymptotic boundary integral method
for high-frequency acoustic scattering, Numer. Math. 106 (2007), pp. 471-510.
[2] T. Kim, V. Dominguez, I. G. Graham and V. P. Smyshlyaev, Recent progress on hybrid numerical-asymptotic
boundary integral methods for high-frequency scattering problems, Proceedings of UKBIM7 (2009), pp.
15-23. |
|
John King (University of Nottingham) | Tuesday 6th July 17:00 |
Multiscale modelling of cell populations |
Intercellular signalling processes in populations
of biological cells can lead to neighbours adopting different fates.
Homogenisation approaches suited to describing the tissue-scale
properties of such discrete systems will be outlined, together with
some of their implications. |
|
Frances Kuo (University of New South Wales) | Thursday 8th July 18:15 |
Lifting the curse of dimensionality
- quasi Monte Carlo methods for high dimensional integration
|
High dimensional problems, that is, problems with a
very large number of variables, are coming to play an ever more
important role in applications. These include, for example, option
pricing problems in mathematical finance, maximum likelihood problems
in statistics, and porous flow problems in computational physics. High
dimensional problems pose immense challenges for practical computation,
because of a nearly inevitable tendency for the costs of computation to
increase exponentially with dimension: this is the celebrated "curse of
dimensionality". In this talk I will give an introduction to
"quasi-Monte Carlo methods" for tackling high dimensional integrals,
with a focus on "lattice rules", and discuss the challenges that we
face while attempting to lift the curse of dimensionality.
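As a flavour of what a lattice rule looks like in practice, here is a small sketch (ours; the generating vector is illustrative, not an optimized one from the talk) of a randomly shifted rank-1 lattice rule in d = 10 dimensions:
```python
# Randomly shifted rank-1 lattice rule: points {k*z/n + shift mod 1}; the
# random shifts give an unbiased estimate plus a practical error indicator.
import numpy as np

def shifted_lattice(f, z, n, shifts, rng):
    pts = np.outer(np.arange(n), z) % n / n          # lattice points in [0,1)^d
    vals = [f((pts + rng.random(len(z))) % 1.0).mean() for _ in range(shifts)]
    return np.mean(vals), np.std(vals) / np.sqrt(shifts)

rng = np.random.default_rng(0)
d, n = 10, 2 ** 13
z = np.array([1, 3531, 2941, 2773, 1121, 1771, 3571, 1117, 2339, 3023])
f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1, d + 1) ** 2, axis=1)
mean, err = shifted_lattice(f, z, n, 16, rng)
print(mean, err)      # the exact integral of this test function is 1
```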
|
|
Seong Lee (Chevron Energy Technology Company ) | Friday 9th July 14:30 |
Adaptive Multiscale Finite Volume Method for Multiphase Flow in a Heterogeneous Reservoir |
Recent advances in multiscale methods show great
promise in efficiently simulating a high-resolution model for highly
heterogeneous media. We propose numerical, adaptive prolongation and
restriction operators of flow and transport equations that will greatly
improve numerical efficiency over the conventional finite difference
reservoir simulation. We also discuss iterative methods to control
numerical errors in MSFV simulation and devise an adaptive numerical
strategy that yields high computational efficiency within acceptable
error tolerance. |
|
Ben Leimkuhler (University of Edinburgh) | Tuesday 6th July 17:35 |
Simplified modelling of energetic interactions using thermal baths, with application to a fluid vortex system. |
Using the thermodynamic concept of a reservoir, we
investigate a computational model for interaction with unresolved
degrees of freedom (a thermal bath) [1]. We assume that a finite
restricted system can be modelled by a generalized canonical ensemble,
described by a density which is a smooth function of the energy of the
restricted system. A generalized stochastic-dynamic thermostat [2]
enables modelling of a restricted resolved dynamics embedded within a
larger energetic bath, while leaving the desired equilibrium
distribution invariant. To illustrate the method, we apply these
techniques in the setting of a simplified point vortex flow on a disc,
in which a modified Gibbs distribution (modelling a finite, rather than
infinite, bath of weak vortices) provides a regularizing formulation
for restricted system dynamics.
Although our method does not provide a proper dynamical closure, it
is very straightforward to implement in a wide range of situations and
can provide realistic averages. Numerical experiments, effectively
replacing many vortices by a few artificial degrees of freedom, are in
excellent agreement with the two-scale simulations that have appeared
in the literature [3].
[1] Dubinkina, S., Frank, J. and Leimkuhler, B., Simplified
modelling of energetic interactions with a thermal bath, with
application to a fluid vortex system, preprint, 2010.
[2] Leimkuhler, B., Generalized Bulgac-Kusnezov methods for
sampling of the Gibbs-Boltzmann measure, Physical Review E 026703,
2010.
[3] Bühler, O., Statistical mechanics of strong and weak point vortices in a cylinder, Physics of Fluids, 14, 2139-2149, 2001.
|
|
Qifeng Liao (University of Manchester) | Poster |
Effective Error Estimators for Low Order Elements |
This poster focuses on a posteriori error
estimation for (bi-)linear and (bi-)quadratic elements. First, a
simple diffusion problem is used to introduce the methodology we
adopt for error estimation, which is based on solving local
Poisson problems. Next, this methodology is applied to
classical mixed approximations of incompressible flow problems.
Computational results suggest that our error estimators are
cost-effective, both from the perspective of accurate estimation of the
global error and for the purpose of selecting elements for refinement
within a contemporary self-adaptive refinement algorithm. |
|
Ping Lin (University of Dundee) | Wednesday 7th July 17:35 |
Quasicontinuum methods for crystalline materials with simple and complex lattice structure |
Many scientific systems such as materials may be
modeled by a large number of
particles (or atoms), with any particle interacting with any other
through, for example, a pair potential energy. The equilibrium
configuration is a minimizer of the total energy of the system. The
computational cost is extremely high since the number of particles (or
atoms) is usually huge. An approximate sparse representation of the
system is necessary to reduce the computational cost. Recently in
material research much attention has been paid to a so-called
quasicontinuum (QC) approximation which may be seen as an approximate
representation of the accurate atomistic model. We will study
some QC methods for crystalline materials with simple and complex
lattice structures and estimate the error of QC methods. Part of the
talk is based on
joint work with A Abdulle and A Shapeev. |
|
Mitchell Luskin (University of Minnesota) | Tuesday 6th July 11:40 |
Hybrid Atomistic-to-Continuum Coupling Methods |
Many materials problems require the accuracy of atomistic
modeling in small regions, such as the neighborhood of a crack
tip. However, these localized defects typically interact
through long ranged elastic fields with a much larger region
that cannot be computed atomistically. Materials scientists
have proposed many methods to compute solutions to these
multiscale problems by coupling atomistic models near a
localized defect with continuum models where the deformation is
nearly uniform. During the past several years, a mathematical
structure has been given to the description and formulation of
atomistic-to-continuum coupling methods, and corresponding
mathematical analysis has clarified the relation between the
various methods and the sources of error.
I will present three tutorial lectures covering the relation
between atomistic and continuum models and the formulation and
analysis of coupling methods with a focus on the quasicontinuum
method. The development of coupling methods for crystalline
materials that are reliable and accurate for configurations
near the onset of lattice instabilities such as dislocation
formation has been particularly challenging. I will present
theory developed with Matthew Dobson and Christoph Ortner to
assess currently utilized methods and to propose more reliable,
accurate, and efficient methods.
|
|
Mitchell Luskin (University of Minnesota) | Wednesday 7th July 09:15 |
Hybrid Atomistic-to-Continuum Coupling Methods |
|
Mitchell Luskin (University of Minnesota) | Thursday 8th July 11:40 |
Hybrid Atomistic-to-Continuum Coupling Methods |
|
Roland Masson (Institut Français du Pétrole) | Friday 9th July 12:15 |
Finite volume schemes for multiphase porous media flows
|
(Joint work with Leo Agelas, Daniele Di Pietro, Robert Eymard, Cindy Guichard, Roland)
This talk focuses on cell centered finite volume schemes with
applications to multiphase porous media flows. We shall first motivate
the choice of cell centered finite volume schemes for reservoir
simulation, CO2 sequestration and basin modeling and enhance the
difficulties to be overcome from the points of view of the models, of
the geometry and of the properties of the porous media. Unfortunately
there is not yet a cell centered finite volume scheme on practical
meshes combining all the desired properties, mainly, linear fluxes,
coercivity, compact stencil, parallelism, exactness on piecewise linear
solutions for cellwise constant diffusion tensors, ... We will hence
discuss the pros and cons for our applications of some finite volume
schemes chosen among MPFA (MultiPoint Flux Approximation) schemes based
on flux construction such as the O,L,G schemes and among SUSHI type
schemes based on discrete variational formulations. |
|
Markus Melenk (TU Vienna) | Monday 12th July 11:40 |
Helmholtz problems at large wavenumbers |
(Joint work with S. Sauter (Zurich) and M. Loehndorf (Vienna))
Time-harmonic wave propagation problems are often modelled with the
Helmholtz equation. This setting arises, for example, in acoustic or
electromagnetic scattering. When numerically solving Helmholtz
problems, several issues arise, in particular in the case of large
wavenumbers k. Firstly,
a decision has to be made whether a volume-based method (such as FEM)
is used or a boundary integral equation formulation (i.e., BEM) is
employed. Secondly, given the highly
oscillatory nature of the solution, it may be of interest to employ
special, problem-adapted ansatz functions in the numerical method
instead of the standard piecewise polynomial based ones. Such functions
could be obtained, for example, from asymptotic methods. Thirdly,
besides the approximation properties of the ansatz spaces, the
stability of the numerical method has to be considered.
This series of talks will survey several methods currently employed
for Helmholtz problems. In particular, we will discuss methods that
take the highly oscillatory nature of the solution into account. The
primary focus of the talks, however, will be on recent results
concerning the stability of discretizations. Here, we will restrict our
attention to the setting of standard piecewise polynomial ansatz
spaces. We discuss standard high order finite element discretizations
(hp-FEM) as well as high order boundary integral equation approaches
(hp-BEM). In the latter case, we focus on the so-called Brakhage-Werner
or Burton-Miller formulations. For both hp-FEM and hp-BEM, we show k-independent stability of the discretization under the following two assumptions:
- (scale resolution condition): the mesh spacing h and the approximation order p satisfy the condition that
kh/p is sufficiently small and p > C log k.
- (well-posedness of the continuous problem): the solution operator for the continuous problem grows at most polynomially in the wavenumber k.
The stability analysis rests on suitably defined adjoint problems and
how the solutions of these adjoint problems can be approximated from
the ansatz spaces. Thus, the stability analysis is reduced to an
approximation theoretic problem, which, in the present context of
piecewise polynomial approximation, can in turn
be answered by a suitable k-explicit regularity theory for
Helmholtz problems.
This regularity theory takes the form of an additive splitting of the
solution
into a part with finite Sobolev regularity and an analytic part. The
essential point of the regularity theory is that the stability
constants in the estimates for both parts can be controlled explicitly
in the wavenumber k.
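In symbols, and with generic constants (our shorthand for the two assumptions above):
```latex
\[
  \text{(i)}\quad \frac{kh}{p}\ \text{sufficiently small} \quad\text{and}\quad
  p \ge C_1 \log k,
  \qquad
  \text{(ii)}\quad \|S(k)\| \le C_2\, k^{\beta}\ \text{for some } \beta \ge 0,
\]
```
where S(k) denotes the solution operator of the continuous problem.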
|
|
Markus Melenk (TU Vienna) | Tuesday 13th July 09:15 |
Helmholtz problems at large wavenumbers |
|
Markus Melenk (TU Vienna) | Wednesday 14th July 10:10 |
Helmholtz problems at large wavenumbers |
|
Ray Millward (University of Bath) | Poster |
A new adaptive multiscale finite element method with applications to high contrast interface problems. |
This new adaptive multiscale method extends the
work of Durlofsky, Efendiev and Ginting to introduce a multiscale
method where the shape of the basis functions adapt to the underlying
pde. The method avoids the need for technical local boundary conditions
when performing local solves and still allows a coarse global solve.
The method has been applied to high contrast interface problems where
the loss of regularity at the interface reduces the rate of
convergence, however, the adaptive method converges as if the interface
wasn't there. The method also doesn't require the mesh to fit the
interface and through the adaptive process the mesh remains constant.
Examples will be presented which relate to structural optimization and
the resulting high contrast problem that arises. |
|
Peter Monk (University of Delaware) | Monday 12th July 15:30 |
The solution of time harmonic wave equations using complete families of elementary solutions |
This presentation is devoted to discussing plane
wave methods for approximating the time-harmonic wave equation paying
particular attention to the Ultra Weak Variational Formulation (UWVF).
This method is essentially an upwind Discontinuous Galerkin (DG) method
in which the approximating basis functions are special traces of
solutions of the underlying wave equation. In the classical UWVF, due
to Cessenat and Despres, sums of plane wave solutions are used element
by element to approximate the global solution. For these basis
functions, convergence analysis and considerable computational
experience shows that, under mesh refinement, the method exhibits a
high order of convergence depending on the number of plane waves used on
each element. Convergence can also be achieved by increasing the number
of basis functions on a fixed mesh (or a combination of the two
strategies). However ill-conditioning arising from the plane wave basis
can ultimately destroy convergence. This is particularly a problem near
a reentrant corner where we expect to need to refine the mesh.
The presentation will start with a summary of the UWVF and some typical
analytical and numerical results for the Helmholtz equation. It may be
that different basis functions need to be used in different parts of
the domain. I shall present some numerical results investigating
convergence on an L-shaped domain using singular Bessel functions near
the corner. An alternative, that also extends to 3D, is to use
polynomial basis functions on small elements. Using mixed finite
element methods, we can view the UWVF as a hybridization strategy and I
shall also present theoretical and numerical results for this approach.
Although neither the Bessel function nor the plane wave UWVF is
free of dispersion error (pollution error), they can provide a method
that can use large elements and a small number of degrees of freedom per
wavelength to approximate the solution. Extensions to Maxwell's
equations and elasticity will be briefly discussed. Perhaps the main
open problems are how to improve on the bi-conjugate gradient method
that is currently used to solve the linear system, and how to
adaptively refine the approximation scheme.
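For concreteness, the classical plane wave basis used element by element in the UWVF has the generic form (our notation):
```latex
\[
  \varphi_\ell(\mathbf{x}) \;=\; \exp\bigl(\mathrm{i}\,k\,
  \mathbf{d}_\ell\cdot\mathbf{x}\bigr), \qquad
  \mathbf{d}_\ell = (\cos\theta_\ell,\ \sin\theta_\ell),\quad
  \theta_\ell = \frac{2\pi \ell}{p},\quad \ell = 0,\dots,p-1,
\]
```
with p plane wave directions per element in 2D; the alternative mentioned above replaces these by Bessel-function solutions adapted to the corner.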
|
|
Frédéric Nataf (Université Pierre et Marie Curie) | Tuesday 13th July 16:45 |
Coarse grid correction for domain decomposition methods for problems with high heterogeneities. |
We present an automatic construction of an adapted
coarse grid for problems with highly discontinuous coefficients. The
method is very robust with respect to the size of the jumps and the
decomposition (automatic or manual partitioner). |
|
Richard Norton (University of Oxford) | Poster |
Evolution of Microstructure |
A simple model problem for the emergence and
evolution of microstructure based on a double well potential is
considered. Analytical issues of interest are the existence and
stability of rest points while numerical issues include how the error
analysis of FEM depends on a regularization parameter. |
|
Jill Ogilvy (BAE Systems (Operations) Limited) | Friday 9th July 11:45 |
Modelling electromagnetic performance of large structures |
The talk will provide an overview of some of the
interests and capabilities of BAE SYSTEMS in the modelling of
electromagnetic phenomena for radar applications. General methods will
be outlined, together with some examples of model predictions. Some
implications for multi-scale modelling will be addressed. |
|
Christoph Ortner (University of Oxford) | Wednesday 7th July 18:10 |
Atomistic/continuum coupling schemes for solids. |
Low energy equilibria of crystalline materials are typically characterised by
localized defects that interact with their environment through long-range elastic
fields. By coupling atomistic models of the defects with continuum models for the
elastic far field one can, in principle, obtain models with near-atomistic accuracy at
significantly reduced computational cost. However, several pitfalls need to be
overcome to find a reliable coupling mechanism. In this talk I will discuss some
selected possible mechanisms and their analysis. |
|
Tim Payne (Met Office UK) | Friday 9th July 15:00 |
The assimilation of data into atmospheric models, and the use of linearisations optimised for finite perturbations.
|
Atmospheric forecast models attempt to represent
spatial scales from tens to millions of metres, and temporal scales
from seconds to many hours.
All major weather prediction centres currently assimilate data into
their forecasting model by four dimensional variational data assimilation.
This finds the atmospheric state (or "analysis") which "best fits" both the
prior information (or "background", a short forecast from a previous
analysis) and recent observations. It does this by minimising a cost
function which simultaneously penalises the departure of the analysis from
the background, and the departures of the forecast initialised from the
analysis from the observations distributed in time. To make the problem
manageable the latter is done using a linear model which predicts the
evolution of the analysis-background increments to the observation times.
Conventionally this linear model is taken to be the first
derivative of the forecast model, which is appropriate for
infinitesimal increments but is a poor predictor of the true evolution
of finite-sized increments. In this talk we show that if the pdf of the
increments is known we may construct better linearisations, and show
how the use of this type of linearisation can improve the assimilation
of data and thereby the model forecast.
|
|
Gibin Powathil (University of Dundee) | Poster |
Modelling the Spatial Distribution of Chronic Tumour Hypoxia: Implications for Experimental and Clinical Studies |
Tumour hypoxia (i.e. a lack of oxygen) is
considered to be an important prognostic factor in tumour progression,
possibly affecting the aggressiveness of tumours as well as the
metastatic and invasive potential of cancer cells. It is usually
measured by direct (invasive) measurements of tumour oxygenation
tension using needle electrodes or through quantification of intrinsic
or extrinsic biomarkers. An alternative approach to estimate tumour
hypoxia is through theoretical computational simulations that
incorporate knowledge of various measurable parameters supplemented by
non-invasive imaging of tumour vasculature. The method developed here
illustrates an alternative way to estimate tumour hypoxia and provides
guidance in planning accurate and effective therapeutic strategies and
invasive estimation techniques.
The main purpose of this study is to model and quantify hypoxia
using a known spatial distribution of tumour vasculature, obtained
through available imaging techniques, and study the effects of hypoxia
on radiation response. The results of the theoretical analysis of
estimated hypoxia, quantified by the percentage of hypoxic area and by
an electrode sampling method, show reasonable agreement with
biomarker-stained hypoxic proportions obtained through biopsies. In
addition, the estimated hypoxic proportions are used to study the
effect of radiation, using a modified linear quadratic model.
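As an indicative form (parameter names and values here are illustrative,
not taken from the study), an oxygen-modified linear quadratic model scales
the delivered dose d by an oxygen modification factor before applying the
usual survival law:

S \;=\; \exp\!\big( -\alpha\,[\mathrm{OMF}(p)\,d]
\;-\; \beta\,[\mathrm{OMF}(p)\,d]^{2} \big),
\qquad
\mathrm{OMF}(p) \;=\; \frac{1}{\mathrm{OER}_{\max}}\cdot
\frac{\mathrm{OER}_{\max}\,p + K_m}{p + K_m},

where p is the local oxygen tension, so that well-oxygenated regions (large
p) receive the full biological effect of the dose while chronically hypoxic
regions (small p) are protected by a factor of up to
\mathrm{OER}_{\max}\approx 3. |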
|
Catherine Powell (University of Manchester) | Thursday 8th July 17:05 |
Recycling Techniques for Solving Stochastic Collocation Systems |
Recently there have been attempts to compare the
computational costs of solving elliptic PDEs with random coefficients
via stochastic Galerkin and stochastic collocation techniques. Fair
computational comparisons can only be made if the best possible solvers
are used for the linear systems in question. In the case of stochastic
collocation, the linear systems are not only decoupled (which is seen
as the major advantage) but also (for some model problems) highly
similar. In this talk, we discuss a number of ways in which this
similarity can be exploited to gain computational savings. |
|
Olof Runborg (KTH Stockholm) | Wednesday 7th July 17:00 |
A Multiscale Method for the Wave Equation in Heterogeneous Medium |
We consider the wave equation in a medium with a rapidly varying speed of propagation. We
construct a multiscale scheme based on the heterogeneous multiscale method, which can
compute the correct coarse behavior of wave pulses traveling in the medium, at a
computational cost essentially independent of the size of the small scale variations.
This is verified by theoretical results and numerical examples. We also
consider the case when waves travel over long times in a heterogeneous
medium, where dispersive effects are introduced that are not captured by
standard homogenization.
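Schematically (in notation assumed here), the macroscopic solver advances
an effective wave equation

u_{tt} \;=\; \nabla\cdot F(x, \nabla u),

in which the effective flux F is not known in closed form; whenever the
macro scheme requires it, F is estimated by solving the full fine-scale
equation v_{tt} = \nabla\cdot\big(a(x/\varepsilon)\,\nabla v\big) on small
space-time sampling boxes, of size comparable to \varepsilon, and averaging
the resulting microscopic flux.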
|
|
Ruth Sabariego (University of Liege) | Poster |
Multiscale computational modelling in electromagnetism |
Almost all problems in science and engineering are
multiscale (in space and/or time) and multiphysical. The interactions
at the microscale may significantly influence the solution of the
macroscale problem, and should not be disregarded in the numerical
model. However, resolving the full problem at the microscopic level
with classical numerical methods is prohibitively expensive if not
impossible. Dedicated multiscale techniques that take advantage of the
separation of scales prove indispensable.
Material synthesis is an example of an emerging technology that
urgently needs efficient multiscale methods for numerically determining
the effective properties of engineered materials, i.e. their
constitutive law. Artificially tailored materials exhibit exceptional
macroscopic properties that are directly linked to their
microstructural complexity. The ability to simulate numerically the
properties of novel materials is an invaluable help for design
optimization and can avoid expensive and time-consuming trial and error
tests.
Our team aims at developing efficient numerical techniques for solving
multiscale electromagnetic problems. The developments already
successfully applied in mechanical and thermal analyses will be adapted
to the particularities of electromagnetism. For electrostatic,
magnetostatic or magnetodynamic problems, the electromagnetic
multiscale methods will be related to the techniques recently proposed
for heat transfer problems. An important and open challenge lies in
solving full wave problems, in particular in the presence of internal
resonances. |
|
Marcus Sarkis (Worcester Polytechnic Institute ) | Tuesday 13th July 15:30 |
Infinite-dimensional stochastic Darcy equations, finite-dimensional Petrov-Galerkin approximations
and a priori error estimates. |
(Joint work with Juan Galvis (Texas A&M))
In this talk we consider a stochastic Darcy pressure equation with
random log-normal permeability and random right-hand side. To
accommodate the lack of ellipticity and continuity, and singular
right-hand sides, we introduce an appropriate representation of the
permeability stochastic fields and suitable infinite-dimensional norms
and spaces. We then introduce new continuous and discrete weak
formulations based on a Petrov-Galerkin strategy and present inf-sup
conditions, well-posedness, a priori error estimates and numerical
experiments.
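For context, well-posedness of a Petrov-Galerkin weak formulation
b(u,v) = f(v), with trial space U and test space V, hinges on inf-sup
conditions of Babuška-Nečas type (stated here in generic form):

\inf_{0\neq u\in U}\ \sup_{0\neq v\in V}\
\frac{b(u,v)}{\|u\|_{U}\,\|v\|_{V}} \;\ge\; \beta \;>\; 0,

together with a nondegeneracy condition on V, and likewise for the discrete
trial/test pairs; the difficulty addressed in the talk is constructing
norms and spaces in which such conditions hold even though the log-normal
coefficient is neither uniformly bounded nor uniformly elliptic. |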
|
Daniela Schlueter (University of Dundee) | Poster |
Multi-scale mathematical modelling of cancer cell invasion: The role of cell-cell and cell-matrix adhesion
|
‘Cancer’ is an umbrella term for about 200 different diseases, which
collectively rank among the main causes of death worldwide. The
malignancy of almost all types of tumours is
determined by the ability of cancer cells to invade the surrounding
tissues and then to form secondary tumours (metastases) at distant
sites in the body. These metastases are responsible for ~90% of cancer
deaths. In order to advance in cancer treatment strategies, it is
therefore of high importance to understand the processes involved in
cancer cell invasion. A crucial aspect of cancer cell invasion is the
role of cell adhesion, both cell-cell and cell-matrix.
We focus on understanding the first steps leading to cancer
invasion and try to identify key processes that allow the detachment of
individual cells or small cell clusters from the main tumour mass and
their local invasion. For this we use an individual force-based
multi-scale approach to model physical properties of the cells and
intra- and inter-cellular protein pathways involved in tumour growth,
cell-cell and cell-matrix adhesion. The key pathways include those of
E-cadherin and beta-catenin.
Using computational simulations of our model, we can investigate
the spatio-temporal distribution of E-cadherin and beta-catenin levels
in individual cancer cells and predict what implications this has for
the adhesion of the cancer cells to each other and to the extracellular
matrix. By examining the cell-matrix interactions with our model we can
also highlight the importance of the microenvironment in tumour
progression and how cell-matrix interactions can lead to more
aggressive tumours.
|
|
Christoph Schwab (ETH Zürich) | Saturday 10th July 11:40 |
Sparse Tensor Discretizations
of PDEs with stochastic and multiscale data
|
We report on recent work on the numerical analysis of several
discretization schemes for PDEs with random inputs. The three lectures
will address:
1. Sparse adaptive tensor FEM for elliptic and parabolic PDEs with
random loadings.
2. Sparse adaptive gpc FEM for elliptic and parabolic PDEs with random
coefficients.
3. Sparse adaptive tensor FEM for elliptic problems with multiple
scales and random coefficients.
We shall survey recent mathematical results and algorithmic
developments on the numerical analysis of deterministic and adaptive
sparse tensor discretizations of PDEs with random inputs. In 1. and 2.
we review recent, sharp mathematical results on the convergence rates
of MC methods as well as of adaptive spectral methods of "generalized
polynomial chaos" type for these problems, and derive guidelines for
implementation and complexity bounds. In 3., we apply sparse tensor
techniques to obtain efficient solutions of k-scale elliptic
homogenization problems, possibly with random coefficients, by
combining with 1. and 2.
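At the heart of such schemes is the sparse tensor product construction
(written here, illustratively, for two factors): in place of the full
tensor product space V_L \otimes V_L one uses

\widehat{V}_L \;=\; \sum_{\ell + \ell' \le L} V_\ell \otimes V_{\ell'},

reducing the number of degrees of freedom from O(N_L^2) to
O(N_L \log N_L) while retaining, up to logarithmic factors, the full
approximation rate for sufficiently regular functions. |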
|
Christoph Schwab (ETH Zürich) | Monday 12th July 09:15 |
Sparse Tensor Discretizations
of PDEs with stochastic and multiscale data
|
|
Christoph Schwab (ETH Zürich) | Tuesday 13th July 11:40 |
Sparse Tensor Discretizations
of PDEs with stochastic and multiscale data
|
|
Andrew Stuart (Warwick University) | Tuesday 6th July 09:15 |
Multiscale modelling and inverse problems |
(Joint work with Greg Pavliotis, Imperial College)
The need to blend observational data and mathematical models arises
in many applications
and leads naturally to inverse problems. Parameters which are
functions, such as constitutive tensors, initial conditions and
forcing, can be estimated on the basis of observed data. The resulting inverse
problems are often ill-posed and some form of regularization is
required. When the function being estimated has a multiscale structure
a number of natural questions arise, in particular: (i) how should the
data be used, and the regularization chosen, if only an averaged or
homogenized solution to the inverse problem is required? (ii) how
should the data be used, and the regularization chosen, if the details
of the multiscale structure are important?
We will devote three lectures to a development of the mathematics
required to address these questions.
We adopt a probabilistic approach to the inverse problems, based on the
Bayesian viewpoint, and show how the choice of prior measure is
intimately related to answering questions (i) and (ii). The ideas will
be illustrated throughout in the context of simple models for
groundwater flow.
The lectures will be pedagogical in style and accessible to an audience with basic
knowledge of differential equations and probability.
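In the function-space Bayesian formulation surveyed in the first reference
below, the posterior measure \mu^{y} on the unknown function u is
absolutely continuous with respect to the prior \mu_0, with

\frac{\mathrm{d}\mu^{y}}{\mathrm{d}\mu_{0}}(u)
\;\propto\; \exp\big(-\Phi(u;y)\big),

where y is the data and \Phi the negative log-likelihood (data misfit).
Roughly speaking, this is where questions (i) and (ii) enter: a prior
supported on smooth, homogenized fields regularizes towards the averaged
solution, while a prior encoding the oscillatory multiscale structure is
needed when its details matter.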
Background material on the Bayesian approach to inverse problems can be
found in: A.M. Stuart, Inverse Problems: A Bayesian Perspective, Acta
Numerica 19 (2010).
Background material on multiscale methodology can be found in:
G.A. Pavliotis and A.M. Stuart, Multiscale Methods: Averaging and
Homogenization, Springer-Verlag, 2008.
|
|
Andrew Stuart (Warwick University) | Wednesday 7th July 11:40 |
Multiscale modelling and inverse problems |
|
Andrew Stuart (Warwick University) | Thursday 8th July 10:10 |
Multiscale modelling and inverse problems |
|
Marc Sturrock (University of Dundee) | Poster |
Mathematical modelling of the p53-mdm2 oscillatory system |
(Joint work with A.J. Terry, D.P. Xirodimas, A.M. Thompson and M.A.J. Chaplain) The
p53 network is arguably the most important pathway involved in
preventing the initiation of cancer. The p53 transcription factor is
responsible for the regulation of DNA repair, cellular senescence and
apoptosis. Mutations that inactivate p53 function have been detected in
more than 50% of human cancers and even tumours with wild type p53 have
defects in upstream regulators or downstream effectors of p53. A vital
negative regulator of p53 function in cells is the Mdm2 oncogene
product. Mdm2 protein enhances p53 degradation in both the nucleus and
cytoplasm via ubiquitination. Mdm2 is also a target gene for p53. This
creates a negative feedback loop which provides tight regulation of p53
function in cells. Experiments have been performed to measure the
dynamics of fluorescently tagged p53 and Mdm2 over several days in
individual living cells. Some cells exhibited undamped oscillations for
at least 3 days (more than 10 peaks).
Building on previous mathematical modelling approaches, we derive a
system of partial differential equations (PDEs) to capture the
evolution in space and time of the concentrations of variables in the
p53-Mdm2 system. Through computational simulations we show that our
reaction-diffusion model is able to produce sustained oscillations both
spatially and temporally, reflecting experimental evidence well and
providing more insight than previous models. The simulations of our
model also allow us to calculate a range of diffusion coefficients for
which the model exhibits oscillatory dynamics.
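A minimal caricature of such a negative feedback in reaction-diffusion
form (for orientation only; the poster's model is considerably more
detailed) is

\partial_t p \;=\; D_p\,\Delta p + k_1 - k_2\,m\,p, \qquad
\partial_t m \;=\; D_m\,\Delta m + k_3\,p - k_4\,m,

with p and m the p53 and Mdm2 concentrations: p53 up-regulates Mdm2
(k_3) while Mdm2 enhances p53 degradation (k_2), and it is this loop,
combined with spatial transport between nucleus and cytoplasm, that can
sustain the observed oscillations.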
|
|
Endre Süli (University of Oxford) | Wednesday 14th July 11:40 |
Existence, equilibration and approximation of global weak solutions to kinetic models of dilute polymers |
We establish the existence of global-in-time weak
solutions to a general class of coupled microscopic-macroscopic
FENE-type bead-spring chain models that arise from the kinetic theory
of dilute solutions of polymeric liquids with noninteracting polymer
chains. The class of models involves the unsteady incompressible
Navier-Stokes equations in a bounded domain in two and three space
dimensions, for the velocity and the pressure of the fluid, with an
elastic extra-stress tensor appearing on the right-hand side of the
momentum equation. The extra-stress tensor stems from the random
movement of the polymer chains and is defined by the Kramers expression
through the associated probability density function that satisfies a
Fokker-Planck type parabolic equation, a crucial feature of which is
the presence of a centre-of-mass diffusion term. We require no
structural assumptions on the drag term in the Fokker-Planck equation;
in particular, the drag term need not be corotational.
With a square-integrable and divergence-free initial velocity datum
for the Navier-Stokes equation and a nonnegative initial probability
density function for the Fokker-Planck equation, which has finite
relative entropy with respect to the Maxwellian, we prove the existence
of global-in-time weak solutions to the coupled
Navier-Stokes-Fokker-Planck system, satisfying the initial condition,
such that the velocity belongs to the classical Leray space
and the probability density function has bounded relative entropy and
square integrable Fisher information over any time interval. The key
analytical tool in our proof is Dubinskii's compactness theorem in
seminormed sets. It is also shown using the Csisza´r-Kullback
inequality that, in the absence of a body force, the global weak
solution decays exponentially in time to the equilibrium solution, at a
rate that is independent of the choice of the initial datum and of the
centre-of-mass diffusion coefficient. We also discuss briefly
computational difficulties associated with the numerical approximation
of the high-dimensional Fokker-Planck equation with unbounded drift
featuring in the model.
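Schematically, and written here for a single FENE dumbbell with all
nondimensional constants suppressed, the coupled system takes the form

\partial_t u + (u\cdot\nabla)u - \nu\,\Delta u + \nabla p
\;=\; \nabla\cdot\tau(\psi), \qquad \nabla\cdot u = 0,

with the extra stress given by a Kramers-type expression

\tau(\psi) \;=\; \int_{D} \psi\; q\otimes F(q)\,\mathrm{d}q
\;-\; \Big(\int_{D}\psi\,\mathrm{d}q\Big) I,
\qquad F(q) \;=\; \frac{q}{1 - |q|^{2}/b},

and \psi(x,q,t) governed by a Fokker-Planck equation in the spatial and
configuration variables whose crucial feature, as noted above, is the
centre-of-mass diffusion term \varepsilon\,\Delta_x\psi.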
The talk is based on joint work with John W. Barrett (Department of Mathematics, Imperial College London).
|
|
Aretha Teckentrup (University of Bath) | Poster |
Multilevel Monte Carlo methods for elliptic PDEs with random coefficients |
When solving partial differential equations (PDEs)
with random coefficients numerically, one is usually interested in
finding the expected value of a certain statistic of the solution. A
common way to obtain estimates is to use Monte Carlo methods combined
with spatial discretisations of the PDE on sufficiently fine grids.
However, standard Monte Carlo methods have a rather slow rate of
convergence with respect to the number of samples used, and individual
samples of the solution are usually costly to compute numerically. In
this talk we introduce the multilevel Monte Carlo method, with the aim
of achieving the same accuracy as standard Monte Carlo at a much lower
computational cost. The method exploits the linearity of expectation,
by expressing the quantity of interest on a fine spatial grid in terms
of the same quantity on a coarser grid and some “correction” terms. It
has been extensively studied in the context of stochastic differential
equations in the area of financial mathematics by Mike Giles and
co-authors. We will give an outline of the method applied to elliptic
PDEs with random coefficients, and also show some numerical results on
the resulting reduction in computational cost. The efficiency of the
multilevel method is assessed by comparing it to standard Monte Carlo.
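Concretely, writing Q_\ell for the quantity of interest computed on
spatial grid level \ell (our notation), the method rests on the
telescoping identity

\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0]
\;+\; \sum_{\ell=1}^{L}\mathbb{E}\big[Q_\ell - Q_{\ell-1}\big],

with each term estimated by an independent Monte Carlo average. Since the
corrections Q_\ell - Q_{\ell-1} have small variance, few samples are
needed on the expensive fine grids, while the bulk of the sampling is done
cheaply on the coarsest level. |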
|
Raul Tempone (KAUST, Saudi Arabia) | Thursday 8th July 16:30 |
Towards automatic global error control: computable weak error
expansion for the tau-leap method |
This work develops novel error expansions with computable
leading order terms for the global weak error in the tau-leap
discretization of pure jump processes arising in kinetic Monte Carlo
models. Accurate computable a posteriori error approximations are
the basis for adaptive algorithms, a fundamental tool for the numerical
simulation of both deterministic and stochastic dynamical systems.
These pure jump processes are simulated either by the tau-leap method,
or by exact simulation, also referred to as dynamic Monte Carlo, the
Gillespie algorithm or the stochastic simulation algorithm. Two types
of estimates are presented: an a priori estimate for the relative error
that gives a comparison between the work for the two methods depending
on the propensity regime, and an a posteriori estimate with a
computable leading order term. Numerical examples show good agreement
with the theory.
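As a sketch of the discretization in question (an illustrative reading,
with function names and the birth-death example our own), a single
tau-leap step advances the state by firing each reaction channel a
Poisson number of times:

import numpy as np

def tau_leap_step(x, tau, propensities, stoichiometry, rng):
    # Evaluate the propensities a_j(x) of all reaction channels.
    a = propensities(x)
    # Channel j fires k_j ~ Poisson(a_j(x) * tau) times during [t, t + tau).
    k = rng.poisson(a * tau)
    # New state: x + sum_j k_j * nu_j (a real implementation would also
    # guard against populations going negative).
    return x + k @ stoichiometry

# Example: birth-death process, X -> X+1 at rate 2.0, X -> X-1 at rate 0.1*X.
rng = np.random.default_rng(0)
nu = np.array([[1], [-1]])                    # stoichiometric vectors, one per row
prop = lambda x: np.array([2.0, 0.1 * x[0]])  # propensities a_1(x), a_2(x)
x = np.array([50])
for _ in range(100):
    x = tau_leap_step(x, 0.05, prop, nu, rng)

In contrast to the exact (Gillespie) simulation, which advances one
reaction at a time, the Poisson batching above is the source of both the
speed-up and the weak discretization error analysed in the talk. |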
|
Alan Terry (University of Dundee) | Poster |
A mathematical model of the NF-kB negative feedback loop |
Many stress, inflammatory, and innate immune
responses are regulated by the NF-kB signal transduction pathway.
Deregulation of this pathway has been observed in numerous types of
human cancer. A negative feedback loop is central to the mechanism by
which NF-kB proteins signal. Experiments have shown that this feedback
loop can cause oscillations in NF-kB activity. Various target genes of
NF-kB are only transcribed after a certain number of such oscillations
have occurred. Here we describe the mechanism by which the NF-kB
pathway functions and we capture its essence in a mathematical model.
Simulations of our model are consistent with experimental results in
that they demonstrate oscillatory dynamics. Non-dimensionalisation of
our model allows us to estimate the diffusion rates for the key
proteins in the NF-kB pathway. Given that we can estimate these
diffusion rates, we hope that experimentalists will feel inspired to
measure them. |
|
Dumitru Trucu (University of Dundee) | Poster |
The bio-heat equation, which is widely accepted
as a mathematical model describing the heat transfer process within
human body tissue, has important applications in many biomedical
investigations. Among the theoretical aspects of this equation that
concern us, the perfusion coefficient, denoted by Pf, is of particular
interest because of its physical meaning as the rate at which a unit of
blood travels through a unit of tissue in a unit of time. Our analysis
is placed in the non-steady-state case and is focused on the inverse
problems concerning the perfusion coefficient, when Pf is considered to
be either constant, time-dependent, space- and time-dependent, or
temperature-dependent. This inverse analysis allows us to accurately
recover the perfusion information from temperature and heat flux
measurements taken in minimally invasive regions.
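For orientation, the (Pennes) bio-heat equation in one common form reads

\rho c\,\frac{\partial T}{\partial t}
\;=\; \nabla\cdot(k\,\nabla T) \;+\; P_f\,(T_a - T) \;+\; q_m,

where T is the tissue temperature, T_a the arterial blood temperature, k
the thermal conductivity, \rho c the volumetric heat capacity of tissue,
and q_m the metabolic heat source (units and normalisations vary between
formulations); it is the perfusion term P_f(T_a - T) that the inverse
analysis above seeks to recover from boundary temperature and heat flux
data. |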
|
|
Richard Tsai (University of Texas - Austin) | Tuesday 6th July 18:10 |
A continuum-pore scale coupling algorithm for flow in porous media |
(Joint work with Bjorn Engquist, Jay Chu, and Masa Prodonovic.)
We present our results in simulating flows in porous media by a coupled
continuum and network model.
In reservoir simulations, the total flow rate and the oil cut are
important macroscopic quantities. They can be computed from the
macroscopic pressure and saturation. Typical simulations assume that
the flux is a linear function of the pressure gradient whose
coefficient stays constant.
However, in reality, there is no general way to determine the effective
flux or the permeability field at the macroscopic level.
In particular, these may be nonlinear functions of the macroscopic
quantities.
We propose to use some detailed microscopic network models to estimate
the macroscopic flux.
Furthermore, the macro-quantities (macroscopic pressure, velocity, or
flux) from macroscopic simulations are used to
determine whether updating the microscopic configurations is needed,
e.g. the opening of the throats or growing of the fractures.
The updated microscopic configurations are then used to update our
estimate of the flux at the macroscopic level.
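In symbols (ours, for illustration), the conventional closure is a linear
Darcy-type law

q \;=\; -\,\frac{K(x)}{\mu}\,\nabla p,

with a fixed permeability field K, whereas the coupled scheme evaluates
the flux on demand as q = F(\nabla p, s;\ \text{micro-state}) from
pore-network simulations, so the effective permeability can change as
throats open or fractures grow.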
|
|
Grigory Vilensky (University College London) | Friday 9th July 16:00 |
Mathematical modelling of anomalous absorption of ultrasound in human tissue |
The work examines possible formulations of the
problem of nonlinear ultrasound wave propagation in soft biological
tissue from the standpoint of high intensity focused ultrasound
applications for medical treatment.
The proposed work has been written with the needs of the applied
mathematical and modelling communities in mind and brings together
essential information about the available experimental results,
complementing these with the practical approach to modelling. It aims
to provide the means for theoretical understanding of the physics of
the interaction of ultrasound with tissue.
Central to this is the problem of anomalous absorption of sound energy
by tissue. It occurs as a result of excitation of internal degrees of
freedom of molecular motion by the sound field known in the literature
as molecular relaxation processes. The work discusses the
phenomenological theory of anomalous absorption of ultrasound in tissue
and also proposes a statistical model to describe the underlying
physical mechanism.
The proposed model treats the phenomenon as a mixture of random
variables, each characterised by its own probability density function.
In its main features the theory is similar to traditional methodologies
used in reliability theory [1] and statistical radiophysics [2].
References
1. Gnedenko, B.V., Beliaev, Yu.K., Soloviev, A.D., Mathematical Methods of Reliability Theory, Moscow: Nauka, 1965, 524 pp.
2. Rytov, S.M., Introduction to Statistical Radiophysics, Pt. 1: Random Processes, Moscow: Nauka, 1976, 494 pp. |
|
Holger Wendland (University of Oxford) | Wednesday 14th July 15:05 |
Multiscale Radial Basis Functions |
We study a multiscale scheme for the approximation of Sobolev
functions on bounded domains. Our method employs compactly supported
radial basis functions of varying support radii, centred at scattered
data sites.
The actual multiscale approximation is constructed by a sequence of
residual corrections, where different support radii are employed to
accommodate different scales. Convergence theorems for the scheme are proven,
and it is shown that the condition numbers of the linear systems at
each level are independent of the level, thereby establishing for the
first time a mathematical theory for
multiscale approximation with scaled versions of a single compactly supported
radial basis function at scattered data points on a bounded domain.
This work is based upon earlier work with Ian Sloan and Thong Le Gia (University of New South Wales, Australia).
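A minimal one-dimensional sketch of the residual-correction construction
(our own illustration; the paper's setting is multivariate approximation
on bounded domains):

import numpy as np

def wendland_c2(r):
    # Wendland's compactly supported C^2 function phi(r) = (1 - r)_+^4 (4r + 1).
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def multiscale_approximation(f, levels):
    # `levels` pairs progressively denser sites with shrinking support
    # radii delta; each level interpolates the current residual.
    corrections = []
    residual = f
    for sites, delta in levels:
        A = wendland_c2(np.abs(sites[:, None] - sites[None, :]) / delta)
        c = np.linalg.solve(A, np.array([residual(t) for t in sites]))
        s = lambda x, S=sites, d=delta, c=c: wendland_c2(np.abs(x - S) / d) @ c
        corrections.append(s)
        residual = lambda x, r=residual, s=s: r(x) - s(x)
    return lambda x: sum(s(x) for s in corrections)

# Example: three levels, sites doubling in density, radii shrinking in step.
f = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(16 * np.pi * x)
levels = [(np.linspace(0.0, 1.0, n), 4.0 / n) for n in (9, 33, 129)]
approx = multiscale_approximation(f, levels)

Scaling the support radius in proportion to the spacing of the sites is
what keeps the condition number of each matrix A bounded independently of
the level, mirroring the theory described above. |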
|
Xiao-Hui Wu (ExxonMobil) | Tuesday 13th July 17:55 |
Approaches toward Reliable Reservoir Performance Prediction |
A survey of emerging approaches toward reliable reservoir performance prediction will be presented. |
|
Ludmil Zikatanov (Penn State) | Tuesday 13th July 17:20 |
Decompositions of Discontinuous Galerkin Finite Element Spaces and Preconditioning |
(Based on joint work with Blanca Ayuso de Dios from Centre de Recerca Matematica (CRM), Spain) We
introduce a natural decomposition of discontinuous Galerkin finite
element spaces. For the lowest order case this decomposition is a
direct sum of the Crouzeix-Raviart non-conforming finite element
space and a subspace that contains functions discontinuous at interior
faces. We will also indicate how to construct such decompositions for
higher order elements. Based on these decompositions we develop
iterative and preconditioning techniques for the solution of the linear
systems resulting from several discontinuous Galerkin (DG) Interior
Penalty (IP) discretizations of elliptic problems. We analyze the
convergence properties of these algorithms for both symmetric and
non-symmetric IP schemes. We also present numerical examples confirming
the theoretical results. Further extensions to problems with jumps in
the coefficients will also be discussed.
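In symbols (ours), the lowest order splitting reads

V^{\mathrm{DG}}_h \;=\; V^{\mathrm{CR}}_h \,\oplus\, \mathcal{Z}_h,

where V^{\mathrm{CR}}_h is the Crouzeix-Raviart space, whose functions are
continuous at the barycentres of interior faces, and \mathcal{Z}_h collects
the complementary components that jump across interior faces; the iterative
methods and preconditioners of the talk are built blockwise with respect to
this splitting. |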