09.02.2017 13:00 (s.t), We6 6.020
David Krieg (Jena)
## On the approximation of tensor product operators

02.02.2017 13:00 (s.t), We6 6.020
Joachim Rehberg (WIAS Berlin)
## On optimal elliptic Sobolev regularity

19.01.2017 13:00 (s.t), We6 6.020
Glenn Byrenheid (Bonn)
## Non-linear approximation with respect to the Faber-Schauder dictionary

12.01.2017 13:00 (s.t), We6 6.020
Constantin Diez (Adam Opel AG, Rüsselsheim)
## Knowledge Extraction from Crash Simulations

15.12.2016 13:00 (s.t), We6 6.020
Christian Pfeiffer (Bonn)
## Space-Time discontinuous Galerkin methods for parabolic optimal control problems

08.12.2016 13:00 (s.t), We6 6.020
Veronika Karl (Würzburg)
## An Augmented Lagrange Method for elliptic state constrained optimal control problems

01.12.2016 13:00 (s.t), We6 6.020
Bart Vandereycken (Geneva)
## Robust methods for computing extremal points of real pseudospectra

17.11.2016 13:00 (s.t), We6 6.020
Lucas Bonifacius (TU München)
## Strong stability of linear parabolic time-optimal control problems

03.11.2016 13:00 (s.t), We6 6.020
Constantin Weiser (Düsseldorf)
## Numerical Quadrature in Econometrics – Two Examples

20.10.2016 13:00 (s.t), We6 6.020
Markus Bachmayr (Bonn)
## Representations of Gaussian random fields and sparse Hermite approximation of lognormal diffusion problems

14.07.16 13:00 (s.t), We6 6.020
Jens Henrik Göbbert (FZ Jülich)
## Lowering the Barriers to In-Situ Visualization

21.07.16 13:00 (s.t), We6 6.020
Bashar Ibrahim
## Mathematical Modeling and Simulation of DNA segregation

30.06.16 13:00 (s.t), We6 6.020
Friederike Hellwig (Humboldt-Universität zu Berlin)
## Low-Order Discontinuous Petrov-Galerkin Finite Element Methods for Linear Elasticity

23.06.16 13:00 (s.t.)
Reinhold Schneider (TU Berlin)
## Hierarchical Johnson-Lindenstrauß transform as a randomized hierarchical singular value decomposition

09.06.16 13:00 (s.t), We6 6.020
Peter Oswald (Bonn)
## Subspace Correction Methods: Randomization and Acceleration

02.06.16 13:00 (s.t), We6 6.020
Thomas Wick (RICAM)
## Goal function evaluations and adaptive finite elements for phase-field fracture problems

19.05.16 13:00 (s.t), We6 6.020
Paul Sinz
## Applications of a Spectral Problem: Maximum Energy Penetration Inside Composite Structures and MS-GFEM

19.05.16 14:00 (s.t), We6 6.020
Gabriel Barrenechea
## Stabilised finite element methods in anisotropic quadrilateral meshes

28.04.16 13:00 (s.t), We6 6.020
Christoph Kacwin (Bonn)
## On orthogonality of the Chebyshev-Frolov lattice and applications

21.03.16 13:00 (s.t), We6 6.020
Ilyass Tabiai (Montreal)
## Damage initiation and propagation in fiber reinforced composites: observation and modeling

31.03.16 13:00 (s.t), We6 6.020
Mario Ullrich (JKU Linz, Austria)
## On a Monte Carlo method for smooth functions

We present error bounds for a lattice rule for numerical integration
equipped with a random shift and a random dilation of the point set.
In particular, we show that the error of this method can be bounded by
the L_2-norm of the Fourier transform away from the origin. This
implies, e.g., that the worst-case error for mixed Sobolev spaces H^s_p
with p>2 and s>0 is of the order n^{-s-1/2} without additional
log-factors. Hence, the order is independent of the dimension.
This seems to be the first non-trivial example where randomization can
improve the performance of an algorithm by more than n^{-1/2}.
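The randomized rule described above can be sketched in a few lines. The Fibonacci generating vector, the dilation distribution and the test function below are illustrative assumptions, not the exact construction analyzed in the talk:

```python
import numpy as np

def shifted_dilated_lattice_rule(f, n, gen, rng):
    """Approximate the integral of f over [0,1]^d by a rank-1 lattice
    rule with a random shift and a random dilation of the point set."""
    d = len(gen)
    pts = np.outer(np.arange(n), gen) / n % 1.0   # rank-1 lattice points
    shift = rng.random(d)                         # random shift in [0,1)^d
    c = 1.0 + rng.random() / n                    # random dilation near 1 (assumed distribution)
    pts = (c * (pts + shift)) % 1.0               # dilate, shift, wrap around
    return f(pts).mean()

# Example: a smooth periodic integrand on [0,1]^2 with exact integral 1/4.
rng = np.random.default_rng(0)
f = lambda x: np.prod(np.sin(2 * np.pi * x) ** 2, axis=1)
est = shifted_dilated_lattice_rule(f, 610, (1, 377), rng)  # Fibonacci lattice
```

The random shift makes the estimator unbiased for periodic integrands; the dilation is the ingredient that, per the talk, yields the extra factor beyond $n^{-1/2}$.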

17.03.16 13:00 (s.t), We6 6.020
Yurii Kolomoitsev (NAS, Kiev)
## Sharp Ulyanov inequalities in the spaces $L_p$, $p>0$

17.03.16 14:00 (s.t), We6 6.020
Bangti Jin (UCR)
## Variational formulation of problems involving fractional order differential operators

10.03.16 13:00 (s.t), We6 6.020
Sergey Tikhonov (CRM, Barcelona)
## Polynomial inequalities with weights

10.03.16 14:00 (s.t), We6 6.020
Fabian Franzelin (Stuttgart)
## On probabilistic transformations and optimal sampling rules for emulator based inverse uncertainty quantification problems

18.02.16 13:00 (s.t), We6 6.020
Winnifried Wollner (TU Darmstadt)
## Optimization of partial differential equations subject to pointwise constraints on the gradient of the state

28.01.16 13:00 (s.t), We6 6.020
Ira Neitzel (Bonn)
## Finite element error estimates for elliptic Dirichlet-boundary control problems with pointwise state constraints

21.01.16 13:00 (s.t), We6 6.020
Thomas Heller (Erlangen)
## C++ on its way to exascale and beyond - The HPX Parallel Runtime System

14.01.16 13:00 (s.t), We6 6.020
Johannes Holke (Bonn)
## Dynamic load-balancing on adaptive tetrahedral meshes

17.12.15 13:00 (s.t), We6 6.020
Carsten Carstensen (HU Berlin)
## Axioms of Adaptivity: Rate optimality of adaptive algorithms with separate marking

03.12.15 13:00 (s.t), We6 6.020
Dietrich Braess (Bochum)
## A posteriori error estimates for discontinuous Galerkin methods by the two-energies principle

26.11.15 13:00 (s.t), We6 6.020
V. N. Temlyakov (U South Carolina, Columbia)
## Incoherent systems and covering in finite dimensional Banach spaces

19.11.15 13:00 (s.t), We6 6.020
Emil Kieri (Uppsala)
## Dynamical low-rank approximation in the presence of small singular values

12.11.15 13:00 (s.t), We6 6.020
Thomas Kühn (Leipzig)
## Approximation in multivariate periodic Gevrey spaces

05.11.15 13:00 (s.t), We6 6.020
Glenn Byrenheid (Bonn)
## Non-optimality of rank-1 lattice sampling in spaces of mixed smoothness

29.10.15 13:00 (s.t), We6 6.020
Christian Kuske (Bonn)
## Multi-Frequency Analysis (Combining $\mathcal{H}$-Matrices and MACA)

18.06.15 13:00 (s.t), We6 6.020
David Molitor (Bonn)
## Randomized dimensionality reduction for binary classification

11.06.15 13:00 (s.t), We6 6.020
Dr. Andreas Mueller (Naval Postgraduate School, Monterey, CA, USA)
## Parallel performance of the atmospheric model NUMA using element-based Galerkin methods

28.05.15 13:00, We6 6.020
Sebastian Mayer (University of Bonn)
## The effect of sparsity and relaxations thereof in certain function approximation problems

21.05.15 13:00, We6 6.020
Alfonso Caiazzo (WIAS Berlin)
## Multiscale modeling of weakly compressible elastic materials in harmonic regime

07.05.15 13:00, We6 6.020
Daniel Elfverson (University of Uppsala)
## A multilevel subset simulation for estimation of rare events

07.05.15 13:30, We6 6.020
Fredrik Hellman (University of Uppsala)
## A multilevel Monte Carlo method for failure probabilities

23.04.15 13:00, We6 6.020
Lutz Gross (University of Queensland)
## On the Solution of Large-scale Geophysical Inversion using Finite Element Methods on Parallel Computers

12.03.15 14:00, We6 6.020
Vladimir N. Temlyakov (South Carolina)
## Greedy algorithms in compressed sensing: Lebesgue-type inequalities for Chebyshev Greedy Algorithm smoothness

12.03.15 15:00, We6 6.020
Dinh Dung (Hanoi)
## Hyperbolic cross approximation in infinite dimensions

02.03.15 14:00, We6 6.020
Wolfgang Hackbusch (MPI Leipzig and CAU Kiel)
## Recursive Low-Rank Truncation of Block-Structured Matrices

22.01.15 13:00, We6 6.020
Dirk Pflüger and Fabian Franzelin (Stuttgart)
## Sparse Grids: challenges from applications

## From data to uncertainty: an integrated sparse grid approach

15.01.15, We6 6.020
Philipp Morgenstern (Bonn)
## Analysis-suitable adaptive T-mesh refinement with linear complexity

08.01.15, We6 6.020
André Uschmajew (Bonn)
## On low-rank approximability of solutions to Kronecker-structured operator equations

18.12.14 Beatrice Vedel (Bretagne-Sud)
## Hyperbolic multifractal analysis and application to textures classification

04.12.14 Van Kien Nguyen (Jena)
## Weyl numbers of embeddings of Besov spaces with dominating mixed smoothness

27.11.14 Room We6, 6.020 Seyedehsomayeh Hosseini (ETH Zürich)
## Subgradient algorithms for locally Lipschitz functions on Riemannian manifolds

20.11.14 Anna Persson (Göteborg)
## Multiscale techniques for parabolic equations

13.11.14 Mira Schedensack (HU Berlin)
## Comparison results for first-order FEMs

06.11.14 Henning Kempka (TU Chemnitz)
## Function spaces with variable exponents

30.10.14 Dietmar Gallistl (Bonn)
## Adaptive finite element approximation of eigenvalue clusters

23.10.14 Holger Rauhut (RWTH Aachen)
## Numerical solution of high-dimensional parametric operator equations via compressive sensing

14.07.14 Toni Volkmer (TU Chemnitz)
## Approximation of high-dimensional multivariate periodic functions based on rank-1 lattice sampling

14.07.14 Lutz Kämmerer (TU Chemnitz)
## Sampling multivariate trigonometric polynomials and smooth periodic functions along rank-1 lattices

11.07.14 Peter Oswald (Jacobs University, Bremen)
## Stable space splittings: New developments

11.07.14 Weiqi Zhou (Jacobs University, Bremen)
## Convergence rate of the classical Kaczmarz method

07.07.14 Jan Kleinert (Fraunhofer ITWM, Kaiserslautern)
## Nonsmooth Rigid Body Dynamics – Simulating Granular Material

30.06.14 Lei Zhang (Shanghai Jiao Tong University)
## Construction and analysis of consistent energy based atomistic/continuum coupling methods

23.06.14 Volodya Temlyakov (University of South Carolina)
## Bilinear and multilinear approximation of functions

16.06.14 Benjamin Doerr (Ecole Polytechnique, Paris)
## The discrepancy of random points

26.05.14 Aviv Gibali (Fraunhofer ITWM, Kaiserslautern)
## Projection methods - a powerful tool for real-world problems and combinatorial games

26.05.14 Jan Schwientek (Fraunhofer ITWM, Kaiserslautern)
## Chebyshev approximation and semi-infinite programming

19.05.14 Christiane Helzel (HHU Düsseldorf)
## High-order WENO finite volume methods for Cartesian grids

12.05.14 Hieu Nguyen (Edinburgh)
## Combining domain decomposition and adaptivity: a study for additive Schwarz methods

05.05.14 Aicke Hinrichs (Rostock)
## Complexity of Banach space valued integration in the randomized setting

28.04.14 Mario Ullrich (Jena)
## On integration in Sobolev spaces using Frolov's method

03.04.14 Dinh Zung (Hanoi)
## Sampling Recovery and Cubature on Sparse Grids Based on a B-spline Quasi-Interpolation

20.01.14 Jens Oettershagen (Bonn)
## Integration of bivariate periodic functions with bounded mixed derivatives

03.02.14 Glenn Byrenheid (Bonn)
## Optimal sampling of d-variate periodic functions with respect to energy sparse grids

27.01.14 Karoline Köhler (HU Berlin)
## Efficient and Reliable Error Control for the Obstacle Problem

13.01.14 Fabian Franzelin (Stuttgart)
## Spatially adaptive sparse grid collocation for multivariate peridynamic simulations

17.12.13 Philipp Morgenstern (Bonn)
## Local refinement of regular quadrilateral meshes

03.12.13 Lev Markhasin (Stuttgart)
## Discrepancy and integration in function spaces with dominating mixed smoothness

26.11.13 Sebastian Mayer (Bonn)
## Dimensionality reduction in machine learning

09.02.2017 13:00 (s.t), We6 6.020 David Krieg (Jena)

Let $T$ be a bounded linear operator between two Hilbert spaces. We want to approximate $T$ by an algorithm that evaluates fewer than a given number $n$ of linear functionals. The minimal worst-case error of such an algorithm is given by the $n$th approximation number of $T$. If $T$ is a tensor product operator, this is the $n$th largest number in the tensor product of the sequences of approximation numbers of its factors.
I will talk about the asymptotic and preasymptotic behavior of tensor products of sequences of polynomial decay. The results will be applied to the $L_2$-approximation of mixed order Sobolev functions on the $d$-cube. It turns out that this problem is much harder for nonperiodic functions than for periodic functions, if $n\leq 2^d$. Asymptotically, however, there is no difference at all.
This investigation is inspired by [1] and can be found in [2].

[1] T. Kühn, W. Sickel, T. Ullrich: Approximation of mixed order Sobolev functions on the d-torus – asymptotics, preasymptotics and d-dependence. Constructive Approximation 42, 353–398, 2015.

[2] D. Krieg: Tensor power sequences and the approximation of tensor product operators. ArXiv e-prints, 2016. arXiv:1612.07680 [math.NA]
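The characterization above (approximation numbers of a tensor product operator as the nonincreasing rearrangement of the tensor product of the factors' sequences) is easy to check numerically; the polynomially decaying factor sequence below is an assumed example, not data from [1] or [2]:

```python
import numpy as np

def tensor_sequence(seqs, n):
    """First n elements of the nonincreasing rearrangement of the
    tensor product of the given positive, nonincreasing sequences."""
    prod = np.asarray(seqs[0], dtype=float)
    for s in seqs[1:]:
        prod = np.outer(prod, s).ravel()   # all pairwise products
    return np.sort(prod)[::-1][:n]         # n largest, descending

# Factor approximation numbers a_k = k^{-1}, tensored with themselves:
a = np.arange(1, 101) ** -1.0
t = tensor_sequence([a, a], 6)
# largest products a_i * a_j: 1, then 1/2 twice, then 1/3 twice, then 1/4
```

The multiplicities visible here (each off-diagonal product appears twice) are exactly what drives the logarithmic factors in the preasymptotic regime.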

02.02.2017 13:00 (s.t), We6 6.020 Joachim Rehberg (WIAS Berlin)

In recent years it has become clear to the community, in particular in optimal control, that real-world problems are 'non-smooth': the resulting elliptic/parabolic equations have to be considered on non-smooth domains, often exhibit discontinuous coefficients, and/or are complemented by mixed boundary conditions. In this generality, many of the classical elliptic regularity results fail. The lecture presents substitutes for these results in many of the cases outlined above -- and these substitutes are sharp.

19.01.2017 13:00 (s.t), We6 6.020 Glenn Byrenheid (Bonn)

We apply the theory of unconditional Faber-Schauder characterizations of non-periodic function spaces provided in [1] and extend it to Sobolev spaces $S^r_{p}W([0,1]^d)$ with dominating mixed smoothness. These characterizations yield intrinsic discretizations of $S^r_{p}W([0,1]^d)$ and Besov spaces $S^r_{p,\theta}B([0,1]^d)$ that allow us to study best $m$-term approximation with respect to the Faber-Schauder dictionary, where the error is measured in $L_q([0,1]^d)$. In contrast to linear (sampling) approximation, in the case $p<q$ the exact values of $p$ and $q$ do not affect the asymptotic rate. We always obtain
$$\sigma_m(S^r_pW([0,1]^d),\mathbb{F})_{L_q([0,1]^d)}\lesssim m^{-r} (\log m)^{(d-1)(r+\frac{1}{2})},$$
even in the case $q=\infty$. For mixed Besov spaces $S^r_{p,\theta}B([0,1]^d)$ with $\theta<1$ we prove, in the smoothness range $\frac{1}{p}<r<\min\{\frac{1}{\theta}-1,2\}$, the purely polynomial rate
$$\sigma_m(S^r_{p,\theta}B([0,1]^d),\mathbb{F})_{L_q([0,1]^d)}\lesssim m^{-r}.$$ Note that all our results are obtained by constructive algorithms using coefficients that are generated by a finite number of function evaluations.
This is joint work with Tino Ullrich.

[1] H. Triebel: Bases in function spaces, sampling, discrepancy, numerical integration. EMS Tracts in Mathematics 11, European Mathematical Society (EMS), Zürich, 2010. x+296 pp. ISBN 978-3-03719-085-2
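A one-dimensional sketch of the ingredients (the talk's mixed-smoothness setting uses the tensorized multivariate system): Faber-Schauder coefficients are generated from finitely many function values via second differences, and best $m$-term approximation keeps the $m$ largest coefficients. The test function and the level count are assumptions for illustration:

```python
import numpy as np

def faber_coefficients(f, levels):
    """Faber-Schauder (hierarchical hat) coefficients of f on [0,1],
    computed from point evaluations only:
    d_{j,k} = f(midpoint) - (f(left) + f(right)) / 2."""
    coeffs = {}
    for j in range(levels):
        k = np.arange(2 ** j)
        left, right = k / 2 ** j, (k + 1) / 2 ** j
        d = f((left + right) / 2) - 0.5 * (f(left) + f(right))
        for kk, val in zip(k, d):
            coeffs[(j, int(kk))] = val
    return coeffs

def best_m_term(coeffs, m):
    """Keep the m coefficients of largest absolute value (greedy
    best m-term selection from the Faber-Schauder dictionary)."""
    return dict(sorted(coeffs.items(), key=lambda kv: -abs(kv[1]))[:m])

# For this quadratic, d_{j,k} = 4^{-(j+1)} for every k on level j.
c = faber_coefficients(lambda x: x * (1 - x), 4)
```

The uniform level-wise decay seen here for a smooth function is what the nonlinear (greedy) selection exploits differently than linear sampling schemes.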

12.01.2017 13:00 (s.t), We6 6.020 Constantin Diez (Adam Opel AG, Rüsselsheim)

In order to understand the crash behavior of cars before expensive
tests take place, virtual robustness investigations are becoming
increasingly important in the development process. The evaluation is
most commonly done by monitoring only system responses, such as
intrusions or dummy criteria, with simple statistical measures such as
mean or variance.

In order to extract more information in an even more compressed way,
machine learning methods can be used to help engineers understand their
simulations much faster and also give hints for improvement. To this
end, a dimension reduction scheme was specially designed for crash
simulations; it enables cheap storage of results as well as fast
comparison. Since this reduction makes it possible to compute the
similarity of simulations, clustering methods in combination with
low-dimensional embeddings can be used to find groups of simulations
with similar buckling behavior. Finally, to avoid or reproduce certain
deformation patterns, solutions in terms of variable ranges can be
computed with decision tree learning, so that an engineer knows in
which regions a design may be chosen.

15.12.2016 13:00 (s.t), We6 6.020 Christian Pfeiffer (Bonn)

This presentation applies the space-time DG method of Neumüller to parabolic optimal control problems. The talk includes theoretical results (error estimates for distributed and boundary control problems) as well as numerical experiments.

08.12.2016 13:00 (s.t), We6 6.020 Veronika Karl (Würzburg)

We want to solve a convex optimal control problem with an elliptic state equation and a pointwise state constraint. Because of the low regularity of the Lagrange multipliers, pointwise state constraints cause several difficulties in their numerical treatment and are therefore still challenging. In this talk we apply an adapted Augmented Lagrange Method in order to eliminate the explicitly given state constraints. Using a special update rule, our algorithm yields not only strong convergence of the solutions and weak convergence of the adjoint states, but also boundedness of the multiplier associated with the state constraint and its weak-* convergence.
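The multiplier update at the heart of an augmented Lagrange iteration can be illustrated on a scalar toy problem. Everything below is an assumption standing in for the state-constrained PDE problem: minimize $(x-2)^2$ subject to $x \le 1$, with a plain first-order multiplier update rather than the special rule of the talk:

```python
# Toy augmented Lagrange method for: min (x-2)^2  s.t.  x <= 1.
# KKT solution: x* = 1, mu* = 2. The penalty parameter rho is illustrative.
rho, mu, x = 10.0, 0.0, 0.0
for _ in range(50):
    # Closed-form minimizer of the augmented Lagrangian
    #   (x-2)^2 + (rho/2) * max(0, mu/rho + x - 1)^2
    # (for mu >= 0 the constraint term is always active here):
    x = (4.0 + rho - mu) / (2.0 + rho)
    # first-order multiplier update, clipped at zero
    mu = max(0.0, mu + rho * (x - 1.0))
```

Each outer step contracts the multiplier error by the factor $2/(2+\rho)$, so the iterates approach the KKT pair $(1, 2)$; boundedness of the multiplier sequence is exactly the kind of property the talk establishes in the PDE setting.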

01.12.2016 13:00 (s.t), We6 6.020 Bart Vandereycken (Geneva)

We introduce a criss-cross type algorithm to compute the rightmost point
in the real pseudospectrum of a given matrix, that is, the set of
eigenvalues of all real, norm-bounded, additive perturbations of this
matrix. Existing methods, which are based on eigenvalue optimization
over the rank-2 matrix manifold using steepest descent schemes, may
converge only to locally rightmost solutions, and their convergence can
be slow. To avoid these difficulties, we formally extend the criss-cross
algorithm originally developed for unstructured pseudospectra, and to
reduce its computational cost we exploit a superset characterisation
of the real pseudospectrum. Each criss and cross search is performed on
carefully selected supersets, and involves solving only linear
eigenvalue problems and singular value optimization problems.
Furthermore, we also propose a subspace projection framework, which
combines the criss-cross algorithm with subspace projection techniques
to make the algorithm applicable to large-scale matrices. The developed
algorithms are proven to be globally convergent and their robustness and
efficiency are demonstrated by a number of numerical examples.

Joint work with Ding Lu, University of Geneva.

17.11.2016 13:00 (s.t), We6 6.020 Lucas Bonifacius (TU München)

The goal of this talk is to derive qualified optimality conditions for a class of linear time-optimal control problems with general convex target set. To this end, we state sufficient conditions for strong stability that in turn guarantees the existence of Lagrange multipliers for this state constrained optimal control problem. The theory is based on a characterization of weak invariance of the target set under the controlled equation. An appropriate strengthening of the resulting Hamiltonian condition guarantees strong stability and yields a priori bounds on the size of multipliers, independent of, e.g., the initial point, the running cost or the a priori unknown optimal solution.

03.11.2016 13:00 (s.t), We6 6.020 Constantin Weiser (Düsseldorf)

Integration is the key operation in stochastic calculus. It is used whenever a probability for a continuous random variable has to be computed. In practical applications, these integrals are very often multi-dimensional and no closed-form solution exists. Therefore they have to be approximated numerically. Due to historical developments, Monte Carlo methods have extended their dominance over other numerical methods such as numerical quadrature.

We present two highly relevant econometric models where we use numerical quadrature for computing integrals within an optimization process: the probit model and the random coefficient logit model. For the first, we propose a highly customized sparse grid approach for which we find an impressive improvement in efficiency (accuracy and computational cost) relative to state-of-the-art quasi-Monte Carlo methods. The second model suffers from a strong interrelationship between the integrals. Therefore, an adaptive quadrature algorithm leads to high computational costs, and the standard sparse grid approach does not work better than the standard Monte Carlo approach.

In general, we find that quadrature methods need to be customized in order to be competitive with Monte Carlo methods.
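As a minimal illustration of deterministic quadrature for Gaussian expectations of the kind arising in probit likelihoods (the test function here is an assumption, not one of the talk's models):

```python
import numpy as np

def gauss_hermite_expectation(f, n_nodes):
    """E[f(Z)] for Z ~ N(0,1) via Gauss-Hermite quadrature.
    hermgauss returns nodes/weights for the weight exp(-x^2), so we
    substitute x = sqrt(2)*t and normalize by sqrt(pi)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * f(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Smooth integrand with known expectation: E[cos Z] = exp(-1/2).
est = gauss_hermite_expectation(np.cos, 20)
```

For smooth one-dimensional integrands 20 nodes already reach near machine precision, whereas plain Monte Carlo would need on the order of $10^{20}$ samples for comparable accuracy; sparse grids extend this advantage to moderate dimensions.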

20.10.2016 13:00 (s.t), We6 6.020 Markus Bachmayr (Bonn)

We consider the convergence of sparse Hermite expansions of solutions of elliptic PDEs with lognormally distributed diffusion coefficients. An important ingredient is the choice of an expansion of the Gaussian random field in terms of independent scalar Gaussian random variables, which can be interpreted as a choice of stochastic coordinates. Here it turns out that multilevel-type expansions can lead to better results than Karhunen-Loève decompositions. We establish convergence rates and moreover consider the construction of such multilevel expansions for the important case of Matérn covariances. This talk is based on joint works with Albert Cohen, Ronald DeVore and Giovanni Migliorati.

14.07.16 13:00 (s.t), We6 6.020 Jens Henrik Göbbert (FZ Jülich)

Extracting scientific insight from large simulations is of crucial importance for science. Scientific advances are made only once the data produced by the simulations is processed into a meaningful analysis. But as simulations increase in size for more detail, post-processing is becoming the bottleneck on the way to the desired scientific insight. In situ techniques, like in situ visualization, are known to play an important part in tackling this problem. Sparing some supercomputing time to process, structure, reduce and visualize the data in real time during the simulation offers several benefits. In particular, when data reduction becomes inevitable, it is only during the simulation that all relevant data about the simulated fields and any embedded geometry is readily available at the highest resolution and fidelity for critical decision making. But even though in situ visualization has been successfully deployed for more than two decades, its use still has not gone "mainstream". Scientists today often have to abstain from meaningful in situ visualizations of their simulation data. This talk will give an introduction to in situ visualization and discuss state-of-the-art implementation, optimization and coupling techniques to integrate the needed functionality into each simulation on the basis of VisIt/Libsim and ParaView/Catalyst.

21.07.16 13:00 (s.t), We6 6.020 Bashar Ibrahim

Correct DNA segregation is a fundamental process that ensures the faithful inheritance of genomic information for the propagation of cell life. Segregation failures underlie many human health problems, most notably aneuploidy and cancer. DNA segregation is inherently complex and highly cross-linked. It cannot be understood from reflections on the individual components (proteins, complexes, etc.) alone, but should be understood through considerations involving many components at the same time. Current experimental techniques are not sufficient to make quantitative predictions. Hence, the integration of experimental and mathematical approaches is employed for the understanding of biological systems. In this talk, examples will be presented and mathematical challenges will be discussed. The first challenge is how to distinguish between active transport and diffusion in PDE models of cellular processes, given various data such as live-cell imaging. A further challenge is how to build a whole-cell computational model of DNA segregation.

30.06.16 13:00 (s.t), We6 6.020 Friederike Hellwig, Humboldt-Universität zu Berlin

Since the design of pointwise symmetric stress approximations in H(div, Ω; S) for linear elasticity is cumbersome, especially in 3D, the discontinuous Petrov-Galerkin methodology promises a low-order symmetric stress approximation. In this talk, we use the ultraweak formulation to introduce three new methods with piecewise constant ansatz functions for the displacement and the stress variable, and piecewise affine and continuous interface displacements and piecewise constant interface tractions. The three methods for linear elasticity differ from each other in the choice of norms and the occurrence of some constraint. The minimal test space of piecewise (and, in general, discontinuous) symmetric parts of lowest-order Raviart-Thomas functions and piecewise affine functions allows for a direct proof of the discrete inf-sup condition and a complete a priori and a posteriori error analysis which is robust in the incompressible limit as λ → ∞. Numerical experiments with uniform and adaptive mesh refinements investigate λ-robustness and confirm that the third scheme is locking-free.

23.06.16 13:00 (s.t.) Reinhold Schneider (TU Berlin)

The hierarchical Tucker tensor format and tensor trains (TT) have been introduced to offer stable and robust low-rank approximation at low computational cost. The hierarchical singular value decomposition (HSVD), introduced independently by Hackbusch & Kühn and by Oseledets, and analyzed by Grasedyck, provides a reliable tool for quasi-best tensor product approximation. It can be used in hard/soft thresholding iterations (Bachmayr et al.) providing guaranteed convergence. To reduce the operational cost of the SVDs, we consider the randomized SVD in the sense of Tropp et al., which is based on the Johnson-Lindenstrauß transform. We introduce a randomized HSVD computation by extending these ideas to hierarchical tensors.
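A sketch of the randomized SVD in the sense of Tropp et al. for an ordinary matrix; the talk's contribution applies this idea blockwise within the HSVD of hierarchical tensors. The matrix sizes and ranks below are assumptions for illustration:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Randomized SVD: sketch the range of A with a Gaussian
    (Johnson-Lindenstrauss-type) test matrix, then take an exact SVD
    of the small projected matrix."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis of sketched range
    U_b, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_b[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-5 test matrix: the sketch captures its range almost surely.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5, rng=2)
err = np.linalg.norm(A - (U * s) @ Vt)      # reconstruction error
```

The dominant cost is the product `A @ Omega`, which is why replacing deterministic SVDs by such sketches pays off inside tensor truncation algorithms.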

09.06.16 13:00 (s.t), We6 6.020 Peter Oswald (Bonn)

Continuing on a topic which I already reported on in 2014 in the same seminar, I will survey some more recent developments in the theory of subspace correction methods. The main issues are new randomization strategies (order of subspace access and grouping of subspaces) and acceleration (similar to what we gain if we go from gradient to conjugate gradient methods). I will spend some time on an interesting problem connecting quantitative estimates for matrix paving (the recently proved Anderson Paving Conjecture) with choosing the best equation ordering for the SOR (Gauss-Seidel) iteration.

02.06.16 13:00 (s.t), We6 6.020 Thomas Wick (RICAM)

Currently, fracture propagation and multiphysics problems are major topics in applied mathematics and engineering. One of the most promising methods is based on a variational setting (proposed by Francfort and Marigo) and, more specifically, on a thermodynamically consistent phase-field model. Here a smoothed indicator function determines the crack location and is characterized through a model regularization parameter. In addition, the model assumes that the fracture can never heal, which is imposed through a temporal constraint, leading to a variational inequality system. In this talk, this basic fracture model is augmented with several hints and discussions of serious challenges in developing state-of-the-art numerical methods. Key aspects are robust and efficient algorithms for imposing the previously mentioned crack irreversibility constraint, treatment of the indefinite Jacobian matrix in terms of a monolithic error-oriented Newton approach, computational analysis of the interplay of model and discretization parameters, and finally coupling to other multiphysics problems such as fluid-filled fractures in porous media, fluid-structure interaction, and steps towards high performance computing for tackling practical field problems. Our focus in these developments is on goal functional evaluations, i.e., certain quantities of interest. This aim is accomplished with adaptive finite elements and, more specifically, with predictor-corrector mesh adaptivity on the one hand and partition-of-unity dual-weighted residual a posteriori error estimation on the other. Together with high performance computing, mesh adaptivity allows for systematic investigations of the previously mentioned problem settings.

19.05.16 13:00 (s.t), We6 6.020 Paul Sinz

A systematic method for identifying the worst-case load among all boundary loads of a fixed energy is introduced. Here the worst-case load delivers the largest fraction of input energy into a prescribed subdomain of interest. This leads to an eigenvalue problem, for which the largest eigenvalue is the maximum fraction of energy that can penetrate the subdomain. The associated eigenfunctions are the worst-case solutions. The properties of these eigenfunctions motivate a particular Generalized Finite Element Method (GFEM) called the Multiscale Spectral GFEM (MS-GFEM), developed by Babuška and Lipton (2011). The MS-GFEM method and some computations are discussed.

19.05.16 14:00 (s.t), We6 6.020 Gabriel Barrenechea

In this talk I will review some recent results on the stabilisation of finite element methods in anisotropic quadrilateral meshes. Our first attempt is the family $Q_{k+1}\times P_{k-1}$. This pair is inf-sup stable, but its stability constant depends on the aspect ratio of the triangulation. We then identify the minimal number of pressure modes responsible for this behaviour, and we propose a method that penalises them in such a way that the resulting scheme is stable independently of the aspect ratio. Next, we move to the, optimal and not inf-sup stable, $Q_1\times P_0$ pair, and the Oseen equation.

28.04.16 13:00 (s.t), We6 6.020 Christoph Kacwin (Bonn)

As a representative of hyperbolic cross methods, the Frolov cubature formula is currently taking center stage in research on numerical integration, since it has optimal convergence rates for a wide range of function spaces, especially the Besov and Triebel-Lizorkin spaces with dominating mixed smoothness. However, finding the integration nodes for this cubature formula has proven to be quite difficult with increasing dimension. We introduce algorithms for this problem and deal with the special case of Chebyshev-Frolov lattices, which surprisingly turn out to be orthogonal. Finally, we compare the worst-case error of this method to other up-to-date integration schemes. This is joint work with J. Oettershagen and T. Ullrich.

21.03.16 13:00 (s.t), We6 6.020 Ilyass Tabiai (Montreal)

An overview of Fiber Reinforced Composites' (FRCs) manufacturing and its consequences on the properties of the cured material will be presented. FRCs are increasingly used for primary structures in the aerospace industry. Damage in FRCs starts at the fiber level and then grows through the heterogeneous material. Quantitative data of the deformation fields in the fiber's vicinity during damage are essential to calibrate simulation models able to handle simultaneous initiation and growth of several damage mechanisms. After analysis of the results, an in-plane displacement resolution of ±0.1 μm can be reached. Results from such experiments will be presented. A 1D Peridynamics-based Prony series viscoelastic constitutive model has been developed. Such a model can be implemented in the existing Peridigm open-source software for Peridynamics simulations. Full-field measurements have been prepared for a 1D fiber, a 2D film and a 3D dogbone specimen under quasi-static tensile loading. A Peridynamics inverse method based on the quantification of the energy available at each material point can be developed to implement and calibrate an energetic damage model.

31.03.16 13:00 (s.t), We6 6.020 Mario Ullrich (JKU Linz, Austria)

17.03.16 13:00 (s.t), We6 6.020 Yurii Kolomoitsev (NAS, Kiev)

In the talk we will discuss recent progress in the investigation of a problem posed in the 1960s by Ulyanov: to find sharp inequalities between moduli of smoothness in different Lebesgue spaces. We consider general sharp Ulyanov inequalities for moduli of smoothness, generalized $K$-functionals, and their realizations in the spaces $L_p(\mathbb{R}^d)$. The main feature of our work is that we deal with the Ulyanov inequalities in the case $0<p<1$ and in the limit cases $p = 1$ and $p=\infty$.

17.03.16 14:00 (s.t), We6 6.020 Bangti Jin (UCR)

In this talk, we consider boundary value problems involving either Caputo or Riemann-Liouville fractional derivatives of order $\alpha\in (1,2)$ on the unit interval (0,1). The variational problem for the Riemann-Liouville case is coercive on the space $H^{\alpha/2}_0$ but the solutions are less regular, whereas that for the Caputo case involves different test and trial spaces. We establish the regularity pickup of the solutions of the variational problem, which enables one to establish convergence rates of the finite element approximations. Finally, numerical results are presented to illustrate the error estimates.

10.03.16 13:00 (s.t), We6 6.020 Sergey Tikhonov (CRM, Barcelona)

In this talk we will discuss recent developments in the theory of polynomial inequalities with weights. An overview of the problem will be given. The talk does not require any prior knowledge of this topic.

10.03.16 14:00 (s.t), We6 6.020 Fabian Franzelin (Stuttgart)

The generalized polynomial chaos expansion (gPCE) is a very attractive technique for uncertainty quantification. However, it assumes model inputs that are independent random variables. For non-independent random variables the basis polynomials of the gPCE are no longer orthogonal to the probability density, which decreases the efficiency of all PCE-based methods. Moreover, they are not even applicable for data-driven problems, which arise in the context of data assimilation. A common way to overcome these issues, at least theoretically, is isoprobabilistic transformations. This talk will discuss isoprobabilistic transformations such as the Nataf and the Rosenblatt transformation that decorrelate random variables for improving the efficiency of emulator-based Bayesian inverse problems. We will present numerical studies that demonstrate the effectiveness of these transformations and present some alternatives such as Leja sequences in combination with arbitrary polynomial chaos expansion.

18.02.16 13:00 (s.t), We6 6.020 Winnifried Wollner (TU Darmstadt)

In many processes modeled by partial differential equations the size of derivatives plays a decisive role. Prototypical examples include problems of damage or plasticity. When optimizing such processes, additional constraints on the size of the derivative need to be included. The analysis of such optimization problems is complicated by the mismatch between the natural topology for the derivative constraint and the weaker natural topology for solutions of the partial differential equation. The talk will be concerned with the existence of solutions to such problems as well as their approximation by numerical methods.

28.01.16 13:00 (s.t), We6 6.020 Ira Neitzel (Bonn)

PDE constrained optimal control problems with pointwise state
constraints are known to cause certain theoretical and numerical
difficulties. In this talk, we are concerned with an elliptic Dirichlet
boundary control problem with low regularity of the state variable.
After discussing first order optimality conditions in KKT-form, we will
focus on presenting a priori error estimates for the finite element
discretization of such linear-quadratic problems with pointwise state
constraints in the interior of the domain.

This is joint work with M. Mateos.

21.01.16 13:00 (s.t), We6 6.020 Thomas Heller (Erlangen)

The High Performance Computing (HPC) community is facing a technology shift which will result in a performance boost of three orders of magnitude within the next 5 years. This rise in performance will mainly be achieved by increasing the level of concurrency, in such a way that a user of those systems needs to accommodate billion-way parallelism. The main problems to solve are programmability, portability, energy saving and resiliency. The author believes that leveraging modern C++ will lead to a possible solution to those problems from a software perspective. This talk will discuss the use of C++ in such a massively parallel environment, using the HPX parallel runtime system - a future-based API - to present a lightweight and efficient mechanism to support massive multi-way parallelism.

14.01.16 13:00 (s.t), We6 6.020 Johannes Holke (Bonn)

Load-balancing is essential to ensure scalability of parallel numerical applications. On adaptive meshes that are refined and coarsened frequently during the computation (i.e. in every time step) space-filling curves have been established as a fast and reliable tool for this task. We introduce a space-filling curve for adaptive triangular and tetrahedral meshes that can be computed using bitwise interleaving operations similar to the well-known Morton curve for cubical meshes. Using this space-filling curve we establish constant time algorithms to compute the parent, children and face-neighbors of a given mesh element, as well as the next and previous element in the space-filling curve.
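
The bitwise interleaving that underlies such curves can be illustrated with the classical 2D Morton curve for cubical meshes mentioned in the abstract (a minimal sketch; the triangular/tetrahedral curve of the talk is more involved):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of x and y: x occupies the even bit
    positions, y the odd ones (classical 2D Morton/Z-order index)."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def morton_decode(z, bits=16):
    """Invert the interleaving to recover the (x, y) coordinates."""
    x = y = 0
    for i in range(bits):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y
```

Sorting mesh elements by this index yields the space-filling-curve order used for partitioning; parent, children and neighbors are then reachable by constant-time bit manipulations.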

17.12.15 13:00 (s.t), We6 6.020 Carsten Carstensen (HU Berlin)

03.12.15 13:00 (s.t), We6 6.020 Dietrich Braess (Bochum)

The two-energies principle allows one to achieve reliable a posteriori error bounds without unknown constants. They are also efficient. The error bounds are p-robust while residual error estimates are not. We establish these estimates for discontinuous Galerkin methods (DG) that are now very popular. We use the uniform treatment by Arnold et al. with a mixed method to construct equilibrated fluxes. The construction by a local procedure is here simpler than in the case of conforming Lagrange elements. This is a hint to other advantages of DG. We focus on the Poisson equation and comment on the more involved treatment of the biharmonic equation.

26.11.15 13:00 (s.t), We6 6.020 V. N. Temlyakov (U South Carolina, Columbia)

We discuss the construction of coverings of the unit ball of a finite-dimensional Banach space. The well-known technique of comparing volumes gives upper and lower bounds on covering numbers, but it does not provide a construction of good coverings. Here we study incoherent systems and apply them to the construction of good coverings. We use the following strategy. First, we build a good covering by balls with a radius close to one. Second, we iterate this construction to obtain a good covering for any radius. We mostly concentrate on the first step of this strategy.

19.11.15 13:00 (s.t), We6 6.020 Emil Kieri (Uppsala)

We consider low-rank approximations to large time-dependent matrices and tensors. When singular values in the low-rank approximation tend to zero, the curvature of the approximation manifold grows, forcing standard time-stepping schemes to take very small time steps. This is a situation one has to expect. Novel time integrators, based on splitting the projection onto the tangent space of the low-rank manifold, were recently proposed. Standard error estimates for splitting methods break down when the singular values are small, since the Lipschitz constant of the projection is then large. On the other hand, the integrators are exact when the given time-dependent matrices or tensors are already of the prescribed rank. We provide an error analysis which unifies these properties. We show that in cases where the exact solution is an epsilon-perturbation of a low-rank matrix or tensor train, the error of the integrator is favourably bounded in terms of epsilon and the stepsize, independently of the smallness of the singular values. Such a result does not hold for any standard integrator.
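
As a hedged illustration of the splitting approach (a sketch of the first-order projector-splitting step for matrices in the style of Lubich and Oseledets; the variable names are ours), one step updating the rank-$r$ factorization $USV^T$ by an increment $D = A_1 - A_0$ reads:

```python
import numpy as np

def ksl_step(U, S, V, D):
    """One first-order projector-splitting (KSL) step: update the
    rank-r factorization U S V^T by the increment D = A1 - A0."""
    # K-step: advance K = U S + D V and re-orthonormalize
    U1, S_hat = np.linalg.qr(U @ S + D @ V)
    # S-step: subtract the doubly projected increment (backward substep)
    S_tilde = S_hat - U1.T @ D @ V
    # L-step: advance L = V S~^T + D^T U1 and re-orthonormalize
    V1, S1_T = np.linalg.qr(V @ S_tilde.T + D.T @ U1)
    return U1, S1_T.T, V1
```

When $A_0$ and $A_1$ have rank $r$ with the same row and column spaces, this step reproduces $A_1$ exactly, in line with the exactness property mentioned above.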

12.11.15 13:00 (s.t), We6 6.020 Thomas Kühn (Leipzig)

The classical Gevrey classes, already introduced in 1916, play an important role in analysis, in particular in the context of PDEs. They consist of $C^\infty$-functions on $\mathbb{R}^d$ whose derivatives satisfy certain growth conditions. For periodic functions, these growth conditions can be expressed in terms of Fourier coefficients. Using this approach, I will introduce periodic Gevrey spaces $G^{s,c}(\mathbb{T}^d)$ on the $d$-dimensional torus $\mathbb{T}^d$, where $s\in(0,1)$ is a smoothness parameter and $c>0$ a fine index. All these spaces contain non-analytic functions, so they are between analytic functions and functions of finite smoothness. The talk is devoted to the study of approximation numbers $a_n$ of the embeddings $ G^{s,c}(\mathbb{T}^d)\hookrightarrow L_2(\mathbb{T}^d)$, with special emphasis on the dependence of the "hidden" constants on the dimension $d$. In particular, the exact asymptotic rate of $a_n$ as $n\to \infty$ is determined. Not surprisingly, this rate is sub-exponential and faster than polynomial. We also find preasymptotic two-sided estimates for small $n$. In high dimensions this is the most relevant case for numerical purposes. These results are based on joint work with Martin Petersen (Leipzig).

05.11.15 13:00 (s.t), We6 6.020 Glenn Byrenheid (Bonn)

We consider the approximation of functions with hybrid mixed smoothness based on rank-1 lattice sampling. We prove upper and lower bounds for the sampling rates with respect to the number of lattice points in various situations and achieve improvements in the main error rate compared to earlier contributions to the subject. This main rate (without logarithmic factors) is half the optimal main rate coming for instance from sparse grid sampling and turns out to be best possible among all algorithms taking samples on rank-1 lattices.

29.10.15 13:00 (s.t), We6 6.020 Christian Kuske (Bonn)

It is hard to find an approximation of the Helmholtz operator for an arbitrary frequency range. $\mathcal{H}$-Matrices are feasible for fixed frequencies only. The aim of our approach is a method which is capable of generating an approximation for a frequency range out of few chosen frequencies. The basic idea originates in the so-called multivariate adaptive cross approximation (MACA). This method is a generalization of ACA to multivariate functions, preserving the logarithmic-linear complexity in both runtime and storage. MACA constructs a low-rank tensor approximation for the entire frequency range by approximating the chosen frequencies with $\mathcal{H}$-Matrices. The resulting system can be solved with an adjusted tensor-valued Krylov subspace method and an available preconditioner.

18.06.15 13:00 (s.t), We6 6.020 David Molitor (Bonn)

We want to compare the success rate of a classification algorithm on data in $\mathbb{R}^d$ with the success rate on the same data, which has previously been projected onto $\mathbb{R}^k$, $k\leq d$. To do this, we will use Johnson-Lindenstrauß embeddings and Support Vector Machines. Finally, we want to compare our numerical results with theoretic bounds proved in the recent paper "Compressive classification and the rare eclipse problem" by Bandeira, Mixon and Recht.
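
The projection step can be sketched in a few lines (an assumed toy setup; the SVM classification itself is omitted). By the Johnson-Lindenstrauss lemma, a scaled Gaussian random matrix preserves pairwise distances up to a small distortion with high probability:

```python
import numpy as np

def jl_project(X, k, rng):
    """Project the rows of X from R^d to R^k with a scaled Gaussian
    matrix; pairwise distances are preserved up to a small distortion
    with high probability (Johnson-Lindenstrauss)."""
    d = X.shape[1]
    P = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ P

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 1000))   # 10 data points in R^1000
Y = jl_project(X, 200, rng)           # the same points in R^200
```

A classifier trained on the projected data `Y` then operates in $\mathbb{R}^k$ instead of $\mathbb{R}^d$, which is the setting compared in the talk.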

11.06.15 13:00 (s.t), We6 6.020 Dr. Andreas Mueller (Naval Postgraduate School, Monterey, CA, USA)

Even with today's supercomputers it is still impossible to explicitly represent all scales involved in weather prediction. A large number of different numerical methods have been developed for improving the efficiency of numerical simulations. Among those methods are the element-based Galerkin methods, which allow arbitrarily high order for the spatial discretization and promise excellent scalability on large supercomputers.

The author is one of the developers of the dynamical core NUMA which is used within the next-generation weather prediction model NEPTUNE of the US Navy. NUMA incorporates many recent developments like element-based Galerkin methods and adaptive mesh refinement. The presentation gives an introduction to the solution of the Euler equations of gas dynamics with continuous (spectral element) and discontinuous Galerkin methods.

Recently, the author has shown strong scaling of NUMA on more than 2.7 million threads using the continuous Galerkin method. Two test cases are considered: 1) a baroclinic instability test case by Jablonowski and Williamson (2006) on the sphere using a cubed sphere mesh and 2) a 3D rising thermal bubble test case in a cube. Details of this scalability study are presented and the sensitivity of the results with respect to the polynomial order of the spatial discretization is discussed.

28.05.15 13:00, We6 6.020 Sebastian Mayer (University of Bonn)

Sparsity proved to be a highly useful property in modern numerical analysis, for instance in the context of compressive sensing or sparse grid methods. In this talk, I consider what effects sparsity assumptions and relaxations thereof have in connection with the reconstruction of multivariate functions from partial information. Here, I am particularly interested in understanding the effects from the viewpoint of information-based complexity. Two model function classes will be in the focus of the talk. On the one hand, we study classes of ridge functions $f(x) = g(a \cdot x)$, where the ridge direction $a$ is assumed to be compressible. On the other hand, we consider periodic Sobolev classes, where the Fourier frequency vectors are assumed to be compressible or to lie in a hyperbolic cross.

21.05.15 13:00, We6 6.020 Alfonso Caiazzo (WIAS Berlin)

This talk focuses on the modeling of elastic materials composed of an incompressible elastic matrix and small compressible gaseous inclusions, under a time-harmonic excitation. In a biomedical context, this model describes the dynamics of a biological tissue (e.g. liver) when wave analysis methods (such as Magnetic Resonance Elastography) are used to estimate tissue properties. Due to the multiscale nature of the problem, direct numerical simulations are prohibitive. First, we describe a homogenized model for the time-harmonic regime which describes the solid-gas mixture as a compressible material in terms of an effective elasticity tensor. Next, we derive and validate numerically analytical approximations for the effective elastic coefficients in terms of macroscopic parameters only. This simplified description is used to set up an inverse problem for the estimation of the tissue porosity, using the mechanical response to external harmonic excitations.

07.05.15 13:00, We6 6.020 Daniel Elfverson (University of Uppsala)

We want to estimate the probability that a functional is below or above some critical threshold. Standard Monte Carlo is infeasible due to the huge number of samples that are needed to produce even a single rare event. A remedy is to use variance reduction techniques, e.g., subset simulation. In this work we consider a multilevel subset simulation approach with a varying number of samples on the different levels. The key idea in our new method is to use a posteriori error estimators to guarantee the critical subset property that may be violated when changing the model resolution from one failure set to the next.

07.05.15 13:30, We6 6.020 Fredrik Hellman (University of Uppsala)

We analyze the multilevel Monte Carlo (MLMC) method for the irregular failure probability functional. The failure probability is the value of the cumulative distribution function for the quantity of interest at a given point. The low regularity makes MLMC perform suboptimally. The present work uses a posteriori error estimates for the quantity of interest to regain optimal performance. The issue of estimating the numerical bias of the estimator (which is an essential but difficult part of the MLMC algorithm) is also discussed.

23.04.15 13:00, We6 6.020 Lutz Gross (University of Queensland)

The inversion of geophysical data is the attempt to build a three-dimensional model of the Earth's subsurface from data collected on or near the surface. The problem can be formulated as an optimization problem minimizing the defect of measurements and corresponding predictions subject to constraints from physical models in the form of partial differential equations. It is common practice to solve the problem using a 'first discretize then optimize' approach. Although very efficient for small to medium size problems, the approach has strong limitations for large-scale data sets and non-quadratic inversion problems, for instance for electro-magnetic data sets and joint inversion problems. In the talk we will present an alternative framework tailored for using the finite element method following the 'first optimize then discretize' approach. As this approach maintains the problem sparsity throughout the inversion the calculations can easily be applied to large spatial grids in parallel across thousands of processor cores with good scalability. We will discuss mathematical and computational aspects of the approach in the context of the inversion of magnetic and gravity anomaly data. As an application scenario we will present the inversion of satellite gravity data over the entire Australian continent.

12.03.15 14:00, We6 6.020 Vladimir N. Temlyakov (South Carolina)

We study sparse representations and sparse approximations with respect to redundant dictionaries and bases. We address the problem of designing and analyzing greedy methods of approximation. A key question in this regard is: How to measure efficiency of a specific algorithm? Answering this question we prove the Lebesgue-type inequalities for algorithms under consideration. A very important new ingredient of the talk is that we perform our analysis in a Banach space instead of a Hilbert space. It is known that in many numerical problems users are satisfied with a Hilbert space setting and do not consider a more general setting in a Banach space. There are known arguments that justify interest in Banach spaces. In this talk we give one more argument in favor of consideration of greedy approximation in Banach spaces. We introduce a condition on a dictionary in a Banach space which is a generalization of the RIP concept in a Hilbert space. We analyze the Weak Chebyshev Greedy Algorithm, which is a generalization of the Orthogonal Greedy Algorithm (Orthogonal Matching Pursuit) for Banach spaces. We demonstrate the power of this method on several examples, including the trigonometric system.

12.03.15 15:00, We6 6.020 Dinh Dung (Hanoi)

We give tight upper and lower bounds of the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case where specific summability properties of the smoothness indices are fulfilled. These estimates are then applied to the linear approximation of functions from the associated spaces in terms of the $\varepsilon$-dimension of their unit balls. Here, the approximation is based on linear information. Such function spaces appear for example for the solution of parametric and stochastic PDEs. The obtained upper and lower bounds of the approximation error as well as of the associated $\varepsilon$-complexities are completely independent of any dimension. Moreover, the rates are independent of the parameters which define the smoothness properties of the infinite-variate parametric or stochastic part of the solution. These parameters are only contained in the order constants. This way, linear approximation theory becomes possible in the infinite-dimensional case and corresponding infinite-dimensional problems get tractable. This is joint work with M. Griebel (Bonn).

02.03.15 14:00, We6 6.020 Wolfgang Hackbusch (MPI Leipzig and CAU Kiel)

The best approximation of a matrix by a low-rank matrix can be obtained by the singular value decomposition. For large-sized matrices this approach is too costly. Instead one may use a block decomposition. Approximating the small submatrices by low-rank matrices and agglomerating them into a new, coarser block decomposition, one obtains a recursive method. The required computational work is $O(rnm)$, where $r$ is the desired rank and $n\times m$ is the size of the matrix. The talk discusses new error estimates for $A-B$ and $M-A$, where $A$ is the result of the recursive truncation applied to $M$, while $B$ is the best approximation.
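
A minimal sketch of one agglomeration level, assuming a 2x2 block partition and plain SVD truncation (the actual method applies this recursively over a block cluster tree):

```python
import numpy as np

def truncate(M, r):
    """Best rank-r approximation via the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def block_truncate(M, r):
    """One agglomeration level: truncate each submatrix of a 2x2
    block partition to rank r, then truncate the agglomerated
    (rank at most 4r) matrix to rank r again."""
    n, m = M.shape
    h, w = n // 2, m // 2
    A = np.empty_like(M, dtype=float)
    for rows in (slice(0, h), slice(h, n)):
        for cols in (slice(0, w), slice(w, m)):
            A[rows, cols] = truncate(M[rows, cols], r)
    return truncate(A, r)
```

The block-wise SVDs act on much smaller matrices, which is the source of the $O(rnm)$ cost; the error estimates of the talk quantify how much is lost relative to the best approximation $B$.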

22.01.15 13:00, We6 6.020 Dirk Pflüger and Fabian Franzelin (Stuttgart)

In the first part of the presentation we will give an overview of typical challenges for sparse grid methods that are posed by applications. Think of black-box simulation codes on future exascale machines where one has limited access to code and numerics. Think of surrogate modeling for optimization or uncertainty quantification where every function evaluation is costly, or of data-driven problems where the amount of data is limited. These problems impose new requirements on sparse grids.

In the second part we present a novel data-driven approach to propagate uncertainty. We use an integrated sparse grid approach for the uncertainty quantification pipeline from data to the stochastic analysis of the quantity of interest of some physical model. The integrated sparse grid approach gives us two main advantages: First, we can easily link the density estimation to the adaptive sparse grid collocation method to propagate uncertainty. Second, we can efficiently estimate the moments of the sparse grid surrogate without any additional approximation errors. We evaluate the new approach using an expensive subsurface flow simulation.

15.01.15, We6 6.020 Philipp Morgenstern (Bonn)

At present, the main interest in IGA is in finding discrete function spaces that integrate well into CAD applications and, at the same time, can be used for Finite Element Analysis. One of the most promising approaches is T-splines, introduced as a free-form geometric technology in 2003. We present an efficient adaptive refinement procedure that preserves analysis-suitability of the T-mesh, that is, the linear independence of the T-spline blending functions. We prove analysis-suitability of the overlays and boundedness of their cardinalities, nestedness of the generated T-spline spaces, and linear computational complexity of the refinement procedure in terms of the number of marked and generated mesh elements.

08.01.15, We6 6.020 André Uschmajew (Bonn)

Let $A$ and $B$ be invertible, and let $C$ have low rank. Then it is trivial that the rank of the solution to the matrix equation $A X B^T = C$ is not larger than the rank of $C$. Formally speaking, this matrix equation admits a low-rank solution. There is some numerical evidence that the solution of frequently occurring linear equations like $A_1 X B_1^T + \dots + A_R X B_R^T = C$, where the number of terms is still considerably smaller than the matrix dimension (which might be infinite in a Hilbert space setting), has good low-rank approximability, that is, fast decaying singular values. Exponential decay, as sometimes observed in practice, seems very challenging to prove. It is, however, rather easy to obtain a nontrivial algebraic decay rate by relating the convergence speed of some polynomial approximation to the Kronecker rank of the corresponding operator polynomial. For an eigenvalue problem $AXB^T = \lambda X$ a similar argumentation is possible, although with considerable restrictions. This simple method of estimating the low-rank approximation error for matrices has important consequences for the low-rank approximability of solutions to Kronecker-structured operator equations in higher-order tensor product spaces, as it provides estimates on the singular value decay of certain matrix unfoldings of the solution tensor, which in turn govern the error in higher-order SVD truncations. The talk is based on joint work with Daniel Kressner.
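
A small numerical illustration of such singular-value decay (an assumed toy example, not from the talk): for the two-term equation $AX + XA = cc^T$ with diagonal SPD $A = \mathrm{diag}(a)$, the solution is the Cauchy-like matrix $X_{ij} = c_i c_j/(a_i + a_j)$, whose singular values decay rapidly:

```python
import numpy as np

# A = diag(a) SPD and C = c c^T of rank one; then (A X + X A)_{ij}
# = (a_i + a_j) X_{ij} = c_i c_j, so the solution is Cauchy-like.
a = np.arange(1.0, 51.0)                 # diagonal of A
c = np.ones(50)                          # C = c c^T, rank one
X = np.outer(c, c) / (a[:, None] + a[None, :])

s = np.linalg.svd(X, compute_uv=False)
decay = s / s[0]                         # rapid singular-value decay
```

Although $X$ is formally full rank, its normalized singular values drop below machine-relevant thresholds after a handful of terms, so $X$ is well approximated by a low-rank matrix.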

18.12.14 Beatrice Vedel (Bretagne-Sud)

Multifractal analysis has been proposed to describe the complexity of roughness in numerous phenomena, such as turbulent flows or financial data. In the first part of the talk, I will introduce classical multifractal analysis and show how it can be applied to real data. In the second part, I will present our latest work on a hyperbolic multifractal analysis able to describe roughness in the presence of anisotropy.

04.12.14 Van Kien Nguyen (Jena)

Let $X$ be a Banach space and $T:\, X\to X$ be a compact linear operator. By $(\lambda_n(T))_{n=1}^\infty$ we denote the sequence of all non-zero eigenvalues of $T$. The particular interest in Weyl numbers comes from the fact that they are the smallest known $s$-numbers satisfying the famous Weyl-type inequalities, i.e., $$ \Big( \prod_{k=1}^{2n-1} |\lambda_k (T)|\Big)^{1/(2n-1)} \le \sqrt{2e} \, \Big( \prod_{k=1}^{n} x_k (T)\Big)^{1/n} $$ for all $n \in \mathbb{N}$. The above inequality shows that Weyl numbers can be used to control the distribution of the eigenvalues of $T$. In this talk we shall give sharp estimates of the Weyl numbers of embeddings of Besov spaces of dominating mixed smoothness into Lebesgue spaces.

27.11.14 Room We6, 6.020 Seyedehsomayeh Hosseini (ETH Zürich)

This talk is concerned with the numerical solution of optimization problems defined on Riemannian manifolds where the objective function may be nonsmooth. Such problems arise in a variety of applications, e.g., in computer vision, signal processing, motion and structure estimation, or numerical linear algebra. We present a descent direction method for finding extrema of locally Lipschitz functions defined on Riemannian manifolds. Then, we establish the convergence of our algorithm to a stationary point.

20.11.14 Anna Persson (Göteborg)

We use local orthogonal decomposition techniques to derive a generalized finite element method for a parabolic equation with multiscale coefficients. A review of existing results for elliptic multiscale problems is given and we discuss how to extend these to the parabolic case. We conclude by outlining the proof of optimal order convergence rate of the error in the L_2-norm (in space).

13.11.14 Mira Schedensack (HU Berlin)

This talk establishes the equivalence of the conforming $P_1$ finite element
method, the nonconforming Crouzeix-Raviart finite element method,
and several first-order discontinuous Galerkin finite element methods
in the sense that the respective energy error norms are equivalent
up to generic constants and higher-order data oscillations in a Poisson
model problem.
The Raviart-Thomas mixed finite element method is better
than the previous methods whereas the conjecture of the converse relation
is proved to be false.

Similar results hold for linear elasticity. The error of two nonconforming
first-order methods is bounded by the error of the conforming
first-order method up to a multiplicative constant which is robust
with respect to the incompressible limit while the converse estimates
deteriorate
for the Lam\'e parameter $\lambda\to\infty$. This reflects the
well-known locking
behaviour of the conforming $P_1$ finite element method.

06.11.14 Henning Kempka (TU Chemnitz)

Lebesgue spaces with variable exponent $L_{p(\cdot)}$ were introduced already in 1931 by Orlicz. The recent interest in these spaces results from emerging applications in the theory of non-Newtonian fluids and image processing. Surprisingly, Diening showed in 2004 that the Hardy-Littlewood maximal operator acts boundedly on $L_{p(\cdot)}$, which was the functional-analytic breakthrough and started a growing interest in variable exponent Lebesgue spaces. Many scientists are nowadays interested in function spaces with variable exponents, and a lot of other function spaces (Sobolev, Hölder, Besov, Triebel-Lizorkin, Hardy, Morrey, ...) with variable integrability and variable smoothness are under consideration today. We focus on the basic definitions of $L_{p(\cdot)}$ and the Besov-Triebel-Lizorkin spaces $B^{s(\cdot)}_{p(\cdot),q(\cdot)}$ and $F^{s(\cdot)}_{p(\cdot),q(\cdot)}$ with all three indices variable and show some important properties in these scales.

30.10.14 Dietmar Gallistl (Bonn)

The adaptive finite element approximation of multiple eigenvalues
leads to the situation of eigenvalue clusters because the
eigenvalues of interest and their multiplicities may not
be resolved by the initial mesh.
In practice, even small perturbations in the coefficients or in
the geometry immediately lead to an eigenvalue cluster.
The first part of this talk presents an adaptive algorithm for eigenvalue
clusters and shows optimal decay rates of the error in terms
of degrees of freedom. Numerical tests suggest that the adaptive cluster
approximation is superior compared to the use of an adaptive scheme for
each eigenvalue separately, even if the multiplicities are known.

The optimality in terms of the concept of nonlinear approximation
classes is concerned with the approximation of invariant subspaces
spanned by eigenfunctions of an eigenvalue cluster.
In order to obtain eigenvalue error estimates in the case of
nonconforming finite element approximations, the second part of
the talk presents new error estimates for nonconforming finite elements
which relate the error of the eigenvalue approximation to the error of
the approximation of the invariant subspace.

23.10.14 Holger Rauhut (RWTH Aachen)

Compressive sensing enables accurate recovery of approximately sparse vectors from incomplete information. We apply this principle to the numerical solution of parametric operator equations where the parameter domain is high-dimensional. In fact, one can show that the solution of certain parametric operator equations (parametric PDEs) is analytic in the parameters which can be exploited to show convergence rates for nonlinear (sparse) approximation. Building on this fact, we show that methods from compressive sensing can be used to compute approximations from samples (snapshots) of the parametric operator equation for randomly chosen parameters, which in turn can be computed by standard techniques including Petrov-Galerkin methods. We provide theoretical approximation rates for this scheme. This is joint work with Christoph Schwab.
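
The sparse-recovery step can be sketched with a plain orthogonal matching pursuit (a generic toy implementation, not the authors' code; the Gaussian measurement matrix and the sparsity level are assumptions for illustration):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the residual, then re-fit on the chosen support
    by least squares.  Recovers an s-sparse x from y = A x when A has
    sufficiently many random (e.g. Gaussian) rows."""
    n = A.shape[1]
    support, r = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

In the setting of the talk, the "measurements" are snapshots of the parametric solution at random parameter values, and the sparse vector collects the dominant coefficients of its polynomial expansion.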

14.07.14 Toni Volkmer (TU Chemnitz)

We consider the approximation of multivariate periodic functions $f$ belonging to
subspaces of the Wiener algebra. For the approximation of $f$, a trigonometric
polynomial $p$ with frequencies supported on an index set $I\subset\mathbb{Z}^d$ is
used and the Fourier coefficients of $p$ are computed by sampling the function $f$
along a rank-1 lattice. For this computation, a fast algorithm can be used,
which is based mainly on a single one-dimensional fast Fourier transform, and
the arithmetic complexity of the algorithm is
$\mathcal{O}(|I|^2 \log |I| + d|I|)$.
We show upper bounds for the approximation error and achieve an optimal order
of convergence, when using suitable frequency index sets $I$. Numerical results
are presented up to dimension $d=10$, which confirm the theoretical findings.

This is joint work with Lutz Kämmerer and Daniel Potts.
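
The reconstruction via a single one-dimensional FFT can be sketched as follows (a hypothetical toy setup: $d=2$, a small index set $I$, and a lattice whose addresses $\mathbf{k}\cdot\mathbf{z} \bmod M$ are pairwise distinct, so the trigonometric polynomial is recovered exactly):

```python
import numpy as np

# toy setup: d = 2, a small frequency index set I, and a rank-1
# lattice Lambda(z, M) whose addresses k.z mod M are pairwise distinct
I = np.array([[0, 0], [1, 0], [0, 1], [2, 1]])
z, M = np.array([1, 3]), 11
fhat = np.array([1.0, 0.5, -0.25, 2.0])       # coefficients of p

kz = (I @ z) % M                              # addresses: 0, 1, 3, 5
jj = np.arange(M)
# samples of p at the lattice nodes x_j = (j/M) z:
# p(x_j) = sum_k fhat_k exp(2 pi i j (k.z) / M)
samples = np.exp(2j * np.pi * np.outer(jj, I @ z) / M) @ fhat

# a single one-dimensional FFT recovers every coefficient at its address
fhat_rec = (np.fft.fft(samples) / M)[kz]
```

Along the lattice, the $d$-variate exponential $\mathrm{e}^{2\pi\mathrm{i}\,\mathbf{k}\cdot\mathbf{x}_j}$ collapses to the univariate exponential with frequency $\mathbf{k}\cdot\mathbf{z} \bmod M$, which is why one 1D FFT of length $M$ suffices.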

14.07.14 Lutz Kämmerer (TU Chemnitz)

The approximation of functions \(f\) in \(d\) spatial dimensions by sparse multivariate trigonometric polynomials supported on known frequency index sets \(I\subset\mathbb{Z}^d\) is an important task with a variety of applications. We are interested in computing such an approximation from function values in a fast way. Rank-1 lattices as sampling scheme allow for the fast approximation of the Fourier coefficients \(\hat{f}_{\mathbf{k}}\), \(\mathbf{k}\in I\), of \(f\) using a single one-dimensional fast Fourier transform.

For fixed arbitrary frequency index sets \(I\) and suitable rank-1 lattice size \(M\), we apply a simple component-by-component construction in order to determine a generating vector \(\mathbf{z}\in\mathbb{Z}^d\), such that sampling along the rank-1 lattice \(\Lambda(\mathbf{z},M)\) allows for the exact reconstruction of all trigonometric polynomials with frequencies supported on the index set \(I\). We discuss necessary and sufficient conditions on the rank-1 lattice size \(M\) guaranteeing that our component-by-component strategy succeeds.

Once we determined a reconstructing rank-1 lattice \(\Lambda(\mathbf{z},M)\) for the frequency index set \(I\), we can use this as sampling scheme in order to approximate smooth periodic functions \(f=\sum_{\mathbf k\in\mathbb{Z}^d}\hat{f}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}\) by trigonometric polynomials \(p=\sum_{\mathbf k\in I}\hat{p}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}\) with frequencies supported on \(I\). Assuming the exact Fourier partial sum \(S_If:=\sum_{\mathbf k\in I}\hat{f}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}\) of \(f\) is a good approximation of \(f\), the approximation \(\tilde{S}_If:=\sum_{\mathbf k\in I}\hat{\tilde{f}}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}\) of \(f\), \(\hat{f}_{\mathbf{k}}\approx\hat{\tilde{f}}_{\mathbf{k}}=\sum_{j=0}^{M-1}f(\frac{j}{M}\mathbf{z})\mathrm{e}^{-2\pi\mathrm{i}jM^{-1}\mathbf{k}\cdot\mathbf{z}}\), \(\mathbf{k}\in I\), is in some sense (almost) as good as the exact Fourier partial sum \(S_If\).

Furthermore, we discuss in detail the relation between the reconstruction of trigonometric polynomials and the approximation properties of the presented sampling method. We show that the reconstruction property of the rank-1 lattice is, in general, necessary in order to achieve approximations \(\tilde{S}_If\) of \(f\) with upper bounds on \(f-\tilde{S}_If\) of the same order as the error bounds on \(f-S_If\).

11.07.14 Peter Oswald (Jacobs University, Bremen)

I review the Hilbert space concept of stable space splittings and address its connection with some current research topics such as randomized POCS algorithms for least-squares problems.

11.07.14 Weiqi Zhou (Jacobs University, Bremen)

The Kaczmarz method (also referred to as "ART") is an easy-to-implement, convergent iterative method for solving rectangular linear systems. The method carries out alternating orthogonal projections and can be analyzed within the framework of POCS methods, or via its interpretation as the Gauss-Seidel method applied to an equivalent system.

We review the development and main results for this method and present an upper bound on the convergence rate of its classic version which, contrary to common perception, turns out to be not much worse than the convergence rate estimates of its recently proposed randomized versions. Numerical examples on the impact of the row ordering on the convergence rate will also be presented, time permitting.
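As a concrete illustration (a generic sketch, not the specific variant analyzed in the talk), the classic cyclic Kaczmarz iteration sweeps through the rows of $A$ and orthogonally projects the current iterate onto the hyperplane $\{x : a_i \cdot x = b_i\}$ of each equation in turn:

```python
import numpy as np

def kaczmarz(A, b, x0=None, sweeps=200):
    """Classic (cyclic) Kaczmarz: project the iterate onto the hyperplane
    a_i . x = b_i of each equation in turn, cycling through the rows."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    row_norms = np.sum(A * A, axis=1)
    for _ in range(sweeps):
        for i in range(m):
            x = x + (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Consistent overdetermined system: the iteration converges to the solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = kaczmarz(A, b)
```

For consistent systems the iterates converge linearly; the talk's point is that the rate of this deterministic cyclic sweep is not much worse than the rates guaranteed for the randomized row-selection variants.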

07.07.14 Jan Kleinert (Fraunhofer ITWM, Kaiserslautern)

The field of Nonsmooth Contact Dynamics (NSCD) provides an increasingly popular simulation framework for granular material. In contrast to penalty-based Discrete Element Methods (DEM), this approach is stable for arbitrary time steps and produces visually acceptable results in very short computing time. Yet when it comes to the prediction of draft forces, NSCD relies on powerful numerical solvers.

This talk sets out to discuss NSCD as an alternative to DEM. The physical model behind NSCD is laid out, starting from Hamilton's principle of least action and leading to the discretized equations of motion. To integrate these equations in time, it is necessary to solve a large Cone Complementarity Problem (CCP) per time step. Two numerical methods for solving the CCP are introduced. The first is the Projected Gauß-Jacobi solver, which is widely used for the simulation of granular material but suffers from slow convergence when applied to large systems. The second technique, an Interior Point Method based on Jordan algebras, exhibits better convergence properties whenever the accuracy requirements are high.

30.06.14 Lei Zhang (Shanghai Jiao Tong University)

We discuss the construction and numerical analysis of energy-based atomistic/continuum coupling methods (A/C coupling) for modeling crystalline solids with defects, in particular the issues of consistency (so-called 'ghost force removal') and stability of the coupling method. For general multi-body interactions on the 2D triangular lattice, we show that ghost-force-removing (patch test consistent) a/c methods can be constructed for arbitrary interface geometries [1]. Moreover, we prove that all methods within this class are first-order consistent at the atomistic/continuum interface and second-order consistent in the interior of the continuum region. The convergence and stability of the method are analyzed and justified with numerical experiments [2,3]. The development of an optimal implementation for consistent methods is discussed [3].

References:

[1] C. Ortner, L. Zhang, Construction and sharp consistency estimates for atomistic/continuum coupling methods with general interfaces: a 2D model problem, SIAM Journal on Numerical Analysis, 50(6), pp. 2940-2965 (2012).

[2] C. Ortner, A. Shapeev, L. Zhang, (In-)Stability and Stabilisation of QNL-Type Atomistic-to-Continuum Coupling Methods, 2013, accepted by SIAM Multiscale Modeling & Simulation.

[3] C. Ortner, L. Zhang, Implementation of Geometric Reconstruction Based Atomistic-to-Continuum Coupling, 2013, accepted by Computer Methods in Applied Mechanics and Engineering.

23.06.14 Volodya Temlyakov (University of South Carolina)

We are interested in approximation of a multivariate function $f(x_1, \ldots, x_d)$ by linear combinations of products $u_1 (x_1 ) \cdots u_d(x_d )$ of univariate functions $u_j (x_j )$, $j = 1, \ldots , d$. In the case $d = 2$ it is a classical problem of bilinear approximation. In the case of approximation in the $L_2$ space the bilinear approximation problem is closely related to the problem of singular value decomposition (also called Schmidt expansion) of the corresponding integral operator with the kernel $f(x_1 , x_2 )$. We will discuss some known results on the rate of decay of errors of best bilinear approximation in $L_p$ under different smoothness assumptions on $f$. The problem of multilinear approximation in the case $d \geq 3$ is more difficult and much less studied than the bilinear approximation problem. We will present some known and recent results on best multilinear approximation in $L_p$ under mixed smoothness assumption on $f$.

16.06.14 Benjamin Doerr (Ecole Polytechnique, Paris)

In this talk, we discuss the star discrepancy of a random point set. We show that there are constants $k, K > 0$ such that for all $N, s \in \mathbb{N}$, $s \le N$, a point set consisting of $N$ points chosen uniformly at random in the $s$-dimensional unit cube $[0,1]^s$ admits, with probability at least $1 - e^{-ks}$, an axis-parallel box $[0,x] \subseteq [0,1]^s$ containing $K \sqrt{sN}$ points more than expected. Together with the famous upper bound by Heinrich, Novak, Wasilkowski, and Wo{\'z}niakowski (2001), our result shows that the expected star discrepancy of a random point set is of order $\sqrt{s/N}$. Towards the end of the talk, we shall discuss to what extent these results also hold for spherical cap discrepancies.
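The quantity under discussion can be probed numerically: testing random anchored boxes $[0,x)$ gives a crude lower bound on the star discrepancy of a point set (the difference between the empirical and expected fraction of points in a box). A hypothetical sketch, not taken from the talk:

```python
import numpy as np

def star_discrepancy_lower_bound(P, trials=2000, rng=None):
    """Randomized lower bound for the star discrepancy of a point set P:
    test random anchored boxes [0, x) and record the largest deviation
    |#(P in box)/N - vol(box)|."""
    rng = np.random.default_rng(rng)
    N, s = P.shape
    best = 0.0
    for _ in range(trials):
        x = rng.uniform(size=s)
        frac = np.mean(np.all(P < x, axis=1))   # empirical measure of [0, x)
        best = max(best, abs(frac - np.prod(x)))
    return best

rng = np.random.default_rng(1)
N, s = 1000, 5
P = rng.uniform(size=(N, s))                     # random point set
est = star_discrepancy_lower_bound(P, rng=2)
# by the talk's result, the true star discrepancy is of order sqrt(s/N)
```

The estimator only checks finitely many boxes, so it underestimates the true discrepancy; it is meant to illustrate the definition, not to replace exact (and expensive) star-discrepancy computation.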

26.05.14 Aviv Gibali (Fraunhofer ITWM, Kaiserslautern)

Projection methods are iterative algorithms that use projections onto sets while relying on the general principle that, when a family of sets is present, projections onto the given individual sets are easier to perform than projections onto other sets (intersections, image sets under some transformation, etc.) that are derived from the given individual sets.

Their robustness, low computational effort and ability to handle huge-size problems make them very useful for many convex real-world problems such as Image Reconstruction (IR) and Intensity-Modulated Radiation Therapy (IMRT) treatment planning. Over the past decade, some projection methods have also proven very effective in some non-convex and discrete settings such as combinatorial games.

In this talk we describe several types of projection methods and illustrate their performance both for convex problems such as IR and IMRT, and for non-convex problems such as the 8 queens puzzle and Sudoku.
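A minimal toy instance of such a method (far simpler than the IR/IMRT applications above): alternating projections (POCS) onto a halfspace and a Euclidean ball, two sets whose individual projections are available in closed form:

```python
import numpy as np

def proj_ball(x, c, r):
    """Project x onto the ball of radius r centered at c."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def proj_halfspace(x, a, beta):
    """Project x onto the halfspace {y : a . y <= beta}."""
    v = a @ x - beta
    return x if v <= 0 else x - v * a / (a @ a)

# Alternating projections (POCS) to find a point in the intersection.
x = np.array([5.0, 5.0])                  # infeasible starting point
a, beta = np.array([1.0, 1.0]), 1.0       # halfspace x1 + x2 <= 1
c, r = np.array([0.0, 0.0]), 2.0          # ball of radius 2 at the origin
for _ in range(100):
    x = proj_ball(proj_halfspace(x, a, beta), c, r)
```

When the intersection is nonempty, the iterates converge to a point in it; the appeal in large-scale applications is that each step only ever touches one simple set at a time.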

26.05.14 Jan Schwientek (Fraunhofer ITWM, Kaiserslautern)

In applications one often seeks to approximate a given continuous function $F$ on a non-empty and compact set $Z \subset \mathbb R^n$ by a simpler function $f(\textbf p, \cdot)$ which can be chosen from a parametric family of continuous functions $\{f (\textbf p, \cdot) \mid \textbf p \in P \}$ with some parameter set $P \subset \mathbb R^m$. Depending on the application, different norms may be used to measure the deviation between $F$ and $f(\textbf p, \cdot)$ on $Z$. If one wishes to minimize the maximal deviation, the Chebyshev norm has to be used. This leads to the non-smooth problem of *Chebyshev approximation*:
$$
CA : \quad \min_{p \in P} \|F (\cdot) - f (\textbf p, \cdot)\|_{\infty,Z}.
$$
Via the epigraph reformulation, CA can be rewritten as a smooth standard semi-infinite programming (SIP) problem.
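For families $f(\textbf p,\cdot)$ that are linear in $\textbf p$, the epigraph reformulation becomes, after discretizing $Z$, an ordinary linear program: minimize $t$ subject to $-t \le F(z_i) - f(\textbf p, z_i) \le t$ for all grid points $z_i$. A hypothetical sketch with SciPy (a discretized surrogate, not a genuine semi-infinite solver):

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev approximation of F(z) = exp(z) on [0, 1] by a quadratic
# polynomial f(p, z) = p0 + p1*z + p2*z^2, via the epigraph LP.
Z = np.linspace(0, 1, 201)
F = np.exp(Z)
Phi = np.vander(Z, 3, increasing=True)        # basis columns 1, z, z^2
m = len(Z)

# Variables (p0, p1, p2, t); objective: minimize t.
c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.block([[ Phi, -np.ones((m, 1))],    #  f(p,z) - t <= F(z)
                 [-Phi, -np.ones((m, 1))]])   # -f(p,z) - t <= -F(z)
b_ub = np.concatenate([F, -F])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
p, t = res.x[:3], res.x[3]                    # coefficients and minimax error
```

At the optimum, $t$ equals the maximal deviation on the grid; refining the grid (or adding a semi-infinite outer loop over the index set $Z$, as in the SIP methods of the talk) drives this toward the true Chebyshev error.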

In engineering applications a modification of CA, termed *reversed Chebyshev approximation*, has received interest. There, given an approximating family of functions $\{f(\textbf p, \cdot) \mid \textbf p \in P \}$ and a desired precision $e(p, q)$, the aim is to find parameter vectors $p \in P$ and $q \in Q$ such that the domain $Z(q)$ is as large as possible without exceeding the approximation error $e(p, q)$:
$$
RCA : \quad \max_{(p,q) \in P \times Q} \text{Vol}[Z(q)] \quad \text{s.t.} \quad \|F (\cdot) - f (p, \cdot)\|_{\infty,Z(q)} \leq e(p, q).
$$
This problem can be reformulated as a semi-infinite programming problem, too, but it results in a *general(ized)* semi-infinite programming (GSIP) problem.

In this talk we give an overview of semi-infinite programming (theoretical foundations, solution methods and further applications) and show how (reversed) Chebyshev approximation problems can be solved in this framework. Furthermore, we present a real-world application of Chebyshev approximation, namely the uniform coverage problem in spunbond production, and illustrate its solution as a semi-infinite programming problem by a numerical example.

19.05.14 Christiane Helzel (HHU Düsseldorf)

High-order WENO (i.e., weighted essentially non-oscillatory) methods are widely used for the approximation of hyperbolic partial differential equations. A common approach to use WENO methods on multidimensional Cartesian grids consists in applying a one-dimensional WENO method in each direction. This spatial discretization is typically combined with a Runge-Kutta method in time, i.e., during each stage of a Runge-Kutta method one-dimensional WENO schemes are used in a dimension-by-dimension fashion.

However, it is known that finite volume WENO methods based on a dimension-by-dimension approach retain the full order of accuracy (of the one-dimensional method) only for linear multidimensional problems; for nonlinear multidimensional problems they are merely second-order accurate.

In my talk, I will present a simple modification of finite volume WENO methods, which leads to the full spatial order of accuracy by using only one-dimensional polynomial reconstructions in a dimension-by-dimension approach.
Furthermore, I will discuss the use of this method on adaptively refined grids as well as on Cartesian grids with embedded boundaries.

This is recent joint work with Pawel Buchmüller and Jürgen Dreher.

12.05.14 Hieu Nguyen (Edinburgh)

Domain decomposition refers to the splitting of a PDE into coupled subproblems on smaller subdomains forming a partition of the original domain. It is the most popular approach to solving large-scale problems on parallel supercomputers, as the subproblems can be solved largely independently on different processors. When domain decomposition is combined with adaptivity, there are challenges in maintaining good load balancing and mesh conformity among subdomains. These challenges can be overcome using the Bank-Holst parallel adaptive meshing paradigm. In this talk, we will formulate and analyze new additive Schwarz methods for this paradigm. Our methods take advantage of the local adaptive meshes of the whole domain, which are fine in their associated subdomains but generally much coarser elsewhere. We will show that these methods are optimal, i.e., their rates of convergence do not depend on the mesh sizes or the number of subdomains. Numerical results will be provided to support our theoretical findings.

05.05.14 Aicke Hinrichs (Rostock)

We study the complexity of Banach space valued integration problems in the randomized setting. We explain the interplay between the geometry of the underlying Banach space and the rate of the $n$-th minimal errors. The relevant notion of Banach space geometry is the Rademacher type of the space. We also explain the motivation which comes from applications to the complexity of definite and indefinite integration and to the solution of ODEs. This is joint work with Stefan Heinrich.

28.04.14 Mario Ullrich (Jena)

We present the method of Frolov for the integration of (periodic) functions in the Sobolev space $H^s_{\rm mix}$, $s\in\mathbb{N}$, of dominating mixed smoothness. This method is completely constructive and has the optimal order of convergence for functions with support (strictly) contained in the unit cube. A standard modification of the algorithm also leads to the optimal order for non-periodic functions. Due to the simplicity of the method, we will present the whole proof.

03.04.14 Dinh Zung (Hanoi)

Let $X_n = \{x_j\}^n_{j=1}$ be a set of $n$ points in the $d$-cube $[0, 1]^d$, and $\Phi_n = \{\varphi_j \}^n_{j=1}$ a family of $n$ functions in the space $L_q([0, 1]^d)$, $0 < q \leq \infty$. We consider the approximate recovery in $L_q([0, 1]^d)$ of functions $f$ on $[0, 1]^d$ from the sampled values $f (x_1), \dots, f(x_n)$ by the linear sampling algorithm $$ L_n (X_n , \Phi_n , f ) := \sum_{j=1}^n f(x_j) \varphi_j. $$ The error of sampling recovery is measured in the $L_q([0, 1]^d)$-norm or in the energy norm of the isotropic Sobolev space $W_q^{\gamma} ([0, 1]^d)$ for $0 < q \leq \infty$ and $\gamma > 0$. The functions $f$ to be recovered are from the unit ball in Besov-type spaces of anisotropic smoothness, in particular, spaces $B^a_{p,\theta}$ of nonuniform mixed smoothness $a \in \mathbb R^d$, and spaces $B^{\alpha, \beta}_{p,\theta}$ of a “hybrid” of mixed smoothness $\alpha > 0$ and isotropic smoothness $\beta \in \mathbb R$. We construct optimal linear sampling algorithms $L_n(X_n^* , \Phi_n^* , \cdot)$ on special sparse grids $X_n^*$ with a family $\Phi_n^*$ of linear combinations of integer-translated dilations of tensor products of B-splines, and compute the asymptotic order of the error of optimal recovery. As a consequence we obtain the asymptotic order of optimal cubature formulas for the integration of functions from the unit ball of these Besov-type spaces.
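The algorithm $L_n$ is straightforward to implement once $X_n$ and $\Phi_n$ are fixed. As a hypothetical one-dimensional stand-in for the B-spline families of the talk, hat functions on a uniform grid turn $L_n$ into piecewise-linear interpolation:

```python
import numpy as np

def linear_sampling(X, Phi, f, t):
    """L_n(X, Phi, f)(t) = sum_j f(x_j) * phi_j(t): a linear sampling
    algorithm built from sampling nodes X and basis functions Phi."""
    return sum(f(xj) * phi(t) for xj, phi in zip(X, Phi))

# Hat functions on a uniform grid: L_n is piecewise-linear interpolation
# (a crude 1D stand-in for the sparse-grid B-spline families in the talk).
n = 17
X = np.linspace(0, 1, n)
h = X[1] - X[0]
Phi = [lambda t, xj=xj: np.maximum(0.0, 1.0 - np.abs(t - xj) / h) for xj in X]

f = lambda x: np.sin(2 * np.pi * x)
t = np.linspace(0, 1, 101)
err = np.max(np.abs(linear_sampling(X, Phi, f, t) - f(t)))
```

The recovery error here behaves like $h^2$ for this smooth $f$; the point of the talk is to choose $X_n^*$ and $\Phi_n^*$ so that the analogous error decays at the optimal rate for whole smoothness classes in high dimension.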

20.01.14 Jens Oettershagen (Bonn)

We investigate integration in certain reproducing kernel Hilbert spaces of bivariate functions that possess bounded mixed derivatives up to order $r$. We compare worst-case errors of full tensor products of the rectangle rule, sparse grids, Hammersley QMC points and the Fibonacci lattice rule, with the latter being superior to all the other approaches.
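The Fibonacci lattice rule is particularly simple to implement: with $F_m$ the $m$-th Fibonacci number, it averages $f$ over the $F_m$ lattice points $(j/F_m, \{jF_{m-1}/F_m\})$. A small sketch; the smooth periodic test integrand is a hypothetical choice:

```python
import numpy as np

def fibonacci_lattice_rule(f, m):
    """Fibonacci lattice rule in 2D:
    Q(f) = (1/F_m) * sum_j f(j/F_m, {j*F_{m-1}/F_m})."""
    F = [1, 1]
    while len(F) <= m:
        F.append(F[-1] + F[-2])
    Fm, Fm1 = F[m], F[m - 1]
    j = np.arange(Fm)
    x = j / Fm
    y = (j * Fm1 / Fm) % 1.0          # fractional part of j*F_{m-1}/F_m
    return np.mean(f(x, y))

# Smooth periodic test integrand with exact integral 1 over [0,1]^2.
f = lambda x, y: (1 + np.cos(2 * np.pi * x)) * (1 + np.sin(2 * np.pi * y))
q = fibonacci_lattice_rule(f, 20)     # F_20 = 10946 nodes
```

For this low-frequency trigonometric integrand the rule is exact up to rounding; for general smooth periodic functions its worst-case error decays at the near-optimal rate in the spaces considered in the talk.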

03.02.14 Glenn Byrenheid (Bonn)

Motivated by Yserentant's results on the regularity of eigenfunctions of the electronic Schrödinger operator, it seems useful to study the approximation behavior of functions belonging to a space of dominating mixed smoothness. We are interested in finding good approximations of functions with this kind of regularity from finite-dimensional subspaces, where the error is measured in the so-called energy norm $H^1$. We propose subspaces (energy-norm sparse grids) and corresponding linear mappings which yield the optimal asymptotic error rate (approximation numbers) in terms of the number of degrees of freedom for periodic $d$-variate functions with mixed smoothness $s>1$. Moreover, the mentioned linear mappings use only function evaluations of the function $f$ to be approximated. In particular, we obtain sharp bounds for sampling numbers in this context, which coincide with the corresponding estimates of the approximation numbers of the respective embedding. The rate is polynomial of order $s-1$ without the logarithmic perturbation which would occur for classical sparse grid methods. Our proofs rely on classical Bernstein and discrete Hardy type inequalities to obtain dyadic sampling representations, i.e., equivalent norms in the source space in which only function values enter. The results are part of a joint work with W. Sickel (Jena), Dinh Dung (Hanoi) and Tino Ullrich (Bonn).

27.01.14 Karoline Köhler (HU Berlin)

This talk presents some results for the error analysis of the obstacle problem, which is the prototypical example of a variational inequality. The first part of the talk deals with a general reliable and efficient error estimate, independent of any specific finite element method. Therein, the efficiency is understood in terms of the error $u-v$ in $H^1_0(\Omega)$ for the exact solution $u$ and an approximation $v$ in $H^1_0(\Omega)$ in the primal variable and the error $\lambda-\mu$ for the exact Lagrange multiplier $\lambda$ and an approximation $\mu$ in the dual space $H^{-1}(\Omega)$. This leads to the equivalence \[|||u-v|||+|||\lambda-\mu|||_*\approx \text{computable terms}\] possibly up to multiplicative generic constants. This general result is applied to the conforming Courant finite element method (FEM) in the second part of the talk, where the efficiency is analysed beyond the aforementioned equivalence and leads to \[|||u-v|||\approx \text{computable terms}.\] Numerical experiments which demonstrate and highlight the results for various finite element methods conclude the talk.

13.01.14 Fabian Franzelin (Stuttgart)

Peridynamics is an accepted method in engineering for modeling crack propagation on a macroscopic scale. However, the sensitivity of the method to two important model parameters, the elasticity $\alpha$ and the particle density, has not yet been studied. To make an efficient computation of sensitivity values possible, we used the adaptive sparse grid collocation method (ASGC). Motivated by the works of Silling and Owhadi, we applied it to a peridynamic simulation of a high-speed projectile impacting a plate. We introduced the force of magnitude $K$ exerted by the indenter as one additional uncertain parameter. The scenario requires custom-tailored criteria for adaptive refinement in order to reduce the number of sampling points as far as possible. The results obtained by the stochastic collocation method using spatially adaptive sparse grids are evaluated and compared to previous methods in this context.

17.12.13 Philipp Morgenstern (Bonn)

Adaptive Finite Element Methods (AFEM) are commonly used for partial differential equations that require high resolution in small regions of the domain (due to local singularities such as re-entrant corners) while keeping a coarse mesh on other parts of the domain. This requires an algorithm for adaptive mesh refinement which preserves the shape of the elements as well as the regularity of the mesh. Moreover, theoretical purposes such as optimality of the convergence rates require an algorithm that allows for overlays and has linear memory complexity. We introduce a novel algorithm for the local refinement of regular quadrilateral meshes. This algorithm preserves the shape regularity of the initial mesh, allows the construction of overlays (that is, the coarsest common refinement of two meshes) and comes with linear complexity in terms of memory. The latter means that the number of new quadrilaterals in the fine mesh is essentially bounded by the number of atomic markings in the preceding meshes.

03.12.13 Lev Markhasin (Stuttgart)

Optimal lower bounds for the discrepancy in Besov spaces with dominating mixed smoothness are known from the work of Triebel. Hinrichs proved upper bounds in the plane. Chen-Skriganov point sets are known to be optimal in the sense of the $L_p$-discrepancy, but they also satisfy the optimal upper bounds for the discrepancy in Besov spaces with dominating mixed smoothness for certain parameters. Results for Triebel-Lizorkin and Sobolev spaces with dominating mixed smoothness, and for the integration error, can be deduced.

26.11.13 Sebastian Mayer (Bonn)

In this talk, I survey the use of (randomized) dimensionality reduction in machine learning, with special emphasis on Johnson-Lindenstrauss embeddings (JLE). First, I outline a number of applications, including approximate nearest neighbor search and classification tasks, where dimensionality reduction has played a major role in making computations possible in feasible time. With that said, I delve into the theoretical foundations of Johnson-Lindenstrauss embeddings. This will put us in a position to understand why randomized dimensionality reduction has been a success in several situations, but at the same time it will also help us understand why it has little chance of working in others.
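The basic construction behind a Johnson-Lindenstrauss embedding is a random projection. A Gaussian sketch with illustrative parameters (not tuned to any of the applications above): pairwise distances survive the dimension reduction up to a small relative distortion with high probability.

```python
import numpy as np

def jl_transform(X, k, rng=None):
    """Gaussian Johnson-Lindenstrauss map: project the rows of X from
    dimension d down to k using a random matrix scaled by 1/sqrt(k)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    G = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ G

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5000))      # 50 points in dimension 5000
Y = jl_transform(X, 500, rng=1)          # reduce to dimension 500

# Distances between consecutive points before and after the projection.
dX = np.linalg.norm(X[1:] - X[:-1], axis=1)
dY = np.linalg.norm(Y[1:] - Y[:-1], axis=1)
ratios = dY / dX                          # close to 1 with high probability
```

The JL lemma guarantees that $k = O(\varepsilon^{-2}\log n)$ dimensions suffice to preserve all $\binom{n}{2}$ pairwise distances up to a factor $1\pm\varepsilon$, independently of the original dimension $d$.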