Description
===========

This folder contains the sources of the dune-dpg library, which can be
used to solve Partial Differential Equations with Discontinuous
Petrov-Galerkin finite elements. It is built upon the finite element
package Dune. The scientific background of the code is described in the
paper "The dune-dpg library for solving PDEs with Discontinuous
Petrov-Galerkin finite elements" by F. Gruber, A. Klewinghaus and O. Mula.

For build instructions, see the accompanying INSTALL file.
See the section "Running dune-dpg" in this README for explanations of
how to run the basic examples described in the paper.
If you are interested in the API documentation of dune-dpg, see the
instructions in the section "Generating API documentation with Doxygen"
in the INSTALL file.

Required Software and Libraries
===============================

In addition to the Dune libraries, you will need the following programs
and libraries installed on your system:

  - Programs:
    - a C++11-compatible compiler (e.g. GCC >= 4.9)
    - cmake >= 2.8.12
  - Libraries:
    - Boost Fusion >= 1.56, < 1.60
      (Boost Fusion 1.60 and 1.61 are known to cause compilation errors.
       Versions other than 1.59 have not been thoroughly tested.)
    - A grid manager. Our examples use UG 3.12.1
      (http://www.iwr.uni-heidelberg.de/frame/iwrwikiequipment/software/ug)
      but other grid managers, e.g., ALBERTA or YaspGrid, can be used as well.
    - UMFPACK (which is part of Suitesparse, www.suitesparse.com)
    - DUNE core libraries 2.4.1
      (https://dune-project.org/releases/2.4.1/)
    - dune-typetree 2.4.1
      (https://gitlab.dune-project.org/pdelab/dune-typetree.git
       branch releases/2.4)
    - dune-functions 2.4
      (https://gitlab.dune-project.org/staging/dune-functions.git
       branch releases/2.4-compatible)

Instructions on how to build dune-dpg can be found in INSTALL.

Running dune-dpg
================

In the following, we assume that we are in the directory
$DUNEDIR/dune-dpg/build-cmake .

Dune-dpg comes with three example programs:

src/plot_solution.cc
src/convergence_test.cc
src/profile_testspacecoefficientmatrix.cc

After compilation (see INSTALL for instructions), the example programs
are found in $DUNEDIR/dune-dpg/build-cmake/src/ .

Description of src/plot_solution.cc
-----------------------------------

plot_solution.cc computes the Discontinuous Petrov-Galerkin finite
element solution of the pure transport problem
$$
  \beta \cdot \nabla\phi + c \phi = 1 in [0,1]x[0,1]
                             \phi = 0 on the boundary
$$
We refer to the paper for the notation and for indications of how to
solve other Partial Differential Equations.

To visualize the solution, run in $DUNEDIR/dune-dpg/build-cmake/src/

    ./plot_solution <n> <c> <betaX> <betaY>

where
  <n> has to be replaced by the desired grid resolution,
  <c> is the linear term in the transport problem
  <betaX> <betaY> are the X and Y components of the transport direction
                  beta=(betaX, betaY).
  (When unspecified, c=0 and beta=(cos(π/8), sin(π/8)).)
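As a concrete illustration, the default transport direction can be
computed explicitly and passed on the command line. (The grid
resolution n=50 and c=1 below are arbitrary example values, not taken
from the paper.)

```shell
# Compute betaX = cos(pi/8) and betaY = sin(pi/8), the default
# transport direction; atan2(0, -1) evaluates to pi in awk.
BETAX=$(awk 'BEGIN { printf "%.6f", cos(atan2(0, -1) / 8) }')
BETAY=$(awk 'BEGIN { printf "%.6f", sin(atan2(0, -1) / 8) }')
echo "betaX=$BETAX betaY=$BETAY"
# Then run, e.g., with grid resolution n=50 and c=1
# (requires the binary built as described in INSTALL):
# ./plot_solution 50 1 "$BETAX" "$BETAY"
```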

The program will write two .vtu files to the current working directory,

  transport_solution.vtu
  transport_solution_trace.vtu

They contain the numerical solution of the interior variable $\phi$ and
the lifting $w$ of the trace variable (cf. paper).
These files can be used to visualize $\phi$ and $w$ in ParaView. If you
have the pvpython interpreter shipped with ParaView, you can also run
scripts/plot_solution.py to regenerate the solution plot given in the
paper. (This script was run with ParaView 4.2.0. As the Python interface
of ParaView seems to change considerably between versions, we cannot
guarantee that the script will run unmodified on another version of
ParaView.)

Description of src/convergence_test.cc
--------------------------------------

convergence_test.cc uses several preprocessor variables to set the
refinement level and polynomial degree of the test search space and the
search space used in the computation of the a posteriori error.
These variables get instantiated with different values by CMake to create
executables of the form

    src/convergence_test_ls$LS_ks$KS_la$LA_ka$KA

where $LS is the refinement level of the test search space,
      $KS is the polynomial degree of the test search space,
      $LA is the refinement level of the a posteriori search space,
      $KA is the polynomial degree of the a posteriori search space.
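For illustration, the naming scheme above expands as follows (a sketch
only; the executables themselves exist only after the build):

```shell
# Assemble the executable name for a test search space of refinement
# level 0 and degree 3, and an a posteriori search space of refinement
# level 0 and degree 5.
LS=0; KS=3; LA=0; KA=5
EXE="convergence_test_ls${LS}_ks${KS}_la${LA}_ka${KA}"
echo "$EXE"   # convergence_test_ls0_ks3_la0_ka5
```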

They can be run in $DUNEDIR/dune-dpg/build-cmake/src/ by e.g.

    ./convergence_test_ls0_ks3_la0_ka5 <n>

to compute the numerical example of the paper,
$$
  \beta \cdot \nabla\phi = 1 in [0,1]x[0,1]
                    \phi = 0 on the boundary
$$
for beta=(cos(pi/8), sin(pi/8)). In addition to what plot_solution
does, this example also computes the exact L2 error and its a
posteriori estimate.

Remarks:
    - The a posteriori error indicator includes the errors of both the
      interior variable and the trace variable.
    - The a posteriori error estimator can also be computed for other
      PDEs; the current program is just an example.

The paper uses this problem for its numerical tests and analyses the
impact of
 - the mesh size H of the trial space,
 - the h-refinement level of the test search space
on the error and its a posteriori estimation.

To reproduce the convergence results from the paper, call the script

    ../scripts/run_convergence_test.sh 10 20 40 60 80 100 120 140 \
                                       160 200 250 300

which will call the convergence_test_* programs with the given grid
resolutions n=1/H.
The computation takes several hours. (You can leave out some of the
larger grid resolutions n if you do not want to wait that long; the
test case with a locally refined test search space of level 3 in
particular becomes very slow for large n.)
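In essence, such a driver script loops over the grid resolutions for
each compiled test executable. A minimal sketch of that loop (the
actual scripts/run_convergence_test.sh may differ in its details; the
second executable name is a made-up example, and echo stands in for the
real program invocation):

```shell
# Sketch: run every convergence_test_* executable for each resolution n.
# Replace "echo" with the actual invocation "$exe" "$n" once built.
for n in 10 20 40; do
  for exe in ./convergence_test_ls0_ks3_la0_ka5 \
             ./convergence_test_ls1_ks3_la0_ka5; do
    echo "$exe $n"
  done
done
```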

Finally, the convergence plots can be generated with

    ../scripts/convergence_plots.py

Remark: To regenerate the plots from the paper, it is advised to compile
the test programs in release mode to significantly speed up the computations.
The release mode can be configured with

    cmake -DCMAKE_BUILD_TYPE=Release .

Afterwards, we can compile the test programs with

    make

Description of src/profile_testspacecoefficientmatrix.cc
--------------------------------------------------------

profile_testspacecoefficientmatrix.cc uses the preprocessor variables
LEVEL_SEARCH and K_SEARCH to set the refinement level and polynomial
degree of the test search space.
These variables get instantiated with different values by CMake to create
executables of the form

    src/profile_testspacecoefficientmatrix_ls$LS_ks$KS

where $LS is the refinement level of the test search space,
      $KS is the polynomial degree of the test search space.

They can be run in $DUNEDIR/dune-dpg/build-cmake/src/ by e.g.

    ./profile_testspacecoefficientmatrix_ls0_ks3 <n>

to measure the time needed to compute the numerical example of the paper,
$$
  \beta \cdot \nabla\phi = 1 in [0,1]x[0,1]
                    \phi = 0 on the boundary
$$
for beta=(cos(pi/8), sin(pi/8)).

To reproduce the results from the paper, call the script

    ../scripts/run_profile.sh 10 20 40 60 80 100 120 140 160 200 250 300 \
                              > profile

which will call profile_testspacecoefficientmatrix_* for the given
grid resolutions n=1/H.  The computation takes several minutes.

Finally, the profile plots can be generated with

    ../scripts/profile_plots.py -o profile.pdf profile

Remark: To regenerate the plots from the paper, it is advised to compile
the test programs in release mode to significantly speed up the computations.
The release mode can be configured with

    cmake -DCMAKE_BUILD_TYPE=Release .

Afterwards, we can compile the test programs with

    make

License
=======

Licensing information for dune-dpg can be found in the accompanying file
COPYING.
