Commit a2baaad5 authored by Christian Engwer

Updated the parfinitevolume example:

 it now uses the improved vtkout method,
 solves the problem from transportproblem2.hh,
 and (if desired) uses load balancing, which is now explained in the howto.

Credits to Martin Drohmann

[[Imported from SVN: r237]]
parent 08cc6761
@@ -2164,17 +2164,25 @@ Finally, we need a new main program, which is in the following listing:
numberstyle=\tiny, numbersep=5pt]{../parfinitevolume.cc}
\end{lst}
Essential differences to the sequential program are in line
A difference to the sequential program can be found in line
\ref{pfc:rank0} where the printing of the data of the current time
step is restricted to the process with rank 0 and in line \ref{pfv:lb} where
the method \lstinline!loadBalance!\ is called on the grid. This method
re-partitions the grid in a way such that on every partition there is an equal
amount of grid elements. Some parallel grids like e.g.~\lstinline!YaspGrid!
however do not support load balancing and therefore need to start with a
sufficiently fine grid that allows a reasonable partition on all processes.
Note that during each global refinement step the overlap region of a
\lstinline!YaspGrid!\ grows and therefore the communication overhead increases.
step is restricted to the process with rank 0.
\lstinline!YaspGrid! does not support dynamic load balancing and therefore
needs to start with a sufficiently fine grid that allows a reasonable partition
in which each process gets a non-empty part of the grid. This is why we do not
use DGF files in the parallel example and instead initialize the grid via the
\lstinline!UnitCube! class. For \lstinline!YaspGrid! this allows an easy
selection of the grid's initial coarseness through the second template argument
of \lstinline!UnitCube!. This argument should be chosen sufficiently high,
because after each global refinement step the overlap region grows and
therefore the communication overhead increases.
If you want to use a grid that supports dynamic load balancing, define the
macro \lstinline!LOAD\_BALANCING!\ and uncomment one of the possible grid
definitions in the code. In this case the method \lstinline!loadBalance!\ is
called on the grid in line \ref{pfv:lb}. This method re-partitions the grid
such that every partition holds approximately the same number of grid elements.
% \chapter{Input and Output}
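For grids that do support re-partitioning, the pattern described above boils down to: refine first, balance afterwards, and guard terminal output by the process rank. The following generic helper is a minimal sketch only; the name balanceAndReport and the refinement depth are illustrative and not part of the commit:

#include <iostream>

// Re-partition a grid and report from rank 0 only. For grids without
// dynamic load balancing (e.g. YaspGrid), loadBalance() leaves the
// partition unchanged.
template <class Grid>
void balanceAndReport (Grid& grid)
{
  grid.globalRefine(6);            // refine to the target level first ...
  grid.loadBalance();              // ... then re-partition the elements
  if (grid.comm().rank() == 0)     // restrict output to the process with rank 0
    std::cout << "rank 0 now holds " << grid.size(0)
              << " of the grid's elements" << std::endl;
}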
@@ -7,9 +7,11 @@
#include <dune/grid/common/mcmgmapper.hh> // mapper class
#include <dune/common/mpihelper.hh> // include mpi helper class
// checks for defined gridtype and includes appropriate dgfparser implementation
#include "vtkout.hh"
#include "unitcube.hh"
#include "transportproblem.hh"
#include "transportproblem2.hh"
#include "initialize.hh"
#include "parfvdatahandle.hh"
#include "parevolve.hh"
@@ -90,9 +92,41 @@ int main (int argc , char ** argv)
// start try/catch block to get error messages from dune
try {
UnitCube<Dune::YaspGrid<2>,64> uc;
using namespace Dune;
UnitCube<YaspGrid<2>,64> uc;
uc.grid().globalRefine(2);
partimeloop(uc.grid(),0.5);
/* To use an alternative grid implementation for parallel computations,
uncomment exactly one definition of uc2 and the #define LOAD_BALANCING line below. */
// #define LOAD_BALANCING
// UGGrid supports parallelization in 2 or 3 dimensions
#if HAVE_UG
// typedef UGGrid< 2 > GridType;
// UnitCube< GridType, 2 > uc2;
#endif
// ALUGRID supports parallelization in 3 dimensions only
#if HAVE_ALUGRID
// typedef ALUCubeGrid< 3, 3 > GridType;
// typedef ALUSimplexGrid< 3, 3 > GridType;
// UnitCube< GridType , 1 > uc2;
#endif
#ifdef LOAD_BALANCING
// refine grid until upper limit of level
uc2.grid().globalRefine( 6 );
// re-partition grid for better load balancing
uc2.grid().loadBalance(); /*@\label{pfv:lb}@*/
// do time loop until end time 0.5
partimeloop(uc2.grid(), 0.5);
#endif
}
catch (std::exception & e) {
std::cout << "STL ERROR: " << e.what() << std::endl;
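For reference, enabling the alternative code path amounts to defining the macro and uncommenting exactly one grid definition; with the 2d UGGrid variant the block would read roughly as follows (an illustrative sketch, assuming UG was found at configure time and using namespace Dune as in the listing):

#define LOAD_BALANCING
#if HAVE_UG
typedef UGGrid< 2 > GridType;     // UGGrid supports re-partitioning in 2d
UnitCube< GridType, 2 > uc2;
#endif
#ifdef LOAD_BALANCING
uc2.grid().globalRefine( 6 );     // refine the grid up to the desired level
uc2.grid().loadBalance();         // re-partition for better load balancing
partimeloop(uc2.grid(), 0.5);     // do time loop until end time 0.5
#endif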