Commit 00c2c9ef authored by Markus Blatt

Started documentation of the istl communication classes.

[[Imported from SVN: r1964]]
parent c6997114
configure.ac
@@ -167,6 +167,8 @@ AC_CONFIG_FILES([Makefile
 	doc/devel/Makefile
 	doc/appl/Makefile
 	doc/appl/refelements/Makefile
+	doc/istl/Makefile
+	doc/istl/comm/Makefile
 	doc/layout/Makefile
 	doc/doxygen/Makefile
 	m4/Makefile
...
doc/Makefile.am
@@ -3,7 +3,7 @@
 # distribute these files:
 EXTRA_DIST = Buildsystem
-SUBDIRS = devel appl doxygen layout
+SUBDIRS = devel appl doxygen layout istl
 PAGES = index.html
...
doc/istl/.gitignore
Makefile
Makefile.in
semantic.cache
\ No newline at end of file
doc/istl/Makefile.am
# $Id$
SUBDIRS = comm
doc/istl/comm/.gitignore
Makefile
Makefile.in
semantic.cache
output
indexset
.deps
.libs
*.aux
*.bbl
*.blg
*.log
*.out
*.toc
*.dvi
*.pdf
*.ps
*.rel
\ No newline at end of file
doc/istl/comm/Makefile.am
# $Id$

# only build these programs if an MPI implementation was found
if MPI
  MPIPROGRAMS = indexset
endif

SUFFIXES = .tex .dvi .pdf .ps

dist_pkgdata_DATA = communication.pdf communication.ps

noinst_PROGRAMS = $(MPIPROGRAMS)

indexset_SOURCES = indexset.cc
indexset_CXXFLAGS = $(MPI_CPPFLAGS)
indexset_LDADD = $(MPI_LDFLAGS)

# rerun TeX if the log file suggests it
.tex.dvi:
	$(TEX) $*
	while grep Rerun $*.log > /dev/null ; do \
	  $(TEX) $* ; \
	done
# check if BibTeX needs to be called
	if grep '^\\citation{' *.aux > /dev/null ; then \
	  $(BIBTEX) $* ; \
	  $(TEX) $* ; \
	  while grep Rerun $*.log > /dev/null ; do \
	    $(TEX) $* ; \
	  done ; \
	fi

.dvi.pdf:
	$(DVIPDF) $*

.dvi.ps:
	$(DVIPS) $*

CLEANFILES = *.aux *.bbl *.blg *.log *.out *.toc *.dvi *.pdf *.ps
doc/istl/comm/communication.tex
\documentclass[11pt]{article}
\usepackage{multicol}
\usepackage{ifthen}
%\usepackage{multitoc}
%\usepackage{german}
%\usepackage{bibgerm}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{color}
\usepackage{hyperref}
\usepackage[dvips]{epsfig}
\usepackage[dvips]{graphicx}
\usepackage[a4paper,body={148mm,240mm,nohead}]{geometry}
\usepackage[ansinew]{inputenc}
\usepackage{listings}
\lstset{language=C++, basicstyle=\ttfamily,
stringstyle=\ttfamily, commentstyle=\it, extendedchars=true}
\newif\ifpdf
\ifx\pdfoutput\undefined
\pdffalse % we are not running PDFLaTeX
\else
\pdfoutput=1 % we are running PDFLaTeX
\pdftrue
\fi
\ifpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage{graphicx}
\fi
\ifpdf
\DeclareGraphicsExtensions{.pdf, .jpg, .tif}
\else
\DeclareGraphicsExtensions{.eps, .jpg}
\fi
%\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{class}[theorem]{Class}
\newtheorem{algorithm}[theorem]{Algorithm}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\newcommand{\C}{\mathbb{C}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\K}{\mathbb{K}}
\newcommand{\loc}{\mbox{loc}}
\title{Communication within the Iterative Solver Template Library (ISTL)\thanks{Part of the
Distributed and Unified Numerics Environment (DUNE) which is
available from the site
\texttt{http://www.dune.uni-hd.de/}}}
\author{%
Markus Blatt\\
Interdisziplinäres Zentrum für Wissenschaftliches Rechnen,\\
Universität Heidelberg, Im Neuenheimer Feld 368, D-69120 Heidelberg, \\
email: \texttt{Markus.Blatt@iwr.uni-heidelberg.de}}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
  This document describes the usage and interface of the classes meant
  for setting up communication within a parallel program using
  ISTL. As most of the communication in a distributed program occurs
  in the same pattern, it is often more efficient (and of course
  easier for the programmer) to build the communication pattern once
  in the program and then use it multiple times (e.~g. at each
  iteration step of an iterative solver).
\end{abstract}
\begin{multicols}{2}
{\small\tableofcontents}
\end{multicols}
\section{Introduction}
\label{sec:introduction}
\section{Index Sets}
\label{sec:index-sets}
During distributed computations every discretization point needs to be
identified uniquely by every process, regardless of where it is
actually stored. In most scenarios it is not advisable to store all the
data needed for the computation on every process, as memory is often a
limiting factor in scientific computing. Therefore the data will be
distributed between the processes, and each process will store only the
data corresponding to its own part of the distribution. For the sake of
efficient local communication it is normally best practice to hold the
locally stored data in consecutive memory chunks.
This means that for the local computation the data must be addressable
by a consecutive index starting from 0. When using adaptive
discretization methods there might be a need to reorder the indices
after adding and/or deleting some of the discretization
points. Therefore this index does not have to be persistent. In the
following we will call this index the {\em\index{local index}local index}.

For the communication phases of our algorithms these locally stored
indices must also be addressable by a global identifier, so that
received values, tagged with their global identifiers, can be stored at
the correct local index in the consecutive local memory chunk. To ease
the addition and removal of discretization points this global identifier
has to be persistent. In the following we will call this global
identifier the {\em\index{global index}global index}.
\paragraph{IndexSet}
Let $I \subset \N_0$ be an arbitrary, not necessarily consecutive,
index set identifying all discretization points of the computation.
Furthermore, let $(I_p)_{p\in[0,P)}$,
$\bigcup\limits_{p=0}^{P-1} I_p = I$, be an overlapping decomposition of the global index set
$I$ into the sets of indices $I_p$ corresponding to the
discretization points stored locally on process $p$.
Then the
\begin{lstlisting}{}
template<typename TG, typename TL, int N>
class IndexSet;
\end{lstlisting}
realizes the one-to-one mapping
$$
\gamma_p\::\: I_p \longrightarrow I^{\loc}_p := [0, n_p)
$$
of the globally unique index onto the local index.
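For instance, for the index set built on process $0$ in
Listing~\ref{lst:build_indexset} below, we have
$I_0=\{0,2,3,5,6\}$, $n_0=5$, and
$\gamma_0(0)=0$, $\gamma_0(2)=1$, $\gamma_0(6)=2$,
$\gamma_0(3)=3$, $\gamma_0(5)=4$.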
The template parameter \lstinline!TG! is the type of the global
index, \lstinline!TL! is the type of the local index, and the
parameter \lstinline!N! is used internally to specify the chunk size
of the underlying array list.
The only prerequisite on the type of the local index is that it is
convertible to \lstinline!std::size_t!, as it is meant for addressing
array elements. This leaves room to attach further information to the
index, as the class described next does.
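As a brief sketch (using the same types as in
Listing~\ref{lst:build_indexset} below; the size $n_p$ of the local
data array is denoted by the hypothetical variable \lstinline!np!),
an instantiation together with the consecutive local data chunk it
addresses might look as follows:
\begin{lstlisting}{}
// Global indices are plain ints, local indices carry an attribute.
typedef Dune::IndexSet<int, Dune::ParallelLocalIndex<Flag>, 100> IndexSet;
// One data item per local index; the local index addresses
// the consecutive chunk directly, e.g. data[localIndex].
std::vector<double> data(np);
\end{lstlisting}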
\paragraph{ParallelLocalIndex}
When dealing with overlapping index sets in distributed computing
there often is the need to distinguish different part of the index
set, e.~g. mark some of the indices as owned by the process and others
as owned by another process.
This can easily be done by using the class
\begin{lstlisting}{}
template<typename TA>
class ParallelLocalIndex;
\end{lstlisting}
where the template parameter \lstinline!TA! is the type of the
attributes used, e.~g. \lstinline!enum{owner, overlap}!.
As the programmer often knows in advance which indices might also be
present on other processes, there is the possibility to mark an index
as public.
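For instance, a public index with local number $0$ and attribute
\lstinline!owner! is constructed as follows (this is exactly how the
indices are created in Listing~\ref{lst:build_indexset}; the third
constructor argument marks the index as public):
\begin{lstlisting}{}
// local number 0, attribute owner, public (i.e. the index
// may also be known to other processes)
Dune::ParallelLocalIndex<Flag> li(0, owner, true);
\end{lstlisting}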
Let us look at a short example of how to use these classes:
\lstinputlisting[caption=build an index set, label=lst:build_indexset]{indexset.cc}
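When the program is run on two processes (e.~g. via
\texttt{mpirun -np 2 indexset}, depending on the MPI installation),
each process builds its part of the overlapping decomposition and
prints its index set to standard output.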
\section{Remote Indices}
\label{sec:remote-indices}
\section{Communication Interface}
\label{sec:comm-interf}
\section{Communicator}
\label{sec:communicator}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
doc/istl/comm/indexset.cc
// -*- tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 2 -*-
// vi: set et ts=4 sw=2 sts=2:
// $Id$
#include <dune/istl/indexset.hh>
#include <dune/istl/plocalindex.hh>

#include <iostream>
#include "mpi.h"

/**
 * @brief Flag for marking the indices.
 */
enum Flag {owner, overlap};

int main(int argc, char **argv)
{
  // This is a parallel program, so we need to
  // initialize MPI first.
  MPI_Init(&argc, &argv);

  // The number of processes
  int size;
  // The rank of our process
  int rank;

  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // The type used as the local index
  typedef Dune::ParallelLocalIndex<Flag> LocalIndex;
  // The type used as the global index
  typedef int GlobalIndex;
  // The index set we use to identify the local indices
  // with the globally unique ones
  typedef Dune::IndexSet<GlobalIndex,LocalIndex,100> IndexSet;

  // The index set
  IndexSet indexSet;

  // Indicate that we add or remove indices.
  indexSet.beginResize();

  if(rank==0) {
    indexSet.add(0, LocalIndex(0,overlap,true));
    indexSet.add(2, LocalIndex(1,owner,true));
    indexSet.add(6, LocalIndex(2,owner,true));
    indexSet.add(3, LocalIndex(3,owner,true));
    indexSet.add(5, LocalIndex(4,owner,true));
  }

  if(rank==1) {
    indexSet.add(0, LocalIndex(0,owner,true));
    indexSet.add(1, LocalIndex(1,owner,true));
    indexSet.add(7, LocalIndex(2,owner,true));
    indexSet.add(5, LocalIndex(3,overlap,true));
    indexSet.add(4, LocalIndex(4,owner,true));
  }

  // Modification is over
  indexSet.endResize();

  // Print the index set
  std::cout<<indexSet<<std::endl;

  // Let MPI do a cleanup
  MPI_Finalize();

  return 0;
}