# dune-istl issues

https://gitlab.dune-project.org/core/dune-istl/-/issues

---

## Generic implementation of parallel solvers

https://gitlab.dune-project.org/core/dune-istl/-/issues/108 · 2023-03-13 · Christian Engwer

Currently dune-istl offers infrastructure for parallel solvers, but one still has to write a parallel operator-apply, a parallel scalar-product and potentially a parallel preconditioner.
As has been discussed a couple of times, it would be desirable to provide generic parallel solvers, based on a yet-to-be-defined communication interface.
Such generic communication could build upon:
- [ ] **1. pattern-based communication**: per rank, a list of multi-indices tells what to send and receive (prototype in [PDELab](https://gitlab.dune-project.org/pdelab/dune-pdelab/-/tree/feature/cachedcomm), extending the approach in [dune-fem](https://gitlab.dune-project.org/dune-fem/dune-fem/-/blob/master/dune/fem/space/common/cachedcommmanager.hh))
- [ ] **2. direct support for multi-indices**: required to gather/scatter to/from a communication buffer (see issue #105 and a [branch](https://gitlab.dune-project.org/core/dune-istl/-/commits/feature/cachedcomm) with a [`forEachIndex` implementation](https://gitlab.dune-project.org/core/dune-istl/-/blob/feature/cachedcomm/dune/istl/access.hh))
- [ ] **3. partial scalar product**: computing (in some way) a scalar product that avoids multiple contributions for overlap DOFs (see [experiments](https://gitlab.dune-project.org/core/dune-istl/-/blob/experiment/performancetest-parallel-sp-algorithms/dune/istl/test/spperformancetest.cc) in branch [experiment/performancetest-parallel-sp-algorithms](https://gitlab.dune-project.org/core/dune-istl/-/tree/experiment/performancetest-parallel-sp-algorithms)); see the sketch below
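A minimal sketch of the mask-based variant of point 3, assuming a consistent 0/1 ownership mask is available on each rank (function and parameter names are illustrative, not dune-istl API):

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

// Hypothetical partial scalar product: mask[i] is 1 if DOF i is owned by
// this rank and 0 otherwise, so every overlap DOF enters the global sum
// exactly once.
double partialScalarProduct(const std::vector<double>& x,
                            const std::vector<double>& y,
                            const std::vector<double>& mask,
                            MPI_Comm comm)
{
  double local = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i)
    local += mask[i] * x[i] * y[i];   // drop contributions owned elsewhere
  double global = 0.0;
  MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
  return global;
}
```

The experiments linked above explore variants of this operation; masking is only the most direct approach.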
---

## Direct support for multi indices

https://gitlab.dune-project.org/core/dune-istl/-/issues/105 · 2023-03-05 · Christian Engwer

Conceptually ISTL uses multi-indices, but there is no direct support in `dune-istl`.
On the other hand, most discretization modules use multi-index objects internally to handle nested vectors and matrices. All of these modules contain, in some form, additional helper functions that access or modify ISTL containers based on multi-indices or partial multi-indices.
I discussed with @simon.praetorius and tried to collect some functions that might be helpful. Including such functions in `dune-istl` directly should significantly simplify the implementation of backends, e.g. in PDELab, dune-functions or AMDiS. dune-fem currently has very limited support for nested containers; on the other hand, if the burden of writing a backend vanishes, this might perhaps change?!
This issue should act as a collection of ideas for such a multi-index interface. I start with ideas for vectors; matrices are a bit more complex and should follow later:
1. we need a proper definition of a multi-index concept.
2. for convenience we might want to add an `operator[](MultiIndex)` or some free-standing access function like `accessVectorEntry(Vector,MultiIndex)` (see the sketch after this list).
3. functions to perform operations on a set of `MultiIndex` objects. This allows, for example, to
* gather/scatter entries into a communication buffer
* compute a parallel scalar product only on entries owned by the current rank
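As a sketch of what point 2 could look like (referenced above), here is a possible free-standing access function for uniformly nested containers; the recursion scheme is an assumption, and heterogeneous containers like `MultiTypeBlockVector` would additionally need a compile-time switch over the block types:

```cpp
#include <cstddef>
#include <type_traits>

// Hypothetical sketch: resolve a runtime multi-index recursively through
// uniformly nested containers, e.g. BlockVector<FieldVector<double,2>>.
template<typename Vector, typename MultiIndex>
decltype(auto) accessVectorEntry(Vector& v, const MultiIndex& mi, std::size_t pos = 0)
{
  if constexpr (std::is_arithmetic_v<Vector>)
    return v;                                           // reached a scalar entry
  else
    return accessVectorEntry(v[mi[pos]], mi, pos + 1);  // descend one level
}

// usage: for BlockVector<FieldVector<double,2>> x, this writes x[3][1]
//   std::array<std::size_t,2> mi{{3, 1}};
//   accessVectorEntry(x, mi) = 4.2;
```

This assumes a complete multi-index; supporting partial multi-indices would require terminating on `mi.size()`, which then has to be a compile-time quantity to keep the return type well-defined.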
---

## NonoverlappingSchwarzOperator doesn't work for MultiTypeBlockMatrix

https://gitlab.dune-project.org/core/dune-istl/-/issues/90 · 2020-04-06 · Timo Koch

The current implementation in `novlschwarz.hh` doesn't compile or work with `MultiTypeBlockMatrix` as `matrix_type`.

---

## Unexpected behaviour of BCRSMatrix implicit buildmode

https://gitlab.dune-project.org/core/dune-istl/-/issues/21 · 2017-09-22 · Carsten Gräser

The behaviour of the implicit build mode is in several ways unexpected and does not fit its documentation:
* (a) In contrast to the documentation, the allocated overflow area is not used while filling the matrix; a separate `std::map` is used instead.
* (b) The allocated overflow area always contains `4*avg` more entries than expected.
* (c) The given `avg` value is not really related to average row sizes: even if you give the exact average, `compress()` may fail if the overflow size is zero.
* (d) The allocated overflow area is in fact used to compensate for rows having more entries than the given `avg` value.
* (e) `compress()` may fail depending on the row order: the following will fail for `row=0` but succeed for `row=5`:
```cpp
#include <dune/common/fmatrix.hh>
#include <dune/istl/bcrsmatrix.hh>

using M = Dune::BCRSMatrix<Dune::FieldMatrix<double,1,1>>;
M m(6, 6, 1, 0.0, M::implicit);
std::size_t row = 0;            // compress() throws for row=0, succeeds for row=5
for (std::size_t i=0; i<6; ++i)
  m.entry(row, i) = i;
m.compress();
```
Explanation for this behaviour: when storage is allocated, it is partitioned into the overflow area followed by `avg` reserved entries per row. When more than `avg` entries are used in a row, the excess is stored in an additional `std::map`. On `compress()` all entries are shifted forward towards the first free position, inserting intermediate entries from the overflow map when necessary. This explains why early rows cannot have more than `avg`+`overflow` entries, while later ones may consume remaining space left by preceding, less populated rows.
Due to (a) there is a possible fix for several of those problems: before compressing forward with insertion of overflow entries, one can compress backwards without this insertion. This would solve (c)-(e) at the price of an additional shifting operation. Even better, it would allow computing the number of non-zeros in the first sweep and then allocating new storage if the overflow was too small, instead of throwing an exception and letting the user assemble all entries again with a hopefully better guess.
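A minimal sketch of that backward sweep on a simplified flat-array model (the layout, names, and use of `std::vector`/`std::map` are assumptions for illustration, not the actual `BCRSMatrix` internals):

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Simplified model of the implicit build mode storage: `data` is laid out as
// [overflow area | avg slots for row 0 | avg slots for row 1 | ...],
// `used[i] <= avg` counts the in-array entries of row i, and `overflowMap`
// holds the entries that did not fit into their row's reserved slots.
void backwardCompress(std::vector<double>& data,
                      const std::vector<std::size_t>& used,
                      const std::map<std::pair<std::size_t,std::size_t>, double>& overflowMap,
                      std::size_t overflow, std::size_t avg)
{
  // The exact number of non-zeros is known before any shifting, ...
  std::size_t nnz = overflowMap.size();
  for (auto u : used)
    nnz += u;

  // ... so one can reallocate here instead of throwing when space is short.
  if (nnz > data.size())
    data.resize(nnz);

  // Backward sweep: pack every row block against the end of the array,
  // without inserting map entries. The subsequent forward sweep (not shown)
  // can then compact rows to the front, merging in the map entries, without
  // ever overwriting entries it has not read yet.
  std::size_t writeEnd = data.size();
  for (std::size_t i = used.size(); i-- > 0; ) {
    const auto first = data.begin() + overflow + i*avg;
    const auto dLast = data.begin() + writeEnd;
    if (dLast != first + used[i])               // skip no-op self-moves
      std::move_backward(first, first + used[i], dLast);
    writeEnd -= used[i];
  }
}
```

Since the backward sweep inserts nothing, it cannot run out of space, and the exact non-zero count is available before any allocation decision is made.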
---

## add indexed iterator ranges

https://gitlab.dune-project.org/core/dune-istl/-/issues/2 · 2020-03-03 · Christian Engwer

Currently we cannot use range-based for loops in dune-istl, because many algorithms have to know the index of an entry in the sparse matrix. This index is not the same as `std::distance(begin, it)` and cannot be computed using standard iterators.

I suggest adding a wrapper type with the following interface:
```cpp
template<typename T>
struct IndexedValue {
  std::size_t index() const;
  T & value();
  const T & value() const;
};
```
This will allow us to implement a special `indexed_range(Matrix)` or `indexed_range(Vector)` which yields `IndexedValue` objects instead of the values directly.
These new ranges and types can be implemented completely outside the current ISTL containers.
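A minimal sketch of such an outside implementation, assuming only that the wrapped iterators provide `index()` as the ISTL row and column iterators do (everything beyond the `IndexedValue` interface above is an illustrative choice):

```cpp
#include <cstddef>

// Wraps an ISTL-style iterator so both its index() and its value are
// accessible from a range-based for loop.
template<typename It>
class IndexedValue {
  It it_;
public:
  explicit IndexedValue(It it) : it_(it) {}
  std::size_t index() const { return it_.index(); }
  decltype(auto) value() const { return *it_; }
};

// Iterator adaptor: dereferencing yields an IndexedValue instead of the
// plain value.
template<typename It>
class IndexedIterator {
  It it_;
public:
  explicit IndexedIterator(It it) : it_(it) {}
  IndexedValue<It> operator*() const { return IndexedValue<It>(it_); }
  IndexedIterator& operator++() { ++it_; return *this; }
  bool operator!=(const IndexedIterator& other) const { return it_ != other.it_; }
};

// indexed_range(c) returns a lightweight view usable in range-based for.
template<typename Range>
auto indexed_range(Range& range)
{
  struct View {
    Range& r;
    auto begin() const { return IndexedIterator(r.begin()); }
    auto end() const { return IndexedIterator(r.end()); }
  };
  return View{range};
}
```

Storing the iterator instead of a reference keeps `IndexedValue` cheap to copy and lets `value()` return a reference with the right constness.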
The `BCRSMatrix::umv` might now look like:
```cpp
void umv (const X& x, Y& y) const
{
  for (const auto& row : indexed_range(*this))
    for (const auto& entry : indexed_range(row.value()))
      entry.value().umv(x[entry.index()], y[row.index()]);
}
```
instead of
```cpp
void umv (const X& x, Y& y) const
{
  ConstRowIterator endi = end();
  for (ConstRowIterator i=begin(); i!=endi; ++i)
  {
    ConstColIterator endj = (*i).end();
    for (ConstColIterator j=(*i).begin(); j!=endj; ++j)
      (*j).umv(x[j.index()], y[i.index()]);
  }
}
```