# dune-istl issues
https://gitlab.dune-project.org/core/dune-istl/-/issues

## Issue #113: Solver factory cannot be instantiated with dynamic blocks
2023-10-17 · Santiago Ospina De Los Ríos (sospinar@gmail.com)

The solver factory fails to be instantiated with dynamic blocks like `BCRSMatrix<BCRSMatrix<...>>`. This seems to be because there is no agreement on [what a matrix is](https://gitlab.dune-project.org/core/dune-common/-/issues/253#note_128057)... The preconditioners assume that the assembled operators are matrices whose blocks are either numbers or matrices with the `DenseMatrix<...>` interface; thus, all preconditioners except `Richardson` fail to be instantiated for `depth==1`.

## Issue #109: Why is shared_ptr used in the solver factory?
2023-10-21 · Simon Praetorius

`SolverFactory::get`, `SolverFactory::getPreconditioner`, and the utility method `getSolverFromFactory` all return their result by `shared_ptr`. Why? A factory should usually return a `unique_ptr`, or to cite [Herb Sutter, GotW \#90](https://herbsutter.com/2013/05/30/gotw-90-solution-factories/):
> Guideline: A factory that produces a reference type should return a `unique_ptr` by default, or a `shared_ptr` if ownership is to be shared with the factory.
The issue with `shared_ptr` is that (1) creation, copying, and destruction cost something, and (2) a `shared_ptr` cannot be converted into a `unique_ptr`, while the reverse conversion is fine.
Maybe the second part of the guideline applies here? Am I missing something?

## Issue #108: Generic implementation of parallel solvers
2023-03-13 · Christian Engwer

Currently dune-istl offers infrastructure for parallel solvers, but one still has to write a parallel operator-apply, a parallel scalar product, and potentially a parallel preconditioner.
As has been discussed a couple of times, it would be desirable to provide generic parallel solvers, based on a yet-to-be-defined communication interface.
Such generic communication could build upon
- [ ] **1. pattern based communication**: per rank, lists of multi-indices tell what to send and receive (prototype in [PDELab](https://gitlab.dune-project.org/pdelab/dune-pdelab/-/tree/feature/cachedcomm), extending the approach in [dune-fem](https://gitlab.dune-project.org/dune-fem/dune-fem/-/blob/master/dune/fem/space/common/cachedcommmanager.hh))
- [ ] **2. direct support for multi-indices**: required to gather/scatter to/from a communication buffer (see issue #105 and a [branch](https://gitlab.dune-project.org/core/dune-istl/-/commits/feature/cachedcomm) with a [`forEachIndex` implementation](https://gitlab.dune-project.org/core/dune-istl/-/blob/feature/cachedcomm/dune/istl/access.hh))
- [ ] **3. partial scalar product**: computing (in some way) a scalar product that avoids multiple contributions for overlap DOFs (see [experiments](https://gitlab.dune-project.org/core/dune-istl/-/blob/experiment/performancetest-parallel-sp-algorithms/dune/istl/test/spperformancetest.cc) in branch [experiment/performancetest-parallel-sp-algorithms](https://gitlab.dune-project.org/core/dune-istl/-/tree/experiment/performancetest-parallel-sp-algorithms))

## Issue #107: BCRSMatrix Performance improvements
2023-10-11 · Santiago Ospina De Los Ríos (sospinar@gmail.com)

In my latest benchmarks, I found that the matrix-vector operations have some deficiencies under certain conditions and can be improved. These are the issues that I have found so far:
1. The `row_type` _stores_ redundant information. Each row stores two pointers (begin of data, begin of sparse indices) and the size. Two of these are redundant, because a vector of offsets (the out-of-the-book CRS implementation) can produce all of them on the fly. This is problematic because loading them means more memory traffic in an already memory-bound operation. The solution is to store the vector of offsets and create the rows on the fly. The obstacle is that matrix creation is written in terms of `row_type`; it is a lot of changes, but it is possible.
2. On nested blocked structures (i.e. `BCRSMatrix<BCRSMatrix<Field,...>, ...>`), the size of `BCRSMatrix<Field,...>` is so big (~128 bytes) that for each sub-block a whole cache line is loaded only to read three pointers (3×8 bytes). The solution is to store the additional information on the heap so that it stays out of the hot path when doing blocking.
3. Also on nested blocked structures: because `row_type` is stored using pointers to the actual data, it is not possible to reuse the row pattern when copying matrices with the same pattern. When reusing the vector of offsets, even fewer memory loads are required. This can be solved by using a shared pointer to the vector of offsets described above (the same way the column indices are handled now).
### Benchmark
For a matrix resulting from a structured 2D grid solving a system of N equations using finite differences. N corresponds to the block size in the following figures, and sparsity corresponds to the sparsity of the "sub-blocks". A matrix bigger than the L3 cache was used, and the matrix-vector operation was repeated 20 times. Flat matrices with a low number of entries per row benefit from 1, whereas nested blocked matrices benefit from 1, 2, and 3.
| | Flat | Blocked |
| -- | ------ | ------ |
| Sample Pattern (BlockSize=5, BlockSparsity=0.5) | ![mat-flat.svg](/uploads/7aecd3b5e61255dd5275bf35a95d70ce/mat-flat.svg) | ![mat-bloked.svg](/uploads/49642dee531fa5ec52c93835f6cdd70f/mat-bloked.svg) |
| SpeedUp w.r.t. partial implementation | ![smv-optimized-flat](/uploads/8efe3e9cb0cfd6452f080b5e624a6575/smv-optimized-flat.png) | ![smv-optimized-blocked-with-reuse](/uploads/07f058dc1917225afc148e0fa3cde86b/smv-optimized-blocked-with-reuse.png) |

## Issue #106: CompressedBlockVectorWindow copy constructor and copy assignment behave differently
2023-01-31 · Santiago Ospina De Los Ríos (sospinar@gmail.com)

In the `CompressedBlockVectorWindow` class we have
* [Copy constructor](https://gitlab.dune-project.org/core/dune-istl/-/blob/32e3ff600257f9cce3c0717510d78c91b1e604f2/dune/istl/bvector.hh#L1038): Makes a shallow copy.
* [Copy assignment](https://gitlab.dune-project.org/core/dune-istl/-/blob/32e3ff600257f9cce3c0717510d78c91b1e604f2/dune/istl/bvector.hh#L1046): Makes a deep copy.
They behave differently, which is unexpected to me. Moreover, since there is no move assignment operator, move-assigning to this class results in a deep copy, which is even more unexpected.

## Issue #105: Direct support for multi indices
2023-03-05 · Christian Engwer

Conceptually ISTL uses multi-indices, but there is no direct support in `dune-istl`.
On the other hand, most discretization modules use multi-index objects internally to handle nested vectors and matrices. In all these modules we find, in some form, additional helper functions that allow accessing or modifying ISTL containers based on multi-indices or partial multi-indices.
I discussed this with @simon.praetorius and tried to collect some functions that might be helpful. Including such functions in `dune-istl` directly should significantly simplify the implementation of backends, e.g. in PDELab, functions, or Amdis. FEM currently has very limited support for nested containers; on the other hand, if the burden of writing a backend vanishes, this might perhaps change?!
This issue should act as a collection of ideas for such a multi-index interface. I start with ideas for the vectors; matrices are a bit more complex and should follow later:
1. we need a proper definition of a multi index concept.
2. for convenience we might want to add an `operator [](MultiIndex)` or some free standing access function like `accessVectorEntry(Vector,MultiIndex)`.
3. functions to perform operations on a set of multi-indices. This allows, for example, to
* gather/scatter entries into a communication buffer
   * compute a parallel scalar product only on entries owned by the current rank

## Issue #100: Compiler warnings with gcc 9.3
2022-09-23 · Timo Koch

```
dune-istl/dune/istl/paamg/transfer.hh: In instantiation of 'static void Dune::Amg::Transfer<V, V1, Dune::Amg::SequentialInformation>::prolongateVector(const Dune::Amg::AggregatesMap<V>&, Dune::Amg::Transfer<V, V1, Dune::Amg::SequentialInformation>::Vector&, Dune::Amg::Transfer<V, V1, Dune::Amg::SequentialInformation>::Vector&, T, const Dune::Amg::SequentialInformation&) [with T1 = double; V = long unsigned int; V1 = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >; Dune::Amg::Transfer<V, V1, Dune::Amg::SequentialInformation>::Vector = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >]':
dune-istl/dune/istl/paamg/amg.hh:1124:27: required from 'void Dune::Amg::AMG<M, X, S, PI, A>::additiveMgc() [with M = Dune::MatrixAdapter<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > >; X = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >; S = Dune::SeqILU<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, 1>; PI = Dune::Amg::SequentialInformation; A = std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > >]'
dune-istl/dune/istl/paamg/amg.hh:892:9: required from 'void Dune::Amg::AMG<M, X, S, PI, A>::apply(Dune::Amg::AMG<M, X, S, PI, A>::Domain&, const Range&) [with M = Dune::MatrixAdapter<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > >; X = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >; S = Dune::SeqILU<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >, 1>; PI = Dune::Amg::SequentialInformation; A = std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > >; Dune::Amg::AMG<M, X, S, PI, A>::Domain = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >; Dune::Amg::AMG<M, X, S, PI, A>::Range = Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >]'
dune-istl/dune/istl/paamg/amg.hh:886:10: required from here
dune-istl/dune/istl/paamg/transfer.hh:119:10: warning: implicitly-declared 'constexpr Dune::Imp::base_array_unmanaged<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >::RealIterator<Dune::FieldVector<double, 1> >& Dune::Imp::base_array_unmanaged<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >::RealIterator<Dune::FieldVector<double, 1> >::operator=(const Dune::Imp::base_array_unmanaged<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >::RealIterator<Dune::FieldVector<double, 1> >&)' is deprecated [-Wdeprecated-copy]
119 | end=fine.end();
In file included from dune-istl/dune/istl/bvector.hh:26,
from dune-istl/dune/istl/bcrsmatrix.hh:18,
from dune-istl/dune/istl/matrixmarket.hh:29,
from dune-istl/dune/istl/owneroverlapcopy.hh:33,
from dumux/dumux/linear/parallelhelpers.hh:32,
from dumux/dumux/assembly/fvassembler.hh:34,
from dumux/test/freeflow/navierstokes/donea/main_momentum.cc:34:
dune-istl/dune/istl/basearray.hh:112:7: note: because 'Dune::Imp::base_array_unmanaged<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > >::RealIterator<Dune::FieldVector<double, 1> >' has user-provided 'Dune::Imp::base_array_unmanaged<B, A>::RealIterator<T>::RealIterator(const Dune::Imp::base_array_unmanaged<B, A>::RealIterator<typename std::remove_const<T>::type>&) [with T = Dune::FieldVector<double, 1>; B = Dune::FieldVector<double, 1>; A = std::allocator<Dune::FieldVector<double, 1> >; typename std::remove_const<T>::type = Dune::FieldVector<double, 1>]'
112 | RealIterator(const RealIterator<ValueType>& it)
| ^~~~~~~~~~~~
```
```
dune-istl/dune/istl/paamg/amg.hh:919:26: warning: implicitly-declared 'Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >::LevelIterator<Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int> >& Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >::LevelIterator<Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int> >::operator=(const Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >::LevelIterator<Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int> >&)' is deprecated [-Wdeprecated-copy]
919 | levelContext.pinfo = matrices_->parallelInformation().finest();
In file included from dune-istl/dune/istl/paamg/matrixhierarchy.hh:13,
from dune-istl/dune/istl/paamg/amg.hh:11,
from dumux/dumux/linear/amgbackend.hh:35,
from dumux/dumux/linear/seqsolverbackend.hh:43,
from dumux/test/freeflow/navierstokes/donea/main_momentum.cc:41:
dumux/dune-istl/dune/istl/paamg/hierarchy.hh:134:9: note: because 'Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >::LevelIterator<Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int> >' has user-provided 'Dune::Amg::Hierarchy<T, A>::LevelIterator<T1, T2>::LevelIterator(const Dune::Amg::Hierarchy<T, A>::LevelIterator<typename std::remove_const<T>::type, typename std::remove_const<T1>::type>&) [with C = Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >; T1 = Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>; T = Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>; A = std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > >; typename std::remove_const<T1>::type = Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>; typename std::remove_const<T>::type = Dune::Amg::Hierarchy<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>, std::allocator<Dune::BlockVector<Dune::FieldVector<double, 1>, std::allocator<Dune::FieldVector<double, 1> > > > >]'
134 | LevelIterator(const LevelIterator<typename std::remove_const<C>::type,
| ^~~~~~~~~~~~~
```
Milestone: DUNE 2.9.0

## Issue #99: [UMFPack] Allow to set UMFPACK_ORDERING via ParameterTree
2021-03-04 · Kilian Weishaupt

We have recently [found a case](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/999) where a different choice of the UMFPACK_ORDERING method (see page 17 [here](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjHmorN7JPvAhXQ8qQKHbkgDUkQFjAAegQIARAD&url=https%3A%2F%2Ffossies.org%2Flinux%2FSuiteSparse%2FUMFPACK%2FDoc%2FUMFPACK_UserGuide.pdf&usg=AOvVaw2LJe4yrqT0PhXusgXBc3HB)) decreases the linear solve time by a factor of 4.
It would be very convenient to set the ordering via the ParameterTree, using the new interface.

## Issue #96: Policy/Assumptions on field_types in MultitypeBlockVector/Matrix?
2020-10-07 · Markus Blatt

I am trying to get rid of some warnings about shadowing variables/using
directions. Some of them are due to using field_type = ... being reset
in some methods (e.g. norms). I don't want to break anything, hence the
question arises:
Do we insist on field_type being the same for each entry? Seems like the
code at least insists that the field_types of all entries are convertible
to the one of the very first entry.
For the template parameters, arbitrary different types are possible.
Our tests always use the same type.

## Issue #94: METIS vs scotchmetis and ParMETIS vs ptscotchparmetis
2020-10-28 · Simon Praetorius

### Summary
The file `repartition.hh` is the only place in the core modules using (Scotch)METIS or (Scotch)ParMETIS. There are currently some discussions in https://gitlab.dune-project.org/core/dune-common/-/merge_requests/822 and https://gitlab.dune-project.org/core/dune-common/-/merge_requests/821 on whether the scotch variants should be dropped. This issue should provide some insights into the problem and try to find a solution.
### Details
- Scotch implements the interface of some (Par)METIS functions, like `ParMETIS_V3_PartKway`. It ships under the open-source license CeCILL-C (compatible with the GPL(?))
- METIS is under the Apache License Version 2.0, while ParMETIS is distributed under a special license for educational, research, non-profit and US-agency use only.
- Debian (its free section) does not ship ParMETIS
- The interface of Scotch is compatible with METIS-4/5, while the interface of PTScotch seems to be compatible with ParMETIS-4
- ParMETIS, on the other hand, requires METIS >= 5
- METIS >= 5 introduces typedefs for its real and index types, see discussion in https://gitlab.dune-project.org/core/dune-istl/issues/45
In summary: one can either use METIS-5 + ParMETIS-4, or Scotch + PTScotch, but not a mix of Scotch and METIS.
Scotch does not (in any version) provide the typedefs of METIS-5, and it comes in several independent versions with different index types, which makes it hard to configure.
Some more problems:
- Scotch-METIS is not fully compatible with METIS. In recent Scotch versions >= 6.0.7 you can select which METIS API you want to provide: either API v3 or API v5. Neither is exactly the same as in METIS, but at least similar. Some function arguments even have a different meaning, and the same values would result in completely different behaviour.
- PTScotch-ParMETIS is compatible with ParMETIS. There is only one API.
- PTScotch does not "export-C" its functions. This is a problem when using the library from C++ and introduces restrictions on dependent libraries like MPI, which because of this need to be provided for C and not for C++, plus lots of required platform-dependent definitions.
### Discussion
1. Do we want to support the (pt)scotch interface at all?
2. Do we want to assume that when calling `find_package(METIS)` in cmake we silently can also get scotch (and correspondingly for ParMETIS and PTScotch)?
3. Currently there is a workaround for the missing typedefs implemented in `dune/istl/repartition.hh` only, see MR https://gitlab.dune-project.org/core/dune-istl/-/merge_requests/173. Do we want to provide a similar workaround for the core modules in general? This seems reasonable, since the find package is provided by dune-common.
4. Is there experience with scotch and ptscotch? Can someone test the changes to the cmake modules in https://gitlab.dune-project.org/core/dune-common/-/merge_requests/822 and https://gitlab.dune-project.org/core/dune-common/-/merge_requests/821?

## Issue #90: NonoverlappingSchwarzOperator doesn't work for MultiTypeBlockMatrix
2020-04-06 · Timo Koch

The current implementation in `novlschwarz.hh` doesn't compile or work for MultiTypeBlockMatrices as `matrix_type`.

## Issue #88: Use custom exceptions for solverfactory
2020-01-27 · Markus Blatt

We are currently throwing rather generic exceptions if something is wrong with the configuration.
Unfortunately, in user simulation code this might result in the algorithm misjudging this as a convergence problem (e.g. in a Newton method) and running the whole simulation anyway.

## Issue #87: [solver factory] Sensible defaults missing
2023-03-07 · Markus Blatt

I played around with this further. It seems like there are some sensible defaults missing.
I can live with an exception if there is no type defined for the preconditioner and solver. But throwing when constructing AMG, for example because one has not defined maxlevel, seems a bit rude.

## Issue #86: Inconsistent naming of parameters for solver factory
2021-06-24 · Markus Blatt

While writing an adapter for the solver factory, I stumbled over the rather inconsistent naming of the parameters used for
the solver factory. It is a pity that nobody looked at this before merging and that this is now part of a stable release.
We might want to change this ASAP as otherwise this might last forever.
It seems like the parameter names taken are just the names of the constructor parameters. Depending on who wrote the code, these are a bit too different or rather inexpressive:
- "verbose" is used for the solver, while the AMG uses "verbosity" (these should be the same to avoid confusion)
- "mmax" for the number of vectors stored in FCG is not very explicit
- "n" for the order of the ILU decomposition might still be ok, but order seems better
My questions would be:
- Shouldn't we settle on either verbose or verbosity?
- Shouldn't we use expressive parameter names (e.g. maxOrthogonalizationVectors instead of mmax)?
- Should we prefix preconditioner-specific parameters with the preconditioner name, e.g. "n" -> "iluN" or "iluOrder"?

## Issue #84: Inconsistencies in naming conventions in dune-istl
2020-01-21 · Timo Koch

Classes in dune-istl seem to follow the CamelCase naming scheme used throughout Dune. However, free functions and member functions, as well as data members and aliases, seem to be an arbitrary mix between something like STL-style naming conventions (all lowercase or lowercase+underscore) and "Dune naming conventions" (camelCase). Differences even occur in the same scope and in the same file (without visible reason).
For example:
* `dune/istl/io.hh` contains the free functions
```c++
void recursive_printvector(...);
void printvector (...);
inline void fill_row (...);
void print_row (...);
void printmatrix (...);
void printSparseMatrix(...);
void writeMatrixToMatlabHelper(...);
void writeMatrixToMatlab(...);
void writeVectorToMatlabHelper(...);
void writeVectorToMatlab(...);
```
* `dune/istl/common/registry.hh` (addressed in !350) contains a macro (usually all uppercase in Dune)
```c++
#define registry_put(Tag, id, ...) ...
```
and the free functions
```c++
auto registry_get(...);
int addRegistryToFactory(...);
```
* In `dune/istl/bcrsmatrix.hh`, `Dune::BCRSMatrix` has the typedefs
```c++
typedef Imp::CompressedBlockVectorWindow<B,A> row_type;
typedef ::Dune::CompressionStatistics<size_type> CompressionStatistics;
```
and the public data member
```c++
static constexpr unsigned int blocklevel = ...;
```
the public member functions
```c++
beforeBegin ();
beforeEnd ();
void setBuildMode(BuildMode bm);
void setSize();
void setImplicitBuildModeParameters();
CreateIterator createbegin ();
CreateIterator createend ();
void setrowsize ();
void getrowsize ();
void incrementrowsize ();
void addindex ();
void setIndices(...);
real_type frobenius_norm2 () const;
real_type infinity_norm() const;
N ();
M ();
nonzeroes();
BuildStage buildStage() const;
BuildMode buildMode() const;
```
and some private member functions like
```c++
void allocateData();
void implicit_allocate(...);
```
This mix looks unprofessional/ugly to me and makes it harder for users to guess the correct name of a function. It also makes the code harder to read, because you wonder what the difference is between two functions that follow different naming schemes ("something should be different, right? because it looks different..."). I think I listed some rather surprising cases above.
Does it make sense to unify the naming scheme, deprecate old names, and promote new names? Or does that not bother anyone else (enough)? Which naming scheme would be appropriate? (I would guess camelCase for functions, to be consistent with the rest of Dune. However, there are also interfaces in dune-common like `DenseVector::two_norm` which are not camelCase although they are Dune-specific types.) I'm willing to do some cleanup work if someone cares and the objective/rules are clear.
There has been a related discussion about some of the dune-common types in https://gitlab.dune-project.org/core/dune-common/issues/83, so there might already be some rules that were decided on that I don't know about...

## Issue #82: factory to construct parallel solvers
2020-03-11 · Christian Engwer

This requires a few points:
- [ ] define a coarse grained interface to abstract the parallel communication
- [ ] fallback implementation using parallel-index-sets
- [ ] implement generic parallel preconditioners and scalar-products based on this interface
- [ ] factory methods to create parallel solvers
(perhaps we will have to split this issue into two steps...)

## Issue #81: use factories to create coarse solver, smoother, etc. in the AMG
2021-01-26 · Christian Engwer

In !312 we implemented factories to create preconditioners, etc. In the longer run these should be used internally in the AMG to create coarse grid solvers, etc.

## Issue #80: matrixredisttest leaks memory
2019-12-17 · Oliver Sander (oliver.sander@tu-dresden.de)

When running `matrixredisttest` with the valgrind leak checker, it reports an error somewhere in `CollectiveCommunication::allreduce`:
```
==16702== 1,344 bytes in 1 blocks are definitely lost in loss record 65 of 66
==16702== at 0x483577F: malloc (vg_replace_malloc.c:299)
==16702== by 0x5051C3F: ompi_op_create_user (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
==16702== by 0x5089CE8: PMPI_Op_create (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
==16702== by 0x1CF0B8: Dune::Generic_MPI_Op<Dune::bigunsignedint<56>, Dune::Max<Dune::bigunsignedint<56> > >::get() (mpicollectivecommunication.hh:45)
==16702== by 0x1C2875: int Dune::CollectiveCommunication<ompi_communicator_t*>::allreduce<Dune::Max<Dune::bigunsignedint<56> >, Dune::bigunsignedint<56> >(Dune::bigunsignedint<56> const*, Dune::bigunsignedint<56>*, int) const (mpicollectivecommunication.hh:329)
==16702== by 0x1B383C: Dune::bigunsignedint<56> Dune::CollectiveCommunication<ompi_communicator_t*>::max<Dune::bigunsignedint<56> >(Dune::bigunsignedint<56> const&) const (mpicollectivecommunication.hh:228)
==16702== by 0x1B0F50: void Dune::fillIndexSetHoles<Dune::Amg::MatrixGraph<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > > >, Dune::bigunsignedint<56>, int>(Dune::Amg::MatrixGraph<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > > > const&, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<56>, int>&) (repartition.hh:115)
==16702== by 0x1A8589: bool Dune::graphRepartition<Dune::Amg::MatrixGraph<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > > >, Dune::bigunsignedint<56>, int>(Dune::Amg::MatrixGraph<Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>, std::allocator<Dune::FieldMatrix<double, 1, 1> > > > const&, Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<56>, int>&, int, std::shared_ptr<Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<56>, int> >&, Dune::RedistributeInterface&, bool) (repartition.hh:1275)
==16702== by 0x1A5995: int testRepart<Dune::FieldMatrix<double, 1, 1> >(int, int) (matrixredisttest.cc:66)
==16702== by 0x19DA57: main (matrixredisttest.cc:138)
```

https://gitlab.dune-project.org/core/dune-istl/-/issues/74
GlobalIndex of ParallelIndexSet used out of specification. (Simon Praetorius, 2019-07-10)

The `GlobalIndex` in `ParallelIndexSet` is by documentation a type that "[...] has to provide at least a operator< for sorting". In the documentation of the class it is named a *global id* and it is stated:
```
[...] the global id might not be consecutive but definitely is persistent.
```
So, it is a persistent (over grid changes), partially ordered id type (similar to the global entity id type of the grid).
While this is already not enough for the `ParallelIndexSet` and `GlobalLookupIndexSet` implementations (I think all comparison operators are used somewhere in the code), it gets worse in dune-istl code that uses the `ParallelIndexSet`.
Example: `fillIndexSetHoles()` from `repartition.hh`:
- a maximal GlobalIndex is determined by first initializing a global index with 0 (not possible in general) and then taking the pairwise `std::max`. Why not use the already sorted list `ParallelIndexSet::localIndices_`?
- to determine the next free global index, the maximum is then incremented (using `operator++`) and shifted (using `operator+`).
Neither of these operations is guaranteed to work by the `ParallelIndexSet` specification.
So, here some cleanup or at least a better specification of the requirements is necessary!

https://gitlab.dune-project.org/core/dune-istl/-/issues/70
Test galerkintest fails in parallel (Timo Koch, 2023-03-19)

Running it on two cores triggers the following assertion (on master and release/2.6):
```
21: galerkintest: /home/timok/dumux-dune-master/dune-common/dune/common/parallel/indexset.hh:1045: Dune::GlobalLookupIndexSet<I>::GlobalLookupIndexSet(const I&, std::size_t) [with I = Dune::ParallelIndexSet<int, Dune::ParallelLocalIndex<Dune::OwnerOverlapCopyAttributeSet::AttributeSet>, 512>; std::size_t = long unsigned int]: Assertion `pair->local()<size_' failed.
21: [pumbaa:27576] *** Process received signal ***
21: [pumbaa:27576] Signal: Aborted (6)
21: [pumbaa:27576] Signal code: (-6)
```
See !295