# dune-common issues

## Issue #362: Assert and debug levels for error checking
https://gitlab.dune-project.org/core/dune-common/-/issues/362 (Simon Praetorius, 2024-02-22)

### Summary
In several dune core modules, error checking is enabled differently.
- In dune-common `boundschecking.hh`, a macro `DUNE_ASSERT_BOUNDS` is enabled if the variable `DUNE_CHECK_BOUNDS` is set.
- In `densematrix.hh` and `diagonalmatrix.hh`, additionally the macro `DUNE_FMatrix_WITH_CHECKING` is checked; if it is set, some additional conditions are tested.
- In dune-common as well as in dune-istl, the macro `DUNE_ISTL_WITH_CHECKING` is available for enabling extra checks in parallel code and in istl containers
- Then we have the classical `assert` macro as well as the extra macro `DUNE_ASSERT_AND_RETURN`
- In `debugallocator.hh` we have another macro: `ALLOCATION_ASSERT`
- And `stdthread.hh` defines `DUNE_ASSERT_CALL_ONCE`
- The file `reservedvector.hh` additionally introduces the macro `CHECKSIZE` that is activated by `CHECK_RESERVEDVECTOR`
- In some files, `NDEBUG` is used explicitly for deactivating extra checks

This situation is not very satisfactory, and I have probably not even listed all available macros and variables used for error checking. We need a better way!
#### Another related issue:
- The `DUNE_THROW` macro cannot be used in `constexpr` contexts. This is because the `Dune::Exception` classes are not literal types, and one cannot even construct them with a string message directly.
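To illustrate the `constexpr` point: `throw` itself is allowed inside a `constexpr` function as long as the throwing branch is never taken during constant evaluation, so the real blocker is that `Dune::Exception` cannot be constructed in such a context. A minimal sketch using a standard exception type (the `checkedIndex` helper is hypothetical, not Dune code):

```cpp
#include <stdexcept>

// A constexpr function may contain a throw statement; it only becomes an
// error if the throwing branch is actually reached during constant evaluation.
constexpr int checkedIndex(int i, int n)
{
  if (i < 0 || i >= n)
    throw std::out_of_range("index out of range"); // only taken at runtime
  return i;
}

static_assert(checkedIndex(2, 5) == 2, "evaluated at compile time, no throw");
```

A `constexpr`-friendly `DUNE_THROW` would need the exception classes to be constructible in such a context, or a branch that falls back to a compile-time diagnostic.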
### How do others solve this issue?
- *wxWidgets* introduces a `wxDEBUG_LEVEL` variable (default value 1) and the macros `wxASSERT` and `wxASSERT_LEVEL_2` to distinguish cheap and expensive debug checks
- *Microsoft* has `ASSERT` (checks if `_DEBUG` is set), `VERIFY` (always checked), and `ASSERTE` (check with message representing the checked expression)
- *deal.ii* has `Assert`, `AssertThrow` and `AssertNothrow` macros that are enabled if `DEBUG` is set.
- *LiveV* introduces the macros `ASSERT(X,A)` (deactivated with `-DNODEBUG`), `ASSERT_PRE(X,A)`, `ASSERT_POST(X,A)`, and `ASSERT_INV(X,A)` (activated with `-DTEST_[PRE|POST|INV]`)
- *CppCoreGuidelines* introduce `Expects()` and `Ensures()` macros for pre- and post-conditions. In the GSL, these are implemented using the `[[likely]]` attribute
### Proposal
- I suggest having only a single type of macro for all checks: `DUNE_ASSERT`
- I like the design of wxWidgets with a dedicated name for expensive asserts, `DUNE_ASSERT_LEVEL_2`. Whether it is activated can be controlled by the variable `DUNE_DEBUG_LEVEL` (the name of that macro is up for discussion)
- I like the design of LiveV with a message argument by default, but maybe this can be made optional.
- Additionally, one might consider distinguishing pre-/post-conditions and invariants, and might also introduce assumptions, e.g. `DUNE_ASSUME`, that are always checked but might not lead to direct termination. These are hints to the compiler and the user, and termination with error messages must be enabled explicitly.
- The `DUNE_THROW` macro and exceptions should be made `constexpr`-friendly
An alternative with a flexible set of arguments to the assert macro is described in https://www.foonathan.net/2016/09/assertions/
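To make the proposal concrete, here is a minimal sketch of how level-controlled assert macros could look. All names (`DUNE_ASSERT`, `DUNE_ASSERT_LEVEL_2`, `DUNE_DEBUG_LEVEL`) are the proposed ones from above and do not exist in Dune yet; with the default level 1, the cheap assert is active and the expensive one compiles out:

```cpp
#include <cstdio>
#include <cstdlib>

// Debug level controlling which asserts are active (proposal, not Dune API).
#ifndef DUNE_DEBUG_LEVEL
#define DUNE_DEBUG_LEVEL 1
#endif

// Cheap checks: active at level >= 1.
#if DUNE_DEBUG_LEVEL >= 1
#define DUNE_ASSERT(cond, msg) \
  ((cond) ? (void)0 \
          : (std::fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, msg), \
             std::abort()))
#else
#define DUNE_ASSERT(cond, msg) ((void)0)
#endif

// Expensive checks: only active at level >= 2.
#if DUNE_DEBUG_LEVEL >= 2
#define DUNE_ASSERT_LEVEL_2(cond, msg) DUNE_ASSERT(cond, msg)
#else
#define DUNE_ASSERT_LEVEL_2(cond, msg) ((void)0)
#endif

// Example use of the cheap assert.
inline int safeDivide(int a, int b)
{
  DUNE_ASSERT(b != 0, "division by zero");
  return a / b;
}
```

The mandatory message argument follows the LiveV style mentioned above; it could be made optional via an overloaded/variadic macro.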
### Related issues/merge-requests
- !1346 (Add the macro DUNE_ASSUME)
- #110 (`DUNE_THROW` fails when used within `constexpr` function)

## Issue #332: How to add include directories to dune targets
https://gitlab.dune-project.org/core/dune-common/-/issues/332 (Santiago Ospina De Los Ríos, 2024-02-23)

### Issue
In order to move to a target based build system, our targets need to be equipped with the necessary include directories so that we can stop relying on project-wise include commands (i.e. `target_include_directories` vs `include_directories`). Currently, we are using `include_directories` in `dune_project()` so that all the targets in the project see the header files of all dependencies and the project itself. This needs to be removed in favor of the target based approach (i.e. `target_include_directories`). The question here is: what is the best alternative to enforce `target_include_directories` in all dune modules?
### Alternatives
_@simon.praetorius made a great summary in https://gitlab.dune-project.org/core/dune-common/-/merge_requests/1249#note_127842_
There are some ideas on how to move the include directories from a global property onto targets:
1. Manually setting the include dir on a target. Pro: it is explicit and works right now. Con: every module needs to create a library, and it is quite a complicated cmake statement to get right. It should actually be something like
```cmake
target_include_directories(<target> [INTERFACE]
$<BUILD_INTERFACE:${PROJECT_BINARY_DIR}>
$<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}>
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
```
2. Extending the function `dune_project(<target> [INTERFACE]...)` by additional arguments to directly create a module library and set the include directories on it. Pro: requires only little change in the code and can be backwards compatible. Con: now `dune_project` has two responsibilities, configuring the module and creating a library. See !944
3. Adding a helper function just for the purpose of semi-automatically configuring a module library, e.g., `dune_add_module_library(<target> [INTERFACE]...)`. Pro: clear separation of the functionality of `dune_project` vs. `dune_add_library`, but still some automatism. Con: not yet clear how to implement this in a backwards-compatible way.

Labels: CMake Modernization (assignee: Simon Praetorius)

## Issue #330: [discussion] drop or simplify pip & venv magic during configure
https://gitlab.dune-project.org/core/dune-common/-/issues/330 (Christian Engwer, 2023-04-27)

As we have seen in !1235, the automagic handling of python dependencies leads to a couple of strange problems, e.g. packages installed automatically by `cmake` are not always found.
I would like to start a discussion on how we can simplify this whole cmake-venv-pip integration. My provocative statement is that it leads to more problems than it solves, and that a clear error message about missing dependencies would be far more helpful.
Opinions?

## Issue #317: Callable in algorithm.run
https://gitlab.dune-project.org/core/dune-common/-/issues/317 (Robert K, 2022-10-10)

Currently it is only possible to receive `std::function<void(...)>` in an algorithm from the Python side, since a `pybind11::function` can only be cast into that. This happens when the argument passed to `algorithm.run` has no cppType.
Setting the cppType by hand to, e.g., `std::function<bool(...)>` is a working fix.

## Issue #300: GenerateTypeName is fragile
https://gitlab.dune-project.org/core/dune-common/-/issues/300 (Christian Engwer, 2023-04-09)

The `GenerateTypeName` infrastructure in `typeregistry.hh` requires detailed knowledge about the details of the C++ type. Providing these manually is error-prone, see e.g. the global basis in dune-functions (dune-functions#70).
The global basis defines a `DiscreteGlobalBasisFunction`, but sets the type name to `GlobalBasis::DiscreteFunction`. When trying to generate C++ code that can operate on such a discrete function, we get compiler errors, because there is no typename `GlobalBasis::DiscreteFunction` in the concrete `GlobalBasis`.
Couldn't we make `GenerateTypeName` more robust by using `className` to directly extract the correct type name?
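As a sketch of that idea: on GCC/Clang the exact type name can be recovered from `typeid` plus demangling, so no manually provided string is needed. This is illustrative only (and compiler-specific); Dune's real `Dune::className` is more elaborate:

```cpp
#include <cxxabi.h>   // GCC/Clang demangling API
#include <cstdlib>
#include <string>
#include <typeinfo>

// Sketch of a className-style helper: asks the compiler for the exact type
// name instead of relying on a manually registered string.
template <class T>
std::string className()
{
  int status = 0;
  char* demangled
    = abi::__cxa_demangle(typeid(T).name(), nullptr, nullptr, &status);
  // Fall back to the mangled name if demangling fails.
  std::string result
    = (status == 0 && demangled) ? demangled : typeid(T).name();
  std::free(demangled);
  return result;
}
```

A type-registry entry built from such a helper could never disagree with the actual C++ type, which is exactly the robustness asked for above.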
The problem is that `GenerateTypeName` is hardly documented, thus I am unsure about the implications of such a change.

## Issue #291: Fix version numbers for dune package requirements
https://gitlab.dune-project.org/core/dune-common/-/issues/291 (Andreas Dedner, 2022-01-27)

At the moment we (mostly) don't fix specific versions for the `PythonRequirements` in `dune.module`, except for `pip>=21` and `setuptools>=41`. I just now ran into the problem that the binding code didn't work correctly with a newer `numpy` version due to a change in the `dtype.format` value (changed from `Q` to `<Q`).
Should we fix versions (or at least ranges if possible) for those packages?
**Advantage**: code will be more stable and fewer issues reported by users
**Disadvantage**: more difficult to mix code. We now have some projects with students mixing `fenics` and `dune` or using `petsc4py`; if those packages were to set other version requirements (which would actually be fine for dune), then that becomes impossible.
A possible compromise would be to add specific versions of the required packages to the internal venv but not to an active external venv, making it clear that we support stability with respect to package versions _only_ when using the internal venv. But that might look inconsistent.

## Issue #290: [MPI] MPICommunication should clone the communicator in the constructor.
https://gitlab.dune-project.org/core/dune-common/-/issues/290 (Robert K, 2022-01-10)

As discussed in !1068, the MPICommunication class (https://gitlab.dune-project.org/core/dune-common/-/blob/master/dune/common/parallel/mpicommunication.hh#L110) in dune-common should in my opinion clone (`MPI_Comm_dup`) the communicator passed in the constructor. Otherwise this can lead to some serious deadlock problems, for example when using barriers or Send/Recv without tagging.

## Issue #279: Python packages should not be installed during configure and make
https://gitlab.dune-project.org/core/dune-common/-/issues/279 (Timo Koch, 2023-04-09)

The usual procedure of installing C++ based projects is configure, make, install. With the new Python bindings enabled per default, there is some IMO __unexpected behaviour__:
1. During `configure` (e.g. `dunecontrol configure`), Python packages (like `jinja2`, `numpy`) are silently installed into the virtual environment (external if a venv is activated, internal in dune-common if not). This doesn't even produce any output.
2. During `make all` (e.g. `dunecontrol make`) Python packages are installed into the virtual environment. This time it's the Dune packages depending on the Python bindings.
3. During `make install_python` packages get installed globally (system-wide) even if a virtual env is activated. EDIT: This only happens when the internal venv is used.
__Expected behaviour__:
1. `configure` only configures (and doesn't create a venv)
2. `make all` only builds (also building Python bindings (the C++ part) that are enabled per default now)
3. `make install_python` installs Python packages (system-wide may be the default, but if a venv is activated it would be nice to install into it per default)
4. `make install_python_editable` installs Python packages editable instead (would be nice to have)
__Some current issues__
1. It currently doesn't seem possible to just configure and build without installing Python modules. This requires an internet connection (OK, it prints some warnings and skips this step if there is no internet, I think). I might want to just build my code locally without an internet connection and compile some C++ stuff. Why do I need to install Python bindings for that? Or I might start my own virtual env after running configure; then the packages are installed in a now useless internal venv. Why not wait?
2. Or in a CI pipeline, I might want to separate Python and C++ testing in different build jobs where the C++ job doesn't need to know anything about the Python job.
3. I might want to install my Python modules system-wide with `make install_python`. However, per default everything always gets installed editable inside the internal venv during `make` already, although this is completely unnecessary in that case.
4. In the case of internal venv the build dependency / flow of information is inverted. Usually, downstream modules depend on upstream modules. Now, the dune-common build directory contains information from all downstream modules.
5. Dune now (at least on Ubuntu) requires installing `python-venv`, otherwise dunecontrol fails.

Building the Python C++ binding modules per default is of course ok. But why is the __installation__ of Python packages not optional anymore (like in Dune 2.8)? Also, is there a reason not to delay the installation until the Python packages are actually needed?
__Suggestion__:
1. Don't create an internal virtual env during configure or build/make. If an external venv is activated, use that as the default installation destination; if not, install system-wide per default (like for C++) ((_alternatively, default-install into an internal documented location like now, but then `make install_python` should create the venv and install there, not `configure` or `make all`_))
2. Configure only configures (includes configuration of Python packages which depend on CMake information), and one can pass an installation path for the Python packages (if you don't want to install system-wide or in an external activated venv)
3. `make all` only builds libraries (no C++ tests, and no Python installation)
4. `make install_python_editable` installs Python packages into configured destination in editable mode using `pip`
5. `make install_python` installs Python packages into configured destination in normal mode using `pip`
6. `make install`? I have no idea what this currently does in combination with Python packages.
I think this would make things both simpler to implement, more transparent, consistent and expected.
__Question__:
Is the following workflow already possible (by setting some variables) but I just don't know it?:
1. disable creation of internal venv and provide path to external venv via CMake (that is not activated yet)
2. run `dunecontrol all` which _only_ configures and builds _without_ installing like in 2.8 with Python bindings enabled
3. run some install command that installs Python modules and dependencies into the provided path (possibly in editable mode)
I think this is at least one reasonable workflow that should be supported.
__User perspective: suggestion vs status quo__:
I think in the minimal case the following would be possible: (Assuming I cloned all modules into a directory)
* Currently (master), to run a python script with dune is as simple as:
- `./dune-common/bin/dunecontrol --opts=my.opts all`
- `source ./dune-common/build-cmake/dune-env/bin/activate`
- `python my_python_test.py`
* With what is suggested here, I believe, the same could be achieved with:
- `./dune-common/bin/dunecontrol --opts=my.opts all`
- `./dune-common/bin/dunecontrol make install_python_editable`
- `python my_python_test.py`
__Related__:
* Even with DUNE_ENABLE_PYTHONBINDINGS=0 a virtual env is installed. Would be nice to have one variable to turn off the Python stuff. (also see #293)

## Issue #253: What is a matrix?
https://gitlab.dune-project.org/core/dune-common/-/issues/253 (Simon Praetorius, 2023-10-17)

## What is a matrix?
While we have the CRTP base class `DenseMatrix` specifying the interface for matrices, this base is incomplete or wrongly documented, and even inconsistent. The specification is also not applied to all matrix-like classes, either in dune-common or in dune-istl. So, I wanted to open a specific discussion on: what is a matrix?
### 1. Matrix is a 2d container of values
On Wikipedia, a matrix is defined as "a rectangular array of numbers (or other mathematical objects)". Thus we have the following requirements:
- (a) A matrix stores elements/entries of a specific type. Seeing matrices as collections, this would mean we have a single element-type. For collections this type is typically denoted by `value_type`.
- "The size of a matrix is defined by the number of rows and columns that it contains". Actually, this definition specifies three pieces of size information:
  1. (b) There is a type representing the sizes, typically denoted by `size_type`
  2. (c) There is the number of rows and the number of columns. In Dune these are (for historical reasons) denoted by `N()` and `M()`, respectively.
  3. (d) There is the *total size* of a matrix. Since a matrix shape is denoted by `n x m`, I would suggest defining the size as the product `n*m`. This would allow seeing single-row and single-column matrices as row-vector and column-vector, correspondingly. Compare, e.g., `numpy.matrix.size`, `mtl::size(matrix)`, or `Eigen::Matrix::size`.
- (e) The elements of a matrix should be accessible by indices. Since the row and column indices are in the range `[0,rows)` and `[0,cols)`, its index type should also be `size_type`.
- (f) Following the C-array syntax, elements are accessed by `matrix[i][j]`.
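Requirements (a)-(f) can be summarized in a minimal illustrative class (a sketch of the requirements only, not the Dune interface):

```cpp
#include <array>
#include <cstddef>

// Minimal sketch of the 2d-container requirements:
// (a) value_type, (b) size_type, (c) N()/M(), (d) size() == N()*M(),
// (e)/(f) index access via A[i][j].
template <class T, std::size_t n, std::size_t m>
class MiniMatrix
{
  std::array<std::array<T, m>, n> data_{};

public:
  using value_type = T;                                // (a)
  using size_type  = std::size_t;                      // (b)

  static constexpr size_type N() { return n; }         // (c) number of rows
  static constexpr size_type M() { return m; }         // (c) number of columns
  static constexpr size_type size() { return n * m; }  // (d) total size

  // (e)/(f) element access via row proxy, indices in [0,rows) x [0,cols)
  std::array<T, m>&       operator[](size_type i)       { return data_[i]; }
  const std::array<T, m>& operator[](size_type i) const { return data_[i]; }
};
```

With this definition, a `MiniMatrix<double, 1, 3>` has `size() == 3`, matching the suggestion that single-row matrices can be viewed as row vectors.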
### 2. Matrix is an element of a vector space
Matrices over a field form an algebraic structure with vector-space properties:
- matrices form an additive abelian group (commutative group)
- matrices are scalable by scalars of the field type
- Multiplication with scalar is distributive with respect to addition.
These mathematical concepts result in the following requirements for the matrix type `M`:
- There is a field type associated with the matrix, i.e. `Dune::FieldTraits<M>::field_type` exists.
- There are arithmetic operations for matrices `A` and `B`:
* (g) `A + B`, `A - B`
* (h) `A += B`, `A -= B`
* (i) `-A`, `+A` (optional)
- Multiplication with a scalar `s` of field type:
* (j) `A * s`, `s * A`
* (k) `A *= s`
* (l) `A / s`, `A /= s`
- (m) Additionally, we typically require compound operations: `A.axpy(s, B)`
#### 2.1. Normed space and inner-product space
By defining norms and inner products over matrices, the vector space could be turned into a normed
vector space or an inner-product space
- Typical norm for matrices include:
* (n) `frobenius_norm()`
  * (o) `frobenius_norm2()` (squared frobenius norm)
* (p) `infinity_norm()`
* (q) `infinity_norm_real()` (simplified infinity norm)
- Inner-products:
  * (r) frobenius inner-product `A:B = tr(A^H B)` (not implemented by any matrix)
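For the real-valued case, the Frobenius inner product (r) and the derived norms (n)/(o) reduce to simple elementwise sums, `A:B = sum_ij A_ij * B_ij`. A sketch with free functions over C-array matrices (illustrative, not existing Dune API):

```cpp
#include <cmath>
#include <cstddef>

// (r) Frobenius inner product for real matrices; for complex entries the
// first factor would additionally be conjugated.
template <std::size_t n, std::size_t m>
double frobeniusInnerProduct(const double (&A)[n][m], const double (&B)[n][m])
{
  double sum = 0.0;
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < m; ++j)
      sum += A[i][j] * B[i][j];
  return sum;
}

// (o) squared Frobenius norm: ||A||_F^2 = A:A
template <std::size_t n, std::size_t m>
double frobenius_norm2(const double (&A)[n][m])
{
  return frobeniusInnerProduct(A, A);
}

// (n) Frobenius norm
template <std::size_t n, std::size_t m>
double frobenius_norm(const double (&A)[n][m])
{
  return std::sqrt(frobenius_norm2(A));
}
```

This also shows why (r) is cheap to add once (n)/(o) exist: the norm is just the inner product of a matrix with itself.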
### 3. Matrix is a linear map
A linear transformation or linear map between vector spaces can be represented by a matrix. The operations of a linear map are typically of the form of matrix-vector products:
- (s) `A.mv(x, y)`: `y = A*x`
- (t) `A.umv(x, y)`: `y+= A*x`
- (u) `A.mmv(x, y)`: `y-= A*x`
- (v) `A.usmv(alpha, x, y)`: `y+= alpha * A*x`
If the matrix additionally provides transposed operations:
- (w) `A.mtv(x, y)`: `y = A^T * x`
- (x) `A.umtv(x, y)`: `y+= A^T * x`
- (y) `A.mmtv(x, y)`: `y-= A^T * x`
- (z) `A.usmtv(alpha, x, y)`: `y+= alpha * A^T * x`
If the matrix additionally provides hermitian operations:
- (A) `A.mhv(x, y)`: `y = A^H * x` (currently not implemented by any matrix)
- (B) `A.umhv(x, y)`: `y+= A^H * x`
- (C) `A.mmhv(x, y)`: `y-= A^H * x`
- (D) `A.usmhv(alpha, x, y)`: `y+= alpha * A^H * x`
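The core operations (s) and (v) can be sketched elementwise as follows; (t) and (u) are the special cases `alpha = 1` and `alpha = -1` of (v). These are illustrative free functions, not the Dune member functions:

```cpp
#include <cstddef>

// (s) y = A*x for a real n x m matrix stored as a C array.
template <std::size_t n, std::size_t m>
void mv(const double (&A)[n][m], const double (&x)[m], double (&y)[n])
{
  for (std::size_t i = 0; i < n; ++i) {
    y[i] = 0.0;
    for (std::size_t j = 0; j < m; ++j)
      y[i] += A[i][j] * x[j];
  }
}

// (v) y += alpha * A*x; alpha = 1 gives (t) umv, alpha = -1 gives (u) mmv.
template <std::size_t n, std::size_t m>
void usmv(double alpha, const double (&A)[n][m], const double (&x)[m],
          double (&y)[n])
{
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < m; ++j)
      y[i] += alpha * A[i][j] * x[j];
}
```

The transposed variants (w)-(z) and hermitian variants (A)-(D) follow the same pattern with `A[j][i]` resp. `conj(A[j][i])` in place of `A[i][j]`.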
### 4. Other operations with matrices
- (E) Matrix multiplication: `A in R^nxm` and `B in R^mxk` then `A * B in R^nxk`
* Matrices must be compatible
  * square matrices form a ring under matrix addition and matrix multiplication; this would allow `A *= B`
- (F) The transposed of a matrix: `A in R^nxm` -> `A^T in R^mxn`.
* Either an operation `A.transpose()` or a free function `transpose(A)`?
* The transposed matrix is itself a matrix
- (G) Trace of a matrix `tr(A) = sum_i A_ii`
* Assumption: matrix is square
* Either an operation `A.trace()` or a free function `trace(A)`?
  * Recursive definition: `tr(A) = sum_i tr(A_ii)`
- (H) Determinant of a matrix
* Assumption: matrix is square
* Easy to define for small matrices
* Generalization using, e.g., Laplace expansion
* Special operation: gramian determinant
- (I) Inversion of a matrix or solution of a linear system
* Assumption: matrix is square
* easy to implement for small matrices
* LU decomposition (or other forms of factorization)
### 5. Matrices can be traversed
Visiting all the (stored) elements of a matrix is a useful operation to generalize some algorithms
- What is a proper way of choosing the way/order of traversal?
- It exists `begin()` and `end()` functions.
  * These are not documented properly, but typically mean traversal over rows.
- Additionally reverse traversal is given by `beforeEnd()` and `beforeBegin()`
  * Why not use `rbegin()` and `rend()` as for standard containers?
- Missing functions in the interface:
* Traversal over rows first: e.g. `begin_rows()`, `end_rows()`
* Traversal over columns first: e.g. `begin_cols()`, `end_cols()`
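The proposed range generators could be sketched as follows, here over nested `std::array` for simplicity; `columns` materializes copies purely for illustration, whereas a real implementation would return lightweight views (names follow the proposal above and are not existing Dune API):

```cpp
#include <array>
#include <cstddef>

// Row-wise traversal: the rows are the native ranges of a row-major matrix,
// so the generator can simply pass the matrix through.
template <class T, std::size_t n, std::size_t m>
const std::array<std::array<T, m>, n>&
rows(const std::array<std::array<T, m>, n>& A)
{
  return A;
}

// Column-wise traversal: materialize the columns (copies, for illustration
// only; a real implementation would return a view with strided iterators).
template <class T, std::size_t n, std::size_t m>
std::array<std::array<T, n>, m>
columns(const std::array<std::array<T, m>, n>& A)
{
  std::array<std::array<T, n>, m> cols{};
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < m; ++j)
      cols[j][i] = A[i][j];
  return cols;
}
```

With such generators, `for (auto&& row : rows(A))` and `for (auto&& col : columns(A))` make the traversal direction explicit at the call site, which plain `begin()`/`end()` cannot.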
### 6. Special matrix types with more properties
- Sparse matrices:
  * A check whether an element exists in the store is required: `A.exists(i,j)`
* Number of stored elements (nonzeros): `A.nonzeroes()`
* Iterators only iterate over stored elements/nonzeros.
  * Often direct element access is not implemented (neither const nor mutable)
- Banded/Diagonal matrices:
* Special type of sparse matrix
  * Additional access to the diagonal(s): `A.diagonal()` (the current interface does not allow returning off-diagonals)
* Direct access to diagonal entries: `A.diagonal(i)`
- Triangular matrices
- Tridiagonal matrices
## Examples in dune-common
### `DenseMatrix`
A `DenseMatrix` is the CRTP base class for some matrix implementations, but it lacks some of the properties from above. Also,
its documentation is partially wrong.
- (c) There are multiple functions returning number of rows/columns: `A.N()` and `A.rows()`, `A.M()` and `A.cols()`, where
`rows() -> mat_rows()` and `cols() -> mat_cols()` call the implementation functions.
- (d) is defined differently. For some reason `A.size()` returns the number of rows.
- (g) arithmetic operations are not completely available for matrices; `+A` is not implemented (not required)
- (j) and (l) not implemented, except for `A /= s`
- (r) There is no inner product for matrices implemented
- (A) Missing lin-map operation `A.mhv(x,y)`
- (E) the binary matrix-matrix product is not implemented. For square matrices, multiply-assign is implemented from the left and
  the right, i.e., `A.leftmultiply(B)` and `A.rightmultiply(B)`. The general matrix-matrix multiplication is commented out.
- (G) Trace is not implemented.
### `FieldMatrix`
Is an implementation of `DenseMatrix` thus inherits its properties. Some additional comments
- (c) Here `rows` and `cols` are enum constants hiding the interface functions `rows()` and `cols()`
- (g) FieldMatrix implements `A + B` and `A - B`
- (j) and (l) are completely implemented
- (E) Multiplication is implemented as `A * B` and `leftmultiply[any]()` and `rightmultiply[any]()`. Additional operations
are implemented in the namespace `Dune::FMatrixHelp`
### `DynamicMatrix`
Is an implementation of `DenseMatrix` thus inherits its properties.
### `DiagonalMatrix`
- (d) is defined differently. For some reason `A.size()` returns the number of rows.
- (f) The element access is only implemented for diagonal entries, i.e. `matrix[i][j]` throws if `i != j`
- (g), (j) Operations `A + B`, `A - B`, and `-A` not implemented.
- (i) compound assignment operators are implemented for scalars: `A += s` and `A -= s`, meaning elementwise addition and
  subtraction of that scalar.
- (m) Operation `axpy` not implemented.
- (r) There is no inner product for matrices implemented
- (A) Missing lin-map operation `A.mhv(x,y)`
- (E) No matrix-matrix products implemented
- (G) Trace is not implemented.
### `TransposedMatrixWrapper`
The wrapper is created by `transpose(matrix)`, but does not produce a matrix in any of the senses described above. It
has just one single operation: `A * B^T`. Note that this wrapper only works for transposing a `FieldMatrix`.
## Test of the interface
There is a test in dune-common, `checkmatrixinterface.hh`, but it does not test the complete matrix interface, just the functions
that were already implemented. Thus, this test was probably written after the implementation of the `DenseMatrix` class,
not before.
An alternative test, organized by the various aspects of a matrix described above, is implemented in !953
in terms of C++20 concepts. These tests can run in the CI jobs for toolchains supporting this new C++ standard and can
report interface failures.
## Major problems with the current interface
The existing matrix implementations in dune-common (but also in dune-istl) only partially fulfil the interface definition
from above and are inconsistent. Major problems are:
- Element access is not a defined property of a matrix. It is only guaranteed to succeed in some matrix implementations,
  mainly for performance reasons. This argument is a bit weak, because there are other ways to ensure performant
  data access, e.g. using iterators or specialized access functions.
- Traversal of matrices is not properly specified and cannot easily be extended to column-major traversal. `begin()` and
  `end()` have no knowledge about their traversal direction. There should be better names for row-wise traversal and
  column-wise traversal. Proposal: `begin_rows()` and `begin_cols()`. Then the user has to explicitly use a range generator
  for the iteration, e.g., `rows(matrix).begin()` and `columns(matrix).begin()`
- The iterators themselves are not properly specified. Is the dereferenced iterator again a range? What does it traverse?
- Some matrices implement single-index access `matrix[r]` as a row view onto the matrix, i.e., this operation returns a
  vector-like container representing the `r`-th row of the matrix. This is not clearly specified. Is this a required property
  of a matrix? What are the costs of this operation? Is the row a proper vector or just a proxy providing minimal access
  to the row entries?
- Especially in the implementation of geometries, one needs matrix-matrix multiplication. While this is implemented for
  `FieldMatrix`, all the other matrix types lack this operation. But, e.g., `YaspGrid` returns a `DiagonalMatrix` for its
  jacobian; other grid implementations may return something else. We need `A * B`, but also `A^T * B` and `A * B^T`
  for various combinations of matrices. This is partially implemented in some "Helper" namespaces in dune-geometry or,
  for `FieldMatrix`, in the `FMatrixHelp` namespace. Multiplication seems not properly designed!
- Sparse matrices are a special type of matrices. The interface should mirror this mathematical property. Actually, sparse
matrices are just matrices with a different storage type for its entries. Thus, sparse matrices should only extend the
matrix interface with methods providing fast access to the stored elements. Either by iterators or something else. Currently,
sparse matrices only implement a partial matrix interface and some methods even have different behaviour, e.g. `matrix[i][j]`
is not guaranteed to succeed, not even for const-access, although the matrix value is clearly specified.
- Views over matrices, like transposed-view or hermitian-view are not properly implemented. They do not provide a matrix
interface although this could easily be provided. One could discuss whether views are read-only or provide mutable
access to the matrix elements as well.
- Currently, we can only test that a matrix supports all the different aspects as described above, not an individual aspect. But often an algorithm needs just some of the properties: e.g., in iterative Krylov methods one needs the linear-map properties of a matrix, for an LU decomposition maybe fast element access. In geometry parametrizations we need matrix-matrix multiplication and trace/Gramian determinant computation. It is required to have individual concept/interface checks.
- Single-row and single-column matrices are not vectors. This often leads to problems, e.g., there is no `two_norm()` implemented, and also `dot()` and `axpy()` with another vector do not work, even if the shape would allow this. The problem is that vectors are not matrices and are also not compatible with matrices in their interface.

## Issue #246: Use CMake components when installing
https://gitlab.dune-project.org/core/dune-common/-/issues/246 (Santiago Ospina De Los Ríos, 2023-11-13)

### Description
While modernising my CMake, I realised that dune-common dumps all the installation files into the same bucket. While this is practical, because one does not have to think about what to install and what not, it is much cleaner to split the installation into different components. This way, the user has much more control over the objects that get installed on their system or package. Doing this is simple: adding a `COMPONENT` argument to every `install` CMake call.
### Proposal
My proposal would be to have 4 components:
| Component | Description | Example
| ------ | ------ | ------ |
| `Runtime` | Final scripts or binaries product of the module | Executables from application modules |
| `Development` | Header files and build system mandatory to continue downstream development | Header files, cmake macros |
| `Library` | Objects needed for both `Development` and `Runtime` usage in downstream modules | libraries for UGGrid or `libdunecommon` |
| `Documentation` | Documentation files | man pages, Doxygen |
In CMake, objects not assigned to any component are set to `Unspecified` (current behaviour). With the proposed setting, nothing should be in this bucket.
### Usage
Assuming components are set, installation of files in a given component is as simple as:
```bash
# User doesn't want to develop, just to link a final application
cmake --install . --component Library
# User wants to develop a downstream module
cmake --install . --component Library --component Development
# User wants to use final binaries
cmake --install . --component Library --component Runtime
# User wants to use everything (current behaviour)
cmake --install .
```
### Is this backwards compatible?
**Yes!** If no component is given on the command line (current usage), all components are installed. This means that the change is backwards compatible with any installation script out there.
### What next?
Discussion... If there seems to be agreement/interest, I can push a MR.

*Label:* CMake Modernization

---

## Proposal: Single build directory build

https://gitlab.dune-project.org/core/dune-common/-/issues/228 — Christoph Grüninger, 2023-11-13

We are still lacking a proper idea of how to get rid of the repeated CMake configure runs, which are performed for each module and all its dependencies.
Thus, I want to discuss the idea of a single build directory: the single-build-directory build.
1. A single build directory.
2. Within the directory, all core modules are built.
3. CMake configure is run for each module, but only once. All CMake variables, targets, Find modules are available for all modules (without re-run or tricky import).
4. The dependency resolution is done before the build (make) starts. It would be enough to check the presence of a module's `dune.module` file.
5. All build artifacts are co-located, e.g., `lib/` contains libdunecommon, libdunegeometry, libdunegrid and so on.
6. Quasi-circular dependencies like dune-geometry and dune-localfunctions are not a problem: whichever module is built first knows from the dependency resolution whether the other module is present, and can conditionally build features and tests.
7. As the configuration is done together, we can ensure that all modules have the same flags, the same libraries, the same C++ version and so on.
8. It would be possible to have multiple build flavors (different compilers, flags, external dependencies), one per build directory.
9. Distribution as RPM or Deb would just include the source code, similar to the tarball. The user has to set up a build directory and build libdunecommon, libdunegeometry, and so on.

*Label:* CMake Modernization

---

## Group unit tests into "quick" and "expensive"

https://gitlab.dune-project.org/core/dune-common/-/issues/107 — Jö Fahlke <jorrit@jorrit.de>, 2020-01-19

I would like to group our tests into quick and extensive tests, to counter long
unit test times:
| dune-common | master | simd-branch |
| --- | --- | --- |
| `make -j4` | 5s | 5s |
| `make -j4 build_tests` | 52s | 9m2s |
| `ctest -j4` | 9s | 12s |
(These times were obtained with ccache disabled, no MPI, but with Vc)
The reason for the long build time is that I test the interface for many
combinations of types and operations. This has been useful when developing
the interface, but is not that useful on a day to day basis. Nevertheless,
the unit tests are a good place to keep those tests for when they are needed.
The **quick** tests would probably be used by the developer before pushing,
and possibly on every push by the CI.
The **extensive** tests would be used before a release, or on a weekly or even
nightly basis in the CI. @ansgar even claimed that he could run the extensive
tests for every push in the CI.
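Such a grouping could be expressed with plain CTest labels; a sketch, where the test names are made up and dune's own test macros would need an equivalent knob:

```cmake
# Hypothetical CMakeLists.txt fragment: tag each test with a label,
# then let the CI pick which group to run.
add_test(NAME fvectortest COMMAND fvectortest)
set_tests_properties(fvectortest PROPERTIES LABELS "quick")

add_test(NAME simdinterfacetest COMMAND simdinterfacetest)
set_tests_properties(simdinterfacetest PROPERTIES LABELS "expensive")
```

Running `ctest -L quick` then selects only the quick group, and `ctest -LE quick` (label-exclude) everything else.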
- [ ] !432 adds labels to tests, and labels all current dune-common tests as `quick`
- [ ] joe/dune-docker!1 patches `duneci-standard-test` to support test labels
- [ ] ~~Modify `.gitlab-ci.yml` to take advantage of test labels, where appropriate.~~

---

## Increase default `DUNE_MAX_TEST_CORES` to 4?

https://gitlab.dune-project.org/core/dune-common/-/issues/38 — Ansgar Burchardt <ansgar.burchardt@tu-dresden.de>, 2022-04-25

I'm wondering whether it would be good to increase `DUNE_MAX_TEST_CORES` from 2 to 4. There are some interesting things that can happen on grids with more than two processes: in particular, not every other process must be a direct neighbor, and vertices can be part of more than two processes (e.g. the center vertex in a 2x2 grid). These cases should also be covered by the tests enabled by default, but cannot be when parallel tests are only run with 2 ranks.
As an example, dune-grid's `test-parallel-ug` seems to pass with 1 and 2 processes, but currently fails with more (though it isn't run in parallel by default yet).

*Milestone:* DUNE 2.9.0
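For context, dune-common's `dune_add_test` CMake macro takes an `MPI_RANKS` argument whose values are capped by `DUNE_MAX_TEST_CORES`; a sketch, with a hypothetical test name (note that `MPI_RANKS` requires a `TIMEOUT`):

```cmake
# Hypothetical test registration: request runs with 1, 2 and 4 ranks.
# Rank counts above DUNE_MAX_TEST_CORES are dropped, so raising the
# default to 4 would make the 4-rank variant run out of the box.
dune_add_test(SOURCES testparallelgrid.cc
              MPI_RANKS 1 2 4
              TIMEOUT 300)
```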