# dune-geometry issues

Source: https://gitlab.dune-project.org/core/dune-geometry/issues

## Add static_assert to constructor of AxisAlignedCubeGeometry

Issue 25 (https://gitlab.dune-project.org/core/dune-geometry/issues/25), 2020-05-27, Kilian Weishaupt

Using `AxisAlignedCubeGeometry` with `dim != coorddim` and the constructor taking only two arguments leads
to potentially wrong results.
The constructor's documentation comment actually hints at this issue:
```c++
/** \brief Constructor from a lower left and an upper right corner
\note Only for dim==coorddim
*/
AxisAlignedCubeGeometry(const Dune::FieldVector<ctype,coorddim> lower,
const Dune::FieldVector<ctype,coorddim> upper)
```
Since both `dim` and `coorddim` are known at compile time, a `static_assert` could prevent wrong usage.
Alternatively, this constructor could be disabled by SFINAE.
Is there any reason why this is not done yet?

## test-quadrature: test fails if LAPACK not enabled in dune-common

Issue 24 (https://gitlab.dune-project.org/core/dune-geometry/issues/24), 2020-05-21, Ansgar Burchardt <ansgar.burchardt@tu-dresden.de>

If dune-common is built without support for LAPACK, the test-quadrature test fails:
```
6/11 Test #6: test-quadrature ..................Child aborted***Exception: 1.63 sec
NotImplemented [eigenValuesNonsymLapackCall:/build/dune-common-uJ8qLK/dune-common-2.7.0/dune/common/fmatrixev.cc:239]: eigenValuesNonsymLapackCall: LAPACK not found!
terminate called after throwing an instance of 'Dune::NotImplemented'
what(): NotImplemented [eigenValuesNonsymLapackCall:/build/dune-common-uJ8qLK/dune-common-2.7.0/dune/common/fmatrixev.cc:239]: eigenValuesNonsymLapackCall: LAPACK not found!
```
The test should probably skip that part, at least when both dune-common and dune-geometry are built without LAPACK.
## Volume method for multilinear geometries

Issue 22 (https://gitlab.dune-project.org/core/dune-geometry/issues/22), 2019-01-29, René Heß

For non-affine multilinear geometry mappings the volume method doesn't calculate the correct result. Instead of using numerical integration of sufficiently high order, it uses the midpoint rule as an approximation:
```c++
/** \brief obtain the volume of the mapping's image
 *
 *  \note The current implementation just returns
 *  \code
 *  integrationElement( refElement().position( 0, 0 ) ) * refElement().volume()
 *  \endcode
 *  which is wrong for n-linear surface maps and other nonlinear maps.
 */
ctype volume () const
{
  return integrationElement( refElement().position( 0, 0 ) ) * refElement().volume();
}
```
Compare also to:
https://dune-project.org/doxygen/pdelab/master/group__GeometryReferenceElements.html
The question is how to fix it:
- Estimate the correct quadrature order and use an appropriate quadrature rule.
- Introduce a parameter that can be set by the user.
- Do not fix it at all since the error in the discretization is probably not too large.
I would be interested to hear some opinions.

## Multiplicative construction of simplex quadrature rules is suboptimal

Issue 16 (https://gitlab.dune-project.org/core/dune-geometry/issues/16), 2018-01-14, Oliver Sander

There is code in `dune/geometry/quadraturerules/tensoproductquadrature.hh` that constructs quadrature rules for simplices by conically multiplying given 1d rules. However, this is suboptimal: rules with higher order and fewer points can be constructed by using particular 1d Gauss-Jacobi rules. More information is given in the code itself. Quote:
```
 * [snip] the 1D quadrature must be at least \f$\dim B\f$ orders higher
 * than the quadrature for \f$B\f$.
 *
 * Question: If the polynomials are created via Duffy Transformation, do we
 * really need a higher quadrature order?
 *
 * Answer (from OS): Not really. The official way is to use a Gauss-Jacobi
 * rule with \f$ \alpha = \dim B, \beta = 0 \f$ for the 1D rule.
 * That takes care of the term \f$ (1-z)^{\dim B} \f$ without needing
 * additional orders. See for example A.H. Stroud, "Approximate Calculation
 * of Multiple Integrals", Chapters 2.4 and 2.5 for details.
 * If you want to use plain Gauss-Legendre you do need the additional orders.
```
## Add strongly typed dynamic codim and subentity identifiers

Issue 11 (https://gitlab.dune-project.org/core/dune-geometry/issues/11), 2018-01-16, Carsten Gräser

We already have `Codim<i>` to statically wrap a codimension into a type because
this makes code much more readable. However, there are several places where a
codimension or a subentity (codimension+index) is specified dynamically
(e.g. in `ReferenceElement`, `IndexSet`). All these places suffer in terms of
readability because they use plain indices, e.g. `referenceElement::subEntity(i,j,k,l)`.
Code using this functionality would be much more readable and less error-prone
if we used strong types to encode dynamic codimension or subentity information.
A quick (but functional) prototype is attached, where all functionality is provided by global functions:
[referenceelementutility.hh](/uploads/86b6b4d6d0051243bc01358842ad481f/referenceelementutility.hh)
The prototype also contains a helper that returns a range with the results of `ReferenceElement::subEntity(i,j,k,l)` for fixed `i,j,l` and all suitable `k` (guess what `k` is ;-) ). This is IMHO a big improvement because you normally need the whole range anyway. Being able to ask for individual values gives the false impression of subsubentity consistency of the grid and the reference elements. This misconception leads to many discussions, and people keep asking for this or reporting (false) bugs.
As an example, marking all boundary vertices using this prototype could be implemented as follows:
```cpp
const auto& indexSet = gridView.indexSet();
for(const auto& element: elements(gridView))
for(const auto& intersection: intersections(gridView, element))
if (intersection.boundary())
for(const auto& vertex: subVertices(referenceElement(element), insideFacet(intersection)))
isBoundary[subIndex(indexSet, element, vertex)] = true;
```

## Redesign of quadrature rules

Issue 9 (https://gitlab.dune-project.org/core/dune-geometry/issues/9), 2018-06-26, Simon Praetorius

Currently there are several different ways where and how quadrature rules are defined and implemented. We have 1d rules, generated by *Maxima* scripts `xxx.mac` from analytic multi-precision expressions, using a template for the `InitHelper` class, provided in two ways: values as double and values as string. We have simplex quadrature rules, provided as double values and manually listed in the `SimplexQuadratureRule` up to *low* order. And there are tensor-product rules constructed from lower-dimensional rules, e.g. a tensor product of 1d rules for all dimensions. These can be transformed to geometries other than cubes, resulting in very expensive rules (many points).
This is the current status. I propose a redesign of these rules: First of all, except for the 1d rules (and correspondingly the tensor-product rules based on these 1d rules), all rules should be provided as tabulated values of coordinates and quadrature weights. It is really difficult to write down analytic expressions for higher-order rules on arbitrary geometry elements. Every few years a research group therefore publishes a new set of rules for a specific geometry that may be better, in the sense of symmetry or complexity, than older rules.
- To extend our quadrature rules frequently it would be nice to have a way to generate an implementation directly from **tabulated values**
- Providing **high precision** rules in text format allows to generate rules for any data type (up to the printed precision)
- There are tools available to extend the precision of quadrature rules (preserving symmetry constraints), based on tabulated values.
Thus, I propose to add a "library" of tabulated quadrature rules for all our geometry types and a converter (similar to the one implemented in the Maxima script for 1d) that **generates rules in a uniform way**. Additionally, I propose to implement the rules using a **string representation of the values** and a string-to-float converter, like `std::stringstream`, to generate the concrete values on construction of the quadrature rule. Rules are implemented as singletons, so construction takes place only once and the additional performance costs are negligible.
The tabulated values should be given in a defined format, e.g.
```
# optional comment lines, to add reference to the publications where the rules were found
dim order
x0 y0 z0 weight0
x1 y1 z1 weight1
```
(See the set of rules in `utility/rules`.) 1d rules can be generated using the Maxima script as before, but saved in this tabulated format instead of generating C++ code directly.
I have already implemented a generator based on *Mako* Python templates (see `utility/make_rules.py`), but it can simply be implemented in any language. The main difficulty is that every publication provides values in a different format, for a different reference geometry, in barycentric coordinates or local coordinates, or .... That is why I have additionally implemented a conversion script that reads values given in a publication for one reference element and transforms them to the Dune reference element and coordinate format.
This is a follow-up to issue #6. See merge request !27.
## Update
### Discussion from Dune Developer Meeting 2017
- all agree that the proposed changes are nice, as long as Simon does the changes
- the MR should be split up into smaller parts
- it is still up for discussion which intermediate format one should use
- it is a good idea to only use string representations and generate instances of float-like data from this string representation
- in a future version it should be possible to honor further properties of a quadrature rule
### Workflow
1. Cleanup of quadrature rules
2. Return coefficients by cast from string representation
3. Tabulation of quadrature rules
It was discussed whether to provide the tabulation in a way that other libraries can benefit from it.
**Suggestion:** I create a new dune module `dune-quadrature` that is responsible for the tabulation of quadrature rules and provides generator functions to produce C++ source files that can be included in `dune-geometry` directly.
## multilineargeometry with coordinate type that uses expression templates

Issue 7 (https://gitlab.dune-project.org/core/dune-geometry/issues/7), 2017-12-22, Simon Praetorius

When using a coordinate type `ct` in the class `MultiLinearGeometry<ct, dim, dow>` that is implemented with the help of expression templates, e.g. the multiprecision type `boost::multiprecision::number<cpp_dec_float<N> >`, an internal operation **fails to compile**: `multilineargeometry.hh:721` *error: operands to ?: have different types*
```c++
template <class ct, ...>
template <bool add, ...>
void MultiLinearGeometry<ct,...>::global(...) {
// ...
y[ i ] = (add ? y[ i ] + rf*origin[ i ] : rf*origin[ i ]); // line 721
}
```
The problem is that in the ternary operator `?:` both value expressions are distinct complex types coming from the expression-template context and are not convertible into each other. Thus, the operator cannot determine the correct common type.
Two solutions for this problem can be implemented:
1. Add an explicit cast to type `ct` inside of `?:`:
```c++
y[ i ] = (add ? ct(y[ i ] + rf*origin[ i ]) : ct(rf*origin[ i ]));
```
2. Implement an assigner that is specialized for the template parameter `add`:
```c++
template <bool add>
struct Assigner {};

template <>
struct Assigner<true> {
  template <class T1, class T2>
  static void apply(T1& out, T2 const& in) { out += in; }
};

template <>
struct Assigner<false> {
  template <class T1, class T2>
  static void apply(T1& out, T2 const& in) { out = in; }
};

// ...
// multilineargeometry.hh:721
Assigner<add>::apply(y[ i ], rf*origin[ i ]);
```
I would prefer the second solution. It is more flexible and maybe also more efficient.
I have not yet tested all geometries. Maybe there is another such error in one of the other files.

## Higher order quadrature rules on simplex geometry [extension]

Issue 6 (https://gitlab.dune-project.org/core/dune-geometry/issues/6), 2017-08-17, Simon Praetorius

I have seen that the simplex quadrature rules in 3D are restricted to the polynomial order range 0-5. This is fine for lower-order Lagrange finite elements, for example. In the case of an order-4 Lagrange basis and assembly of a mass matrix it might not be sufficient any more. This is not an exotic case and might be of interest to some users.
I propose to extend the quadrature rules up to order 12, similar to the 2D case, and to reduce the number of quadrature points for some of the implemented rules. Following the publication
A SET OF SYMMETRIC QUADRATURE RULES ON TRIANGLES AND TETRAHEDRA, L. Zhang and T. Cui and H. Liu, (2009)
symmetric quadrature rules with positive weights can be constructed up to high order. The authors have provided a solver library to solve the corresponding non-linear least-squares problem, and they provide high-precision rules in their finite element library *PHG*.
Calculating symmetric rules myself, I found the following point counts:
**Current dune implementation (3D):**
| order | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| ----------- | --- | --- | --- | --- | --- | --- | ---- |
| num points | 15 | 15 | 60 | 60 | 96 | 114 | 175 |
(rules for order > 5 are constructed by tensor-product approaches)
**Proposed rules:**
| order | 5 | 6 | 7 | 8 | 9 | 10 |
| ---------- | --- | --- | --- | --- | --- | --- |
| num points | 14 | 24 | 35 | 48 | 61 | 95 |
But one could also construct even better rules (for the higher orders) with fewer points. See, e.g., the file `src/quad.c` in the [PHG-Library](http://lsec.cc.ac.cn/phg/index_en.htm) (LGPL license).
For the 2D case, the rules are all fine. One could easily extend the rules to higher order if necessary. The only exception is the rule for order 3, where the dune implementation currently contains negative quadrature weights. If this is a problem for someone, it could be replaced with a positively weighted rule, at the cost of more quadrature points.
In the publication mentioned above, the rules are formulated in terms of so-called *permutation stars*. This is quite nice and would also allow grouping the points of a quadrature rule: all points in a permutation star carry the same weight, so a reduction of the number of arithmetic operations could be achieved.