dune-fem issues
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues

Issue #203: YaspGrid not working with fem.globalRefine (Robert K, 2024-03-27)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/203

The following code does not work, i.e. the grid is not refined. We should at least raise an exception that this is not supported. Or is it simply a bug?
```
from dune.grid import cartesianDomain
from dune.grid import yaspGrid
from dune.fem.view import adaptiveLeafGridView
from dune.fem.space import finiteVolume
from dune.fem import globalRefine as globalRefine
domain = cartesianDomain([0,0], [1,1], [4, 4])
view = adaptiveLeafGridView( yaspGrid( domain ) )
spc = finiteVolume(view)
uh = spc.interpolate(1, name="uh")
globalRefine(1, uh)
uh.plot()
```

Issue #201: How to correctly use BoundaryID in conditionals (Benjamin Terschanski, 2024-02-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/201

I am trying to move towards more complex meshes created with gmsh and converted to dgf.
Because I am generally enforcing Dirichlet constraints weakly I am not using the `DirichletBC` construct; instead I am usually writing
conditionals describing the boundary in question. I have a .dgf mesh that (after some initial hiccups) has the right boundary ids, as confirmed via the [projectBoundaryIds](https://dune-project.org/sphinx/content/sphinx/dune-fem/gmsh2dgf_nb.html) method. Is there a recommended way to use BoundaryId in logical multipliers / conditionals? I tried something like
`is_inflow_boundary = conditional(eq(BoundaryId(space),inflow_boundary_id), 1, 0) ` and
`my_boundary_terms*is_inflow_boundary*ds`
but it gives different / wrong results compared to manually defining `is_inflow_boundary` from geometric conditionals. I saw that the [changelog](https://dune-project.org/sphinx/content/sphinx/dune-fem/changelog290.html) mentions multiplying with dune.ufl.BoundaryId(id), but naively trying that didn't work; I am not sure I used the right syntax.
I just wanted to briefly check with you before I investigate further. Best, Ben

Issue #199: Incorrect Newton residuals with PETSc backend (Benjamin Terschanski, 2024-02-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/199

Hey :smile: After a hint from Robert I am playing with the `petsc` backend and petsc4py again.
I was then reminded of an issue that was already reported in another issue, but I couldn't find it:
When using the petsc backend, Newton residuals seem to be wrong in the `solve` method of schemes. The following problem setup is taken from the Stokes tutorial example, but I tried to build on the saddle point preconditioners with `kspoptions`. The linear solver works as expected (e.g. I get the same results when I set it up in petsc4py), but I get a weird random Newton residual which keeps the solver from converging, even though the problem is linear.
I am attaching this
[example file](/uploads/f51eb11c946bd7cdfd5b32624539d8bc/fieldsplitStokes_scheme.py) and a
[reference file](/uploads/5bdb98e253cea9f6139e968c4d032398/fieldsplitStokes_petsc4py.py)
where the same linear solver is called directly in petsc4py.
As a side note, somehow all direct solvers I tried (MUMPS, SuperLU, PETSc LU) give random results or fail, at least in my trial runs where I interfaced them through PETSc.

Issue #197: Composite spaces fail for certain combinations of spaces (Robert K, 2024-02-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/197

The following code works for `lagrange` and `dglagrange` but fails for other spaces, e.g. `dgonb` or `dglegendre`.
```Python
from dune.grid import structuredGrid
from dune.fem.space import lagrange, dgonb, composite, finiteVolume, dglagrange
view = structuredGrid([0, 0], [1, 1], [7, 7])
# spatial_coordinate = SpatialCoordinate(cell(dim_space))
def test_composite(space):
    spc_1 = space(view, order=2)
    spc_2 = space(view, order=1)
    comp = composite(spc_1, spc_2, components=["1", "2"])
    u = comp.interpolate([0]*comp.dimRange, name="u")
test_composite(lagrange)
print("Continuous Lagrange worked!")
test_composite(dglagrange)
print("Discontinuous Lagrange worked!")
## Currently: this fails with a very strange error
test_composite(dgonb)
print("DG-ONB worked!")
```

Issue #196: dune-fem Capabilities and grid specific specializations (Robert K, 2023-11-20)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/196

While making P4estGrid work with dune-fem I realized that making a new grid available in dune-fem is not as easy as it should be, mainly because the necessary specializations that need to be carried out are spread over multiple files, e.g.
In dune/fem/misc:
- griddeclaration.hh
- gridobjectstreams.hh
- boundaryidprovider.hh
Then there are further capabilities for gridparts and discrete spaces.
We should try to reduce the use of these, since they make the code complicated to extend.
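The "check whether a method is implemented" idea can be illustrated with Python duck typing (class names below are made up; in dune-fem itself this would be a C++ SFINAE / `if constexpr` detection on the implementation class):

```python
# Made-up classes: one intersection implementation provides boundaryId(),
# the other does not.
class IntersectionWithId:
    def boundaryId(self):
        return 2

class PlainIntersection:
    pass

def boundary_id(intersection, default=0):
    """Use boundaryId() when the implementation provides it, else a default."""
    method = getattr(intersection, "boundaryId", None)
    return method() if callable(method) else default

print(boundary_id(IntersectionWithId()))  # 2
print(boundary_id(PlainIntersection()))   # 0
```

This replaces a per-grid specialization table by a single generic query against the implementation.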
Some of the specializations could be avoided by, for example, checking whether a certain method is implemented, like `intersection.impl().boundaryId()`.

Issue #193: Open MRs (Andreas Dedner, 2023-11-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/193

- !639 non-linear (symmetry) flag in model to optimize default solver and Newton method
- !621 issue with virtualized integranda (see also #136 and !476)
- !644 simple improvement of 'help' message for parameters. We could implement a real 'help string' approach perhaps?
- #196 Grid specific specializations of Capabilities.
- !648 New ufl version breaks dune-fem code

Issue #192: periodic boundary conditions (Andreas Dedner, 2023-10-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/192

Add a test case for adaptive periodic boundary conditions and see what works and what doesn't, perhaps
fix the issue with the conforming refinement in alu.

Issue #191: improve quadrature choice (Andreas Dedner, 2023-10-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/191

See !635
Possible things to do
- [ ] not available on `operator`
- [ ] general spectral quadrature instead of current 'corner' approach
- [ ] set order in `dx`
- [ ] pass in a quadrature with numpy arrays

Issue #190: improve implementation of boundary ids (Andreas Dedner, 2023-11-05)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/190

1. we could use the fact that `domain["boundary"][i]` is directly passed through to `dune-alugrid`'s grid factory as an integer vector. So it would be easy, at least for alu, to add an additional entry to the vector for the boundary id, so `domain["boundary"][i] = [p1,p2,p3,id]` for a triangle boundary segment in a 3d grid.
2. we could export some functionality on the `boundaryIdProvider` instance, i.e.,
`dune.fem.boundaryDomain(gv,[[lower],[upper]],id)` to describe a boundary domain similar to dgf's `boundaryDomain`. The information could be stored in the `boundaryIdProvider`, which is a singleton over the grid, or on the python `gridView` instance. A `dune.fem.boundaryDefault(gv,id)` could then also be provided, and this allows the same control over the boundary ids as we have in dgf.
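The proposed semantics could look like this pure-Python sketch (all names hypothetical, not an existing API): a boundary segment whose midpoint lies inside the axis-aligned box `[lower, upper]` gets the given id, everything else gets a default.

```python
# Hypothetical sketch of dgf-style boundaryDomain semantics: assign `bnd_id`
# to segments whose midpoint lies in the box [lower, upper], else a default.
def boundary_id_for(midpoint, lower, upper, bnd_id, default_id=1):
    inside = all(lo <= x <= up for x, lo, up in zip(midpoint, lower, upper))
    return bnd_id if inside else default_id

# e.g. mark the left face x=0 of the unit square with id 4:
print(boundary_id_for([0.0, 0.5], [0.0, 0.0], [0.0, 1.0], 4))  # 4
print(boundary_id_for([1.0, 0.5], [0.0, 0.0], [0.0, 1.0], 4))  # 1
```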
See also #118

Issue #188: deprecate 'schemes' in python (Andreas Dedner, 2023-10-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/188

At the moment there is a lot of code duplication in `dune.fem.operator` and `dune.fem.scheme` (see for example the current MR on mass lumping).
We could just provide `operator` (or `scheme`, but that name is not so nice I think).
I see two options:
1. check whether the ufl argument is an equation or not. If it is an equation, we check that the spaces are the same and set up a `scheme` in the current sense. If it is not an equation, we set up an `operator`.
2. make it even simpler by always setting up a scheme if the spaces are the same, even if no ufl equation is provided, and an operator if not.
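Option 1 could be sketched as follows (pure Python, with stand-in classes rather than the actual ufl/dune-fem types):

```python
class Equation:  # stand-in for a ufl equation "a == b"
    pass

class Form:      # stand-in for a plain ufl form
    pass

def make(arg, spaces_match):
    """Decide which object to construct for the given ufl argument."""
    if isinstance(arg, Equation):
        # an equation requires identical domain and range space -> scheme
        if not spaces_match:
            raise ValueError("a scheme requires identical domain and range space")
        return "scheme"
    # a plain form gives an operator
    return "operator"

print(make(Equation(), True))   # scheme
print(make(Form(), False))      # operator
```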
Deprecation could be quite straightforward by defining deprecated versions of the public functions of `dune.fem.scheme._scheme` in `dune.fem.scheme.__init__.py`. The original functions in `dune.fem.scheme._scheme` could then still be called in `dune.fem.operator` for now, but slowly the additional parts required for schemes could be moved over to `operator`.

Issue #185: Newton Relaxation (Robert K, 2024-02-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/185

@benjamin.terschanski: Take a look at https://gitlab.dune-project.org/dune-fem/dune-fem/-/blob/master/dune/fem/solver/newtoninverseoperator.hh?ref_type=heads#L116 for how to add a parameter, and then https://gitlab.dune-project.org/dune-fem/dune-fem/-/blob/master/dune/fem/solver/newtoninverseoperator.hh?ref_type=heads#L372 for how to use it.

Issue #182: DBCs for TupleSpace<RT,RT> and similar spaces (Andreas Dedner, 2023-08-23)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/182

See https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/181#note_129139
The issue is that in the tuple space mapper we do not have access to the range dimensions of the subspaces, and so do not know that `RT` has a higher range dimension although the block size is 1. So we can't provide the information about which component a dof belongs to, which is needed in the `dirichletconstraints` class to manage `DirichletBC` setups involving `None`.

Issue #178: RaviartThomas space for simplices (Robert K, 2023-06-29)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/178

Looking at dune-localfunctions there seems to exist a RaviartThomas basis for arbitrary order and dimension for simplices, yet in dune-fem we use the hand-coded version and only provide this for `order=0,1`. What is the reason for this?

Issue #176: local space (on python side) (Andreas Dedner, 2023-08-23)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/176

We now export the mapper and methods to evaluate the basis functions on the space. The approach is not very consistent.
With the newest change, one gets a mapper object bound to an entity, while 'evaluateBasis', 'jacobianBasis' are exported on the space taking an element.
A nicer option would be a 'localSpace' with a 'bind/unbind' method.
That object could also be 'bound' to a quadrature and provide numpy storage to the basis set evaluation.
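Mapping a raw piece of memory into a numpy array without copying is possible with `numpy.frombuffer`; a generic sketch, where a bytearray stands in for the C++ cache memory:

```python
import numpy as np

# Generic sketch: wrap an existing raw buffer in a numpy array without copying.
raw = bytearray(8 * 6)                       # room for 6 float64 values
view = np.frombuffer(raw, dtype=np.float64)  # zero-copy view on the raw memory

view[:] = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]  # writing through the view...
print(np.frombuffer(raw, dtype=np.float64)[3])  # ...changes the buffer: 0.75
```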
We actually already have the cache on the python side, which I believe is a raw piece of memory that we could perhaps map into a numpy array and use reasonably efficiently on the Python side.

Issue #175: DofMapper interface inconsistent (Robert K, 2023-01-10)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/175

Problems:
- [ ] The Interface classes for `DofMapper` exist but are not used everywhere.
- [ ] Some mappers offer the methods `map` and sometimes `mapIndices`. This is used but not all mappers implement this.
- [ ] A general cleanup is needed.

Issue #171: parallel overlap in fempy (Christian Engwer, 2022-11-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/171

We tried to write a simple overlapping Schwarz method in python using fempy.
I'm not 100% sure, but I couldn't find any way to correctly assemble the local matrices. We used a 1st order Lagrange space. What seems to be missing is:
- boundary conditions on the processor boundary (the "front" nodes are constrained to 0)
- matrix assembly in the overlap (the default partition type in dune-fem C++ is `interiorborder` and I couldn't find any way to tell dune-fem to assemble also in the overlap)
What we did now to work around the problem is:
- patch the C++ code (threaditerator.hh) to use `all` as the default partition type
- post-process the `scipy` matrix to explicitly eliminate rows corresponding to a processor boundary
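The row-elimination post-processing can be sketched with plain numpy (a dense matrix for brevity; `pbnd_dofs` is a hypothetical list of processor-boundary dof indices):

```python
import numpy as np

# Sketch of the workaround: replace each row belonging to a processor-boundary
# dof by a unit row (Dirichlet-style elimination).
def eliminate_rows(A, pbnd_dofs):
    A = A.copy()
    for i in pbnd_dofs:
        A[i, :] = 0.0
        A[i, i] = 1.0
    return A

A = np.full((4, 4), 2.0)
B = eliminate_rows(A, [1, 3])
print(B[1])  # [0. 1. 0. 0.]
```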
I believe that this is really a bug in the python bindings, or did I miss something?

Issue #170: AdaptiveLeafGridPart::begin() returns iterator only over Dune::InteriorBorder_Partition (Markus Blatt, 2022-10-17)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/170

I am trying to port code to 2.9.
One thing that struck me is that when using AdaptiveLeafGridPart `gridView_.template begin</*codim=*/ 0>();` returns an iterator only over `Dune::InteriorBorder_Partition`. Is that intentional? At least this seems to be different behavior than expected when the part is actually the whole grid. When using previous versions that did not seem to be the case (back then GridPart2GridView was used). But I might be wrong here.
I suspect that this behavior might have unexpected/unwanted side effects at least in our code. Am I missing something?
[Issue in opm-simulators](https://github.com/OPM/opm-simulators/issues/4170)

Issue #167: Cleanup large list of available dg spaces (Benjamin Terschanski, 2023-10-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/167

During the setup of a PDE with some piecewise defined nonlinear coefficients I noticed that calling DGSpace.interpolate() gives me an interpolation function that I didn't expect.
My example to reproduce this would be `u_h_dg = spaceLagrangeDG.interpolate(conditional(x[0] > x[1], 1.0, 0.0), name="u_h_dg")` on a [0,1]x[0,1] square (_linear lagrange DG space, full example attached_). The resulting discrete function `u_h_dg` has some entries in the DOF vector that are not in the [0,1] range, which is strange to me because I thought the above conditional could only yield 1 and 0 by definition. If elementwise interpolation is performed, I would expect the result I get when I use standard Lagrange polynomials. Or am I again overlooking some DG property here?
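If (and this is only an assumption, not confirmed here) `interpolate` performs an elementwise L2 projection rather than nodal interpolation for this space, out-of-range entries are expected: projecting a step function onto linears overshoots the data range. A small numpy sketch, independent of dune-fem:

```python
import numpy as np

# L2-project the step function f(x) = 0 (x < 1/2), 1 (x >= 1/2) on [0,1]
# onto p(x) = a + b*x by solving the normal equations
#   [ int(1)  int(x)   ] [a]   [ int(f)   ]
#   [ int(x)  int(x^2) ] [b] = [ int(f*x) ]
M = np.array([[1.0, 0.5],
              [0.5, 1.0 / 3.0]])
rhs = np.array([0.5, 3.0 / 8.0])   # int(f) = 1/2, int(f*x) = 3/8
a, b = np.linalg.solve(M, rhs)

print(a + b * 0.0)  # p(0) = -0.25, below the range of f
print(a + b * 1.0)  # p(1) =  1.25, above the range of f
```

So a projection-based interpolate can legitimately leave [0,1] even though the data never does.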
_This example illustrates my confusion:_
```
from ufl import SpatialCoordinate
from dune.grid import structuredGrid as leafGridView
from dune.fem.space import dglagrange as dgLagrangeSpace
from dune.fem.space import lagrange as lagrangeSpace
from ufl import conditional
gridView = leafGridView([0, 0], [1, 1], [2, 2])
dimOrder = 1
spaceLagrangeDG = dgLagrangeSpace(gridView, order=dimOrder)
spaceLagrange = lagrangeSpace(gridView, order=dimOrder)
x = SpatialCoordinate(spaceLagrangeDG)
u_h_dg = spaceLagrangeDG.interpolate(conditional(x[0] > x[1], 1.0, 0.0), name="u_h_dg")
u_h_lagrange = spaceLagrange.interpolate(conditional(x[0] > x[1], 1.0, 0.0), name="u_h_lagrange")
u_h_dg.plot()
u_h_lagrange.plot()
# The following also gives me the result that I would have expected in the first place:
u_h_dg.interpolate(u_h_lagrange)
u_h_dg.plot()
```
Is this expected behaviour or did I overlook something?

Issue #166: space.order --> space.degree or similar (Robert K, 2022-09-21)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/166

I propose to deprecate the `order` method on spaces. It's confusing, because often order is something different, and what is meant here is the polynomial degree of the basis functions. I suggest to name it `degree`. Any thoughts?

Issue #161: LocalFiniteElementShapeFunctionSet is not thread safe (Robert K, 2022-08-10)
https://gitlab.dune-project.org/dune-fem/dune-fem/-/issues/161

The evaluateEach (https://gitlab.dune-project.org/dune-fem/dune-fem/-/blob/master/dune/fem/space/localfiniteelement/shapefunctionset.hh#L81), jacobianEach, hessianEach are not thread safe. It does not seem enough to make the local `values_` variable thread safe. There is also an issue with localBasis. It is very difficult to find out which code in dune-localfunctions is actually called here.
This becomes a problem when `threading` is used with `ElementQuadrature`.
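A common remedy for such shared scratch buffers is per-thread storage; a generic Python sketch of the pattern (the actual fix would of course be on the C++ side):

```python
import threading

# Generic sketch: give every thread its own scratch buffer instead of sharing
# one `values_`-like array between threads.
_scratch = threading.local()

def evaluate_each(n):
    # lazily create a per-thread buffer of size n
    if not hasattr(_scratch, "values") or len(_scratch.values) != n:
        _scratch.values = [0.0] * n
    buf = _scratch.values
    for i in range(n):
        buf[i] = float(i)   # stand-in for a shape function evaluation
    return sum(buf)

results = []
threads = [threading.Thread(target=lambda: results.append(evaluate_each(4)))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # eight times 6.0, no races on a shared buffer
```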