When assembling a mass matrix using a power basis of dimension N as ansatz and test space, one cannot work with 1x1 matrix blocks but has to use NxN blocks. The mass matrix then consists of NxN blocks that all carry the same value on their N diagonal entries.
A better solution would be to use a ScaledIdentityMatrix for these blocks.
This issue is not new. In fact, it came up when we designed the global basis interface in dune-functions, but it was postponed and has not been solved so far. However, some thought has already been spent on it:
The first statement suggests that you would like to use 1x1 matrices. But this would be conceptually wrong.
A proper solution would be to use special matrix block types like DiagonalMatrix or ScaledIdentityMatrix.
Currently this is not possible for various reasons.
There are multiple 'levels' of support for this feature that one can aim at (with increasing complexity):
A: Be able to use sparse matrices as blocks (e.g. DiagonalMatrix)
A1: Make this work at all, by ensuring that only existing values are written.
A2: Make this work with (semi-)optimal complexity. This requires a way to make the global assembler aware of the local pattern.
B: Support proxy (sparse) matrices (e.g. ScaledIdentityMatrix) where not all existing entries are stored.
B1: Support matrices where the stored entries are written using standard methods.
B2: Support matrices where the stored entries need to be written with special methods.
C: Support all of this with optimal complexity.
D: Support proxy matrices, where the stored data is not an actual matrix entry but some compressed representation.
This mainly concerns how local matrix entries are written to global ones.
With the old dune-fufem bases and assemblers, we had A-C. But this was only possible due to other restrictions: knowing that indices and matrices always have exactly two nesting levels, where the second one comes from a power ansatz, everything could be organized around the matrix type.
In dune-functions this is probably not possible with a similar approach. A1 should be easy if we check all entries before they are written. A2 maybe requires a mechanism to additionally pass a pattern from the local to the global assembler. Then the global assembler writes only these entries, and the local one only needs to set them. The easiest way would probably be to let the local assembler return a "container" of existing index pairs. This could be a simple std::vector, but specialized types would also be possible to increase efficiency. However, the global assembler still zero-initializes the whole local matrix, which spoils optimal complexity.
Using such an approach we can maybe even achieve B1. B2 may additionally require adjusting the matrix backend. Unfortunately, I currently see no way to get C, because the local view always caches all indices. Hence the global assembler will always be in O(n), and we cannot achieve O(N) in cases where n = NM.
D should probably be solved differently using MappedMatrix. I once had a student who successfully used this for a phase field with many components and a degenerate differential operator where local matrix blocks are derived on the fly from a precomputed matrix for a scalar Laplacian. ;-)
While we think about this, should we implement specializations of the assembler for BCRSMatrix<ScaledIdentityMatrix> and BCRSMatrix<DiagonalMatrix>, because these two are common cases?
As outlined above, the problem is that this is not easily possible. The reason is that a global assembler cannot know which local index corresponds to a non-existing entry. What should be possible is that the global assembler loops over all entries and checks if the corresponding matrix entry exists before accessing it.
This could be implemented by a specialization. But it's probably not more complicated to extend the ISTLMatrixBackend by a method entryExists(row,column) and let the global assembler query this before accessing and writing. This could be specialized for the desired matrix types, while the default return value is true.
To be more precise, the method could be called entryMightExist(row,column) with the following semantics: if it returns false, the entry does not exist; if it returns true, there is no guarantee that the entry exists. This way, always returning true is allowed (no expensive check for BCRSMatrix), but you can avoid accessing non-existing entries for ScaledIdentityMatrix and DiagonalMatrix.
This is what I had in mind for A1. The problem is that this has suboptimal complexity, because the global assembler still needs to loop over all local matrix entries.
!149 (merged) cleans up the matrix backend and the multi-index access code. On top of this, one could implement a method matrixBackend.addToEntry(row,column,value) that simply ignores non-existing entries via an internal specialization for DiagonalMatrix. Then this could be called in the global assembler instead of matrixBackend(row,column) += value. This would be A1 from above and should make the assembler work for matrices containing DiagonalMatrix.
@lisa_julia.nebel However, depending on your application, there may be another problem: The MatrixBuilder is currently only specialized for a few very specific matrix types that may or may not incorporate your use case.
Probably yes. As far as I can see, you're trying to use FieldMatrix<field_type,1,1> instead of FieldMatrix<field_type,block_size,block_size>. As outlined above, you must still use a matrix type that fits your basis (which cannot be the case if the old type worked). Please try ScaledIdentityMatrix<field_type,block_size> instead.
@lisa_julia.nebel It also looks like you use the global assembler with a local matrix. I have now extended istlmatrixbackendtest.cc to also try using Dune::Fufem::MassAssembler and Dune::Fufem::LaplaceAssembler with BCRSMatrix<ScaledIdentityMatrix<k,n>>, which seems to work fine.