Commit 49a19b81 authored by Santiago Ospina De Los Ríos's avatar Santiago Ospina De Los Ríos
Browse files

[!17] Resolve "Add special membranes for species that only pass through to it but not diffuse"

Merge branch '20-add-special-membranes-for-species-that-only-pass-through-to-it-but-not-diffuse' into 'master'

ref:copasi/dune-copasi

### What does this MR do?

-   [x] Add a Finite Volume (FV) local operator.
-   [x] Re-structure the pdelab tree to allow both Finite Volume and
    Conforming Finite Elements.
    -   [x] Make stand-alone diffusion-reaction model for CG work again
    -   [x] Make a stand-alone diffusion-reaction model for FV
        -   [x] Add tests similar to what is already in multidomain (or improve
            them, e.g. test against analytic functions with an L2 error
            integration)
        -   [x] Write cell data by default
    -   [x] Make a multidomain diffusion reaction model with only FV
    -   [ ] ~~Manage domains with no variables by mapping domains to gfs index
        (yet another level of indirection)~~
    -   [ ] ~~Make a stand-alone diffusion-reaction model with composite FV
        and CG~~
    -   [ ] ~~Make a multidomain diffusion-reaction model with composite FV
        and CG~~
    -   [x] Use the virtual interface of local finite elements to choose
        between FV and CG discretization
-   [x] Allow selective refinement per compartment, i.e. anisotropic cube
    refinement.

### Is there something that needs to be double checked?

<!-- Is there something a reviewer should look out for _especially_? -->

*Fill this in*

### Can this MR be accepted?

-   [x] Implemented
-   [x] Added/Updated tests
-   [x] Added/Updated documentation
-   [x] Pipelines passing <!-- please check for new warnings -->
     <!-- change all occurrences of <branch> for your branch name -->

    -   [x] [![Build Status]]
    -   [x] [![Build Status][1]]
    -   [x] [![Build status][2]]

-   [x] Delete branch option set <!-- unless there's a good reason -->
-   [x] Added entry to CHANGELOG.md

### Related issues

Closes [#20]

<!-- For automatic closing, do not forget the commas between issue numbers-->

<!--
PLEASE READ THIS!

A Merge Request should be associated to a certain task or issue.
Its changes are supposed to be merged into the master branch.

Briefly explain __how__ you achieved the proposal of the task.

IMPORTANT: Make sure to set the merge request WIP if you are not finished yet.
-->

See merge request [!17]

  [Build Status]: https://gitlab.dune-project.org/copasi/dune-copasi/badges/20-add-special-membranes-for-species-that-only-pass-through-to-it-but-not-diffuse/pipeline.svg
  [![Build Status]]: https://gitlab.dune-project.org/copasi/dune-copasi/pipelines
  [1]: https://travis-ci.org/SoilRos/dune-copasi.svg?branch=20-add-special-membranes-for-species-that-only-pass-through-to-it-but-not-diffuse
  [![Build Status][1]]: https://travis-ci.org/SoilRos/dune-copasi/branches
  [2]: https://ci.appveyor.com/api/projects/status/6605joy2w17qvca8/branch/20-add-special-membranes-for-species-that-only-pass-through-to-it-but-not-diffuse?svg=true
  [![Build status][2]]: https://ci.appveyor.com/project/SoilRos/dune-copasi/history
  [#20]: https://gitlab.dune-project.org/copasi/dune-copasi/issues/20
  [!17]: https://gitlab.dune-project.org/copasi/dune-copasi/merge_requests/17


parents 8776eaea 25b1863e
Pipeline #24659 failed
Showing 539 additions and 406 deletions
......@@ -7,6 +7,7 @@ echo "MSYSTEM: $MSYSTEM"
echo "DUNECONTROL: ${DUNECONTROL}"
echo "DUNE_OPTIONS_FILE: ${DUNE_OPTIONS_FILE}"
cat ${DUNE_OPTIONS_FILE}
echo "PWD: $PWD"
which g++
which python
......
diff --git a/dune/common/std/CMakeLists.txt b/dune/common/std/CMakeLists.txt
index 40004d3c..94a2d7ac 100644
--- a/dune/common/std/CMakeLists.txt
+++ b/dune/common/std/CMakeLists.txt
@@ -1,6 +1,7 @@
install(
FILES
apply.hh
+ functional.hh
make_array.hh
memory.hh
optional.hh
# dependencies setup script for Travis and AppVeyor CI
DUNE_VERSION="master"
DUNE_VERSION="2.7"
# make sure we get the right mingw64 version of g++ on appveyor
PATH=/mingw64/bin:$PATH
......@@ -9,6 +9,7 @@ echo "MSYSTEM: $MSYSTEM"
echo "DUNECONTROL: ${DUNECONTROL}"
echo "DUNE_OPTIONS_FILE: ${DUNE_OPTIONS_FILE}"
cat ${DUNE_OPTIONS_FILE}
echo "PWD: $PWD"
which g++
g++ --version
......@@ -33,13 +34,13 @@ cmake --version
# download Dune dependencies
for repo in dune-common dune-typetree dune-pdelab dune-multidomaingrid
for repo in core/dune-common core/dune-geometry core/dune-grid core/dune-istl core/dune-localfunctions staging/dune-functions staging/dune-uggrid
do
git clone -b support/dune-copasi --depth 1 --recursive https://gitlab.dune-project.org/santiago.ospina/$repo.git
git clone -b releases/$DUNE_VERSION --depth 1 --recursive https://gitlab.dune-project.org/$repo.git
done
for repo in core/dune-geometry core/dune-grid core/dune-istl core/dune-localfunctions staging/dune-functions staging/dune-uggrid staging/dune-logging
for repo in dune-logging dune-typetree dune-pdelab dune-multidomaingrid
do
git clone -b $DUNE_VERSION --depth 1 --recursive https://gitlab.dune-project.org/$repo.git
git clone -b support/dune-copasi --depth 1 --recursive https://gitlab.dune-project.org/copasi/$repo.git
done
# python virtual environment does not work in windows yet
......@@ -76,6 +77,8 @@ cd dune-common
wget https://gist.githubusercontent.com/lkeegan/059984b71f8aeb0bbc062e85ad7ee377/raw/e9c7af42c47fe765547e60833a72b5ff1e78123c/cmake-patch.txt
echo '' >> cmake-patch.txt
git apply cmake-patch.txt
# another patch for missing header in cmake install list
git apply ../dune-copasi/.ci/dune-common.patch
cd ../
cd dune-logging
......
......@@ -7,6 +7,7 @@ echo "MSYSTEM: $MSYSTEM"
echo "DUNECONTROL: ${DUNECONTROL}"
echo "DUNE_OPTIONS_FILE: ${DUNE_OPTIONS_FILE}"
cat ${DUNE_OPTIONS_FILE}
echo "PWD: $PWD"
which g++
which python
......@@ -15,5 +16,5 @@ g++ --version
gcc --version
cmake --version
${DUNECONTROL} --opts=${DUNE_OPTIONS_FILE} --only=dune-copasi bexec make build_system_tests -j4
${DUNECONTROL} --opts=${DUNE_OPTIONS_FILE} --only=dune-copasi make --target build_system_tests
${DUNECONTROL} --opts=${DUNE_OPTIONS_FILE} --only=dune-copasi bexec ctest -j4 -L "DUNE_SYSTEMTEST" --output-on-failure
\ No newline at end of file
......@@ -7,6 +7,7 @@ echo "MSYSTEM: $MSYSTEM"
echo "DUNECONTROL: ${DUNECONTROL}"
echo "DUNE_OPTIONS_FILE: ${DUNE_OPTIONS_FILE}"
cat ${DUNE_OPTIONS_FILE}
echo "PWD: $PWD"
which g++
which python
......
---
BasedOnStyle: Mozilla
\ No newline at end of file
......@@ -31,22 +31,20 @@ before_install:
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 100; fi
- gcc --version
- g++ --version
- cd ..
# download muparser, gmp and libtiff as static libraries
- wget "https://github.com/lkeegan/libsbml-static/releases/latest/download/libsbml-static-$TRAVIS_OS_NAME.tgz"
- tar xzvf libsbml-static-$TRAVIS_OS_NAME.tgz
- sudo tar xzvf libsbml-static-$TRAVIS_OS_NAME.tgz -C /
- cd ..
install:
- echo 'CMAKE_FLAGS+=" -G '"'"'Unix Makefiles'"'"' "' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DCMAKE_CXX_STANDARD=17"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_SETUP=1 -DDUNE_PYTHON_ALLOW_GET_PIP=1 "' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_PATH='"$PWD"'/dune-python-venv"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DGMPXX_INCLUDE_DIR:PATH='"$PWD"'/gmp/include -DGMPXX_LIB:FILEPATH='"$PWD"'/gmp/lib/libgmpxx.a -DGMP_LIB:FILEPATH='"$PWD"'/gmp/lib/libgmp.a"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_PATH='"$PWD"'/ext/dune-python-venv"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DCMAKE_DISABLE_FIND_PACKAGE_QuadMath=TRUE -DBUILD_TESTING=OFF"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DDUNE_USE_ONLY_STATIC_LIBS=ON -DCMAKE_BUILD_TYPE=Release"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DCMAKE_PREFIX_PATH='"$PWD"' -DF77=true "' >> dune.opts
- echo 'CMAKE_FLAGS+=" -Dmuparser_ROOT='"$PWD"'/muparser "' >> dune.opts
- echo 'CMAKE_FLAGS+=" -DTIFF_ROOT='"$PWD"'/libtiff "' >> dune.opts
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then echo 'CMAKE_FLAGS+="-DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python3"' >> dune.opts; fi
- echo 'CMAKE_FLAGS+=" -DF77=true -DCMAKE_PREFIX_PATH=/opt/libs"' >> dune.opts
- echo 'CMAKE_FLAGS+=" -Dmuparser_INCLUDE_DIR:PATH=/opt/libs/include -Dmuparser_LIBRARIES:FILEPATH=opt/libs/lib/libmuparser.a"' >> dune.opts
- if [ "$TRAVIS_OS_NAME" = "linux" ]; then echo 'CMAKE_FLAGS+=" -DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python3"' >> dune.opts; fi
- echo 'MAKE_FLAGS="-j2 VERBOSE=1"' >> dune.opts
- export DUNE_OPTIONS_FILE="dune.opts"
- export DUNECONTROL=./dune-common/bin/dunecontrol
......
......@@ -4,7 +4,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
<!--
<!--
Guiding Principles
Changelogs are for humans, not machines.
......@@ -27,9 +27,26 @@ Types of changes
## [Unreleased]
### Added
- Move and rename header files
- Code documentation
- Data Context concept for factories
- Factory concept for arbitrary object instantiation
- Add factories for finite element and finite element maps
- Brief installation instructions
- Models can interpolate grid functions
- Grid utilities to recognize and mark triples of entities
- A finite volume local operator
- Grid function getters for external use
- Single domain executable
### Changed
- Move and rename header files
- Multidomain finite element was split into a multidomain and a dynamic power finite element
- Core DUNE dependencies are set to release 2.7 (See README.md)
- Other DUNE dependencies are set to COPASI/support/dune-copasi (See README.md)
- Executable is an optional build
- Library is optional and is split into smaller libraries
- Bump version utility updated to python3
### Fixed
- Dirichlet-Dirichlet condition at interfaces was being computed twice
## [0.1.0] - 2019-10-11
### Added
......
......@@ -35,4 +35,4 @@ add_subdirectory(test)
add_subdirectory(cmake/modules)
# finalize the dune project, e.g. generating config.h etc.
finalize_dune_project(GENERATE_CONFIG_H_CMAKE)
finalize_dune_project(GENERATE_CONFIG_H_CMAKE)
\ No newline at end of file
......@@ -11,22 +11,22 @@ Solver for reaction-diffusion systems in multiple compartments
* Neumann flux at the interface of compartments for variables with
the same name on the two compartments
* Easy to modify configuration file
* Initial conditions might be either TIFF file or a math expression.
* Solved using the finite element method
* Initial conditions can be a TIFF file or a math expression
* Solved using the finite element or finite volume method
* Output in the VTK format
* Currently it only supports 2D simulations
This project is made under the umbrella of the
[*Distributed and Unified Numerics Environment* `DUNE`](https://www.dune-project.org/) and the
[*Biochemical System Simulator* `COPASI`](http://copasi.org/).
Although the rationale of the design is always driven by biochemical processes (e.g. cell biology),
this software is not limited to this scope and can be used for other processes involving reaction-diffusion systems.
## Graphical User Interface for SBML files
For those working in bio-informatics there exists a graphical user interface for [`SBML`](https://en.wikipedia.org/wiki/SBML) files!
The GUI is able to convert non-spatial SBML models of bio-chemical reactions into
2D spatial models, and to simulate them with `dune-copasi`:
https://github.com/lkeegan/spatial-model-editor
......@@ -37,20 +37,20 @@ This requires that you have installed the following packages before the actual i
| Software | Version/Branch | Comments |
| ---------| -------------- | -------- |
| [CMake](https://cmake.org/) | 3.1 |
| C++ compiler | [C++17](https://en.wikipedia.org/wiki/List_of_compilers#C++_compilers) |
| [libTIFF](http://www.libtiff.org/) | 3.6.1 |
| [muParser](https://beltoforion.de/article.php?a=muparser) | 2.2.5 |
| [dune-common](https://gitlab.dune-project.org/santiago.ospina/dune-common) | support/dune-copasi | https://gitlab.dune-project.org/santiago.ospina/dune-common
| [dune-logging](https://gitlab.dune-project.org/staging/dune-logging) | master (recursive) | https://gitlab.dune-project.org/staging/dune-logging
| [dune-geometry](https://gitlab.dune-project.org/core/dune-geometry) | master | https://gitlab.dune-project.org/core/dune-geometry
| [dune-grid](https://gitlab.dune-project.org/core/dune-grid) | master | https://gitlab.dune-project.org/core/dune-grid
| [dune-uggrid](https://gitlab.dune-project.org/staging/dune-uggrid) | master | https://gitlab.dune-project.org/staging/dune-uggrid
| [dune-istl](https://gitlab.dune-project.org/core/dune-istl) | master | https://gitlab.dune-project.org/core/dune-istl
| [dune-localfunctions](https://gitlab.dune-project.org/core/dune-localfunctions) | master | https://gitlab.dune-project.org/core/dune-localfunctions
| [dune-functions](https://gitlab.dune-project.org/staging/dune-functions) | master | https://gitlab.dune-project.org/staging/dune-functions
| [dune-typetree](https://gitlab.dune-project.org/santiago.ospina/dune-typetree) | support/dune-copasi | https://gitlab.dune-project.org/santiago.ospina/dune-typetree
| [dune-pdelab](https://gitlab.dune-project.org/santiago.ospina/dune-pdelab) | support/dune-copasi | https://gitlab.dune-project.org/santiago.ospina/dune-pdelab
| [dune-multidomaingrid](https://gitlab.dune-project.org/santiago.ospina/dune-multidomaingrid) | support/dune-copasi | https://gitlab.dune-project.org/santiago.ospina/dune-multidomaingrid
| [dune-common](https://gitlab.dune-project.org/copasi/dune-common) | releases/2.7 | https://gitlab.dune-project.org/core/dune-common
| [dune-geometry](https://gitlab.dune-project.org/core/dune-geometry) | releases/2.7 | https://gitlab.dune-project.org/core/dune-geometry
| [dune-grid](https://gitlab.dune-project.org/core/dune-grid) | releases/2.7 | https://gitlab.dune-project.org/core/dune-grid
| [dune-uggrid](https://gitlab.dune-project.org/staging/dune-uggrid) | releases/2.7 | https://gitlab.dune-project.org/staging/dune-uggrid
| [dune-istl](https://gitlab.dune-project.org/core/dune-istl) | releases/2.7 | https://gitlab.dune-project.org/core/dune-istl
| [dune-localfunctions](https://gitlab.dune-project.org/core/dune-localfunctions) | releases/2.7 | https://gitlab.dune-project.org/core/dune-localfunctions
| [dune-functions](https://gitlab.dune-project.org/staging/dune-functions) | releases/2.7 | https://gitlab.dune-project.org/staging/dune-functions
| [dune-logging](https://gitlab.dune-project.org/staging/dune-logging) | support/dune-copasi (recursive) | https://gitlab.dune-project.org/copasi/dune-logging
| [dune-typetree](https://gitlab.dune-project.org/copasi/dune-typetree) | support/dune-copasi | https://gitlab.dune-project.org/copasi/dune-typetree
| [dune-pdelab](https://gitlab.dune-project.org/copasi/dune-pdelab) | support/dune-copasi | https://gitlab.dune-project.org/copasi/dune-pdelab
| [dune-multidomaingrid](https://gitlab.dune-project.org/copasi/dune-multidomaingrid) | support/dune-copasi | https://gitlab.dune-project.org/copasi/dune-multidomaingrid
The first four can be obtained with your preferred package manager on Unix-like operating systems, e.g.
......@@ -65,24 +65,24 @@ brew update
brew install cmake gcc libtiff muparser git
```
Now, the dune modules (including `dune-copasi`) can all be checked out into the same folder and installed in one go.
```bash
# prepare a folder to download and build dune modules
mkdir ~/dune-modules && cd ~/dune-modules
# fetch dependencies & dune-copasi in ~/dune-modules folder
git clone -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-common
git clone -b master --recursive https://gitlab.dune-project.org/staging/dune-logging
git clone -b master https://gitlab.dune-project.org/core/dune-geometry
git clone -b master https://gitlab.dune-project.org/core/dune-grid
git clone -b master https://gitlab.dune-project.org/staging/dune-uggrid
git clone -b master https://gitlab.dune-project.org/core/dune-istl
git clone -b master https://gitlab.dune-project.org/core/dune-localfunctions
git clone -b master https://gitlab.dune-project.org/staging/dune-functions
git clone -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-typetree
git clone -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-pdelab
git clone -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-multidomaingrid
git clone -b releases/2.7 https://gitlab.dune-project.org/core/dune-common
git clone -b releases/2.7 https://gitlab.dune-project.org/core/dune-geometry
git clone -b releases/2.7 https://gitlab.dune-project.org/core/dune-grid
git clone -b releases/2.7 https://gitlab.dune-project.org/staging/dune-uggrid
git clone -b releases/2.7 https://gitlab.dune-project.org/core/dune-istl
git clone -b releases/2.7 https://gitlab.dune-project.org/core/dune-localfunctions
git clone -b releases/2.7 https://gitlab.dune-project.org/staging/dune-functions
git clone -b support/dune-copasi --recursive https://gitlab.dune-project.org/copasi/dune-logging
git clone -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-typetree
git clone -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-pdelab
git clone -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-multidomaingrid
git clone -b master https://gitlab.dune-project.org/copasi/dune-copasi
# configure and build dune modules
......@@ -99,22 +99,22 @@ git clone -b master https://gitlab.dune-project.org/copasi/dune-copasi
cd .. && rm -r ~/dune-modules
```
For further information on the dune module installation process, please check out
the [dune-project web page](https://www.dune-project.org/doc/installation/)
## Usage
If you installed `dune-copasi` system-wide, you should be able to call the program
`dune_copasi_md` from your command line, accompanied by a configuration file.
```bash
dune_copasi_md config.ini
```
### Configuration File
The configuration file follows [INI file format](https://en.wikipedia.org/wiki/INI_file).
It should contain at least two sections, `grid` and `model`; a third section
`logging` is optional.
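Taken together, a configuration file has roughly this overall shape; this is only a sketch, using the keys discussed in the following sections with illustrative values:

```ini
[grid]
file = my_gmsh_file.msh
initial_level = 1

[model]
time_step = 0.1
order = 1

; the optional logging section is forwarded to dune-logging
[logging]
```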
#### Grid
......@@ -128,10 +128,19 @@ file = my_gmsh_file.msh
initial_level = 1
```
The grid should be formed by 2D triangles and squares, where each *physical group* can
only be formed by one of these types. That is, different *physical groups* can have different types of
geometry, but within each *physical group* there cannot be more than one type. The GMSH file
should be v2 and it should not contain the *physical group* identifiers at the beginning of the file
(this is a bug in the `dune-grid` reader).
#### Model
The model section starts with the definitions of the simulation time interval
and the polynomial order of the local finite element (currently only `0`, `1` and `2` are supported),
where `0` selects the finite volume method and higher orders select continuous Galerkin methods. This order applies
to the domains composed of triangles. Domains composed of squares are always solved with the
finite volume method.
```ini
[model]
......@@ -141,13 +150,13 @@ time_step = 0.1
order = 1
```
The following is the definition of the compartments of the model.
Each compartment corresponds to a *physical group* in the gmsh jargon.
Although the gmsh format allows you to name such physical groups,
we still need to assign them to a `dune-copasi` compartment, and for that
we use the *physical group* index. Notice that `dune-copasi` uses 0-based
indices while `gmsh` uses 1-based indices. In other words,
`<gmsh_physical_group> = <dune_copasi_compartment> + 1`.
Let's say, for example, that there are two *physical groups* in our gmsh file
and we are going to use them as the `nucleous` and `cytoplasm` compartments:
......@@ -162,25 +171,25 @@ cytoplasm = 0
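A compartments section along these lines might read as follows; the index values here are illustrative (remember they are 0-based), and must match the *physical groups* of your own gmsh file:

```ini
[model.compartments]
; <compartment name> = <0-based physical group index>
nucleous  = 1
cytoplasm = 0
```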
Now, each of these compartments will define its own initial conditions,
its diffusion-reaction system, and its vtk writer. For that, you have to expand
the `model` section with the defined compartments, e.g. `model.nucleous` or `model.cytoplasm`.
The subsections `initial`, `reaction`, `diffusion` and `operator` define the system variables
and their properties. You can add as many variables as desired, as long as they are the same
in these subsections. Notice that each variable defines a new diffusion-reaction partial
differential equation associated with it.
* The `initial` subsection allows the initialization of each of the variables.
* The `diffusion` subsection defines the diffusion coefficient math
expression associated with each variable. It may only depend on the grid coordinates `x` and `y`.
* The `reaction` subsection defines the reaction part of each equation in the PDE.
Since this is the source of non-linearities, it is allowed to depend on the other variables defined
within the compartment. This section has to include yet another subsection called `jacobian`.
* The `reaction.jacobian` subsection must describe the
[jacobian matrix](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant)
of the `reaction` part. It must follow the syntax `d<var_i>_d<var_j>`, which
reads as the *partial derivative of `<var_i>` with respect to `<var_j>`*.
* The `operator` subsection is an experimental feature; we recommend setting all variables to the
same index, e.g. 0.
* Finally, the subsection `writer` defines the file name for the vtk output.
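To illustrate the `d<var_i>_d<var_j>` naming scheme, a hypothetical two-variable reaction (the reaction terms are invented for this sketch, not taken from the text) would declare its jacobian as:

```ini
[model.nucleous.reaction]
u = -u*v^2
v = u*v^2

[model.nucleous.reaction.jacobian]
; d<var_i>_d<var_j>: partial derivative of the reaction of <var_i> w.r.t. <var_j>
du_du = -v^2
du_dv = -2*u*v
dv_du = v^2
dv_dv = 2*u*v
```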
For example, the following `model.nucleous` section defines a [Gray-Scott
model with `F=0.0420` and `k=0.0610`](http://mrob.com/pub/comp/xmorphia/F420/F420-k610.html):
......@@ -210,15 +219,15 @@ v = 0
[model.nucleous.writer]
file_name = nucleous_output
```
The `model.cytoplasm` section would have to be defined in a similar way. An important aspect when working
with different compartments is the interface fluxes. In `dune-copasi` these fluxes are set
automatically to [dirichlet-dirichlet](https://en.wikipedia.org/wiki/Dirichlet_boundary_condition)
boundary conditions iff the variable is shared between the two intersecting domains. Further
improvements will come for interface treatment.
#### Logging
The logging settings are directly forwarded to the `dune-logging` module. Please check
its doxygen documentation for detailed information. A simple configuration is the following:
```ini
......
......@@ -5,24 +5,32 @@ image:
clone_folder: C:\msys64\home\appveyor\dune-copasi
install:
- dir "C:\msys64\home\appveyor\dune-copasi" #tmp
- set PATH=C:\msys64\usr\bin;%PATH%
# download muparser, gmp and libtiff as static libraries
# download muparser, gmp and libtiff as static libraries: install to C:\libs
- mkdir temp
- cd temp
- appveyor DownloadFile "https://github.com/lkeegan/libsbml-static/releases/latest/download/libsbml-static-windows.zip"
- 7z x libsbml-static-windows.zip
- mv muparser C:\msys64\home\appveyor\.
- mv gmp C:\msys64\home\appveyor\.
- mv libtiff C:\msys64\home\appveyor\.
- mkdir C:\libs
- mv include C:\libs\.
- mv lib C:\libs\.
- dir "C:\libs"
- dir "C:\msys64\home\appveyor" #tmp
- dir "C:\msys64\home\appveyor\dune-copasi" #tmp
- cd C:\msys64\home\appveyor
before_build:
- set DEPS_DIR=/home/appveyor
- echo CMAKE_FLAGS="-G 'Unix Makefiles' -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=1 " > C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_SETUP=0 -DDUNE_PYTHON_ALLOW_GET_PIP=0" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_PATH=%DEPS_DIR%/dune-python-venv" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DGMPXX_INCLUDE_DIR:PATH=%DEPS_DIR%/gmp/include -DGMPXX_LIB:FILEPATH=%DEPS_DIR%/gmp/lib/libgmpxx.a" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DGMP_LIB:FILEPATH=%DEPS_DIR%/gmp/lib/libgmp.a -DCMAKE_DISABLE_FIND_PACKAGE_QuadMath=TRUE" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DBUILD_TESTING=OFF -DDUNE_USE_ONLY_STATIC_LIBS=ON" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DCMAKE_PREFIX_PATH=%DEPS_DIR% -Dmuparser_ROOT=%DEPS_DIR%/muparser -DTIFF_ROOT=%DEPS_DIR%/libtiff " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DCMAKE_DISABLE_FIND_PACKAGE_QuadMath=TRUE" >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DBUILD_TESTING=OFF -DDUNE_USE_ONLY_STATIC_LIBS=ON " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DF77=true -DCMAKE_CXX_FLAGS='-Wa,-mbig-obj -static -static-libgcc -static-libstdc++' " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DGMPXX_INCLUDE_DIR:PATH=C:/libs/include " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DGMPXX_LIB:FILEPATH=C:/libs/lib/libgmpxx.a " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DGMP_LIB:FILEPATH=C:/libs/lib/libgmp.a " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -DCMAKE_PREFIX_PATH=C:/libs " >> C:\msys64\home\appveyor\dune.txt
- echo CMAKE_FLAGS+=" -Dfmt_ROOT=C:/libs " >> C:\msys64\home\appveyor\dune.txt
- echo MAKE_FLAGS="-j2 VERBOSE=1" >> C:\msys64\home\appveyor\dune.txt
- type C:\msys64\home\appveyor\dune.txt
- set DUNE_OPTIONS_FILE=dune.txt
......
......@@ -7,15 +7,15 @@ RUN ln -s /duneci/toolchains/${TOOLCHAIN} /duneci/toolchain
RUN echo 'CMAKE_FLAGS+=" -DDUNE_PYTHON_VIRTUALENV_SETUP=1 -DDUNE_PYTHON_VIRTUALENV_PATH=/duneci/modules/dune-python-venv"' >> /duneci/cmake-flags/enable_virtualenv
RUN echo 'CMAKE_FLAGS+=" -DCMAKE_GENERATOR="Ninja' >> /duneci/cmake-flags/cmake_generator
RUN duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-common.git \
&& duneci-install-module --recursive https://gitlab.dune-project.org/staging/dune-logging.git \
&& duneci-install-module -b feature/allow-multidomain-vtk-compare-to-have-same-thresholds https://gitlab.dune-project.org/quality/dune-testtools.git \
&& duneci-install-module https://gitlab.dune-project.org/core/dune-geometry.git \
&& duneci-install-module https://gitlab.dune-project.org/staging/dune-uggrid.git \
&& duneci-install-module https://gitlab.dune-project.org/core/dune-grid.git \
&& duneci-install-module https://gitlab.dune-project.org/core/dune-istl.git \
&& duneci-install-module https://gitlab.dune-project.org/core/dune-localfunctions.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-typetree.git \
&& duneci-install-module https://gitlab.dune-project.org/staging/dune-functions.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-pdelab.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/santiago.ospina/dune-multidomaingrid.git
\ No newline at end of file
RUN duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/core/dune-common.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/core/dune-geometry.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/staging/dune-uggrid.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/core/dune-grid.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/core/dune-istl.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/core/dune-localfunctions.git \
&& duneci-install-module -b releases/2.7 https://gitlab.dune-project.org/staging/dune-functions.git \
&& duneci-install-module -b support/dune-copasi --recursive https://gitlab.dune-project.org/copasi/dune-logging.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-typetree.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-pdelab.git \
&& duneci-install-module -b support/dune-copasi https://gitlab.dune-project.org/copasi/dune-multidomaingrid.git
&& duneci-install-module -b feature/allow-multidomain-vtk-compare-to-have-same-thresholds https://gitlab.dune-project.org/quality/dune-testtools.git \
\ No newline at end of file
add_subdirectory(common)
add_subdirectory(concepts)
add_subdirectory(finite_element)
add_subdirectory(finite_element_map)
add_subdirectory(local_operator)
add_subdirectory(grid)
add_subdirectory(model)
install(FILES coefficient_mapper.hh
enum.hh
muparser_data_handler.hh
pdelab_expression_adapter.hh
tiff_grayscale.hh
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/common")
\ No newline at end of file
install(FILES coefficient_mapper.hh
data_context.hh
enum.hh
muparser_data_handler.hh
pdelab_expression_adapter.hh
factory.hh
tiff_grayscale.hh
COMPONENT Development
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/common")
#ifndef DUNE_COPASI_CONTEXT_BASE_HH
#define DUNE_COPASI_CONTEXT_BASE_HH
#include <type_traits>
/**
@defgroup DataContext Data Context
@{
@brief Generalized and extensible static polymorphic data holder.
A data context is a generic way to pass around an arbitrary amount of unique data following
a defined interface. Moreover, a data context is always extensible to hold more data
at compile time. It has a very similar concept and use cases as `**kwargs` in Python.
It is called a context because the content of the object depends on the context where it was called.
A data context knows at compile-time whether it contains a type identified
with a certain signature.
@code{.cpp}
template<class Ctx>
void foo(Ctx&& ctx)
{
using GV = ...;
// check whether context contains the type GV
constexpr bool result = Ctx::has( Context::Tag<GV>{} );
// Get a view on the value GV contained in the context
const auto& view = ctx.view( Context::Tag<GV>{} );
// Create a new context that additionally stores the value 10 for the type `int`
// Notice that the old context can be moved into the new one if possible
auto new_ctx = Context::DataContext<int,Ctx>{10, std::forward<Ctx>(ctx)};
// Move the value stored for `int` out of the context
auto value = new_ctx.get( Context::Tag<int>{} );
// If unsure which types a data context contains, they can be printed.
// Notice that this does not tell whether the stored data is still valid
std::cout << "Output: " << new_ctx << std::endl;
// Output: <data contained in ctx>, int;
}
@endcode
Note that something similar is possible with variadic templates or tuples;
however, those usually involve a lot of template metaprogramming to identify
the stored types and to extend them with new ones.
An easy way to create a data context with different values is the helper function Context::data_context
@code{.cpp}
auto ctx = Context::data_context(int(1), double(2.0), std::string("I'm stored in a context"));
@endcode
@}
*/
namespace Dune::Copasi::Context {
/**
* @brief Tags any type with a unique default constructible struct
* @ingroup DataContext
*/
template<class T>
struct Tag {};
/**
* @brief Base data context
* @ingroup DataContext
*/
struct DataContextBase
{
//! Default constructor
DataContextBase()
{}
/**
* @brief Method to check if context contains a type
*
* @param[in] <unnamed> Tagged version of the type to check
*
* @tparam T The type to check
*/
template<class T>
static bool constexpr has(Tag<T>)
{
return false;
}
/**
* @brief Gets a view on the stored value for type T
*
* @param[in] <unnamed> Tagged version of the type to view
*
* @tparam T The type to view
*
* @return A const reference to the stored value
*/
template<class T>
const T& view(Tag<T>) const
{
static_assert(Dune::AlwaysFalse<T>::value, "Data context does not contain type T");
}
/**
* @brief Gets the ownership of the stored value for type T
*
* @param[in] <unnamed> Tagged version of the type to get
*
* @tparam T The type to get
*
* @return An rvalue reference to the stored value
*/
template<class T>
T&& get(Tag<T>)
{
static_assert(Dune::AlwaysFalse<T>::value, "Data context does not contain type T");
}
/**
* @brief Writes type names to the output stream
*
* @param os The output stream
* @param[in] ctx The context to print
*
* @return The output stream with the type names
*/
friend std::ostream& operator<<(std::ostream& os, const DataContextBase& ctx)
{
return os;
}
};
/**
* @brief This class describes a data context.
* @ingroup DataContext
*
* @tparam T Type to store in the data context
* @tparam Ctx Context to be extended
*/
template<class T, class Ctx = DataContextBase>
class DataContext : public Ctx
{
// ensure that we have means to store T
static_assert(std::is_copy_constructible_v<T> or std::is_move_constructible_v<T>);
// ensure that each type is associated with a value only once
static_assert(not Ctx::has( Tag<T>{} ), "Data context already contains type T");
// helper constant to check that _T is the universal-reference version of T
template<class _T>
static constexpr bool is_valid_t = std::is_same_v<std::decay_t<T>,std::decay_t<_T>>;
// helper constant to check that _Ctx is the universal-reference version of Ctx
template<class _Ctx>
static constexpr bool is_valid_ctx = std::is_same_v<std::decay_t<Ctx>,std::decay_t<_Ctx>>;
public:
// Inherit methods from the base context
using Ctx::get;
using Ctx::has;
using Ctx::view;
/**
* @brief Constructs a new instance
*
* @param value The value
* @param ctx The base context
*
* @tparam _T Universal reference of T
* @tparam _Ctx Universal reference of Ctx
* @tparam <unnamed> Helper SFINAE to disable constructor when _T and _Ctx are not valid
*/
template<class _T, class _Ctx,
class = std::enable_if_t<is_valid_t<_T> and is_valid_ctx<_Ctx>>>
DataContext(_T&& value, _Ctx&& ctx)
: Ctx(std::forward<_Ctx>(ctx))
, _value(std::forward<_T>(value))
{}
/**
* @brief Constructs a new instance.
*
* @param value The value
*
* @tparam _T Universal reference of T
* @tparam is_ctx_default_ctble Helper SFINAE bool for default constructible base context
* @tparam <unnamed> Helper SFINAE to disable constructor when _T is not valid or Ctx is not default constructible
*/
template<class _T,
bool is_ctx_default_ctble = std::is_default_constructible_v<Ctx>,
class = std::enable_if_t<is_valid_t<_T> and is_ctx_default_ctble>>
DataContext(_T&& value)
: _value(std::forward<_T>(value))
{}
/**
* @brief Method to check if context contains a type
*
* @tparam T The type to check
*/
static bool constexpr has(Tag<T>)
{
return true;
}
/**
* @brief Gets a view on the stored value for type T
*
* @tparam T The type to view
*
* @return A const reference to the stored value
*/
const T& view(Tag<T>) const
{
return _value;
}
/**
* @brief Gets the ownership of the stored value for type T
*
* @tparam T The type to get
*
* @return An rvalue reference to the stored value
*/
T&& get(Tag<T>)
{
return std::move(_value);
}
/**
* @brief Writes type names to the output stream
*
* @param os The output stream
* @param[in] ctx The context to print
*
* @return The output stream with the type names
*/
friend std::ostream& operator<<(std::ostream& os, const DataContext<T,Ctx>& ctx)
{
std::string ending = std::is_same_v<Ctx,DataContextBase> ? ";" : ", ";
os << "\t" << Dune::className<T>() << ending;
os << *static_cast<const Ctx *>(&ctx);
return os;
}
private:
//! Actual storage data of type T
T _value;
};
/**
* @brief Create a data context for one value
*
* @param value The value to store in the data context
*
* @tparam T Universal reference of the value to store
*
* @return A context storing the value
*/
template<class T>
auto data_context(T&& value)
{
return Context::DataContext<std::decay_t<T>>(std::forward<T>(value));
}
/**
* @brief Create a data context for several values
*
* @param arg The first argument in the data context
* @param args The rest of arguments in the data context
*
* @tparam Arg0 Universal reference of the first value to store
* @tparam Args Universal references of the rest of values to store
*
* @return A context storing all the values
*/
template<class Arg0, class... Args>
auto data_context(Arg0&& arg, Args&&... args)
{
auto base_ctx = data_context(std::forward<Args>(args)...);
return Context::DataContext<std::decay_t<Arg0>, decltype(base_ctx)>(std::forward<Arg0>(arg), std::move(base_ctx));
}
} // namespace Dune::Copasi::Context
#endif // DUNE_COPASI_CONTEXT_BASE_HH
#ifndef DUNE_COPASI_FACTORY_HH
#define DUNE_COPASI_FACTORY_HH
#include <memory>
#include <dune/common/typetraits.hh>
namespace Dune::Copasi {
/**
@defgroup Factory Factory
@{
@brief Class instance creator
A factory should be able to create an instance of a type out of a @ref DataContext.
This is done by defining a specialization of the class Factory together with a static function create.
One thing to take into account is that a data context can only store one value per data type.
This is quite restrictive if the constructor of T takes repeated or very common types (e.g. `T{int,int,double}`);
in such a case, it is best to wrap these values in a unique struct that contains all of them.
@}
*/
/**
* @brief Class Factory
*
* @tparam T Type to be created
*/
template<class T>
struct Factory {
// Factory for type T has not been specialized
static_assert(Dune::AlwaysFalse<T>::value, "Factory does not exist");
};
} // namespace Dune::Copasi
#endif // DUNE_COPASI_FACTORY_HH
#ifndef DUNE_COPASI_PARAMETER_PARSER_HH
#define DUNE_COPASI_PARAMETER_PARSER_HH
#include <functional>
#include <dune/common/parametertree.hh>
namespace Dune::Copasi {
bool
eq(const ParameterTree& config_l, const ParameterTree& config_r)
{
// absolutely not optimal for big trees!
for (auto&& key : config_r.getValueKeys())
if (not config_l.hasKey(key))
return false;
for (auto&& key : config_l.getValueKeys())
if (not config_r.hasKey(key))
return false;
for (auto&& section : config_r.getSubKeys())
if (not config_l.hasSub(section) or
    not eq(config_l.sub(section), config_r.sub(section)))
return false;
for (auto&& section : config_l.getSubKeys())
if (not config_r.hasSub(section))
return false;
return true;
}
// merge parameter trees
// config_l += config_r
void
add(ParameterTree& config_l,
const ParameterTree& config_r,
bool throw_on_override = true)
{
for (auto&& key : config_r.getValueKeys()) {
if (throw_on_override and config_l.hasKey(key))
DUNE_THROW(RangeError,
"config file addition failed due to duplicated key: " << key);
config_l[key] = config_r[key];
}
for (auto&& section : config_r.getSubKeys())
add(config_l.sub(section), config_r.sub(section), throw_on_override);
}
// diff parameter trees
// config_l -= config_r
void
diff(ParameterTree& config_l, const ParameterTree& config_r)
{
ParameterTree config_l_copy(config_l);
config_l = {};
for (auto&& key : config_l_copy.getValueKeys())
if (not config_r.hasKey(key))
config_l[key] = config_l_copy[key];
for (auto&& section : config_l_copy.getSubKeys()) {
diff(config_l_copy.sub(section), config_r.sub(section));
if (config_l_copy.sub(section).getValueKeys().size() > 0 or
config_l_copy.sub(section).getSubKeys().size() >
0) // remove empty sections
config_l.sub(section) = config_l_copy.sub(section);
}
}
// get keys in vector
ParameterTree
get_keys(const ParameterTree& config, const std::vector<std::string>& keys)
{
ParameterTree new_config;
for (auto&& key : keys)
new_config[key] = config[key];
return new_config;
}
// get all keys (no section)
ParameterTree
get_keys(const ParameterTree& config)
{
return get_keys(config, config.getValueKeys());
}
// get sections in vector
ParameterTree
get_sections(
const ParameterTree& config,
const std::vector<std::function<ParameterTree(const ParameterTree&)>>&
sections)
{
ParameterTree new_config;
for (auto&& section : sections)
add(new_config, section(config));
return new_config;
}
ParameterTree
get_grid(const ParameterTree& config)
{
std::vector<std::string> grid_keys{ "file", "initial_level" };
auto grid_config = get_keys(config.sub("grid"), grid_keys);
// todo : check file is valid
// todo : check initial_level is > 0
ParameterTree new_config;
new_config.sub("grid") = grid_config;
return new_config;
}
ParameterTree
get_initial(const ParameterTree& config)
{
auto initial_config = get_keys(config.sub("initial"));
// todo : check that they are math expressions
// todo : keys are ordered
ParameterTree new_config;
new_config.sub("initial") = initial_config;
return new_config;
}
// check_var_consistency with respect to initial sections
ParameterTree
get_diffusion(const ParameterTree& config, bool check_var_consistency = true)
{
ParameterTree diffusion_config;
if (check_var_consistency) {
auto initial_config = get_keys(config.sub("initial"));
diffusion_config =
get_keys(config.sub("diffusion"), initial_config.getValueKeys());
} else {
diffusion_config = get_keys(config.sub("diffusion"));
}
// todo : check that they are math expressions
// todo : keys are ordered
ParameterTree new_config;
new_config.sub("diffusion") = diffusion_config;
return new_config;
}
// check_var_consistency with respect to initial sections
ParameterTree
get_operator(const ParameterTree& config, bool check_var_consistency = true)
{
ParameterTree operator_config;
if (check_var_consistency) {
auto initial_config = get_keys(config.sub("initial"));
operator_config =
get_keys(config.sub("operator"), initial_config.getValueKeys());
} else {
operator_config = get_keys(config.sub("operator"));
}
// todo : check that they are signed integers
// todo : keys are ordered
ParameterTree new_config;
new_config.sub("operator") = operator_config;
return new_config;
}
ParameterTree
get_jacobian(const ParameterTree& config,
std::vector<std::string> base_variables)
{
auto jacobian_config = get_keys(config.sub("jacobian"));
std::size_t jac_size = jacobian_config.getValueKeys().size();
std::size_t base_size = base_variables.size();
// it's important that jacobian has the right size and is ordered
if (jac_size != base_size * base_size)
DUNE_THROW(RangeError, "jacobian section has wrong size");
std::size_t count(0);
for (auto&& var_i : base_variables) {
for (auto&& var_j : base_variables) {
std::string jac_key = jacobian_config.getValueKeys()[count];
auto found_i = jac_key.find(var_i);
if (found_i == std::string::npos)
DUNE_THROW(RangeError,
"Jacobian key '" << jac_key
<< "' does not contain its base key i '"
<< var_i << "'");
auto found_j = jac_key.find(var_j);
if (found_j == std::string::npos)
DUNE_THROW(RangeError,
"Jacobian key '" << jac_key
<< "' does not contain its base key j '"
<< var_j << "'");
count++;
}
}
// todo : check that they are math expressions
// todo : keys are ordered
ParameterTree new_config;
new_config.sub("jacobian") = jacobian_config;
return new_config;
}
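For a system with species `u` and `v`, `get_jacobian` expects n² = 4 jacobian keys, ordered row-major, where the k-th key must contain the names of both `var_i` and `var_j`. A hypothetical INI fragment that satisfies this check (the `d<var_i>__d<var_j>` naming is just one scheme that passes the substring test; only containment and order are actually verified):

```ini
[reaction]
u = -u*v
v = u*v

[reaction.jacobian]
du__du = -v
du__dv = -u
dv__du = v
dv__dv = u
```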
ParameterTree
get_reaction(const ParameterTree& config, bool check_var_consistency = true)
{
ParameterTree reaction_config;
if (check_var_consistency) {
auto initial_config = get_keys(config.sub("initial"));
reaction_config =
get_keys(config.sub("reaction"), initial_config.getValueKeys());
} else {
reaction_config = get_keys(config.sub("reaction"));
}
auto jacobian_config =
get_jacobian(config.sub("reaction"), reaction_config.getValueKeys());
add(reaction_config, jacobian_config);
// todo : check that they are math expressions
// todo : keys are ordered
ParameterTree new_config;
new_config.sub("reaction") = reaction_config;
return new_config;
}
ParameterTree
get_compartment_i(const ParameterTree& config, const std::string& name)
{
std::vector<std::function<ParameterTree(const ParameterTree&)>> sections;
sections.push_back(get_initial);
sections.push_back([](auto i) { return get_diffusion(i); });
sections.push_back([](auto i) { return get_reaction(i); });
sections.push_back([](auto i) { return get_operator(i); });
auto compart_config = get_sections(config.sub(name), sections);
ParameterTree new_config;
new_config.sub(name) = compart_config;
return new_config;
}
ParameterTree
get_compartments(const ParameterTree& config)
{
auto compart_config = get_keys(config.sub("compartments"));
// todo : check that they are valid keys
ParameterTree new_config;
new_config.sub("compartments") = compart_config;
return new_config;
}
ParameterTree
get_model(const ParameterTree& config)
{
auto compartments = get_compartments(config.sub("model"));
std::vector<std::function<ParameterTree(const ParameterTree&)>> sections;
for (auto&& compartment : compartments.sub("compartments").getValueKeys())
sections.push_back(
[&](auto ini) { return get_compartment_i(ini, compartment); });
auto compartment_i = get_sections(config.sub("model"), sections);
add(compartments, compartment_i);
ParameterTree new_config;
new_config.sub("model") = compartments;
// workaround for a ParameterTree quirk: if keys are added before subsections, the subsection is not set correctly
std::vector<std::string> model_keys{ "begin_time", "end_time", "time_step" };
add(new_config.sub("model"), get_keys(config.sub("model"), model_keys));
// todo : check that they are valid keys
return new_config;
}
} // namespace Dune::Copasi
#endif // DUNE_COPASI_PARAMETER_PARSER_HH
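Putting the section getters together, `get_model` validates a configuration shaped roughly like the following. This is a hypothetical sketch: the time keys come from `model_keys` above, the rest from the per-compartment getters, while the concrete compartment name, species names, and expression values are made up for illustration.

```ini
[model]
begin_time = 0
end_time = 10
time_step = 0.1

[model.compartments]
cytoplasm = 0

[model.cytoplasm.initial]
u = 1
v = 0.5

[model.cytoplasm.diffusion]
u = 1e-3
v = 5e-4

[model.cytoplasm.reaction]
u = -u*v
v = u*v

[model.cytoplasm.reaction.jacobian]
du__du = -v
du__dv = -u
dv__du = v
dv__dv = u

[model.cytoplasm.operator]
u = 0
v = 0
```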
#ifndef DUNE_COPASI_GRID_FUNCTION_EXPRESSION_ADAPTER_HH
#define DUNE_COPASI_GRID_FUNCTION_EXPRESSION_ADAPTER_HH
#include <dune/copasi/common/tiff_grayscale.hh>
#include <dune/pdelab/common/function.hh>
#include <dune/logging/logging.hh>
@@ -12,9 +10,9 @@
#include <muParser.h>
#include <algorithm>
#include <string>
#include <type_traits>
#include <memory>
namespace Dune::Copasi {
@@ -48,7 +46,7 @@ public:
ExpressionToGridFunctionAdapter(const GV& grid_view,
const std::string& equation,
bool do_compile_parser = true,
std::vector<std::string> other_variables = {})
const std::vector<std::string>& other_variables = {})
: _logger(Logging::Logging::componentLogger({}, "model"))
, _gv(grid_view)
, _time(0.)
@@ -59,8 +57,6 @@ public:
constexpr int dim = Traits::dimDomain;
std::sort(other_variables.begin(), other_variables.end());
_logger.trace("initialize parser with constant variables"_fmt);
_parser.DefineConst("pi", StandardMathematicalConstants<double>::pi());
_parser.DefineConst("dim", dim);
@@ -153,6 +149,11 @@ public:
_compiled = true;
}
/**
* @brief Get parser
*
* @return Reference to internal parser
*/
mu::Parser& parser()
{
assert(not _compiled);
@@ -201,6 +202,41 @@ private:
bool _compiled;
};
/**
* @brief Gets the muparser expressions.
*
* @param[in] expressions_config The expressions configuration
* @param[in] gf_grid_view The grid view for the grid function
* @param[in] compile True to compile expression at construction
*
* @tparam GFGridView Grid view type
* @tparam RF Range field type
*
* @return Vector with muparser expressions pointers
*/
template<class GFGridView, class RF = double>
auto get_muparser_expressions(
const ParameterTree& expressions_config,
const GFGridView& gf_grid_view,
bool compile = true)
{
const auto& vars = expressions_config.getValueKeys();
using GridFunction = ExpressionToGridFunctionAdapter<GFGridView, RF>;
std::vector<std::shared_ptr<GridFunction>> functions;
for (std::size_t i = 0; i < vars.size(); i++) {
// construct without compiling so that the parser is compiled exactly once below
auto gf = std::make_shared<GridFunction>(
gf_grid_view, expressions_config[vars[i]], false);
functions.emplace_back(gf);
if (compile)
functions[i]->compile_parser();
}
return functions;
}
} // namespace Dune::Copasi
#endif // DUNE_COPASI_GRID_FUNCTION_EXPRESSION_ADAPTER_HH
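As a usage sketch, `get_muparser_expressions` turns every key of an expressions section like the following into one grid function. The section and species names are hypothetical; besides the constants `pi` and `dim` defined by the adapter, which variables (e.g. spatial coordinates) are available inside an expression depends on what the adapter registers with the parser.

```ini
[model.cytoplasm.initial]
u = 10*exp(-((x-0.5)^2 + (y-0.5)^2)/0.05)
v = pi/dim
```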
install(FILES grid.hh
pdelab.hh
typetree.hh
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/concepts")
install(FILES grid.hh pdelab.hh typetree.hh
COMPONENT Development
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/concepts")
install(FILES dynamic_local_basis.hh
dynamic_local_coefficients.hh
dynamic_local_interpolation.hh
dynamic_local_finite_element.hh
multidomain_local_finite_element_map.hh
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/finite_element")
add_subdirectory(dynamic_power)
install(FILES dynamic_power.hh
local_basis_cache.hh
p0.hh
pk.hh
COMPONENT Development
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/dune/copasi/finite_element")