Enable downstream testing for all core modules
This MR adds a CI job that runs the tests for all downstream modules.
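As a rough illustration of such a job, here is a hedged sketch of what it could look like in dune-common's `.gitlab-ci.yml`; the image name, module list and test command are assumptions for illustration, not the exact configuration added by this MR:

```yaml
# Sketch only (not the exact job of this MR): build the downstream core modules
# against the dune-common checkout being tested, then run one of their test suites.
downstream:core-modules:
  image: registry.dune-project.org/docker/ci/debian:10   # assumed: any duneci base image
  script:
    # duneci-install-module clones, configures and builds a module against the
    # already-installed modules (here: the dune-common branch under test)
    - duneci-install-module https://gitlab.dune-project.org/core/dune-geometry.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-istl.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-localfunctions.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-grid.git
    # assumed test step: build and run a downstream test suite via dunecontrol/CTest
    - dunecontrol --only=dune-istl bexec "make build_tests && ctest"
```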
Activity
added 1 commit
- 46789ac0 - [feature][ci] try some sort of downstream testing.
added 1 commit
- 61205cb4 - [feature][ci] try some sort of downstream testing.
mentioned in issue dune-istl#98 (closed)
- Resolved by Simon Praetorius
- We should add all core modules, i.e., adding istl and localfunctions would be nice.
- This will prolong the CI tests for dune-common and noticeably increase the load on the CI runners. Further, issues in the other core modules would prevent merges in dune-common. This does not mean I am against this idea, but we have to carefully balance these aspects.
added 1 commit
- 51983894 - [feature][ci] added downstream dune-istl test.
Just want to clarify something: this approach now tests `newbranch` in `dune-common` with `master` of all other core modules? We will be adding the same 'downstream' test to other core modules, i.e., a `newbranch` in `dune-geometry` is tested with `master` from `dune-grid`?

Now if `newbranch` does not exist in `dune-common`, then the test (in `dune-geometry`) would be `dune-common:master`, `dune-geometry:newbranch`, `dune-grid:master`. If `newbranch` exists in `dune-common`, the corresponding test would use `dune-common:newbranch`, `dune-geometry:newbranch` and `dune-grid:master`. The MR in `dune-common` would be testing `dune-common:newbranch`, `dune-geometry:master`, `dune-grid:master`. Is that right?

The consequence would then usually be that the MR in `dune-common` fails (since it is tested with `dune-geometry:master`) while the MR in `dune-geometry` passes (since it is tested with `dune-common:newbranch`). Which means the MR in `dune-geometry` needs to be merged first and the CI in `dune-common` needs to be restarted after that to make it pass.

After a branch is merged, the CI is run on `master` (I think), which would then fail in `dune-geometry`, since now `dune-common:master` is used instead of `dune-common:newbranch`. So as long as the `newbranch` in `dune-common` remains unmerged, the downstream modules are buggy.

That seems fine to me, just wanted to check that this is what would happen.
I don't think that is what happens. The `duneci-install-module` script that installs the downstream modules by default uses a policy that tries to find a branch name that matches whatever triggered the CI run: https://gitlab.dune-project.org/docker/ci/-/blob/master/base-common/duneci-install-module#L18 That means that - given suitable branches exist downstream - they are picked up. Or did I misunderstand you?
Edited by Dominic Kempf

That is my understanding, maybe @simon.praetorius can confirm. The only issue that I see is that merging related MRs in multiple modules could cause a bit of a synchronization issue for the downstream testing of the master branch. If you merge top-down in the dependency DAG, the downstream tests will fail. If you merge bottom-up, the regular tests will fail. Rule of thumb: merge quickly!
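For illustration, a minimal sketch of the branch-matching policy mentioned in the `duneci-install-module` comment above, written as an inline CI script; the actual logic lives in the linked script, and `CI_COMMIT_REF_NAME` is GitLab's predefined variable for the branch that triggered the pipeline:

```yaml
# Sketch only: the gist of the branch-matching policy; the real implementation
# is the linked duneci-install-module script.
.clone-downstream-sketch:
  script:
    - |
      REPO=https://gitlab.dune-project.org/core/dune-geometry.git
      # prefer a downstream branch named like the branch that triggered this pipeline
      if git ls-remote --exit-code --heads "$REPO" "$CI_COMMIT_REF_NAME" >/dev/null; then
        BRANCH="$CI_COMMIT_REF_NAME"
      else
        BRANCH=master   # fall back to the default branch
      fi
      git clone --depth 1 --branch "$BRANCH" "$REPO"
```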
@simon.praetorius I think the idea to use `stages`, as discussed at the meeting, would be very helpful. This approach is used in dune-codegen to run the expensive tests only upon request, or after merging to master.
I'm asking here for this feature, as I'm afraid it would require some additional configuration in core/ci-config.
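A possible way to express that idea with GitLab CI `stages` and `rules` (job and stage names are made up here, and this is not copied from dune-codegen's actual configuration):

```yaml
# Sketch: declare an extra stage and run the expensive downstream job only on
# master, in scheduled pipelines, or when triggered manually.
stages: [test, downstream]

downstream:full:
  stage: downstream
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'      # always after merging to master
    - if: '$CI_PIPELINE_SOURCE == "schedule"'  # and in scheduled (nightly) pipelines
    - when: manual                             # otherwise only upon request
      allow_failure: true                      # a skipped manual job must not block the MR
  script:
    - duneci-install-module https://gitlab.dune-project.org/core/dune-istl.git
```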
It is a bit of a question what we want to achieve. The current draft of this MR runs the tests in all MRs and branches for every new push. Instead of 5 min. we now need 25 min. for the CI job to finish. While it could be activated for master only, or triggered manually, this is often too late or may easily be forgotten. Also, it "just" runs the core-module tests. While some problems could already be detected there, sometimes the complicated failures emerge only in the discretization modules. For a manual trigger, I have started a full-system test repository: https://gitlab.dune-project.org/infrastructure/dune-nightly-test. It is still work in progress, though.
In my opinion, the downstream tests that are part of the regular `.gitlab-ci.yml` files should be quick tests to be run for all branches and all MRs. Maybe we just run the `duneci-install-module` command, which also configures and compiles the modules but does not run the individual module test suites. (Those could be run inside the downstream modules directly.) If we need more than a successful configure and compile, we could trigger another test (e.g. in the full-system test repository) manually. Maybe also add badges to indicate that these tests succeeded.
Edited by Simon Praetorius
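A hedged sketch of such a quick "configure and compile only" check, assuming `duneci-install-module` performs the configure-and-build step; it is the same install sequence as in the sketch near the top of this MR, just without running any test suite:

```yaml
# Sketch of a build-only downstream check: each call clones, configures and
# builds a core module against this dune-common branch; no test suite is run.
downstream:build-only:
  image: registry.dune-project.org/docker/ci/debian:10   # assumed duneci image name
  script:
    - duneci-install-module https://gitlab.dune-project.org/core/dune-geometry.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-istl.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-localfunctions.git
    - duneci-install-module https://gitlab.dune-project.org/core/dune-grid.git
```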
> Also, it "just" runs the core-module tests. While some problems could already be detected there, sometimes the complicated failures emerge only in the discretization modules.

This is unavoidable anyway. IMO the main difference here is that dune-common contains a significant part of the build system for the downstream modules, most of which is used in the core modules. That's the real benefit of the downstream testing here and, as far as I understood, is also what @robert.kloefkorn originally had in mind.
For elaborate testing of further downstream modules I consider a nightly test setup much more appropriate.
Right, it is certainly better than what we have right now. Thanks.
Edited by Robert K
@robert.kloefkorn Did you mean '...better than...'?
For GitLab, stages with "sticky" runners, i.e., the ability to run multiple stages on the same runner without the need for artifacts or shared caches, are planned for an upcoming release, see https://gitlab.com/gitlab-org/gitlab/-/issues/17497 (the current plan is release 13.9, ~ mid February). Maybe this can be used for the downstream tests (and/or the system tests) to improve the speed and clarity of the pipelines.
- Resolved by Simon Praetorius
added 47 commits
- 8933102c...7c5c1028 - 31 commits from branch master
- e9044ed0 - [feature][ci] try some sort of downstream testing.
- 6e27679f - [feature][ci] added downstream dune-istl test.
- 5c59365c - run tests on all downstream modules
- b4f3fa8e - use the correct docker image
- 0beb964d - set path variable
- ccaeb961 - set path variable
- 7e8752a2 - set path variable
- d6cd1826 - set path variable
- a3fdb41d - also install dune-common
- a8c49b12 - build all modules one after another
- e97b094b - build all modules one after another
- 2711bc03 - test dune-common first
- 91993a47 - test dune-common first
- cd28958f - reenable python tests
- 3f0b2d0a - simply python-bindings flags
- 4a14ee0a - simplify cmake flags
added 1 commit
- b998486d - remove python bindings from downstream testing
added 1 commit
- 21987ef1 - trigger the core pipeline in dune-nightly-tests
Instead of writing the downstream tests directly inside the `.gitlab-ci.yml` file of dune-common, I have implemented a multi-project trigger to run a pipeline in the dune-nightly repository. There, I have created a minimal `core` branch, currently with just one toolchain (gcc-7). Thus, we only need to maintain one repository with system tests instead of a similar thing in each module. Some details might be up for discussion:
- The downstream test is part of the dune-common pipeline. Should we mark it `allow_failure`?
- The system test only runs one toolchain, but the full test suite. Maybe we can remove the expensive tests from dune-common directly.
- Due to problems with the Python bindings, I just run the simple (non-Python) tests. Whether these tests should be part of the regular CI job needs further investigation; I don't know what I have to change in order to make this work. But we could simply use the python-bindings branch of the dune-nightly repository.
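For reference, a hedged sketch of what such a multi-project trigger job could look like; the project path and `core` branch are taken from the comments above, while the job name, stage and `allow_failure` setting are assumptions:

```yaml
# Sketch of the multi-project trigger: start the "core" pipeline in the
# dune-nightly-test project from dune-common's pipeline; allow_failure keeps a
# downstream failure from blocking merges (one of the open questions above).
downstream:system-test:
  stage: downstream                             # assumed stage name
  trigger:
    project: infrastructure/dune-nightly-test   # project mentioned above
    branch: core                                # minimal core branch mentioned above
    strategy: depend                            # mirror the triggered pipeline's status
  allow_failure: true
```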
After thinking about this a bit longer, can this be merged now?
Is there any response to the raised questions? What about the concern raised by @andreas.dedner? Marking this system test `allow_failure` would solve the issue of the order of merges. It might be that in some constellations, if multiple dependent branches are merged in the wrong order, the pipeline fails, but this cannot be prevented. The `allow_failure` flag would allow merging nevertheless.
All the other points I have raised can be put into a separate MR after merging this one. We can try it out and see if any problems arise. If so, we simply work on this a bit more, but we cannot foresee all possible situations. The current alternative is to always run the system pipeline manually.
assigned to @robert.kloefkorn