In one of my applications the operator[] access to the array entries takes up to 5 % of total run-time.
If I comment out the index check, I gain this 5% of run-time back.
I would suggest two options:
use preprocessor directives to (de-)activate the checks:
if (lb == j+n || *lb != i) DUNE_THROW(ISTLError,"index "<<i<<" not in compressed array");
implement at(size_type i), which includes the index checks, while operator[] returns without checking.
The first option seems to be a little more 'dune-istl' like and less invasive, while the second one is similar to the behavior of a std::vector.
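To make the two options concrete, here is a minimal, self-contained sketch. The container, its members, and the exception type are invented for illustration; this is not the actual dune-istl code.

```cpp
#include <algorithm>
#include <stdexcept>
#include <vector>

// Mock of a compressed-index array: j holds the sorted indices of the
// stored entries, a holds the corresponding values.
struct CompressedArray {
  std::vector<int>    j;
  std::vector<double> a;

  // Option 1: operator[] keeps the check, but it can be compiled out
  // with a preprocessor directive (here: -DNDEBUG).
  double& operator[](int i) {
    auto lb = std::lower_bound(j.begin(), j.end(), i);
#ifndef NDEBUG
    if (lb == j.end() || *lb != i)
      throw std::range_error("index not in compressed array");
#endif
    return a[lb - j.begin()];
  }

  // Option 2: at() always checks (std::vector-style), and operator[]
  // would then stay unchecked.
  double& at(int i) {
    auto lb = std::lower_bound(j.begin(), j.end(), i);
    if (lb == j.end() || *lb != i)
      throw std::range_error("index not in compressed array");
    return a[lb - j.begin()];
  }
};
```

Note that with option 1, a release build (-DNDEBUG) given an invalid index falls through to an out-of-range access, which is exactly the trade-off under discussion.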
@tobias.malkmus May I ask how this method is called in your application and what your application is? I usually encountered this exception for algorithms that assume a symmetric sparsity pattern of a matrix but got an unsymmetric one. In that situation an exception is quite handy for a user compared to getting some segmentation fault.
In my application this method is called via BCRSMatrix::operator[] or BCRSMatrix::entry().
The matrix is constructed using the implicit build mode. The entries of the matrix are recomputed in each time step. The sparsity pattern is fixed.
In general these checks are appreciated. But in production runs, where I'm sure that everything is OK, a possibility to deactivate these checks would be great.
@tobias.malkmus I see. As we agree that these checks are of value, I would think that these checks should only be deactivated if the user uses -DNDEBUG for compilation.
@markus.blatt sorry for the misunderstanding, it was not my intention to get rid of these checks. I would like to have the possibility to deactivate them.
For me the question is: what should be the default (activated or deactivated checks), and how should this be done?
@pipping I don't think DUNE_ASSERT_BOUNDS returns a helpful error message, e.g. it does not tell you which index is not contained in the index mapping j.
IMHO the default should be to have the checks activated because the
cause could be a misuse of dune-istl by the user. Using
#ifndef NDEBUG
if (...) DUNE_THROW(...);
#endif
forces the user to use g++ -DNDEBUG to deactivate them. If he/she does, he/she either is a power user or masquerades as one. In any case he/she should know what he/she is doing.
@markus.blatt I'm not sure why you're recommending NDEBUG here now. I believe we've already had this debate and agreed on DUNE_CHECK_BOUNDS, which is what should be used directly (with a RangeError, not an ISTLError here) rather than NDEBUG, which is far too general and affects every single file you're including.
@tobias.malkmus (as I said above): If DUNE_ASSERT_BOUNDS is too restrictive for this purpose then there's the DUNE_CHECK_BOUNDS guard and the RangeError class, still precisely for this purpose.
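For reference, my understanding of the opt-in mechanism is roughly the following. This is a self-contained imitation of the pattern, not the real dune-common header; the macro and function names here are made up.

```cpp
#include <stdexcept>

// Opt-in bounds check: expands to (almost) nothing unless the user
// compiles with -DDUNE_CHECK_BOUNDS, in which case a failed check
// throws.
#ifdef DUNE_CHECK_BOUNDS
#define MY_ASSERT_BOUNDS(cond)                                        \
  do {                                                                \
    if (!(cond))                                                      \
      throw std::range_error("bounds check failed: " #cond);          \
  } while (false)
#else
// The condition is only "used" inside sizeof, so it is never
// evaluated but also never triggers unused-parameter warnings.
#define MY_ASSERT_BOUNDS(cond) do { (void)sizeof(cond); } while (false)
#endif

// Example use: an accessor whose check disappears in default builds.
inline double checked_get(const double* a, int n, int i) {
  MY_ASSERT_BOUNDS(0 <= i && i < n);
  return a[i];
}
```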
@markus.blatt I agree that checks such as this can be very helpful, especially for new users. This suggests that we should have an orthogonal discussion over how to best make DUNE_CHECK_BOUNDS more visible, e.g.
making doxygen generate documentation for it somehow? (no idea if that's possible)
@pipping This is not about new users making errors. I got these exceptions sometimes and am grateful for them. This is an error where a sane library that is allowed to waste a few cycles (5% in a corner case) would always throw. I do not want to rely on having to specify magical preprocessor defines for this (neither as a developer nor as a user) when there is a well established one like NDEBUG for people that need the extra bit of performance and are ready to take the danger of shooting themselves in the foot.
Let us face reality: how often do you use defines like DUNE_CHECK_BOUNDS or DUNE_ISTL_WITH_CHECKING, and what for? I am using the latter very rarely, and only for checking whether everything is implemented correctly within dune-istl, not for detecting user errors. (To be honest, I would even need to look it up in the header files first.) But I might be an exception, of course.
When do I use -DNDEBUG? Only if I am really doing productive runs and want the maximum performance. If something breaks, I recompile with -O0 -g and use a debugger.
BTW: Who reads documentation and how? Regular users mostly if something changed in DUNE and their program does not compile any more and then only the part that they need. I.e. on demand.
To sum this up (in this case and probably others): If people do not know NDEBUG then they should probably not use it anyway. But if people do not know DUNE_CHECK_BOUNDS, or DUNE_ISTL_WITH_CHECKING and something breaks then they are simply screwed.
@markus.blatt Dear Markus, allow me to quote what you and I, respectively said:
(You) [..] forces the user to use g++ -DNDEBUG to deactivate [bounds checks]. If he/she does, he/she either is a power user or masquerades as one. In any case he/she should know what he/she is doing.
(Me) I agree that checks such as this can be very helpful, especially for new users.
(You) This is not about new users making errors.
As you can see, I never said that it was solely about new users. You felt the need to differentiate between different types of users in the first time. Let's make it about all users again.
The bounds checking guard DUNE_CHECK_BOUNDS and the associated convenience macro goes back to an email I sent in January 2012 and turned into a flyspray issue the next day. You never commented on it. I don't know why. I would've appreciated additional feedback but there's not much else that I can do than writing to the dune-devel mailing list and opening a flyspray task. Do you disagree with the decisions that were made? So far you're the only one to do so. By far not every DUNE developer commented on the issue back then; but then, quite a few did. I'm surprised that you would find this issue important yet not speak up (only now, more than 4 years later, and I still feel that I had to drag you into this discussion). Did you not see it?
You're arguing that regular users (again, I'm not sure who they are) would know about NDEBUG but not DUNE_CHECK_BOUNDS because they only read documentation when something breaks. DUNE and C++ are not MATLAB -- I don't subscribe to this pessimistic view on the competence of DUNE users. I've also already mentioned before that DUNE_CHECK_BOUNDS adds value over NDEBUG, which you did not reply to.
I have two opts files: one with the bounds checks, a low optimization level, and debug symbols, and a second one for performance runs. DuMuX ships two such opts files.
Maybe we should define default options for the Debug and Release build types. That would be the natural place to add the flags as defaults. What do you think?
@pipping With digging up issues / mailing list entries from 2012 and checking who commented on them, this gets a bit too personal for my taste. Even I might have improved and learned in four years (hope is the last thing to die...). On a side note: Is bounds checking in FieldMatrix really the same as in BCRSMatrix?
You're arguing that regular users (again, I'm not sure who they are) would know about NDEBUG but not DUNE_CHECK_BOUNDS
That is not exactly what I am saying and definitely not what I mean. They know neither about NDEBUG nor about DUNE_CHECK_BOUNDS. But if something goes wrong they need to know about DUNE_CHECK_BOUNDS as the check is only executed if it is defined. In my suggestion the check is not executed if NDEBUG is defined. Therefore if you do not know NDEBUG then the check is executed.
In this particular piece of code I think using NDEBUG is safer, because in 90% of the cases I want this check. In other cases DUNE_CHECK_BOUNDS might be the weapon of choice.
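The difference between the two defaults can be seen side by side in a small sketch (function names invented; assume a plain build without -DNDEBUG or -DDUNE_CHECK_BOUNDS):

```cpp
#include <stdexcept>

// Opt-out style: the check runs by default and is removed by -DNDEBUG.
inline void check_optout(bool ok) {
#ifndef NDEBUG
  if (!ok) throw std::range_error("opt-out check failed");
#endif
  (void)ok;
}

// Opt-in style: the check runs only if -DDUNE_CHECK_BOUNDS is given.
inline void check_optin(bool ok) {
#ifdef DUNE_CHECK_BOUNDS
  if (!ok) throw std::range_error("opt-in check failed");
#endif
  (void)ok;
}
```

In a plain debug build the opt-out check fires while the opt-in check stays silent, so a user who knows neither flag still gets the opt-out check.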
@tobias.malkmus, @markus.blatt I'm not surprised that you see no difference in performance. You are arguing about the case where the exception is not thrown, i.e. about the cost of checking whether to throw an exception. If you don't actually throw it, it doesn't really matter how expensive throwing it is (though that should be taken with a grain of salt, think exceeding inlining limits).
In the case of dune-common!121 (merged) where I saw the performance impact, the exception was actually thrown (and constructed), but internally caught again before reaching user code.
Is bounds checking in FieldMatrix really the same as in BCRSMatrix?
They're different in some ways and similar in others. I'm not sure which differences you find to be relevant here.
I see a lot of value in a general bounds checking facility for DUNE and this issue falls into that category. The past was a mix of DUNE_FMatrix_WITH_CHECKING, DUNE_ISTL_WITH_CHECKING, and all kinds of different error classes. We haven't quite reached the point that I'd like us to reach yet; I'd like to use this issue as a reminder to get back on that task and add more MRs in the near future. This obviously clashes with our disagreement on strategy that should be taken here.
Here's how I'd like to proceed: Since I'm not alone in my view of how this issue should be handled yet the impression I'm getting so far is that you are, I'd like to ask you to prove me wrong, e.g. by showing that your NDEBUG route is favoured by a majority of developers (at least for this issue).
This is costing a lot of time and I doubt that it is worth it. Therefore this is my last comment. Please open a separate issue in dune-common to continue with others.
I think there is still a misunderstanding...
For the record, I do think that the ability to switch on additional checks for debugging is very useful. That is what your suggestion (DUNE_CHECK_BOUNDS, ...) is for.
But there are also cases where checks should nearly always be performed, yet for productive, performance-critical runs where we can assume the code is right, we want to deactivate them. There I propose to use NDEBUG. Here this seems to be the case.
Having no change at all would be the worst outcome of this discussion. So, I reopened !60 (closed).
I support Elias' mission to have a single general flag to enable bounds checking as I consider this feature useful. And I encourage him to push this topic further. I also understand Markus' argument, nevertheless I draw a different conclusion. I can live with the NDEBUG solution, I hope Elias will accept it, too.
I am mildly concerned about the culture of discussion. It reminds me more of German rhombus politics than of the Dune debates.
It has far less performance impact than a range check in the FieldVector.
For the FieldVector it is clear that the range check should only be used in very rare situations. In this case, however, I think the check verifying the pattern,
if (lb == j+n || *lb != i) DUNE_THROW(ISTLError,"index "<<i<<" not in compressed array");
is of value also for many people not in excessive debug mode.
I also hit this error when working with a nested BCRSMatrix. I would have had a hard time figuring out what went wrong if it were only enabled with full range checks.
Perhaps the best compromise:
we introduce #ifndef NDEBUG for 2.5
we introduce at(...) in master
once at is present, we can "downgrade" the checks in operator[] to run only when DUNE_CHECK_BOUNDS is enabled.
I don't like the 'at' since it means rewriting code to turn on/turn off checking. I like configuration flags for doing that. NDEBUG is fine with me - in fact for 2.5 I'm fine with any change that makes it possible to turn off the checks.
@christi: I'm not sure if the range check has more impact here. For FieldVector the indices are in almost all cases known at compile time, such that the compiler can eliminate checks that are always true. This is not possible in the dynamic case.
I'm also happy with any solution that helps to disable these flags. But I also appreciate @pipping's effort to do this in a structured and unified way.
I can live with either solution. Currently I can't decide which way to lean. Anyway, the CI seems to fail with Dune::ImplicitModeOverflowExhausted in test bcrsimplicitbuild for both patches, so there.
My detailed opinion: NDEBUG is great. Everybody knows about it; you don't have to look up some obscure dune-specific flags. You leave it unset in your debug build and you get most debug checks; you set it in your release builds and you get most of the performance.
But NDEBUG is a very dark-gray-or-light-gray answer. Sometimes you might need something even more extreme than NDEBUG provides for, in either direction. And other times you need something in between. This is where obscure dune-specific flags have their place: we might want a flag that has more than two states, i.e. white and black in addition to NDEBUG's grays, and make it default to whatever NDEBUG says. And we may want additional flags that override the broad policy for certain categories of checks.
Then you can simply leave NDEBUG unset for your development builds; most checks will be on, but not so many that the programs will be unbearably slow. I.e. you probably still won't do index-checking in FieldVector::operator[]. Or you set NDEBUG when you do production runs on a cluster; there will still be some checks, but nothing that typically has a big performance impact. This should fit your needs most of the time, if it doesn't you can start investigating the other flags.
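A sketch of what such a multi-state policy could look like (every macro name here is invented, and the thresholds per category are arbitrary examples):

```cpp
// A numeric check level that defaults from NDEBUG but can be
// overridden on the command line, e.g. -DCHECK_LEVEL=2.
#ifndef CHECK_LEVEL
#ifdef NDEBUG
#define CHECK_LEVEL 0   // release: checks off by default
#else
#define CHECK_LEVEL 1   // debug: cheap checks on by default
#endif
#endif

// Per-category flags derived from the broad policy, individually
// overridable. Sparse-matrix bounds checks are treated as cheap
// (level >= 1), dense hot-loop checks as expensive (level >= 2).
#ifndef CHECK_SPARSE_BOUNDS
#define CHECK_SPARSE_BOUNDS (CHECK_LEVEL >= 1)
#endif
#ifndef CHECK_DENSE_BOUNDS
#define CHECK_DENSE_BOUNDS (CHECK_LEVEL >= 2)
#endif

// Small query helpers so the effective policy can be inspected.
inline bool sparse_bounds_checked() { return CHECK_SPARSE_BOUNDS; }
inline bool dense_bounds_checked()  { return CHECK_DENSE_BOUNDS; }
```

The point of the derivation order is that the cheap category is enabled at a lower level than the expensive one, so enabling the expensive checks always implies the cheap ones.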
Question: This is a rather complicated system, do we really need it?
Answer: Probably. Not so much out of technical necessity, technically you can always modify the source locally when upstream Dune doesn't fit you (though it is a hassle). Rather we need it to have a framework for informed discussion. The problem with the current discussion is that we can use either NDEBUG or DUNE_CHECK_BOUNDS. NDEBUG is a rather blunt tool that sets the policy for everything with no way to deviate from it; DUNE_CHECK_BOUNDS, on the other hand, as far as I can tell, is totally unaffected by the policy set by NDEBUG, so you need obscure knowledge to even know that you need to use it. In addition it has the light-gray-or-dark-gray problem; this is important here because checking bounds in BCRSMatrix has very different performance implications from checking bounds in FieldMatrix. With that solution space, every approximation we pick will be bad, everyone will weight the badness in a different way, and we can keep on discussing until we are so fed up with one another that no one wants to talk anymore. And that would be a really bad thing.
Question: Will you implement it?
Answer: No. I have other things I want to get done. Maybe you have a student who needs to do a practical course?