While working on the redesign of the quadraturerules I found out that the automatically generated 1d rules are not as exact as assumed: although the points and weights are printed with 100 digits, the actual precision is only about 16-17 digits. This might be fine for double-precision calculations, but when changing the ctype to long double, Float128, or even GMP it is not sufficient.
I'm not an expert in Maxima, so I don't see the error in the script. It seems that the actual calculation is performed in double precision only and just the output is printed in high precision. @christi I think you uploaded the maxima scripts. Any ideas?
The error can be seen by simply summing up the weights: the sum should be 1, but the difference is about 1e-16, at least for rules with order >= 8.
To be more precise: the quadrature weights are of low precision.
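A minimal sketch of that check (the rule flavor and order are illustrative; the actual failing test is test-quadrature):

```c++
#include <iostream>
#include <dune/geometry/quadraturerules.hh>

int main()
{
  using ct = long double;
  const auto& rule = Dune::QuadratureRules<ct, 1>::rule(
      Dune::GeometryTypes::line, 8, Dune::QuadratureType::GaussLobatto);

  // on the unit interval the weights must sum to 1; with tables that are
  // only double-exact the error stays at ~1e-16, well above the
  // long double epsilon of ~1e-19
  ct sum = 0;
  for (const auto& qp : rule)
    sum += qp.weight();
  std::cout << "weight-sum error: " << (sum - ct(1)) << std::endl;
}
```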
Can you point out a particular rule (flavor, order) with this problem? Does the problem occur with the existing checked-in values, with your newly computed values or both?
The checked-in values are only exact up to 1e-16. I just ran test-quadrature with the long double or Float128 type to check the accuracy of the 1d rules, and it failed.
After checking the other rules again, it seems that only the gausslobatto rules are defective. The Jacobi1 and Jacobi2 rules are not tested in test-quadrature at all (it's a bit more complicated due to the non-constant weight function).
Tests for the Jacobi1 and Jacobi2 rules are nevertheless missing. There, the weights sum to the integral of the weight function over the reference element rather than to its volume, and the quadrature rule cannot be compared to the analytical solution as easily, since a different integrand is evaluated. But since the weight functions of the Jacobi rules are themselves polynomials, a simple shift in the call to the analytical solution could be implemented.
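Spelled out: for a rule with weight function w on the reference element Ω̂, the check becomes

```math
\sum_i w_i \;=\; \int_{\hat\Omega} w(x)\,\mathrm{d}x,
```

which reduces to the volume of Ω̂ only for the constant weight w ≡ 1.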
Is this worth having, or is it not really interesting to also test the Gauss-Jacobi quadrature rules?
Yes, these two rules are correct. I meant: should we integrate the test for the correct weights and points into the test-quadrature.cc implementation? Then the checkWeights() and checkQuadrature() functions need to be modified slightly to incorporate the factors for the weight functions (e.g. 4/3 in the case of jacobi2). A rough sketch:
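(The signature below is hypothetical; the real checkWeights() in test-quadrature.cc looks different. The idea is that the caller passes the integral of the weight function instead of the code assuming the reference-element volume.)

```c++
#include <cmath>
#include <iostream>
#include <limits>
#include <dune/geometry/quadraturerules.hh>

template<class ctype, int dim>
bool checkWeights(const Dune::QuadratureRule<ctype, dim>& quad,
                  ctype weightIntegral)  // e.g. ctype(4)/ctype(3) for jacobi2
{
  ctype sum = 0;
  for (const auto& qp : quad)
    sum += qp.weight();

  using std::abs;
  ctype tolerance = ctype(16) * std::numeric_limits<ctype>::epsilon();
  if (abs(sum - weightIntegral) > tolerance) {
    std::cerr << "weight sum deviates by " << (sum - weightIntegral)
              << std::endl;
    return false;
  }
  return true;
}
```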
Testing the jacobi quadrature is definitely useful, and if that means the test functions need additional weight parameters, then so be it.
I'm just wondering how to reliably incorporate high-precision tests. These do not necessarily need to check everything, but at least they should check the weights of the 1d rules. (Since they are wrong and we have no test that caught that, we need at least a sort of regression test.)
We could use long double or Float128, but those aren't really that precise.
We could use GMPField, but then the test (or GMPField) needs adjusting, because GMPField has no support for std::numeric_limits, which is used to extract the precision. (I'm not 100% sure how much sense std::numeric_limits::epsilon() makes for GMPField.)
Both Float128 and GMPField have the disadvantage that they are not necessarily available.
We could use some script to extract the values (or get them some other way) and feed them to some high-precision shell utilities such as bc or dc for checking. But we probably cannot rely on them since they are both priority optional in Debian stretch.
Considering there are no high-precision shell utilities whose presence we can rely on, it is probably best to just do the test using GMPField, but in a separate test executable so we can skip it if gmpxx was not found.
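A rough sketch of such a separate test (precision and tolerance are assumptions; the GMP arithmetic drops to the mpf_class base to sidestep the missing numeric_limits support):

```c++
#include <gmpxx.h>
#include <dune/common/gmpfield.hh>
#include <dune/geometry/quadraturerules.hh>

int main()
{
  using ct = Dune::GMPField<256>;  // 256-bit mantissa
  const auto& rule = Dune::QuadratureRules<ct, 1>::rule(
      Dune::GeometryTypes::line, 8, Dune::QuadratureType::GaussLobatto);

  ct sum = 0;
  for (const auto& qp : rule)
    sum += qp.weight();

  // 1e-30 is tight enough to catch merely double-exact tables (error
  // ~1e-16) while leaving room for rounding of correct 100-digit values
  mpf_class error = abs(mpf_class(sum) - 1);
  return (error < 1e-30) ? 0 : 1;
}
```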
The values for the 1d rules are provided with double literals only, meaning long double and Float128 get only double-precision coefficients and thus also lack the desired accuracy.
Float128 is not considered for the construction-from-string case, since numeric_limits::is_specialized == true although std::is_floating_point == false. I would suggest changing the bool test to is_floating_point, because the literal branch provides just double values and nothing else.
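To illustrate the mismatch (assuming Float128 from dune/common/quadmath.hh):

```c++
#include <limits>
#include <type_traits>
#include <dune/common/quadmath.hh>

// numeric_limits is specialized for Float128 ...
static_assert(std::numeric_limits<Dune::Float128>::is_specialized, "");
// ... but it is not a builtin floating-point type, so a dispatch on
// std::is_floating_point would route it to the string-literal branch
static_assert(!std::is_floating_point<Dune::Float128>::value, "");
```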
Also, I would suggest using at least long double literals in the 1d rules, by appending an L to the values.
Float128 can, on the other hand, also not be used in the construction-from-string implementation, since Float128 has no string constructor, only an operator>>. An idea would be to use a generic string-to-number conversion function (as I have provided in one of the quadrature-rule MRs already) instead of a simple ct(const char[]) constructor.
The rules for simplices (and maybe also for prisms) are not in high precision (yet). In my high-precision tests, I have used a loop over types that are different for tensor-product rules and simplex rules.
```c++
GMPField<n> value1(0.12345..);  // fine, but loss of precision
Float128    value2(0.12345..);  // fine, but loss of precision
long double value3(0.12345..);  // fine, but loss of precision
```
What about a generic function toNumber<ct>(const char*) or lexicalCast<ct>(const char*) or something similar?
```c++
GMPField<n> value1 = lexicalCast<GMPField<n>>("0.12345"); // fine
Float128    value2 = lexicalCast<Float128>("0.12345");    // fine
long double value3 = lexicalCast<long double>("0.12345"); // fine
```
Such a function could be specialized for user-defined types and would be provided for PODs.
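A minimal sketch, assuming the target type is readable via operator>> (the name and the error handling are open for discussion):

```c++
#include <locale>
#include <sstream>
#include <stdexcept>
#include <string>

// generic string-to-number conversion; works for PODs out of the box and
// for any user-defined type that provides a suitable operator>>
template<class T>
T lexicalCast(const char* str)
{
  std::istringstream in(str);
  in.imbue(std::locale::classic());  // parse locale-independently
  T value;
  if (!(in >> value))
    throw std::invalid_argument(std::string("lexicalCast: cannot parse '")
                                + str + "'");
  return value;
}
```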
I agree that we should probably make the numeric literals long double, and also use them for initializing long double quadrature rules (i.e. adjust the criterion that selects between the two implementations).
Float128 was introduced recently, and probably should just be extended to be constructible from a string (this holds whether or not that constructor is necessary to get quadrature rules working).
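A sketch of such a constructor, assuming Float128 wraps a __float128 member (the member name is made up; strtoflt128 is libquadmath's string-parsing function):

```c++
#include <quadmath.h>

class Float128
{
  __float128 value_;  // hypothetical member name
public:
  // construct from a decimal string with full quad precision
  explicit Float128(const char* str)
    : value_(strtoflt128(str, /*endptr=*/nullptr))
  {}
  // ... rest of the existing interface ...
};
```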
I may warm up to a generic conversion function, although it seems to be unnecessary for the types discussed here. However:
- Many existing conversion functions are locale-dependent, which we want to avoid, so do not use the name lexicalCast(). Also, do not use from_string(): while no such facility exists, to_string() does exist and is locale-dependent.
- from_chars() is locale-independent, but the interface is too complicated to be directly useful in our case. It may be useful as a backend for a conversion function, but it is so new that it is unlikely that extended-precision types would provide an overload for it.
- The parameter tree in dune-common has conversion functions that should be locale-independent, based on the standard stringstreams and str.imbue(std::locale::classic()).
I think the easiest way to fix this without getting sidetracked is to add the string constructor to Float128, fix the numeric literals to be long double, and fix the criterion selecting between string and numeric literals.
Again the same error occurs: the newly generated quadrature coefficients differ from the ones stored in the files, and the quadrature test fails with the newly generated coefficients. Was there a change in maxima that causes this new error? These maxima scripts are really frustrating.
Try the following:
1. Install maxima, e.g. `sudo apt install maxima`
2. In `dune/geometry/quadraturerules`, run `maxima --batch gausslobatto.mac`
3. Compile the test: `cmake --build build-cmake --target test-quadrature`
4. Run the test: `build-cmake/dune/geometry/test/test-quadrature`
I'm using the latest maxima 5.45 and get errors in the test.
This seems to be a bug in maxima. The current version is 5.45.1; correct values are computed by 5.38.1. Since 5.39.0 some Legendre polynomial roots are wrong.