Add analytic Jacobian to increase performance
Current understanding
Implementing an analytic expression for the residual Jacobian would significantly improve performance, because numerical differentiation requires evaluating the residual several times per Jacobian assembly. However, Bastian (2014) notes that numerical differentiation can increase stability and improve convergence, especially in the case of "non-smooth non-linearities" (whether these occur in the Richards equation is debatable).
The original description focused on artifacts in the solution gradient. However, directly interpreting this gradient as a flux is difficult, because the gradient must be reconstructed (as is usually required for DG spaces). Additionally, we expect an oscillatory error component introduced by the solvers themselves.
Given this, implementing the analytic Jacobian is likely worthwhile for performance, but probably not for precision.
Original description
Dorie uses numerical differentiation to compute the Jacobian of the residual: the local operators are derived from the NumericalJacobianXYZ classes. The PDELab developers will consider this a bug in the future. Edit: The PDELab developers consider it a bug if one of the local operators in PDELab does not implement an analytic Jacobian.
The numerical differentiation might be the cause of the 'wiggles' appearing in the solution plotted by a subsampling VTK writer. It limits the relative precision of the gradients to half the solution's precision, namely ~1e-8. These errors can become significant when values are added or subtracted due to the parameterization (e.g. in the case of the gravitational force).
There are two possible solutions:

- use automated differentiation: There will be a future routine which computes the Jacobian automatically, based on the function calls leading to the residual assembly. Edit: Feature won't be ready in the near future.
- analytic differentiation: Ideally, the local operator should assemble the analytic Jacobian. For this, the entire expression of the residual has to be differentiated.
Both solutions are expected to bring the gradient error down to the actual solution precision (double precision).
Bug reports that are probably related to this issue: