A new numerical recipe for linear algebra: Lagrange multiplier elimination

6 December 2013

by J. Pellet, EDF R&D / AMA

Iterative methods for solving linear systems (such as GCPC and PETSC in Code_Aster) usually use less memory than their so-called "direct" counterparts because they do not require factoring the matrix.

Unfortunately, several complex boundary conditions enforced through dualisation (by means of the keywords LIAISON_UNIF, LIAISON_ELEM, LIAISON_MAIL, …) often make it difficult for these iterative methods to converge.
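To make the difficulty concrete, dualising a constraint Bu = c with Lagrange multipliers λ yields a saddle-point system (notation introduced here for illustration; it does not appear in the original note):

\[
\begin{pmatrix} K & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ c \end{pmatrix}
\]

The zero diagonal block makes the assembled matrix indefinite even when the stiffness matrix K is positive definite, which is precisely what hampers conjugate-gradient-type iterative methods and their usual preconditioners.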

A new keyword (SOLVEUR / ELIM_LAGR=’OUI’) makes it possible to eliminate the Lagrange equations coming from the dualisation of the boundary conditions. For most models, this removal restores a positive-definite matrix, which eases the convergence of iterative methods.
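A minimal sketch of how the keyword might appear in a command file. The surrounding operator (MECA_STATIQUE here) and the other keywords are illustrative assumptions; only SOLVEUR / ELIM_LAGR=’OUI’ is the new ingredient described above:

```python
# Illustrative command-file fragment (config sketch, not a complete study):
resu = MECA_STATIQUE(
    MODELE=modele,
    CHAM_MATER=chmat,
    EXCIT=_F(CHARGE=charge),    # load containing dualised conditions (e.g. LIAISON_UNIF)
    SOLVEUR=_F(
        METHODE='PETSC',        # iterative solver
        PRE_COND='LDLT_INC',    # incomplete-factorisation preconditioner
        ELIM_LAGR='OUI',        # new: eliminate the Lagrange equations
    ),
)
```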

Figure 1: matrix profile before removal for test-case petsc02a
Figure 2: matrix profile after removal for test-case petsc02a
(Note the decrease in matrix size as well as the positive-only diagonal coefficients, in red.)

For instance, in three test-cases (ssls101m, sslx100b and zzzz112c), using ELIM_LAGR=’OUI’ makes the PETSC solver usable (in combination with the ’LDLT_INC’ preconditioner). Without Lagrange elimination, the iterative solver fails to converge.

This new numerical recipe is also available for modal analysis, through the new operator ELIM_LAGR which performs Lagrange elimination on an already assembled matrix. In general, it decreases the relative error of the computed eigenvectors. It may also simplify user input (less tweaking of the CALC_FREQ keyword).
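The principle can be illustrated on a toy problem (plain NumPy sketch, not Code_Aster's actual implementation): dualising the constraint u0 = u1 with one multiplier makes the assembled matrix indefinite, while restricting the stiffness matrix to the null space of the constraint yields a smaller matrix that is positive definite again:

```python
import numpy as np

# 3-DOF stiffness matrix (symmetric positive definite).
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

# Constraint B u = 0 enforcing u0 = u1, dualised with one Lagrange multiplier.
B = np.array([[1., -1., 0.]])

# Assembled (dualised) matrix: indefinite because of the zero diagonal block.
A = np.block([[K, B.T], [B, np.zeros((1, 1))]])
assert np.linalg.eigvalsh(A).min() < 0       # at least one negative eigenvalue

# Elimination: restrict K to the null space of B (columns of Z span ker B).
Z = np.array([[1., 0.],
              [1., 0.],
              [0., 1.]])
K_red = Z.T @ K @ Z
assert np.linalg.eigvalsh(K_red).min() > 0   # reduced matrix is SPD again
print(np.linalg.eigvalsh(K_red))
```

The eigenvalues of the reduced matrix are those of the constrained structure, computed from a smaller, definite system, which is what eases both convergence and eigenvector accuracy.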

Figure 3: matrix profile before removal for test-case ssls101m
Figure 4: matrix profile after removal for test-case ssls101m