#1 2021-10-14 11:45:35

ing.nicola
Member
Registered: 2017-12-11
Posts: 130

AFFE_CHAR_MECA(_F) mpi behavior

Can I benefit from MPI resolution for AFFE_CHAR_MECA_F?
I have set the DISTRIBUTION keyword in AFFE_MODELE (METIS, GROUP_ELEM, ...), but it seems to me that it takes longer when running in parallel.
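
Roughly what I mean (a minimal sketch, not my real command file; the values are just the ones mentioned above):

    # Sketch only: assign the model and ask for a distribution of the
    # elementary computations over the MPI processes.
    mod = AFFE_MODELE(
        MAILLAGE=mesh,
        AFFE=_F(TOUT='OUI',
                PHENOMENE='MECANIQUE',
                MODELISATION='3D'),
        # also tried METHODE='SOUS_DOMAINE' with the METIS partitioner
        DISTRIBUTION=_F(METHODE='GROUP_ELEM'),
    )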

Last edited by ing.nicola (2021-10-14 11:45:55)


#2 2021-10-14 12:05:45

mf
Member
Registered: 2019-06-18
Posts: 264

Re: AFFE_CHAR_MECA(_F) mpi behavior

Hello,

Sure, that should be possible. But if your problem is small (meaning not many DOFs), there might be little to no benefit: the communication between the processes would eat up a lot of CPU time (the 'bars' in htop would be largely red). Also, Code_Aster tells you in the .mess file when 'system time' is high (I don't remember the exact wording), which means a lot of communication and less 'real' calculation time (just like in real life, if you talk a lot, there's less time for work :-) ).

https://www.code-aster.org/V2/doc/v15/en/man_u/u2/u2.08.06.pdf

If you look at htop while the problem is running: is 'Load average' larger than your core count (e.g. 12.3 while you have 8 cores)? Then you have oversubscribed your CPU (too many waiting processes).

How many cores do you have and what numbers did you set in Number_of_MPI_CPU/Number_of_threads?
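
(Rule of thumb, assuming nothing else heavy is running: mpi_nbcpu x ncpus should not exceed the number of physical cores. For example, mpi_nbcpu=6 with ncpus=2 launches 6 x 2 = 12 workers, which already oversubscribes an 8-core machine and pushes the Load average above 8.)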

Mario.

Last edited by mf (2021-10-14 12:12:16)


Attachments:
Bildschirmfoto vom 2021-10-14 13-00-54.png, Size: 119.68 KiB, Downloads: 6


#3 2021-10-14 12:16:36

ing.nicola
Member
Registered: 2017-12-11
Posts: 130

Re: AFFE_CHAR_MECA(_F) mpi behavior

I'm running a MECA_STATIQUE problem with 2.4M DOFs.
The machine has 6 physical cores.

I tried the following combinations (see the .export sketch at the end of this post):

mpi_nbcpu (MPI) = 3      ncpus (OpenMP) = 2
mpi_nbcpu (MPI) = 6      ncpus (OpenMP) = 1
mpi_nbcpu (MPI) = 6      ncpus (OpenMP) = 2
mpi_nbcpu (MPI) = 4      ncpus (OpenMP) = 3
mpi_nbcpu (MPI) = 4      ncpus (OpenMP) = 2

and various DISTRIBUTION options,

but nothing: the standard resolution takes 17 min, the MPI one 30 min. It spends its time in AFFE_CHAR_MECA and AFFE_CHAR_MECA_F.
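
For reference, settings like the first combination would typically appear in the .export file as follows (a sketch of the parallelism parameters only; the real file contains many more lines):

    P mpi_nbcpu 3
    P ncpus 2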


#4 2021-10-14 12:36:39

mf
Member
Registered: 2019-06-18
Posts: 264

Re: AFFE_CHAR_MECA(_F) mpi behavior

Hello,

The first option should be the quickest (3 MPI, 2 OMP). The last four are too much for a 6-core CPU (with those, the Load average in htop goes above 6). Is something else with high demands running?

Personally I use SOUS_DOMAINE and METIS in AFFE_MODELE/DISTRIBUTION.

In STAT_NON_LINE/SOLVEUR did you activate MATR_DISTRIBUEE = 'OUI'?
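
Roughly, the two settings together look like this (a minimal sketch from memory; please check the AFFE_MODELE and SOLVEUR keyword documentation before copying):

    # Sketch only: partition the model into sub-domains with METIS ...
    mod = AFFE_MODELE(
        MAILLAGE=mesh,
        AFFE=_F(TOUT='OUI',
                PHENOMENE='MECANIQUE',
                MODELISATION='3D'),
        DISTRIBUTION=_F(METHODE='SOUS_DOMAINE',
                        PARTITIONNEUR='METIS'),
    )

    # ... and let MUMPS keep the assembled matrix distributed over the
    # MPI processes (same SOLVEUR keyword in MECA_STATIQUE or STAT_NON_LINE).
    resu = STAT_NON_LINE(
        MODELE=mod,
        CHAM_MATER=mater,
        EXCIT=_F(CHARGE=load),
        COMPORTEMENT=_F(RELATION='ELAS'),
        INCREMENT=_F(LIST_INST=times),
        SOLVEUR=_F(METHODE='MUMPS',
                   MATR_DISTRIBUEE='OUI'),
    )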

Turning Hyperthreading off could also help (turn off in BIOS).

M.

Last edited by mf (2021-10-14 16:18:59)


#5 2021-10-15 18:17:08

ing.nicola
Member
Registered: 2017-12-11
Posts: 130

Re: AFFE_CHAR_MECA(_F) mpi behavior

No luck. For linear problems MUMPS works better in sequential mode at 2.4M DOFs. Good results were obtained with OpenMP=6: it cut about 10% off the time.
For non-linear problems MPI is essential: over 50% less time.

Working with Hyperthreading... not a good idea!
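
(For reference: the OpenMP=6 run corresponds, if set the usual way via the .export file, to mpi_nbcpu 1 and ncpus 6, i.e. one MPI process with six OpenMP threads.)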

Last edited by ing.nicola (2021-10-15 19:44:25)
