Can I benefit from MPI resolution for AFFE_CHAR_MECA_F?
I have set the DISTRIBUTION keyword in AFFE_MODELE (METIS, GROUP_ELEM, ...), but it seems to me that the run takes longer in parallel...
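Roughly, the relevant part of my command file looks like this (a sketch only; the mesh name 'mail' and the plain 3D modelling are placeholders):

    modele = AFFE_MODELE(
        MAILLAGE=mail,
        AFFE=_F(TOUT='OUI',
                PHENOMENE='MECANIQUE',
                MODELISATION='3D',),
        # distribute the elementary computations (AFFE_CHAR_MECA_F, assembly, ...)
        # over the MPI processes
        DISTRIBUTION=_F(METHODE='GROUP_ELEM',),
    )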
Last edited by ing.nicola (2021-10-14 11:45:55)
Hello,
sure, that should be possible. But if your problem is small (meaning not many DOFs), there may be little to no benefit: the communication between the processes eats up a lot of CPU (the 'bars' in htop would be red for a large percentage). Code_Aster also tells you in the .mess file when 'system time' is high (I don't remember the exact wording); that means a lot of communication and less 'real' calculation time (just like in real life, if you talk a lot, there's less time for work :-) ).
https://www.code-aster.org/V2/doc/v15/en/man_u/u2/u2.08.06.pdf
If you look at htop while the job is running: is 'Load average' larger than your core count, e.g. 12.3 with 8 cores? Then you have oversubscribed your CPU (too many waiting processes).
How many cores do you have and what numbers did you set in Number_of_MPI_CPU/Number_of_threads?
Mario.
Last edited by mf (2021-10-14 12:12:16)
I'm running a MECA_STATIQUE problem with 2.4M DOFs.
The machine has 6 physical cores.
I tried:
mpi_nbcpu (MPI) = 3, ncpus (OpenMP) = 2
mpi_nbcpu (MPI) = 6, ncpus (OpenMP) = 1
mpi_nbcpu (MPI) = 6, ncpus (OpenMP) = 2
mpi_nbcpu (MPI) = 4, ncpus (OpenMP) = 3
mpi_nbcpu (MPI) = 4, ncpus (OpenMP) = 2
and various DISTRIBUTION options,
but nothing: the standard resolution takes 17 min, the MPI one 30 min. It spends the time in AFFE_CHAR_MECA and AFFE_CHAR_MECA_F.
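For the first combination, for example, the parallel settings in the .export file look roughly like this (a sketch; the memory and time values are placeholders):

    P mpi_nbcpu 3
    P ncpus 2
    P memory_limit 16000
    P time_limit 3600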
Hello,
The first option should be the quickest (3 MPI, 2 OMP). The last four are too much for a 6-core CPU (the Load average there is >6 in htop). Is something else with high demands running?
Personally I use SOUS_DOMAINE and METIS in AFFE_MODELE/DISTRIBUTION.
In STAT_NON_LINE/SOLVEUR did you activate MATR_DISTRIBUEE = 'OUI'?
Turning Hyperthreading off could also help (turn off in BIOS).
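Roughly, the two keywords I mean, as a sketch (names like 'mail', 'chmat', 'char' are placeholders; I use MECA_STATIQUE here since that is your case, the SOLVEUR keyword is the same in STAT_NON_LINE, and the exact DISTRIBUTION syntax may differ between versions):

    modele = AFFE_MODELE(
        MAILLAGE=mail,
        AFFE=_F(TOUT='OUI', PHENOMENE='MECANIQUE', MODELISATION='3D',),
        # one element sub-domain per MPI process, partitioned by METIS
        DISTRIBUTION=_F(METHODE='SOUS_DOMAINE',
                        PARTITIONNEUR='METIS',),
    )

    resu = MECA_STATIQUE(
        MODELE=modele,
        CHAM_MATER=chmat,
        EXCIT=_F(CHARGE=char,),
        # MUMPS with a distributed matrix: each MPI process keeps only its own
        # blocks, which saves memory and communication
        SOLVEUR=_F(METHODE='MUMPS',
                   MATR_DISTRIBUEE='OUI',),
    )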
M.
Last edited by mf (2021-10-14 16:18:59)
No luck. For linear problems, MUMPS works better in sequential mode with 2.4M DOFs. Good results were obtained with OpenMP=6: it cut about 10% of the time.
For nonlinear problems, MPI is essential: more than 50% less time.
Working with Hyperthreading... not a good idea!
Last edited by ing.nicola (2021-10-15 19:44:25)