Hello,
I am looking for a way to speed up FRF calculations of a 3D model, using something like:
MACRO_MATR_ASSE(MODELE=FEM,
                CHAM_MATER=Mat,
                CHARGE=(Force, BCnd,),
                NUME_DDL=CO('NUM'),
                MATR_ASSE=(_F(MATRICE=CO('MATASSR'),
                              OPTION='RIGI_MECA',),
                           _F(MATRICE=CO('MATASSM'),
                              OPTION='MASS_MECA',),
                           _F(MATRICE=CO('MATDAMP'),
                              OPTION='AMOR_MECA',),),
                INFO=1,);

ForceVec=CALC_VECT_ELEM(OPTION='CHAR_MECA',
                        CHARGE=Force,);

ForceAss=ASSE_VECTEUR(VECT_ELEM=ForceVec,
                      NUME_DDL=NUM,
                      INFO=1,);

ListFreq=DEFI_LIST_REEL(DEBUT=300.0,
                        INTERVALLE=_F(JUSQU_A=1000.0,
                                      PAS=10.0,),
                        INFO=1,);

Sol=DYNA_LINE_HARM(MATR_MASS=MATASSM,
                   MATR_RIGI=MATASSR,
                   MATR_AMOR=MATDAMP,
                   LIST_FREQ=ListFreq,
                   # SOLVEUR=_F(METHODE='MUMPS',PCENT_PIVOT=10,RENUM='METIS',),
                   EXCIT=_F(VECT_ASSE=ForceAss,
                            COEF_MULT=2.0,),);
which on a single core takes quite a long time for a medium-sized problem. How can I speed up the process by taking advantage of multiple cores (OpenMP) or multiple PCs (Open MPI)? I tried looking for examples in astest/*.comm but did not find any that seemed to help. Apparently my attempt at using MUMPS does not work correctly when I set mpi_nbcpu>1 and mpi_nbnoeud>1. Any suggestions or hints would be very much appreciated. Thank you.
Regards, JMB
Hi,
What is the size of the system?
TdS
Thomas DE SOZA wrote: What is the size of the system?
Hello TdS,
By your question I presume you are asking about either:
a) the size of the hardware: a 2-PC cluster (dual-core each), or
b) the size of the problem (study): unknown at the moment, I am just testing with small models.
I hope I have understood your question correctly and answered what you were looking for.
Regards, JMB
Hello, any suggestions? Regards, JMB
Hi,
JMB365 wrote: which on a single core takes quite a long time for a medium-sized problem.
From the excerpt above, it seems you are solving for 71 frequencies (300 Hz to 1000 Hz in steps of 10 Hz). So there are going to be 71 complex linear system factorizations and solves which, depending on the size of the linear system, might cost a lot.
To speed up the computation time, you may first want to look at the time needed for only one frequency and try to reduce that. Indeed, computing n frequencies will take roughly n times the time needed for one frequency (there is nothing that can be done about that).
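For instance, to time a single solve, something like the following sketch could be used (untested, reusing the names from your command file; the 500 Hz test value is arbitrary):

SolTest=DYNA_LINE_HARM(MATR_MASS=MATASSM,
                       MATR_RIGI=MATASSR,
                       MATR_AMOR=MATDAMP,
                       # a single test frequency instead of the full list
                       FREQ=500.0,
                       EXCIT=_F(VECT_ASSE=ForceAss,
                                COEF_MULT=2.0,),);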
To speed up the computation time for one solve:
- you might want to look at OpenMP for the MULT_FRONT solver. It seems you tried that and it did not prove successful. This can happen, since this type of parallelism is highly problem-dependent.
- the other way is to use MUMPS as the solver and increase the number of MPI processes used to do the job (see the sketch below).
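As a rough sketch of the MUMPS route (untested; adapt the keywords to your version, and note that the number of MPI processes is requested in the job parameters, not in the command file):

Sol=DYNA_LINE_HARM(MATR_MASS=MATASSM,
                   MATR_RIGI=MATASSR,
                   MATR_AMOR=MATDAMP,
                   LIST_FREQ=ListFreq,
                   # MUMPS distributes the factorization over the MPI
                   # processes requested in the .export file (mpi_nbcpu,
                   # mpi_nbnoeud); raise PCENT_PIVOT if MUMPS runs out
                   # of pivot memory
                   SOLVEUR=_F(METHODE='MUMPS',
                              PCENT_PIVOT=10,
                              RENUM='METIS',),
                   EXCIT=_F(VECT_ASSE=ForceAss,
                            COEF_MULT=2.0,),);

With METHODE='MULT_FRONT' the parallelism comes instead from OpenMP threads, typically requested through the ncpus job parameter.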
TdS
Thomas DE SOZA wrote: the other way is to use MUMPS as the solver and increase the number of MPI processes used to do the job.
Hello TdS,
That is what I was attempting with SOLVEUR=_F(METHODE='MUMPS',PCENT_PIVOT=10,RENUM='METIS',), but I was not sure whether that was the right way to do it. It did not reduce the time; in fact it multiplied it, roughly in proportion to the number of parallel cores the solver was asked to use. Hence my request for help. Thanks.
Regards, JMB
JMB365 wrote: It did not reduce the time; in fact it multiplied it, roughly in proportion to the number of parallel cores the solver was asked to use.
This is strange. Post the files corresponding to:
- 1 frequency, MUMPS, 1 process
- 1 frequency, MUMPS, 2 processes
- 1 frequency, MUMPS, 4 processes
- 1 frequency, MUMPS, 8 processes
By files I mean the message files, so we can know the exact timings as well as the linear system sizes.
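For each run, the number of MPI processes is set in the .export file of the study (or through the astk interface), not in the command file. A sketch, assuming the usual astk parameter names:

P mpi_nbcpu 2
P mpi_nbnoeud 1
P ncpus 1

Change mpi_nbcpu to 1, 2, 4 and 8 for the four runs, and set mpi_nbnoeud to the number of machines the processes are spread over.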
TdS