Hello,
first of all, I really like the new way of compiling the MPI version. It's very handy for someone like me who is not a software guru, just a user.
Coming from post ID=26569: I noticed that I have to set MATR_DISTRIBUEE='NON', because 'OUI' is not permitted in contact problems in 16.4. I don't understand this, because in 15.4 it clearly was permitted, and in the example I ran today distributing the matrix would clearly be necessary. In this example, which I cannot share, the required amount of memory is clearly ABOVE the memory attached to one of the two CPUs (a 2-CPU system). So my suspicion is: the 'NON' is ignored, otherwise this example would not run. Am I wrong?
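For reference, a minimal sketch of the solver block I am referring to (the concept names MODE, MATE, CHAR, CONT and LINST are placeholders, not my actual file):

RESU = STAT_NON_LINE(MODELE=MODE,
                     CHAM_MATER=MATE,
                     EXCIT=_F(CHARGE=CHAR),
                     CONTACT=CONT,
                     INCREMENT=_F(LIST_INST=LINST),
                     # 16.4 forces 'NON' with contact; 15.4 also accepted 'OUI'
                     SOLVEUR=_F(METHODE='MUMPS',
                                MATR_DISTRIBUEE='NON'))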
I ran the same example on the same machine with the same OS (Ubuntu 22.04 LTS), once with 15.4_MPI and once with 16.4_MPI.
The new version does not cause any problems, but it is 12% slower. Of course there may be other updates that influence this which I do not know about; this is just an observation.
Mario.
MATR_DISTRIBUEE='OUI' relies on some assumptions about the numbering. These assumptions are not necessarily verified with contact, so it was decided to forbid it in order to avoid wrong results.
The new HPC algorithms being developed over the coming years will deal with this problem.
Regarding memory, the VMPeak is sometimes not measured accurately.
We have not noticed any performance degradation on our performance benchmarks between v15 and v16. I assume the difference here comes from MATR_DISTRIBUEE.
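If you want to cross-check the VMPeak figure yourself, here is a minimal sketch (Linux only; it reads the kernel's own accounting from /proc, and the function name is just illustrative):

# report_vm.py - print the peak virtual memory and current resident set
# of a process, as accounted by the Linux kernel in /proc/<pid>/status.
import pathlib

def report_vm(pid="self"):
    for line in pathlib.Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith(("VmPeak:", "VmRSS:")):
            print(line)

report_vm()        # this process
# report_vm(12345) # or any PID of a running code_aster MPI rank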
Hello,
so 'NON' only means that the data of a certain part of the model may also reside in the memory of the other CPU, so it first has to be moved to its core (over the interconnect between the CPUs)?
It doesn't mean I can only use half of my memory, does it? That is what I observed when I watched the memory with htop.
Thanks, that's quite interesting,
Mario.
Yes, 'NON' means that the assembled matrix is stored entirely on each CPU.
Yes, there is some MPI communication to share the matrix.
Not necessarily; you can use more than half of your memory. The factorization of the matrix with MUMPS, which consumes most of the memory, is distributed between the CPUs.
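If you want to see the distribution, here is a minimal per-rank sketch, assuming mpi4py is installed (run with e.g. mpiexec -n 2 python rank_mem.py; the file name is illustrative):

# rank_mem.py - print each MPI rank's peak virtual memory so you can
# check that the MUMPS factorization memory is spread across ranks.
from mpi4py import MPI

def vm_peak_kb():
    # Peak virtual memory of this process, from Linux /proc accounting.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmPeak:"):
                return int(line.split()[1])
    return 0

rank = MPI.COMM_WORLD.Get_rank()
print(f"rank {rank}: VmPeak = {vm_peak_kb()} kB")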
Phew, I am glad.
Thanks for your quick answers. Very much appreciated!
Mario.