OK, the 14.4 version (sequential and parallel) is now also available as a Docker image; see the GitHub page.
quay.io/tianyikillua/code_aster
There are still some issues with PETSc.
The test cases also need to be re-run; if someone could report the results on GitHub, that would be useful.
The old 13.6 version is available as quay.io/tianyikillua/code_aster:v13
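For readers new to Docker, a minimal sketch of pulling and starting the image (assuming Docker is already installed; the shared-folder path matches the run command quoted later in this thread):
# pull the 14.4 image from quay.io
docker pull quay.io/tianyikillua/code_aster
# start an interactive container, sharing the current directory with it
docker run -it --rm -v $(pwd):/home/aster/shared -w /home/aster/shared quay.io/tianyikillua/code_aster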
Last edited by tianyikillua (2019-10-10 17:16:57)
You mention:
14.4 version (sequential and parallel)
Is the parallel version an MPI or an OpenMP version?
MPI + OpenMP
Dear Tianyi,
first of all, thank you for this, this is really great stuff. I tried to compile the parallel version myself and did not succeed (I finally gave up; none of the online tutorials is 100% correct), so this is a way for all users who are not software engineers to use the MPI version. Under Linux it seems to be VERY fast.
However, I have one question: can I use this Docker image on more than one node (the nodes are connected via InfiniBand), or are modifications to the image necessary? I cannot test this at the moment, hence my question. There is also something called 'Docker Swarm'; could that be a way to run it on more than one node? As you may have guessed, I am not a software engineer, so these might be dumb questions...
Thank you anyway,
Mario.
One of the things we're looking at is using this with Singularity instead of Docker (though we're still having issues building the container), since Singularity should be more appropriate for HPC than Docker.
Once we've been able to build the container we will test and report back on our results.
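For what it is worth, one common way to reuse a Docker image with Singularity is to pull it straight from the registry; a sketch, assuming a recent Singularity installation, and untested with this particular image:
# build a Singularity image file (SIF) directly from the Docker registry
singularity pull code_aster.sif docker://quay.io/tianyikillua/code_aster
# open a shell in the container, binding the current directory to the image's shared folder
singularity shell --bind $(pwd):/home/aster/shared code_aster.sif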
Hi,
I'm trying to run cases in parallel, and I would like to ask for the correct configuration of the "mpi_nbcpu" and "ncpus" variables for a multi-core CPU with a single socket. I have an Intel i7-8700K; it has 6 cores with 2 threads per core (12 threads total) on one socket.
I've successfully installed Docker following tianyikillua's instructions on GitHub, and all tests run OK in both parallel and sequential mode.
I've been testing with cases that need approximately 8 hours to compute on a single core with MUMPS. So far, the best configuration tested is "mpi_nbcpu = 3" and "ncpus = 2", which brings the total computation time down to approximately 6 hours with MUMPS.
Does anyone have other thoughts about this? Perhaps the limit is the single socket?
Thank you in advance,
Alejandro
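For reference, mpi_nbcpu and ncpus are normally set in the ASTK/.export file of the study; a sketch of the relevant lines with Alejandro's values (mpi_nbnoeud is assumed to be 1 for a single machine, and the rest of the file is omitted):
P mpi_nbcpu 3
P mpi_nbnoeud 1
P ncpus 2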
Hello,
you are right, this does not seem like a huge gain. From my experience, your numbers are good for this CPU (mpi_nbcpu=3 / ncpus=2). Turning Hyper-Threading off also helps (your tasks do not get interrupted as often by other tasks with HT off). It also depends on the problem itself; I am not sure now, but I think I once had a thermal-only calculation where the benefit from MPI was also very low. From my experience with larger problems (5-6 million DOFs), doubling the number of cores makes the run roughly 1.5 times faster (but uses more RAM overall). If the problem has too few DOFs, the benefit will also be smaller. Other options exist, but I don't think they will help much unless your problem is really big or runs on more than one CPU:
-) MATR_DISTRIBUEE='OUI' in STAT_NON_LINE
-) DISTRIBUTION=_F(METHODE='SOUS_DOMAINE', PARTITIONNEUR='METIS'), or similar, in AFFE_MODELE (see the sketch below).
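Putting the two suggestions together, a hedged sketch of how they could appear in a command file; the solver block assumes MUMPS, and the mesh, material, load and time-list names (MA, CHMAT, CHAR, LINST) are placeholders:
# hypothetical excerpt of a command file, only the parallel-related keywords matter here
MO = AFFE_MODELE(MAILLAGE=MA,
                 AFFE=_F(TOUT='OUI', PHENOMENE='MECANIQUE', MODELISATION='3D'),
                 # distribute elements over the MPI processes with METIS
                 DISTRIBUTION=_F(METHODE='SOUS_DOMAINE', PARTITIONNEUR='METIS'))

RESU = STAT_NON_LINE(MODELE=MO,
                     CHAM_MATER=CHMAT,
                     EXCIT=_F(CHARGE=CHAR),
                     # distribute the assembled matrix over the MPI processes
                     SOLVEUR=_F(METHODE='MUMPS', MATR_DISTRIBUEE='OUI'),
                     INCREMENT=_F(LIST_INST=LINST))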
Take care that your calculation does not run OUT_OF_CORE; it will be very slow then (this happens when RAM is insufficient).
https://www.code-aster.org/doc/default/en/man_u/u2/u2.08.06.pdf
Mario.
Last edited by mf (2021-02-11 20:17:51)
Great Mario, thank you for your reply.
I will try those options in AFFE_MODELE, and see what happens.
Kind regards,
Alejandro
Hello again,
something else just came to my mind: if your CPU is strongly affected by the Spectre and Meltdown mitigations, and security is not an issue (perhaps you are behind a strong and up-to-date firewall), you may think about turning these mitigations OFF. It may also boost your performance a bit (it depends on the CPU; some tasks are 10-50% slower with the mitigations ON). This is especially easy under Linux; with Windows I do not know how to turn them off.
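On Linux this is typically done with a kernel boot parameter; a sketch for a Debian/Ubuntu-style system using GRUB (the existing options on that line are assumed to be "quiet splash", so keep whatever is already there and only append mitigations=off, and only if you accept the security trade-off):
# in /etc/default/grub, append mitigations=off to the existing kernel options
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
# then regenerate the GRUB configuration and reboot
sudo update-grub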
Mario.
Thanks for the help, Mario. Things are working OK now. I had a mechanical problem that needed 3 hours and 8 minutes for the solution when running on a single core of my CPU.
In parallel, things are much better:
* When using docker, and configuring (mpi_nbcpu=3/ ncpus=2), 57 minutes are needed for the solution.
* When using docker, configuring (mpi_nbcpu=3/ ncpus=2), and following your advice on (MATR_DISTRIBUEE='OUI' in STAT_NON_LINE) and (PARTITIONNEUR='METIS' in AFFE_MODELE.), 47 minutes are needed for the solution.
In my opinion, the scalability level in this parallel problem is great. Of course, the "mpi_nbcpu" and "ncpus" parameters depend on the CPU type. Cheers,
Alejandro
Could you please remove the left and right brackets?
docker run -ti --rm -e DISPLAY=192.168.44.176:0 -v %cd%:/home/aster/shared -w /home/aster/shared quay.io/tianyikillua/code_aster
Great work Tianyi!
I have the same problem when running astk. I am running Ubuntu 20 over a wired connection. I tried different IPs but cannot get the visual astk running. I get this error:
application-specific initialization failed: couldn't connect to display "my ip address:0"
Error in startup script: invalid command name "wm"
while executing
"wm withdraw ."
If I export DISPLAY to an IP as in praful's case (192.168.44.176), the astk command runs without error, but no astk window is shown (see attached).
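On a Linux host, a pattern that often avoids the IP-based DISPLAY problem is to share the host X11 socket with the container; a sketch, untested with this image, and note that it relaxes X access control:
# allow local containers to connect to the X server
xhost +local:
# reuse the host DISPLAY and mount the X11 socket into the container
docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $(pwd):/home/aster/shared -w /home/aster/shared quay.io/tianyikillua/code_aster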
Dear Tianyi!
I don't like to pose this question, but are there any plans to update your Docker image to the upcoming stable 15.2 (?) version of Code_Aster? :-) What are the chances? :-)
Don't get me wrong, in the meantime I managed to compile and build 14.6 with MPI and PETSc, but only now do I understand how difficult creating a Docker container really is. So I gave up on building one myself for now.
Mario.
Last edited by mf (2021-05-03 13:04:06)
The developers of Code_Aster
Dear AsterO'dactyle,
I know about that; I participated a little bit (testing). It is still Salome-Meca, but I know that plans for a Code_Aster MPI version in a Singularity container are on the horizon. What I do not know is how far along they are. Salome-Meca itself already runs flawlessly in Singularity, though.
Mario.
Good work Tianyi!
I'm having the same DISPLAY problem as above. Any clues?
Also, when I try to run the ./run_tests.sh command, it says 'Permission denied'.
Thanks for any insight!
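The 'Permission denied' message usually just means the script has lost its executable bit; a likely fix, assuming you are in the directory that contains the script:
# restore the executable bit, then run the test script again
chmod +x run_tests.sh
./run_tests.sh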
Hi, with the files attached, I built the 14.6 MPI version with PETSc. I'm currently testing it.
I have performed the PETSc test cases; only 3 tests fail and 1 exits with an alarm.
If you want, I will share the built image after testing.
Some files need to be downloaded manually and placed in the "data" folder.
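For anyone trying this, building from such a set of files is usually just a docker build from the directory that holds the Dockerfile; a sketch, with an illustrative image tag and assuming the manually downloaded archives are already in ./data:
# build the 14.6 MPI + PETSc image from the attached Dockerfile
docker build -t code_aster:14.6-mpi .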
Last edited by ing.nicola (2021-08-14 14:31:25)
@ing.nicola thanks for this updated version (the 14.4 one no longer works).
I think there might be an issue with the last but one line in your Dockerfile:
RUN sed -i "s/14.4/...
There seems to be something missing with this command. Do you have an updated version of the file? (I couldn't paste the whole line, the forum was blocking it for some reason).
I've merged your Dockerfile with Tianyi's so that everything is built in one image, rather than 'fixing' a version that is already built.
This can be found here:
github.com/IBSim/VirtualLab/blob/main/modules/Dockerfile_Aster_v14_6
It hasn't been tested heavily yet, but we think it mostly works. There might be an issue with MFront (not yet investigated).
Feedback welcomed.
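A hedged sketch of fetching and building that merged Dockerfile (assuming the file is still at that path in the repository, that no extra local files are needed in the build context, and with an illustrative tag):
# download the raw Dockerfile from the VirtualLab repository and build it
wget https://raw.githubusercontent.com/IBSim/VirtualLab/main/modules/Dockerfile_Aster_v14_6
docker build -f Dockerfile_Aster_v14_6 -t code_aster:14.6 .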