Hi everyone,
I'm new to installing code_aster, but not to Linux. For the time being I'm using code_aster on Windows with Salome-Meca, and I'm pretty happy with the results. I would like to install code_aster on our on-premises cluster (1 master and 4 slaves running CentOS 8), but I find it somewhat difficult as there are a lot of dependencies to be compiled separately, some of them with MPI. The objective is to have an up-to-date (or as recent as possible) release of code_aster running in parallel through Slurm on a CentOS machine. All our software is installed in a /soft/ directory that is shared with all the nodes, so most of the libraries should be compiled and installed under this folder for code_aster to resolve its dependencies when launched on the nodes.
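To make it concrete, what I have in mind is an environment file sourced on every node, with all the prerequisites installed under the shared /soft tree. A rough sketch of it (the /soft/aster/... paths are just my own naming, nothing official):

    # prerequisites installed under the shared prefix, visible from all nodes
    export ASTER_PREREQ=/soft/aster/prereq
    export PATH=$ASTER_PREREQ/bin:$PATH
    export LD_LIBRARY_PATH=$ASTER_PREREQ/lib:$LD_LIBRARY_PATH
    export PYTHONPATH=$ASTER_PREREQ/lib/python3/site-packages:$PYTHONPATH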
Could somebody help me get started with this? Is anyone else in this situation?
For starters, I would need to understand which dependencies should be compiled with Open MPI and which not.
Thanks in advance.
Regards,
Sebastian
This is not an easy task and there is no documentation for it.
These prerequisites need MPI (for version 16.1), as shown in the example after the list:
- hdf5
- med
- mpi4py
- medcoupling
- parmetis, ptscotch
- scalapack
- petsc
- mumps
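As an illustration only (the version number, prefix and options below are placeholders to show the idea), building one of them, hdf5, against the MPI compiler wrappers looks like this:

    # example: parallel hdf5 installed under the shared prefix
    tar xf hdf5-1.10.x.tar.gz && cd hdf5-1.10.x
    CC=mpicc ./configure --prefix=/soft/aster/prereq/hdf5 --enable-parallel
    make -j 8
    make install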
Inside the Singularity container of code_aster, you can see the versions of the prerequisites that are used. In the shell:
"cat /opt/public/scibian9_mpi.sh"
Hi Nicolas,
Thanks for your quick answer. I will study your list in detail.
I understand that it's not easy and I am ready to face the challenge. I suppose that universities or companies running a cluster with code_aster face this problem, especially EDF, so I suppose (and hope) that others have already achieved this task.
Concerning the list, is there a build order to follow or not?
Thanks again.
Regards,
Sebastian
Of course, this is possible. It works at EDF, but you often need specific build scripts for your own cluster (I advise you to write them so that you can easily update the prerequisites later).
There are some dependencies between them, of course. The order is something like this (a build skeleton follows the list):
- hdf5
- med (hdf5)
- mpi4py
- parmetis, ptscotch
- medcoupling (hdf5, med, mpi4py)
- scalapack
- mumps (scalapack, parmetis, ptscotch)
- petsc (mumps, scalapack, parmetis, ptscotch, hypre, ml, superlu)
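As a rough skeleton only (the prefix and the per-package build_*.sh scripts are placeholders; take the real versions and configure options from the container's scibian9_mpi.sh):

    #!/bin/bash
    set -e
    PREFIX=/soft/aster/prereq              # shared prefix visible from all nodes
    export CC=mpicc CXX=mpicxx FC=mpif90   # MPI compiler wrappers
    # one small script per prerequisite, called in dependency order
    for step in hdf5 med mpi4py parmetis ptscotch medcoupling scalapack mumps petsc; do
        ./build_${step}.sh "$PREFIX"
    done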