Hi,
I installed the salome_meca 2021 container image with Singularity on my Arch Linux machine. So far it worked without any issue and I was able to use it, but now, possibly after a system upgrade, salome_meca is unable to start. Here is what happens and what I know about the issue:
1. When I run it in GPU mode, I get the GLIBC_2.34 error message:
$./salome_meca-lgpl-2021.0.0-2-20211014-scibian-9
/usr/bin/nvidia-smi
**************************************
INFO : Running salome_meca in GPU mode
**************************************
runSalome running on digvijay-gl552vx
Searching for a free port for naming service: 2812 - OK
Searching Naming Service + found in 0.1 seconds
Searching /Kernel/Session in Naming Service ++++SALOME_Session_Server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLX.so.0)
Traceback (most recent call last):
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 181, in waitNSPID
os.kill(thePID,0)
ProcessLookupError: [Errno 3] No such process
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 694, in useSalome
clt = startSalome(args, modules_list, modules_root_dir)
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 639, in startSalome
session=clt.waitNSPID("/Kernel/Session",mySessionServ.PID,SALOME.Session)
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 183, in waitNSPID
raise RuntimeError("Process %d for %s not found" % (thePID,theName))
RuntimeError: Process 8691 for /Kernel/Session not found
--- Error during Salome launch ---
2. When I run it in software rendering mode, it starts properly, though the rendering is faulty:
$./salome_meca-lgpl-2021.0.0-2-20211014-scibian-9 --soft
*****************************************************
INFO : Running salome_meca in software rendering mode
*****************************************************
runSalome running on digvijay-gl552vx
Searching for a free port for naming service: 2813 - OK
Searching Naming Service + found in 0.1 seconds
Searching /Kernel/Session in Naming Service +++++++++++ found in 5.5 seconds
Start SALOME, elapsed time : 6.1 seconds
****************************************************************
Warning: module GHS3DPLUGIN is improperly configured!
Module GHS3DPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module GHS3DPRLPLUGIN is improperly configured!
Module GHS3DPRLPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module BLSURFPLUGIN is improperly configured!
Module BLSURFPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module NETGENPLUGIN is improperly configured!
Module NETGENPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module HYBRIDPLUGIN is improperly configured!
Module HYBRIDPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module GMSHPLUGIN is improperly configured!
Module GMSHPLUGIN will not be available in GUI mode!
****************************************************************
****************************************************************
Warning: module HexoticPLUGIN is improperly configured!
Module HexoticPLUGIN will not be available in GUI mode!
****************************************************************
3. I can get into the container shell and inspect the environment:
Singularity> ldd --version
ldd (Debian GLIBC 2.24-11+deb9u4) 2.24
Copyright © 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
The container has glibc 2.24 and my system has glibc 2.35. I am not sure which glibc is causing this issue; it most likely comes from the container side, but then I don't understand how it worked earlier.
The Singularity version is 3.9.0.
I also checked the container image on Fedora 35 and Ubuntu 22.04, and both give the same GLIBC_2.34 error, whereas Ubuntu 18.04 works well. So is this an issue with the glibc on the host machine?
As far as I know, a newer glibc is backward compatible and should still run binaries built against older glibc versions.
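The error message blames /.singularity.d/libs/libGLX.so.0, which is the host's libGLX bound into the container, so my guess is that the host library now needs symbol versions that the container's glibc 2.24 cannot provide. One way to check this (the path is an assumption for Arch; on Debian/Ubuntu the library sits under /usr/lib/x86_64-linux-gnu):
$ objdump -T /usr/lib/libGLX.so.0 | grep GLIBC_2.34
If this prints any symbols, the host library really does require glibc 2.34 symbols, which the container's glibc 2.24 cannot satisfy.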
Can someone point to a solution to this?
Offline
Hello, have you found a solution?
I am experiencing the same issue (GLIBC_2.34): crashing in GPU mode and rendering problems in software rendering mode.
(For example, when it works, I cannot edit in text mode in the AsterStudy module.)
Offline
Hello, have you found a solution?
Not yet! I am forced to stick with Ubuntu 18.04 for now until this gets resolved, although I have no idea how that is going to happen, because I was using it without any issue until a few weeks ago.
Offline
Hi dbpatankar and CharlesM
I got the same error on Fedora 36.
Another user, @aurelius_nero, faced the same issue on Fedora 35. He solved it by installing nvidia-container-toolkit from the NVIDIA website.
I tried this solution and it worked for me as well. @aurelius_nero's post on this forum is at https://code-aster.org/forum2/viewtopic.php?id=26213.
You can install `nvidia-container-toolkit` from the official NVIDIA repository at https://nvidia.github.io/libnvidia-container/. The repository also has instructions for installing on Ubuntu 22.04.
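For Debian/Ubuntu hosts, the documented steps were roughly the following when I looked; this is only a sketch, so follow the linked page in case the repository setup has changed:
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit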
Regards,
amit_code_aster
Offline
I have Ubuntu 22.04 and an NVIDIA RTX 3070 Ti, and installing nvidia-container-toolkit solved the problem of salome_meca only working with the --soft option. Now it works without --soft. Thanks for the info. This should be mentioned in the FAQ.
Offline
I'm experiencing a similar issue, though I don't have an NVIDIA GPU, so I suspect the previously mentioned solutions will not work. Any suggestions?
*****************************************************
INFO : Running salome_meca in software rendering mode
*****************************************************
runSalome running on laptop
Searching for a free port for naming service: 2810 - OK
Searching Naming Service + found in 0.1 seconds
Searching /Kernel/Session in Naming Service +++SALOME_Session_Server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLX.so.0)
Traceback (most recent call last):
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 181, in waitNSPID
os.kill(thePID,0)
ProcessLookupError: [Errno 3] No such process
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 694, in useSalome
clt = startSalome(args, modules_list, modules_root_dir)
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 639, in startSalome
session=clt.waitNSPID("/Kernel/Session",mySessionServ.PID,SALOME.Session)
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 183, in waitNSPID
raise RuntimeError("Process %d for %s not found" % (thePID,theName))
RuntimeError: Process 25480 for /Kernel/Session not found
--- Error during Salome launch ---
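Since the error still points at /.singularity.d/libs/libGLX.so.0, I suppose the launcher binds some host GL libraries into the container even without an NVIDIA GPU. One way to inspect what actually gets injected, from a container shell as shown earlier in this thread:
Singularity> ls /.singularity.d/libs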
Offline
Using Arch Linux
Dual GPU: Intel + NVIDIA
Same issue.
Works with --soft
**************************************
INFO : Running salome_meca in GPU mode
**************************************
runSalome running on archlinux
Searching for a free port for naming service: 2819 - OK
Searching Naming Service + found in 0.1 seconds
Searching /Kernel/Session in Naming Service ++SALOME_Session_Server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLX.so.0)
Traceback (most recent call last):
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 181, in waitNSPID
os.kill(thePID,0)
ProcessLookupError: [Errno 3] No such process
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 694, in useSalome
clt = startSalome(args, modules_list, modules_root_dir)
File "/opt/salome_meca/appli_V2021/bin/salome/runSalome.py", line 639, in startSalome
session=clt.waitNSPID("/Kernel/Session",mySessionServ.PID,SALOME.Session)
File "/opt/salome_meca/Salome-V2021-s9/modules/KERNEL_V9_7_0/bin/salome/orbmodule.py", line 183, in waitNSPID
raise RuntimeError("Process %d for %s not found" % (thePID,theName))
RuntimeError: Process 137243 for /Kernel/Session not found
Offline
Hi @Humberto.
A solution that worked for me on Fedora 36 is:
1. Search for a file named `nvliblist.conf` in your installation. It should be under your Singularity installation directory somewhere.
2. Make a back-up of this file `mv nvliblist.conf nvliblist.conf.bak`.
3. Open the file `nvliblist.conf` using a text editor.
4. Delete the lines `libGLX.so.0`, `libglx.so.0`, and `libGLdispatch.so`.
Try running the Salome container again; it should work this time. A scripted version of these steps is sketched below.
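For reference, a minimal scripted version of steps 1-4, assuming the file lives under /etc/singularity (depending on how Singularity was installed it may instead be under /usr/local/etc/singularity; the patterns also match entries spelled without the trailing .0):
$ cd /etc/singularity
$ sudo cp nvliblist.conf nvliblist.conf.bak
$ sudo sed -i '/^libGLX\.so/d; /^libglx\.so/d; /^libGLdispatch\.so/d' nvliblist.conf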
Reference: https://github.com/apptainer/apptainer/issues/598
Offline
Hello,
I also tried this, but the mentioned lines ending in .so.0 do not exist in my nvliblist.conf on Ubuntu 22.04. All entries end with .so; the file even says that this is mandatory.
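I checked which GL entries the file actually contains with a quick grep (the path is an assumption; adjust it to wherever your nvliblist.conf lives):
$ grep -nE 'libGLX|libglx|libGLdispatch' /etc/singularity/nvliblist.conf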
So this did not solve the problem,
Mario.
Offline
This solution worked for me. After I installed nvidia-container-toolkit it still didn't work; I didn't have the ".0" version lines listed in my file either, so the lines I deleted from the "nvliblist.conf" file were "libGLX.so", "libglx.so", and "libGLdispatch.so".
Also note that to make the backup copy, use the command "cp nvliblist.conf nvliblist.conf.bak" (mv = move, so it doesn't create a copy; it just renames the file in place).
Hooray though, it does finally work for me. But this was a massive hassle to install, with loads of dependencies to sort out before I even got Singularity to work, and then when that worked it hit this problem!
Offline