Hello,
problem solved!
My mistake. I use:
/usr/bin/singularity shell /home/golbs/tmp/salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif
(works, but with Python 3.5 and code_aster 14.8)
but
/usr/bin/singularity run /home/golbs/tmp/salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif shell
(also works, but with Python 3.6)
which is the correct one for code_aster 15.4.
Greeting Markus
Hello,
do you use:
/usr/bin/singularity run /.../.../.../salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif shell ?
Then you must:
Singularity> cd /opt/salome_meca/
and you are back home in the old times :-) working with vi, nano & co.
I use SingularityCE (the Community Edition of Singularity) 3.9.6-focal under Ubuntu 20.04.
Greeting Markus
Hello,
under the salome_meca 2021.0.0-2-20211014 Scibian 9 Singularity container I get the following output:
..
FIN();
# ------------------------------------------------------------------------------
Command line #1:
ulimit -c unlimited ; ulimit -t 60000000 ; ( /opt/python/3.6.5/bin/python3 -X faulthandler -- ./2022-04-06-Blechkiste_1-DKT-2.5mm.comm.changed.py --last --link="F::libr::/home/golbs/tmp/2022-04-08_15.5/2022-03-01-H257-Fops-Dach-3mm.unv::D::19" -max_base 1200000 --memory 22000.0 --tpmax 12000000 --numthreads 4 ; echo $? > _exit_code_ ) 2>&1 | tee -a fort.6
Traceback (most recent call last):
File "./2022-04-06-Blechkiste_1-DKT-2.5mm.comm.changed.py", line 7, in <module>
from code_aster.Commands import *
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Commands/__init__.py", line 30, in <module>
from ..Supervis import CO
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Supervis/__init__.py", line 27, in <module>
from .CommandSyntax import CommandSyntax
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Supervis/CommandSyntax.py", line 60, in <module>
from ..Cata import Commands
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Cata/Commands/__init__.py", line 23, in <module>
from ..Language.SyntaxObjects import Command
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Cata/Language/SyntaxObjects.py", line 47, in <module>
from .SyntaxChecker import checkCommandSyntax
File "/opt/salome_meca/Salome-V2021-s9/tools/Code_aster_stable-1540/lib/aster/code_aster/Cata/Language/SyntaxChecker.py", line 29, in <module>
import numpy
ModuleNotFoundError: No module named 'numpy'
EXECUTION_CODE_ASTER_EXIT_32185=1
restoring result databases from 'BASE_PREC'...
WARNING: execution failed (command file #1): <F>_ABNORMAL_ABORT
# ------------------------------------------------------------------------------
Content of /tmp/run_aster_xd_hm49u after execution:
.:
total 28
-rw-rw-r-- 1 golbs golbs 9347 avril 8 12:38 2022-04-06-Blechkiste_1-DKT-2.5mm.comm.changed.py
-rw-rw-r-- 1 golbs golbs 722 avril 8 12:38 32185.export
-rw-rw-r-- 1 golbs golbs 1474 avril 8 12:38 fort.6
drwxrwxr-x 2 golbs golbs 4096 avril 8 12:38 REPE_IN
drwxrwxr-x 2 golbs golbs 4096 avril 8 12:38 REPE_OUT
REPE_OUT:
total 0
# ------------------------------------------------------------------------------
Copying results
copying 'fort.6' to '/home/golbs/tmp/2022-04-08_15.5/2022-04-06-Blechkiste_1-DKT-2.5mm.mess'...
WARNING: file not found: fort.8
WARNING: file not found: fort.80
# ------------------------------------------------------------------------------
Execution summary
cpu system cpu+sys elapsed
--------------------------------------------------------------------------------
Preparation of environment 0.00 0.00 0.00 0.00
Execution of code_aster 0.07 0.05 0.12 0.26
Copying results 0.00 0.02 0.02 0.00
--------------------------------------------------------------------------------
Total 0.08 0.07 0.15 0.26
--------------------------------------------------------------------------------
------------------------------------------------------------
--- DIAGNOSTIC JOB : <F>_ABNORMAL_ABORT
------------------------------------------------------------
But there is no numpy import in my comm file. And in the container with Python 3.5.3, "import numpy" works.
Other users say about Singularity: "Singularity is a fairly problematic system given the complexity of how it treats the native environment and folders/permissions outside the container."
A sample...
singularity run --cleanenv \
--no-home \
-B $(pwd):$(pwd) \
~/bin/sample/sample_x.x.x.sif \
/opt/sample/bin/run_sample \
..
...
--reads=..... \
--regions=..... \
..
..
--num_shards=16
How do I have to start Singularity for Salome?
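Combining the sample above with the "run ... shell" form, I would guess something like this (a sketch; the image path is mine, the --cleanenv and bind options are copied from the sample, so this is an assumption, not a verified recipe):
singularity run --cleanenv -B $(pwd):$(pwd) \
    /home/golbs/tmp/salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif shell
Singularity> cd /opt/salome_meca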
Thanks and greeting Markus
Hello,
I also get a message 104 from astk inside the Singularity container. To solve the problem I found:
....../viewtopic.php?pid=42546#p42546
But how can I edit "/etc/codeaster/aster" inside the Scibian 9 Singularity container (*.sif)?
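One way that might work (a sketch, assuming SingularityCE's sandbox feature; directory and image names are placeholders and I have not verified it with this image) is to unpack the .sif into a writable sandbox directory, edit the file there, and rebuild the image:
sudo singularity build --sandbox salome_meca_sandbox/ salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif
sudo singularity shell --writable salome_meca_sandbox/
Singularity> nano /etc/codeaster/aster
Singularity> exit
sudo singularity build salome_meca-patched.sif salome_meca_sandbox/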
Thanks and greeting Markus
I solved the first problem with the virtual container /opt...
Thanks Markus
Hello,
can I set the "internal decimal places" for the computation of a STAT_NON_LINE run? Is PRECISION under INCREMENT a possibility, or does the code_aster code work with a default, static precision? I have the information that in the Fortran code the "real" variables are "DOUBLE PRECISION".
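For reference, the two tolerances I have in mind look like this in a command file (a sketch with placeholder names; as far as I understand, PRECISION under INCREMENT only controls how an instant is matched in the time list, while the equilibrium tolerance of the Newton loop is set under CONVERGENCE):
RESU=STAT_NON_LINE(# ...model, material, loads...
                   INCREMENT=_F(LIST_INST=LINST,
                                PRECISION=1e-06,),        # tolerance for locating instants in LIST_INST
                   CONVERGENCE=_F(RESI_GLOB_RELA=1e-06,),  # relative residual tolerance of the Newton iterations
                   );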
Greeting Markus
Hello,
I see a strange effect when importing a mesh via UNV (I-DEAS) and working with groups.
Code-Aster Version:
-- CODE_ASTER -- VERSION : EXPLOITATION (stable) --
Version 13.6.0 modifiée le 21/06/2018
révision fb950a49b96d - branche 'v13'
<INFO> Démarrage de l'exécution.
-- CODE_ASTER -- VERSION : EXPLOITATION (stable) --
Version 13.6.0 modifiée le 21/06/2018
révision fb950a49b96d - branche 'v13'
Copyright EDF R&D 1991 - 2021
Exécution du : Fri Feb 12 16:54:38 2021
Nom de la machine : beo-01
Architecture : 64bit
Type de processeur : x86_64
Système d'exploitation : Linux Ubuntu 18.04 bionic 4.19.102-ql-generic-11.0-14
Langue des messages : (ANSI_X3.4-1968)
Copyright EDF R&D 1991 - 2021
Exécution du : Fri Feb 12 16:54:38 2021
Nom de la machine : beo-01
Architecture : 64bit
Type de processeur : x86_64
Système d'exploitation : Linux Ubuntu 18.04 bionic 4.19.102-ql-generic-11.0-14
Langue des messages : (ANSI_X3.4-1968)
Version de Python : 2.7.17
Version de NumPy : 1.13.3
Version de Python : 2.7.17
Version de NumPy : 1.13.3
code_aster modifies my group names, but why?
# ------------------------------------------------------------------------------------------
# Commande No : 0002 Concept de type : -
# ------------------------------------------------------------------------------------------
PRE_IDEAS(UNITE_MAILLAGE=20,
UNITE_IDEAS=19,
CREA_GROUP_COUL='NON',)
ON NE TRAITE PAS LE DATASET: 164
ON NE TRAITE PAS LE DATASET: 164
NOMBRE DE NOEUDS : 4809383
NOMBRE DE NOEUDS : 4809383
NOMBRE DE MAILLES : 3066444
NOMBRE DE MAILLES : 3066444
!----------------------------------------------------------------!
! <A> <STBTRIAS_7> !
! !
! le nom du groupe E01L1 est tronqué : E01L !
! !
! !
! Ceci est une alarme. Si vous ne comprenez pas le sens de cette !
! alarme, vous pouvez obtenir des résultats inattendus ! !
!----------------------------------------------------------------!
...
..
....
...
!-------------------------------!
! <EXCEPTION> <MODELISA7_11> !
! !
! le groupe E01L existe déjà !
!-------------------------------!
I have worked with this kind of group names for years.
I would be happy about any information.
Greeting Markus
Problem solved: it was one space character in front of the group name in the UNV file.
Hello,
I don't understand why -lmpi_f77 is in Makefile.inc of mumps-5.1.2: "..mpif77 and mpif90 are deprecated as of Open MPI v1.7..", "..It is in your interest to convert to mpifort now.." (Open MPI wiki). But how do I have to modify "LIBPAR = $(SCALAP) $(LAPACK) -lmpi -lmpi_f77" so that it works with Open MPI 3.1.3?
I tested: "LIBPAR = $(SCALAP) $(LAPACK) -L/usr/lib/x86_64-linux-gnu/openmpi/lib -lmpi" but
a - csol_fwd.o
a - csol_matvec.o
a - csol_root_parallel.o
a - ctools.o
a - ctype3_root.o
ranlib ../lib/libcmumps.a
make[3]: Verzeichnis „/tmp/mumps-5.1.2/src“ wird verlassen
make[2]: Verzeichnis „/tmp/mumps-5.1.2/src“ wird verlassen
make[1]: Verzeichnis „/tmp/mumps-5.1.2“ wird verlassen
(cd examples ; make c)
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird betreten
gfortran -O -I/usr/lib/openmpi/include -I. -I../include -c csimpletest.F -o csimpletest.o
gfortran -o csimpletest -O csimpletest.o ../lib/libcmumps.a ../lib/libmumps_common.a -L/usr/lib -lmetis -L../PORD/lib/ -lpord -L/usr/lib -lesmumps -lscotch -lscotcherr -lscalapack-openmpi -lblacs-openmpi -lblacsF77init-openmpi -lblacsCinit-openmpi -llapack -L/usr/lib/x86_64-linux-gnu/openmpi/lib -lmpi -lblas -lpthread
/usr/bin/ld: ../lib/libcmumps.a(cmumps_driver.o): undefined reference to symbol 'mpi_allreduce_'
/usr/bin/ld: //usr/lib/x86_64-linux-gnu/libmpi_mpifh.so.20: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:42: csimpletest] Fehler 1
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird verlassen
make: *** [Makefile:43: cexamples] Fehler 2
golbs@debian8-amd64:/tmp/mumps-5.1.2$
When I use "LIBPAR = $(SCALAP) $(LAPACK) -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi"
a - dsol_bwd.o
a - dsol_c.o
a - dsol_fwd_aux.o
a - dsol_fwd.o
a - dsol_matvec.o
a - dsol_root_parallel.o
a - dtools.o
a - dtype3_root.o
ranlib ../lib/libdmumps.a
make[3]: Verzeichnis „/tmp/mumps-5.1.2/src“ wird verlassen
make[2]: Verzeichnis „/tmp/mumps-5.1.2/src“ wird verlassen
make[1]: Verzeichnis „/tmp/mumps-5.1.2“ wird verlassen
(cd examples ; make d)
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird betreten
gfortran -O -I/usr/lib/openmpi/include -I. -I../include -c dsimpletest.F -o dsimpletest.o
gfortran -o dsimpletest -O dsimpletest.o ../lib/libdmumps.a ../lib/libmumps_common.a -L/usr/lib -lmetis -L../PORD/lib/ -lpord -L/usr/lib -lesmumps -lscotch -lscotcherr -lscalapack-openmpi -lblacs-openmpi -lblacsF77init-openmpi -lblacsCinit-openmpi -llapack -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lblas -lpthread
/usr/bin/ld: warning: libmpi_usempif08.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempif08.so.40
/usr/bin/ld: warning: libmpi_usempi_ignore_tkr.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempi_ignore_tkr.so.40
/usr/bin/ld: warning: libmpi_mpifh.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_mpifh.so.40
/usr/bin/ld: warning: libmpi.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi.so.40
/usr/bin/ld: warning: libgfortran.so.3, needed by //usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.20, may conflict with libgfortran.so.5
gcc -O -I/usr/lib/openmpi/include -DAdd_ -I. -I../include -I../src -c c_example.c -o c_example.o
gfortran -o c_example -O c_example.o ../lib/libdmumps.a ../lib/libmumps_common.a -L/usr/lib -lmetis -L../PORD/lib/ -lpord -L/usr/lib -lesmumps -lscotch -lscotcherr -lscalapack-openmpi -lblacs-openmpi -lblacsF77init-openmpi -lblacsCinit-openmpi -llapack -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lblas -lpthread
/usr/bin/ld: warning: libmpi_usempif08.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempif08.so.40
/usr/bin/ld: warning: libmpi_usempi_ignore_tkr.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempi_ignore_tkr.so.40
/usr/bin/ld: warning: libmpi_mpifh.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_mpifh.so.40
/usr/bin/ld: warning: libmpi.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi.so.40
/usr/bin/ld: warning: libgfortran.so.3, needed by //usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.20, may conflict with libgfortran.so.5
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird verlassen
(cd examples ; make multi)
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird betreten
gfortran -O -I/usr/lib/openmpi/include -I. -I../include -c multiple_arithmetics_example.F -o multiple_arithmetics_example.o
gfortran -o multiple_arithmetics_example -O multiple_arithmetics_example.o ../lib/libsmumps.a ../lib/libmumps_common.a ../lib/libdmumps.a ../lib/libmumps_common.a ../lib/libcmumps.a ../lib/libmumps_common.a ../lib/libzmumps.a ../lib/libmumps_common.a -L/usr/lib -lmetis -L../PORD/lib/ -lpord -L/usr/lib -lesmumps -lscotch -lscotcherr -lscalapack-openmpi -lblacs-openmpi -lblacsF77init-openmpi -lblacsCinit-openmpi -llapack -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lblas -lpthread
/usr/bin/ld: warning: libmpi_usempif08.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempif08.so.40
/usr/bin/ld: warning: libmpi_usempi_ignore_tkr.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_usempi_ignore_tkr.so.40
/usr/bin/ld: warning: libmpi_mpifh.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi_mpifh.so.40
/usr/bin/ld: warning: libmpi.so.20, needed by /usr/lib/libblacs-openmpi.so, may conflict with libmpi.so.40
/usr/bin/ld: warning: libgfortran.so.3, needed by //usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.20, may conflict with libgfortran.so.5
make[1]: Verzeichnis „/tmp/mumps-5.1.2/examples“ wird verlassen
golbs@debian8-amd64:/tmp/mumps-5.1.2$
When I test the examples I get:
golbs@debian8-amd64:/opt/salome_code-aster/aster144/public/mumps-5.1.2/examples$ ./c_example
Solution is : ( 1.00 2.00)
golbs@debian8-amd64:/opt/salome_code-aster/aster144/public/mumps-5.1.2/examples$ ./csimpletest
4
6
8
k
At line 36 of file csimpletest.F (unit = 5, file = 'stdin')
Fortran runtime error: Bad integer for item 2 in list input
Error termination. Backtrace:
#0 0x7efe3c7d58b0 in ???
#1 0x7efe3c7d6395 in ???
#2 0x7efe3c7d6b1a in ???
#3 0x7efe3c9c52c2 in ???
#4 0x7efe3c9c8392 in ???
#5 0x7efe3c9c98f9 in ???
#6 0x55b6aa17ceee in ???
#7 0x55b6aa17d4cb in ???
#8 0x7efe3c42409a in __libc_start_main
at ../csu/libc-start.c:308
#9 0x55b6aa17ca09 in ???
#10 0xffffffffffffffff in ???
golbs@debian8-amd64:/opt/salome_code-aster/aster144/public/mumps-5.1.2/examples$ ls -l
insgesamt 15112
-rwxr-xr-x 1 golbs golbs 1840008 Mär 31 23:34 c_example
-rw-r--r-- 1 golbs golbs 2338 Apr 3 2019 c_example.c
-rw-r--r-- 1 golbs golbs 3088 Mär 31 23:34 c_example.o
-rwxr-xr-x 1 golbs golbs 1855656 Mär 31 23:32 csimpletest
-rw-r--r-- 1 golbs golbs 2514 Apr 3 2019 csimpletest.F
-rw-r--r-- 1 golbs golbs 8872 Mär 31 23:32 csimpletest.o
-rwxr-xr-x 1 golbs golbs 1830880 Mär 31 23:34 dsimpletest
-rw-r--r-- 1 golbs golbs 2514 Apr 3 2019 dsimpletest.F
-rw-r--r-- 1 golbs golbs 8864 Mär 31 23:34 dsimpletest.o
-rw-r--r-- 1 golbs golbs 322 Apr 3 2019 input_simpletest_cmplx
-rw-r--r-- 1 golbs golbs 189 Apr 3 2019 input_simpletest_real
-rw-r--r-- 1 golbs golbs 2308 Apr 3 2019 Makefile
-rwxr-xr-x 1 golbs golbs 6141672 Mär 31 23:34 multiple_arithmetics_example
-rw-r--r-- 1 golbs golbs 3928 Apr 3 2019 multiple_arithmetics_example.F
-rw-r--r-- 1 golbs golbs 7896 Mär 31 23:34 multiple_arithmetics_example.o
-rw-r--r-- 1 golbs golbs 1505 Apr 3 2019 README
-rwxr-xr-x 1 golbs golbs 1826800 Mär 31 23:33 ssimpletest
-rw-r--r-- 1 golbs golbs 2514 Apr 3 2019 ssimpletest.F
-rw-r--r-- 1 golbs golbs 8720 Mär 31 23:33 ssimpletest.o
-rwxr-xr-x 1 golbs golbs 1863832 Mär 31 23:33 zsimpletest
-rw-r--r-- 1 golbs golbs 2514 Apr 3 2019 zsimpletest.F
-rw-r--r-- 1 golbs golbs 8872 Mär 31 23:33 zsimpletest.o
golbs@debian8-amd64:/opt/salome_code-aster/aster144/public/mumps-5.1.2/examples$ ./multiple_arithmetics_example
Creation of all instaces went well
Entering SMUMPS 5.1.2 with JOB = -2
executing #MPI = 1, without OMP
Entering DMUMPS 5.1.2 with JOB = -2
executing #MPI = 1, without OMP
Entering CMUMPS 5.1.2 with JOB = -2
executing #MPI = 1, without OMP
Entering ZMUMPS 5.1.2 with JOB = -2
executing #MPI = 1, without OMP
The system is Debian 10, 64-bit.
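An alternative I have not tried (a sketch, assuming Open MPI's mpifort wrapper is installed) would be to let the MPI wrapper do the Fortran compiling and linking in Makefile.inc instead of listing the Fortran MPI libraries by hand; the version conflicts between the distribution's blacs/scalapack packages and the newer Open MPI would remain a separate issue:
# Makefile.inc sketch (untested)
FC = mpifort
FL = mpifort
LIBPAR = $(SCALAP) $(LAPACK) -lmpi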
Thanks and greeting Markus
Hello,
when I use an even higher mesh refinement, for example from 10,000 DKT to 1,000,000 DKT elements on the same simple sample model, I get convergence problems (convergence to an "offset" in RESI_GLOB_RELA, e.g. it does not reach 1e-08). I use NEWTON in STAT_NON_LINE. Is this perhaps the normal effect of high dimension and Newton (fast solver versus the curse of dimensionality)? Is a method like Monte Carlo possible for solving in STAT_NON_LINE? Is NEWTON_KRYLOV an approach to solve these problems with the curse of dimensionality? What can I use in code_aster as an alternative to NEWTON in big nonlinear models?
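For the NEWTON_KRYLOV idea, I imagine something like this (a sketch with placeholder names; as far as I know NEWTON_KRYLOV must be combined with an iterative solver such as PETSC or GCPC):
RESU=STAT_NON_LINE(MODELE=MODELE,
                   CHAM_MATER=MATERIAL,
                   EXCIT=_F(CHARGE=CHARGE,),
                   COMPORTEMENT=_F(RELATION='ELAS_VMIS_TRAC',
                                   DEFORMATION='GROT_GDEP',
                                   TOUT='OUI',),
                   INCREMENT=_F(LIST_INST=LINST,),
                   METHODE='NEWTON_KRYLOV',               # inexact Newton: the linear system is solved iteratively
                   SOLVEUR=_F(METHODE='PETSC',
                              PRE_COND='LDLT_SP',),
                   CONVERGENCE=_F(RESI_GLOB_RELA=1e-06,),);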
Thanks Markus
Hello,
I get the same error in different analyses with STAT_NON_LINE.
ERREUR A L'INTERPRETATION DANS ACCAS - INTERRUPTION
>> JDC.py : DEBUT RAPPORT
CR phase d'initialisation
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! <S> Exception utilisateur levee mais pas interceptee. !
! Les bases sont fermees. !
! Type de l'exception : MatriceSinguliereError !
! !
! Arrêt pour cause de matrice non inversible. !
! La base globale est sauvegardée. Elle contient les pas archivés avant !
! l'arrêt. !
! !
! Conseils : !
! - Vérifiez vos conditions aux limites. !
! - Vérifiez votre modèle, la cohérence des unités. !
! - Si vous faites du contact, il ne faut pas que la structure ne "tienne" !
! que par le contact. !
! !
! - Parfois, en parallèle, le critère de détection de singularité de MUMPS !
! est trop pessimiste ! Il reste néanmoins souvent !
! possible de faire passer le calcul complet en relaxant ce critère !
! (augmenter de 1 ou 2 la valeur du mot-clé NPREC) ou !
! en le débranchant (valeur du mot-clé NPREC=-1) ou en relançant le calcul !
! sur moins de processeurs. !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
fin CR phase d'initialisation
>> JDC.py : FIN RAPPORT
End of the Code_Aster execution
[beo-01:9838 :0:9838] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x48)
==== backtrace ====
0 /usr/lib/x86_64-linux-gnu/libucs.so.0(+0x1a780) [0x14cd65877780]
1 /usr/lib/x86_64-linux-gnu/libucs.so.0(+0x1a932) [0x14cd65877932]
2 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyErr_Occurred+0xa) [0x14cd784e3eba]
3 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(utprin_+0x6f) [0x55d24966f1bf]
4 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(utmess_core_+0x469) [0x55d24a9e3789]
5 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(utmess_+0x881) [0x55d24a9e30f1]
6 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(asmpi_check_+0x787) [0x55d24a57b2f7]
7 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(terminate+0x3d) [0x55d249675f2d]
8 /lib/x86_64-linux-gnu/libc.so.6(+0x43041) [0x14cd77a1b041]
9 /lib/x86_64-linux-gnu/libc.so.6(+0x4313a) [0x14cd77a1b13a]
10 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1bdd7f) [0x14cd784d4d7f]
11 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1bde6e) [0x14cd784d4e6e]
12 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyErr_PrintEx+0x175) [0x14cd7845fd15]
13 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0x398) [0x14cd78466618]
14 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(Py_Main+0xb92) [0x14cd784dad32]
15 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x14cd779f9b97]
16 /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster(_start+0x2a) [0x55d24966b68a]
===================
/data/home/userfe/solve/edb_99A/9/global/mpi_script.sh: line 47: 9838 Segmentation fault (core dumped) /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/lib/aster/Execution/E_SUPERV.py -commandes fort.1 -max_base 500000 --num_job=9768 --mode=interactif --rep_outils=/usr/lib/opse/apps/astk/2018/outils --rep_mat=/usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/share/aster/materiau --rep_dex=/usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/share/aster/datg --numthreads=2 --suivi_batch --memjeveux=31738.28125 --tpmax=12000000.0
EXECUTION_CODE_ASTER_EXIT_9768=139
PROC=0 INFO_CPU= 11689.86 11481.27 963.66 11987410.14
Content after execution of /tmp/slurm-userfe-289/proc.0 :
.:
total 15757895
drwx------ 3 userfe ad-domain 22 Jun 3 18:54 .
drwxr-xr-x 5 userfe root 5 Jun 3 15:39 ..
-rw-r--r-- 1 userfe ad-domain 2209 Jun 3 15:39 9768.export
drwxr-xr-x 2 userfe ad-domain 2 Jun 3 15:39 REPE_OUT
-rw-r--r-- 1 userfe ad-domain 2354 Jun 3 15:39 config.txt
-rw------- 1 userfe ad-domain 226877440 Jun 3 18:54 core
-rw-r--r-- 1 userfe ad-domain 11963 Jun 3 15:39 fort.1
-rw-r--r-- 1 userfe ad-domain 11963 Jun 3 15:39 fort.1.1
-rw-r--r-- 1 userfe ad-domain 0 Jun 3 15:39 fort.15
-rwxr-xr-x 1 userfe ad-domain 3022064 Jun 3 15:39 fort.19
-rw-r--r-- 1 userfe ad-domain 1233019 Jun 3 15:39 fort.20
-rw-r--r-- 1 userfe ad-domain 2042900 Jun 3 18:54 fort.6
-rw-r--r-- 1 userfe ad-domain 0 Jun 3 15:39 fort.8
-rw-r--r-- 1 userfe ad-domain 0 Jun 3 15:39 fort.9
-rw-r--r-- 1 userfe ad-domain 12884377608 Jun 3 18:54 glob.1
-rw-r--r-- 1 userfe ad-domain 12884377608 Jun 3 18:54 glob.2
-rw-r--r-- 1 userfe ad-domain 12884377608 Jun 3 18:54 glob.3
-rw-r--r-- 1 userfe ad-domain 3167027208 Jun 3 18:54 glob.4
-rw-r--r-- 1 userfe ad-domain 45 Jun 3 18:54 info_cpu
-rwxr-xr-x 1 userfe ad-domain 2599 Jun 3 15:39 mpi_script.sh
-rw-r--r-- 1 userfe ad-domain 4676178 Jun 3 18:54 pick.1
-rw-r--r-- 1 userfe ad-domain 305561608 Jun 3 18:54 vola.1
REPE_OUT:
total 2
drwxr-xr-x 2 userfe ad-domain 2 Jun 3 15:39 .
drwx------ 3 userfe ad-domain 22 Jun 3 18:54 ..
/data/home/userfe/solve/edb_99A/9/global/mpi_script.sh: line 47 >>>
cd $ASRUN_WRKDIR
( . /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/share/aster/profile.sh ; /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/bin/aster /usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/lib/aster/Execution/E_SUPERV.py -commandes fort.1 -max_base 500000 --num_job=9768 --mode=interactif --rep_outils=/usr/lib/opse/apps/astk/2018/outils --rep_mat=/usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/share/aster/materiau --rep_dex=/usr/lib/opse/apps/code_aster/13.6/openmpi-gcc8-4.0/share/aster/datg --numthreads=2 --suivi_batch --memjeveux=31738.28125 --tpmax=12000000.0 ; echo EXECUTION_CODE_ASTER_EXIT_9768=$? ) | tee fort.6
iret=$?
What can the problem be? I would be happy about any information.
Thanks and greeting Markus
PS: The *.comm file + export file + MPI run nevertheless generate the files under base.
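Following the advice in the message, one thing to try (a sketch, only the SOLVEUR keyword shown) is to relax or disable the MUMPS singularity detection:
SOLVEUR=_F(METHODE='MUMPS',
           NPREC=10,),    # raise NPREC by 1 or 2 over the default (8, if I remember correctly)
# or
SOLVEUR=_F(METHODE='MUMPS',
           NPREC=-1,),    # switch the singularity detection off completely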
Hello,
I have tested many things. The current result is: it is a mesh problem, but not one caused by individual degenerate elements. The mesh is a locally, partially dirty, complex industrial development-stage mesh with 1,000,000 nodes and 500,000 COQUE_3D elements. Now I have created an academic mesh of related size and related basic structure, but cleanly structured, with the same *.comm file. It works absolutely correctly. Creating a clean mesh from an industrial development-stage CAD model in a short time is the challenge...
Can mesh structures like the one in the picture give numerical problems? I have faded in only a handful of elements and nodes. Basically there are nodes with very small distances between them, and the elements are built on these nodes...
Thanks and greeting Markus
COEF_RIGI_DRZ is an approach.
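For reference, COEF_RIGI_DRZ is set per shell group in AFFE_CARA_ELEM (a sketch with placeholder names and values; as far as I know the default is 1e-05):
CARA=AFFE_CARA_ELEM(MODELE=MODELE,
                    COQUE=_F(GROUP_MA='EL2DPR01',
                             EPAIS=2.5,
                             COEF_RIGI_DRZ=1e-05,),);  # fictitious stiffness for the drilling rotation DRZ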
Hello Richard,
many thanks! "..solver should stop the solution as the loss of precision is too high.." How are calculate this precision in STAT_NON_LINE? Means this, that internal and external forces get no convergence or the different are to high to start newton method?
Why comes with NPREC=6 a singular matrix? How can precision generate singular matrix?
"..If your analysis still does not even converge .."
- In the first step I have a very low force, material only in the elastic range
- no contact in the model (this time)
- displacements are very low, tested by comparison with MECA_STATIQUE
- no kinematic effect
- no RBM
- tested the group assignment
- tested and modified the material definition
- I have reviewed the mesh 10 times and more for bad mesh quality, deleted elements, used modal analysis to re-test for RBM ...
- I know this "..+ a million of different other reasons why a nonlinear analysis might fail..", unfortunately.
The model with COQUE_3D works differently from DKTG (the rest of the comm file is absolutely the same, only COMPORTEMENT differs). (The basic mesh is TRIA3; for COQUE_3D I modify it to TRIA6 and in the comm file then to "TRIA6_7".)
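The TRIA3 -> TRIA6 -> TRIA7 conversion can be written in the command file roughly like this (a sketch with placeholder names):
NETZ6=CREA_MAILLAGE(MAILLAGE=NETZ,
                    LINE_QUAD=_F(TOUT='OUI',),);       # TRIA3 -> TRIA6
NETZ7=CREA_MAILLAGE(MAILLAGE=NETZ6,
                    MODI_MAILLE=_F(OPTION='TRIA6_7',
                                   TOUT='OUI',),);     # add the centre node needed for COQUE_3D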
So I will keep searching and testing.
Thanks and greeting Markus
Hello,
many thanks! How can I determine the optimal NPREC for a specific model? If NPREC is low (6), I get singularity problems. If NPREC is high (12), there is no convergence. The default does not work either. Always with the same mesh and the same *.comm file. I would be happy about any information.
Thanks and greeting Markus
Hello,
I get this output from a code_aster 13.6 run, STAT_NON_LINE, NPREC=11.
Instant de calcul: 1.000000000000e-02
---------------------------------------------------------------------
---------------------------------------------------------------------
| NEWTON | RESIDU | RESIDU | OPTION |
| ITERATION | RELATIF | ABSOLU | ASSEMBLAGE |
| | RESI_GLOB_RELA | RESI_GLOB_MAXI | |
---------------------------------------------------------------------
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 0 X | 2.47950E+02 X | 9.32706E+05 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
| 1 X | 2.57134E+02 X | 9.22329E+06 |TANGENTE |
Iteration 0: (RESI_GLOB_MAXI) : (RESI_GLOB_RELA) = 3.762
Iteration 1: (RESI_GLOB_MAXI) : (RESI_GLOB_RELA) = 35.870
Can this be true? I think there must be a "constant" factor between RESI_GLOB_MAXI and RESI_GLOB_RELA??? I would be happy about any information.
Thanks and greeting Markus
Hello,
I have the same problem with STANLEY in Salome V2018. What can I do?
Thanks and greeting Markus
Hello,
unfortunately no. Next I will test Mr. jeanpierreaubry's hint: "..but more probably a problem of group assignement.."
TL61GRA3=AFFE_MODELE(MAILLAGE=NETZ,
INFO=1,
VERI_JACOBIEN='OUI',
DISTRIBUTION=_F(METHODE='CENTRALISE',),
AFFE=(_F(TOUT='OUI',
PHENOMENE='MECANIQUE',
MODELISATION='3D',),
_F(GROUP_MA=('EL2DPR01','EL2DPR02','EL2DPR03','EL2DPR04','EL2DPR05','EL2DPR06','EL2DPR07','EL2DPR08',
'EL2DPR09','EL2DPR10','EL2DPR11','EL2DPR12','EL2DPR13','EL2DPR14','EL2DPR15',
'EL2DPR16','EL2DPR18','EL2DPR19','EL2DPR30','EL2DPR50',),
PHENOMENE='MECANIQUE',
MODELISATION='COQUE_3D',),),);
Naming convention of the groups:
EL: elements (NO for nodes, ..)
2D: type of dimension
PR: properties (thickness >> material ...)
01: consecutive number
Does "..but more probably a problem of group assignement.." mean that one element is in two groups, and so on...?
Thanks and greeting Markus
Hello,
many thanks! I am surprised that u4.51.03 gives no information about COMPORTEMENT. Hence the question about the default of the COMPORTEMENT keyword in STAT_NON_LINE. STAT_NON_LINE without COMPORTEMENT basically runs.
Total 177785.81 16693.26 194479.07 5077.32
---------------------------------------------------------------------------------
(*) cpu and system times may be not correctly counted using mpirun.
as_run 2018.0
------------------------------------------------------------
--- DIAGNOSTIC JOB : <A>_ALARM
------------------------------------------------------------
EXIT_CODE=0
But under which conditions are these results computed?
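My assumption (to be confirmed) is that omitting COMPORTEMENT is equivalent to declaring small strains and linear elasticity everywhere, i.e. roughly:
COMPORTEMENT=_F(RELATION='ELAS',
                DEFORMATION='PETIT',
                TOUT='OUI',),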
We have real test results for this project. STAT_NON_LINE with
...
...
KD2EVC=AFFE_MODELE(MAILLAGE=NETZ,
AFFE=(_F(TOUT='OUI',
PHENOMENE='MECANIQUE',
MODELISATION='3D',),
_F(GROUP_MA=('EL2DPR01','EL2DPR02','EL2DPR03','EL2DPR04','EL2DPR05','EL2DPR06','EL2DPR07','EL2DPR08',
'EL2DPR09','EL2DPR10','EL2DPR11','EL2DPR12','EL2DPR13','EL2DPR14','EL2DPR15',
'EL2DPR16','EL2DPR18','EL2DPR19','EL2DPR30','EL2DPR50',),
PHENOMENE='MECANIQUE',
MODELISATION='DKTG',),),);
..
..
..
S355Funk=DEFI_FONCTION(NOM_PARA='EPSI',
VALE=(0.00169 ,355.0 ,
0.08 ,500.0 ,
0.5 ,510.0 ,),
INTERPOL='LIN',
PROL_GAUCHE='LINEAIRE',
PROL_DROITE='LINEAIRE',);
# PROL_GAUCHE='EXCLU',);
S355=DEFI_MATERIAU(ELAS=_F(E=210000.0,
NU=0.3,
RHO=7.85e-09,),
TRACTION=_F(SIGM=S355Funk,),);
....
S420Funk...
S550Funk...
DC04Funk...
SG03Funk...
....
....
ERGEBNIS=STAT_NON_LINE(MODELE=KD2EVC,
CHAM_MATER=MATERIAL,
CONTACT = KONTAKT,
CARA_ELEM=CARA,
EXCIT=(_F(CHARGE=LAGER01,
TYPE_CHARGE='FIXE_CSTE',),
_F(CHARGE=Lastseit,
FONC_MULT=Lastfunk,
TYPE_CHARGE='FIXE_CSTE',),),
COMPORTEMENT=(_F(DEFORMATION='GROT_GDEP',
RELATION='ELAS_VMIS_TRAC',
TOUT='OUI',),
_F(DEFORMATION='GROT_GDEP',
GROUP_MA=('ALLE2DEL', ),
RELATION='ELAS_VMIS_TRAC',),),
INCREMENT=_F(LIST_INST=SolSchrX,
PRECISION=1e-06,),
METHODE='NEWTON',
# NEWTON=_F(MATRICE='ELASTIQUE',
# MATRICE='TANGENTE',
# PREDICTION='EXTRAPOLE',),
# PREDICTION='TANGENTE',
CONVERGENCE=_F(RESI_GLOB_MAXI=10,
# RESI_GLOB_RELA=1e-06,
ARRET='NON',
ITER_GLOB_MAXI=25,),
SOLVEUR=_F(METHODE='MUMPS',
RENUM='METIS',
NPREC=12,
ELIM_LAGR='NON',
STOP_SINGULIER='NON',),
ARCHIVAGE=_F(LIST_INST=ArcSchri,
CRITERE='RELATIF',
PRECISION=1e-06,),);
brings only 5% of the real test failure force. So there is also an error in my code, but where? I think my mistake is located in COMPORTEMENT, possibly in DEFI_MATERIAU...
Thanks and greeting Markus
Hello,
here the next project with the same effect.
I5GF23B=AFFE_MODELE(MAILLAGE=NETZ,
AFFE=(_F(TOUT='OUI',
PHENOMENE='MECANIQUE',
MODELISATION='3D',),
_F(GROUP_MA=('EL2DPR01','EL2DPR02','EL2DPR03','EL2DPR04','EL2DPR05','EL2DPR06','EL2DPR07','EL2DPR08',
'EL2DPR09','EL2DPR10','EL2DPR11','EL2DPR12','EL2DPR13','EL2DPR14','EL2DPR15',
'EL2DPR16','EL2DPR18','EL2DPR19','EL2DPR30','EL2DPR50',),
PHENOMENE='MECANIQUE',
MODELISATION='DKTG',),),);
...
...
ERGEBNIS=STAT_NON_LINE(MODELE=I5GF23B,
CHAM_MATER=MATERIAL,
# CONTACT = KONTAKT,
CARA_ELEM=CARA,
EXCIT=(_F(CHARGE=LAGER01,
TYPE_CHARGE='FIXE_CSTE',),
_F(CHARGE=Lastseit,
FONC_MULT=Lastfunk,
TYPE_CHARGE='FIXE_CSTE',),),
COMPORTEMENT=(_F(DEFORMATION='GROT_GDEP',
RELATION='ELAS_VMIS_TRAC',
TOUT='OUI',),
_F(DEFORMATION='GROT_GDEP',
GROUP_MA=('ALLE2DEL', ),
# RELATION='VMIS_ISOT_TRAC',),),
RELATION='ELAS_VMIS_TRAC',),),
INCREMENT=_F(LIST_INST=SolSchrX,
PRECISION=1e-06,),
METHODE='NEWTON',
CONVERGENCE=_F(RESI_GLOB_MAXI=10,
ARRET='NON',
ITER_GLOB_MAXI=25,),
SOLVEUR=_F(METHODE='MUMPS',
RENUM='METIS',
NPREC=12,
ELIM_LAGR='NON',
STOP_SINGULIER='NON',),
ARCHIVAGE=_F(LIST_INST=ArcSchri,
CRITERE='RELATIF',
PRECISION=1e-06,),);
FIN();
How can I accelerate the solution/convergence? Is the model without COMPORTEMENT(...) equivalent to PETIT and ELAS?
Why does RESI_GLOB_RELA not go down by the typical powers of ten ("primarily at low-force times with static determinacy")? Is it a general effect of large meshes, stiffness jumps between mesh regions, bad element quality, ...?
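For reference, the NEWTON options I want to experiment with look like this (a sketch; REAC_ITER controls how often the tangent matrix is re-assembled):
NEWTON=_F(MATRICE='TANGENTE',
          REAC_ITER=1,              # re-assemble the tangent matrix at every Newton iteration
          PREDICTION='TANGENTE',),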
Thanks and greeting Markus
Hello,
many thanks! The *.unv mesh file is nearly 100 MB. It is a development project; I can't post it completely. I have modified the last points of DEFI_FONCTION, above the "E-modulus". Then I no longer get the error "On ne trouve pas la courbe de traction (mot-clef TRACTION) dans le matériau fourni". It is not the absolute solution, but it works now without this error.
"..i think i have already seen or made problem of that size or complexity.."
This is also my problem. Little models work, but big models raise questions. The project works fine with MECA_STATIQUE (small displacements, small stresses/strains), and also with STAT_NON_LINE without COMPORTEMENT= .. but with STAT_NON_LINE and COMPORTEMENT= I have a convergence offset.
» STAT_NON_LINE basic convergence question
Thanks and greeting Markus
Hello,
is it possible to test an alternative? How does the ELAS part work at this end?
S355Funk=DEFI_FONCTION(NOM_PARA='EPSI',
VALE=(0.00169 ,355.0 ,
0.08 ,500.0 ,
5.0 ,510.0 ,),
INTERPOL='LIN',
PROL_GAUCHE='LINEAIRE',
PROL_DROITE='LINEAIRE',);
# PROL_GAUCHE='EXCLU',);
S355=DEFI_MATERIAU(ELAS=_F(E=210000.0,
NU=0.3,
RHO=7.85e-09,),
TRACTION=_F(SIGM=S355Funk,),);
Can I use DEFI_MATERIAU only with TRACTION function, without ELAS?
It is a big model with 20 AFFE_CARA_ELEM and 5 DEFI_MATERIAU, 2D DKT, STAT_NON_LINE, PETIT_REAC, MPI .... I will try different setups.
[as usual] the problem is probably not lying where one thinks it is at first
the .comm and mesh file would help to understand..
Yes, this is the correct way. But I think the *.comm is not the primary starting point. I have tested the *.comm with a big model with only one DKT group, a uniform CAD model >> uniform mesh. It works all right.
How can I find problematic element quality with commands in the *.comm file?
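From within the command file, something like this might report quality figures (a sketch; I assume MACR_INFO_MAIL accepts these keywords, in addition to the VERI_JACOBIEN check I already use in AFFE_MODELE):
MACR_INFO_MAIL(MAILLAGE=NETZ,
               QUALITE='OUI',       # element quality figures
               TAILLE='OUI',        # element size statistics
               CONNEXITE='OUI',);   # connectivity check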
Thanks Markus
Hello,
many thanks. I don't understand the problem, because the remaining 99.9% of the elements in GROUP_MA=('EL2DPR01','EL2DPR02','EL2DPR04',) are correct.
Maybe the curve is not defined for epsi < 0.00169 because of PROL_GAUCHE = 'EXCLU' ?
I think for epsi < 0.00169 the ELAS definition should apply, shouldn't it?
Thanks Markus
Hello,
many thanks. My material functions look like this:
S355Funk=DEFI_FONCTION(NOM_PARA='EPSI',
VALE=(0.00169 ,355.0 ,
0.08 ,500.0 ,
5.0 ,510.0 ,),
INTERPOL='LIN',
PROL_DROITE='LINEAIRE',
PROL_GAUCHE='EXCLU',);
S355=DEFI_MATERIAU(ELAS=_F(E=210000.0,
NU=0.3,
RHO=7.85e-09,),
TRACTION=_F(SIGM=S355Funk,),);
Is this a mistake? But why only for so few elements?
Thanks Markus
Hello,
I have a big model with different AFFE_CARA_ELEM and different AFFE_MATERIAU. I have also tested the element quality. Now I get the following error.
! <EXCEPTION> <COMPOR5_1> !
! !
! On ne trouve pas la courbe de traction (mot-clef TRACTION) dans le matériau fourni. !
! !
! - !
! Contexte du message : !
! Option : RIGI_MECA_TANG !
! Type d'élément : MEDKTR3 !
! Maillage : NETZ !
! Maille : M30474 !
! Type de maille : TRIA3 !
! Cette maille appartient aux groupes de mailles suivants : !
! EL2DPR01 ALLE2DEL EL2DTRIA !
! Position du centre de gravité de la maille : !
! x=1312.662005 y=665.706398 z=1507.902874 !
!-!
!-!
! <EXCEPTION> <COMPOR5_1> !
! !
! On ne trouve pas la courbe de traction (mot-clef TRACTION) dans le matériau fourni. !
! !
! - !
! Contexte du message : !
! Option : RIGI_MECA_TANG !
! Type d'élément : MEDKTR3 !
! Maillage : NETZ !
! Maille : M3004 !
! Type de maille : TRIA3 !
! Cette maille appartient aux groupes de mailles suivants : !
! EL2DPR01 ALLE2DEL EL2DTRIA ... !
! Position du centre de gravité de la maille : !
! x=1019.073286 y=708.970468 z=37.550732 !
!-!
The listed Maille all have good mesh quality. Also, only "0.1%" of the elements in GROUP_MA="..." bring <EXCEPTION> <COMPOR5_1>; the remaining 99.9% are correct.
Why do only a few elements in GROUP_MA="..." produce this "..traction (mot-clef TRACTION) dans le matériau fourni."? I would be very happy about any information.
Thanks Markus
Hello,
basic question. We have a cluster with one head node and 4 compute nodes (one with 2x6 cores and 256 GB, and three with 2x10 cores and 256 GB each). Must the MPI version of code_aster be installed on all 4 compute nodes, or only on the head node? How must we manage this cluster so that, from client workstations under astk >>> configuration >> servers, it appears as one machine offering 72 cores and 1 TB RAM? Is there a document like U2.08.06 for this?
We would be happy about any information.
Thanks and greeting Markus
Hello,
cluster works now very well.
Thanks Markus
Hello,
FEM convergence in nonlinear statics is a big "work of art". Over the years I have often seen this effect in code_aster with STAT_NON_LINE:
The RESI_GLOB_RELA convergence does not go to zero but stops at a small offset value.
In this first time step I have:
- linear material behaviour, no plasticization
- small stresses and strains
- very small displacements
- static determinacy
- no active contact
- no degenerate elements (is the Jacobian test a must-have?)
- but very many elements, a big model
- the effect occurs in different models with different element types (1D, 2D or 3D elements)
A model test with MECA_STATIQUE and a check of the displacement results (DX, DY, DZ, DRX, DRY, DRZ) gives no indications.
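For reference, my understanding of the criterion (an assumption based on U4.51.03, to be checked) is roughly
RESI_GLOB_RELA = max_i |F_int,i - F_ext,i| / max_i |F_ext,i|
so the residual is normalised by the applied load vector; with a very small load in the first time step the denominator is small, and a round-off-level absolute residual may already show up as a visible offset (my interpretation).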
What effect can generate this offset in RESI_GLOB_RELA? I would be very happy about any information.
Thanks Markus