
#1 Re: Salome-Meca usage » Pretension Bolted Connection » 2021-03-15 15:07:07

mf wrote:

The attached example shows such a connection, but it's a poor example because the connection itself is very badly designed (you should never connect 2 bodies like this :-) ). Pay attention to how the bolt relaxes at t=1: the previously applied pretension is lowered by the bodies being pulled together. In this example the pretension drops from 163 kN (DIN 8.8 M24 bolt) at t=0 to 127 kN at t=1 when they relax.


Hi Mario, thank you for your example, it helped me a lot. I got my bolts configured as Euler beams now, and things are working just fine!

The bolts are working OK, the stress level is correct, and I can see the relaxation in the intermediate time step. Printing the results to a text file is also very useful!

I had some trouble with thermal loads on the beams (this is something that I also found in an older post made by you). With the help of Johannes, I managed to solve that part of the problem.

Thanks for the great help, regards,


#2 Re: Code_Aster usage » [SOLVED] POU_D_E elements and thermal analysis » 2021-03-15 14:04:41

Thank you Johannes_ACKVA! Sorry for my late reply, it took me a while to configure everything the right way, but after several tries it worked just fine! You're right, I'm running a linear thermal analysis, but the mechanical part is non-linear and has several contacts with Coulomb friction and bolts modeled as Euler beams.

Following your advice, I separated the 3D elements from the beams using the GROUP_MA keyword in both the thermal and mechanical analyses. In particular, the AFFE_MATERIAU command in the mechanical analysis seemed to be the most critical part due to the AFFE_VARC option, and once this was correctly configured, the test cases started to work.
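For anyone finding this thread later, the separation I mean looks roughly like the sketch below. This is a hedged sketch only: the group names ('solid3D', 'bolts'), the material name, and the thermal results 'thproj'/'thbeam' are placeholders, not the actual names from my study.

```python
# Sketch: assign the material per mesh group, and restrict each AFFE_VARC
# entry to its own GROUP_MA so the projected thermal field only touches
# the 3D part, while the beams get a separately built temperature field.
CHMAT = AFFE_MATERIAU(
    MAILLAGE=mesh,
    AFFE=(
        _F(GROUP_MA='solid3D', MATER=steel),
        _F(GROUP_MA='bolts',   MATER=steel),
    ),
    AFFE_VARC=(
        # temperature projected from the 3D thermal analysis
        _F(GROUP_MA='solid3D', NOM_VARC='TEMP', EVOL=thproj, VALE_REF=20.0),
        # temperature assigned manually to the POU_D_E beams
        _F(GROUP_MA='bolts',   NOM_VARC='TEMP', EVOL=thbeam, VALE_REF=20.0),
    ),
)
```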

Also, one should be very careful with the setup of the time-stepping function to account for the pretightening of the bolts, and with the projection of the thermal results onto the mechanical model.
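As a rough illustration of what I mean by the time-stepping and the projection (placeholder names; the instants and subdivisions are only an example: pretension up to t=1, remaining loads up to t=2):

```python
# Time list: pretension applied on [0, 1], remaining loads on [1, 2]
listr = DEFI_LIST_REEL(DEBUT=0.0,
                       INTERVALLE=(_F(JUSQU_A=1.0, NOMBRE=5),
                                   _F(JUSQU_A=2.0, NOMBRE=10)))

# Multiplier function for the service loads: zero during pretightening
ramp = DEFI_FONCTION(NOM_PARA='INST',
                     VALE=(0.0, 0.0, 1.0, 0.0, 2.0, 1.0))

# Projection of the thermal result onto the mechanical model, 3D groups only
thproj = PROJ_CHAMP(RESULTAT=resther,
                    MODELE_1=modelth, MODELE_2=modmeca,
                    VIS_A_VIS=_F(GROUP_MA_1='solid3D',
                                 GROUP_MA_2='solid3D'))
```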

Thank you again for the great help, regards,


#3 Re: Code_Aster usage » [SOLVED] POU_D_E elements and thermal analysis » 2021-03-05 21:14:43

Hi guys,

I see that this post is several months old, but I'm running into the same issues when modeling bolts as Euler beams.

I have a thermomechanical problem that combines 3D and beam elements, and POU_D_E elements cannot be used in a thermal model. My problem consists of 2 stages, thermal and mechanical. In the mechanical problem, I project the thermal solution field. The problem is that I can't project anything onto the POU_D_E elements, since these are not taken into account in the thermal problem.

In the mechanical problem, is there a way to project the thermal field onto the rest of the 3D model, and then manually assign temperatures only to the beam elements?

I assume that one can only use a single AFFE_MODELE and a single AFFE_MATERIAU statement in a problem, right? Any help is greatly appreciated, regards,
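In case it helps to frame the question, what I have in mind is something like the following sketch (hypothetical names, and a made-up constant beam temperature of 80 degrees): build a temperature field only on the beam group and wrap it as an EVOL_THER, so it could be fed to AFFE_VARC next to the projected field.

```python
# Constant temperature field restricted to the beam group (value assumed)
Tbeam = CREA_CHAMP(TYPE_CHAM='NOEU_TEMP_R', OPERATION='AFFE',
                   MAILLAGE=mesh,
                   AFFE=_F(GROUP_MA='bolts', NOM_CMP='TEMP', VALE=80.0))

# Wrap the field as a thermal result so it can be used as a command variable
thbeam = CREA_RESU(OPERATION='AFFE', TYPE_RESU='EVOL_THER',
                   NOM_CHAM='TEMP',
                   AFFE=(_F(CHAM_GD=Tbeam, INST=0.0),
                         _F(CHAM_GD=Tbeam, INST=2.0)))
```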


#4 Re: Salome-Meca usage » Pretension Bolted Connection » 2021-03-02 20:02:25

Thank you Mario.

I'm about to test all this in my model. I'll let you know what happens.



#5 Re: Salome-Meca usage » Pretension Bolted Connection » 2021-03-01 12:32:18

mf wrote:

If you are interested in the stress distribution WITHIN the bolt, you should model the bolt itself (in 2D or 3D), so it will be something similar to the above example.

Hi Mario,

I've been studying and testing your files; they are very interesting and helped me a lot. One of the key aspects is the correct use of time-stepping, so that loads and pretensions are correctly assigned to the model.

I never used DEFI_CONTACT as in your file; I will give it a try. I assume that COEF_PENA_CONT is imposed so that the bodies do not penetrate one another. I've read in the forum that the value of this coefficient is some orders of magnitude higher than the Young's modulus, which is the case in your example.
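To make sure I understood, here is a minimal sketch of what I think the contact definition looks like. The group names and the numerical values are my assumptions, with COEF_PENA_CONT a few orders of magnitude above E = 210000 MPa for steel.

```python
# Penalized contact with Coulomb friction between the two plates (sketch)
contact = DEFI_CONTACT(MODELE=modmeca,
                       FORMULATION='CONTINUE',
                       FROTTEMENT='COULOMB',
                       ZONE=_F(GROUP_MA_MAIT='plateTopFace',
                               GROUP_MA_ESCL='plateBotFace',
                               ALGO_CONT='PENALISATION',
                               COEF_PENA_CONT=2.1e8,   # ~1e3 x E, assumed
                               ALGO_FROT='PENALISATION',
                               COEF_PENA_FROT=2.1e7,   # assumed
                               COULOMB=0.15))
```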

Since your bolts are Euler beams, and their ends are glued to the 3D bodies, there is no need to model contact between the shank of the bolt and the hole, right? In the case of 3D bolts, this contact should be considered, or at least replaced with LIAISON_MAIL and the 'DNOR' option.

I assume that, since you're modeling the bolts as Euler beams, the pretension is a CREA_CHAMP with TYPE_CHAM='ELGA_SIEF_R' and NOM_CMP=('N', ) (N is for the normal force, I think). If the bolts are modeled as 3D solids, which CREA_CHAMP would be the equivalent of your example?

This pretension is then used to create an initial state with ETAT_INIT, which is applied at INST=0.0 of resnon1.
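So, if my understanding is right, the chain would be roughly as follows (a sketch with assumed names; the 163 kN comes from your DIN 8.8 M24 example):

```python
# Axial pretension force N on the beam Gauss points (ELGA stress field)
sigb = CREA_CHAMP(TYPE_CHAM='ELGA_SIEF_R', OPERATION='AFFE',
                  MODELE=modmeca, PROL_ZERO='OUI',
                  AFFE=_F(GROUP_MA='bolts', NOM_CMP='N', VALE=163.0e3))

# First computation: the pretension enters as the initial stress state
resnon1 = STAT_NON_LINE(MODELE=modmeca, CHAM_MATER=CHMAT,
                        EXCIT=_F(CHARGE=glueBOLT),
                        ETAT_INIT=_F(SIGM=sigb),
                        COMPORTEMENT=_F(RELATION='ELAS'),
                        INCREMENT=_F(LIST_INST=listr, INST_FIN=1.0))
```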

In resnon2, the load CHARGE=glueBOLT is of type TYPE_CHARGE='DIDI'; I've read above that this is to account for the stressed state of the material.
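Written out, I understand the second computation roughly like this (again a sketch; 'service' and the other names are placeholders):

```python
# Second computation restarts from the pretensioned state; the glue load is
# flagged 'DIDI' so its imposed relations are taken relative to that state
resnon2 = STAT_NON_LINE(MODELE=modmeca, CHAM_MATER=CHMAT,
                        EXCIT=(_F(CHARGE=glueBOLT, TYPE_CHARGE='DIDI'),
                               _F(CHARGE=service)),
                        ETAT_INIT=_F(EVOL_NOLI=resnon1),
                        COMPORTEMENT=_F(RELATION='ELAS'),
                        INCREMENT=_F(LIST_INST=listr))
```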

Is this analysis correct? Thank you for your help,


#6 Re: Salome-Meca usage » Pretension Bolted Connection » 2021-02-25 23:46:51

Thank you Mario for your reply and example. I will look into it now.

I'm modeling the bolts as 3D solids because I'm interested in the stress distribution. The preload stress is 75% of the material yield stress. You're right, not only the geometrical setup (dividing geometry and mesh) is time-consuming, but also the nonlinear solver execution. The PRE_EPSI option required dividing the volume mesh of the shank part of the bolt in order to assign the negative pre-deformation.

I will try your example and see the effect on the steel plates. Perhaps I am not being careful enough with the time steps.

Kind regards,


#7 Re: Salome-Meca usage » Pretension Bolted Connection » 2021-02-25 21:24:07

Hi, I have some questions on using bolts with a preload in Code_Aster. I have been reading many posts in the forum and testing some examples. Apparently, there are many ways to simulate a preload or tightening in a bolt.

I have two steel plates that are connected, one on top of the other, by 4 bolts. The bolts don't have nuts; the threads are glued with LIAISON_MAIL to the lower plate. I am using contacts with a Coulomb friction coefficient.

* One option is AFFE_CHAR_MECA with PRE_EPSI. From what I've read, this is a pre-deformation. However, I don't see the steel plates being pushed or pressed together as expected with a preload.

* Another option I found is using CREA_CHAMP with TYPE_CHAM='ELNO_SIEF_R'. However, I suspect that this creates a stress field compatible with the preload, but it is not a preload itself. Again, I don't see the steel plates being pushed against each other.

* Another is AFFE_CHAR_MECA_F with LIAISON_GROUP. I have not tested this option, but it appears in one of the test cases on the Code_Aster web page.
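For the first option, what I tried looks roughly like this (a sketch; the group name 'shank' and the strain value are my own, and the bolt axis is assumed to be Z):

```python
# Pre-deformation on the bolt shank: a negative axial strain that should
# shorten the shank and, in principle, clamp the plates together
preten = AFFE_CHAR_MECA(MODELE=modmeca,
                        PRE_EPSI=_F(GROUP_MA='shank',
                                    EPZZ=-3.0e-3))   # assumed magnitude
```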

Can anyone tell me whether one of these options will create pressure between the plates? Which would best suit my needs?

Thank you in advance,


#8 Re: Code_Aster installation » Code_Aster inside a Docker container available for everyone » 2021-02-25 21:06:08

Thanks for the help Mario. Things are working OK now. I had a mechanical problem that needed 3 hours and 8 minutes to solve when running on a single core of my CPU.

In parallel, things are much better:

* When using Docker, and configuring (mpi_nbcpu=3 / ncpus=2), 57 minutes are needed for the solution.

* When using Docker, configuring (mpi_nbcpu=3 / ncpus=2), and following your advice on MATR_DISTRIBUEE='OUI' in STAT_NON_LINE and PARTITIONNEUR='METIS' in AFFE_MODELE, 47 minutes are needed for the solution.
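For reference, the relevant settings end up in two places (a sketch; the mesh and model names are placeholders): the run parameters in the .export file, and the partitioning/solver keywords in the command file.

```python
# In the .export file (assumed snippet):
#   P mpi_nbcpu 3
#   P ncpus 2

# In the command file: distribute the model over subdomains with METIS
modmeca = AFFE_MODELE(MAILLAGE=mesh,
                      AFFE=_F(TOUT='OUI', PHENOMENE='MECANIQUE',
                              MODELISATION='3D'),
                      DISTRIBUTION=_F(METHODE='SOUS_DOMAINE',
                                      PARTITIONNEUR='METIS'))

# ...and keep the distributed matrix option in the nonlinear solver:
# STAT_NON_LINE(..., SOLVEUR=_F(METHODE='MUMPS', MATR_DISTRIBUEE='OUI'), ...)
```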

In my opinion, the scalability of this parallel problem is great. Of course, the optimal "mpi_nbcpu" and "ncpus" parameters depend on the CPU type. Cheers,


#9 Re: Code_Aster installation » Code_Aster inside a Docker container available for everyone » 2021-02-11 20:30:24

Great Mario, thank you for your reply.

I will try those options in AFFE_MODELE, and see what happens.

Kind regards,


#10 Re: Code_Aster installation » Code_Aster inside a Docker container available for everyone » 2021-02-11 18:42:40


I'm trying to run cases in parallel, and I would like to ask about the correct configuration of the "mpi_nbcpu" and "ncpus" variables for a multi-core CPU with one socket. I have an Intel i7-8700K CPU; it has 6 cores with 2 threads per core (12 threads total) on one socket.

I've successfully installed the Docker image following tianyikillua's instructions on GitHub; all tests run OK in both parallel and sequential mode.

I've been doing some tests with cases that need approximately 8 hours to compute on a single core using MUMPS. So far, the best configuration tested has "mpi_nbcpu = 3" and "ncpus = 2", with a total computational time of approximately 6 hours with MUMPS.

Does anyone have other thoughts about this? Perhaps the limit is the single socket?

Thank you in advance,


#11 Introduce yourself / Présentez vous » Presentation » 2021-02-11 18:22:19

Replies: 0

Hello everyone,

I'm a researcher from Argentina. I'm a mechanical engineer, and I have a PhD in Computational Mechanics.

I've been using Salome-Meca for a few months. In my opinion, it is a great piece of software.

In the near future, I would like to create my own user routines and UMATs. I'm currently trying to run cases in parallel.

Best regards,