
#1 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-29 18:49:44

Hello,

Okay, I understand that.

The newest stable version seems to be 15.5.0 (htt ps://sourceforge.net/p/codeaster/src/ci/stable/tree/). If that doesn't work with the newest salome_meca 2021.0.0-2-20211014 container, then try it with the older salome_meca 2021.0.0-0-20210601 container. Hope it works!

Best regards,
VonPire

#2 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-29 18:28:31

Hello Mario,

The code_aster src 16.1.8 version (which seems to be the latest: htt ps://sourceforge.net/p/codeaster/src/ci/16.1.8/tree/) from SourceForge works with the newest salome_meca 2021.0.0-2-20211014 container using yours and Ing. Nicola's MPI version recipe. Just update the container and version names in the commands, and update pkginfo.py as well; see the rough outline below.
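
For anyone following along, the overall flow is roughly this (a rough sketch, not the full recipe; follow mf's and Ing. Nicola's posts for the exact steps — the paths and the waf_mpi wrapper name here are examples from my setup):

singularity shell salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif
# inside the container, with the 16.1.8 sources from SourceForge unpacked:
cd ~/codeaster-src-16.1.8
$EDITOR pkginfo.py        # update the version information to match 16.1.8
./waf_mpi configure --prefix=$HOME/aster/16.1.8_mpi
./waf_mpi install -j 8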

If I understood correctly, older versions of the code_aster src don't work because this new container, salome_meca 2021.0.0-2-20211014, ships a newer version of MUMPS.

Best regards,
VonPire

#3 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-29 14:33:30

Hello,

I'm using the Intel oneAPI Base and HPC toolkits. They are free; Intel states: "All Intel® oneAPI Toolkits products are available at no cost." Here is the source: htt ps://www.intel.com/content/www/us/en/developer/articles/news/free-intel-software-developer-tools.html (remove the space from the start)

I agree with you, Mario. This would be a lot easier and quicker if somebody with more of a software engineering background were helping.

Best regards,
VonPire

#4 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-29 14:12:54

Hello,

Yep, it would be nice to test code_aster with AMD's own optimized compilers and math libraries and see the difference from GNU and Intel. However, AMD provides EPYC-specific compiler options for the AOCC, GNU and Intel compilers, and Intel seems to work with the sequential version of aster-full-src 14.6.

I could give GNU a chance with different optimization flags; so far I have only tried the default flags. A starting point is sketched below.
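
For example, something like this could be a starting point for GCC on Zen 3 (my assumption based on AMD's compiler options guide, untested; -march=znver3 needs GCC 10.3 or newer, otherwise fall back to -march=znver2):

# Candidate optimization flags for aster on an EPYC 7443 (Zen 3), set before building:
export CFLAGS="-O3 -march=znver3 -mtune=znver3 -fPIC"
export F90FLAGS="-O3 -march=znver3 -mtune=znver3 -fPIC"
# The same strings can go into CFLAGS/F90FLAGS in aster-full's setup.cfg instead.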

I haven't yet managed to get the OpenMP and MPI versions of aster-full-src 14.6 working.

Best regards,
VonPire

#5 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-29 13:54:49

Hello Mario,

My CPU is the new AMD EPYC 7443: 24 cores, eight memory channels, 4 GHz boost clock, 128 MB of L3 cache, etc.

AMD also offers its own EPYC Zen 3/Milan-optimized AOCC compilers (clang, clang++, flang) and the AOCL math library (BLIS, ScaLAPACK, libFLAME, FFTW, LibM, AOCL-Sparse, AOCL-enabled MUMPS), but I haven't managed to get aster working with these, not even the sequential version.
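
For reference, this is roughly what I have been trying (a sketch; the install paths and version numbers are from my machine, adjust them to your AOCC/AOCL installation):

# Put the AOCC clang/flang compilers in the environment:
source /opt/AMD/aocc-compiler-3.2.0/setenv_AOCC.sh
export CC=clang CXX=clang++ F90=flang
# Point the math library at AOCL's BLIS + libFLAME instead of the default BLAS/LAPACK:
export MATHLIB="-L/opt/AMD/aocl/3.1.0/lib -lflame -lblis"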

Best regards,
VonPire

#6 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-28 12:57:18

Hello Mario,

Thank you for your reply.

I got a 30% performance gain (compared to the default GNU compilers and math library) with the "one-core"/sequential version of aster-full-src-14.6.0 on my new workstation when I compiled it with the Intel compilers and Intel MKL. I guess the gain would be even bigger with the parallel OpenMP and MPI versions compiled with the Intel compilers and MKL, so I think there is good motivation to investigate this and try to get it working.

Best regards,
VonPire

#7 Re: Salome-Meca installation » Questions about singularity version of salome_meca 2021 » 2022-03-28 10:25:33

Hello,

Great work from Ing. Nicola and mf on this MPI+PETSc version container.

I want to say that I'm a simulation engineer, not a software engineer, so I really need tips on these questions:

Do you guys know how to use the Intel compilers and the Intel MKL math library with this Singularity version?

What and where is the configuration file similar to setup.cfg (in aster-full-src-14.6.0)?

Is it even possible to compile/build aster, MUMPS, PETSc, etc. with the Intel compilers and Intel MKL in this Salome-Meca Singularity version?

Best regards,
VonPire

#8 Re: Salome-Meca installation » How to use Intel compilers and AMD mathlibs with SalomeMeca container? » 2022-03-24 21:01:47

Hello Nicolas,

Thank you for your answer.

Yes, I know that aster tries to use gfortran instead of ifort, even though I set ifort correctly in setup.cfg. The same Intel compilers + Intel MKL setup.cfg works fine with aster-full-src-14.6.0-1; I got 30% more performance with that Intel setup.cfg in the one-core 14.6 version.
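
For reference, the relevant part of that setup.cfg looks roughly like this (a sketch; setup.cfg uses Python syntax, and the oneAPI paths and versions are the ones from my machine, the same as in the setup.log excerpt in my post below):

# Compilers: point CC/CXX/F90 at the oneAPI binaries instead of gcc/g++/gfortran
CC  = '/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/icc'
CXX = '/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/icpc'
F90 = '/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/ifort'
# Math library: sequential Intel MKL instead of the bundled BLAS/LAPACK
MATHLIB = '-Wl,--start-group -L/opt/intel/oneapi/mkl/2022.0.2/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group'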

Maybe we can forget aster-full-src-15.2, because there is no benefit in getting it working: aster-full-src-15.2 ships the old MUMPS 5.2.1, which is no longer compatible with the newest aster-src from SourceForge. The newest aster-src uses MUMPS 5.4.1c.

The next question is how to build the code_aster MPI version with the Intel compilers and Intel MKL into the salome-meca .sif container.

Best regards,
VonPire

#9 Re: Salome-Meca installation » How to use Intel compilers and AMD mathlibs with SalomeMeca container? » 2022-03-23 10:31:12

Hello,

My monologue continues; I have made some progress:

I managed to install aster-full-src-14.6.0-1 (without MPI) with the Intel oneAPI compilers and Intel MKL, but precisely the same method and settings don't work with the unstable aster-full-src-15.2 version.
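
Roughly, the steps that worked for 14.6.0-1 were these (a sketch; the install prefix and the oneAPI path are from my setup):

# Load the oneAPI compilers and MKL into the environment:
source /opt/intel/oneapi/setvars.sh
cd aster-full-src-14.6.0
# Edit setup.cfg so that CC/CXX/F90/MATHLIB point at icc/icpc/ifort and MKL, then install:
python3 setup.py install --prefix=$HOME/aster/14.6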

The next thing is to test whether the AMD AOCL math libraries work with code_aster. After that, I will try to build the MPI version with the Intel compilers and custom math libraries.

Best regards,
VonPire

#10 Re: Salome-Meca installation » How to use Intel compilers and AMD mathlibs with SalomeMeca container? » 2022-03-22 12:22:37

Hello,

I'm still stuck with this. I even tried to compile the "aster-full-src-15.2.0" std version with the Intel oneAPI compilers, but there was an error during the aster installation. Everything else (MUMPS, med, hdf5, etc.) installed successfully.

Here are some setup.log details:

Compiler variables for aster (set as environment variables):
export               CC='/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/icc'
export           CFLAGS='-O3 -traceback -fPIC'
export       CFLAGS_DBG='-g  -traceback -fPIC'
export    CFLAGS_OPENMP='-openmp'
export              CXX='/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/icpc'
export           CXXLIB='-L/usr/lib/gcc/x86_64-linux-gnu/9 -lstdc++'
export          DEFINED='LINUX64 _USE_INTEL_IFORT _USE_OPENMP _DISABLE_MATHLIB_FPE'
export              F90='/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/ifort'
export         F90FLAGS='-O3 -fpe0 -traceback -fPIC'
export     F90FLAGS_DBG='-g  -fpe0 -traceback -fPIC'
export      F90FLAGS_I8=' -i8 -r8'
export  F90FLAGS_OPENMP='-openmp'
export               LD='/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/ifort'
export          LDFLAGS='-nofor_main'
export   LDFLAGS_OPENMP='-openmp'
export          MATHLIB='-Wl,--start-group -L/opt/intel/oneapi/mkl/2022.0.2/lib/intel64 -lmkl_intel_lp64 -L/opt/intel/oneapi/mkl/2022.0.2/lib/intel64 -lmkl_sequential -L/opt/intel/oneapi/mkl/2022.0.2/lib/intel64 -lmkl_core -Wl,--end-group'
export         OTHERLIB='-L/usr/lib/x86_64-linux-gnu -lpthread -L/usr/lib/x86_64-linux-gnu -lz -L/usr/lib/x86_64-linux-gnu -lpthread'

Filling cache...                                                       [  OK  ]
Checking permissions...                                                [  OK  ]

>>> Extraction <<<

entering directory '/tmp/install_aster.1717957'
Extracting aster-15.2.0.tgz...                                         [  OK  ]
--- 30900 files extracted
leaving directory '/tmp/install_aster.1717957'

>>> Configuration <<<

entering directory '/tmp/install_aster.1717957/aster-15.2.0'
leaving directory '/tmp/install_aster.1717957/aster-15.2.0'

>>> Configuration <<<

entering directory '/tmp/install_aster.1717957/aster-15.2.0'
leaving directory '/tmp/install_aster.1717957/aster-15.2.0'

>>> Configuration <<<

entering directory '/tmp/install_aster.1717957/aster-15.2.0'
Command line : export PATH=/tmp/tmp86r0b7_q:${PATH} ; . env.d/aster_full_std.sh ; ./waf configure --install-tests --prefix=/home/USER/aster/15.2
configure aster installation...                                       
Command output :
configure aster installation...                                        [FAILED]
Exit code : 1
readlink: missing operand

In config.log, for some reason, aster wants to use gfortran even though ifort was set up:

/usr/bin/gfortran
----------------------------------------
Checking for C compiler version
icc 20.2.1
----------------------------------------
Checking for Fortran compiler version
gfortran 9.4.0
----------------------------------------
fortran link verbose flag
==>
        PROGRAM MAIN
        END

<==
[1/2] Compiling build/std/.conf_check_e01dd803d6cb274f73e8aa4a2b8a1f70/test.f

['/usr/bin/gfortran', '-O3', '-fpe0', '-traceback', '-fPIC', '-c', '-o/tmp/install_aster.1717957/aster-15.2.0/build/std/.conf_check_e01dd803d6cb274f73e8aa4a2b8a1f70/testbuild/release/test.f.1.o', '/tmp/install_aster.1717957/aster-15.2.0/build/std/.conf_check_e01dd803d6cb274f73e8aa4a2b8a1f70/test.f']
err: gfortran: error: unrecognized command line option ‘-fpe0’
gfortran: error: unrecognized command line option ‘-traceback’

from /tmp/install_aster.1717957/aster-15.2.0: Test does not build: Traceback (most recent call last):
  File "/tmp/install_aster.1717957/aster-15.2.0/.waf3-2.0.20-df7687050314fa98c5daa534cb234b8c/waflib/Configure.py", line 335, in run_build
    bld.compile()
  File "/tmp/install_aster.1717957/aster-15.2.0/.waf3-2.0.20-df7687050314fa98c5daa534cb234b8c/waflib/Build.py", line 176, in compile
    raise Errors.BuildError(self.producer.error)
waflib.Errors.BuildError: Build failed
-> task in 'testprog' failed with exit status 1 (run with -v to display more information)

from /tmp/install_aster.1717957/aster-15.2.0: The configuration failed
==>
        PROGRAM MAIN
        END


So I'm not able to install even "aster-full-src-15.2.0" with the Intel oneAPI compilers. Tips and help would be welcome.
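
One guess I still want to test (untested, just an idea): waf's Fortran detection honors the FC environment variable, so forcing it to ifort before configure might stop it from picking /usr/bin/gfortran:

export FC=/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64/ifort
export F90=$FC
./waf configure --install-tests --prefix=$HOME/aster/15.2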

Best regards,
VonPire

#11 Salome-Meca installation » How to use Intel compilers and AMD mathlibs with SalomeMeca container? » 2022-03-16 11:51:13

VonPire
Replies: 4

Hello,

I have just built a new workstation (CPU: AMD EPYC 7443, RAM: 256 GB ECC DDR4-3200) and code_aster performance is quite poor, even though I use the MPI version of code_aster.

I have managed to build and install the newest code_aster 16.1.8 MPI version into the Salome-Meca container with the "default" prerequisites. However, I would like to build the code_aster MPI version into the Salome-Meca container with the AMD math libraries (AOCL) and the AMD (AOCC) or Intel compilers. I don't have any previous experience doing this on any machine, so help/tips would be welcome.

Here is an old thread discussing the same thing, but from a time (2010) when there was no .sif/container version:
htt ps://www.code-aster.org/forum2/viewtopic.php?id=13616  (Please remove the space from the beginning)

Which file or files should I modify? Before the .sif/container era, the right file was setup.py or setup.cfg.

Where can I find these files?

Has anyone tested these AMD AOCL libraries or AOCC compilers?

Can somebody help or give tips with this?
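
In the meantime, this is how I could at least look for candidate configuration files inside the container (a guess; the container file name is an example, and the wafcfg pattern is an assumption on my part):

# Open an interactive shell inside the Salome-Meca container:
singularity shell salome_meca-lgpl-2021.0.0-2-20211014-scibian-9.sif
# Then search for build configuration files of code_aster and its prerequisites:
find / -name "setup.cfg" 2>/dev/null
find / -path "*wafcfg*" -name "*.py" 2>/dev/null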

Edit:

Here are the links to the AOCC and AOCL documentation:
AOCC:
htt ps://developer.amd.com/wp-content/resources/57222_AOCC_UG_Rev_3.2.pdf (Please remove the space from the beginning)
AOCL:
htt ps://developer.amd.com/wp-content/resources/57404_AOCL_UG_Rev_3.1.pdf (Please remove the space from the beginning)

Best regards,
VonPire

#12 Re: Salome-Meca usage » Workstation/PC builds for simulations with Salome-Meca/Code_Aster MPI » 2022-01-27 17:11:38

Hello,

Thank you Mario for your reply.

I investigated processors from the blue company and from AMD and came to the conclusion that the EPYC 7443P has the best performance among processors in the 1.5 k€ to 2 k€ range. So yesterday I ordered an EPYC 7443P and a ROMED8-2T motherboard. I was lucky to find them at a good price in Poland; I searched for over a day, because they were out of stock in nearly every online shop in the Nordic countries and Germany.

Now I'm thinking about which RAM I should choose. The EPYC 7443P supports eight memory channels and DDR4-3200. The capacity would be 256 GB, so I need 8 memory sticks of 32 GB each. I have understood that memory for EPYC should also be ECC and registered, so I'm looking for DDR4-3200 ECC RDIMM 32 GB sticks.

Candidates could be:
Kingston KSM32RD4/32MEI
MICRON MTA18ASF4G72PDZ-3G2E1

However, the question arises whether I should use single-rank (1Rx4 or 1Rx8) or dual-rank (2Rx4 or 2Rx8) memory sticks to achieve maximum memory bandwidth and the best latencies. Do you guys know whether there is any benefit to single-rank vs. dual-rank RAM in FEA simulations?

Best Regards,
VonPire

#13 Re: Salome-Meca usage » Workstation/PC builds for simulations with Salome-Meca/Code_Aster MPI » 2022-01-24 11:49:06

Hello Jonas and Mario,

Thank you for commenting on my planned workstation build and sharing your experiences with your own workstations.

This led me to rethink the workstation build... The serious issue with both the 5950X and the i9-12900K is indeed the dual-channel-only memory support, resulting in low memory bandwidth: the 5950X has 47.68 GiB/s and the i9-12900K has 71.53 GiB/s (dual channel, but DDR5).
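
(For reference, these figures follow from channels × transfer rate × 8 bytes per transfer: one DDR4-3200 channel gives 3200 MT/s × 8 B = 25.6 GB/s, so dual channel is 51.2 GB/s ≈ 47.68 GiB/s; dual-channel DDR5-4800 gives 76.8 GB/s ≈ 71.53 GiB/s; and eight DDR4-3200 channels give 204.8 GB/s ≈ 190.7 GiB/s.)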

So now I have to think about how to keep the price from going much over 3 k€ while still having enough memory bandwidth and computing power. This leads to a battle between two processor candidates:

AMD EPYC 7443P
Year: 03/2021
Cores: 24
Threads: 48
RAM: DDR4-3200, up to 4 TB
Memory bandwidth: octa channel, 190.7 GiB/s
L1 cache: 1.5 MiB
L2 cache: 12 MiB
L3 cache: 128 MiB
Base clock: 2.85 GHz
Turbo clock: 4 GHz
TDP: 200 W
PCIe 4.0 lanes: 128
Price: 1650 €

AMD Ryzen Threadripper PRO 3955WX
Year: 07/2020
Cores: 16
Threads: 32
RAM: DDR4-3200, up to 2 TB
Memory bandwidth: octa channel, 190.7 GiB/s
L1 cache: 1 MiB
L2 cache: 8 MiB
L3 cache: 64 MiB
Base clock: 3.9 GHz
Turbo clock: 4.3 GHz
TDP: 280 W
PCIe 4.0 lanes: 128
Price: 1200 €

The question is: is the EPYC 7443P better because of the larger L1/L2/L3 caches and the additional 8 cores, despite the lower clock speeds?
Another question: could the 7443P be used in a workstation build (running Ubuntu 20.04 LTS), given that it is intended as a "server CPU"?

Best regards,
VonPire

#14 Salome-Meca usage » Workstation/PC builds for simulations with Salome-Meca/Code_Aster MPI » 2022-01-22 21:40:44

VonPire
Replies: 6

Hello Everyone,

I decided to open a new topic for open discussion about workstation/PC builds used for simulations with Salome-Meca/Code_Aster. The idea came from the fact that I'm currently researching and buying a new workstation build for code_aster MPI simulations.

It would be nice to hear what kind of workstation/PC builds people here have; any kind of benchmark/performance comparison between different builds would also be welcome.

For example, does the optimal workstation build differ in some way from workstation builds used for commercial FEA software?

I think this kind of discussion would help everyone build optimal workstations for professional or hobbyist simulations with Salome-Meca/Code_Aster and further narrow the performance gap to commercial software.

This is my current plan for build:

-Processor: AMD Ryzen 9 5950X; 3.4 GHz, 16 cores, 32 threads, L2 cache: 8 MB, L3 cache: 64 MB, 24 PCIe 4.0 lanes, dual-channel RAM support.

-Liquid cooling for processor: be quiet! Silent loop 2 360 mm

-RAM: G.Skill Ripjaws V DDR4-3600 CL16 QC – 128 GB; DDR4, 3600 MHz, latency: 16-16-16-36, non-ECC, Intel XMP 2.0

-SSD / Storage cooling: be quiet! MC1 PRO

-Motherboard: ASUS ROG CROSSHAIR VIII DARK HERO, Dynamic OC possibility, 24 PCIe 4.0 lanes.

-SSD storage: WD Black SN850 PCIe 4.0 NVMe M.2 – 1 TB; read speed: 7000 MB/s, write speed: 5300 MB/s

-Power supply: Corsair RM750x (2021) - 750 Watt

-Case: Corsair 4000D AIRFLOW TG – Black

-GPU: My old PALIT GTX 1060 6GB

A build like this costs about 2500 € (without the GPU).

What do you guys think? Would you recommend a dual-CPU Intel build or a single-CPU Threadripper/EPYC build instead of this? I wish the 5950X supported PCIe 5.0, DDR5 and quad-channel memory.

The new Intel i9-12900K would be tempting because of the lower price, the DDR5 and PCIe 5.0 support, and the faster single- and multi-core speeds in basic CPU benchmarks. However, this processor has only 24 threads and the L3 cache is only 30 MB; moreover, some uncertainty comes from its 8 P-cores (performance cores) + 8 E-cores (smaller, less powerful efficiency cores), and I wonder how this kind of CPU would work with code_aster.

Feel free to comment on this build, and you can also post your own workstation builds!

Best Regards,
VonPire

#15 Re: Salome-Meca usage » Threadripper/Threadripper Pro » 2022-01-20 15:29:46

Hello Mario,

This is a very interesting question. It would also be nice to know whether people have AMD EPYC builds.

At the moment I'm planning to buy and build a new calculation PC. I have a certain budget, and at the moment that points to the AMD Ryzen 9 5950X.

The downside is that the 5950X supports only dual-channel memory and DDR4 RAM...

I also found this processor comparison made with CalculiX; I attached it.

Best regards,
VonPire

#19 Salome-Meca usage » ETAT_INIT consumes a lot of calculation time » 2022-01-11 22:53:44

VonPire
Replies: 3

Hello everyone,

I'm using Salome Meca 2021.0 MPI version on Ubuntu 20.04 LTS.

I have a bigger calculation model which consists of two STAT_NON_LINE calls. The second STAT_NON_LINE uses ETAT_INIT with EVOL_NOLI. I noticed that using ETAT_INIT EVOL_NOLI at least doubles the time spent in the "intégration comportement" (behaviour integration) phase.

Is it normal that restarting the same analysis with precisely the same settings via ETAT_INIT takes at least twice as long? I also use precisely the same run parameters (12288 MB of memory, 2 MPI CPUs, 1 MPI node, 0 threads, stable_mpi 15.4.0).

I made a smaller test model to illustrate the same "problem" that I have with the bigger calculation model. With this smaller test model the absolute time difference is insignificant (0.16 s), but in percentage terms it is twice as time-consuming. With the larger model, a time step that takes 15 min without ETAT_INIT takes 30 min with ETAT_INIT.

The time difference comes from the behaviour integration phase; see the picture attached. The MED and COMM files are also attached.

Has anyone noticed similar behavior? Is this normal? Can somebody test this?

Best Regards,
VonPire

#20 Salome-Meca usage » Restart of unconverged Case & Post-processing of last converged step » 2021-12-21 13:44:36

VonPire
Replies: 0

Hello everyone !

I'm using Salome-Meca 2021 and Ubuntu 20.04 LTS.

I have tried, without success, to restart an unconverged RunCase and also to post-process the last converged step of an unconverged RunCase in a separate RunCase. I think this problem is due to the Salome-Meca 2021 AsterStudy module. I have read in other topics that this kind of problem shouldn't occur when using plain Code_Aster "standalone" or ASTK.

However, restarting the RunCase and post-processing the results in a separate RunCase works if the original simulation RunCase has converged and run fully to the end (green icon indicating a completed RunCase).



This is the procedure that works when the simulation RunCase has converged and run fully to the end:

General structure of the Simulation RunCase:

DEBUT()
...
...
resnonl = STAT_NON_LINE(
    ARCHIVAGE=_F(LIST_INST=listr),
    INCREMENT=_F(LIST_INST=listr),
    ...,
)

FIN()

General structure of the post-processing RunCase:

POURSUITE()

resnonl = CALC_CHAMP(
    ...,
    RESULTAT=resnonl,
    reuse=resnonl,
)

IMPR_RESU(
    FORMAT='MED',
    RESU=_F(RESULTAT=resnonl),
    UNITE=80,
)

FIN()

General structure of the restart RunCase:

POURSUITE()

resnonl = STAT_NON_LINE(
    ARCHIVAGE=_F(LIST_INST=storing),
    ETAT_INIT=_F(
        EVOL_NOLI=resnonl,
        NUME_ORDRE=16,
    ),
    INCREMENT=_F(LIST_INST=listr),
    RESULTAT=resnonl,
)

FIN()


This procedure no longer works for restarting or post-processing if the simulation RunCase stopped unconverged (the RunCase is shown in red), for example because the Newton iteration limit was hit at 80%/100% of a time step, even when we request a restart time (with NUME_ORDRE) before the unconverged time step. For some reason, the blue "reuse" button can no longer be activated if the simulation case hasn't fully converged, even though ARCHIVAGE is enabled (in STAT_NON_LINE).

Does anyone have a solution to this problem?

Best regards,
VonPire

#21 Re: Code_Aster installation » Problem with salome_meca-lgpl-2021.0.0-1-20210811-scibian-9 -> SIGSEG » 2021-11-20 00:04:02

Hello Cyprien and JMarcelino,

I have also had the same problem, but I managed to get around it by saving the study under a new name.

I have noticed that this problem sometimes appears after you have modified, run and saved the same study many times. Additionally, it often also requires restarting the computer if a new blank study also crashes Salome-Meca when going to ParaVis.

Best regards,
VonPire

#22 Re: Code_Aster usage » Contact with friction: -performance tetra/hexa - Contact algorithms » 2021-11-10 14:57:27

Hello Micke,

Have you tried whether the Mortar algorithm works with friction? At least in DEFI_CONTACT it is possible to use FORMULATION='CONTINUE', ALGO_CONT='LAC', APPARIEMENT='MORTAR', FROTTEMENT='COULOMB', COULOMB=0.1.

At least Mortar should work without friction (FROTTEMENT='SANS')...

Mortar is a segment-to-segment or face-to-face type of contact; it should capture penetration and contact pressures very precisely, and there is no need to tune penalization parameters.

Remember that you have to precondition the slave surface with CREA_MAILLAGE/DECOUPE_LAC when using the Mortar algorithm; see the sketch below.
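
Roughly, the setup would look like this (a sketch built only from the keywords above; the mesh, model and group names are placeholders):

# Precondition the slave surface for LAC (splits the slave-side elements):
mail2 = CREA_MAILLAGE(
    MAILLAGE=mail,
    DECOUPE_LAC=_F(GROUP_MA_ESCL='slave'),
)

# Mortar/LAC contact with Coulomb friction:
contact = DEFI_CONTACT(
    MODELE=model,
    FORMULATION='CONTINUE',
    FROTTEMENT='COULOMB',
    ZONE=_F(
        ALGO_CONT='LAC',
        APPARIEMENT='MORTAR',
        GROUP_MA_MAIT='master',
        GROUP_MA_ESCL='slave',
        COULOMB=0.1,
    ),
)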

For more info, refer to U4.44.11 (DEFI_CONTACT), U2.04.04 (general info and tips for contacts), R5.03.52 (the continuous method), R5.03.55 (the Mortar method) and code-aster.org/V2/UPLOAD/DOC/Formations/13-contact-friction.pdf (a contact and friction presentation).

Best regards,
VonPire

#23 Re: Salome-Meca installation » [Mesh] error while trying to mesh an object » 2021-03-26 17:16:25

I also have the same problem with Salome-Meca 2020 and Ubuntu 20.10 (Groovy Gorilla).

I tried to make a symbolic link from my system liblapack.so.3 to Salome's prerequisites folder, but it didn't solve the problem (note that the target path must be absolute):
ln -sf /lib/x86_64-linux-gnu/liblapack.so.3 /home/username/salome_meca/V2020.0.1_universal_universal/prerequisites/Lapack-380/lib/liblapack.so.3

I'm not good at using Ubuntu, so this was just a blind attempt.

Has anyone managed to solve this problem?

Best regards,
VonPire

#24 Re: Code_Aster usage » Shell results visualization » 2021-03-25 12:53:12

Hello Ioannis,

Thank you for answering my question.

Would it be possible with another post-processing program, then? GMSH?

If I remember correctly, even CalculiX's CGX can do this "3D" visualization for CalculiX shell and beam results?

Regards,
VonPire

#25 Code_Aster usage » Shell results visualization » 2021-03-24 12:21:48

VonPire
Replies: 3

Hello everyone,

Does anybody know if it is possible to visualize DKT or COQUE_3D shell results in ParaVis so that they look 3D? I want to see the bottom, (mid) and top surface stresses at the same time, with the thickness of the shell part also shown. In commercial FEA programs this seems to be a standard option when post-processing shell results.

I attached a picture to clarify what I mean.

At this point I'm only able to view the bottom, mid or top surface stresses on the midplane surface, separately.

Regards,
VonPire