
#1 2012-11-21 10:33:27

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

[SOLVED] MPI communication

Hello,
as stated here http://www.code-aster.org/forum2/viewtopic.php?id=18376
I'm currently starting to work on FSI with Code_Aster and OpenFOAM.
The idea is that the two codes communicate via MPI (e.g. Open MPI).

I have some questions concerning this MPI communication:

1. As I don't need to use MPI for parallel calculations, do I even need to compile a special version of Code_Aster?
2. Is there any documentation on how to include a new Fortran routine (which I need in order to invoke the MPI communication)?

I hope the questions are understandable.
Any comment is appreciated.

Best Regards,
Richard

Last edited by RichardS (2013-02-12 14:20:33)


Richard Szoeke-Schuller
Product Management
www.simscale.com
We are hiring! https://www.simscale.com/jobs/

Offline

#2 2012-11-21 15:37:40

Thomas DE SOZA
Guru
From: EDF
Registered: 2007-11-23
Posts: 3,066

Re: [SOLVED] MPI communication

RichardS wrote:

1. As I don't need to use MPI for parallel calculations, do I even need to compile a special version of Code_Aster?

It is recommended to compile a parallel version of Code_Aster even if you only plan to use MPI for interprocess exchange. Indeed, this will at least ensure that MPI initialization is done correctly (MPI_Init is called just after the executable has been launched) and provide you with some utilities that may come in handy.

RichardS wrote:

2. Is there any documentation on how to include a new Fortran routine (which I need in order to invoke the MPI communication)?

You may look at the D6 section of the Code_Aster manual. It deals with a lot of Code_Aster development possibilities. This may not cover what you need, but it will at least explain how to add a Fortran subroutine to the code.
In order to learn how to build a new executable with that subroutine, you need to read the ASTK manual: U1.04.00.

TdS

Offline

#3 2012-11-22 13:33:39

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hello Thomas,
thank you very much for your comment.

Thomas DE SOZA wrote:

It is recommended to compile a parallel version of Code_Aster even if you only plan to use MPI for interprocess exchange. Indeed, this will at least ensure that MPI initialization is done correctly (MPI_Init is called just after the executable has been launched) and provide you with some utilities that may come in handy.

I compiled an Aster version with Open MPI, leaving the options for the MUMPS, PETSc, ... solvers unchanged. I attached my config file.
After testing the parallelism of MULT_FRONT, it seems to work.

Thomas DE SOZA wrote:

You may look at the D6 section of the Code_Aster manual. It deals with a lot of Code_Aster development possibilities. This may not cover what you need, but it will at least explain how to add a Fortran subroutine to the code.
In order to learn how to build a new executable with that subroutine, you need to read the ASTK manual: U1.04.00.

You probably mean D5?
At least I was able to compile a simple "Hello World!" example with the help of the documentation.

      SUBROUTINE OP0199()
      IMPLICIT  NONE
C     Fortran example  

      include 'mpif.h'
      integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
      logical flag
   
      call MPI_INITIALIZED(flag,ierror)
      print*, 'ierror MPI_INITIALIZED:', ierror
      print*, 'flag   MPI_INITIALIZED:', flag
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
      print*, 'size',size
      print*, 'ierror ', ierror
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
      print*, 'node', rank, ': Hello world'
      print*, 'ierror ', ierror
      call MPI_FINALIZE(ierror)
      end

Here is the output of the significant part of my *.mess file:

  # ------------------------------------------------------------------------------------------
  # Commande No :  0002            Concept de type : -
  # ------------------------------------------------------------------------------------------
  HELLO_WORLD()

 ierror MPI_INITIALIZED:                    0
 flag   MPI_INITIALIZED: T
 size                    1
 ierror                     0
 node      140733193388032 : Hello world
 ierror                     0

Everything seems to run fine, e.g. MPI_INITIALIZED flag returns "T", which means MPI_INIT was already called.
But I get a strange rank of 140733193388032???

Do you have any suggestions?

Best regards,
Richard


Attachments:
config.txt, Size: 6.57 KiB, Downloads: 856


Offline

#4 2012-11-23 10:54:41

Thomas DE SOZA
Guru
From: EDF
Registered: 2007-11-23
Posts: 3,066

Re: [SOLVED] MPI communication

RichardS wrote:

I compiled an Aster version with Open MPI, leaving the options for the MUMPS, PETSc, ... solvers unchanged. I attached my config file.
After testing the parallelism of MULT_FRONT, it seems to work.

I wonder how the MPI initialization was done, as you're missing one macro in your config.txt (_USE_MPI); see my own config.txt attached. After adding it to your config.txt you need to clean the C library of Code_Aster and recompile:

$ as_run --vers=XXX --make clean bibc/* bibf90/* && as_run --vers=XXX --make
RichardS wrote:

You probably mean D5?

Right.

RichardS wrote:

At least I was able to compile a simple "Hello World!" example with the help of the documentation.
Here is the output of the significant part of my *.mess file:

  # ------------------------------------------------------------------------------------------
  # Commande No :  0002            Concept de type : -
  # ------------------------------------------------------------------------------------------
  HELLO_WORLD()

 ierror MPI_INITIALIZED:                    0
 flag   MPI_INITIALIZED: T
 size                    1
 ierror                     0
 node      140733193388032 : Hello world
 ierror                     0

Everything seems to run fine, e.g. MPI_INITIALIZED flag returns "T", which means MPI_INIT was already called.
But I get a strange rank of 140733193388032???

Do you have any suggestions?

First try to compile with _USE_MPI. If it does not work, you must make sure that integers passed to or retrieved from MPI are 32 bits long (INTEGER*4 type).

See mpicm0.F, mpicm1.F and mpicm2.F in bibf90/utilitai for examples.

TdS

Offline

#5 2012-11-23 11:34:08

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hello Thomas,
thanks for your comments!

Thomas DE SOZA wrote:

I wonder how the MPI initialization was done, as you're missing one macro in your config.txt (_USE_MPI); see my own config.txt attached. After adding it to your config.txt you need to clean the C library of Code_Aster and recompile: ...

I'm also curious where MPI_Init is called. I knew that the correct option would be to add _USE_MPI, but I got the following compilation errors and chose to give it a try without that define:

<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/fetmpi.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpichk.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpicm1.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpisst.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpicm0.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpialr.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/utilitai/mpicm2.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/utilitai.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/debug/dbgmpi.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/debug.msg)
<E>_COMPIL_ERROR   error during compiling /temp-5/rs/code_aster/unstable/bibf90/debug/mpierr.F (see /temp-5/rs/code_aster/STA11.2/obj/aster/debug.msg)

I had a look at these *.msg files, but they appear to be empty.

Thomas DE SOZA wrote:

If it does not work, you must make sure that integers passed to or retrieved from MPI are 32 bits long (INTEGER*4 type).

That's exactly what caused this large integer to appear. Now my output is as expected.
I'm still wondering when MPI_Init is called; I'll have to trace it back...

Best Regards,
Richard

Last edited by RichardS (2012-11-23 11:35:31)



Offline

#6 2012-11-23 13:09:19

Archibald Archambaud
Member
From: Clamart, France
Registered: 2007-12-03
Posts: 322

Re: [SOLVED] MPI communication

Hi,

MPI_Init is called in the C layer of Aster: $ASTER_ROOT/NEW11/bibc/supervis/python.c.

int
_MAIN_(argc, argv)
    int argc;
    char **argv;
{
    int ierr;
#ifdef _USE_MPI
    int rc;
    rc = MPI_Init(&argc,&argv);
    if (rc != MPI_SUCCESS) {
         fprintf(stderr, "MPI Initialization failed: error code %d\n",rc);
         abort();
    }
#endif
    /* ... rest of the function ... */

AA

Offline

#7 2012-11-23 18:03:42

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Thanks Archibald,
this means that MPI is not initialized, so the values I got from my test function are just nonsense.
I tried to get a minimal communication between the function I compiled into Aster and a standalone version,
but with no positive result. Inside Aster it always returns rank 0, even if mpiexec or mpirun tell me that its
rank should be 1.

So I tried to go further with the compilation using the _USE_MPI option.
As no message was in the error message files utilitai.msg/debug.msg, I tried to debug it manually with ASTK.
I mainly got messages about a nonexistent IMPLICIT type:

      IAUX2 = MPI_SUCCESS
                         1
Error: Symbol 'mpi_success' at (1) has no IMPLICIT type

So I tried to declare a type for all these occurrences.

After that, the compilation runs further and ends with this error:

 Compilation of commands catalogue

creating directory /temp-5/rs/code_aster/unstable/commande              [  OK  ]
copying .../unstable/catapy...                                          [  OK  ]
adding a symbolic link /tmp/rs-newton-build.19568/global/global/asteru to /temp-5/rs/code_aster/unstable/asteru...
                                                                        [  OK  ]
adding a symbolic link /tmp/rs-newton-build.19568/global/global/Python/Cata/cata.py to /tmp/rs-newton-build.19568/global/global/cata.py...
                                                                        [  OK  ]
copying /temp-5/rs/code_aster/unstable/elements...                      [ SKIP ]
compilation of commands                                                 [FAILED]
Exit code : 256
*** The MPI_comm_size() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[newton:12148] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
End of the Code_Aster execution - MPI exits normally

I found something similar here:
http://www.code-aster.org/forum2/viewtopic.php?id=13680

But unfortunately I couldn't find a solution (for the moment I'm trying to avoid using MPICH2).
Any idea where my problem could arise?

Regards,
Richard

Last edited by RichardS (2012-11-23 18:06:55)



Offline

#8 2012-11-23 18:26:07

Thomas DE SOZA
Guru
From: EDF
Registered: 2007-11-23
Posts: 3,066

Re: [SOLVED] MPI communication

MPI_SUCCESS is defined by mpif.h in Fortran, so something is definitely wrong with your MPI installation (maybe the wrappers).

For the MPI init problems, try to clean the whole thing before recompiling; there may have been some mismatches ("as_run --vers=XXX --make clean").

TdS

Offline

#9 2012-11-25 13:38:36

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hello Thomas,
of course I cleaned everything before recompiling...

In the meantime I tried to install a parallel version on my laptop as described here:
https://sites.google.com/site/codeaster … aster-11-2

Of course I tested both my Open MPI installation and the parallel MUMPS version with the provided examples.
Everything was OK.

The compilation runs OK:
I tested the liste_short test cases; 29 end OK and 9 end with ALARM, nothing special.

But I noticed some strange behaviour:

- If I run the same study several times (using as_run study.export), the path of the temporary directory
  gets longer every time, adding a "/global" subdirectory. After several starts I have something like:

P rep_trav /tmp/richi-richi-laptop-interactif.32460/global/global/global/global/global/global/global/global/global

- If I run a study in parallel on n procs, every command is executed n times!

So definitely something is wrong with my configuration. Still lost...
Any ideas?

Best Regards,
Richard

...perhaps this should be moved to Installation.

Last edited by RichardS (2012-11-25 13:41:02)


Attachments:
config.txt, Size: 4.85 KiB, Downloads: 756


Offline

#10 2012-11-26 17:36:19

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hello again,

concerning

RichardS wrote:

- If I run a study in parallel on n procs, every command is executed n times!

sorry for my stupidity; of course this is the right behaviour.
If I understand it correctly, for the sequential commands (i.e. those not involving MUMPS, PETSc, ...) only the execution of PROC 0
is relevant, and in the case of parallel computations the solver handles the parallelism itself, distributing the work to each processor.

As I have a (correctly) working MPI installation on my laptop, I tested the communication of Aster with a compiled version of the hello-world example. Running mpirun like:

mpirun -np 1 --hostfile /opt/aster_mpi/codeaster/mpi_hostfile /tmp/...../mpi_script.sh : -np 1 HELLO_WORLD

leads to Aster waiting indefinitely before executing the command after HELLO_WORLD() (in my case FIN()).
I think this is due to some checking done within mpichk.f:
Aster is waiting for all procs to exit HELLO_WORLD(), but as only one of the two procs invokes HELLO_WORLD() (the other is outside of Aster), this check never finishes and Aster waits forever.

So my conclusion:
- As I do not need any parallelism, I will stick to a sequential version of Code_Aster, checking myself that MPI_INIT() and MPI_FINALIZE() are invoked correctly.

Indeed, with this simplification the communication works!
...finally I can start the real work.

Best Regards,
Richard



Offline

#11 2013-03-10 23:11:58

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:

Hello Thomas,

At least I was able to compile a simple "Hello World!" example with the help of the documentation.

      SUBROUTINE OP0199()
      IMPLICIT  NONE
C     Fortran example  

      include 'mpif.h'
      integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
      logical flag
   
      call MPI_INITIALIZED(flag,ierror)
      print*, 'ierror MPI_INITIALIZED:', ierror
      print*, 'flag   MPI_INITIALIZED:', flag
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
      print*, 'size',size
      print*, 'ierror ', ierror
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
      print*, 'node', rank, ': Hello world'
      print*, 'ierror ', ierror
      call MPI_FINALIZE(ierror)
      end

Here is the output of the significant part of my *.mess file:

  # ------------------------------------------------------------------------------------------
  # Commande No :  0002            Concept de type : -
  # ------------------------------------------------------------------------------------------
  HELLO_WORLD()

 ierror MPI_INITIALIZED:                    0
 flag   MPI_INITIALIZED: T
 size                    1
 ierror                     0
 node      140733193388032 : Hello world
 ierror                     0

Everything seems to run fine, e.g. MPI_INITIALIZED flag returns "T", which means MPI_INIT was already called.
But I get a strange rank of 140733193388032???

Best regards,
Richard

Hi Richard,
I'm trying to compile a Fortran subroutine (for example mypro.f) and call it from Aster. Can you guide me through the details of how to do that, please?
How do I call the subroutine "mypro" from Code_Aster's command file (file.comm)?

Thank you so much
Sheva

Last edited by sheva_bk (2013-03-11 00:11:04)

Offline

#12 2013-03-10 23:18:10

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:

Hello,
as stated here http://www.code-aster.org/forum2/viewtopic.php?id=18376
I'm currently starting to work on FSI with Code_Aster and OpenFOAM.
The idea is that the two codes communicate via MPI (e.g. Open MPI).

I have some questions concerning this MPI communication:

1. As I don't need to use MPI for parallel calculations, do I even need to compile a special version of Code_Aster?
2. Is there any documentation on how to include a new Fortran routine (which I need in order to invoke the MPI communication)?

I hope the questions are understandable.
Any comment is appreciated.

Best Regards,
Richard

Hi Richard,

Can you point me to some useful documents on MPI? I want to study it for the FSI coupling work.

Thank you so much,
Sheva

Offline

#13 2013-03-11 14:03:36

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

sheva_bk wrote:

Hi Richard,
I'm trying to compile a Fortran subroutine (for example mypro.f) and call it from Aster. Can you guide me through the details of how to do that, please?
How do I call the subroutine "mypro" from Code_Aster's command file (file.comm)?

Thank you so much
Sheva

Hello Sheva,
the first thing you need, if you want to call your own subroutine from a command file, is a new operator.
How to implement a new operator is explained in document D5.01.01.
As I struggled a bit implementing my first operator, I will try to write down all the necessary steps:

1. Every operator needs a number; by default these range from 0001 to 0199. Most of the numbers are already occupied by existing commands, others are not. In order not to conflict with existing code, you should start numbering your own operators from 0200. To make numbers greater than 199 available to Code_Aster, the following changes have to be made:

- add a file ex0200.f to bibfor/supervis, similar to ex0100
- edit execop.f in bibfor/supervis so that EX0200 is called

2. Assuming your first Aster command should be called 'MYPRO', has no return value (so it's a 'PROC' and not an 'OPER') and no input, and you want to assign number 200 to it:
- save your Fortran subroutine in bibf90/utilitai (probably not the best place, but OK for the beginning) and name it OP0200.F (the name of your subroutine should be OP0200).
- assign the name 'MYPRO' to it by creating a file mypro.capy in catapy/commande and assigning the number 200 to it:

MYPRO=PROC(nom="MYPRO", op=200,
           reentrant='n',
           fr="my first aster command",
           );

- create a file op0200.f in the fermetur directory by copying an existing op0*** file and adjusting the name.

3. Recompile Code_Aster.

Now you should be able to use MYPRO() inside a command file.

I attached an archive containing all the files for adding a 'HELLO_WORLD' command to your Code_Aster installation.
Just copy the files into the right directories and recompile Code_Aster. Using HELLO_WORLD() in a command file should print 'HELLO WORLD!' to your mess file.

I hope this helps get you started.

Best regards,
Richard

Last edited by RichardS (2013-03-12 11:52:33)


Attachments:
HELLO_WORLD_CA.tar.gz, Size: 2.39 KiB, Downloads: 633


Offline

#14 2013-03-11 16:26:44

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

Hi Richard,

With your help I now understand how to call a Fortran program from Code_Aster. However, when I recompile Code_Aster, there are some errors in the file execop.f:

extracting 'python.o' from libaster.a...                                [  OK  ]
creating .../STA11.2/asteru...                                          [FAILED]
Exit code : 256
/opt/aster/STA11.2/lib/libaster.a(execop.o): In function `execop_':
execop.f:(.text+0x17a): undefined reference to `utgtme_'
execop.f:(.text+0x424): undefined reference to `utptme_'
execop.f:(.text+0x4ac): undefined reference to `jermxd_'
execop.f:(.text+0x508): undefined reference to `utgtme_'
collect2: ld returned 1 exit status

<E>_LINK_ERROR     error during linking

I think some links to subroutines such as 'utgtme' are missing. Can you fix it, please?
Anyway, when I use the free subroutines such as op0053.f, op0058.f, op0061.f, op0062.f, op0063.f, op0064.f, op0065.f or op0066.f,
it works well!

One more question: what type of data (sd_prod) do you assign for the CFD results? Do you have to create a new data structure for the fluid field?

Thank you so much!
Best regards,
Sheva

Offline

#15 2013-03-11 19:08:04

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hello Sheva,
I think the problem arose because I uploaded the files for V11.3 and you are using V11.2.
I added execop_V112.F to the archive for use with V11.2.
Please test again; it should work now.

Best regards,
Richard

Last edited by RichardS (2013-03-11 19:38:02)



Offline

#16 2013-03-11 19:20:20

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

sheva_bk wrote:

One more question: what type of data (sd_prod) do you assign for the CFD results? Do you have to create a new data structure for the fluid field?

I don't use a particular datatype for the CFD solution. I receive the pressure data from the fluid solver as a list of reals via the MPI communication (the correct ordering is crucial here). With this data I compute the correct nodal values and apply them as a boundary condition.
This is done with a command similar to MODI_CHAR_FSI (all files in /bibfor/echange are worth a look in this context!).
Then I restart a DYNA_NON_LINE calculation with the changed boundary conditions and the correct time interval (you should have a look at the macro CALC_IFS_DNL).

The advantage of using MPI communication is that you don't have to compile your fluid solver "into" Code_Aster and no data transfer via hard-disk writing/reading is needed.

Good luck!

Best regards,
Richard



Offline

#17 2013-03-12 10:23:46

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:

Hello Sheva,
I think the problem arose because I uploaded the files for V11.3 and you are using V11.2.
I added execop_V112.F to the archive for use with V11.2.
Please test again; it should work now.

Best regards,
Richard

Hi Richard,

Thank you so much for your help! However, I don't find the file execop_V112.F in your archive file *.tar.gz. Where is it?

Regards,
Sheva

Offline

#18 2013-03-12 11:47:49

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:
sheva_bk wrote:

One more question: what type of data (sd_prod) do you assign for the CFD results? Do you have to create a new data structure for the fluid field?

I don't use a particular datatype for the CFD solution. I receive the pressure data from the fluid solver as a list of reals via the MPI communication (the correct ordering is crucial here). With this data I compute the correct nodal values and apply them as a boundary condition.
This is done with a command similar to MODI_CHAR_FSI (all files in /bibfor/echange are worth a look in this context!).
Then I restart a DYNA_NON_LINE calculation with the changed boundary conditions and the correct time interval (you should have a look at the macro CALC_IFS_DNL).

The advantage of using MPI communication is that you don't have to compile your fluid solver "into" Code_Aster and no data transfer via hard-disk writing/reading is needed.

Good luck!

Best regards,
Richard

Thanks for your answer!
It sounds so interesting! Can you give me some useful tutorials on MPI communication for this kind of coupling? I'm also a newbie with MPI. :(
I imagine a coupling algorithm like this:

couplagefs.JPG


I wonder how code Aster and the CFD code finish their calculation and WAIT for new data from each other? What commands did you use?

Thank you so much!

Regards,
Sheva

Last edited by sheva_bk (2013-03-12 11:52:52)

Offline

#19 2013-03-12 11:56:20

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hopefully now everything is OK.


sheva_bk wrote:
RichardS wrote:

Hello Sheva,
I think the problem arose because I uploaded the files for V11.3 and you are using V11.2.
I added execop_V112.F to the archive for use with V11.2.
Please test again; it should work now.

Best regards,
Richard

Hi Richard,

Thank you so much for your help! However, I don't find the file execop_V112.F in your archive file *.tar.gz. Where is it?

Regards,
Sheva



Offline

#20 2013-03-15 01:04:00

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:

Hopefully now all is ok.

It worked well.
And... how about my other question?

Thank you Richard
Sheva

Offline

#21 2013-04-02 16:07:44

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

Hi Richard,

You know I'm trying to couple code Aster and our CFD code for a transient FSI problem. I've introduced our code into Aster's sources as an operator (subroutine), plus some other new routines for exchanging the coupling data. The coupling loop is done by calling our code for the CFD and the command DYNA_LINE_TRANS for the CSD; all of this is in the Python commands (file .comm). It seems to work well with the sequential version! But with the parallel version I always get blocked by the error "Erreur numérique (floating point exception)". I can't debug the parallel codes, so I haven't found where the error is! I note that our standalone code works well.
I have temporarily left this method and am thinking about using the MPI communicator to exchange the data. It looks better than the method above, because recompiling one big code is not easy! I know you have a lot of experience with MPI; can you give me some help, please?
Firstly, I wonder how to identify the rank of each code?
for example : one processor for code Aster: rank = 0
                    four processors for the CFD code: rank = 1, 2, 3, 4
                    After that we can exchange the data between the processors with the MPI routines MPI_SEND, MPI_RECV, ....
Secondly, how do code Aster and the CFD code finish their calculation and WAIT for new data from each other?
Thank you so much,

Best regards,
Sheva

Offline

#22 2014-05-22 17:24:47

sheva_bk
Member
Registered: 2012-03-14
Posts: 49

Re: [SOLVED] MPI communication

RichardS wrote:

Hello,
as stated here http://www.code-aster.org/forum2/viewtopic.php?id=18376
I'm currently starting to work on FSI with Code_Aster and OpenFOAM.
The idea is that the two codes communicate via MPI (e.g. Open MPI).

I have some questions concerning this MPI communication:

1. As I don't need to use MPI for parallel calculations, do I even need to compile a special version of Code_Aster?
2. Is there any documentation on how to include a new Fortran routine (which I need in order to invoke the MPI communication)?

I hope the questions are understandable.
Any comment is appreciated.

Best Regards,
Richard

Hi Richard,

I guess you finished this FSI work for your thesis! Does your algorithm work well with MPI communication?
I am still interested in this idea of two codes communicating via MPI. Can you give me some guidance on how to do that? Do you use the Python layer (like a platform) to manage the coupling interaction?

I need your help! Thanks in advance!
Sheva

Offline

#23 2014-05-22 20:16:06

RichardS
Member
From: Munich, Germany
Registered: 2010-09-28
Posts: 551
Website

Re: [SOLVED] MPI communication

Hi Sheva,

sheva_bk wrote:

Hi Richard,
I guess you finished this FSI work for your thesis! Does your algorithm work well with MPI communication?

Yes, I finished it some time ago. In the end everything went well with the MPI communication; apart from the parallelization of the individual solvers themselves, everything worked and the results were quite satisfying. I think I will ask my supervisors whether I would be allowed to post my thesis here (or at least some relevant parts).

sheva_bk wrote:

I am still interested in this idea of two codes communicating via MPI. Can you give me some guidance on how to do that? Do you use the Python layer (like a platform) to manage the coupling interaction?

Sure, I can try. Here is just the short form:
Three MPI processes are started simultaneously via mpirun: Code_Aster, OpenFOAM, and a third one, the "supervisor"/coupler. The supervisor tells each code what to do, like "compute the next time increment" or "edit the time increment and compute from the same start time", and it also communicates the results from one solver to the other. All the communication is done on the Fortran side for Code_Aster; for the others it's C/C++. Basically Aster stays in a loop until the supervisor tells it what to do next. The supervisor also checks for convergence and has some additional features to improve convergence (some relaxation methods).

I hope that helps you a bit. In the meantime I will try to post my thesis, but there won't be any programming aspects in it.
Best regards,
Richard



Offline

#24 2021-07-01 19:34:29

Fabrizio
Member
Registered: 2020-11-20
Posts: 27

Re: [SOLVED] MPI communication

Dear Richard, I am writing to you from Argentina; I am a relatively new Code_Aster user.
In general I use UMATs to implement constitutive laws.
Today I need to call a subroutine (written in Fortran 90) before calling UMAT, which is in charge of loading the characteristics of the material's microstructure.
I read that you managed to create a new Code_Aster command that calls an external subroutine. Where can I see information about this?
I also saw that you attached a simple example of how to do it.
Where do I find the bibfor/supervis and bibf90/utilitai folders?

I hope you can help me

Best regards

Offline