Newsgroups: comp.parallel.mpi
From: bjarne@poppel.ii.uib.no (Bjarne Geir Herland)
Subject: MPI_COMM_UNION ?
Keywords: mpi, parallel programming
Organization: OrgFreeware
Date: 12 Sep 1994 13:35:18 GMT
Message-ID: <351lem$dt6@due.uninett.no>



  Dear fellow MPI'ers,
 
  I'm desperately in need of an MPI_COMM_UNION-like function, which
  returns the union of two communicators to the nodes in *both* communicators.
  Let me illustrate my problem with an example:

  I start with nodes 1, 2, 3 and 4. These form my idle_comm communicator,
    thus idle_comm = {1,2,3,4}.
  Then, I choose nodes 1 and 2 to form a new communicator (using MPI_COMM_SPLIT()
    or whatever), and they start solving some task in parallel. I update
    my idle_comm, so idle_comm = {3,4}.
  After that, I decide that node 3 should do something as well, so I create
    another communicator for it and it starts working. I update idle_comm,
    so idle_comm is now {4}.
  Finally, nodes 1 and 2 have finished their task, and I want to update my
    idle_comm to be {1,2,4}. The natural solution would be to use
    some MPI_COMM_UNION() to obtain the new idle_comm, but this function does
    not exist. So - what do I do? I use MPI_COMM_GROUP() to get the
    two groups for the sub-communicators, then MPI_GROUP_UNION() to create
    the group for the new idle_comm, but then I cannot use MPI_COMM_CREATE()
    to form the new idle_comm, because I have no communicator on which I
    could call MPI_COMM_CREATE()... I cannot save and use the original
    idle_comm (or MPI_COMM_WORLD), because this would involve node 3, which
    is busy doing something else. Likewise, I cannot call MPI_COMM_CREATE()
    on just one of the sub-communicators and then send the result to the nodes
    in the other, because MPI_COMM_CREATE() demands that the group used to
    form the new communicator be a *subset* of the group of the communicator
    on which the operation is called (phew!).

    The problem is actually that I don't group the nodes together in the
    strict reverse order of the splitting... 
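  To make the dead end concrete, here is a rough C sketch of the sequence I
  am attempting (the function and variable names are mine, just for
  illustration; work_comm is the finished communicator for nodes {1,2} and
  idle_comm currently holds only node 4):

```c
#include <mpi.h>

/* Hypothetical sketch of the dead end described above. */
void try_union(MPI_Comm work_comm, MPI_Comm idle_comm)
{
    MPI_Group work_grp, idle_grp, union_grp;

    MPI_Comm_group(work_comm, &work_grp);            /* group {1,2}  */
    MPI_Comm_group(idle_comm, &idle_grp);            /* group {4}    */
    MPI_Group_union(work_grp, idle_grp, &union_grp); /* group {1,2,4} */

    /* Stuck here: MPI_Comm_create() requires a parent communicator
       whose group contains union_grp as a subset.  Neither work_comm
       nor idle_comm qualifies, and MPI_COMM_WORLD would drag in the
       busy node 3.  There is nothing valid to put in place of ???  */
    /* MPI_Comm_create(???, union_grp, &new_idle_comm); */

    MPI_Group_free(&work_grp);
    MPI_Group_free(&idle_grp);
    MPI_Group_free(&union_grp);
}
```

  The group operations themselves are local and succeed fine; it is only the
  collective MPI_Comm_create() step that has no legal parent communicator.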

  So - any ideas? I could, of course, create all possible communicators in
  the init-phase and pick the one needed, but with 2^N possible subsets of
  N nodes this is rather impractical for any number of nodes larger than 4.
  All solutions/suggestions are greatly appreciated! 
  
  Best regards,

  Bjarne Geir Herland
---
  Paragon Systems Engineer     \\Parallel processing lab/National MPP Centre \\
  ,-. ,-  ,- ,-   / / ,-  |-.  //Dept. of Informatics, University of Bergen, //
  |-' `-` |  `-` / /  `-` `-'  \\High Technology Centre, N-5020 Bergen,Norway\\
  Bjarne.Herland@ii.uib.no     //phone: +47 55 54 41 66  fax: +47 55 54 41 99//
---