Newsgroups: comp.parallel.mpi
From: torres@cc.gatech.edu (torres)
Subject: Re: Collective calls and MIMD?
Organization: College of Computing, Georgia Tech
Date: 2 Mar 1996 00:01:49 -0500
Message-ID: <4h8knt$f1k@lennon.cc.gatech.edu>

In article <4h7rod$qbr@watnews1.watson.ibm.com>, Marc Snir  <snir> wrote:
>I would suggest that you read the MPI material on communicators in general, and
>intercommunicators, in particular.  If you want different groups of processes
>to execute different collective communication calls, you don't use
>MPI_COMM_WORLD.  If you want client-server code, you use intercommunicators.

  Thank you for your advice.

  I may be missing something, but:

   * In order to set up the INTERcommunicators you have to issue some initial
     collective calls that involve the use of MPI_COMM_WORLD. For instance,
     we use MPI_Comm_split to create the initial INTRAcommunicators, and at
     the beginning the only available communicator is MPI_COMM_WORLD. Even
     the MPI_Comm_dup function is collective. This is no big deal if the
     only existing processes are our servers and the clients of our servers,
     but the problems start if there is a second set of servers unrelated
     to ours.
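
     To make the point concrete, here is a sketch (not from the original
     post) of the setup sequence just described; the role assignment
     (even ranks = servers), leader ranks, and tag are illustrative
     assumptions. Note that both MPI_Comm_split and MPI_Intercomm_create
     are collective, so every process in MPI_COMM_WORLD must participate:

```c
/* Sketch: split MPI_COMM_WORLD into server/client INTRAcommunicators,
 * then bridge them with an INTERcommunicator.  Run with >= 2 processes,
 * e.g. mpirun -np 4 a.out.  Role values, leader ranks, and the tag are
 * assumptions for illustration. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm local_comm, inter_comm;
    int world_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Assume even world ranks are servers (color 0), odd ranks are
     * clients (color 1).  MPI_Comm_split is collective over
     * MPI_COMM_WORLD: every process must make this call. */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

    /* Bridge the two intracommunicators.  World rank 0 leads the
     * servers, world rank 1 leads the clients; the peer communicator
     * is again MPI_COMM_WORLD. */
    int remote_leader = (color == 0) ? 1 : 0;
    MPI_Intercomm_create(local_comm, 0 /* local leader */,
                         MPI_COMM_WORLD, remote_leader,
                         99 /* arbitrary tag */, &inter_comm);

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}
```

     The sketch works when servers and clients are the only members of
     MPI_COMM_WORLD; it is exactly the initial collective step over
     MPI_COMM_WORLD that breaks down once unrelated process groups share
     that communicator.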

   * How can we protect our servers (who obviously are members of
     MPI_COMM_WORLD) from any collective call under MPI_COMM_WORLD issued
     by an arbitrary client? (Actually, our servers won't be affected, but
     the processes that issue these collective calls will never finish,
     because they will wait for our servers to concur with a collective
     call unknown to them.)

  Our goal is to write servers that can work with arbitrary clients (not
necessarily written by ourselves). It seems that this should not be impossible
for a message-passing system that claims to support MIMD programming.

  Still looking for an answer,
-- 
-Francisco Jose Torres-Rojas (torres@cc.gatech.edu)

