Newsgroups: comp.parallel.mpi
From: A Gordon Smith <smith>
Subject: Re: How to do this on MPI nicely
Organization: Department of Computer Science, Edinburgh University
Date: Mon, 27 Nov 1995 10:24:30 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <DIp697.G40.0.staffin.dcs.ed.ac.uk@dcs.ed.ac.uk>

pikus@sbphy.physics.ucsb.edu (Fedor G. Pikus) wrote:
>I want to do the following two things on MPI:
>1) Each node holds a real number and 2 integer numbers; I need to find
>the node which has the smallest real number and broadcast its numbers (both
>real and integer) to all nodes. The numbers which other nodes held
>should be discarded and overwritten by the numbers from the selected
>node.


Hello Fedor,

This sounds like a job for the MPI_MINLOC reduction operator. It operates
on (value, index) pairs: the result is the pair with the minimal value,
and ties are broken by taking the minimal index. You can use this to find
the rank of the process holding the minimal real value by pairing each
process's data value with its rank as the index. Such a pair can be held
in a 2-element REAL array, described by the MPI datatype MPI_2REAL.

Example (forgive me my Fortran):

      PROGRAM minloc
      IMPLICIT NONE
      INCLUDE 'mpif.h'

      REAL datarank(2)
      REAL mindatarank(2)
      INTEGER idata(2)
      INTEGER root, rank, ierr


      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

C     Dummy data: a real value paired with this process's rank (stored
C     as a REAL, since both halves of an MPI_2REAL pair are REAL),
C     plus two integers belonging to this process.
      datarank(1) = (100.0 - rank) + (rank / 100.0)
      datarank(2) = rank
      idata(1) = 1000 + rank
      idata(2) = 2000 + rank

      call MPI_ALLREDUCE(datarank, mindatarank, 1, MPI_2REAL,
     +                   MPI_MINLOC, MPI_COMM_WORLD, ierr)

C     Everyone now holds the minimal value; the rank stored alongside
C     it identifies the root for broadcasting the integers.
      datarank(1) = mindatarank(1)
      root = NINT(mindatarank(2))
      call MPI_BCAST(idata, 2, MPI_INTEGER, root,
     +               MPI_COMM_WORLD, ierr)

      write (*,*) 'Results: ', datarank(1), idata

      call MPI_FINALIZE(ierr)
      END


>2) There is an array, each node computes few elements of it, then I need
>each node to have the entire array.
>

This sounds like MPI_ALLGATHER (or MPI_ALLGATHERV, if the processes
contribute different numbers of elements), possibly requiring some
derived-datatype magic.
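
For the equal-count case, something along these lines should work
(NLOCAL and the 64-process ceiling are arbitrary choices for
illustration, not part of your problem):

      PROGRAM gather
      IMPLICIT NONE
      INCLUDE 'mpif.h'

      INTEGER NLOCAL
      PARAMETER (NLOCAL = 4)
      REAL part(NLOCAL)
      REAL whole(NLOCAL * 64)
      INTEGER rank, i, ierr

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

C     Each process computes its own NLOCAL elements ...
      DO 10 i = 1, NLOCAL
         part(i) = rank * NLOCAL + i
 10   CONTINUE

C     ... then every process receives the whole array, with the
C     contributions ordered by the rank of the contributing process.
      call MPI_ALLGATHER(part, NLOCAL, MPI_REAL,
     +                   whole, NLOCAL, MPI_REAL,
     +                   MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
      END

If the per-process counts differ, replace the MPI_ALLGATHER call with
MPI_ALLGATHERV, which takes per-process receive counts and displacements.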


>--
>                                  Fedor G. Pikus
>WWW: http://www.physics.ucsb.edu/~pikus/
>E-Mail: pikus@physics.ucsb.edu
>Department of Physics, University of California at Santa Barbara.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 -=-=- A. Gordon Smith -=- Edinburgh Parallel Computing Centre -=-=-
 =-=  mailto:smith@epcc.ed.ac.uk -=- Phone {+44 (0)131 650 6712} =-=
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


