Newsgroups: comp.parallel.mpi
From: gropp@godzilla.mcs.anl.gov (William Gropp)
Subject: Re: MPI_ALLGATHER question
Organization: MCS, Argonne National Laboratory
Date: Fri, 5 Jul 1996 19:04:39 GMT
Message-ID: <836593479313@godzilla.mcs.anl.gov>

In article <31DD5E1A.41C6@optimus.cee.cornell.edu>,
Matt Willis  <matt@optimus.cee.cornell.edu> wrote:
>I'm not sure that I understand the logic behind MPI_ALLGATHER. I'm using
>Fortran on an SP2 and preparing columns of a matrix in parallel, then
>using an MPI_ALLGATHER to distribute. 
>
>The syntax of MPI_ALLGATHER is:
>
>call MPI_ALLGATHER(WorkMtx, send_count, MPI_DOUBLE_PRECISION, 
>	AssmbledMtx, recv_count, MPI_DOUBLE_PRECISION, 
>	MPI_COMM_WORLD, ierr)
>
>Is it possible to have the send_count different from the recv_count?
>This is an issue when the size of the matrix doesn't divide evenly by
>the number of processors (i.e., usually). Also, why do I have to
>specify the MPI_TYPE twice?

You want to use MPI_ALLGATHERV instead.  It is more general than you 
need, but MPI_ALLGATHER requires the same number of elements from each
sender.
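
As a sketch (names and sizes are mine, not from your post): each process
computes the per-rank counts and displacements, then passes them to
MPI_ALLGATHERV.  For a matrix with NCOLS columns over nprocs processes,
the first mod(NCOLS,nprocs) ranks get one extra column each.

      program gatherv_sketch
      include 'mpif.h'
      integer NROWS, NCOLS, MAXP
      parameter (NROWS = 100, NCOLS = 13, MAXP = 128)
      double precision WorkMtx(NROWS, NCOLS)
      double precision AssmbledMtx(NROWS, NCOLS)
      integer counts(0:MAXP-1), displs(0:MAXP-1)
      integer myrank, nprocs, ierr, i, base, extra

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

c     Divide NCOLS columns as evenly as possible; counts are in
c     elements (doubles), i.e. columns * NROWS.
      base  = NCOLS / nprocs
      extra = mod(NCOLS, nprocs)
      displs(0) = 0
      do 10 i = 0, nprocs-1
         counts(i) = base * NROWS
         if (i .lt. extra) counts(i) = counts(i) + NROWS
         if (i .gt. 0) displs(i) = displs(i-1) + counts(i-1)
 10   continue

c     ... fill the first counts(myrank)/NROWS columns of WorkMtx ...

      call MPI_ALLGATHERV(WorkMtx, counts(myrank),
     &     MPI_DOUBLE_PRECISION, AssmbledMtx, counts, displs,
     &     MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
      end

The send count may differ from rank to rank; the counts and displs
arrays tell every receiver how much arrives from whom and where it
lands.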

You need to remember that the datatypes need not be the primitive ones;
all that must match is the type signature.  So, you could send 10
MPI_DOUBLE_PRECISION and receive a single indexed type containing 10
MPI_DOUBLE_PRECISION elements.
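
Roughly like this on the receive side (an untested fragment, with made-up
displacements; the point is only that the signatures agree):

      integer coltype, ierr, i
      integer blens(10), idispls(10)

c     One block of 1 double every NROWS elements, e.g. to drop the
c     incoming values into one row of a column-major array.
      do 20 i = 1, 10
         blens(i)  = 1
         idispls(i) = (i-1) * NROWS
 20   continue
      call MPI_TYPE_INDEXED(10, blens, idispls,
     &     MPI_DOUBLE_PRECISION, coltype, ierr)
      call MPI_TYPE_COMMIT(coltype, ierr)

c     Receive count 1 of coltype: its signature is still
c     10 x MPI_DOUBLE_PRECISION, matching a send of
c     (10, MPI_DOUBLE_PRECISION).

That is why the type appears twice: the sender and receiver each
describe their own buffer layout, and only the signatures have to match.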

Bill Gropp

