Newsgroups: comp.parallel.mpi
From: Matt Willis <matt@optimus.cee.cornell.edu>
Subject: MPI_ALLGATHER question
Organization: Cornell University
Date: Fri, 05 Jul 1996 14:25:30 -0400
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <31DD5E1A.41C6@optimus.cee.cornell.edu>

I'm not sure that I understand the logic behind MPI_ALLGATHER. I'm using
Fortran on an SP2 and preparing columns of a matrix in parallel, then
using an MPI_ALLGATHER to distribute. 

The syntax of MPI_ALLGATHER is:

call MPI_ALLGATHER(WorkMtx, send_count, MPI_DOUBLE_PRECISION, 
	AssmbledMtx, recv_count, MPI_DOUBLE_PRECISION, 
	MPI_COMM_WORLD, ierr)

Is it possible to have the send_count different from the recv_count?
This is an issue when the size of the matrix doesn't divide evenly by
the number of processors (i.e., usually). Also, why do I have to
specify the MPI type twice?

In my test program I have 8 columns distributed across three processors:

Processor:      0      1      2
Columns:        3      3      2
send_count:    3*lda  3*lda  2*lda    (lda is the leading dim of mtx.)
recv_count:    3*lda  3*lda  3*lda
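For what it's worth, the uneven layout above looks like the case
MPI_ALLGATHERV is meant for: it takes an array of per-task receive
counts plus displacements instead of a single recv_count. A sketch of
what I think the call would look like for the 3/3/2 split (assuming
lda and the buffers from the call above; the counts/displs arrays are
mine, not from my actual code):

```fortran
      integer recv_counts(3), displs(3)
c     per-task column counts times lda, matching the table above
      recv_counts(1) = 3*lda
      recv_counts(2) = 3*lda
      recv_counts(3) = 2*lda
c     where each task's block starts in AssmbledMtx (in elements)
      displs(1) = 0
      displs(2) = 3*lda
      displs(3) = 6*lda
      call MPI_ALLGATHERV(WorkMtx, send_count, MPI_DOUBLE_PRECISION,
     &     AssmbledMtx, recv_counts, displs, MPI_DOUBLE_PRECISION,
     &     MPI_COMM_WORLD, ierr)
```

Here send_count would still differ per task (3*lda or 2*lda), but the
receive side knows every task's count, so the blocks land contiguously
with no garbage column.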

I tried calling MPI_ALLGATHER using the smaller send_count on
processor 2, but received this error:

ERROR: 0032-126 Inconsistent message size(1600) in MPI_Allgather, task 2
ERROR: 0031-250  task 2: Terminated
ERROR: 0031-250  task 0: Terminated
ERROR: 0031-250  task 1: Terminated

So I get the feeling I have to use 3*lda everywhere, which means the
recv_buf must be at least 9 columns, to hold a garbage column. Is this
thinking correct? If this is the case, what's the deal with all the
extra information in the MPI_ALLGATHER call? 
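If that reading is right, the padded workaround would look something
like this (a sketch only; max_cols = 3 is my name for the largest
per-task column count, and the padding entries are garbage):

```fortran
      integer max_cols
      max_cols = 3
c     every task sends max_cols*lda doubles, padding the short task
      call MPI_ALLGATHER(WorkMtx, max_cols*lda, MPI_DOUBLE_PRECISION,
     &     AssmbledMtx, max_cols*lda, MPI_DOUBLE_PRECISION,
     &     MPI_COMM_WORLD, ierr)
c     AssmbledMtx must then hold nprocs*max_cols = 9 columns, and the
c     8 valid columns are no longer contiguous -- task 2's block still
c     starts at column 7, so they'd have to be compacted by hand.
```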

___________________________________________________________________
Matthew Willis                          Environmental Systems Group 
mbw8@cornell.edu                                 Cornell University

