Newsgroups: comp.parallel.mpi
From: sanders@ira.uka.de (Peter Sanders)
Subject: Re: MPI_Alltoall() on Cray T3D
Organization: Universitaet Karlsruhe, Germany
Date: 13 Nov 1996 14:11:32 +0100
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <t1rk9rquol7.fsf@i90s25.ira.uka.de>



> >MPI_Alltoall() call has many restrictions.
> >For example, send and receive buffers have to be contiguous and
> >the receive message size has to be specified. It's very bad
> >if the receiver doesn't know the message size of the sender.
>
> True. The only application I know of where the receiver does not
> have this knowledge is parallel sample sort. 
> I'd like to hear from
> people if any other application shares this characteristic. 

What about transposing a sparse matrix? There the number of
nonzeros each process receives depends on the sparsity pattern,
so the receiver cannot know the message sizes in advance.
Actually, I would guess that the current form of
MPI_Alltoallv has fewer applications than a variant where
only the sender determines the actual message lengths.
The trouble is that the more general form
would require a quite complicated interface.
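For what it's worth, the usual workaround on top of the current
interface is to exchange the per-destination counts first with a
plain MPI_Alltoall, and only then call MPI_Alltoallv. A rough
sketch (my own, untested; error checking and datatype generality
omitted):

```c
/* Sketch of a variable-size all-to-all where only the sender
 * knows the message lengths.
 * Phase 1: exchange the per-destination counts (MPI_Alltoall).
 * Phase 2: build displacements, size the receive buffer exactly,
 *          and do the real data exchange (MPI_Alltoallv).
 * Error checking omitted for brevity. */
#include <mpi.h>
#include <stdlib.h>

void alltoall_variable(const double *sendbuf, const int *sendcounts,
                       double **recvbuf_out, int *recvcounts,
                       MPI_Comm comm)
{
    int p, i, stotal = 0, rtotal = 0;
    int *sdispls, *rdispls;
    double *recvbuf;

    MPI_Comm_size(comm, &p);

    /* Phase 1: every process learns how much it will receive
     * from every other process. */
    MPI_Alltoall((void *)sendcounts, 1, MPI_INT,
                 recvcounts, 1, MPI_INT, comm);

    /* Exclusive prefix sums over the counts give the
     * send and receive displacements. */
    sdispls = (int *)malloc(p * sizeof(int));
    rdispls = (int *)malloc(p * sizeof(int));
    for (i = 0; i < p; i++) {
        sdispls[i] = stotal;  stotal += sendcounts[i];
        rdispls[i] = rtotal;  rtotal += recvcounts[i];
    }

    /* Phase 2: now the receive buffer can be allocated exactly. */
    recvbuf = (double *)malloc(rtotal * sizeof(double));
    MPI_Alltoallv((void *)sendbuf, (int *)sendcounts, sdispls,
                  MPI_DOUBLE,
                  recvbuf, recvcounts, rdispls, MPI_DOUBLE, comm);

    *recvbuf_out = recvbuf;
    free(sdispls);
    free(rdispls);
}
```

This costs one extra collective plus an extra allocation, which is
exactly the kind of overhead a sender-determined variant of the
interface could hide inside the library.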

Regards,

Peter Sanders
University of Karlsruhe

