Newsgroups: comp.parallel.mpi
From: chmbt@zeus.bris.ac.uk (MB. Taylor)
Reply-To: mark.taylor@bristol.ac.uk
Subject: Can you do mpi_reduce using same send and receive buffers?
Organization: Chemistry dept, Bristol University, UK
Date: Thu, 9 Nov 1995 10:30:51 GMT
Message-ID: <DHruJG.L9q@uns.bris.ac.uk>


Is it possible to sum a number of elements on different processes
without using a separate buffer to do so?

Suppose each process has a buffer buf1, and one wants the sum of buf1
over all processes to end up on every process.
What I'd like to be able to do is something like:

   call mpi_allreduce (buf1, buf1, count, datatype, mpi_sum, comm, ierr)

In the MPI document I can't find anything explicitly saying whether the
same buffer may be used for both send and receive, but since none of the
examples I've seen do this, I suspect it isn't allowed.
So instead I have to do something like:

   call mpi_allreduce (buf1, buf2, count, datatype, mpi_sum, comm, ierr)
   buf1 = buf2

which uses an extra buf's worth of memory.
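For concreteness, here is a fuller sketch of that two-buffer workaround
in Fortran 77. The array length n and the names buf1, buf2, ierr are
just illustrative, and the copy loop is written out explicitly:

```fortran
      program reduce_copy
      include 'mpif.h'
c     n is an assumed array length for illustration
      integer n, ierr, i
      parameter (n = 1024)
      real buf1(n), buf2(n)

      call mpi_init(ierr)

c     ... fill buf1 with local data here ...

c     sum buf1 over all processes into the separate buffer buf2,
c     then copy the result back, costing an extra n reals of memory
      call mpi_allreduce(buf1, buf2, n, MPI_REAL, MPI_SUM,
     &                   MPI_COMM_WORLD, ierr)
      do 10 i = 1, n
         buf1(i) = buf2(i)
 10   continue

      call mpi_finalize(ierr)
      end
```

The copy itself is cheap; the objection is purely the extra n reals
that buf2 ties up for the lifetime of the reduction.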

Does anybody know if I can in fact use the same buffer for send and receive
in a reduction operation?  Is there any other way of doing this without 
incurring the memory allocation penalties of the second approach?
I'm using the EPCC implementation of MPI on a Cray T3D. 

Thanks in advance for any helpful comments.

Mark

-----------------------------------------------------------------------
| Mark Beauchamp Taylor  -  physicist trapped in a chemist's body.    | 
| mark.taylor@bris.ac.uk    http://zeus.bris.ac.uk/~chmbt/index.html  |
| Department of Chemistry, University of Bristol, UK                  -------
-----------------------------------------------------| ... It's the future! |
                                                     ------------------------

