Newsgroups: comp.parallel.mpi
From: David Wolff <wolffd@ucs.orst.edu>
Subject: MPI_Pack question
Organization: Oregon State University
Date: Mon, 26 Aug 1996 10:39:42 -0700
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Message-ID: <Pine.OSF.3.91.960826102640.17114B-100000@ucs.orst.edu>


Hello!

I have a couple of questions regarding buffering:

1.  To use MPI_Pack, I need to allocate a buffer large enough to hold 
all the data I want to pack into it.  Since the sizes of my messages 
vary, currently I just allocate one large buffer, bigger than the 
maximum message size I expect, and use it for all my messages.  This 
seems inefficient, because I believe MPI sends the entire buffer 
whether it is full or not.  I am familiar with the routine 
MPI_Pack_size, but my problem is that the data is not all of the same 
type.  Hence, I would need to walk through the entire message, call 
MPI_Pack_size for each type, and add up all the results.  Again, this 
seems like it would take a lot of extra time, and the code needs to be 
efficient.  Is there a better way to do this?

2.  My second question is about sending MPI_PACKED data.  As I 
understand it, MPI_Send does NOT block until a matching receive is 
posted; I believe the data is copied to some buffer until it is 
received.  If that is correct, then when I call MPI_Send with an 
MPI_PACKED buffer, am I free to clear the buffer even before a 
matching receive has been posted?

Thanks in advance for any help.

   Dave Wolff
   Oregon State University


