Newsgroups: comp.parallel.pvm,comp.parallel.mpi
From: Chet Vora <chet@brc.uconn.edu>
Subject: Re: PVM send buffers
Organization: University of Connecticut
Date: 20 Mar 1996 17:42:26 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <4ipg22$761@orson.eng2.uconn.edu>

Hi, 

I can't send mail to your address. It bounces with the error: "554 too many
hops 26 (25 max): from <chet@eng2.uconn.edu> via wrangler.lanl.gov, to
<rbarrett@ranger.lanl.gov>".

Anyway, in your earlier mail, you wrote

> It is quite inconvenient in terms of passing interior
> domain boundaries in that the data is scattered about memory. Hundreds
> of calls to pvm_pack per time step leads to an obvious lack of
> performance. Easy solution: do my own memory management, resulting in a
> contiguous buffer of data.

What kind of memory management are you referring to? Do you malloc one
large buffer for all the structs at once? The drawback (applicable to
some cases) of that scheme is that you can't later free just part of the
memory if you no longer need it.
I am working on a similar application, and I am looking into the
possibility of mallocing large memory chunks (multiples of structs) at a
time; since each chunk is contiguous, we can pack it in one call. Of
course, each large chunk will require a separate pvm_pk call, but that's
better than the earlier case, which, I agree with you, is a big
performance snag. And you can free any of these chunks you no longer need.

>Folks,
> 
> A week or so ago I posted regarding the pvm_pack functions. I asked if
> others would like to be able to allocate the entire send buffer before
> making multiple calls to pvm_pack in order to reduce the overhead of
> malloc. (This would be analogous to MPI_Buffer_attach.)
> 
> So far I count zero responses, and I didn't receive any via private email.
> 
> Oh well.
> 
> This message is cross posted to the mpi list. Does anyone there use
> MPI_Buffer_attach? If not, thanks to the MPI Forum for providing me with
> a personal function! My cell based hydrodynamics C code realized a 25 times
> speed up on the Cray T3D with this functionality.
> 
> Richard

I would be interested in PVM gaining this kind of functionality, where I
could attach my own send buffer to the one used by pvm_send. With the
current pvm_send, even with contiguous buffer allocation, one would
still need a pack call for each data type, whereas the above
functionality would mean a single call. (Am I understanding you right?
Pardon me, but I am not very familiar with MPI_Buffer_attach.)


Regards,
Chet

-- 
*******************************************************************************
Chetan Vora		www: http://www.eng2.uconn.edu/~chet
The scientific theory I like the most is that Saturn's rings are composed 
entirely of lost airline baggage.  -Mark Russell
*******************************************************************************