Newsgroups: comp.parallel.pvm,comp.parallel.mpi
From: rbarrett@ranger.lanl.gov (Richard Barrett)
Subject: Re: PVM send buffers
Organization: Los Alamos National Laboratory
Date: 25 Mar 1996 15:09:30 GMT
Message-ID: <4j6cva$qtk@newshost.lanl.gov>

In article <4ipg22$761@orson.eng2.uconn.edu>, Chet Vora <chet@brc.uconn.edu> writes:
(Richard Barrett writes:)
|> > It is quite inconvenient in terms of passing interior
|> > domain boundaries in that the data is scattered about memory. Hundreds
|> > of calls to pvm_pack per time step lead to an obvious lack of
|> > performance. Easy solution: do my own memory management, resulting in a
|> > contiguous buffer of data.
|> 
|> What kind of memory management are you referring to? Do you malloc a
|> large buffer for all the structs at a time? The drawback (applicable
|> to some cases) of this scheme would be that you can't free only a part
|> of the memory later if you no longer need it.
|> I am working on a similar application and I am looking into the
|> possibility of mallocing large memory chunks (multiples of structs) at
|> a time; then, since each chunk is contiguous, we can pack it in one
|> call. Of course, each large chunk will require a separate pvm_pk call,
|> but that's better than the earlier case, which, I agree with you, is a
|> big performance snag. And you can free any of these chunks if you don't
|> need it later.


I malloc the space. Right now our application involves a static grid.
Once we add re-meshing, we'll incorporate a more sophisticated memory
management scheme. 

|> 
|> >Folks,
|> > 
|> > A week or so ago I posted regarding the pvm_pack functions. I asked if
|> > others would like to be able to allocate the entire send buffer before
|> > making multiple calls to pvm_pack in order to reduce the overhead of
|> > malloc. (This would be analogous to MPI_Buffer_attach.)
|> > 
|> > So far I count zero responses, and I didn't receive any via private email.
|> > 
|> > Oh well.
|> > 
|> > This message is cross posted to the mpi list. Does anyone there use
|> > MPI_Buffer_attach? If not, thanks to the MPI Forum for providing me with
|> > a personal function! My cell-based hydrodynamics C code realized a 25x
|> > speedup on the Cray T3D with this functionality.
|> > 
|> > Richard
|> 
|> I would be interested in PVM getting this kind of functionality, where
|> I could attach my own send buffer as the one used by pvm_send. With the
|> current pvm_send, I guess that even with contiguous buffer allocation
|> one would still need a pack call for each data type, whereas the above
|> functionality would mean a single call. (Am I understanding you right?
|> Pardon me, but I am not very familiar with MPI_Buffer_attach.)
|> 

MPI_Buffer_attach lets you designate the memory used for the send buffer,
which in turn lets you set up a memory management scheme for your
application.
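A minimal sketch of how that looks (illustrative only, not a complete program; message size, tag, and destination are invented, and you need an MPI library to compile it):

```c
#include <mpi.h>
#include <stdlib.h>

void send_boundary(double *data, int n, int dest)
{
    int size;
    void *buf;

    /* Space for one buffered message plus MPI's bookkeeping overhead. */
    MPI_Pack_size(n, MPI_DOUBLE, MPI_COMM_WORLD, &size);
    size += MPI_BSEND_OVERHEAD;
    buf = malloc(size);

    /* Hand the buffer to MPI; MPI_Bsend then copies the outgoing
     * message into it and returns immediately. */
    MPI_Buffer_attach(buf, size);
    MPI_Bsend(data, n, MPI_DOUBLE, dest, /*tag=*/0, MPI_COMM_WORLD);

    /* Detach blocks until buffered data has gone out; after that the
     * buffer can be freed or reused. */
    MPI_Buffer_detach(&buf, &size);
    free(buf);
}
```

In practice you would attach one buffer once, sized for the worst-case set of outstanding messages, rather than per send.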

Our problem is that even though our code is portable, our target machine
these days is a Cray T3D. Message passing there is built on top of its
shmem facilities, so sends are not safe (i.e. a send returns before all
the data has been delivered to the target process). So we're
experimenting with MPI_Wait (which blocks until the data has been sent)
vs. the buffering scheme. And as usual, the outcome will be MPI
implementation dependent as well as problem dependent.
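For comparison, the MPI_Wait alternative would look roughly like this (a hedged sketch; tag and communicator are invented, and the computation to overlap is application-specific):

```c
#include <mpi.h>

void send_boundary_nb(double *data, int n, int dest)
{
    MPI_Request req;
    MPI_Status status;

    /* MPI_Isend returns at once; 'data' must not be modified yet. */
    MPI_Isend(data, n, MPI_DOUBLE, dest, /*tag=*/0, MPI_COMM_WORLD, &req);

    /* ... overlap computation that does not touch 'data' ... */

    /* MPI_Wait blocks until the send buffer is safe to reuse. */
    MPI_Wait(&req, &status);
}
```

The trade-off is the extra copy in the buffered case vs. how much useful work can actually be overlapped before the MPI_Wait.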

Richard

|> 
|> Regards,
|> Chet
|> 
|> -- 
|> *******************************************************************************
|> Chetan Vora		www: http://www.eng2.uconn.edu/~chet
|> The scientific theory I like the most is that Saturn's rings are composed 
|> entirely of lost airline baggage.  -Mark Russel
|> *******************************************************************************
|> 

