Newsgroups: comp.parallel.mpi
From: gottliej@ri.cs.wm.edu (Jeremy F. Gottlieb)
Subject: Reducing overhead
Organization: College of William & Mary, founded 1693
Date: Thu, 6 Jul 1995 17:33:57 GMT
Message-ID: <1995Jul6.173357.6899@cs.wm.edu>

Okay, I suppose I'm probably just the blindest idiot in the world and
this is mentioned in roughly a hundred different places, but I would
appreciate it if someone could illuminate me in my ignorance:

	I'm trying to find a way to cut down on communication
overhead. With a standard send and receive, MPI performs three copy
operations: from the send buffer into the MPI buffers on the sending
machine, from there to the MPI buffers on the receiving machine, and
from there into the receive buffer. I know (or at least have heard)
that there is a way to cut this down by "binding" a buffer in the
send process's address space to one in the receive process's address
space and always reusing the same pair, so that only one copy is done
instead of three. Mr. Gropp mentions this problem sort of in passing
on his tutorial web page, but isn't real clear about it. Am I safe in
assuming that the sort of thing you want to do involves MPI_Start(),
MPI_Send_init(), MPI_Recv_init(), and MPI_Wait()?
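
In case it helps anyone answer, here's a sketch of what I *think* the
persistent-request version looks like (untested, so the details may
well be wrong -- the buffer size, tag, and loop count are just made-up
numbers for illustration):

```c
#include <mpi.h>

#define N     1024   /* made-up buffer size */
#define NITER 100    /* made-up iteration count */

int main(int argc, char **argv)
{
    double buf[N];
    int rank, i;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Bind the buffer, count, datatype, partner, and tag to a
       persistent request ONCE, instead of on every send/receive. */
    if (rank == 0)
        MPI_Send_init(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Recv_init(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

    if (rank < 2) {
        for (i = 0; i < NITER; i++) {
            /* rank 0: fill buf here before starting */
            MPI_Start(&req);         /* kick off this iteration's transfer */
            MPI_Wait(&req, &status); /* complete it; buf is now reusable */
            /* rank 1: consume buf here */
        }
        MPI_Request_free(&req);      /* release the persistent request */
    }

    MPI_Finalize();
    return 0;
}
```

So the init calls set up the "binding" once, and each iteration is just
a Start/Wait pair on the same request -- which is what I'm hoping lets
the implementation skip the intermediate copies. Is that right?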

--
Jeremy Gottlieb				Parallel Computing Lackey
gottliej@cs.wm.edu			William & Mary
gottliej@mathcs.carleton.edu		Dork: Carleton College
"Without C, we'd have BASI, OBOL, and PASAL."