Newsgroups: comp.parallel.pvm
From: wcs@win233.nas.nasa.gov (William C. Saphir)
Subject: Re: Flow Control in pvm
Organization: Numerical Aerodynamic Simulation, NASA Ames
Date: Fri, 5 May 1995 22:38:26 GMT
Message-ID: <WCS.95May5153826@win233.nas.nasa.gov>

Lawrence Chan writes:
> Let's say I have 2 tasks, task A and task B, and task A is just sending data
> to task B.  If task A is much much faster than task B, will pvm guarantee
> that all messages are safely buffered and that no messages are being lost?
> What if there's not enough memory on the machine task A is running, and thus
> no buffers are available?  If there's any flow control mechanism in pvm, how
> is it done?

Generally, PVM guarantees that messages won't be lost. It specifies
nothing about flow control, although pvm_send() rarely blocks, and
people usually assume that it never blocks - i.e., that it acts as
if there were an infinite amount of buffering.

What actually happens in practice depends on the details. For
instance, if you're using direct routing on a NOW (network of
workstations), pvm_send() sends data over a TCP socket and will
eventually block if the receiver isn't listening. The receiver
"listens" whenever it does a send or receive. Messages which have
been "heard" but not received (e.g., a sender sends 1000 messages
and the receiver has received only the last of them; the other 999
are still pending) are buffered at the receiver, which may die due
to resource exhaustion (e.g., malloc() fails).

With the default routing, all messages are buffered by the pvmd
daemons, so pvm_send() never blocks. A daemon may die, however,
due to resource exhaustion.

The behavior described above is not specified in any document.
It is just the way PVM "works", or used to work the last time I
checked. It could conceivably change in the future, although any
change that would break a lot of code is unlikely.

If you want better defined behavior, try MPI. In the spirit of full
disclosure, I should mention that poor implementations of MPI may have
resource exhaustion problems, and MPI itself has some problems with
the so-called "progress rule", which is pretty much impossible to
satisfy. However, in general things are *much* better defined in 
MPI than they are in PVM. 

Bill Saphir
