Newsgroups: comp.parallel.pvm
From: papadopo@cs.utk.edu (Philip Papadopoulos)
Subject: Re: Question about pvm_mcast on an Ethernet LAN
Organization: CS Department, University of Tennessee, Knoxville
Date: 29 Aug 1994 18:08:21 -0400
Message-ID: <33tm8lINNcap@duncan.cs.utk.edu>

In article <1994Aug26.202556.13174@enterprise.rdd.lmsc.lockheed.com> ohellwig@scuacc.scu.edu (Oliver Hellwig) writes:
>
>
>I am running PVM 3.2 on an ethernet LAN of HP 9000/7XX workstations.
>I have done some comparisons with the pvm_mcast and pvm_send routines
>and I have found that pvm_mcast appears to do an individual send
>to each PVM process.  I came to this conclusion after I substituted
>a pvm_mcast call with an equivalent number of pvm_send calls to 18
>other workstations, and got basically the same timing
>results.  Now, I'm sure that the ethernet hardware should be able
>to handle a broadcast (or even a multicast) so I presume that either
>I'm completely off track or that PVM can't or isn't doing the
>multicast.  I just want to be able to do a true broadcast to all
>of my pvm processes.  Could anyone give me some insights into what might
>be happening?
pvm_mcast sends a copy of the message to each unique host in the
virtual machine. If you have 18 processes running on 18 machines, then
18 copies of the message are sent out. If you have 18 processes on 
6 machines, then 6 copies of the message are sent out. Once a host
receives a multicast message it forwards it to all the processes on
its local machine.

pvm_mcast does not use the broadcast mechanism of ethernet because
of the number of side-effect packets that can be sent over the network.
The benefit of hardware broadcast is limited: it is only about twice
as fast as individual point-to-point sends for pvm messages. Why?
Broadcast is not a reliable protocol, so every recipient must return
an acknowledgement, and lost packets must be resent to individual
processes.  So, in your case, 1 message would be broadcast (to
everybody on your subnet) and 18 acks would come back, for 19 total
messages. The current method sends 18 messages and receives 18 acks,
for 36 total.

pvm_mcast with default routing should be slightly faster than
individual pvm_sends. With direct routing, individual sends would be
faster, but with a virtual machine of 18 hosts you would be near the
limit on open file descriptors. pvm_mcast maps directly to broadcast
primitives on some MPPs (like the iPSC/860 and Paragon), where it is
much faster than point-to-point sends.

-Phil Papadopoulos