Newsgroups: comp.parallel.pvm
Path: ukc!uknet!EU.net!howland.reston.ans.net!europa.eng.gtefsd.com!emory!nntp.msstate.edu!saimiri.primate.wisc.edu!hpg30a.csc.cuhk.hk!uxmail!ustsu3.ust.hk!ccsamuel
From: ccsamuel@uxmail.ust.hk (Samuel S.K. Kwan)
Subject: Re: Performance of PVM on HP with Gigaswitch
Message-ID: <1994Apr19.032632.25920@uxmail.ust.hk>
Sender: usenet@uxmail.ust.hk (usenet account)
Organization: Hong Kong University of Science and Technology
X-Newsreader: TIN [version 1.1 PL8]
References: <1994Apr14.092148.19662@uxmail.ust.hk>
Date: Tue, 19 Apr 1994 03:26:32 GMT
Lines: 67

Samuel S.K. Kwan (ccsamuel@uxmail.ust.hk) wrote:

: We are running PVM on 4 HP 735 workstations connected using a Gigaswitch.
: The communication performance between nodes is about 40 Mb/sec when using
: ttcp (actually 90 Mb/sec when udp is used), but it drops drastically to
: about 12 Mb/sec when the timing test of PVM is run. We are already using
: the PVM 3.2.6 and it seems that there is much overhead in PVM itself. Three
: of the HP workstations are equipped with FDDI interface only. The fourth
: one has both FDDI and Ethernet interfaces and functions as a host-based
: router for the rest three to the outside network.

: I just want to ask if the PVM performance we get is normal or there is
: something we can do to improve it. Please send me email directly if you 
: have any idea/similar experience and I will summarize later if I get enough
: information from the net. Thanks a lot. 

: Samuel Kwan
A number of factors contribute to the poor performance of PVM over FDDI.
The most important is the routing method used: through the pvmd daemons,
or direct task-to-task connections. By requesting direct routing with the
'pvm_advise' routine, throughput can reach 45 Mbps once the message size
is large enough (say 100KB).
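A minimal fragment showing the call, assuming the PVM 3.2 C interface (it
will not link or run without libpvm3 and a running pvmd, so take it as a
sketch only):

```
#include "pvm3.h"

/* Ask PVM to open direct task-to-task TCP connections instead of
 * relaying every message through the pvmd daemons. Call this in
 * each task before the heavy message traffic begins. */
pvm_advise(PvmRouteDirect);
```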

Another point to note is whether XDR encoding is used for PVM messages.
If the participating machines are homogeneous, PvmDataRaw can be used to
skip XDR encoding, which otherwise slows down communication throughput.
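The encoding is chosen when the send buffer is initialized; a sketch,
again assuming the PVM 3.2 C interface (data, n, dest_tid and msgtag are
placeholders, not names from the original post):

```
#include "pvm3.h"

/* PvmDataRaw skips XDR encoding -- safe only when sender and
 * receiver share the same data representation (homogeneous hosts).
 * The default, PvmDataDefault, XDR-encodes everything. */
pvm_initsend(PvmDataRaw);
pvm_pkint(data, n, 1);       /* pack n ints with stride 1 */
pvm_send(dest_tid, msgtag);  /* dest_tid, msgtag defined elsewhere */
```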

Yet another parameter, and a difficult one to tune, is the TCP window
size under HP-UX. The default is 8K, but the best performance is
reportedly achieved with a 56K TCP window. Unfortunately, there is no
supported way to change this parameter; you have to patch the kernel
value with 'adb'.

For those interested in PVM communication performance in an FDDI
environment, there is a very good paper (presented at an IEEE workshop
last Oct) by Michael Lewis of Sandia National Labs. Performance results
for other workstation platforms such as SGI, IBM, Sun and DEC can be
found in his work.

The above summary is based on the replies of a number of knowledgeable
people in the field. Thanks must go to:

	Rick Jones, Hewlett Packard
	David Clay Patton, University of Alabama at Birmingham
	Scott Townsend, NASA
	Thomas Pfenning, University of Cologne, Germany
	Michael Lewis, Sandia National Labs

in the order they replied.

===============================================================================
email:		ccsamuel@usthk.ust.hk

address:	Systems and Operations
		Centre of Computing Services and Telecommunications
		Hong Kong University of Science and Technology
		Sai Kung
		HONG KONG
===============================================================================

