Newsgroups: comp.parallel.pvm
From: eugene@amelia.nas.nasa.gov (Eugene N. Miya)
Subject: FAQ vers. 1.0, pvm
Organization: NAS, NASA Ames Research Center, Moffett Field, California
Date: Mon, 11 Jul 1994 11:58:09 GMT
Message-ID: <CsryKy.B39@cnn.nas.nasa.gov>

PVM: What is it?
	parallel virtual machine

	PVM is a software system that enables a heterogeneous collection of
	computers (sometimes called a "cluster") to be programmed as a
	single machine.  It provides the user with process control and
	message-passing calls for communication between tasks running on
	different hosts.

	PVM currently supports Fortran and C programs on a variety of machines
	(workstations to supercomputers) and a limited number of operating
	system/network environments.
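
	A minimal C sketch of the programming model may help (illustrative
	only, not taken from the PVM distribution: the executable name
	"example" and the message tag are made up, and building it requires
	a PVM 3 installation, pvm3.h, and -lpvm3):

```c
/* Hedged sketch of a PVM 3 program: the first task spawns one copy of
 * itself and sends it an integer; the spawned copy receives and prints
 * it.  Build (paths vary by site):
 *   cc -I$PVM_ROOT/include example.c -L$PVM_ROOT/lib/$PVM_ARCH -lpvm3  */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int mytid = pvm_mytid();              /* enroll in PVM; get task id */

    if (pvm_parent() == PvmNoParent) {    /* we are the first task */
        int child, n = 42;
        pvm_spawn("example", (char **)0, PvmTaskDefault, "", 1, &child);
        pvm_initsend(PvmDataDefault);     /* new send buffer, XDR coding */
        pvm_pkint(&n, 1, 1);              /* pack one int, stride 1 */
        pvm_send(child, 1);               /* send with message tag 1 */
    } else {                              /* we were spawned */
        int n, ptid = pvm_parent();
        pvm_recv(ptid, 1);                /* block for a tag-1 message */
        pvm_upkint(&n, 1, 1);             /* unpack the int */
        printf("t%x: got %d from parent\n", mytid, n);
    }
    pvm_exit();                           /* leave the virtual machine */
    return 0;
}
```

	The same calls exist for Fortran under names such as pvmfmytid
	and pvmfsend.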

How do I get it?
	The current version is obtained by sending a request to the
	netlib mail daemon.  Mailed (in the Subject: line or message body)
	to netlib@ornl.gov (or netlib@research.att.com), the line

		send index from pvm

	will get you info for pvm 2.4.2.

		send index from pvm3

	will get you info for pvm3.x.

	Similar "send" messages will return parts of PVM.  This assumes the
	ORNL (Oak Ridge National Laboratory) or AT&T host can figure out
	how to return your email.  Do not assume reliability.  Caveat
	Receptor.

	pvm (and the entire netlib tree) is also available:

	via anonymous ftp from netlib2.cs.utk.edu

	via anonymous rcp from netlib2.cs.utk.edu
	 ( "rsh netlib2.cs.utk.edu -l anon ls pvm3" gets you a file list.)
	 ( "rcp anon@netlib2.cs.utk.edu:pvm3/??? your-local-file" )

	via the xnetlib browsing tool

	For access from Europe, try the duplicate collection in Oslo:
        Internet:       netlib@nac.no
        EARN/BITNET:    netlib%nac.no@norunix.bitnet
        X.400:          s=netlib; o=nac; c=no;
        EUNET/uucp:     nac!netlib
	For the Pacific, try    netlib@draci.cs.uow.edu.au
	located at the University of Wollongong, NSW, Australia.


What is XPVM?

	Back when PVM was being developed, the in-house version of PVM
	was known as PVM 1.0.  This version was not released to the public.
	PVM 1.0 had an X interface console which gave some basic statistics
	about PVM.  Many of the features mentioned in the paper "Network
	Based Concurrent Computing on the PVM System" were not included in
	the public release of PVM.
	
	The versions released to the public are PVM 2.0 and greater, which
	have no X interface.  One reason XPVM was not released with PVM 2.0
	is that many of the features in XPVM were not robust enough to be
	included in a general release (see the paper by Grant and Skjellum,
	"The PVM System: An In-Depth Analysis and Documenting Study -
	Concise Edition").
	
	XPVM spun off into two projects: 
		HeNCE  Heterogeneous Network Computing Environment
		XAB    X window Analysis and Debugger
	
	HeNCE is an X window parallel development tool for PVM.  Using
	HeNCE you can graphically describe the parallelism in your program,
	and HeNCE will automatically add the PVM code for parallelization.
	HeNCE also writes a trace file so you can take a post-mortem look
	at execution.
	
 	HeNCE 1.4 is now available on netlib.  New files are:
 	
 	hence1.4.shar                   shar file (1.8 Mb)
 	hence1.4.shar.z.uu              uuencoded compressed shar file (.9 Mb)
 	hence-1.4-changes               (short) list of changes
 	porting-status                  updated porting status
 	read-me.pvm3                    notes on HeNCE 1.4 support for pvm3
 	
 	The major change from HeNCE 1.3 is support for PVM3.  In addition,
 	HeNCE should now compile and run on SGI and ALPHA.
 	
 	How to get it:
 	
 	a) from netlib.  Send email containing the line
 	
 	send hence1.4.shar.z.uu from hence
 	
 	to netlib@ornl.gov.  You will get back several files, which you
 	then edit to strip off the mail headers and concatenate together
 	into one uuencoded file.  uudecode the file, uncompress the result,
 	and run that through /bin/sh (in an empty directory) to extract the
 	HeNCE source files.
 	
 	The command "send index from hence" returns a list of everything
 	from HeNCE.
 	
 	b) from xnetlib.  Xnetlib is an X Windows program that allows easy
 	file retrieval from netlib.  You can get xnetlib via anonymous ftp
 	to cs.utk.edu, directory pub/xnetlib, file xnetlib3.3.tar.Z.
 	There are also pre-compiled binaries for several platforms.
 	
 	There is an experimental "anonymous rcp" facility on netlib2.cs.utk.edu.
 	
 	To get HeNCE 1.4 via rcp, type:
 	
 	   rcp anon@netlib2.cs.utk.edu:hence/hence1.4.shar.z.uu local-file-name
 	
 	(for the uuencoded, compressed version), or
 	
 	   rcp anon@netlib2.cs.utk.edu:hence/hence1.4.shar local-file-name
 	
 	(for the uncompressed .shar file).
 	
 	You can also do "ls" commands:
 	
 	   rsh netlib2.cs.utk.edu -l anon ls -l hence
 	
 	(on some hosts, the command is "remsh" rather than "rsh")
 	
 	For more information, finger anon@netlib2.cs.utk.edu.
 	
 	Notes:
 	
 	a) This is an experimental service.  We will be watching it to see how
 	   well it works.  You may send comments to moore+netlib@cs.utk.edu.
 	
 	b) Please note: "anon" isn't really anonymous.  Requests are logged.
 	
	XAB is an X tool which lets you trace individual messages.
	It has VCR-type displays which allow you to control the flow of
	your program while it's running.  Your code is instrumented by
	linking to a special library.
	
 	Beta xab3 is now available for pvm3.  This is a beta code so
 	please be gentle and give us feedback.  Send comments and
 	bug reports to xab@psc.edu.
 	
 	Eventually the xab functionality will be integrated into xpvm.
 	
 	You can set the level of monitoring through a call to
 	xab_showevents().  There is also a program called xabsf (xab set
 	flags) which allows you to set certain flags while the program
 	is running.
 	
 	We are also working on an interface to Pablo.
 	
 	anonymous ftp: dao.nectar.cs.cmu.edu (128.2.205.73)
 		cd /afs/cs.cmu.edu/project/nectar-adamb/ftp
 		bin
 		get xab3.3.9.tar.Z

	Currently, an effort is underway to develop XPVM for PVM3.  The
	functionality has not yet been determined.  The expected release
	date is the end of the summer of 1993.
	
	To obtain info on HeNCE, send the e-mail message
		send index from hence
	to the automatic mail server netlib@ornl.gov.  For XAB, send the
	e-mail message 
		send index from pvm/xab

Subject: patches 1 and 2 for version 3.2 are available on netlib
	# Patches up through 6 are also available.

The files are:

    32patch01 - Set of context diffs to be applied with the patch command
    32patch02 - Another set of context diffs

To apply these patches, get them and put them in pvm3/patches.
Then chdir to pvm3 and type:

    % patch -p0 < patches/32patch01
    % patch -p0 < patches/32patch02

These two patches assume you have the base distribution and man
pages already installed.  They are also available in preassembled
frozen form, in file:

    pvm3.2.2.tar.z.uu  (includes base, man pages, patches 1 & 2)

---
Patch 1 fixes lots of bugs in the 3.2 base distribution
From the patches/Contents file:

    . pvmd signal code uses new macros.  SYSVSIGNAL should be set for sysV 
      handlers (need to reset after a signal).  NOWAIT3 should be
      defined if wait3() isn't available.  NOWAITPID should be defined
      if neither wait3() nor waitpid() is available.
    . added conf/MASPAR.def file.
    . fixed bug - console would hang if hosts added in .pvmrc script.
    . fixed PGON library defs, "-lrpc -lnx".
    . pvm_start_pvmd() doesn't free return val of pvmgetpvmd().
    . pvm_send() was missing a return near end.
    . fixed LINUX.def - had wrong arch name.
    . pvmgetarch now detects LINUX machines and Solaris machines correctly.
    . removed CC = cc from all makefiles.
    . fixed SUN4SOL2.def
    . fixed examples/testall.f - was calling config() incorrectly.
    . added <linux/time.h> to console/cons.c.
    . defined max() in src/host.c
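The wait-handling macros in the first item lend themselves to a short
sketch (an illustrative fragment, not actual pvmd source; reap() is a
made-up wrapper name):

```c
/* Sketch of the portability scheme the patch notes describe.
 * NOWAITPID is the macro named above, defined when neither wait3()
 * nor waitpid() exists on the platform; reap() is a hypothetical
 * wrapper, not pvmd code.  The real pvmd also has a wait3() path,
 * governed by NOWAIT3, which this sketch leaves out. */
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

pid_t reap(int *status)
{
#ifdef NOWAITPID
    return wait(status);            /* last resort: plain wait() */
#else
    return waitpid(-1, status, 0);  /* POSIX waitpid() */
#endif
}
```

The macro simply selects at compile time which child-reaping call gets
built in; -DNOWAITPID on the cc line switches the fallback on.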

Patch 2 fixes a bug that patch 1 introduced; get it if you get patch 1.

________________________________________________________________________
How to get them:

They've just been put on netlib2.CS.UTK.EDU and will propagate to the
other servers soon.

From netlib/email:
    echo "send 32patch01 from pvm3" | mail netlib@ORNL.GOV

For more information about PVM:
    echo "send index from pvm3" | mail netlib@ORNL.GOV

Using xnetlib:
    select library pvm3, file 32patch01

Via FTP:
    host netlib2.CS.UTK.EDU, login anonymous, directory /pvm3

For more information about file retrieval from netlib:
    finger anon@netlib2.CS.UTK.EDU

Send questions or problems to:
    pvm@MSR.EPM.ORNL.GOV

Sincerely,
The PVM research group

Academic reference:
	[These references are provided here for completeness.
	Do not ask me for copies; I do not have time to copy them.
	Please use the "usual" channels to obtain them.  I will
 	gladly add references to the list.  If you need something larger,
 	more comprehensive and general, email me, and I will send you
 	"the standard form letter..."]

%A V. S. Sunderam
%Z Emory U., Atlanta, GA
%T PVM: A Framework for Parallel Distributed Computing
%J Concurrency: Practice & Experience
%V 2
%N 4
%D December 1990
%P 315-339
%K parallel virtual machine,
%X See netlib.

Additional references:
	(I will remove these over time and just place them in the parallelism
	biblio.)

%A G. A. Geist
%A V. S. Sunderam
%T The PVM System: Supercomputing Level Concurrent Computations on
a Heterogeneous Network of Workstations
%J Sixth Distributed Memory Computing Conference Proceedings
%I IEEE
%C Portland, OR
%D April/May 1991
%P 258-261
%K DMCC6, software technology and tools, short papers, network computing,

%A Brian K. Grant
%A Anthony Skjellum
%T The PVM System: An In-Depth Analysis and Documenting Study --
Concise Edition
%R TR UCRL-JC-112016
%I LLNL
%C Livermore, CA
%D 1992
%X Also in "The 1992 MPCI Yearly Report: Harnessing the Killer
Micros", UCRL-ID-107022-92, LLNL, Livermore, CA, August 1992, pp. 247-266
%X This is now available via anonymous ftp at aurora.cs.msstate.edu:
pub/reports/pvm_short.ps.Z

%A W. A. Shelton, Jr.
%A G. A. Geist
%Z ORNL
%T Developing Large Scale Applications by Integrating PVM and
the Intel iPSC/860
%J Proceedings Intel Supercomputer Users' Group 1992 Annual Users' Conference
%C Dallas, TX
%D October 1992
%P 105-132
%K viewgraphs,

%A Madhavan Narayanan
%A Susan X. Ying
%T Solving the Navier-Stokes Equations in Homogeneous Networks
%R FSU-SCRI-92-177
%I Supercomputer Computations Research Institute
%C Tallahassee, FL
%D December 1992
%K PVM, RS/6000s, CFD, fluid dynamics,

%A William W. Carlson
%T RES: A Simple System for Distributed Computing
%R SRC-TR-92-067
%I Supercomputing Research Center, IDA
%C Bowie, MD
%D May 1992
%K distributed computation, distributed scheduling, load balancing,
network security, workstation,
%X Condor and PVM-like.
%X DSB asks, "What does RES stand for?"

%A Jack Dongarra
%A G. A. Geist
%A Robert Manchek
%A V. S. Sunderam
%T Integrated PVM Framework Supports Heterogeneous Network Computing
%J Computers in Physics
%V 7
%N 2
%P 166-175
%D 1993
%X A very readable explanation of what PVM does,
how it's supposed to work, and what its new directions are ("HeNCE").
The article also briefly discusses competing paradigms: Linda, P4,
Parmacs and Express.

%A G. Betello
%A G. Richelli
%A S. Succi
%A F. Ruello
%T Lattice Boltzmann Method on a Cluster of IBM RS/6000 Workstations
%J Proceedings of the First International Symposium on
High-Performance Distributed Computing
%I IBM ECSEC and IEEE
%C Syracuse, NY
%D 1992
%P 242-247
%K PVM, Lattice Boltzmann Method

%A Timothy G. Mattson
%A Craig C. Douglas
%A Martin H. Schultz
%T A Comparison of CPS, Linda, P4, POSYBL, PVM, and TCGMSG:
Two Node Communication Times
%I Yale University Department of Computer Science
%R Research Report YALEU/DCS/TR-975
%D May 1993
%X Abstract:
In this paper, we compare simple, two node communication times for a
number of distributed computing programming environments.
For each environment, round trip communication times for a ping/pong program
are considered.  The times were obtained using two SPARCstation 1 workstations
on an isolated ethernet LAN.
%X Available via anonymous ftp from casper.cs.yale.edu in the file tr975.ps.Z.

%A Adam L. Beguelin
%T Xab: A Tool for Monitoring PVM Programs
%B Proceedings Workshop on Heterogeneous Processing WHP'93
%I IEEE Computer Society Press
%C Los Alamitos, CA
%D April 1993
%P 92-97
%K tools/systems,

%A A. Beguelin
%A J. Dongarra
%A A. Geist
%A V. Sunderam
%T Visualization and Debugging in a Heterogeneous Environment
%J IEEE Computer
%V 26
%N 6
%D June 1993
%P 88-95

%A Adam Beguelin
%A Jack J. Dongarra
%A Al Geist
%A Robert Manchek
%A Vaidy Sunderam
%T PVM and HeNCE: Tools for Heterogeneous Network Computing
%E Jack J. Dongarra
%E Bernard Tourancheau
%B Environments and Tools for Parallel Scientific Computing
%S Advances in Parallel Computing
%V 6
%I Elsevier Science Publishers B. V.
%C Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands
%D 1993
%P 139-153
%O chapter 3

%A R. D. da\ Cunha
%A T. R. Hopkins
%T A parallel implementation of the restarted {GMRES} iterative method for
nonsymmetric systems of linear equations
%J Advances in Computational Mathematics
%V 2
%N 3
%D 1994
%P 261-277
%K gmres, transputers, pvm
%X Also as TR-7-93, Computing Laboratory, University of Kent at Canterbury


From: dongarra@thud.cs.utk.edu (Jack Dongarra)
Subject: PVM Users Guide

There is a new version of the PVM Users' Guide available on netlib.
To retrieve a copy type:

rcp anon@netlib2.cs.utk.edu:pvm3/ug.ps ug.ps

or in email to netlib@ornl.gov type:
send ug.ps from pvm3

or use xnetlib.

This update reflects version 3.1 and was created using the appropriate
PostScript document structuring conventions, so it should be compatible
with a larger number of previewers and printers than the previous version.

**************************************************************


From: Renu.Raman@Eng.Sun.COM (Renu Raman, Sun Microsystems)
Subject: Re: PVM and HeNCE Literature


                        Distributed Queueing System
                         Version 2.1 Release Notes


                      Tom Green <green@scri.fsu.edu>
                   Robert Pennington <penningt@psc.edu>
                    Dan Reynolds <dan@chpc.utexas.edu>

               Supercomputer Computations Research Institute
                         Florida State University

                     Pittsburgh Supercomputing Center

                   Center for High Performance Computing
                      The University of Texas System





INTRODUCTION

     Revision 2.1 of DQS is the latest release of the Distributed  Queueing
System,  a  batching  system for clusters of high performance workstations.
This release contains both feature enhancements and bug  fixes  to  version
2.0.

WHERE TO GET THE SOURCE CODE

     A compressed tar(1) archive  of  the  DQS  sources  is  available  via
anonymous ftp from the following file servers on the Internet:

     ftp.scri.fsu.edu      in pub/dqs/DQS-2.1.tar.Z
     ftp.chpc.utexas.edu   in packages/dqs/DQS-2.1.tar.Z
     ftp.psc.edu           in pub/dqs/DQS-2.1.tar.Z

Don't forget to select binary transfer  mode  when  fetching  the  archive.
Change  directory  to the appropriate place in your source tree and extract
the DQS sources with the following command:

                    ``zcat DQS-2.1.tar.Z | tar xvf -''

zcat(1) is part of the compress/uncompress package.  If  you  do  not  have
compress(1)  and uncompress(1) at your site, source code for them is avail-
able via anonymous  ftp  from  several  archive  servers  on  the  Internet
(ftp.uu.net is a good place to start looking).

     Note to DQS 2.0 users: The changes between version 2.0 and version 2.1
were quite numerous: several old source files disappeared and new ones took
their place. The patch(1) kit to replicate all  these  changes  would  have
been  quite  large so we did not produce one. You will have to replace your



DQS Version 2.1               March 22, 1993                         Page 1





Distributed Queueing System                                   Release Notes


2.0 sources with the 2.1 code in order to upgrade.

     The DQS source files, manual pages, and assorted documentation require
about 5 megabytes of disk space. The DQS binaries will consume between 5
and 20 megabytes of disk space for each architecture on which it is built.

NEW FEATURES

AFS version 3 support. DQS jobs can  now  access  files  stored  under  the
Andrew File System from TransArc.

New operating system support. DQS now compiles and runs under UNICOS  6.1.6
from  Cray Research, Inc. [on a Y-MP8/864], under ConvexOS 10.1 from Convex
Computer Corporation [on a Convex C220] and under the OSF/1 operating  sys-
tem as supplied by Digital Equipment Corporation [on a DEC Alpha].

PVM version 3 support. Via the -pvm3 switch on qsub, you may now  run  ver-
sion 3 of the Parallel Virtual Machine software in a DQS job. The pvm3 dae-
mon will be automatically launched on the master node at job initiation and
terminated at job completion.

Temporary directory support. DQS jobs  are  now  provided  with  a  working
directory  in  which  scratch  files  may be placed. At job completion, the
tmpdir is automatically scrubbed. The base path to the temporary  directory
filesystem is configurable by queue.

Dynamic global configuration support. DQS can be configured  to  allow  the
cluster administrator to change the global DQS configuration on the fly via
the -mc option of qconf(1).  When a node notices a change in the configura-
tion  version  number,  its dqs_execd automatically restarts so that global
configuration changes propagate throughout the cluster in one load  average
reporting interval.

Choice of queue scheduling policies.  The  cluster  administrator  may  now
choose  one  of  two scheduling policies for the computational cluster.  By
default, DQS will schedule jobs to the nodes within a DQS  group  based  on
the queue sequence numbers [i.e., in the order in which the nodes appear in
the queue list]. Via a configuration switch, you may choose to schedule  by
weighted  load  average  within  a  group  so  that  the least busy node is
selected.

Job name support. DQS users may now specify a name by which a  DQS  job  is
known via the -N parameter on qsub. qstat will now report the job name.

Limits on the number of active jobs per user.  Sysadmins may now place a
maximum limit on the number of running jobs an individual user can have in
a DQS cluster. Queued jobs in excess of MAXUJOBS for a user now wait until
a currently running job completes.

Support for user access controls.  The cluster administrator may now
control access by individual users to the computational cluster as a
whole and to specific queues using one of three access strategies. A
similar feature was
promised in 2.0 but was disabled for efficiency's sake. The interface
has been completely rewritten for 2.1 using the ndbm(3) library. See
the qconf(1) manual page for more information.

Combined spooling directory structure.  In an attempt to simplify DQS  con-
figuration,  the qmaster and the dqs_execds can now share a common spooling
area. Each dqs_execd will automatically create its own unique spool  direc-
tory.

BUG FIXES

o+  Reworked wait(2)  interface  to  use  POSIX-compliant  system  calls  to
   improve portability.

o+  Fixed reported problem with signal handling on some systems.

o+  Cleaned up the manual pages and documented new features.

o+  Fixed load average reporting under IRIX 4.0.1.

o+  Fixed problem with signal handling wrt to abort(2).

o+  Fixed errant memory reference to dynamic storage after it was freed.

o+  Closed security hole. Corrected a typo which caused the  temporary  edit
   file not to be removed.

o+  Increased use of POSIX standard data types to improve portability.

STILL TO COME

     Work on mqmon (the Motif-based queue monitoring program) for  DQS  2.1
had  not  been completed as of the release date. The DQS 2.0 mqmon has been
included with this distribution; this  mqmon  mostly  works  with  the  2.1
ancillary  programs  but  does not understand the 2.1 features and options.
When the 2.1 version of mqmon is complete, we will notify  each  site  that
fetches  the  source  code that the new mqmon is available. In the interim,
please do not bombard us with queries as to when: it will be out Real  Soon
Now.

     With regard to qmon, as of the 2.1 release, we are dropping support of
this program (in other words, we will not answer questions about qmon prob-
lems). We have provided the code in this release but qmon does not have any
support in it for the 2.1 features provided by the ancillaries. We are con-
sidering whether to provide a qmon based on the Athena  widget  set  or  to
only provide mqmon. Let us know your requirements in this regard.

PLATFORMS

     DQS 2.1 has seen limited testing on the following machines and operat-
ing systems:

Sun IPCs, SPARCstations & 600s       SunOS 4.1.2/4.1.3
IBM RISC System 6000 model 550       AIX 3.2.3
SGI 4d/310GTX                        IRIX 4.0.1
DEC Alpha                            OSF/1
Convex C220                          ConvexOS 10.1
Cray Research Y-MP8/864              UNICOS 6.1.6

The 2.1 release has been compiled under SunOS 5.1 (Solaris 2.1) in BSD com-
patibility  mode  but  is  untested.  We  encountered difficulties with the
Solaris/BSD compatibility libraries and will  defer  supporting  DQS  under
Solaris until a later release.

INSTALLATION

     The file INSTALL in this directory contains instructions on  configur-
ing, building, and installing DQS.

PROBLEMS AND BUG REPORTING

     The authors make no claim of warranty nor do we commit to  fixing  any
bugs  that you might discover in DQS. However, as time permits, we will try
to fix problems that crop up and incorporate these fixes into new  releases
of DQS. Please send all comments, suggestions and bug reports to:

                             dqs@scri.fsu.edu

Please send all flames, derogatory remarks and personal attacks to:

                        devnull@dont.wanna.hear.it

Before launching your question at us, please look over the file FAQ in  the
DOC directory. It contains a number of frequently-asked questions about DQS
and perhaps your question has already been asked and answered.

LEGALESE

     DQS is copyrighted, freely-distributed software. You  may  use  it  at
your  own  risk without charge. The authors make no claim as to the useful-
ness of the software nor do they warrant its  fitness  for  any  particular
purpose.   You may freely redistribute the source code so long as all copy-
right notices remain intact in the distribution. The  Distributed  Queueing
System is:

Copyright (c) 1992, 1993 Supercomputer Computations Research Institute,
Florida State University.

     Some portions of DQS were contributed by the Center for  High  Perfor-
mance  Computing,  The  University  of  Texas  System and by the Pittsburgh
Supercomputing Center. These portions are:

Copyright (c) 1993 Pittsburgh Supercomputing Center
Copyright (c) 1993 The Regents of the University of Texas System

AUTHORS

     Tom Green of SCRI at FSU wrote the  code.  Jeff  Snyder  (formerly  of
SCRI) wrote the X interfaces. Rob Pennington of PSC plugged AFS support
into DQS and provided the first cut at the tmpdir code. Steve Hall of the
CHPC integrated the tmpdir support into 2.1 and wrote the request name pro-
cessing code. Dan Reynolds of the CHPC ported the code to UNICOS  and  Con-
vexOS, wrote the global configuration code, and reworked the access control
interface for the 2.1 release.

CREDITS

     Thanks go to Mike Iglesias, University of California, Irvine, for  the
MAXUJOBS  code and to Tom Quinn, University of Oxford, for finding & fixing
the two bum memory references. Thanks to all the brave sysadmins who helped
us test 2.1-alpha.

FUTURE PLANS

     Work on DQS version 2.2 is underway with a tentative release  date  of
May 15, 1993. We hope to provide in version 2.2:

o+ qsub "handoff's" to other batching systems (e.g., NQS)
o+ intercluster batching support
o+ Solaris 2.1 (and 2.2?) support
o+ queue and cluster access controls by groups
o+ improved configuration and administration support

If you have features that you would like to see in the package,  send  your
comments  to  dqs@scri.fsu.edu.  We don't promise we'll answer each request
but we will at least read them.

``You get what you pay for - or, at least you never get any more than what
you pay for!''




Article 1271 of comp.parallel.pvm:
Newsgroups: comp.parallel.pvm
From: adamb+@cs.cmu.edu (Adam Beguelin)
Subject: Re: books on PVM?

The book, PVM: Parallel Virtual Machine -- A Users Guide and Tutorial
for Network Parallel Computing, by Al Geist, Adam Beguelin, Jack
Dongarra, Weicheng Jiang, Bob Manchek, and Vaidy Sunderam should be out
by Supercomputing 94 this November.  It will be published by MIT Press.


