Newsgroups: comp.sys.transputer,comp.parallel.pvm,comp.parallel.mpi
From: D.J.Beckett@ukc.ac.uk (Dave Beckett)
Subject: Parallel Computing Archive at HENSA Unix: NEW FILES
Summary: New files since 7th April 1995. See ADMIN article for other info.
Keywords: transputer, occam, parallel, archive, anonymous ftp, www, gopher
Organization: University of Kent at Canterbury, UK.
Date: Thu, 01 Jun 95 15:53:57 GMT
Message-ID: <29@mint.ukc.ac.uk>

This is the new files list for the Parallel Computing Archive at
HENSA Unix.  Please consult the accompanying article for
administrative information and the various ways to access the
files.

For experts:
     World Wide Web <URL:http://www.hensa.ac.uk/parallel/>
	       OR
     anonymous ftp to unix.hensa.ac.uk and look in /pub/parallel
	       OR
     gopher to unix.hensa.ac.uk port 70 and go to "Parallel Archive"

Dave


MIRROR SITES
~~~~~~~~~~~~

There are two full mirror sites for the archive available for anonymous ftp:

FRANCE:	  <URL:ftp://ftp.ibp.fr/pub/parallel/>
	  <URL:ftp://ftp.jussieu.fr/pub/parallel/>

JAPAN:	  <URL:ftp://ftp.center.osaka-u.ac.jp/parallel/hensa/>


See <URL:http://www.hensa.ac.uk/parallel/www/mirror-sites.html> for full
details.


NEW FEATURES
~~~~~~~~~~~~

* New searching method - using Harvest system and Glimpse for the
  queries.  Much more sophisticated than WAIS but not yet a full text
  search.  Harvest is available at <URL:http://harvest.cs.colorado.edu/>.

* New areas (some empty): Acronyms; Algorithms; Applications;
  Architectures; Compilation tools; Libraries; Performance &
  benchmarks; Simulation; and Standards.


IN PROGRESS
~~~~~~~~~~~

* Better keywords for files, using IAFA indices.

* Custom per-user WWW interfaces created on the fly.


NEW FILES since 7th April 1995
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
30th May 1995

/parallel/environments/pvm3/distribution/pvm_test.tar.gz
	Updated: PVM Test utilities

/parallel/environments/pvm3/tape-pvm/
	Tape/Pvm event tracing tool developed and maintained at
	LMC-IMAG

/parallel/environments/pvm3/tape-pvm/patch02.tgz
	Patch 02 for Tape/Pvm - support for pvm_trecv

/parallel/environments/lam/distribution/
	LAM and MPI Area

/parallel/environments/lam/distribution/lam-papers.tar.gz
	Updated: Trollius and LAM papers

/parallel/environments/lam/distribution/lam52-patch.tar.gz
	Updated: LAM 5.2 source patch

/parallel/environments/lam/distribution/mpi-cubix10.tar.gz
	MPI Cubix library

/parallel/environments/mpi/unify/reports/Message-Passing/zipcode_parcomp.ps.Z
	"The Design and Evolution of Zipcode"
	by Anthony Skjellum, Computer Science Department & NSF Engineering
	Research Center for Computational Field Simulation, Mississippi State
	University; Steven G. Smith, Numerical Mathematics Group, Lawrence
	Livermore National Laboratory; Nathan E. Doss, NSF Engineering
	Research Center for Computational Field Simulation, Mississippi State
	University; Alvin P. Leung, Northeast Parallel Architectures Center,
	Syracuse University and Manfred Morari, Chemical Engineering,
	California Institute of Technology. March 8, 1994
	ABSTRACT:
	Zipcode is a message-passing and process-management system that was
	designed for multicomputers and homogeneous networks of computers in
	order to support libraries and large-scale multicomputer software. The
	system has evolved significantly over the last five years, based on
	our experiences and identified needs. Features originally unique to
	Zipcode were its simultaneous support of static process groups,
	communication contexts, and virtual topologies, forming the "mailer"
	data structure. Point-to-point and collective
	operations reference the underlying group, and use contexts to avoid
	mixing up messages. Recently, we have added "gather-send" and
	"receive-scatter" semantics, based on persistent Zipcode "invoices,"
	both as a means to simplify message passing, and as a means to reveal
	more potential runtime optimizations. Key features in Zipcode appear
	in the forthcoming MPI standard.
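	The context mechanism the abstract describes can be sketched in a
	few lines. This is an illustration only, not Zipcode's actual API:
	messages are matched on a (context, tag) pair, so a library using
	its own "mailer" never intercepts application traffic that happens
	to reuse the same tag.

```python
from collections import deque

class Mailer:
    """Minimal stand-in for Zipcode's "mailer": just a unique context id."""
    _next_context = 0

    def __init__(self):
        Mailer._next_context += 1
        self.context = Mailer._next_context

class Node:
    def __init__(self):
        self.inbox = deque()

    def send(self, dest, mailer, tag, payload):
        # Every message is stamped with the sending mailer's context.
        dest.inbox.append((mailer.context, tag, payload))

    def recv(self, mailer, tag):
        # Only match messages stamped with this mailer's context.
        for msg in list(self.inbox):
            ctx, t, payload = msg
            if ctx == mailer.context and t == tag:
                self.inbox.remove(msg)
                return payload
        return None

app, lib = Mailer(), Mailer()      # separate contexts: application vs library
a, b = Node(), Node()
a.send(b, app, tag=7, payload="from app")
a.send(b, lib, tag=7, payload="from library")
print(b.recv(lib, tag=7))          # same tag, but contexts keep it unambiguous
```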

/parallel/environments/mpi/unify/reports/Message-Passing/zipcode_pvm.ps.Z
	"Integrating Zipcode and PVM: Towards a Higher-Level Message-Passing
	Environment"
	by Li-wei H. Lehman, NSF Engineering Research Center for
	Computational Field Simulation, Mississippi State University,
	Mississippi State, Mississippi, 39762, USA. December 10, 1993.
	ABSTRACT:
	This paper describes the architecture and implementation of an
	integrated message-passing environment consisting of Zipcode and PVM.
	Zipcode is a high-level message-passing system for multicomputers and
	homogeneous networks of computers. PVM is a relatively low-level
	message-passing system designed for multicomputers and heterogeneous
	networks of computers. Although PVM provides a workable and easy-to-use
	message-passing system, it does not have some of the high-level
	constructs, such as communication contexts, required to develop
	reliable and scalable parallel libraries and large-scale distributed
	software. By porting Zipcode's high-level constructs on top of PVM,
	the integrated environment enables existing PVM applications to
	migrate gradually from a low-level to a high-level message-passing
	paradigm. In the integrated environment described here, Zipcode is
	ported on top of PVM via an intermediate layer, the emulated Cosmic
	Environment/Reactive Kernel (CE/RK) layer. Such an integrated
	environment allows programmers to utilize high-level Zipcode
	constructs as well as low-level PVM calls. Implementation details of
	the CE/RK primitives are presented. Planned future enhancements
	include performance improvement of the integrated system as well as
	adding heterogeneous environment support.

/parallel/environments/pvm3/adsmith/adsmith.tar.gz
	Updated: ADSMITH alpha distribution: A Heterogeneous
	Distributed Shared Memory Environment on PVM written by
	William W. Y. Liang <wyliang@solar.csie.ntu.edu.tw> of
	National Tsing Hua University, HsinChu, Taiwan.

/parallel/environments/pvm3/adsmith/adsmray.tar.gz
	ADSMITH Ray Tracer


26th May 1995

/parallel/events/applied-par-in-science
	"Workshop on Applied Parallel Computing in Physics, Chemistry and
	Engineering Science"
	Call for papers for the tutorials and workshops being held from
	21st-24th
	August 1995 at Danish Computing Centre for Research and Education, The
	Technical University of Denmark, Lyngby, Denmark. Deadlines:
	Abstracts: 15th June 1995; Papers: 31st July 1995. See also
	<URL:ftp://ftp.denet.dk/uni-c/unijw/para95/>.
	Author: Jerzy Wasniewski <unijw@unidhp1.uni-c.dk>.

/parallel/libraries/random/dantowitz/
	Random numbers for parallel processors by David Dantowitz

/parallel/libraries/random/dantowitz/README
	Overview of code

/parallel/libraries/random/dantowitz/code.c
	Source code

/parallel/applications/genetic-algorithms/stanford-report
	"Parallel Genetic Programming on a Network of Transputers"
	by John R. Koza and David Andre, Stanford University. See
	<URL:ftp://elib.stanford.edu/pub/reports/cs/tr/95/1542/>
	ABSTRACT:
	This report describes the parallel implementation of genetic
	programming in the C programming language using a PC 486 type computer
	(running Windows) acting as a host and a network of transputers acting
	as processing nodes. Using this approach, researchers of genetic
	algorithms and genetic programming can acquire computing power that is
	intermediate between the power of currently available workstations and
	that of supercomputers at a cost that is intermediate between the two.
	A comparison is made of the computational effort required to solve the
	problem of symbolic regression of the Boolean even-5-parity function
	with different migration rates. Genetic programming required the least
	computational effort with an 8% migration rate. Moreover, this
	computational effort was less than that required for solving the
	problem with a serial computer and a panmictic population of the same
	size. That is, apart from the nearly linear speed-up in executing a
	fixed amount of code inherent in the parallel implementation of
	genetic programming, parallelization delivered more than linear
	speed-up in solving the problem using genetic programming.
	Author: David Andre <phred@leland.Stanford.EDU>.
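	The migration scheme the abstract discusses is the island
	(distributed) model: each transputer evolves its own subpopulation
	and periodically exports a fraction of its best individuals to a
	neighbour. A hedged sketch, with a made-up fitness function and
	ring topology purely for illustration:

```python
import random

def migrate(demes, rate, fitness):
    """Send the top `rate` fraction of each deme to the next deme in a ring,
    replacing the worst individuals of the receiver."""
    k = max(1, int(len(demes[0]) * rate))
    # Select emigrants from every deme before any replacement happens.
    emigrants = [sorted(d, key=fitness, reverse=True)[:k] for d in demes]
    for i, d in enumerate(demes):
        d.sort(key=fitness)                  # worst individuals first
        d[:k] = emigrants[(i - 1) % len(demes)]
    return demes

random.seed(1)
fitness = lambda x: -abs(x - 42)             # toy objective: get close to 42
demes = [[random.randint(0, 100) for _ in range(25)] for _ in range(4)]
demes = migrate(demes, rate=0.08, fitness=fitness)  # 8% rate, as in the report
print(max(fitness(x) for d in demes for x in d))
```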

/parallel/languages/fortran/f90/f77tof90.html
	Fortran 90 for the Fortran 77 Programmer. See
	<URL:http://www.nsc.liu.se/f77to90.html>


22nd May 1995

/parallel/environments/mpi/unify/reports/Message-Passing/Elros_paper1_26mar94.ps.Z
	"Collective Operations Using ELROS and Sockets"
	by Kishore Viswanathan and Anthony Skjellum
	ABSTRACT:
	ELROS, an acronym for Embedded Language for Remote Operation
	Services, was developed at LLNL for programming distributed
	applications. ELROS statements can be embedded in a conventional
	language such as C. This makes it easier to develop distributed
	applications. ELROS supports both synchronous and asynchronous
	operations. ELROS also provides a good exception-handling facility.
	Although ELROS provides a good programming interface, it does not
	provide primitives for collective operations. In this paper we present
	some of the collective operations that were written using ELROS. We
	also present programs to demonstrate how these operations can be
	implemented using sockets. Advantages and disadvantages of using ELROS
	over sockets are also discussed.
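	One standard way to build a collective operation out of
	point-to-point primitives, as the paper does with ELROS and
	sockets, is a tree broadcast (an assumed scheme for illustration,
	not taken from the paper): each rank that already holds the value
	forwards it to two children, so the broadcast finishes in O(log n)
	rounds rather than n-1 sequential sends from the root.

```python
def tree_broadcast(n_ranks, root_value):
    """Simulate a binary-tree broadcast; rank r forwards to 2r+1 and 2r+2.
    Returns each rank's received value and the total point-to-point sends."""
    values = {0: root_value}            # rank 0 is the root
    sends = 0
    frontier = [0]
    while frontier:
        nxt = []
        for r in frontier:
            for child in (2 * r + 1, 2 * r + 2):
                if child < n_ranks:
                    values[child] = values[r]   # one point-to-point send
                    sends += 1
                    nxt.append(child)
        frontier = nxt
    return values, sends

values, sends = tree_broadcast(8, "control-msg")
print(sends)   # 7 sends total, spread over ~log2(8) parallel rounds
```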

/parallel/environments/mpi/unify/reports/Message-Passing/PVM-v-MPI-round1.ps.Z
	"Message Passing in the 1990's: Performance, Safety, Correctness"
	by Anthony Skjellum, MSU and Brian K. Grant, U. Washington. Contains:
	Origins of Message Passing, Evolution and comparison of PVM with MPI.

/parallel/environments/pvm3/tape-pvm/patch01.tgz
	Patch 01 for Tape/Pvm

/parallel/events/mpi-hpf-in-practice
	Call for attendance at MPI and HPF Tutorial being held from 26th-27th
	June 1995 at Wartburg Hotel, Mannheim, Germany. Includes goals, target
	groups and timetable of the tutorial. Lecturers are Prof. Jack
	Dongarra of University of Tennessee & ORNL; Charles Koelbel of CRPC at
	Rice University and Wacker, Director of Data Processing and
	Information Technology at DLR. See also:
	<URL:http://parallel.rz.uni-mannheim.de/sc/sc95.html>
	Author: Hans Werner Meuer <meuer@rz.uni-mannheim.de>.

/parallel/events/pvm-fortran
	"Parallel Tools - PVM with Fortran"
	Details of a one-day course being held on 10th July 1995 at School of
	Maths,
	University of Greenwich, London, UK. Course will provide an
	introduction to the principles behind PVM such as message passing and
	heterogeneous computing, and a description of PVM's interface for
	Fortran programmers. The course closes with PVM program demonstrations
	and a simple PVM exercise. Run under the auspices of the London &
	South-East centre for High Performance Computing (SEL-HPC). SEL-HPC is
	a JISC funded consortium. See <URL:http://www.lpac.ac.uk/SEL-HPC/> for
	details. Cost: FREE to members of all HEFC funded Universities and
	Colleges in London and the South East; also open to members of other
	UK HEFC institutions.
	Author: Chris Walshaw <C.Walshaw@gre.ac.uk>.
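	The message-passing principle the course introduces can be
	sketched without PVM itself: a master distributes tasks and
	collects replies over explicit send/receive channels. Below,
	Python queues stand in for PVM's pvm_send/pvm_recv, and the task
	(squaring numbers) is an arbitrary placeholder:

```python
import threading, queue

def worker(task_q, result_q):
    while True:
        msg = task_q.get()           # blocking receive, like pvm_recv
        if msg is None:              # termination message from the master
            break
        result_q.put(msg * msg)      # send the result back to the master

task_q, result_q = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(task_q, result_q))
           for _ in range(3)]
for w in workers:
    w.start()
for n in range(10):                  # master distributes the tasks
    task_q.put(n)
for _ in workers:                    # one termination message per worker
    task_q.put(None)
for w in workers:
    w.join()
results = sorted(result_q.get() for _ in range(10))
print(results)
```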

/parallel/events/pdpta95.ascii
/parallel/events/pdpta95.ps.gz
	"International Conference on Parallel and Distributed Processing
	Techniques and Applications"
	Call for papers for conference being held from 3rd-4th November 1995
	at The University of Georgia in Athens, Georgia, USA. Topics:
	Parallel/Distributed architectures; Building block processors;
	Interconnection networks; Reliability and fault-tolerance;
	Parallel/Distributed algorithms; Parallel/Distributed applications;
	Mobile computation and communication; Heterogeneous and multimedia
	systems; Software tools and environments for parallel computers;
	High-performance computing in Computational Science and others.
	Deadlines: Draft Papers: 12th June 1995; Acceptance: 7th July 1995;
	Camera-ready papers: 8th August 1995.
	Author: Hamid R. Arabnia <hra@pollux.cs.uga.edu>.


17th May 1995

/parallel/events/mpidevel
	"MPI Developers Conference"
	Call for papers for the Conference being held from 22nd-23rd June
	1995 at University of Notre Dame, Notre Dame, IN, USA. This is a
	conference for researchers and developers who use the Message Passing
	Interface (MPI) standard and is intended to support the continued
	development and use of MPI and its extensions. The conference will
	provide a forum for developers from national laboratories, industry,
	and academia who are using MPI to present their ideas about, and
	experiences with, MPI. See <URL:http://www.cse.nd.edu/mpidc95/> or
	<URL:ftp://www.cse.nd.edu/mpidc95/cfp.ps>
	Author: Andrew Lumsdaine <lumsdaine.1@nd.edu>.


16th May 1995

/parallel/environments/lam/distribution/lam52-patch.tar.gz
	Updated LAM 5.2 source patch


15th May 1995

/parallel/documents/benchmarks/genesis/IMPORTANT_NOTICE
	IMPORTANT NOTICE - the GENESIS distributed memory
	benchmark suite is no longer being supported.  PARKBENCH v1.0
	which includes some of GENESIS is recommended instead.

/parallel/environments/pade/
	NIST Parallel Applications Development Environment (PADE) is
	a flexible, customizable environment for developing parallel
	applications that uses the PVM (Parallel Virtual Machine)
	message-passing library. It provides an integrated framework
	for all phases of development of a parallel application:
	editing, compilation, execution, debugging, and performance
	monitoring. PADE consists of an intuitive graphical user
	interface, a suite of PVM utilities, and extensive
	documentation in PostScript, ASCII, and HTML formats.

/parallel/environments/pade/README
	Overview of PADE

/parallel/environments/pade/Announcement
	Announcement of PADE 1.2
	Author: Robert R Lipman <lipman@cam.nist.gov>.

/parallel/environments/pade/nist_pade.1.2.0.tar.gz
	PADE version 1.2.0 distribution. Includes documentation and source.
	Requires Tcl/Tk to build

12th May 1995

/parallel/libraries/numerical/omega-calculator/
	The Omega Calculator and Library. The Omega calculator is a
	text-based interface to the Omega library, a set of routines
	developed for manipulating: Presburger formulas, Integer
	tuple sets and Integer tuple relations.
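	The Omega library manipulates these objects symbolically, as
	Presburger formulas; purely to illustrate the operations involved,
	here is a finite, enumerated stand-in with integer tuple sets and
	relations that can be applied to sets or composed:

```python
def apply_relation(rel, s):
    """Image of tuple set s under relation rel (a set of (in, out) pairs)."""
    return {y for (x, y) in rel if x in s}

def compose(r2, r1):
    """r2 o r1: pairs (x, z) such that (x, y) in r1 and (y, z) in r2."""
    return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}

# A dependence-like relation on 1-D iterations: i -> i+1, for 0 <= i < 4
succ = {((i,), (i + 1,)) for i in range(4)}
iters = {(0,), (2,)}
print(sorted(apply_relation(succ, iters)))                 # [(1,), (3,)]
print(sorted(apply_relation(compose(succ, succ), iters)))  # [(2,), (4,)]
```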

/parallel/libraries/numerical/omega-calculator/README
	Short description of the Omega Library

/parallel/libraries/numerical/omega-calculator/README.FILES
	Overview of files

/parallel/libraries/numerical/omega-calculator/calculator.ps.Z
	Documentation on the Omega Calculator

/parallel/libraries/numerical/omega-calculator/documentation.tar.Z
	Contains interface.ps, calculator.ps, and a C++ source file used as a
	running example in the documentation.

/parallel/libraries/numerical/omega-calculator/interface.ps.Z
	Documentation on the Omega Library.

/parallel/libraries/numerical/omega-calculator/omega-lib-source.tar.Z
	Omega library source

/parallel/libraries/numerical/omega-calculator/README.binaries
	Overview of Omega binary releases

/parallel/libraries/numerical/omega-calculator/omega-calc.decmips-ultrix.tar.Z
	Omega Calculator executable and examples, for Digital DECstations
	running Ultrix 4.2.

/parallel/libraries/numerical/omega-calculator/omega-calc.sparc-sunos4.tar.Z
	Omega Calculator executable and examples, for Sun Sparcstations
	running SunOS 4.1.3.

/parallel/libraries/communication/c4/
	Canonical Classes for Concurrency Control (C4) by Geoffrey
	Furnish <furnish@dino.ph.utexas.edu>, Institute for Fusion
	Studies, University of Texas at Austin, Austin, TX 78712,
	USA. C4 provides objects which implement a variety of
	synchronization and data-transmission paradigms. It is not
	a C++ language extension but is a library which can be used
	with any reasonably modern C++ compiler. It is to be used in
	concert with a message passing library such as MPI or NX. See
	also <URL:http://dino.ph.utexas.edu/~furnish>.

/parallel/libraries/communication/c4/Announcement
	Announcement

/parallel/libraries/communication/c4/README
	README

/parallel/libraries/communication/c4/c4.tar.gz
	Latest version of C4 - Canonical Classes for Concurrency Control.

/parallel/libraries/communication/c4/ds++.tar.gz
	Latest version of DS++ - the C++ Data Structure Library.

/parallel/libraries/communication/c4/c4-950503.tar.gz
	C4 of 3rd May 1995.


/parallel/environments/pvm3/pious/
	PIOUS, the Parallel Input/Output System, implements a
	parallel file system for applications executing in a
	parallel-distributed computing environment using PVM 3. PIOUS
	supports parallel applications by providing coordinated
	access to file objects with guaranteed consistency
	semantics. For performance, PIOUS declusters file data to
	exploit the combined file I/O and buffer cache capacities of
	networked computer systems.
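	A simple declustering scheme (assumed here for illustration, not
	necessarily PIOUS's actual layout) stripes fixed-size blocks
	round-robin over the data servers, so a sequential read fans out
	across all disks and buffer caches at once:

```python
def locate(offset, block_size, n_servers):
    """Map a file offset to (server, local block index, offset in block)."""
    block = offset // block_size
    return (block % n_servers,          # server holding the block
            block // n_servers,         # position in that server's local file
            offset % block_size)        # byte offset inside the block

# 4 KB blocks striped over 3 servers: consecutive blocks hit different servers
layout = [locate(b * 4096, 4096, 3)[0] for b in range(6)]
print(layout)   # [0, 1, 2, 0, 1, 2]
```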

/parallel/environments/pvm3/pious/announce1.2.2.txt
	PIOUS 1.2.2 announcement
	Author: Steve Moyer <moyer@mathcs.emory.edu>.

/parallel/environments/pvm3/pious/README
	PIOUS Distribution Information

/parallel/environments/pvm3/pious/BUGRPRT
	Bug Reports on current version - check before installing

/parallel/environments/pvm3/pious/pious1.2.2.tar.z.uu
	PIOUS 1.2.2 source distribution. GNU Library General Public License
	Version 2 (LGPL).

/parallel/environments/pvm3/pious/piousUG1.2.ps.z.uu
	PIOUS Users Guide

/parallel/environments/pvm3/pious/wwwpious.html
	PIOUS WWW Overview

/parallel/environments/enterprise/UserManualTR95-02.ps.Z
	Enterprise v2.4.2 user manual

/parallel/languages/fortran/adaptor/hpf_examples.tar.Z
	Updated High Performance Fortran examples for ADAPTOR.

/parallel/languages/fortran/adaptor/adp_3.0.tar.Z
	Updated ADAPTOR v3.0 source, documentation and examples.


9th May 1995

/parallel/environments/enterprise/
	The Enterprise parallel programming system - an interactive
	graphical programming environment for designing, coding, debugging,
	testing and executing programs in a distributed hardware
	environment. Enterprise code looks like familiar sequential code
	because the parallelism is expressed graphically and independently
	of the code.  The system automatically inserts the code necessary
	to correctly handle communication and synchronization, allowing the
	rapid construction of distributed programs.  Uses either NMP or PVM
	as communications systems and contains binaries for SUN4 and RS6K.

/parallel/environments/enterprise/00-README
	Installation instructions for Enterprise

/parallel/environments/enterprise/00-INDEX
	Index of files

/parallel/environments/enterprise/Papers/
	Papers by the Enterprise team

/parallel/environments/enterprise/Enterprise2.4.2-RS6K.tar.Z
	Binaries for IBM RS6000 running AIX 3.2

/parallel/environments/enterprise/Enterprise2.4.2-SUN4.tar.Z
	Binaries for SUN 4 running SunOS 4.1.3

/parallel/environments/enterprise/Enterprise2.4.2-common.tar.Z
	Shell scripts and include files common to all architectures

/parallel/languages/fortran/adaptor/README
	Announcement of ADAPTOR v3.0, June 1995

/parallel/languages/fortran/adaptor/adp_3.0.tar.Z
	ADAPTOR v3.0 source, documentation and examples.

/parallel/languages/fortran/adaptor/iguide.ps.Z
	ADAPTOR Installation guide.

/parallel/documents/mpi/anl/misc/mpich-1.0.9.tar.Z
	MPI Chameleon implementation version 1.0.9 (7th May 1995).

/parallel/documents/mpi/anl/misc/mpich-exp1.0.9.tar.Z
	MPI Chameleon implementation experimental version 1.0.9 (5th May
	1995).

/parallel/documents/benchmarks/genesis/parkbench-low-level-release.tar.Z
	PARKBENCH Distributed Memory Benchmarks low level Fortran source
	release for GENESIS Benchmarks from University of Southampton.

/parallel/software/folding-editors/README.origami
/parallel/software/folding-editors/origami-1.7.1.tar.gz
/parallel/software/folding-editors/origami-1.7.1.tar.Z
	Origami folding editor version 1.7.1 with many improvements by Vedat
	Demiralp, University of Kent, Canterbury, UK including many bug fixes,
	auto-saving, backup files, better browsing, backwards fold creation,
	fold creation now has a cancel, better display line. Includes Sparc
	SunOS 4.1.3 binary, source, key files and documentation. See
	README.origami for an overview.
	Author: Vedat Demiralp <S.V.Demiralp@ukc.ac.uk>.


4th May 1995

/parallel/occam/projects/occam-for-all/
	The Occam For All (OFA) project between the University of Kent, the
	University of Keele and industrial partners.

/parallel/occam/projects/occam-for-all/case-for-support.html
/parallel/occam/projects/occam-for-all/case-for-support.ps
/parallel/occam/projects/occam-for-all/case-for-support.txt
	Occam For All - Case for support

/parallel/occam/projects/occam-for-all/kroc/
	Kent Retargetable Occam Compiler (KROC) distribution
	area. KROC is a development of the "Occam For All" EPSRC
	project at the University of Kent at Canterbury and the
	University of Keele. It will provide a portable occam
	compiler for Sparc, Alpha and PowerPC processors. See the
	case for support in the parent directory.

/parallel/occam/projects/occam-for-all/kroc/README.html
/parallel/occam/projects/occam-for-all/kroc/README
	Overview of KROC

/parallel/occam/projects/occam-for-all/kroc/kroc-0.1beta.tar.gz
/parallel/occam/projects/occam-for-all/kroc/kroc-0.1beta.tar.Z
	KROC 0.1 beta BINARY distribution for Sun SPARCs with SunOS 4.1.3U1
	(or related versions)


1st May 1995

/parallel/languages/fortran/adaptor/docs/iguide.ps.Z
	"Adaptor Installation Guide Version 3.0"
	Author: T. Brandes.

/parallel/languages/fortran/adaptor/docs/language.ps.Z
	"Adaptor Language Reference Manual Version 3.0"
	Author: T. Brandes.

/parallel/languages/fortran/adaptor/docs/uguide.ps.Z
	"Adaptor Users Guide Version 3.0"
	Author: T. Brandes.

/parallel/languages/fortran/adaptor/docs/tut_adaptor.ps.Z
	"Adaptor : a public domain HPF compilation System (Tutorial)"
	German National Research Center for Information Technology (GMD),
	Institute for Algorithms and Scientific Computing (SCAI)
	Author: Dr. Thomas Brandes.

/parallel/languages/fortran/adaptor/docs/tut_hpf_language.ps.Z
	"High Performance Fortran (HPF): The Language (Tutorial)"
	Author: Dr. Thomas Brandes.

/parallel/languages/fortran/adaptor/docs/tut_hpf_standard.ps.Z
	"High Performance Fortran (HPF) - The new Standard for Data Parallel
	Programming (Tutorial)"
	Author: Dr. Thomas Brandes.

/parallel/documents/mpi/anl/mpi2/apr24.ps.Z
/parallel/documents/mpi/anl/mpi2/apr24.dvi
	Minutes of MPI meeting held from April 24-26, 1995 at Chicago,
	Illinois, USA. An unedited set of minutes taken during this MPI
	meeting. This contains both a summary of some of the discussions and
	official, binding votes of the MPI Forum.
	Author: William Gropp <gropp@mcs.anl.gov>.


24th April 1995

/parallel/documents/mpi/anl/sut-1.0.15.tar.Z
	Scalable Unix Tools V1.0.15: pps, pls, load, gload, prun, pkill, prm,
	pdistrib, pfind, fps, pfps etc. by Gropp and Lusk. Includes
	paper.


21st April 1995

/parallel/transputer/software/compilers/gcc/yaroslavl/changes6
	Changes in alpha 6 version

/parallel/transputer/software/compilers/gcc/yaroslavl/gcc-2.6.3-t800.6.dif.gz
	Alpha 6 version

/parallel/transputer/software/compilers/gcc/yaroslavl/patch6.gz
	Patch from alpha 5 to alpha 6

/parallel/environments/chimp/release/chimp.tar.Z
	Updated CHIMP distribution

/parallel/documents/hippi/hippi-atm_1.5x.ps.gz
	HIPPI over ATM document Version 1.5

/parallel/documents/hippi/hippi-atm_1.5x_changes.ps.gz
	HIPPI over ATM changes

/parallel/documents/hippi/minutes/apr95_hippi_min.ps.gz
/parallel/documents/hippi/minutes/apr95_hippi_min.txt
	Minutes for April 1995 HIPPI meeting

/parallel/documents/pario/papers/Kotz/kotz:explore.ps.Z
	"Exploring the use of I/O Nodes for Computation in a MIMD
	Multiprocessor"
	by David Kotz and Ting Cai, Department of Computer Science, Dartmouth
	College, Hanover, NH 03755, USA. {dfk,tcai}@cs.dartmouth.edu
	ABSTRACT:
	As parallel systems move into the production scientific-computing
	world, the emphasis will be on cost-effective solutions that provide
	high throughput for a mix of applications. Cost-effective solutions
	demand that a system make effective use of all of its resources. Many
	MIMD multiprocessors today, however, distinguish between compute and
	I/O nodes, the latter having attached disks and being dedicated to
	running the file-system server. This static division of
	responsibilities simplifies system management but does not necessarily
	lead to the best performance in workloads that need a different
	balance of computation and I/O. Of course, computational processes
	sharing a node with a file-system service may receive less CPU time,
	network bandwidth, and memory bandwidth than they would on a
	computation-only node. In this paper we begin to examine this issue
	experimentally. We found that high-performance I/O does not necessarily
	require substantial CPU time, leaving plenty of time for application
	computation. There were some complex file-system requests, however,
	which left little CPU time available to the application. (The impact
	on network and memory bandwidth still needs to be determined.) For
	applications (or users) that cannot tolerate an occasional
	interruption, we recommend that they continue to use only compute
	nodes. For tolerant applications needing more cycles than those
	provided by the compute nodes, we recommend that they take full
	advantage of both compute and I/O nodes for computation, and that
	operating systems should make this possible.


20th April 1995

/parallel/software/simulators/chaos/docs/minimal.ps.Z
	"Performance Analysis of a Minimal Adaptive Router"
	by Thu Duc Nguyen and Lawrence Snyder, Dept. of Computer Science and
	Engineering, University of Washington, Seattle, Washington, USA. In
	Proceedings of the 1994 Parallel Computer Routing and Communication
	Workshop, May 1994, pp. 31-44. Copyright 1994, Springer-Verlag.
	ABSTRACT:
	Two classes of adaptive routers, minimal and non-minimal, are
	emerging as possible replacements for the oblivious routers used in
	current multicomputer networks. In this paper, we compare the
	simulated performance of three routers, an oblivious, a minimal, and a
	non-minimal adaptive router, in a two-dimensional packet-switching
	torus network. The non-minimal adaptive router is shown to give the
	best performance and the oblivious router the worst. Significantly,
	however, for many traffic patterns, the minimal adaptive router's
	performance degrades sharply as the network saturates. Based on an
	analysis made using several visualization tools, we argue that this
	performance drop results from nonuniformities introduced for deadlock
	prevention. Furthermore, this analysis has led us to believe that
	network balance is an important performance characteristic that has
	been largely overlooked by designers of adaptive routing algorithms.
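	A minimal adaptive router, as studied in the paper, may forward a
	packet along any dimension that still brings it closer to its
	destination. This sketch (illustrative, not the simulator's code)
	enumerates those "profitable" next hops on a 2-D torus with
	wraparound links:

```python
def shortest_step(src, dst, size):
    """Signed step (-1, 0, or +1) along one torus dimension of given size."""
    d = (dst - src) % size
    if d == 0:
        return 0
    return 1 if d <= size - d else -1    # go whichever way around is shorter

def profitable_hops(node, dest, width, height):
    """All neighbours of `node` that lie on some minimal path to `dest`."""
    x, y = node
    hops = []
    sx = shortest_step(x, dest[0], width)
    sy = shortest_step(y, dest[1], height)
    if sx:
        hops.append(((x + sx) % width, y))
    if sy:
        hops.append((x, (y + sy) % height))
    return hops

# From (0, 0) to (3, 1) on a 4x4 torus: wrapping to x=3 and stepping to y=1
# are both minimal, so an adaptive router can pick either by channel load.
print(profitable_hops((0, 0), (3, 1), 4, 4))
```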

/parallel/software/simulators/chaos/docs/ebn.ps.Z
	Updated: "The Express Broadcast Network: A Network for
	Low-Latency Broadcast of Control Messages"
	by Kevin Bolding and William Yost, Dept. of Computer Science and
	Engineering, University of Washington, Seattle, Washington, USA.
	November 28, 1994.
	ABSTRACT:
	We present the Express Broadcast Network (EBN), a network used for
	quick and reliable broadcast of control messages in multicomputer
	networks. The EBN can be implemented with a single extra wire per
	network link and with minimal extra hardware at each routing node.
	However, it provides very fast broadcast mechanisms that take
	advantage of all redundancy in the network to deliver messages
	regardless of faulty network components. We present extensions of the
	basic network to include multiple-wire, multiple-bit, and
	bidirectional wire support, as well as describing basic methods of
	using the EBN for various applications.

