Newsgroups: comp.sys.transputer,comp.parallel.pvm,comp.parallel.mpi
From: D.J.Beckett@ukc.ac.uk (Dave Beckett)
Subject: Internet Parallel Computing Archive (HENSA Unix) - NEW FILES 1/2
Summary: New files since 12th Sep 1995. See ADMIN article for other info.
Keywords: transputer, occam, parallel, archive, anonymous ftp, www, gopher
Organization: University of Kent at Canterbury, UK.
Date: Fri, 13 Oct 95 14:49:08 GMT
Message-ID: <66@mint.ukc.ac.uk>

		 Internet Parallel Computing Archive
		 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
			 Funded by JISC NTSC
		 Hosted at HENSA Unix (JISC funded)

			New Files Part 1 of 2

The archive is available via these access methods:

 * World Wide Web: <URL:http://www.hensa.ac.uk/parallel/>

 * FTP site: <URL:ftp://unix.hensa.ac.uk/pub/parallel/>
   which means: anonymous ftp to unix.hensa.ac.uk and look in /pub/parallel

 * gopher to unix.hensa.ac.uk port 70 and go to
  "Internet Parallel Computing Archive"

 * One of the mirror sites:
   AUSTRALIA:  <URL:ftp://pcrf.anu.edu.au/HENSA/>
   FRANCE:     <URL:ftp://ftp.ibp.fr/pub/parallel/>
               <URL:ftp://ftp.jussieu.fr/pub/parallel/>
   JAPAN:      <URL:ftp://ftp.center.osaka-u.ac.jp/parallel/hensa/>

   See <URL:http://www.hensa.ac.uk/parallel/www/mirror-sites.html>
   for full details.

Please also consult the accompanying article if you cannot use any of
the above methods of access or for further information.

Dave


NEW FEATURES
~~~~~~~~~~~~

* New areas:

  /parallel/oses/	Operating systems

  /parallel/tools/	Compilers, editors, debuggers and other tools


* Updates to packages:

    TKPVM, kroc, winmpi, tape-pvm, mpi (ANL), xab3 [pvm], hippi, lam
  
  see below for full details


NEW FILES between 4th October 1995-13th October 1995
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
13th October 1995

/parallel/events/par-strat-sci-comp
	"Basic Parallel Strategies for Scientific Computing"
	Call for participation in workshop being held on 29th November 1995
	at University of Greenwich, UK. Organised by SEL-HPC, funded by JISC.
	The course is aimed at computational scientists, engineers and
	mathematicians with programming experience in Fortran or C who are
	interested in exploiting HPC facilities. It is also aimed at lecturers
	and teaching staff who wish to introduce the principles of HPC into
	their own courses. It is provided primarily for members of HEIs in
	London and the South East, although members of HEIs in other parts of
	the UK are welcome to attend.
	Cost: Free to members of UK HEFCEs but register by 24th November.
	See also <URL:http://www.lpac.ac.uk/SEL-HPC/>, the SEL-HPC home page.

/parallel/events/spaa96
	"8th Annual ACM Symposium on Parallel Algorithms and Architectures"
	by Robert Cypher <cypher@maldives.cs.jhu.edu>
	Call for papers for symposium being held from 24-26th June 1996 at
	Padua, Italy. Sponsored by ACM SIGACT and SIGARCH and organised by
	EATCS.
	Topics: novel approaches to parallel computing; new ideas in parallel
	algorithms or architectures (including networks); models for
	accounting for costs on parallel machines; the interaction of parallel
	algorithms, languages and architectures and others.
	Deadlines: Papers: 16th January 1996; Notification: 29th February
	1996; Final papers: 26th March 1996.
	See also <URL:http://www.cs.jhu.edu/Conferences/SPAA/>

/parallel/events/dynamic-load-bal-mpp
	"Dynamic Load Balancing on MPP Systems: Progress, Challenge and
	Issues"
	by Kevin Maguire <K.Maguire@daresbury.ac.uk>
	Call for participation and programme for workshop being held on 27th
	November 1995 at Daresbury Laboratory, Warrington, UK. Organised by
	CCP12 and the ERCOFTAC Special Interest Group for Parallel Computing
	in CFD, with support from the DL HPCI Centre.
	Objectives: Seek to identify the current status of dynamic load
	balancing on parallel systems, focussing on the T3D, Paragon and SP2
	although the issues also apply to clusters. Establish current practice
	and illustrate the challenges facing both academic and industrial
	researchers. The workshop will also seek to identify the issues
	involved in achieving efficient dynamic load balancing strategies on
	existing or future machines.
	Deadlines: Early Registration: 20th November 1995.
	See also
	<URL:http://www.dl.ac.uk/TCSC/Staff/Hu_Y_F/MEETING/meeting.html>

/parallel/events/pact96
	"4th International Conference on Parallel Architectures and
	Compilation Techniques"
	by Chip Weems <weems@cs.umass.edu>
	Call for papers for conference being held from 21-23rd October 1996
	at Boston, Massachusetts, USA. Sponsored by BIGNAMES.
	Topics: Novel computation models for fine and medium grain
	parallelism; Architectures and compilers for fine and medium grain
	parallelism; Compiler / hardware techniques for exploitation of
	fine-grain parallelism in massively parallel machines; Support for
	medium-grain parallelism via low-latency processor interconnection
	networks; New programming languages and paradigms for fine and medium
	grain parallelism; Insights into compilation techniques or
	architectural mechanisms via application studies; Exploitation of fine
	and medium grain parallelism in application-specific architectures
	using data-flow, multi-threaded and other novel approaches and others.
	Deadlines: Papers: 8th March 1996; Notification: 17th June 1996.
	See also <URL:http://www.cs.umass.edu/~pact96>

/parallel/events/hpdc5
	"5th IEEE International Symposium on High Performance Distributed
	Computing"
	by Manish Parashar <parashar@cs.utexas.edu>
	Call for papers for symposium being held from 6th-9th August 1996 at
	ON Center, Syracuse, New York, USA. Sponsored by IEEE, NPAC, NYSCATCA
	CASE in cooperation with ACM SIGCOMM and Rome Laboratory.
	Topics: Software environments and language support for high
	performance distributed computing; Parallel and distributed algorithms
	to solve computationally intensive problems across a LAN, MAN, or WAN;
	High performance I/O and file systems; Fault tolerance; Architectural
	support for high-speed communications or interconnection networks;
	Efficient communication interfaces for distributed computing; Gigabit
	network architectures; Networking for multimedia data; HPDC
	applications and case studies and others.
	Deadlines: Papers: 9th February 1996; Notification: 26th April 1996;
	Camera-ready papers: 31st May 1996.

/parallel/events/spdt96
	"Symposium on Parallel and Distributed Tools"
	by Barton Miller <bart@cs.wisc.edu>
	Call for papers for conference being held from 22-23rd May 1996 at
	Pennsylvania, Philadelphia, USA. Sponsored by ACM/SIGMETRICS.
	This conference has grown out of several very successful workshops,
	including the ACM/ONR Workshop on Parallel and Distributed Debugging,
	Workshop on Debugging and Performance Tuning for Parallel Computing
	Systems, and Supercomputer Debugging '9x. The conference will bring
	together researchers, system designers, implementors, and users in a
	common forum to discuss program monitoring, debugging, and control for
	parallel and distributed systems.
	Topics: static and dynamic analysis techniques; performance
	prediction; program visualization, auralization, and animation;
	perturbation analysis; debugging/tuning parallelized code; tools for
	high-level parallel languages; race detection; architectural support
	for measurement & debugging; program instrumentation; network
	measurement and debugging; new debugging and monitoring paradigms;
	experiences in debugging/tuning large applications; descriptions of
	interesting research or commercial debuggers and others.
	Deadlines: Papers: 1st December 1995; Notification: 1st March 1996;
	Final papers: 12th April 1996.
	See also <URL:http://www.cs.wisc.edu/~paradyn/spdt96.html>

/parallel/journals/Wiley/trcom/msword-styles/
	MS Word 6.0 template file and guidelines for styling Transputer
	Communications papers.

/parallel/journals/Wiley/trcom/msword-styles/ttci01.dot
	MS Word 6.0 template file for styling Transputer Communications
	papers.

/parallel/journals/Wiley/trcom/msword-styles/ttcinst.doc
	MS Word 6.0 guidelines for styling Transputer Communications papers.


12th October 1995

/parallel/occam/projects/occam-for-all/kroc/kroc-0.5beta-sparc-sun-sunos4.1.3_U1.tar.gz
/parallel/occam/projects/occam-for-all/kroc/kroc-0.5beta-sparc-sun-sunos4.1.3_U1.tar.Z
	KROC 0.5 beta BINARY distribution for Sun Sparcs with SunOS 4.1.3U1
	(or related versions). Compiles the occam 2.1 language (RECORDS and
	DATA TYPEs), has an occam/C interface tool and has better separate
	compilation. Includes missing INT16 and INT64 operations that were
	accidentally omitted from 0.4. Passes all 'CG tests' except #19.
	Author: Occam For All Group <ofa-bugs@ukc.ac.uk>

/parallel/occam/projects/occam-for-all/kroc/kroc-0.5beta-sparc-sun-solaris2.3.tar.gz
/parallel/occam/projects/occam-for-all/kroc/kroc-0.5beta-sparc-sun-solaris2.3.tar.Z
	KROC 0.5 beta BINARY distribution for Sun Sparcs with Solaris 2.3
	(SunOS 5.3) (or related versions). Compiles the occam 2.1 language
	(RECORDS and DATA TYPEs), has an occam/C interface tool and has
	better separate compilation. Includes missing INT16 and INT64
	operations that were accidentally omitted from 0.4. Passes all
	'CG tests' except #19.
	Author: Occam For All Group <ofa-bugs@ukc.ac.uk>


10th October 1995

/parallel/environments/pvm3/tkpvm/
	tkPvm updated to support Tcl 7.5a1+, tk4.1a1+, tk4.1a1dash,
	tk4.0p2+, tcl7.4p2+

/parallel/environments/lam/distribution/mpi-poll.txt
	MPI Poll '95 Results from Ohio Supercomputer Center to find out how
	programmers are using MPI, what extensions are needed and how to
	prioritize future work.
	There were 129 responses.
	Also available at <URL:http://www.osc.edu/Lam/mpi/mpi_poll.html>

/parallel/standards/hippi/hippi-sc_2.9.ps.gz
	"High-Performance Parallel Interface - Physical Switch Control"
	A maintenance copy of ANSI X3.222-1993. Sept 28, 1995.
	ABSTRACT:
	This standard provides a protocol for controlling physical layer
	switches which are based on the High-performance Parallel Interface, a
	simple high-performance point-to-point interface for transmitting
	digital data at peak data rates of 800 or 1600 Mbit/s between
	data-processing equipment.

/parallel/languages/code/GBL2Paper.ps.Z
	"A High Level Language for Specifying Graph Based Languages and their
	Programming Environments (Draft)"
	by M.F. Kleyn <kleyn@cs.utexas.edu> and J.C. Browne
	<browne@cs.utexas.edu>.
	ABSTRACT:
	This paper describes a high level language for specifying programming
	environments for programming languages that are based on directed
	attributed graphs. The high level language allows the specifier to
	describe views of portions of a program written in such a graph-based
	language, the editing operations used to create the program,
	animations of the execution of the program, and sufficient detail of
	the execution semantics to support the animations. We demonstrate the
	use of the specification language with two simple examples of
	graph-based languages: Petri Nets, and an extension of Petri Nets
	which includes the ability to nest nets hierarchically. We further
	describe how to generate the programming environment for graph-based
	languages from descriptions made in the specification language. This
	work is the basis for developing a compiler for generating programming
	environments for graph-based languages automatically. We wish to
	remedy the ad-hoc re-inventing of such systems by providing a
	high-level domain-specific set of abstractions for specifying them.
	The specification language is based on using a grammar to describe the
	components of the graph-based language and using a first-order logic
	based language to describe state changes in editing, execution, and
	animation.
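	The Petri net examples the paper uses hint at what "execution
	semantics" means for such a graph-based language. As a rough
	illustration of the underlying idea (not of the paper's
	specification language itself), the basic firing rule of a plain
	Petri net can be sketched in C:

	```c
	#define NPLACES 3

	/* A transition consumes tokens from input places and produces
	   tokens in output places; it is enabled only when every input
	   place holds at least the required number of tokens. */
	typedef struct {
	    int in[NPLACES];   /* tokens consumed from each place */
	    int out[NPLACES];  /* tokens produced in each place */
	} Transition;

	static int enabled(const int marking[NPLACES], const Transition *t)
	{
	    for (int p = 0; p < NPLACES; p++)
	        if (marking[p] < t->in[p])
	            return 0;
	    return 1;
	}

	/* Fire the transition if enabled, updating the marking in place;
	   returns 1 on success, 0 if the transition was not enabled. */
	static int fire(int marking[NPLACES], const Transition *t)
	{
	    if (!enabled(marking, t))
	        return 0;
	    for (int p = 0; p < NPLACES; p++)
	        marking[p] += t->out[p] - t->in[p];
	    return 1;
	}
	```

	The paper's contribution is generating editors and animators for
	such rules from a declarative description, rather than hand-coding
	state changes like the above for every new graph-based language.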


9th October 1995

/parallel/events/pvm-fortran
	"Introduction to PVM with Fortran"
	Call for attendance for course being held on 1st November 1995 at
	University of Greenwich, UK. Organised by SEL-HPC
	This one-day course provides an introduction to the principles behind
	PVM such as message passing and heterogeneous computing, and a
	description of PVM's interface for Fortran programmers. The course
	closes with PVM program demonstrations and a simple PVM exercise.
	See also <URL:http://www.lpac.ac.uk/SEL-HPC/>

/parallel/environments/pvm3/xab3/Dome.ps.Z
	"Distributed Object Migration Environment"
	by Adam Beguelin <adambg@cs.cmu.edu>, School of Computer Science,
	Carnegie Mellon University, Pittsburgh, PA 15213, USA / Pittsburgh
	Supercomputing Center
	A talk about Dome and Distributed Objects for Parallel Programming.
	Dome is a C++ library of data parallel objects that are automatically
	distributed in a heterogeneous computing environment.

/parallel/environments/pvm3/xab3/ipps96.ps.Z
	"Dome: Parallel programming in a distributed computing environment"
	by Arabe, Beguelin, Lowekamp, Seligman, Starkey and Stephan, School
	of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213,
	USA
	ABSTRACT:
	The Distributed object migration environment (Dome) addresses three
	major issues of distributed parallel programming: ease of use, load
	balancing, and fault tolerance. In order to make parallel programming
	easier, Dome handles process control, data distribution,
	communication, and synchronization for Dome programs running in a
	heterogeneous distributed computing environment. The parallel
	programmer writes a C++ program using Dome objects. These objects are
	automatically partitioned and distributed over a network of computers.
	Methods for operating on Dome objects take advantage of this
	distribution in performing operations wherever possible.

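	The automatic partitioning the abstract describes typically comes
	down to block-distributing each object's elements across processes.
	A minimal C sketch of that arithmetic (an illustration of the
	general scheme, with hypothetical names; it is not Dome's API):

	```c
	/* Block partitioning: element i of an n-element object is owned
	   by one of p processes.  Each process gets n/p elements, and the
	   n%p leftover elements go one each to the lowest-ranked
	   processes.  Returns the half-open range [*lo, *hi) owned by
	   the process with the given rank. */
	static void block_range(long n, int p, int rank, long *lo, long *hi)
	{
	    long base = n / p, rem = n % p;
	    *lo = rank * base + (rank < rem ? rank : rem);
	    *hi = *lo + base + (rank < rem ? 1 : 0);
	}
	```

	A library like Dome hides this bookkeeping inside its C++ objects,
	so the programmer indexes the whole object while each process only
	stores and operates on its own range.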
/parallel/environments/pvm3/xab3/ckpt_ipps96.ps.Z
	"High Level Fault Tolerance in Distributed Programs"
	by Erik Seligman <eriks@cs.cmu.edu>, Intel Corporation; Adam Beguelin
	<adambg@cs.cmu.edu>, School of Computer Science, Carnegie Mellon
	University, Pittsburgh, PA 15213, USA / Pittsburgh Supercomputing
	Center and Peter Stephan <pstephan@cs.cmu.edu>, School of Computer
	Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
	ABSTRACT:
	We have developed high level checkpoint and restart methods for use
	with the Distributed object migration environment (Dome), a C++
	library of data parallel objects that are automatically distributed in
	a heterogeneous computing environment. Fault tolerance mechanisms for
	use with Dome can be implemented at various levels of programming
	abstraction. In a high level method the checkpoint and restart
	mechanisms are built into the C++ objects. This package provides
	highly portable checkpointing. However, it is not transparent to the
	application programmer, and the user's program structure is
	constrained. Another high level method uses a preprocessor to insert
	most of the checkpoint and restart calls automatically. This is also
	highly portable and is much more transparent to the programmer. Low
	level checkpointing methods periodically save the program's memory
	image upon interrupt. The low level methods are completely transparent
	to the programmer but are not portable. Because portability of both
	the fault tolerance package and the checkpoints produced is an
	important goal, this paper focuses on the high level checkpointing
	methods. In addition, an implementation of high level fault tolerance
	that has been executed on multiple architectures is described.
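	The "high level" approach the abstract contrasts with memory-image
	checkpointing amounts to the application serialising its own logical
	state at safe points and reloading it on restart. A bare C sketch of
	that idea (hypothetical names and state; Dome's real mechanism is
	built into its C++ objects):

	```c
	#include <stdio.h>

	/* The logical program state the application chooses to save.
	   Writing fields explicitly (rather than a raw memory image) is
	   what makes the checkpoint portable across architectures. */
	typedef struct {
	    long iteration;
	    double partial_sum;
	} State;

	/* Write the state to a checkpoint file; 0 on success, -1 on error. */
	static int save_checkpoint(const char *path, const State *s)
	{
	    FILE *f = fopen(path, "wb");
	    if (!f)
	        return -1;
	    size_t ok = fwrite(s, sizeof *s, 1, f);
	    return (fclose(f) == 0 && ok == 1) ? 0 : -1;
	}

	/* Reload saved state; -1 means no usable checkpoint (fresh start). */
	static int load_checkpoint(const char *path, State *s)
	{
	    FILE *f = fopen(path, "rb");
	    if (!f)
	        return -1;
	    size_t ok = fread(s, sizeof *s, 1, f);
	    fclose(f);
	    return ok == 1 ? 0 : -1;
	}
	```

	The paper's preprocessor variant inserts calls like these
	automatically, recovering most of the transparency of low-level
	checkpointing while keeping portability.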

/parallel/environments/pvm3/xab3/ecoipps.ps.Z
	"ECO: Efficient Collective Operations for Communication on
	Heterogeneous Networks"
	by Bruce B. Lowekamp <flowekamp@cs.cmu.edu>, School of Computer
	Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA and
	Adam Beguelin <adambg@cs.cmu.edu>, School of Computer Science,
	Carnegie Mellon University, Pittsburgh, PA 15213, USA.
	ABSTRACT:
	PVM and other distributed computing systems have enabled the use of
	networks of workstations for parallel computation, but their approach
	of treating a network as a collection of point-to-point connections
	does not promote efficient communication, particularly collective
	communication. ECO is a package which solves this problem with
	programs which analyze the network and establish efficient
	communication patterns which are used by a library of collective
	operations. The analysis is done off-line, so that after paying the
	one-time cost of analyzing the network, the execution of application
	programs is not delayed. This paper gives performance results from
	using ECO to implement the collective communication in CHARMM, a
	widely used macromolecular dynamics package. ECO facilitates the
	development of data parallel applications by providing a simple
	interface to routines which use the available heterogeneous networks
	efficiently. This approach gives a naive programmer the ability to use
	the available networks to their full potential without acquiring any
	knowledge of the network structure.
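	The gain from structured collective patterns over naive
	point-to-point loops is easy to see for broadcast: a tree in which
	every process that already holds the data forwards it each round
	finishes in about log2(p) rounds, versus the p-1 sequential sends
	of a root-sends-to-everyone loop. A small C sketch of that count
	(illustrative only; ECO adapts its patterns to measured network
	characteristics rather than assuming a uniform tree):

	```c
	/* Rounds needed for a binomial-tree broadcast among p processes:
	   the set of processes holding the data doubles every round,
	   so the answer is ceil(log2(p)). */
	static int broadcast_rounds(int p)
	{
	    int rounds = 0;
	    int covered = 1;           /* processes holding the data */
	    while (covered < p) {
	        covered *= 2;          /* every holder forwards once */
	        rounds++;
	    }
	    return rounds;
	}
	```

	For 64 workstations that is 6 rounds instead of 63 sends, which is
	the kind of saving ECO's off-line analysis makes available to
	applications such as CHARMM without any programmer effort.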

/parallel/standards/mpi/anl/
	MPI Chameleon implementation version 1.0.11 release

/parallel/standards/mpi/anl/README
	Details of MPI Chameleon package, installation and build
	instructions.

/parallel/standards/mpi/anl/mpich-1.0.11.tar.Z
	MPI Chameleon implementation version 1.0.11 (29th September 1995).

/parallel/standards/mpi/anl/userguide.ps.Z
	"Users' Guide to mpich, a Portable Implementation of MPI"
	by Patrick Bridges; Nathan Doss; William Gropp; Edward Karrels; Ewing
	Lusk and Anthony Skjellum.
	July 31, 1995.
	ABSTRACT:
	MPI (Message-Passing Interface) is a standard specification for
	message-passing libraries. mpich is a portable implementation of the
	full MPI specification for a wide variety of parallel computing
	environments. This paper describes how to build and run MPI programs
	using the MPICH implementation of MPI.

/parallel/standards/mpi/anl/install.ps.Z
	"Installation Guide to mpich, a Portable Implementation of MPI"
	by Patrick Bridges; Nathan Doss; William Gropp; Edward Karrels; Ewing
	Lusk and Anthony Skjellum.
	August 1st, 1995.
	ABSTRACT:
	MPI (Message-Passing Interface) is a standard specification for
	message-passing libraries. mpich is a portable implementation of the
	full MPI specification for a wide variety of parallel computing
	environments, including workstation clusters and massively parallel
	processors (MPPs). mpich contains, along with the MPI library itself,
	a programming environment for working with MPI programs. The
	programming environment includes a portable startup mechanism, several
	profiling libraries for studying the performance of MPI programs, and
	an X interface to all of the tools. This guide explains how to
	compile, test, and install mpich and its related tools.

/parallel/standards/mpi/anl/manwww.tar.Z
	HTML versions of the manual pages for MPI and MPE functions.

/parallel/standards/mpi/anl/nupshot.tar.Z
	Nupshot: A performance visualization tool that displays logfiles in
	the 'alog' format or the PICL v.1 format. Requires TCL 7.3 and TK 3.6
	to build.
	Author: Ed Karrels <karrels@mcs.anl.gov>

