Newsgroups: comp.parallel,comp.parallel.mpi
From: rvdg@cs.utexas.edu (Robert van de Geijn)
Subject: InterCom: Lean Mean Collective Communication Machine
Organization: CS Dept, University of Texas at Austin
Date: Wed, 31 Aug 1994 19:36:59 GMT
Message-ID: <342m4r$2nc@earth.cs.utexas.edu>

We would like to announce the official release of the Interprocessor
Collective Communications (InterCom) Library, iCC release R1.0.0 for
the Intel family of Supercomputers.  This library is the result of an
ongoing collaboration between Mike Barnett (University of Idaho),
David Payne (Intel SSD), Satya Gupta (Intel SSD), Lance Shuler (Sandia
National Laboratories), Robert van de Geijn (University of Texas at
Austin), and Jerrell Watts (California Institute of Technology),
funded by the Intel Research Council, Intel SSD, and the University of
Texas Center for High Performance Computing.

The library implements a comprehensive approach to collective
communication.  The results are best summarized by the following
performance table, representative of improvements obtained on a 
16 X 32 Paragon, running under R1.1 of the O/S:

                   vector        NX        InterCom      ratio
     Operation     length       (sec)       (sec)     (NX/InterCom) 
   -----------------------------------------------------------------

     Broadcast     8 bytes      0.0012      0.0013        0.92
                 64K bytes      0.031       0.012         2.58
                  1M bytes      0.51        0.10          5.10

       Collect     8 bytes      0.42        0.0076       55.3
 (unknown len)   64K bytes      0.47        0.017        27.6  
                  1M bytes      1.11        0.080        13.9

Similar or better improvements are obtained under R1.2.  Attaining the
improvement in performance is as easy as linking in a library that
automatically translates NX collective communication calls to iCC
calls.  Furthermore, the iCC library provides additional functionality
such as scatter, gather, a number of global combine (summation)
variants, and more general "gopf" combine operations.  An MPI-like
group collective communications interface is planned for the next
release.
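To illustrate the drop-in nature of the translation layer, below is a
minimal sketch of a typical NX call site.  It assumes the standard NX
global-sum routine gdsum() with its usual (buffer, count, workspace)
arguments; no source change is needed to benefit from iCC, since the
translation happens at link time.  This fragment is illustrative only
and requires an Intel NX system to compile and run:

```c
/* Sketch: an ordinary NX program computing a global sum.
 * Relinked against the iCC translation library, the gdsum()
 * call below is serviced by the faster iCC implementation,
 * with no change to the source code.
 */
#include <stdio.h>

/* Declarations normally supplied by the NX headers (assumption). */
extern long mynode(void);
extern void gdsum(double x[], long n, double work[]);

int main(void)
{
    double x[1];           /* each node's local contribution */
    double work[1];        /* scratch space required by gdsum */

    x[0] = (double) mynode();

    /* Global combine: on return, x[0] holds the sum over all nodes.
     * With the iCC library linked in, this same call is translated
     * to the corresponding iCC collective routine automatically.
     */
    gdsum(x, 1L, work);

    if (mynode() == 0)
        printf("global sum = %f\n", x[0]);
    return 0;
}
```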

We would like to note that this library is not intended to compete
with MPI.  It was started as a research project into the techniques
required to develop high performance implementations of the MPI
collective communication calls.  We are making this library available
as a service to the user community, with the hope that these
techniques will eventually be incorporated into efficient MPI
implementations.

The library is available from netlib and via anonymous ftp.

Questions about the library should be addressed to

                 intercom@cs.utexas.edu

More information can be found on the World Wide Web:  

              http://www.cs.utexas.edu/~rvdg/intercom
