Newsgroups: comp.parallel.mpi
From: Robert van de Geijn <rvdg@cs.utexas.edu>
Subject: Re: Q:  Performance of collective communications?
Organization: University of Texas at Austin
Date: 24 Sep 1996 20:39:32 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <529gu4$2ur@news.cs.utexas.edu>

paullu@sys.toronto.edu (Paul Lu) wrote:
>
>Hello:
>
>I'm wondering if there are any (recent?) papers discussing
>the performance for collective communications in different
>MPI implementations.  Currently, I'm using MPICH and I'm
>particularly interested in functions for, say, broadcast
>(exploiting broadcast mediums, say), gather, scatter, etc.
>
>These functions exist in the interface with the hope that they can be
>optimized in implementations, so I'm wondering how much of that hope
>has been realized.
>
>A look at the MPI web pages at MS State and ANL and David Walker's
>bibliography of papers didn't turn up much.
>
>Any pointers would be appreciated.  Thanks in advance.
>
>	...Paul

To gain insight into what happens underneath the Paragon MPI implementation,
look at our InterCom webpage:

      http://www.cs.utexas.edu/users/rvdg/intercom

The InterCom collective communication library sits underneath the MPI and NX
collective communication implementations on the Paragon.  When
properly tuned (and we cannot guarantee that the version on a given
Paragon is properly tuned), this library delivers performance within
a factor of two of optimal on mesh architectures.
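As a rough illustration of where a factor-of-two figure for long vectors can
come from, here is a small cost-model sketch (not from the post, and not the
InterCom code itself): under an assumed alpha-beta model (alpha = per-message
latency, beta = per-item transfer time), it compares a minimum-spanning-tree
broadcast against a scatter-then-allgather broadcast of the general kind the
InterCom papers describe.  All machine parameters in it are hypothetical.

```python
import math

def mst_bcast(p, n, alpha, beta):
    """Minimum-spanning-tree broadcast: ceil(log2 p) rounds,
    each forwarding the full n-item vector."""
    return math.ceil(math.log2(p)) * (alpha + n * beta)

def scatter_allgather_bcast(p, n, alpha, beta):
    """Long-vector broadcast: MST scatter of n/p-sized pieces,
    then a ring allgather to reassemble the vector everywhere."""
    scatter = math.ceil(math.log2(p)) * alpha + (p - 1) / p * n * beta
    allgather = (p - 1) * alpha + (p - 1) / p * n * beta
    return scatter + allgather

if __name__ == "__main__":
    # Hypothetical machine: 64 nodes, 50 us latency, 100 MB/s per link.
    p, alpha, beta = 64, 50e-6, 1e-8
    for n in (1_000, 1_000_000):
        print(n, mst_bcast(p, n, alpha, beta),
              scatter_allgather_bcast(p, n, alpha, beta))
```

For large n the bandwidth term of the hybrid is about 2*(p-1)/p * n * beta,
i.e. within a factor of two of the n * beta lower bound, while the plain MST
broadcast pays log2(p) * n * beta; for short vectors the MST version wins on
latency, which is why a tuned library switches between algorithms.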

Robert

=========================================================================

Robert A. van de Geijn                  rvdg@cs.utexas.edu  
Associate Professor                     http://www.cs.utexas.edu/users/rvdg
Department of Computer Sciences         (Work)  (512) 471-9720
The University of Texas                 (Home)  (512) 251-8301 
Austin, TX 78712                        (FAX)   (512) 471-8885
