Newsgroups: comp.parallel.mpi,comp.answers,news.answers
From: doss@ERC.MsState.Edu (Nathan Doss)
Subject: Message Passing Interface (MPI) FAQ
Summary: This posting contains a list of common questions (and their answers) about the Message Passing Interface standard (also  known as MPI).
Keywords: FAQ, MPI, Parallel & Distributed Computing
Organization: Mississippi State University NSF Engineering Research  Center for Computational Field Simulation
Date: 08 Aug 1994 15:14:31 GMT
Message-ID: <doss-mpi-faq-7-1994@ERC.MsState.Edu>

Archive-Name: mpi-faq
Last-Modified: Mon, Aug 08 1994
Posting-Frequency: monthly
Version: $Id: mpi-faq.bfnn,v 1.13 1994/07/19 20:27:10 doss Exp doss $

This is the list of Frequently Asked Questions about the MPI (Message
Passing Interface) standard, a set of library functions for message
passing [see Q1.1 `What is MPI?' for more details].

MPI questions/answers and pointers to additional MPI information are
actively sought.  Contributions are welcome!

You can skip to a particular question by searching for `Question n.n'.
See Q6.2 `Formats in which this FAQ is available' for details of where to
get the PostScript, Emacs Info, and HTML versions of this document.

For a list of recent changes to this FAQ, see Q1.7 `Recent changes to the
FAQ'.

===============================================================================

Index

 Section 1.  Introduction and General Information
 Q1.1        What is MPI?
 Q1.2        What is the MPI Forum?
 Q1.3        Who was involved in creating the MPI standard?
 Q1.4        The history of MPI
 Q1.5        Are there plans for an MPI-2?
 Q1.6        How do I send comments about MPI to MPIF members?
 Q1.7        Recent changes to the FAQ.

 Section 2.  Network sources and resources
 Q2.1        What newsgroups and mailing lists are there for MPI?
 Q2.2        Where do I obtain a copy of the MPI document?
 Q2.3        What MPI implementations are available and where do I get them?
 Q2.4        What information about MPI is available through the WWW?
 Q2.5        MPI-related papers
 Q2.6        MPI-related books
 Q2.7        Where can I find the errata for the MPI document?
 Q2.8        Are the MPI Forum mailing lists archived somewhere?
 Q2.9        Are the minutes from the MPIF forum meetings available?
 Q2.10       Where can I get example MPI programs?

 Section 3.  Common Tasks in MPI
 Q3.1        How do I send variable-sized structures?

 Section 4.  Frequently Encountered Programming Errors 
 Q4.1        Lines longer than 72 columns in FORTRAN
 Q4.2        Use language appropriate datatypes
 Q4.3        Missing error argument 
 Q4.4        Wrong arguments to MPI_INIT in C

 Section 5.  How to get further assistance
 Q5.1        You still haven't answered my question !
 Q5.2        What to put in a posting about MPI

 Section 6.  Administrative information and acknowledgements
 Q6.1        Feedback is invited
 Q6.2        Formats in which this FAQ is available
 Q6.3        Where can I obtain a copy of this FAQ
 Q6.4        Authorship and acknowledgements
 Q6.5        Disclaimer and Copyright

===============================================================================

Section 1.  Introduction and General Information

 Q1.1        What is MPI?
 Q1.2        What is the MPI Forum?
 Q1.3        Who was involved in creating the MPI standard?
 Q1.4        The history of MPI
 Q1.5        Are there plans for an MPI-2?
 Q1.6        How do I send comments about MPI to MPIF members?
 Q1.7        Recent changes to the FAQ.

-------------------------------------------------------------------------------

Question 1.1.  What is MPI?

MPI stands for Message Passing Interface.  The goal of MPI, simply stated,
is to develop a widely used standard for writing message-passing programs.
As such the interface should establish a practical, portable, efficient,
and flexible standard for message passing.

Message passing is a paradigm used widely on certain classes of parallel
machines, especially those with distributed memory. Although there are
many variations, the basic concept of processes communicating through
messages is well understood. Over the last ten years, substantial progress
has been made in casting significant applications in this paradigm. Each
vendor has implemented its own variant. More recently, several systems
have demonstrated that a message passing system can be efficiently and
portably implemented. It is thus an appropriate time to try to define both
the syntax and semantics of a core of library routines that will be useful
to a wide range of users and efficiently implementable on a wide range of
computers.

In designing MPI the MPI Forum sought to make use of the most attractive
features of a number of existing message passing systems, rather than
selecting one of them and adopting it as the standard. Thus, MPI has been
strongly influenced by work at the IBM T. J. Watson Research Center,
Intel's NX/2, Express, nCUBE's Vertex, p4, and PARMACS. Other important
contributions have come from Zipcode, Chimp, PVM, Chameleon, and PICL.

The main advantages of establishing a message-passing standard are
portability and ease-of-use. In a distributed memory communication
environment in which the higher level routines and/or abstractions are
built upon lower level message passing routines, the benefits of
standardization are particularly apparent.  Furthermore, the definition of
a message passing standard, such as that proposed here, provides vendors
with a clearly defined base set of routines that they can implement
efficiently, or in some cases provide hardware support for, thereby
enhancing scalability.

Source: MPI Document

-------------------------------------------------------------------------------

Question 1.2.  What is the MPI Forum?

Message Passing Interface Forum

The Message Passing Interface Forum (MPIF), with participation from over
40 organizations, has been meeting since November 1992 to discuss and
define a set of library interface standards for message passing. MPIF is
not sanctioned or supported by any official standards organization.

Source: MPI Document

-------------------------------------------------------------------------------

Question 1.3.  Who was involved in creating the MPI standard?

The technical development was carried out by subgroups, whose work was
reviewed by the full committee. During the period of development of the
Message Passing Interface (MPI), many people served in positions of
responsibility and are listed below.

* Jack Dongarra, David Walker, Conveners and Meeting Chairs

* Ewing Lusk, Bob Knighten, Minutes

* Marc Snir, William Gropp, Ewing Lusk, Point-to-Point Communications

* Al Geist, Marc Snir, Steve Otto, Collective Communications

* Steve Otto, Editor

* Rolf Hempel, Process Topologies

* Ewing Lusk, Language Binding

* William Gropp, Environmental Management

* James Cownie, Profiling

* Anthony Skjellum, Lyndon Clarke, Marc Snir, Richard Littlefield, Mark
  Sears, Groups, Contexts, and Communicators

* Steven Huss-Lederman, Initial Implementation Subset

See the MPI document for a list of other active participants in the MPI
process not mentioned above.

Source:  MPI Document

-------------------------------------------------------------------------------

Question 1.4.  The history of MPI

The MPI standardization effort involved about 60 people from 40
organizations, mainly from the United States and Europe. Most of the major
vendors of concurrent computers were involved in MPI, along with
researchers from universities, government laboratories, and industry. The
standardization process began with the Workshop on Standards for Message
Passing in a Distributed Memory Environment, sponsored by the Center for
Research on Parallel Computing, held April 29-30, 1992, in Williamsburg,
Virginia. At this workshop the basic features essential to a standard
message passing interface were discussed, and a working group established
to continue the standardization process.

A preliminary draft proposal, known as MPI1, was put forward by
Dongarra, Hempel, Hey, and Walker in November 1992, and a revised version
was completed in February 1993. MPI1 embodied the main features that were
identified at the Williamsburg workshop as being necessary in a message
passing standard. Since MPI1 was primarily intended to promote discussion
and ``get the ball rolling,'' it focused mainly on point-to-point
communications. MPI1 brought to the forefront a number of important
standardization issues, but did not include any collective communication
routines and was not thread-safe.

In November 1992, a meeting of the MPI working group was held in
Minneapolis, at which it was decided to place the standardization process
on a more formal footing, and to generally adopt the procedures and
organization of the High Performance Fortran Forum. Subcommittees were
formed for the major component areas of the standard, and an email
discussion service established for each. In addition, the goal of
producing a draft MPI standard by the Fall of 1993 was set. To achieve
this goal the MPI working group met every 6 weeks for two days throughout
the first 9 months of 1993, and presented the draft MPI standard at the
Supercomputing 93 conference in November 1993. These meetings and the
email discussion together constituted the MPI Forum, membership of which
has been open to all members of the high performance computing community.

Source: MPI Document

-------------------------------------------------------------------------------

Question 1.5.  Are there plans for an MPI-2?

It was decided at the final MPI meeting (Feb. 1994) that plans for
extending MPI should wait until people have had some experience with the
current version of MPI.  The MPI Forum plans an informal meeting at
Supercomputing '94 to discuss the possibility of an MPI-2 effort.

A discussion of possible MPI-2 extensions was held at the end of the
February 1994 meeting.  The following items were mentioned as possible
areas of expansion.

* I/O

* Active messages

* Process startup

* Dynamic process control

* Remote store/access

* Fortran 90 and C++ language bindings

* Graphics

* Real-time support

* Other "enhancements"

-------------------------------------------------------------------------------

Question 1.6.  How do I send comments about MPI to MPIF members?

You can send comments to mpi-comments@cs.utk.edu.  Your comments will be
forwarded to MPIF committee members who will attempt to respond.

Source: MPI Document

-------------------------------------------------------------------------------

Question 1.7.  Recent changes to the FAQ.

Major changes include:

* Added recent changes section.

* The newsgroup, comp.parallel.mpi, has now been created.

* Information about where to find the MPI document errata.

* A new MPI implementation is available called UNIFY.

* This FAQ is now posted to comp.parallel.mpi instead of to comp.parallel
  and comp.parallel.pvm.

* This FAQ is now available by anonymous ftp from rtfm.mit.edu.

===============================================================================

Section 2.  Network sources and resources

 Q2.1        What newsgroups and mailing lists are there for MPI?
 Q2.2        Where do I obtain a copy of the MPI document?
 Q2.3        What MPI implementations are available and where do I get them?
 Q2.4        What information about MPI is available through the WWW?
 Q2.5        MPI-related papers
 Q2.6        MPI-related books
 Q2.7        Where can I find the errata for the MPI document?
 Q2.8        Are the MPI Forum mailing lists archived somewhere?
 Q2.9        Are the minutes from the MPIF forum meetings available?
 Q2.10       Where can I get example MPI programs?

-------------------------------------------------------------------------------

Question 2.1.  What newsgroups and mailing lists are there for MPI?

An MPI-specific newsgroup (comp.parallel.mpi) was recently created by a
vote of 506 to 14.  The RFD for comp.parallel.mpi was originally
posted to comp.parallel, comp.parallel.pvm, and news.announce.newgroups on
April 4, 1994.  The CFV was issued June 15, 1994.  The voting results,
RFD, and CFV can be retrieved by anonymous ftp from aurora.cs.msstate.edu
as pub/mpi/comp.parallel.result, pub/mpi/comp.parallel.mpi.rfd and
pub/mpi/comp.parallel.mpi.cfv.

The MPI Forum ran several mailing lists which are now archived [see Q2.8
`Are the MPI Forum mailing lists archived somewhere?'] on netlib.  These
are no longer active.

-------------------------------------------------------------------------------

Question 2.2.  Where do I obtain a copy of the MPI document?

The official PostScript version of the document can be obtained from
netlib at ORNL by sending a mail message to netlib@ornl.gov with the
message "send mpi-report.ps from mpi".

It may also be obtained by anonymous ftp from the following sites:

* netlib2.cs.utk.edu/mpi/mpi-report.ps

* aurora.cs.msstate.edu/pub/mpi/mpi-report.ps.Z

* info.mcs.anl.gov/pub/mpi/mpi-report.ps.Z

* tbag.osc.edu/pub/lam/mpi-report.ps.Z

Argonne National Lab also provides a hypertext version available through
the WWW at http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html .

-------------------------------------------------------------------------------

Question 2.3.  What MPI implementations are available and where do I get them?

* IBM MPI-F implementation

  MPI-F is an experimental native high performance MPI implementation on
  the IBM-SP1 utilizing the High Performance switch or UDP.

* Argonne National Laboratory/Mississippi State University implementation.

  Available by anonymous ftp from info.mcs.anl.gov in pub/mpi .

* Edinburgh Parallel Computing Centre CHIMP implementation.

  Available by anonymous ftp from ftp.epcc.ed.ac.uk as
  pub/chimp/release/chimp.tar.Z .

* Mississippi State University UNIFY implementation.

  The UNIFY system provides a subset of MPI within the PVM environment,
  without sacrificing the PVM calls already available.

  Available by anonymous ftp from ftp.erc.msstate.edu under unify .

* Ohio Supercomputer Center LAM implementation.

  A full MPI standard implementation for LAM, a UNIX cluster computing
  environment.

  Available by anonymous ftp from tbag.osc.edu under pub/lam .

-------------------------------------------------------------------------------

Question 2.4.  What information about MPI is available through the WWW?

The following is a list of URLs that contain MPI-related information.

* Netlib Repository at UTK/ORNL (http://www.netlib.org/mpi/index.html)

* Argonne National Lab (http://www.mcs.anl.gov/mpi)

* Mississippi State University, Department of Computer Science
  (http://www.cs.msstate.edu/dist_computing/mpi.html)

* Ohio Supercomputer Center, LAM Project (http://www.osc.edu/lam.html)

-------------------------------------------------------------------------------

Question 2.5.  MPI-related papers

A bibliography (in BibTeX format) of MPI-related papers is available by
anonymous ftp from aurora.cs.msstate.edu in /pub/mpi/papers/MPI.bib .
Additions and corrections should be sent to doss@ERC.MsState.Edu.

-------------------------------------------------------------------------------

Question 2.6.  MPI-related books

Rusty Lusk and Bill Gropp (Argonne National Lab) and Anthony Skjellum
(Mississippi State University) are writing an application-oriented book,
`Using MPI', that describes both C and Fortran uses of MPI with many
examples.  It is being published by MIT Press, for release by
Supercomputing '94.

Steve Otto (Oregon Graduate Institute of Science & Technology) and others
are currently writing an Annotated Reference Manual for MPI.

-------------------------------------------------------------------------------

Question 2.7.  Where can I find the errata for the MPI document?

An early version of the errata can be obtained by anonymous ftp from
aurora.cs.msstate.edu as /pub/mpi/mpi-errata.ps .

-------------------------------------------------------------------------------

Question 2.8.  Are the MPI Forum mailing lists archived somewhere?

Yes.  They are available from netlib.  Send a message to netlib@ornl.gov
with the message "send index from mpi".  You can also ftp them from
netlib2.cs.utk.edu in /mpi .

The following archived lists are available:

* whole committee (mpi-comm 2364K)

* core MPIF members (mpi-core 609K)

* introduction subcommittee (mpi-intro 41K)

* point-to-point subcommittee (mpi-pt2pt 3862K)

* collective communication subcommittee (mpi-collcomm 1539K)

* process topology subcommittee (mpi-ptop 1193K)

* language binding subcommittee (mpi-lang 211K)

* formal language description subcommittee (mpi-formal 72K)

* environment inquiry subcommittee (mpi-envir 140K)

* profiling subcommittee (mpi-profile 112K)

* context subcommittee (mpi-context 4618K)

* subset subcommittee (mpi-iac 433K)

-------------------------------------------------------------------------------

Question 2.9.  Are the minutes from the MPIF forum meetings available?

The minutes from some of the MPIF meetings are available from netlib.
Send a message to netlib@ornl.gov with the message "send index from mpi".
You can also ftp them from netlib2.cs.utk.edu in /mpi .

There are minutes from the following meetings:

* January, 1993

* February, 1993

* April, 1993

* August, 1993

-------------------------------------------------------------------------------

Question 2.10.  Where can I get example MPI programs?

Most implementations mentioned in Q2.3 `What MPI implementations are
available and where do I get them?' are distributed with some example
programs.  As people begin to use MPI, more MPI code will start showing
up.

A small tutorial, MPI: It's Easy to Get Started, with an example program
is available through the WWW from the Ohio Supercomputer Center at
"http://www.osc.edu/Lam/mpi/tutorial1.html".

===============================================================================

Section 3.  Common Tasks in MPI

 Q3.1        How do I send variable-sized structures?

-------------------------------------------------------------------------------

Question 3.1.  How do I send variable-sized structures?

The November 2, 1993 version of the MPI draft did not have a means of
packing and sending variable-sized structures; the final document does.
The functions MPI_PACK and MPI_UNPACK can be used for this purpose.

===============================================================================

Section 4.  Frequently Encountered Programming Errors

 Q4.1        Lines longer than 72 columns in FORTRAN
 Q4.2        Use language appropriate datatypes
 Q4.3        Missing error argument 
 Q4.4        Wrong arguments to MPI_INIT in C

-------------------------------------------------------------------------------

Question 4.1.  Lines longer than 72 columns in FORTRAN

Most FORTRAN compilers do not allow lines longer than 72 characters;
anything beyond column 72 is typically ignored without warning.  MPI
calls with many arguments can easily exceed this limit, so use
continuation lines to split them.

-------------------------------------------------------------------------------

Question 4.2.  Use language appropriate datatypes

MPI defines some datatypes that are language-specific.  For example,
MPI_INTEGER and MPI_INT are two different types in MPI.  MPI_INTEGER
should not be used in C programs, and MPI_INT should not be used in
FORTRAN programs.  MPI_CHARACTER and MPI_CHAR are similarly distinct:
MPI_CHARACTER should only be used in FORTRAN, and MPI_CHAR should only
be used in C.

-------------------------------------------------------------------------------

Question 4.3.  Missing error argument

Don't forget that most of the FORTRAN MPI functions require an error
parameter as the last argument.

-------------------------------------------------------------------------------

Question 4.4.  Wrong arguments to MPI_INIT in C

In the November 2, 1993 draft, the arguments to MPI_INIT (C binding only)
were "MPI_Init(int *argc, char **argv)."  In the final version of the
document, an extra level of indirection is added to the "argv" argument;
i.e., "MPI_Init(int *argc, char ***argv)".

===============================================================================

Section 5.  How to get further assistance

 Q5.1        You still haven't answered my question !
 Q5.2        What to put in a posting about MPI

-------------------------------------------------------------------------------

Question 5.1.  You still haven't answered my question !

Try posting your MPI related questions to the comp.parallel.mpi newsgroup.

-------------------------------------------------------------------------------

Question 5.2.  What to put in a posting about MPI

Questions will probably deal with a certain MPI implementation, MPI
document clarifications, `how-to' type questions, etc.  Use a clear,
detailed Subject line.  Don't put things like `MPI', `doesn't work',
`help' or `question' in it --- we already know that !  Save the space for
the subject the question relates to, a fragment of the error message, a
summary of the unusual program behaviour, etc.

Put a summary paragraph at the top of your posting.

Remember that you should not post email sent to you personally without the
sender's permission.

For problems with a specific implementation, give full details of the
problem, including

* Enough information about the implementation you are using, including
  the version number (if it has one) and where you got it.

* The exact and complete text of any error messages printed.

* Exactly what behaviour you were expecting, and exactly what behaviour
  you observed.  A transcript of an example session is a good way of
  showing this.

* Details of what hardware you're running on, if it seems appropriate.

You are in little danger of making your posting too long unless you
include large chunks of source code or uuencoded files, so err on the side
of giving too much information.

Source:  Modified from the Linux FAQ

===============================================================================

Section 6.  Administrative information and acknowledgements

 Q6.1        Feedback is invited
 Q6.2        Formats in which this FAQ is available
 Q6.3        Where can I obtain a copy of this FAQ
 Q6.4        Authorship and acknowledgements
 Q6.5        Disclaimer and Copyright

-------------------------------------------------------------------------------

Question 6.1.  Feedback is invited

Please send me your comments on this FAQ.

I accept submissions for the FAQ in any format; all contributions,
comments, and corrections are gratefully received.

Please send them to doss@ERC.MsState.Edu (Nathan Doss).

-------------------------------------------------------------------------------

Question 6.2.  Formats in which this FAQ is available

This document is available as ASCII text, an Emacs Info document and
PostScript.  It is also available on the world wide web (WWW) at
http://www.cs.msstate.edu/dist_computing/mpi-faq.html .

The ASCII, Emacs Info, and HTML versions are generated automatically by a
Perl script which takes as input a file in the Bizarre Format with No
Name.  Mosaic is used to create the PostScript version from the HTML
version.

The output files mpi-faq.ascii, .info, .html, and .ps and a tarfile
mpi-faq.source.tar.gz, containing the BFNN source and Perl script
converter, are available in the pub/mpi/faq directory on
aurora.cs.msstate.edu.

-------------------------------------------------------------------------------

Question 6.3.  Where can I obtain a copy of this FAQ

In addition to finding it in those places listed in Q6.2 `Formats in
which this FAQ is available', the ASCII version is posted monthly to
comp.parallel.mpi, news.answers, and comp.answers.

The ASCII version can also be obtained through anonymous ftp from
rtfm.mit.edu in pub/usenet/news.answers/mpi-faq.  Those without FTP
access can send e-mail to mail-server@rtfm.mit.edu with "send
usenet/news.answers/mpi-faq" in the message body.

-------------------------------------------------------------------------------

Question 6.4.  Authorship and acknowledgements

This FAQ was compiled by Nathan Doss (doss@ERC.MsState.Edu), with
assistance and comments from others.

Thanks to the MPI Forum and those who gave feedback about the MPI document
for giving us something to write about !

The format of this FAQ, the wording of the Disclaimer and Copyright, and
the original Perl conversion scripts were borrowed (with permission) from
Ian Jackson (ijackson@nyx.cs.du.edu), who maintains the "Linux Frequently
Asked Questions with Answers" document.

-------------------------------------------------------------------------------

Question 6.5.  Disclaimer and Copyright

Note that this document is provided as is.  The information in it is *not*
warranted to be correct; you use it at your own risk.

MPI Frequently Asked Questions is Copyright 1994 by Mississippi State
University.  It may be reproduced and distributed in whole or in part,
subject to the following conditions:

* This copyright and permission notice must be retained on all complete or
  partial copies.

* Any translation or derivative work must be approved by me before
  distribution.

* If you distribute MPI Frequently Asked Questions in part, instructions
  for obtaining the complete version of this manual must be included, and
  a means for obtaining a complete version free or at cost price provided.

Exceptions to these rules may be granted, and I shall be happy to answer
any questions about this copyright --- write to Nathan Doss, P.O. Box
6176, Engineering Research Center, Mississippi State, MS 39762 or email
doss@ERC.MsState.Edu.  These restrictions are here to protect the
contributors, not to restrict you as educators and learners.

===============================================================================

