Newsgroups: comp.parallel.mpi
From: joern@samba.uni-paderborn.de (Joern Gehring)
Subject: Heterogeneous MPI
Organization: Universitaet Paderborn, Germany
Date: 13 Mar 1995 14:49:00 +0100

The Paderborn Center for Parallel Computing (PC^2) is a supercomputing center
in Germany whose aim is to make supercomputing available to the general user.
Together with several other centers in Europe, we are investigating the
potential of heterogeneous metacomputing, i.e. a single program running on
several different supercomputers simultaneously, using each machine's
particular strengths for subproblems with matching demands.
In this way we hope to make an enormous amount of computing power available
very cost-effectively - at least for large problems with an inherently
heterogeneous structure. And most problems out there in the _real_ world
are of this kind.

Of course, there has to be a uniform programming environment for every
system that is part of the metacomputer. Hoping that it will become
the common standard, we chose MPI as the basic layer.
At the moment, all participating computers are running MPICH.
What we need is an intercommunicator between the different machines.
The "MPICH ADI Implementation Manual" says that an implementation
able to handle different ADIs was planned for the end of 1994,
so we are very interested in any information about this work.
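For readers unfamiliar with the construct: here is a minimal sketch of how
an intercommunicator is built with the MPI-1 calls available today, between
two process groups *inside one job*. Note that MPI_Intercomm_create requires
a peer communicator spanning both groups - which is exactly what is missing
between two separate MPICH installations, and what we would need the layer
described above to provide. (The split into two halves of MPI_COMM_WORLD is
just a stand-in for the two machines of a metacomputer.)

```c
/* Sketch: intercommunicator between two groups via MPI-1.
 * Compile with mpicc, run with at least 2 processes. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int world_rank, color, remote_leader;
    MPI_Comm local_comm, inter_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into two halves, standing in for
     * the two machines of the metacomputer. */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

    /* Each half's local leader is rank 0 of local_comm; the remote
     * leader is named by its rank in the shared peer communicator
     * (here MPI_COMM_WORLD: world rank 0 or 1). */
    remote_leader = (color == 0) ? 1 : 0;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD,
                         remote_leader, 99 /* tag */, &inter_comm);

    /* inter_comm can now carry point-to-point messages
     * between the two groups. */
    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}
```

Between machines there is no communicator that spans both MPI jobs, so this
call cannot be made - hence our question about cross-ADI support.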

Another problem will be that in the near future the MPPs will have
vendor-specific MPI implementations. So how can we hope to connect those?
The best solution would be for the standard to cover at least part of
the intercommunicator structure, so that communicators from external
sources could be attached to any MPI implementation.
Are there any plans to extend the standard in this way? Or will MPI
always be restricted to applications that run on one architecture
(or a couple of similar ones)?

Any hints or suggestions are welcome!

	Joern Gehring

