Newsgroups: comp.parallel.mpi,comp.parallel.pvm
From: dave@szechuan.ucsd.edu (David Hessler)
Subject: Need a message passing system for distributed application
Organization: UCSD Microscopy and Imaging Resource
Date: 18 Nov 1994 00:16:23 GMT
Message-ID: <3agron$i8s@network.ucsd.edu>


Hello!

I am looking for a message passing system for a distributed application.
Either I've been looking in the wrong places so far or parallel processing is the
only field that uses message passing.

I have looked at the documents as closely as I could, but have not found answers
to my questions.  If what I am looking for is indeed documented, I apologise
for failing to adequately 'RTFM'.

Any help and pointers to aid me in my quest would be much appreciated!

Here's what our application needs:

1) Just message passing, no central control.  Is it possible with PVM or MPI
(or any other message passer) to take advantage of the message passing
facility without being bound to the process control of the 'virtual machine'?
Our application will have users starting client applications that will
connect to already running processes on other hosts.  Ideally, we would
not have to ask a 'central command' to start the process for us on the
client machine.

2) Can we get unscheduled input from other sources than the message passer?
Our client programs will use the X window system, and as such will need to
check for X events.  Some of our server programs will be connected to 
instruments through a serial line, and will get input at irregular
intervals.  Can we do this and check for incoming messages without
going into a busy wait loop of checking each in turn?  Ideally, we would
be able to tell the message passer to select on additional file descriptors
in the way X does.  Alternatively, access to the file descriptors used
by the message passer would allow us to do our own selection.

3) May new connections to other processes on other hosts be initiated and 
dropped at any time?  Client processes will need different resources at 
different times, and these needs will not be predictable at process startup time.

4) We will be passing datasets around with sizes on the order of up to 100
megabytes.  These datasets may be subdivided, but the smallest practical
divisions will still be anywhere from 2 to 8 megabytes in size.  Can we
send messages that large?  Can we do so without removing control from the 
calling process for a significant amount of time?  I have seen phrases in
the documentation like 'the asynchronous send function will return 
before the message is received, but only after the message is safely on its way'.
Does this mean that the caller must wait until the entire buffer is 
copied?  Is there any danger of deadlock in two processes both sending
very large messages to one another?

5) MPI has a facility for defining derived types.  Is this facility recursive?
I.e., can I define a VECTOR as three FLOATs, then define a RAY as two VECTORs?

Thanks in advance!

David Hessler
-- 
-----------------------------------------------------
David Hessler
San Diego Microscopy and Imaging Resource
University of California, San Diego

