Newsgroups: comp.parallel.pvm
From: Richard Barrett <rbarrett@lanl.gov>
Subject: Re: Cray T3D & PVM
Organization: Los Alamos National Laboratory
Date: Tue, 23 Jul 1996 16:07:59 -0600
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="------------7C347787608"
Message-ID: <31F54D3F.2948@lanl.gov>

This is a multi-part message in MIME format.

--------------7C347787608
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Raymond Scott Fellers wrote:
> 
> Hi,
> 
> I'm just starting a project on a Cray T3D using PVM and am
> looking for specific information on how to use PVM on this machine.  I
> have limited experience using PVM with Linux PCs and RS6000s and have
> been told that PVM software written for these is not portable to this
> Cray.  Any information or pointers on how to make the transition between
> platforms is appreciated.
> 
> Raymond Fellers
> rsf@uclink3.berkeley.edu
> 
> --

--------------7C347787608
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="PVM_T3D_nonportability"

Raymond,

Cray T3D PVM programs will run using other PVM libraries with some exceptions
(I'll also include performance cautions here):

1. There is no spawning on the T3D. You simply "tell it" how many processes
   you want, and they are started up.

   So to make your code portable, simply put #ifndef _CRAYMPP around
   your startup code that does the spawning. 
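
   In case it helps, here's a rough sketch of that guard. "worker",
   NTASKS, and the tids[] layout are my own conventions, not anything
   PVM requires:

```c
/* Sketch of guarding the spawn.  "worker", NTASKS, and the tids[]
 * layout are placeholders for your own conventions. */
#include <stdio.h>
#include "pvm3.h"

#define NTASKS 4

int tids[NTASKS];

int startup(void)
{
    int mytid = pvm_mytid();              /* enroll in PVM */

#ifndef _CRAYMPP
    /* Network PVM: the parent spawns the other NTASKS-1 tasks. */
    tids[0] = mytid;
    if (pvm_spawn("worker", (char **)0, PvmTaskDefault,
                  (char *)0, NTASKS - 1, &tids[1]) < NTASKS - 1) {
        fprintf(stderr, "spawn failed\n");
        return -1;
    }
#else
    /* T3D: all NTASKS processes were started for us; nothing to
     * spawn.  This task's pe number is pvm_get_PE(mytid). */
#endif
    return mytid;
}
```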

2. One process per processor, numbered 0 to NPROCS-1. Process 0 is the parent, 
   and runs on processor 0.

3. Because processes/processors are numbered 0 to NPROCS-1, you can avoid
   the use of tids by using pe numbers. (Returned by pvm_get_PE( mytaskid ).)
   However, this isn't portable, so use tids as usual. Still, the pe numbers
   are convenient for debugging, so we keep them in a global variable.
   In fact, on all platforms we assign pe numbers to pvm tasks according
   to their position in the tids array (i.e. order of spawning), so
   pvm_get_PE is not necessary anyway. (We send using "tids[target_pe]".)

4. Be careful with the use of PvmDataInPlace. It is a documented T3D feature
   that pvm_send/pvm_psend in this mode may return before the data is safely
   on its way to the target pe, so it's possible to overwrite the data you
   want to send. And Cray doesn't provide a polling function to find out
   when the data is safe. We have to use PvmDataRaw, incurring a data
   copy/performance penalty.
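
   The safe pattern looks like this (dest_tid, TAG, and N are
   placeholders, not anything PVM defines):

```c
/* PvmDataRaw copies the data into the send buffer, so work[] may be
 * reused as soon as pvm_send() returns -- on any platform. */
#include "pvm3.h"

#define N   1024        /* placeholder message length */
#define TAG 42          /* placeholder message tag    */

void send_work(int dest_tid, double *work)
{
    pvm_initsend(PvmDataRaw);     /* NOT PvmDataInPlace */
    pvm_pkdouble(work, N, 1);     /* data copied here...           */
    pvm_send(dest_tid, TAG);
    /* ...so overwriting work[] now is safe.  The copy is the
     * performance penalty mentioned above. */
}
```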

5. No process groups are possible. Reductions, broadcasts, etc. involve 
   all processes. This global group is called NULL. Again, #ifdef portable.
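
   One way to hide the difference behind a single macro (GROUP and
   "world" are my own names; only the NULL-on-T3D part comes from
   Cray's documentation):

```c
/* The T3D's single global group is named by a NULL pointer; on other
 * platforms we join a named group first. */
#include "pvm3.h"

#ifdef _CRAYMPP
#define GROUP ((char *)0)         /* the T3D global group */
#else
#define GROUP "world"             /* call pvm_joingroup(GROUP) at startup */
#endif

void sync_all(int nprocs)
{
    pvm_barrier(GROUP, nprocs);   /* same call everywhere */
}
```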

6. pvm_channels for fast I/O are a T3D-only feature, not available
   anywhere else.

7. pvm_pack functions are a real dog on this machine, or at least more
   noticeable than we could tolerate given that we make repeated calls
   to pvm_pack. By doing our own data management (i.e. packing our own array
   and making one call to pvm_pack), we got a 25-times speedup, even
   though our code is not (at least with this method) communication
   intensive.

If any of this is incorrect, I hope someone will correct me.

You know, if you are starting from scratch, you might consider using
MPI...I believe all of the above problems disappear, and you might find 
programming simpler and performance better (especially in light of the
PvmDataInPlace problem).

Richard

--------------7C347787608--


