Newsgroups: comp.parallel.mpi
From: peter@usfca.edu (Peter Pacheco)
Subject: Re: Problems reading a file from each MPI process
Organization: University of San Francisco
Date: 20 Sep 1995 06:11:46 GMT
Message-ID: <43obb2$6le@noc.usfca.edu>

John Lindal (jafl@cco.caltech.edu) wrote:
: I'm running MPI on a network of Sparc's.  Each process needs a lot of
: initial data before it can start cranking (after which, they pretty much
: crank independently).  It would seem that the easiest way to do this is
: to have the main process write a file containing all the data, and then
: let each process read this file.  The file is written in the same
: directory as the executable, so each Sparc has access to it.

: The problem is, it doesn't work.  The worker processes complain that they
: can't find the file.  (But when I use system("pwd"), each process prints the
: correct directory!)  The problem is also intermittent.

: Is there anything in the implementation of MPI that prevents each process
: from reading the same file?  (They only open the file for reading.)

: How do others handle sending large amounts of data to each worker process?

: Thanks for any help.  John Lindal

Do the hosts running the worker processes NFS mount the directory from the
host on which the main process is run?  If so, NFS client-side caching may
be delaying when the remote hosts see the newly written file -- that would
also explain why the problem is intermittent.
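An alternative that sidesteps the filesystem entirely is to have only the
main process read the data and then distribute it with MPI_Bcast.  A
minimal sketch -- the filename, element count, and data type here are just
placeholders for whatever your application actually needs:

```c
/* Sketch: rank 0 reads the initial data from a file, then broadcasts
 * it to every process, so the workers never touch the file and NFS
 * visibility is no longer an issue.  NDATA and the filename are
 * illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NDATA 1000  /* illustrative problem size */

int main(int argc, char *argv[]) {
    int rank;
    double data[NDATA];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Only rank 0 opens the file, so only its host needs access. */
        FILE *fp = fopen("initial_data.dat", "rb");
        if (fp == NULL) {
            fprintf(stderr, "can't open initial_data.dat\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        if (fread(data, sizeof(double), NDATA, fp) != NDATA) {
            fprintf(stderr, "short read on initial_data.dat\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        fclose(fp);
    }

    /* Collective: every rank, including the root, makes the same call. */
    MPI_Bcast(data, NDATA, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* ... each process now cranks away independently on data ... */

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and launch with mpirun as usual; on a network of
Sparcs the broadcast will also typically beat N processes all hitting
the same NFS server for a large file.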

Let us know what you find out.

Peter Pacheco
