Newsgroups: comp.parallel.mpi
From: jafl@cco.caltech.edu (John Lindal)
Subject: Re: Problems reading a file from each MPI process
Organization: California Institute of Technology, Pasadena
Date: 20 Sep 1995 22:42:09 GMT
Message-ID: <43q5c1$hld@gap.cco.caltech.edu>

>: I'm running MPI on a network of Sparcs.  Each process needs a lot of
>: initial data before it can start cranking (after which, they pretty much
>: crank independently).  It would seem that the easiest way to do this is
>: to have the main process write a file containing all the data, and then
>: let each process read this file.  The file is written in the same
>: directory as the executable, so each Sparc has access to it.

>: The problem is, it doesn't work.  The worker processes complain that they
>: can't find the file.  (But when I use system("pwd"), each process prints the
>: correct directory!)  The problem is also intermittent.

>: Is there anything in the implementation of MPI that prevents each process
>: from reading the same file?  (They only open the file for reading.)

>: How do others handle sending large amounts of data to each worker process?

>: Thanks for any help.  John Lindal

>Do the hosts running the worker processes NFS mount files from the host
>on which the main process is run?  If so, could NFS be taking too long 
>to update what the remote hosts see?


This seems to have been the case.  If I call system("ls") before
trying to open the file, the open always succeeds -- presumably the
ls forces the NFS client to refresh its cached view of the directory.
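For anyone hitting the same thing: the workaround can be done without
shelling out to ls.  Here's a rough sketch (not from the original code;
the retry count and sleep interval are arbitrary choices) that scans the
directory with opendir()/readdir() between open attempts, which has the
same cache-refreshing effect on the NFS client:

```c
/* Sketch of the workaround: before each open attempt, scan the
 * current directory so the NFS client fetches fresh directory data.
 * max_tries and the 1-second sleep are illustrative values. */
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

FILE *open_with_nfs_retry(const char *path, int max_tries)
{
    int i;
    for (i = 0; i < max_tries; i++) {
        FILE *fp = fopen(path, "r");
        if (fp != NULL)
            return fp;

        /* Same effect as system("ls"): reading the directory forces
         * the NFS client to refetch its entries from the server. */
        DIR *dir = opendir(".");
        if (dir != NULL) {
            while (readdir(dir) != NULL)
                ;            /* just walk the entries */
            closedir(dir);
        }
        sleep(1);            /* give the server a moment to catch up */
    }
    return NULL;             /* still not visible after max_tries */
}
```

Each worker would call this instead of a bare fopen() on the data file.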

Thanks to all who responded.

John Lindal

