Newsgroups: comp.parallel.mpi
From: Ben Keeping <umeca60>
Subject: Re: Problems reading a file from each MPI process
Organization: Imperial College, London, UK
Date: 27 Sep 1995 14:46:03 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <44bo3b$mkk@oban.cc.ic.ac.uk>

First of all, is this really the best way to solve the problem? For example,
for every line in the master program where you write to this file, you could
use an MPI_Pack call instead, followed by a single MPI_Bcast at the end (with
a corresponding MPI_Bcast and an MPI_Unpack for each 'read' in the slave
processes).
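A minimal sketch of that Pack/Bcast approach (the buffer size and the packed
values are made up for illustration -- pack whatever your program actually
writes, and unpack in the same order):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char buf[1024];   /* must be big enough for everything packed */
    int pos = 0;      /* running position within the buffer       */
    int rank, n;
    double x;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* master: pack values instead of fprintf'ing them to a file */
        n = 42; x = 3.14;   /* illustrative values only */
        MPI_Pack(&n, 1, MPI_INT,    buf, sizeof(buf), &pos, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
    }

    /* one broadcast replaces every slave's fopen/fscanf of the file */
    MPI_Bcast(buf, sizeof(buf), MPI_PACKED, 0, MPI_COMM_WORLD);

    if (rank != 0) {
        /* slave: unpack in exactly the order the master packed */
        pos = 0;
        MPI_Unpack(buf, sizeof(buf), &pos, &n, 1, MPI_INT,    MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof(buf), &pos, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        printf("rank %d got %d and %g\n", rank, n, x);
    }

    MPI_Finalize();
    return 0;
}
```

(Compile with mpicc and launch under mpirun as usual.)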

If you must do it this way, it seems likely that your workers are trying to
open the file before the file system knows it has been created. You could use
MPI_Barrier to ensure they don't try before it HAS been created! And make sure
the master closes the file before its call to MPI_Barrier. If there
are still problems, you could try something like this in the slaves:

#include <stdio.h>    /* fopen, fprintf */
#include <stdlib.h>   /* exit */
#include <unistd.h>   /* sleep */

int tries;
FILE *fp;
tries = 0; fp = NULL;
while (tries < 5 && fp == NULL) {
   /* note the parentheses -- a bare !fp=fopen(...) won't compile */
   if ((fp = fopen("thefile", "r")) == NULL) sleep(1);
   tries++;
}
if (fp == NULL) {
   fprintf(stderr, "Still not there after 5 seconds!\n");
   exit(1);
}
... read from the file ...
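The barrier ordering described above might look like this (a sketch --
"thefile" and its contents are placeholders, and the timing assumes a
shared file system that all the processes can see):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        fp = fopen("thefile", "w");
        fprintf(fp, "hello\n");
        fclose(fp);                  /* close BEFORE the barrier */
    }

    MPI_Barrier(MPI_COMM_WORLD);     /* no one passes until all arrive */

    if (rank != 0) {
        fp = fopen("thefile", "r");  /* should exist by now */
        if (fp) { /* ... read from the file ... */ fclose(fp); }
    }

    MPI_Finalize();
    return 0;
}
```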

If you have a lot of slaves, you may be hitting some sort of system
limit on the number of processes that can read the same file (I'm not 
a Unix guru!). If so, the Bcast approach is the best answer.

Note - you'd have to remove the file before running the program to be quite
sure the above would work correctly...

PS Sorry if this has been sorted out long ago, my newsreader doesn't show
any replies to your posting.

Ben Keeping
Imperial College
London UK


