Newsgroups: comp.parallel.mpi
From: trobey@arc.unm.edu (Thomas H. Robey)
Subject: Re: MPI question
Organization: Spectra Research Institute
Date: Sat, 01 Jun 96 04:30:39 GMT
Message-ID: <4ooh1f$89g_001@arc.unm.edu>

In article <31AF7007.5541@megafauna.com>,
   Steve Barnard <steve@megafauna.com> wrote:
-We have an MPI application that has a puzzling bug.  Maybe someone can 
-help.
-
-It's an iterative sparse solver package for unstructured linear systems.  
-It runs correctly for small problems on "many" processors (i.e., 8 -- 
-not that many, really), and for large problems on a few processors, but 
-it bombs for large problems on many processors.
-
-The odd thing is that it bombs when run in "development" mode, but not 
-in "testing" mode, which is a lot slower.  I suspect that it's flooding 
-some internal MPI buffers when running fast, but not when running slow.
-
-We're considering trying to keep track of and to limit the number of 
-outstanding messages.  These are all non-blocking sends and receives.
-
-Two questions:
-
-(1) Is this a reasonable hypothesis?
-
-(2) If so, is there some way to configure MPI so that it has more buffer 
-resources?
-
-Please reply by email as well as to the newsgroup.
-
-	Thanks,
-	Steve Barnard
-
-P.S. We're not accessing the send buffers before the send completes.

The function MPI_Buffer_attach attaches a user-supplied buffer of a specified
size, and the symptoms you describe could be caused by a buffer that is too
small.  One caveat: the attached buffer is only used by buffered-mode sends
(MPI_Bsend/MPI_Ibsend).  Standard nonblocking sends (MPI_Isend) draw on the
implementation's internal buffering, which is limited and can be exhausted by
a flood of outstanding messages -- consistent with your fast-mode failures.
Either switch the sends to buffered mode with a generously sized attached
buffer, or throttle the number of outstanding sends as you propose.
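A minimal sketch of attaching a buffer for buffered-mode sends (the message
size and outstanding-message count here are illustrative, not tuned for your
solver):

```c
/* Sketch: reserve explicit buffer space for buffered-mode sends.
 * Note that MPI_Buffer_attach only affects MPI_Bsend/MPI_Ibsend;
 * standard MPI_Isend still uses the implementation's internal buffers.
 * Requires an MPI library and launcher (e.g. mpicc / mpirun). */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Illustrative sizing: room for up to 100 outstanding messages of
     * 1000 doubles each, plus the per-message MPI_BSEND_OVERHEAD the
     * standard requires you to budget for. */
    int msg_doubles     = 1000;
    int max_outstanding = 100;
    int bufsize = max_outstanding *
        (msg_doubles * (int)sizeof(double) + MPI_BSEND_OVERHEAD);

    void *buf = malloc((size_t)bufsize);
    MPI_Buffer_attach(buf, bufsize);

    /* ... issue MPI_Bsend / MPI_Ibsend traffic here ... */

    /* Detach blocks until all buffered messages have been delivered,
     * so it is safe to free the buffer afterward. */
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);

    MPI_Finalize();
    return 0;
}
```

If the crashes disappear with a large attached buffer, that confirms the
internal-buffer hypothesis; you can then size the buffer from your actual
message counts rather than the guesses above.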
