Newsgroups: comp.parallel.mpi
From: Steve Barnard <steve@megafauna.com>
Reply-To: steve@megafauna.com
Subject: MPI question
Organization: megafauna
Date: Fri, 31 May 1996 14:17:43 -0800
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <31AF7007.5541@megafauna.com>

We have an MPI application that has a puzzling bug.  Maybe someone can 
help.

It's an iterative sparse solver package for unstructured linear systems.  
It runs correctly for small problems on "many" processors (i.e., 8 -- 
not that many, really), and for large problems on a few processors, but 
it bombs for large problems on many processors.

The odd thing is that it bombs when run in "development" mode, but not 
in "testing" mode, which is a lot slower.  I suspect that it's flooding 
some internal MPI buffers when running fast, but not when running slow.

We're considering keeping track of, and limiting, the number of 
outstanding messages.  These are all non-blocking sends and receives.
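
For what it's worth, here is roughly the kind of throttling we have in 
mind -- just a sketch, not working code.  The window size and all the 
names are made up; the real solver would need to tune the limit and do 
the same thing on the receive side:

```c
#include <mpi.h>

#define MAX_OUTSTANDING 32  /* made-up window size; would need tuning */

/* Post non-blocking sends, but never let more than MAX_OUTSTANDING be
   in flight at once.  When the window fills, block until at least one
   completes, then compact the request array and continue. */
void throttled_sends(void *bufs[], int counts[], int dests[], int nmsgs,
                     MPI_Comm comm)
{
    MPI_Request reqs[MAX_OUTSTANDING];
    MPI_Status  stats[MAX_OUTSTANDING];
    int indices[MAX_OUTSTANDING];
    int active = 0, ndone, i, j, k;

    for (i = 0; i < nmsgs; i++) {
        if (active == MAX_OUTSTANDING) {
            /* Wait for at least one outstanding send to finish. */
            MPI_Waitsome(active, reqs, &ndone, indices, stats);
            /* Waitsome sets completed entries to MPI_REQUEST_NULL;
               squeeze them out so the window has room again. */
            for (j = 0, k = 0; k < active; k++)
                if (reqs[k] != MPI_REQUEST_NULL)
                    reqs[j++] = reqs[k];
            active = j;
        }
        MPI_Isend(bufs[i], counts[i], MPI_DOUBLE, dests[i], 0, comm,
                  &reqs[active++]);
    }
    MPI_Waitall(active, reqs, stats);  /* drain the remainder */
}
```

The idea is that a hard cap on in-flight requests should keep us from 
overrunning whatever internal buffering the MPI implementation 
provides, regardless of how fast the compute side runs.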

Two questions:

(1) Is this a reasonable hypothesis?

(2) If so, is there some way to configure MPI so that it has more buffer 
resources?

Please reply by email as well as to the newsgroup.

	Thanks,
	Steve Barnard

P.S. We're not accessing the send buffers before the send completes.
