Newsgroups: comp.parallel.pvm
From: tmf@ipp-garching.mpg.de (Tom MacFarland)
Subject: Re: PVM buffering
Organization: Rechenzentrum der Max-Planck-Gesellschaft in Garching
Date: 29 Jun 1995 18:45:14 +0200
Message-ID: <3sulaqINNv0@uts.ipp-garching.mpg.de>

George,
	I had similar problems porting codes to the T3D, and they were partly
corrected by, as another replier suggested, changing the default buffer
size: set PVM_MAX_PACK to something large, depending on the problem size.
Depending on the size of the messages your code tries to send, this may
only get you so far.  The moral of the story seems to be not to expect PVM
to send arbitrarily large messages, or messages whose sizes scale with
system size, but to build in some provision for breaking large messages up.
It might also be worth your while to switch from PVM to Cray's shared
memory routines (this isn't as big a job as it might at first seem), though
maybe this isn't the appropriate list for such a suggestion ;-)  Anyway,
good luck.
						Tom MacFarland

In article <3srh5i$h0t@george.rutgers.edu>, katsaros@george.rutgers.edu
(Georgios Katsaros) writes:
>
>Hi,
> I'm using pvm to parallelize a ship hydrodynamics program on Cray T3D. 
> The program runs correctly for a number of iterations, and then hangs, 
> with the message "Operand out of range". A similar problem occurs when 
> using workstation clusters instead of the Cray.
>
> The number of iterations I get depends on the size of messages passed 
> using pvm calls - I only use a simple master-slave model. I suspect that
> there must be a problem with accumulating messages. Is there any way to
> control the message buffer sizes? Or perhaps to flush buffer space?
>
> If anyone has had experience with a similar problem, I'd appreciate any
> comments.
>
>  thanks a lot,
> 
>  George Katsaros
>  Rutgers University.

