Newsgroups: comp.parallel.pvm,comp.parallel.mpi
From: rdaoud@magnus.acs.ohio-state.edu (Raja B Daoud)
Subject: MPI layering (Was: ANNOUCE: Para++ release ...)
Organization: Ohio Supercomputer Center
Date: 7 Jul 1995 04:20:56 GMT
Message-ID: <3ticn8$2jk@charm.magnus.acs.ohio-state.edu>

ZHGFJ  <Serpent_Mage@MSN.COM> wrote:
>Of course every layer that is added on causes extra time.

Not if the "layer" is made out of macros (as some are).  But yes, a
few function calls and a few if()s do take a bit of time.  I think
that's a small cost to pay for a maintainable/readable/portable/
"fill in your favourite software-engineering buzzword", and hopefully
easy-to-use, system.  After all, programmers are still supposed to do
their best to reduce communication by not calling these message-passing
functions too often! :-)

>good idea because it does standardise the function calls.  I ended up 
>writing my own MPI implementation because it turned out to be faster.

I'd be interested to learn more about this implementation and how
it achieves faster performance on a network of transputers; my
interest comes from having co-developed Trollius, a transputer OS.
Is it a full MPI implementation?

Regards,

--Raja

-=-
Raja Daoud				raja@tbag.osc.edu
Ohio Supercomputer Center		http://www.osc.edu/lam.html

