Newsgroups: comp.sys.transputer
From: eugene@wilbur.nas.nasa.gov (Eugene N. Miya)
Subject: Re: Conundrum: Where is the parallelism lost?
Organization: NAS - NASA Ames Research Center, Moffett Field, CA
Date: Fri, 23 Sep 1994 20:49:52 GMT
Message-ID: <CwLoJ5.6ow@cnn.nas.nasa.gov>

Jerzy Michal Pawlak writes:
a very nice article asking "what's the problem?"
> So - well, I'd love to get a student who knows nothing about programming
> (very difficult these days, when kids have 'computers' at home), teach him
> OCCAM as the first programming language and see what comes out...

This latter is an old, amusing notion.  In the past it was LISP or APL,
depending on whose computers one had, in a world where business was done in
COBOL and science was done in FORTRAN.  And you can see many fine COBOL
ex-LISP programmers out there.  (yeah, right, I have land in Florida...)

The gist of the problem is whether programming parallel machines is
any easier or harder than "other programming."  One side, like Anita Jones,
believes that the answer is no.  It is no harder or easier.  The other side
asserts that parallel programming is akin to "systems" programming and
harder than sequential programming.  I think the balance tilts to the latter.
[McGraw]

The problem is that "parallel programming" 'isn't.'  Parallelism is a nice
geometric way of thinking about hardware, but the reality is that parallel
lines or planes never meet.  "PP" has lots of meetings called synchronizations,
and there are lots of codes and issues having to do with side effects,
especially in old codes.  Keeping the asynchrony and non-determinism
straight is merely a part of the problem.  Anyone with experience
in a beginning operating systems class on a sequential machine will appreciate
these problems.  People confuse parallelism with synchronicity.  Weird
race conditions happen on real-world machines which can't be simulated
in software.  People tend to like consistent results.
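For anyone who hasn't hit one yet: the classic lost-update race, spelled
out deterministically in Python.  This is a sketch of the interleaving
itself, not a real threaded program (a real one would only lose the update
*sometimes*, which is exactly why people like consistent results):

```python
# A deterministic illustration of the lost-update race.
# Two workers each intend to add 1 to a shared counter, but the
# interleaving read/read/write/write loses one of the updates.
def lost_update():
    counter = 0
    a = counter        # worker A reads 0
    b = counter        # worker B reads 0 (before A writes back)
    counter = a + 1    # worker A writes 1
    counter = b + 1    # worker B also writes 1, clobbering A's update
    return counter

print(lost_update())   # 1, not the intended 2
```

On real hardware the scheduler picks the interleaving for you, run to run,
which is why these bugs are so hard to reproduce in software simulation.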

Parallelism doesn't yet have enough of a performance advantage over certain
problems to justify sinking a lot of money into it.  Not yet.

The best analogy I can think of to parallel processing is fusion.  Not cold
fusion, but serious fusion research where they talk of "break even."
We call it (inappropriately) "linear speedup."  It should be "unitary
speedup."  We have yet to reach consistent computational break even.
And this is a research hang-up.
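To make the break-even analogy concrete, here are the usual textbook
definitions (my phrasing, not the post's): speedup S(p) = T1/Tp and
efficiency E(p) = S(p)/p.  "Break even" is S(p) > 1; "linear" (unitary)
speedup on p processors is S(p) = p:

```python
# Speedup and parallel efficiency, textbook definitions.
def speedup(t_serial, t_parallel):
    """S(p) = T1 / Tp: how much faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E(p) = S(p) / p: fraction of the p processors actually 'earned'."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings: 100 s serial, 25 s on 8 processors.
print(speedup(100.0, 25.0))        # 4.0 -- past break even, but
print(efficiency(100.0, 25.0, 8))  # 0.5 -- only half of linear speedup
```

Plenty of codes clear break even and still sit well short of unitary
speedup, which is the research hang-up in a nutshell.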

Meanwhile the end-user community is waiting for drop-in parallelism,
sort of like "automatic programming" of the 50s and 60s.  And I doubt it's
going to happen soon.

--eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  Resident Cynic, Rock of Ages Home for Retired Hackers
  {uunet,mailrus,other gateways}!ames!eugene
My 3rd favorite use of a flame thrower is "Fahrenheit 451."
A Ref: Uncommon Sense, Alan Cromer, Oxford Univ. Press, 1993.

