Newsgroups: comp.parallel
From: eugene@pioneer.arc.nasa.gov (Eugene N. Miya)
Subject: Re: Massively Parallel "Pizza Box"
Organization: NASA Ames Res. Ctr. Mtn Vw CA 94035
Date: 29 Sep 1995 17:31:53 GMT
Message-ID: <44hai9$62l@usenet.srv.cis.pitt.edu>

In article <43k05g$27t@usenet.srv.cis.pitt.edu> alberto@moreira.MV.COM
(Alberto C Moreira) writes:
>             Software can be written, and often is. If there's one thing 
>             we computer professionals shouldn't be scared of, that's
>             lack of software. It's not a problem, but rather a 
>             great opportunity. 

Great, I'm happy for you.
But it's vaporware until it's written.
And have you never wondered why it gets written, or doesn't?
Note: I distinguish application code from necessary system code,
such as routing.  We need both.  Are you also proposing automatic
parallelization software?

>            Dimensions have shrunk this way for a couple of decades
>            now. I'm not talking architecture, but physics.  

Paraphrasing Ivan Sutherland:
	a CPU can only really work on one word of memory at a time.
	Attaching a memory of N words means the other N-1 words sit
	idle at any given instant, and N keeps getting larger.  There
	must be a better way.
I leave out long-standing ideas like PIM (processor-in-memory)
because these are merely different scalings of the same problem.
Hillis was one of Ivan's students.
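
If you want to see how quickly that utilization vanishes, here is a
trivial back-of-the-envelope program (toy figures of my own, nothing
more; one word touched per access, N words installed):

/* Sutherland's point in four lines: a CPU touching one word at a
 * time uses 1/N of an N-word memory, and 1/N -> 0 as N grows.
 * The memory sizes below are illustrative, not anyone's machine. */
#include <stdio.h>

int main(void)
{
    int i;
    double n = 1024.0;          /* start at 1K words */

    for (i = 0; i < 4; i++, n *= 1024.0)
        printf("%16.0f words: %.12f of memory busy per access\n",
               n, 1.0 / n);
    return 0;
}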

The long-time readers of this group are all aware of the speed of light.

>            I'm old enough to remember people going around with a 
>            "one-nanosecond" wire in their pocket and using it to

Yes, her name was Adm. Grace Hopper.

>            place an upper bound on future achievable computing
>            speed. Events proved them wrong; because technology
>            has always found a way to do lateral steps around these
>            pseudo upper-bounds, and prove every forecaster wrong.

The readers of this group are mostly beyond the predictions of the
Watson letter or Olsen's personal-computer declaration.  Actually, I
believe you are mischaracterizing Adm. Hopper; she was not attempting
to forecast with her wires or her salt crystals.
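
For those who never saw the prop: the arithmetic is easy to reproduce.
(The 0.66c propagation velocity below is a typical assumed figure for
wire, not anything Hopper specified.)

/* How far a signal can possibly travel in one nanosecond.
 * 2.998e8 m/s is the vacuum speed of light; real wires are
 * slower (the 0.66 factor is an assumed, typical velocity). */
#include <stdio.h>

int main(void)
{
    const double c  = 2.998e8;      /* speed of light, m/s */
    const double ns = 1.0e-9;       /* one nanosecond, s   */
    double d = c * ns;              /* metres per ns       */

    printf("vacuum:        %.1f cm (%.1f in) per ns\n",
           d * 100.0, d * 100.0 / 2.54);
    printf("wire at 0.66c: %.1f cm (%.1f in) per ns\n",
           d * 0.66 * 100.0, d * 0.66 * 100.0 / 2.54);
    return 0;
}

That 11.8 inches is the wire she handed out.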

>            Right now there's still plenty of mileage to be achieved
>            by scaling down chip dimensions. Even within the current
>            architectural limits, there's no reason why we can't increase
>            our clock rates by a fair amount,

Quantify, please, and justify your answer.  Picoseconds?  Femtoseconds?
Explain the limitation.

>            If a SuperSparc today can handle
>            256 Megabytes or 1 Gigabyte of storage and be an efficient
>            computing engine, I don't see why the Von Neumann
>            bottleneck would prevent the same configuration from being
>            achieved inside a chip. When that's reality and I can buy such
>            a chip for the same price I buy an 8051 today - $1 or so - I will
>            build a 64-node computer for less than $100, and a 4096-node
>            one for less than $5,000.

I just finished reading a good book:
	Filters Against Folly, by biologist Garrett Hardin
My comment to you on this, from the book:
	There is no such thing as a free lunch.

So what if the readers of this group chipped in $99: you will build us
a 64-node parallel machine of unknown topology with unknown software (NT?)?
I can afford $1.  I just donated $1,000 to a local university.
So you build it, then what?  It's going to run my users' CFD codes?
I think I have enough pull on the net to get the other $98.
This will be good.  I would love to have a working parallel machine.
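
Just to run your own figures through the free-lunch filter (your
numbers, my arithmetic):

/* Take the quoted figures at face value: $1-per-node CPUs and the
 * stated machine budgets.  Whatever is left over has to buy memory,
 * boards, interconnect, power, and that software. */
#include <stdio.h>

int main(void)
{
    struct { int nodes; double budget; } m[] = {
        {   64,  100.0 },
        { 4096, 5000.0 },
    };
    int i;

    for (i = 0; i < (int)(sizeof m / sizeof m[0]); i++) {
        double cpus = m[i].nodes * 1.00;   /* $1 per node, as claimed */
        double left = m[i].budget - cpus;
        printf("%4d nodes: $%6.0f on CPUs, $%7.2f ($%.4f/node) "
               "for everything else\n",
               m[i].nodes, cpus, left, left / m[i].nodes);
    }
    return 0;
}

Fifty-six cents a node for topology, packaging, and software.  Some lunch.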

>            When that happens, today's concept of "supercomputer" will
>            be as obsolete as the concept of mainframe is today.

Oh?

PLEASE GO ON.

