Newsgroups: comp.parallel
From: alberto@moreira.MV.COM (Alberto C Moreira)
Subject: Re: Massively Parallel "Pizza Box" really the ICE box
Organization: MV Communications, Inc.
Date: 18 Sep 1995 14:36:32 GMT
Message-ID: <43k05g$27t@usenet.srv.cis.pitt.edu>

In article <43c4jv$5el@usenet.srv.cis.pitt.edu> eugene@pioneer.arc.nasa.gov (Eugene N. Miya) writes:

>In article <439ep4$pv1@usenet.srv.cis.pitt.edu> alberto@moreira.MV.COM
>(Alberto C Moreira) writes:
>>Snip out Russell's good point

>>         It's my impression - I may be wrong - that the SHARC is mostly a
>>         SIMD node. Each chip has 4 Megabits of on-chip memory; except for 
>>         the steep price, there's no reason why a parallel machine 
>>         can't be made out of lots of SHARCS and no additional memory. 

>You are wrong. The reason is called software (or lack of).
           
             Software can be written, and often is. If there's one thing
             we computer professionals shouldn't be scared of, it's a
             lack of software. That's not a problem, but rather a
             great opportunity.

>>         Every generation quadruples the number of gates/chip;

>You need to learn a little about the von Neumann bottleneck.
>Ivan Sutherland would be a little irked by the inadequacy of this point.

            Dimensions have shrunk this way for a couple of decades
            now. I'm not talking architecture, but physics.  

            I'm old enough to remember people going around with a
            "one-nanosecond" wire in their pocket and using it to
            place an upper bound on future achievable computing
            speed. Events proved them wrong: technology has always
            found a way to sidestep these pseudo upper bounds and
            prove every forecaster wrong.

            Right now there's still plenty of mileage to be achieved
            by scaling down chip dimensions. Even within the current
            architectural limits, there's no reason why we can't increase
            our clock rates by a fair amount, with the corresponding
            increase in performance. If a SuperSparc today can handle
            256 Megabytes or 1 Gigabyte of storage and be an efficient
            computing engine, I don't see why the von Neumann
            bottleneck would prevent the same configuration from being
            achieved inside a chip. When that's reality and I can buy such
            a chip for the same price I buy an 8051 today - $1 or so - I will
            build a 64-node computer for less than $100, and a 4096-node
            one for less than $5,000.
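
            The arithmetic above can be sketched in a few lines. This
            is just a back-of-the-envelope check, not a real costing:
            the $1 chip price comes from the 8051 comparison above,
            while the 20% overhead figure for boards, wiring, and
            power is my own assumption.

            ```python
            # Rough cost model for the node counts above.
            # Assumptions (not from any vendor data): $1 per chip,
            # plus a hypothetical 20% overhead for boards and wiring.
            CHIP_PRICE = 1.00   # dollars per processor chip (assumed)
            OVERHEAD = 0.20     # fractional packaging overhead (assumed)

            def machine_cost(nodes, chip_price=CHIP_PRICE, overhead=OVERHEAD):
                """Cost of an n-node machine: chips plus proportional overhead."""
                return nodes * chip_price * (1.0 + overhead)

            for n in (64, 4096):
                print(f"{n:5d} nodes: ${machine_cost(n):,.2f}")
            ```

            With those assumptions, 64 nodes come in under $100 and
            4096 nodes under $5,000, as claimed; a larger overhead
            fraction would of course push the bigger machine past the
            $5,000 mark.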

            When that happens, today's concept of "supercomputer" will
            be as obsolete as the concept of mainframe is today.


                                                             _alberto_         
               

