Newsgroups: comp.parallel
From: rick@cs.arizona.edu (Rick Schlichting)
Subject: Kahaner Report: Books: (1) Par AI (2) Japanese financing of research
Organization: University of Arizona CS Department, Tucson AZ
Date: 3 Mar 1995 21:12:36 -0700
Message-ID: <3ja736$csi@usenet.srv.cis.pitt.edu>

  [Dr. David Kahaner is a numerical analyst currently heading the Tokyo
   office of the Asian Technology Information Program (ATIP). The
   following is the professional opinion of David Kahaner and in no 
   way has the blessing of the US Government or any agency of it.  All 
   information is dated and of limited life time.  This disclaimer should 
   be noted on ANY attribution.]

  [Copies of previous reports written by Kahaner can be obtained using
   anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports
   or on the World Wide Web (WWW) at URL

          http://www.cs.arizona.edu/japan/www/kahaner_reports.html

  ]


To: Distribution
From: D.K.Kahaner, ATIP-Tokyo [kahaner@cs.titech.ac.jp]
Re: Books: (1) Parallel AI (2) Japanese financing of research
03/02/95 [MM/DD/YY]
This is file name "j-books3.95"

Dr. David K. Kahaner
Asian Technology Information Program (ATIP)
Harks Roppongi Building 1F
6-15-21 Roppongi
Minato-ku, Tokyo 106
 Tel: +81 3 5411-6670; Fax: +81 3 5411-6671

ATIP: A collaboration between
   US National Institute of Standards and Technology (NIST)
   University of New Mexico (UNM)
------------------------------------------------------------------------

ABSTRACT. Description of two books relevant to Japan. (1) Massively
Parallel Artificial Intelligence, ed. by Kitano & Hendler; (2) A
Researcher's Guide to Japanese Money: Fellowships, Grants, and Jobs from
Japanese Sources, by Oberlander & Schonbach.


(1) --------------------------------------------------------------------

"Massively Parallel Artificial Intelligence" was edited by

Dr. Hiroaki Kitano
Sony Computer Science Lab
Takanawa Muse Building, 3F
3-14-13 Higashigotanda
Shinagawa-ku, Tokyo 141
 Tel: +81 3 5448-4380, Fax: +81 3 5448-4273
 Email: KITANO@CSL.SONY.CO.JP

and

Prof James A. Hendler
Dept of Computer Science
Univ of Maryland
College Park, MD 20742
 Tel: (301) 405-2696; Fax: (301) 405-6707
 Email: HENDLER@CS.UMD.EDU


Prof Hendler has written several reports on parallel computing and
artificial intelligence research in Japan. Dr Kitano is one of Japan's
foremost researchers in AI. The book described below is not exclusively
about Japan, but gives an excellent overview of relevant research on the
application of parallel computing technologies (hardware and software) to
the field of AI. Abstracts of the papers are given. For further information contact
the authors.

			 				
	      Massively Parallel Artificial Intelligence
	     Hiroaki Kitano and James A. Hendler, editors
			 AAAI/MIT Press, 1994
				
				
       Massively parallel artificial intelligence is a new research
       field enabled by the emergence of powerful multiprocessor
       computers.  The enormous computing power and large memory space
       of these parallel computers have opened a new horizon for
       artificial intelligence researchers.  This has led to exciting
       new results ranging from new versions of standard AI tools and
       techniques to novel approaches to traditional areas such as
       natural language processing and vision.  In addition, areas
       such as genetic algorithms and artificial life have flourished
       under the aegis of this new computing technology.  This book
       presents several papers that introduce the reader to the
       panoply of new AI research growing out of this technology.
				
				
			      ABSTRACTS

FOREWORD
 Dr. David Waltz
 NEC Corp and Brandeis University

At its beginnings, AI was conceived with a very ambitious agenda: what
would it take to match or exceed human performance? This question was
central, both for specific tasks, such as general problem solving,
world champion-level chess playing and the Turing Test's "imitation
game," and for the brain in general. But these are different times. AI
has gone through a phase where expert systems were the great hope of
both the computer science and investor communities, and is still
suffering the chill of the AI winter. Whether spring is close or not
is unclear. In response, AI's goals have become either very short-term
and applied (e.g. an expert system for some application area -- shells
are passe) or very narrow (e.g. non-monotonic logic). Nonetheless,
grand goals are still alive within a few pockets of AI, in particular
within Massively Parallel AI (and of course in science fiction, which
never lost the vision). Within AI, MPAI probably comes closest to
recapturing the grand goals and mind-expanding excitement of the
field's early days.

		
THE CHALLENGE OF MASSIVE PARALLELISM
 Hiroaki Kitano
 Sony Computer Science Laboratory and
 Carnegie Mellon University

Artificial Intelligence has been the field of study for exploring the
principles underlying thought, and utilizing their discovery to
develop useful computers.  Traditional AI models have been,
consciously or subconsciously, optimized for available computing
resources, which has led AI in certain directions.  The emergence of
massively parallel computers liberates the way intelligence may be
modeled.  Although the AI community has yet to make a quantum leap,
there are attempts to make use of the opportunities offered by
massively parallel computers, such as memory-based reasoning, genetic
algorithms, and other novel models.  Even within the traditional AI
approach, researchers have begun to realize that the need for high
performance computing and very large knowledge bases to develop
intelligent systems requires massively parallel AI techniques.  In
this paper, I will argue that massively parallel artificial
intelligence will add new dimensions to the ways that the AI goals are
pursued, and demonstrate that massively parallel artificial
intelligence is where AI meets the real world.

MASSIVELY PARALLEL MATCHING OF KNOWLEDGE STRUCTURES
William A. Andersen, James A. Hendler, Matthew Evett, and Brian Kettler
University of Maryland

As knowledge bases used for AI systems increase in size, access to
relevant information is the dominant factor in the cost of inference.
This is especially true for analogical (or case-based) reasoning, in
which the ability of the system to perform inference is dependent on
efficient and flexible access to a large base of exemplars (cases)
judged likely to be relevant to solving a problem at hand.
In this chapter, we discuss a novel algorithm for efficient
associative matching of relational structures in large semantic
networks.  The structure matching algorithm uses massively parallel
hardware to search memory for knowledge structures matching a given
probe structure.  The algorithm is built on top of PARKA, a massively
parallel knowledge representation system which runs on the Connection
Machine.  We are currently exploring the utility of this algorithm in
CaPER, a case-based planning system.
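As a rough illustration of the retrieval step that PARKA parallelizes,
here is a minimal sequential sketch in Python. The frame representation,
names, and data are hypothetical and purely illustrative; the real system
performs much richer relational matching on the Connection Machine.

```python
def associative_match(memory, probe):
    """Return the names of all frames whose slots satisfy every
    slot/value constraint in `probe`.  Conceptually, each frame is
    checked by its own processor; here the scan is sequential."""
    return [name for name, frame in memory.items()
            if all(frame.get(slot) == value for slot, value in probe.items())]

# Toy case memory: frames represented as slot/value dictionaries.
memory = {
    "case-17": {"is-a": "delivery", "vehicle": "truck", "outcome": "late"},
    "case-42": {"is-a": "delivery", "vehicle": "van", "outcome": "on-time"},
    "case-99": {"is-a": "pickup", "vehicle": "truck", "outcome": "on-time"},
}

# Probe: all delivery cases that used a truck.
matches = associative_match(memory, {"is-a": "delivery", "vehicle": "truck"})
```

A case-based planner such as CaPER would then rank the retrieved frames
and adapt the best candidate; with one processor per frame, retrieval
cost stays nearly flat as the case base grows.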

ADVANCED UPDATE OPERATIONS IN MASSIVELY PARALLEL KNOWLEDGE REPRESENTATION
James Geller
New Jersey Institute of Technology
Newark, NJ 07102

Class hierarchies are of fundamental importance in Knowledge
Representation, and increasingly also in non-AI branches of computer
science such as object-oriented programming languages.  The current
paper introduces three important update problems and corresponding
massively parallel update operations for class trees: (1)
Interpolation of a class in an IS-A relation; (2) Tree restructuring
by subtree movement; (3) Group specialization by multiple subtree
movements under a new parent node.  Special purpose parallel
algorithms for these operations, as well as for an auxiliary update
operation called "Group Movement" are discussed and their validity is
established.  Tests of a Connection Machine implementation of these
algorithms are reported.

SELECTING SALIENT FEATURES FOR MACHINE LEARNING FROM LARGE CANDIDATE
POOLS THROUGH PARALLEL DECISION-TREE CONSTRUCTION
Kevin J. Cherkauer and Jude W. Shavlik
University of Wisconsin--Madison

The particular representation used to describe training and testing
examples can have profound effects on an inductive algorithm's ability
to learn. However, the space of possible representations is virtually
infinite, so choosing a good representation is not a simple task.
This chapter describes a method whereby the selection of a good input
representation for classification tasks is automated.  This technique,
which we call DT-SELECT ("Decision Tree feature Selection"), uses a
fast parallel implementation of ID3 [Quinlan, 1986] to build decision
trees that attempt to correctly classify the training data.  The
internal nodes of the trees are features drawn from very large pools
of complex general-purpose and domain-specific constructed features.
Thus, the features included in the trees constitute compact and
informative sets which can then be used as input representations for
other learning algorithms attacking the same problem.  We have
implemented DT-SELECT on a parallel message-passing MIMD architecture,
the Thinking Machines CM-5, enabling us to select from pools
containing several hundred thousand features in reasonable time.  We
present here some work using this approach to produce augmentations of
artificial neural network input representations for the molecular
biology problem of predicting protein secondary structures.
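A drastically simplified sketch of the idea, substituting greedy
information-gain selection for full parallel tree construction (all
names and data are illustrative, not from the chapter; on the CM-5 it
is the per-feature gain computations that run in parallel):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """Information gain from splitting `labels` on a boolean feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in (True, False):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        if subset:
            gain -= (len(subset) / n) * entropy(subset)
    return gain

def dt_select(examples, labels, feature_pool, max_features):
    """Greedy stand-in for DT-SELECT: repeatedly pick the candidate
    feature with the highest information gain on the training data.
    (The real method grows decision trees and keeps the features that
    appear at internal nodes.)"""
    selected = []
    for _ in range(max_features):
        scored = [(info_gain([f(e) for e in examples], labels), name)
                  for name, f in feature_pool.items() if name not in selected]
        if not scored:
            break
        best_gain, best_name = max(scored)
        if best_gain <= 0:          # no remaining feature is informative
            break
        selected.append(best_name)
    return selected

# Toy demonstration: "ge4" perfectly separates the classes, "odd" is noise.
examples = list(range(8))
labels = ["a"] * 4 + ["b"] * 4
pool = {"ge4": lambda e: e >= 4, "odd": lambda e: e % 2 == 1}
selected = dt_select(examples, labels, pool, max_features=2)  # ["ge4"]
```

The selected feature names can then serve as the input representation
for another learner, such as the neural networks mentioned above.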


A PARALLEL COMPUTATIONAL MODEL FOR INTEGRATED SPEECH AND NATURAL
LANGUAGE UNDERSTANDING
Sang-Hwa Chung, Dan I. Moldovan, and Ronald F. DeMara
University of Southern California

We present a parallel approach for integrating speech and natural
language understanding.  The method emphasizes a
hierarchically-structured knowledge base and direct memory-access
parsing techniques.  Processing is carried out by passing multiple
markers in parallel through the knowledge base.  Speech-specific
problems such as insertion, deletion, substitution, and word boundary
detection have been analyzed and their parallel solutions are
provided.  Results on the SNAP-1 multiprocessor show an 80%
recognition rate for the Air Traffic Control (ATC) domain.
Furthermore, speed-up of up to 15-fold is obtained from the parallel
platform which provides response times of a few seconds per sentence
for the ATC domain.


EXAMPLE-BASED TRANSLATION AND ITS MIMD IMPLEMENTATION
Satoshi Sato
School of Information Science
Japan Advanced Institute of Science and Technology, Hokuriku

This paper proposes a new Example-Based Translation system, MBT3,
which is designed for the translation of technical terms (noun
phrases).  We have implemented the system on a SparcStation 2, with a
translation database consisting of 7000 technical terms in Computer
Science.  In a preliminary evaluation, translation accuracy is 99%
for known terms and 78% for unknown terms.  This paper also
presents MBT3n, a MIMD implementation of MBT3 on an nCUBE2 processor.
In MBT3n, parallel best match retrieval is used, as opposed to
the sequential best match retrieval which is used in the
original (sequential) MBT3 system.  The translation performance of
MBT3n improves as the number of processors it runs on increases.
MBT3n with 256 processors on an nCUBE2 is about ten times faster than
the original MBT3 system on a SparcStation 2 in the best case.
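The split/merge pattern behind parallel best-match retrieval can be
sketched as follows. This uses Python threads and a generic
string-similarity score as a stand-in for MBT3's example-based
similarity measure, so every name and the toy database are illustrative
only:

```python
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher

def similarity(a, b):
    """Generic string similarity, standing in for MBT3's
    example-based distance between technical terms."""
    return SequenceMatcher(None, a, b).ratio()

def best_match_in(shard, query):
    """Best-matching (source, translation) pair within one shard."""
    return max(shard, key=lambda pair: similarity(pair[0], query))

def parallel_best_match(database, query, n_workers=4):
    """Partition the example database across workers, let each find its
    shard's best match, then reduce the local winners to a global one --
    the same split/merge pattern MBT3n applies across nCUBE2 nodes."""
    shards = [database[i::n_workers] for i in range(n_workers)]
    shards = [s for s in shards if s]              # drop empty shards
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        local = list(pool.map(lambda s: best_match_in(s, query), shards))
    return max(local, key=lambda pair: similarity(pair[0], query))

# Toy database of (source term, translation) pairs.
db = [("binary tree", "nibungi"), ("hash table", "hasshu hyou"),
      ("binary search", "nibun tansaku"), ("linked list", "renketsu risuto")]
match = parallel_best_match(db, "binary trees", n_workers=2)
```

Because the shards are searched independently, adding workers speeds up
retrieval until the final reduction step dominates.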

LANGUAGE LEARNING VIA PERCEPTUAL/MOTOR ASSOCIATION: A MASSIVELY PARALLEL
MODEL
Valeriy I. Nenov and Michael G. Dyer
University of California Los Angeles

DETE is a massively parallel language learning system designed to
model early stages of child language learning, in which it is postulated
that the meanings of verbal utterances are acquired through repeated
association with perceptual and motor experience.  During learning,
DETE is presented with visual input consisting of mono-colored, 2-D
homogeneous and somewhat noisy shapes ("blobs") of varying sizes and
colors, moving about on a (simulated) visual screen.  DETE may also
receive commands to move its single effector ("finger") and/or
move/zoom its single retina ("eye").  DETE is presented with a
concurrent sequence of simplified phonemes, representing words/phrase
utterances that describe its visual/motor input.  After learning, DETE
demonstrates its language understanding by performing: (a)
Verbal-to-[visual/motor] association - given a verbal sequence, DETE
generates internal representations of the visual/motor sequence being
described.  (b) [Visual/motor]-to-verbal association - given a
visual/motor event, DETE generates a verbal sequence describing it.
DETE's learning abilities result from a novel neural network
architecture, called Katamic memory.  DETE contains over 80 Katamic
memory modules consisting of over 1 million artificial neural
elements.  The model was developed and runs on a 16K processor CM-2
Connection Machine.  DETE has been tested successfully on small,
restricted subsets of English and Spanish - languages that differ in
inflectional properties, word order, and how they categorize
perceptual reality.

MASSIVELY PARALLEL SEARCH FOR THE INTERPRETATION OF AERIAL IMAGES
Larry S. Davis
Computer Vision Laboratory, Center for Automation Research
University of Maryland

P.J. Narayanan
The Robotics Institute
Carnegie Mellon University

In this paper, we present a parallel search scheme for model-based
interpretation of aerial images, following a focus-of-attention
paradigm.  Interpretation is performed using the gray level image of
an aerial scene and its segmentation into connected components of
almost constant gray level.  Candidate objects are generated from the
window as connected combinations of its components.  Each candidate is
matched against the model by checking if the model constraints are
satisfied by the parameters computed from the region.  The problem of
candidate generation and matching is posed as searching in the space
of combinations of connected components in the image, with finding an
(optimally) successful region as the goal.  Our implementation
exploits parallelism at multiple levels by parallelizing the
management of the open list and other control tasks as well as the
task of model matching.  We discuss and present the implementation of
the interpretation system on a Connection Machine CM-2.


MASSIVELY PARALLEL, ADAPTIVE, COLOR IMAGE PROCESSING FOR AUTONOMOUS
ROAD FOLLOWING
Todd M. Jochem and Shumeet Baluja
Carnegie Mellon University

In recent years, significant progress has been made towards achieving
autonomous roadway navigation using video images.  None of the systems
developed take full advantage of all the information in the 512 x 512
pixel, 30 frame/second color image sequence.  This can be attributed
to the large amount of data which is present in the color video image
stream (22.5 Mbytes/second) as well as the limited amount of computing
resources available to the systems.  We have increased the computing
power available to the system by using a data parallel computer.
Specifically, a single instruction, multiple data (SIMD) machine was
used to develop simple and efficient parallel algorithms, largely
based on connectionist techniques, which can process every pixel in
the incoming 30 frame/second, color video image stream.  The system
presented here uses substantially larger frames and processes them at
faster rates than other color road following systems.  This is
achievable through the use of algorithms specifically designed for a
fine-grained parallel machine as opposed to ones ported from existing
systems to parallel architectures.  The algorithms presented here were
tested on 4K and 16K processor MasPar MP-1 and on 4K, 8K, and 16K
processor MasPar MP-2 parallel machines and were used to drive
Carnegie Mellon's testbed vehicle, the Navlab I, on paved roads near
campus.


BIOLAND: A MASSIVELY PARALLEL SIMULATION ENVIRONMENT FOR EVOLVING
DISTRIBUTED FORMS OF INTELLIGENT BEHAVIOR
Gregory M. Werner and Michael G. Dyer
Computer Science Department, UCLA

We have created a simulated world ("BioLand") designed to support
experiments on the evolution of cooperation and competition, with a
specific interest in evolving communication strategies.  Into this
environment we have placed several distinct populations ("species") of
mobile artificial agents (termed "biots") which sense their
environment through "scent" and "sound" gradients.  The behavior of
each biot is controlled by an artificial neural network specified by
its individual genome.  We have allowed biot populations to interact
and evolve over time (via recombination and mutation of parental
genes).  In a variety of experiments we have observed mating, food
finding, herding, prey pursuit, and predator avoidance.

The immense amount of computation required for such simulations (of both
the physics of the environment and the populations of neural networks that
control the biots) makes it necessary to run this system on a massively
parallel machine.  Here we describe BioLand as an approach to developing
distributed forms of intelligence and address some of the issues and
problems encountered while implementing the model on a CM-2 Connection
Machine.

WAFER-SCALE INTEGRATION FOR MASSIVELY PARALLEL AI
Moritoshi Yasunaga
Hitachi, Ltd.

Hiroaki Kitano
Carnegie Mellon University

Massively Parallel AI paradigms, such as neural networks, memory-based
reasoning and genetic algorithms, are based on a large number of
homogeneous processors or data.  Because of the massive parallelism, it is
difficult to make high speed desk-top-size hardware for massively parallel
AI systems by using ordinary VLSI technologies.  On the other hand, WSI
(Wafer Scale Integration) technology is expected to enable the creation of
"dream hardware" -- massively parallel desk-top systems.  Ordinary
computers are very sensitive to defects and the number of defects increases
exponentially as the semiconductor circuit area increases.  Therefore, it
is very difficult to make ordinary computers by WSI technologies.  On the
contrary, Massively Parallel AI seems to be robust against defects because
of the massive parallelism.  Thus, massively parallel AI and WSI seem to be
an ideal marriage of two state-of-the-art technologies.  In this chapter,
the hardware implementation of neural networks and memory-based reasoning
by WSI are proposed and experimental and simulation results of the
robustness are discussed.


EVOLVABLE HARDWARE
Tetsuya Higuchi, Hitoshi Iba
Electrotechnical Laboratory, Japan

Bernard Manderick
Erasmus University, Rotterdam

In this chapter, we describe a parallel processing architecture for
Evolvable Hardware (EHW) which changes its own hardware structure in
order to adapt to the environment in which it is embedded.  This
adaptation process is a combination of genetic learning with
reinforcement learning.  Our goal in implementing adaptation in
hardware is to produce a flexible and fault-tolerant architecture
which responds in real time to a changing environment.  If we succeed,
we are convinced that EHW will prove to be a key technology for
building autonomous agents.
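As a toy illustration of the genetic-learning half of this loop, here
is a minimal generational GA over bit strings, with a bit string
standing in for a hardware configuration and `fitness` standing in for
the environment's reinforcement signal. All parameter values and names
are illustrative, not taken from the chapter:

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60,
           mutation_rate=0.02, seed=1):
    """Evolve a population of bit-string 'configurations' toward
    higher fitness via truncation selection, one-point crossover,
    and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)      # pick two parents
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (mutation).
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy reinforcement signal: reward configurations with many 1-bits.
best = evolve(fitness=sum)
```

In EHW the evaluated "genome" would instead configure reprogrammable
logic, and the fitness signal would come from the embedding environment.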


-----------------------------------------------------------------------

(2) --------------------------------------------------------------------

The book "A Researcher's Guide to Japanese Money: Fellowships, Grants, and
Jobs from Japanese Sources" was written and printed privately in 1994 by
two German medical researchers working in Japan.

Dr. Christian Schonbach
Dept of Tumor Biology
The Institute of Medical Science
The University of Tokyo
4-6-1 Shirokanedai, Minato-ku
Tokyo 106, Japan
 Email: SCHOENBH@IMS.U-TOKYO.AC.JP

and

Dr. Christian Oberlander
First Department of Surgery
The University of Tokyo Hospital
7-3-1 Hongo, Bunkyo-ku
Tokyo 113, Japan
 Email: CHRIO-TKY@UMIN.U-TOKYO.AC.JP

This 127-page book consists of two main parts. The first part describes
general funding and living conditions in Japan. There is an outline of
recent developments in Japanese research funding, along with some ideas
about strategies for obtaining this money. Some points are made about the
reform of Japan's research system, the way research is run in Japanese
labs, the kinds of organizational problems applicants might encounter,
etc. There is also a discussion of problems in daily life, with special
emphasis on topics not normally covered elsewhere, and a checklist that
presents an idealized procedure for starting a stay in Japan.

The second part of the book is a large reference section. It contains
basic information on, and comments about, 100 funding programs, with
details of applicable fields, prerequisites, application procedures, time
spans, financial terms of awards, and the number of people/projects given
awards. There is also a list of 44 research institutes in the science city
of Tsukuba that are potential postdoc and/or employment sites. All names
and addresses are given, along with a section on Internet sources of
information and a reading list.

Although it is very difficult to keep such a comprehensive collection of
information up to date, in its current form (published Dec 1994) this is a
valuable resource. Readers will probably wish the text were available
on-line. Perhaps the authors will oblige.


-------------------------------END OF REPORT---------------------------

