Artificial Intelligence is no longer the stuff of science fiction. Half a century of research has resulted in machines capable of beating the best human chess players, and humanoid robots which are able to walk and interact with us. But how similar is this 'intelligence' to our own? Can machines really think? Is the mind just a complicated computer program? Addressing major issues in the design of intelligent machines, such as consciousness and environment, and covering everything from the influential groundwork of Alan Turing to the cutting-edge robots of today, Introducing Artificial Intelligence is a uniquely accessible illustrated introduction to this fascinating area of science.
Published by Icon Books Ltd, Omnibus Business Centre, 39–41 North Road, London N7 9DP
ISBN: 978-178578-009-7
Text copyright © 2012 Icon Books Ltd
Illustrations copyright © 2012 Icon Books Ltd
The author and illustrator have asserted their moral rights
Originating editor: Richard Appignanesi
No part of this book may be reproduced in any form, or by any means, without prior permission in writing from the publisher.
Cover
Title Page
Copyright
Artificial Intelligence
Defining the AI Problem
What Is an Agent?
AI as an Empirical Science
Alien-AI Engineering
Solving the AI Problem
Ambition Within Limits
Taking AI to its Limits: Immortality and Transhumanism
Super-Human Intelligence
Neighbouring Disciplines
AI and Psychology
Cognitive Psychology
Cognitive Science
AI and Philosophy
The Mind-Body Problem
Ontology and Hermeneutics
A Positive Start
Optimism and Bold Claims
Intelligence and Cognition
Mimicry of Life
Complex Behaviour
Is Elsie Intelligent?
Clever Hans: A Cautionary Tale
Language, Cognition and Environment
Two Strands Concerning the AI Problem
AI’s Central Dogma: Cognitivism
What is Computation?
The Turing Machine
The Brain as a Computing Device
Universal Computation
Computation and Cognitivism
The Machine Brain
Functionalist Separation of Mind from Brain
The Physical Symbol Systems Hypothesis
A Theory of Intelligent Action
Could a Machine Really Think?
The Turing Test
The Loebner Prize
Problems with the Turing Test
Inside the Machine: Searle’s Chinese Room
Searle’s Chinese Room
One Answer to Searle
Applying Complexity Theory
Is Understanding an Emergent Property?
Machines Built From the Right Stuff
AI and Dualism
The Brain Prosthesis Experiment
Roger Penrose and Quantum Effects
Penrose and Gödel’s Theorem
Quantum Gravity and Consciousness
Is AI Really About Thinking Machines?
Tackling the Intentionality Problem
Investigating the Cognitivist Stance
Beyond Elsie
Cognitive Modelling
A Model Is Not an Explanation
The Nematode
Really Understanding Behaviour
Reducing the Level of Description
Simplifying the Problem
Decompose and Simplify
The Module Basis
The Micro-World
Early Successes: Game Playing
Self-Improving Program
Representing the Game Internally
Brute Force “Search Space” Exploration
Infinite Chess Spaces
Getting By With Heuristics
Deep Blue
Lack of Progress
Giving Machines Knowledge
Logic and Thought
The CYC Project and Brittleness
Can the CYC Project Succeed?
A Cognitive Robot: Shakey
Shakey's Environment
Sense-Model-Plan-Act
Limited to Plan
New Shakey
Shakey's Limitations
The Connectionist Stance
Biological Influences
Neural Computation
Neural Networks
The Anatomy of a Neural Network
Biological Plausibility
Parallel Distributed Processing
Parallel vs. Serial Computation
Robustness and Graceful Degradation
Machine Learning and Connectionism
Learning in Neural Networks
Local Representations
Distributed Representations
Complex Activity
Interpreting Distributed Representations
Complementary Approaches
Can Neural Networks Think?
The Chinese Gym
The Symbol Grounding Problem
Symbol Grounding
Breaking the Circle
The Demise of AI?
New AI
Micro-Worlds are Unlike the Everyday World
The Problems of Conventional AI
The New Argument from Evolution
The Argument from Biology
Non-Cognitive Behaviour
The Argument from Philosophy
Against Formalism
No Disembodied Intelligence
Agents in the Real World
The New AI
The Second Principle of Situatedness
The Third Principle of Bottom-Up Design
Behaviour-Based Robotics
Behaviours as Units of Design
The Robot Genghis
Behaviour by Design
Collections of Agents
The Talking Heads Experiment
Categorizing Objects
The Naming Game
A Feedback Process
Self-Organization in Cognitive Robots
The Future
The Near Future
The Nearer Future
The Sony Dream Robot
All Singing, All Dancing
The SDR is a Serious Robot
Future Possibilities
Moravec’s Prediction
AI: A New Kind of Evolution?
Evolution Without Biology
A Forecast
Mechanized Cognition
The Future Meeting of the Paths
Further Reading
The Author and Artist
Acknowledgements
Index
Over the past half-century there has been intense research into the construction of intelligent machinery – the problem of creating Artificial Intelligence. This research has resulted in chess-playing computers capable of beating the best players, and humanoid robots able to negotiate novel environments and interact with people.
Many advances have practical applications. Computer systems can extract knowledge from gigantic collections of data to help scientists discover new drug treatments. Intelligent machinery can mean life or death.
Computer systems are installed at airports to sniff luggage for explosives. Military hardware is becoming increasingly reliant on research into intelligent machinery: missiles now find their targets with the aid of machine vision systems.
Research into Artificial Intelligence, or AI, has resulted in successful engineering projects. But perhaps more importantly, AI raises questions that extend way beyond engineering applications.
The holy grail of Artificial Intelligence is to understand man as a machine. Artificial Intelligence also aims to arrive at a general theory of intelligent action in agents: not just humans and animals, but individuals in the wider sense.
The capabilities of an agent could extend beyond that which we can currently imagine. This is an exceptionally bold enterprise which tackles, head-on, philosophical arguments which have been raging for thousands of years.
An agent is something capable of intelligent behaviour. It could be a robot or a computer program. Physical agents, such as robots, have a clear interpretation: they are realized as physical devices that interact with a physical environment. The majority of AI research, however, is concerned with virtual or software agents, which exist as models occupying a virtual environment held inside a computer.
The distinction between physical and virtual agents is not always clear.
Researchers may experiment with virtual agents that occasionally become physically instantiated by downloading themselves into a robotic body. An agent may itself be composed of many sub-agents.
Some AI systems solve problems by employing techniques observed in ant colonies. In this case, what appears to be a single agent may be relying on the combined behaviour of hundreds of sub-agents.
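The idea that a single outward agent may really be a collection of many simpler sub-agents can be sketched in a few lines of code. The following Python sketch is not from the book: the AntColonyAgent class and its toy cost function are invented for illustration, loosely in the spirit of ant-colony techniques. Each “ant” probes one candidate solution, and the colony as a whole presents a single answer.

import random

# A minimal, hypothetical sketch (not the book's example): one outward-facing
# "agent" whose behaviour is the combined work of many simple sub-agents.

def cost(x):
    """Toy problem: the colony tries to minimise this function."""
    return (x - 3.2) ** 2

class AntColonyAgent:
    def __init__(self, n_ants=100, n_rounds=20):
        self.n_ants = n_ants
        self.n_rounds = n_rounds

    def act(self):
        """Outwardly a single decision; internally hundreds of sub-agents."""
        best_x, best_cost = None, float("inf")
        centre, spread = 0.0, 10.0
        for _ in range(self.n_rounds):
            # Each 'ant' explores a candidate near the colony's current focus.
            samples = [random.gauss(centre, spread) for _ in range(self.n_ants)]
            for x in samples:
                c = cost(x)
                if c < best_cost:
                    best_x, best_cost = x, c
            # The colony's focus shifts towards promising regions
            # (a crude stand-in for pheromone reinforcement).
            centre, spread = best_x, spread * 0.7
        return best_x

if __name__ == "__main__":
    agent = AntColonyAgent()
    print("Colony's answer:", round(agent.act(), 3))

Run on its own, the sketch prints a value close to 3.2: no individual “ant” solves the problem, but their pooled exploration does.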
Artificial Intelligence is a huge undertaking. Marvin Minsky (b. 1927), one of the founding fathers of AI, argues: “The AI problem is one of the hardest science has ever undertaken.” AI has one foot in science and one in engineering.
In its most extreme form, known as Strong AI, the goal is to build a machine capable of thought, consciousness and emotions. This view holds that humans are no more than elaborate computers. Weak AI is less audacious.
The aim of Weak AI is to develop theories of human and animal intelligence, and then test these theories by building working models, usually in the form of computer programs or robots.
The AI researcher views the working model as a tool to aid understanding. It is not proposed that machines themselves are capable of thought, consciousness and emotions.
So, for Weak AI, the model is a useful tool for understanding the mind; for Strong AI, the model is a mind.
AI also aims to build machinery that is not necessarily based on human or animal intelligence.
Such machines may exhibit intelligent behaviour, but the basis for this behaviour is not important. The aim is to design useful intelligent machinery by whatever means.
Because the mechanisms underlying such systems are not intended to mirror the mechanisms underlying human intelligence, this approach to AI is sometimes termed Alien-AI.
So, for some, solving the AI problem would mean finding a way to build machines with capabilities on a par with, or beyond, those found in humans.
Humans and animals may turn out to be the least intelligent examples of a class of intelligent agents yet to be discovered. The goal of Strong AI is subject to heated debate and may turn out to be impossible.
But for most researchers working on AI, the outcome of the Strong AI debate is of little direct consequence.
AI, in its weak form, concerns itself more with the degree to which we can explain the mechanisms that underlie human and animal behaviour.
The construction of intelligent machines is used as a vehicle for understanding intelligent action. Strong AI is highly ambitious and sets itself goals that may be beyond our grasp.
The strong stance can be contrasted with the more widespread and cautious goal of engineering clever machines, which is already an established approach, proven by successful engineering projects.
“We cannot hold back AI any more than primitive man could have suppressed the spread of speaking” – Doug Lenat and Edward Feigenbaum
If we assume that Strong AI is a real possibility, then several fundamental questions emerge.
Imagine being able to leave your body and shift your mental life onto machinery that has better long-term prospects than the constantly ageing organic body you currently inhabit. This possibility is entertained by Transhumanists and Extropians.
The problem that Strong AI aims to solve must shed light on this possibility. Strong AI’s hypothesis is that thought, as well as other mental characteristics, is not inextricably linked to our organic bodies. This makes immortality a possibility, because one’s mental life could exist on a more robust platform.
Perhaps our intellectual capacity is limited by the design of our brain. Our brain structure has evolved over millions of years. There is absolutely no reason to presume it cannot evolve further, either through continued biological evolution or as a result of human intervention through engineering. The job our brain does is amazing when we consider that the machinery it is made from is very slow in comparison to the cheap electrical components that make up a modern computer.
Brains built from more advanced machinery could result in “super-human intelligence.” For some, this is one of the goals of AI.
“Certum quod factum.” [One is certain only of what one builds] – Giambattista Vico (1668–1744)
What sets AI apart from other attempts to understand the mechanisms behind human and animal cognition is that AI aims to gain understanding by building working models. Through the synthetic construction of working models, AI can test and develop theories of intelligent action.
The big questions of “mental processes” tackled by AI are bound to a number of disciplines – psychology, philosophy, linguistics and neuroscience. AI’s goal of constructing machinery is underpinned by logic, mathematics and computer science. A significant discovery in any one of these disciplines could impact on the development of AI.
The objectives of AI and psychology overlap. Both aim to understand the mental processes that underpin human and animal behaviour. Psychologists in the late 1950s began to abandon the idea that Behaviourism was the only scientific route to understanding humans.
Behaviourists believe that explanations for human and animal behaviour should not appeal to unobserved “mental entities”, but rather concentrate on what we can be sure of: observations of behaviour. Instead of restricting the object of study to stimulus-response relationships, those who abandoned Behaviourism began to consider internal “mentalistic” processes, such as memory, learning and reasoning, as a valid set of concepts for explaining why humans act intelligently.
Around the same time, the idea that the computer could act as a model of thought was gaining popularity. Putting these two concepts together naturally suggests an approach to psychology based on a computational theory of mind.
In 1957, Herbert Simon (1916–2001), an AI pioneer, predicted that within ten years psychological theories would take the form of computer programs.
By the end of the 1960s, cognitive psychology had emerged as a branch of psychology concerned with explaining cognitive function in information-processing terms, and ultimately relying on the computer as a metaphor for cognition.
It is clear that AI and cognitive psychology have a great deal of common interest.
This has naturally led to a common pursuit known as cognitive science. AI sits alongside cognitive psychology at the core of an interdisciplinary approach to understanding intelligent activity. The concepts in this book therefore rightfully fall within the remit of cognitive science, as well as AI.
Some of the fundamental questions asked by AI have been the hard stuff of philosophers for thousands of years. AI is perhaps unique among the sciences in its intimate and reciprocal relationship with philosophy.
In one survey, AI researchers were asked which discipline they felt most closely tied to. The most frequent answer was philosophy.
The mind-body problem dates back to René Descartes