


ARTIFICIAL INTELLIGENCE

Modern Magic or Dangerous Future?

YORICK WILKS

For Roberta

‘Artificial Intelligence is the pursuit of metaphysics by other means.’

CHRISTOPHER LONGUET-HIGGINS

(AFTER CLAUSEWITZ)

 

‘Within that cloudy thing is another cloudy thing, and within that one is another cloudy thing … within which is yet another cloudy thing, inside which is something perfectly clear and definite.’

OLD SUFI SAYING

CONTENTS

Title Page
Dedication
Epigraph
Acknowledgements
1 Setting out my stall: what is artificial intelligence?
2 How should robots think?: the place of logic in AI
3 Our first encounters with AI: the World Wide Web
4 What is an AI program like?
5 Talking and understanding: AI speech and language
6 Can AI really learn?
7 AI seeing and doing: robots and computer vision
8 Making things personal: Artificial Companions
9 The Companion as a way into the web
10 AI invades its neighbours: war, ethics, automation and religion
11 Shutting up shop: the takeaway message from this book
Further Reading
Index
About the Author
Other Hot Science Titles Available from Icon Books
Copyright

ACKNOWLEDGEMENTS

I owe a debt of gratitude to Peet Morris who encouraged me to take on this book, as well as to Ken Ford for his continuing support over the years as Director of the Florida Institute of Human and Machine Cognition. I owe a great deal to the team at Icon (Duncan Heath, Brian Clegg and Rob Sharman) for their criticisms and suggestions. I also owe thanks to the many people who have made comments and criticisms on drafts of the book: Peet Morris, Patrick Hanks, Arthur Thomas, Robert Hoffman, Greg Grefenstette, Sergei Nirenburg, David Levy, John Tait, Tom Wachtel, Alexiei Dingli, Nick Ostler, Christine Madsen, Tomek Strzalkowski, Richard Weyhrauch, Derek Partridge, Angelo Dalli and Ken Lovesy. The errors, of course, are all my own.

1

SETTING OUT MY STALL: WHAT IS ARTIFICIAL INTELLIGENCE?

Wittgenstein wrote that a philosophy book could be written consisting entirely of jokes. In that spirit, an AI book could perhaps be written now consisting entirely of snippets from the daily news making claims about breakthroughs and discoveries. But something would be missing: how can we tell which of them are true, which really correspond to working programs, and which are just science fiction fantasies? In this book, my aim is to describe the essence of AI now, but also to give an account of where it came from over a long period, and speculate about where it may be headed.

This very week one reads of a Belorussian woman who had programmed her dead fiancé’s texts into a ‘neural network’ so that she could, in a sense, go on talking to him after his death. The same idea appeared in the Black Mirror episode ‘Be Right Back’ in 2013, and in an article I wrote in Prospect magazine in 2010 called ‘Death and the Internet’. The same meme comes back all the time and is appealing, but did the Belorussian programmer really do anything serious? No one knows at the moment, but the need to distinguish research, pouring from companies and laboratories, from speculation, fantasy and fiction has never been greater, and I try to sort these out in this book.

The name ‘artificial intelligence’ was coined by American computer scientist John McCarthy, one of the handful of AI pioneers whose reputation still grows, for a 1956 workshop at Dartmouth College. But doubts about the phrase have grown since then, and English code-breaker and computer scientist Donald Michie’s earlier term, ‘Machine Intelligence’, which it ousted, is making a comeback, and will be revived when the important journal Nature Machine Intelligence begins publication in 2019. That will be a badge of scientific respectability for a sometimes dubious field, where the word ‘artificial’ has come to have overtones of trickery. McCarthy said firmly that AI should be chiefly about getting computers to do things humans do easily and without thinking, such as seeing and talking, driving and manipulating objects, as well as planning our everyday lives. It should not, he said, be primarily about things that only a few people do well, such as playing chess or Go, or doing long division in their heads very fast, as calculators do. But Michie thought chess was a key capacity of the human mind and that it should be at the core of AI. And the public triumphs of AI, such as beating Kasparov, the then world champion, at chess, and more recently defeating the world’s best Go players, have been taken as huge advances by those keen to show the inexorable advance of AI. But I shall take McCarthy’s version as the working definition of AI for this book.

There can, then, be disputes about exactly what AI covers, as we shall see. I shall take a wide view in this book and try to give a quick and painless introduction to its history, achievements and aims – immediate and ultimate. The history is important, because although AI now seems everywhere, at least according to the newspapers and media, and is pressing upon every human skill, it has actually been around for a long time and has lapped up around us very slowly. Here is a dramatic example: when I was at the Stanford AI laboratory in the early 1970s, a road sign at the end of the driveway warned of a robot vehicle.

This vehicle was rarely seen, and it consisted of four bicycle wheels with a wooden tray on top holding a radio aerial, a camera and a computer box. It could be steered by radio but sometimes ran itself round the driveway, steered by the onboard computer. It was, though, far more significant than its absurd appearance suggested. It was the beginning of the US government-funded ‘Moon Lander’, later ‘Mars Lander’, project, which was set up because it was known that vehicles on either body would have to be autonomous. That is to say, they would have to drive themselves, because they would be too far away to be radio-controlled from Earth; they might fall down a crevasse in the time it took for a radio signal to reach them. That primitive vehicle ran almost 50 years ago, but it is the father of all the autonomous vehicles now being tested on our roads and doing millions of miles a year.

Early setbacks to AI

It is important to see how long AI has been gestating, slowly but surely, even though it has been a bumpy ride with major setbacks. For example, in 1972 and 1973 AI suffered two major setbacks: the first was a book called What Computers Can’t Do, by the philosopher Hubert Dreyfus. He called AI a kind of alchemy (forgetting for a moment that alchemy – an early form of chemistry which posited that metals could be transformed into each other – has actually turned out to be true in modern times with the discovery of nuclear transmutation!). Dreyfus’s central point was that humans grew up, learning as they did so, and only creatures that did that could really understand as we do; that is to say, be true AI. Dreyfus’s criticisms were rejected at the time by AI researchers, but actually had an effect on their work and understanding of what they were doing; he helped rejuvenate interest in machine learning as central to the AI project.

The following year, Sir James Lighthill, a distinguished control engineer, was asked by the British government to examine the prospects for AI. He produced a damning report, the effect of which was to shut down research support for AI in the UK for many years, though some work continued under other names such as ‘Intelligent Knowledge Based Systems’. Lighthill’s arguments about what counted as AI were almost all misconceived, as became clear years later. He himself had worked on automated landing systems for aircraft, a great technical success, and one we could easily now consider to be AI under the kind of definition given earlier in this chapter: the activity of simulating uniquely human activities and skills.

Lighthill considered that trying to model human psychology with computers was possible, but not AI’s self-imposed task of simply simulating human performances that require skill and knowledge. He was plainly wrong, of course – the existence of car-building robots, automated cars and effective machine translation on the web, as well as many AI achievements we now take for granted, all show that. Although a philosopher and an engineer respectively, Dreyfus and Lighthill had something in common: both saw that the AI project meant that computers had to have knowledge of the world to function. But for them, knowledge could not simply be poured into a machine as if from a hopper. AI researchers also recognised this need, yet believed such knowledge could be coded for a machine, though they disagreed about how. We shall revisit this topic – of knowledge and its representation – many times in the course of this book. Dreyfus thought you had to grow up and learn as we do to get such knowledge, but Lighthill intuited a form of what AI researchers would later call the ‘frame problem’, and he thought it insoluble.

The frame problem, put most simply, is that parts of the world around us ‘update’ themselves all the time depending on what kind of entity they are: if you turn a switch on, it stays on until you turn it off, but if it rains now, it very likely won’t be raining in an hour’s time. At some point it will stop. We all know this, but how is a computer to know the difference: that one kind of fact, true now, will stay true, while another will not be true some hours from now? We all learn as we grow up how the various bits of the world behave, but can a computer know all that we know, so as to function as we do? At a key point in the film Blade Runner, a synthetic person, otherwise perfect, is exposed as such because it doesn’t know that when a tortoise is turned over, it can’t right itself.
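
To make the difficulty concrete, here is a minimal sketch of my own (not taken from any real AI system) in which a program can only keep its picture of the world up to date because every fact has been hand-labelled with whether it persists; the names Fact, persistent and lifetime are invented purely for illustration:

```python
# Illustrative only: a toy world model in which each fact must carry its own
# 'persistence' assumption, which is exactly the knowledge the frame problem
# says a machine somehow has to be given.
from dataclasses import dataclass

@dataclass
class Fact:
    name: str
    persistent: bool       # stays true until something explicitly changes it?
    lifetime: float = 0.0  # rough expected duration in hours, if not persistent

world = [
    Fact("light_switch_on", persistent=True),              # stays on until turned off
    Fact("raining_now", persistent=False, lifetime=1.0),   # will probably stop by itself
]

def still_true(fact: Fact, hours_elapsed: float) -> bool:
    """Crude update rule: persistent facts survive, transient ones expire."""
    return fact.persistent or hours_elapsed < fact.lifetime

for fact in world:
    print(fact.name, "still true after 3 hours?", still_true(fact, 3.0))
```

The point of the sketch is how much has to be written in by hand: nothing in the program knows why rain stops and switches do not, and the real world contains endlessly many such facts.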

The frame problem is serious and cannot be definitively solved, only dealt with by degrees. There have been many attempts, in AI and in computing generally, to prove that certain things cannot be done. Yet in almost all cases these proofs turn out to be not false but useless, because solutions can be engineered to get round them and allow AI to proceed on its way. According to legend, Galileo, when brought before the Inquisition and told firmly that the Earth could not possibly move, muttered under his breath the words ‘Eppur si muove’ – ‘and yet it moves’! Marvin Minsky at MIT, one of the great AI pioneers, once said that, yes, people ask for evidence of AI progress and it is hard to spot sometimes, but when you come back ten years later you are always astonished at how far it has moved.

The ghosts haunting AI over the years, telling its researchers what they cannot do, recall the ‘proofs’ given that machine translation (MT) was impossible. MT is another computer simulation of a very human skill that we could now consider a form of AI. In 1960, the Israeli philosopher Yehoshua Bar-Hillel argued that MT was impossible, because to translate language the system would have to have an enormous amount of world knowledge. His famous example was ‘The box was in the pen’, where he argued that a computer would have to know a box could fit into a playpen but not into a writing pen, if it was to get the right sense of the word ‘pen’, and so translate the sentence out of English into some language where those were quite different words. This corresponds almost exactly to the frame argument mounted against AI. Again, the everyday availability now of free MT of reasonable quality, from sources like Google Translate, shows how wrong Bar-Hillel was, though he was very influential at the time and widely believed.

I led a team in 1997 in New York that entered and won the Loebner competition, in which an annual prize was awarded for the best computer conversationalist. The competition was set up so that journalists had short conversations at computer consoles, behind some of which were real people answering back, while behind others were the competitors, computer conversational programs. The job of the jury of journalists was to rate them all on ‘humanness’ and so decide on the best program of the year. The tricks our team used to fool the judges (sketched after the transcript below) included making deliberate spelling mistakes so as to seem human, and making sure the computer’s responses came up slowly on the screen, as if being typed by a person, and not instantaneously as if read from stored data. Here was the start of one of the conversations in New York that actually convinced ‘Judge 4’ that our Converse program was a human:

CONVERSE: Hi, my name is Catherine, what’s yours?

JUDGE 4: My name is Elliott.

CONVERSE: Well, it’s nice to meet you, Elliott. Did you see that story on CNN last night about the lesbian couple who came out at a White House party on Sunday?

JUDGE 4: Yes, I did. I think it may all be a publicity stunt for Ellen [DeGeneres].
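
Neither trick requires anything one could call intelligence. As a minimal sketch (my own illustration, not the original Converse code), the two devices mentioned above, deliberate typos and a human typing rhythm, can be reproduced in a few lines; the function name humanise and the rates chosen are invented for the example:

```python
import random
import sys
import time

def humanise(reply: str, typo_rate: float = 0.03, delay: float = 0.08) -> None:
    """Print a canned reply slowly, character by character, with occasional 'typos'."""
    for ch in reply:
        if ch.isalpha() and random.random() < typo_rate:
            ch = random.choice("abcdefghijklmnopqrstuvwxyz")  # a deliberate mistake
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)  # as if typed by a person, not read back from stored data
    print()

humanise("Well, it's nice to meet you, Elliott.")
```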

That output is now over twenty years old, and there hasn’t been a great deal of advance since then in the performance of such ‘chatbots’. This annual circus derived from Alan Turing’s thoughts on intelligent machines in 1950, and his original test of how we might know a machine was thinking. His paper ‘Computing Machinery and Intelligence’ laid the groundwork for 70 years of discussion of the philosophical question ‘Can a machine think?’.

Turing modelled his ‘test’ on a Victorian parlour game in which a contestant would ask questions, via folded notes passed from another room, with the aim of establishing whether the person answering the questions was a man or a woman. In the game Turing proposed, sex detection was still the aim, and if no one noticed when a computer was substituted, then the computer had in some sense won; it had been taken to be a person. The crucial point here is that the game was about men versus women – no one knew a computer might be playing. The irony, when we consider how his test has been adapted to events like the Loebner competition, is that Turing was not trying to say that computers did or ever would think: he was trying to shut down what he saw as useless philosophical discussion and present a practical test such that, if a machine passed it, we could just agree that machines thought and so could stop arguing fruitlessly about the issue.

When we talk to others we never ask if they are machines or not. It doesn’t make for a good conversation if you ask your friends that kind of thing. Nor did Turing think it would if we asked that of machines: that issue had to be implicit. Yet now in competitions such as the Loebner, the question ‘Are you a computer?’ has come out into the open and contestant machines are programmed to deal with it and give witty replies, as do the current commercial systems such as Alexa and Siri. But that is no longer any real test of anything except ingenuity.

My reason for mentioning the Loebner competition is a curious feature of it: the level of plausibility of the winning systems has not increased much over the last twenty years. Systems that win don’t usually enter again, as they have nothing left to prove; so new ones enter and win, but do not seem any more fluent or convincing than those of a decade before. This is a corrective to the popular view that AI is advancing all the time and at a great rate. As we shall see, some parts are, but some are quite static, and we shall need to ask why, and whether the answer lies in part in the optimistic promises researchers constantly make to the public and to those who fund them.

Overpromising in AI – a persistent problem

It is important to come to grips with this issue because it is becoming harder to separate what AI has actually done from what it promises, and also from what the media think it promises. There are also science fiction worlds that are close to ours but hard to distinguish from reality. In the recent film Her, Scarlett Johansson’s voice was given to a ‘universal AI girlfriend’ who seemed able to keep up close conversational relationships with millions of men worldwide. Since speaking and listening technologies such as Alexa are being sold all over the world, listen to their owners even when they are not attending to them, and then report their conversations back centrally, one can ask whether the public is clear that Alexa exists but that the Johansson fiction does not. And the makers of sex robots are working hard to bring something like Her into existence. We shall need to be clear in what follows about what is known to work; what isn’t – yet; and what may never work, no matter how hard we try.

Sorting these things out is made harder not only by company promises, made to sell products, but by researchers who have to constantly over-promise what they can do in order to win public research grants, a problem in the field since the Second World War. Already in the 1940s, when the capacity of the biggest computer was a millionth that of an iPhone, the papers were full of claims about ‘giant brains’ that were reasoning and thinking and just about to predict the weather for months ahead. As early as 1946, the Philadelphia Evening Bulletin wrote of the technology at its local university that a ‘30-ton electronic brain at U of P thinks faster than Einstein’.

It was all nonsense of course, but there was real progress, too. Someone said recently that the most striking thing about today, to anyone who came here directly from the 1980s, would be that you could have something in your pocket that knew virtually everything there was to know. Think just how astonishing that is, let alone that it also makes phone calls. We marvel now at automatic cars, but computers have been landing planes without problems for nearly 40 years. One of my tasks here will be to convey which parts of AI are moving rapidly and which seem a little becalmed.

Two key questions

Two key questions will run through this book, and I hope will be answered by it:

First, should AI be just using machines to imitate the performances humans give, or trying to do those things the way we do them, assuming we could know how our brains and bodies work? The two things can be quite different, with the first often thought of as engineering and the second as a way of doing psychology: explaining ourselves to ourselves by using computers. So, for example, some programs that determine the grammatical structure of English sentences process them from right to left, i.e. backwards. They imitate our performance, but by methods we can be pretty certain differ from our own, and so they could not be models of our own functioning.

It has long been a truism in AI thinking that, since the Wright brothers, aeroplanes fly but not with anything like the mechanism of flapping wings that birds use, and this example has been used to stress the difference between modelling the mechanism of evolution – of birds in this case – and really doing engineering. But more recently, the metaphor has reversed because it is now possible to build drones that do fly as birds do, and moreover to model in them the change of wing shape that enables many manoeuvres birds make but conventional planes cannot.

Secondly, should AI be based on building representations inside computers of how the world is, or should it just be manipulating numbers so as to imitate our behaviour? The current fashion in AI is for the second approach, called machine learning (ML), or even deep learning (DL), and many of the current news items in the media are about applications of this approach, such as the recent successes in diagnosing diseases, or the program that beat the best Go player in the world. Those are approaches based on numbers and statistics. But up until about 1990, the core AI approach used a form of logic to build representations – what I shall sometimes call ‘classical’ AI: structures representing things such as the layout of scenes or rooms. This is still how applications such as satnavs work: by internally examining structures of city streets to find the best route to drive. Such systems are not making statistical guesses about how the streets of London are connected.
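
As a minimal illustration of that ‘classical’ style (my own sketch, not how any particular satnav is implemented), a route-finder of this kind holds an explicit map as a data structure and searches it; the junction names and distances below are invented:

```python
import heapq

# An explicit representation of a toy street map: each junction lists its
# neighbours and the distance to them. Nothing here is statistical.
street_map = {
    "Home":    {"HighSt": 2, "ParkRd": 5},
    "HighSt":  {"Home": 2, "Station": 4},
    "ParkRd":  {"Home": 5, "Station": 2},
    "Station": {"HighSt": 4, "ParkRd": 2},
}

def shortest_route(graph, start, goal):
    """Classic shortest-path search (Dijkstra's algorithm) over the stored map."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, junction, path = heapq.heappop(queue)
        if junction == goal:
            return cost, path
        if junction in visited:
            continue
        visited.add(junction)
        for neighbour, distance in graph[junction].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + distance, neighbour, path + [neighbour]))
    return None

print(shortest_route(street_map, "Home", "Station"))  # (6, ['Home', 'HighSt', 'Station'])
```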

This is an ongoing argument in AI research. When John McCarthy was offered statistical explanations back in the 1970s he would say, ‘But where do all these numbers come from?’ At the time, no one could tell him, but now they can, as we shall see in later chapters. One way of looking at this dispute, between those who want to represent things logically and those who think statistics a better guide to doing AI, is to remember how AI emerged: it was once bound up with a subject called cybernetics, a word now rarely used in the English-speaking world, though it is still used in Russia and in parts of Western Europe. Cybernetics was about reaching the goals of AI not with digital computers but with what were then called analogue computers, based not on logic but on continuous electrical processes, such as levels of current. Cybernetics produced things such as ‘smart’ home thermostats, and mechanical tortoises that could learn to plug themselves into wall sockets: they did not have representations in them at all. With the rise of classical, logic-based AI in the 1960s, in which reasoning was a central idea, cybernetics faded away as a separate subject. But the history of AI still has it jostling for space with other close disciplines such as control engineering (which pioneered planes that land automatically), pattern recognition (which introduced forms of machine vision) and statistical information retrieval.

There is nothing odd historically about subjects jostling against each other, disappearing in some cases (such as phrenology), or emerging from each other, as psychology and much of science did from philosophy in the 1800s. It’s a little like the prehistoric times when different tribes of humans – Neanderthals, Denisovans, Homo sapiens – co-existed, competed and interbred before one won out conclusively.

The case of information retrieval (IR) is important because of its link to the World Wide Web: the system of documents, images and video we now all have access to via our phones and computers. Google still dominates all search on the web, and the company’s founders Sergey Brin and Larry Page conceived their algorithm for searching it in the Stanford AI laboratory as part of PhDs they never finished. Yet, although coming from within an AI laboratory, Brin and Page’s search method was also directly within classic IR, but with a subtle twist I shall describe later on. The relevance of this to our big question is that IR, like cybernetics, does not deal in representations in a way that makes logic central, as ‘classical’ AI did.

Karen Spärck Jones was a Cambridge scientist who developed one of the basic tools for searching the web, and she once argued that AI has much to learn from IR. Her main target was classical AI researchers, whom she saw as obsessed with content representations when they should – according to her – have been making use of the statistical methods available in IR. Her arguments are very like those deployed by older cyberneticians, and more recently by those who think machine learning is central to AI. Her questions to AI resolve to this crucial one: how can we capture the content of language except with its own words, or other words we use to explain them? Or, to put it another way, how could there be other representations of what language expresses that are not themselves language? This was a question that obsessed the philosopher Wittgenstein in the 1940s, and he seems to have believed language could not be represented by anything outside itself, or be compressed down into some logical coding. Here is a brief quotation from Spärck Jones in the 1990s that gives the flavour of her case that classical AI is simply wrong in thinking computers can reason with logical representations (what she calls the ‘knowledge base’) rather than by ‘counting words’ (another way of describing doing statistics with texts):

The AI claim in its strongest form means that the knowledge base completely replaces the text base of the documents.

‘Knowledge base’ here means some logical structure a machine then uses to reason with, rather than the ‘text base’, that is, the original words themselves. This issue, of what it is that computers use as their basic representation of the world about which they reason, is still not settled. Most eye-catching developments in recent AI, from medicine, to playing Go, to machine translation on the Internet, are based on ideas closer to Spärck Jones and IR than to the logics and ‘knowledge’ on which AI was based for its first 50 years.
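
To make ‘counting words’ concrete, here is a minimal sketch of my own (nothing from Spärck Jones’s work or any real search engine): documents are ranked purely by the words they share with a query, with rarer words counting for more, and no knowledge base in sight. The tiny documents and the weighting details are invented for illustration, though the inverse-document-frequency idea is of the kind Spärck Jones pioneered:

```python
import math
from collections import Counter

# Three tiny 'documents', invented for illustration.
documents = {
    "doc1": "the box was in the pen",
    "doc2": "the pen was full of ink",
    "doc3": "the child played in the playpen",
}

def tokens(text):
    return text.lower().split()

def idf(word):
    """Inverse document frequency: words found in fewer documents count for more."""
    containing = sum(1 for text in documents.values() if word in tokens(text))
    return math.log(len(documents) / containing) if containing else 0.0

def score(query, doc_text):
    """Rank a document purely by the query words it contains, weighted by rarity."""
    counts = Counter(tokens(doc_text))
    return sum(counts[word] * idf(word) for word in tokens(query))

query = "box in a pen"
for name, text in sorted(documents.items(), key=lambda item: -score(query, item[1])):
    print(name, round(score(query, text), 2))
```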

Will AI always be in digital computers or could it be in bodies?

A further question touched on towards the end of the book is about whether the basis of AI should be in digital computers at all, as it has been since cybernetics disappeared in the 1970s, or whether we shall reach AI not by copying how humans do things in computers but by merging computation with the biological, with real human or animal body tissues. For some, and this approach is more popular in Japan than in the West, this implies building up organic tissue-like structures that can perform, an approach one might parody as ‘doing Frankenstein properly’. The alternative, more popular among American thinkers and entrepreneurs such as Elon Musk, is called ‘transhumanism’, the view that we could improve humans as they are now with artificial add-ons so that such beings gradually become a form of AI – and possibly immortal.

All these possibilities are full of religious and ethical overtones: of the creation myths of man in the Bible, of early artificial creatures such as the Golem of Prague, and of the ancient quest for immortality. I shall touch on serious questions such as this in Chapter 10.

The next chapters will describe the basic areas of artificial intelligence, including its relationship to the craft of computer programming, and we shall start with asking how important logic is to AI. McCarthy and others believed that AI was about making computer models of logical reasoning in machines and humans – an idea of the primacy of logic in thinking that goes back to the 1600s and Gottfried von Leibniz, the first man to say such things. I shall discuss the scope of machine logic and its decline and fall with the realisation that people do not seem to use logic much in everyday reasoning, nor even statistics.