An origin story for Artificial Intelligence¶
Artificial Intelligence stands at an intriguing crossroads between math, engineering, and philosophy. At its core, it's all about trying to answer some of the most transcendental questions: what is intelligence, what it takes for a system to sustain it, and whether we can build such a system.
What is Artificial Intelligence about? To try and answer this question, let's start at the beginning, or at a beginning because, like every good story, this one has many versions.
In this episode, I want to tell you an origin story for Artificial Intelligence. It's the story of how the dream of a fearless man revolutionized our comprehension of what it means to be, after all, human.
Welcome to Mostly Harmless AI.
The Imitation Game¶
Our origin story starts in 1950, England, home of Alan Turing.
Turing is very well known for at least three different things, each of which would independently be enough to warrant him a major place in the history of Computer Science. When taken together, his contributions make him, in the eyes of many, the most relevant figure in the whole field.
His first major contribution to Computer Science was the definition of a mathematical model for an abstract kind of machine that could potentially perform any sort of computation. He basically defined the minimum requirements to make a computer. This model is now known by the very creative name of "Turing machine", and it lays the theoretical foundations of what can and, more importantly, what cannot be done with a computer, regardless of how powerful technology ever becomes.
Had he ended there, he would have been remembered as the theoretical father of Computer Science. But he went further. During World War II, he worked on a super-secret project to decipher Nazi cryptography. It was supposed to be uncrackable, but Turing teamed up with some of the smartest people he could find, and they cracked it. And in doing so, they also built the first physical embodiment of an actual Turing machine, the first real working computer.
So, he both created the foundational theory of the field and engineered the first computer. And then he turned his attention to what he considered the ultimate question of this new science: can these machines ever think?
In "Computing Machinery and Intelligence", a short technical paper written in 1950, Turing described what came to be known as the Turing Test, a hypothetical experiment to determine whether artificial intelligence could be considered, indeed, intelligent. He called it "The Imitation Game".
The basic idea is something like this: you put both a human and a computer behind closed doors, from where they can communicate with another human, a judge, through a text-only interface, like a chat. The judge's job is to determine who is the human and who is the computer, and both interviewees will do their best to convince the judge that they are human. So the real human can write whatever he or she wants: "The human is me", "Don't trust the other one", anything. But here's the thing: the computer can do the same, so in a sense, it has to deceive another human to be considered "intelligent".
The judge can ask questions designed to trick the computer into revealing itself. Maybe ask it to solve some complex mathematical problem, something no human could achieve. But the computer can just say "hey, that's impossible to do!", just like any human would. Turing thought the only way for a computer to successfully convince any potential judge, no matter how tricky the questions, would be to display such a wide range of creative, human-like responses that we would have to agree that, for all practical purposes, it was indeed showing intelligence.
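To make the protocol concrete, here is a minimal sketch in Python. Everything here is hypothetical scaffolding, not a real benchmark: the judge questions two hidden participants over a text channel and must then name the machine.

```python
import random

def imitation_game(judge_ask, judge_guess, human, machine, rounds=3):
    """Toy sketch of Turing's imitation game protocol.

    judge_ask(transcript) -> next question to pose;
    judge_guess(transcript) -> label the judge believes is the machine;
    human/machine: callables mapping a question to an answer.
    """
    labels = ["A", "B"]
    random.shuffle(labels)  # hide who is behind each label
    players = dict(zip(labels, [human, machine]))
    machine_label = labels[1]  # the label the machine ended up with
    transcript = []
    for _ in range(rounds):
        for label in ("A", "B"):
            question = judge_ask(transcript)
            answer = players[label](question)
            transcript.append((label, question, answer))
    # True means the machine was unmasked; a machine that "passes"
    # should drive the judge's success rate down to a coin flip.
    return judge_guess(transcript) == machine_label
```

A machine that gives itself away, as in the mathematical-problem example above, is trivially caught; the interesting case is when no `judge_guess` strategy does better than chance.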
The Turing Test gives us one possible definition of Artificial Intelligence: a machine that displays human-like responses in every conceivable conversation. It's kind of a tautological answer to the question of "what is intelligence": something is intelligent if you cannot effectively differentiate it from other things you already agree to call intelligent.
There are a lot of issues with this definition, though, and we'll examine a few of them next.
The issue of human biases¶
First, we have to say that Turing conceived this test as a thought experiment, a hypothetical setup to force us into thinking about how we can even start to define what intelligence is. Turing's idea was not that we would actually implement the test as he described it. It's more a philosophical definition of intelligence than a pragmatic one.
Despite this, we did take his words at face value and implemented the test. The Loebner Prize is probably the most famous example: a yearly contest in which teams of programmers submit chatbots that are evaluated against real humans in a setting very similar to what Turing proposed.
One pragmatic argument against Turing's Test is that it relies heavily on the ability of a human to accurately judge whether those responses seem human-like, which is a very subjective thing to do. And we know for a fact that humans are lousy at subjectivity. We are full of biases, one of which is precisely our tendency to "anthropomorphize", that is, to see human features in non-human things. We do it with our pets, with inanimate objects, with symbols... we even do it with characters in videogames! So of course we could anthropomorphize the computer behind that text message and assign it a higher degree of intelligence than it really has.
This has happened in the Loebner contest over and over. Programmers design chatbots that make spelling mistakes on purpose or excuse themselves as non-native speakers to avoid answering complex questions. These tricks have allowed systems that are, by whatever definition of intelligence you hold, definitely not intelligent, to win over 30% of the time. The Turing Test, at least as implemented in the simplistic scenario of the Loebner contest, seems to be very fragile to human biases.
The issue of semantics¶
Beyond practical or methodological problems, some thinkers have argued that there are fundamental issues with this definition of intelligence. Probably the most famous argument is "The Chinese Room", posed by philosopher John Searle. It's a thought counter-experiment in which a human is placed inside a room with a book of instructions for manipulating Chinese symbols. Messages written in Chinese enter the room through a small window; the man, who speaks no Chinese, looks up the symbols in the book and writes back the prescribed reply, also in Chinese. If the book is written in such a way that every incoming message receives a perfectly sensible answer, any external observer would be convinced the "room" speaks Chinese, while it is obvious, according to Searle, that neither the man nor the book actually understands Chinese.
According to Searle, this thought experiment shows that a system can display the ability to solve a cognitive task without actually possessing the intelligence necessary to do it. It's supposed to show that imitation of intelligence is not the same as actual intelligence.
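Stripped to its computational core, Searle's room is just a lookup table. Here is a toy sketch (the phrasebook entries are invented for illustration): the function produces fluent-looking Chinese replies by rote symbol matching, with no representation of meaning anywhere in the system.

```python
# A hypothetical "rule book": incoming Chinese message -> prescribed reply.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "Lovely today."
}

def chinese_room(message: str) -> str:
    # The "person" inside just matches symbols against the book.
    # If the book were complete enough, an outside observer would see
    # fluent Chinese, yet nothing here models what the symbols mean.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."
```

Whether a sufficiently complete version of this table would *count* as understanding is exactly the point of contention between Searle and his critics.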
Some have argued, though, that even if neither the man nor the book understands Chinese, there is a sense in which the whole system, the room with all its contents, does understand Chinese. It is the same sense in which Turing's test defines intelligence: if the room (with man and book inside) is indeed able to provide a perfectly plausible answer to any incoming message, what else do we need to convince ourselves it does understand Chinese? The point is, this kind of argument rests on agreeing on a definition of what "knowing" or "understanding" means. If your definition of "understanding" is such that only humans are capable of it, then, by your own definition, you cannot believe in Artificial Intelligence.
Turing was a functionalist in this regard. He believed that intelligence is best defined in terms of what it can do, regardless of, for example, how it is made or what it is composed of. In his view, intelligence is being capable of maintaining a coherent conversation about any general topic, independently of the hardware (electronic or biological) that sustains that intelligence. You might disagree, and lots of very smart people do. Some believe intelligence has to be biological in nature. Others, that there is some layer of unknown physics at play inside our brains that cannot be simulated in a classical computer. Turing believed the brain was just a very powerful computer, and thus, in principle, nothing can stop us from eventually making an electronic substitute capable of harbouring intelligence.
The issue of purpose¶
Even if we agree philosophically with this definition of intelligence, it is still problematic in another sense. If we define intelligence as something at the level of what humans can currently do, isn't it kind of pointless to try and develop that? We already have humans; why do we need another system with the same capabilities? Wouldn't it be better if artificial intelligence could solve the problems that we humans cannot solve today? An argument in favour of Turing could be that his definition imposes no limitation on what this intelligence can do. It need not be as dumb as a human. In fact, a super-intelligent being should be perfectly capable of passing the test, the same way an adult can play with a child without actually believing in fairies.
In a sense, we could say that passing another human's judgement of character is the ultimate intelligence test, because intelligence evolved precisely in that context: we were a bunch of primates in the savannah who needed to understand and trust each other to survive. But this anthropocentric view is not without issues either. Who's to say that human intelligence is not a dead end? Couldn't we, by trying to imitate ourselves, actually be taking a detour on the path to superintelligence? Maybe our biases are just evidence that, if there is higher intelligence in the universe, it is not like us. But if that's the case, would we even be able to recognize that intelligence? And could they, or it, recognize us?
Turing's Test is far from perfect. It has practical, philosophical, and ethical implications regarding how we define intelligence. In a sense, it is also about how we define ourselves, since intelligence is arguably the human trait we are proudest of. How we choose to define it also dictates how we attempt to imitate it, and how we go about recognizing it in others. Our biases are an intrinsic part of our intelligence, whatever that is.
Turing's imitation game is the first attempt to formally define an overarching goal for artificial intelligence and the then-nascent science of computation. However, this goal wasn't set in stone. In the following decades, different paradigms emerged, some more focused on what we today call general, or strong, AI, and others more focused on narrow, or weak, AI. The first group of paradigms attempts to solve AI more or less in the sense Turing envisioned: to create an artificial intelligence so powerful that it can perform any cognitive task a human can, and then some we can't. The second attempts to solve concrete problems, such as vision, or language, or driving a car, that seem to require advanced levels of cognition but are not entirely general.
For most of the history of AI, the narrow approaches have been the most successful. We are today capable of detecting tons of objects in images, translating between most mainstream languages, and playing chess, Go, and even StarCraft better than any human who has ever lived. However, we are still very, very far from Turing's vision of achieving near-human capabilities at open-ended conversation. In a sense, we have walked a very long road, and yet we are still at the very beginning of this journey. And despite the dominating pragmatism, and the focus on solving narrow tasks, if you ask most of the people working in the field today, they will tell you that, deep down, they also share Turing's dream.
Turing's life ended sadly. He was a gay man in a terribly unforgiving society, and he suffered physical and psychological humiliation for being unwilling to fit the narrow definition of human that others had decided on. His greatest achievement was the dream he left us to pursue. A dream of a future in which humanity is no longer alone in the cognitive universe. A future we will share with our intellectual children, to whom we will show the marvels of the universe, and who will, in turn, help us unravel its deepest mysteries.
Thank you for listening. If you enjoyed this episode, feel free to leave a review, and share it with your loved ones. If you have any questions or suggestions, I would love to hear from you. You can find me on Twitter, and if you're listening on the Anchor app, you can leave a voice message right here.
This was the Mostly Harmless AI podcast.
I'll be back very soon with another episode on the fascinating world of Artificial Intelligence.
Until then, stay curious, and stay safe...