The Turing Test
Hey, today is #PhilosophyFriday 🤔!
How about discussing some philosophical aspects of Computer Science? Today's question is:
❓ How do you know someone (or something) else is intelligent? 🧵👇
Before even asking whether machines can ever become "intelligent", we need to ask ourselves: if they do, how would we know?
The underlying question is, of course, what is intelligence? 👇
There are many ways to answer this question, and no single one will be complete, of course. You have to consider psychology, neurobiology, even philosophy and ethics...
From the computational point of view, the most famous answer is the Turing Test 👇
It goes like this:
- A 👩human and a 💻computer take turns to chat with a 👩⚖️judge.
- 👩⚖️ is free to ask anyone anything, even "Are you a computer?"
- Both 👩 and 💻 are trying to convince the judge they are human.
👉 If 👩⚖️ fails to identify 👩 as the human, we are forced to recognize that 💻 exhibits intelligence.
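The setup above can be sketched as a tiny simulation. This is just a toy illustration of the imitation game, not a real protocol: the judge, human, and computer are all hypothetical callables I made up for the example.

```python
import random

def turing_test(judge_ask, judge_guess, human_reply, computer_reply, rounds=3):
    """Toy imitation game: the judge chats with two anonymous
    participants labelled A and B, then guesses which one is human."""
    players = {"A": human_reply, "B": computer_reply}
    if random.random() < 0.5:  # hide identities behind random labels
        players = {"A": computer_reply, "B": human_reply}
    transcript = {"A": [], "B": []}
    for i in range(rounds):
        for label, reply in players.items():
            q = judge_ask(i)
            transcript[label].append((q, reply(q)))
    verdict = judge_guess(transcript)       # judge says "A" or "B"
    return players[verdict] is human_reply  # True if the judge found the human

# Deterministic stand-ins: this computer gives itself away by over-claiming.
ask = lambda i: "Are you a computer?"
human = lambda q: "Of course not."
computer = lambda q: "I am 100% human, beep."

def guess(transcript):
    # The judge picks whichever participant never sounds robotic.
    for label, turns in transcript.items():
        if all("beep" not in answer for _, answer in turns):
            return label

print(turing_test(ask, guess, human, computer))  # → True
```

The key structural point is that identities are hidden behind the labels: the judge only ever sees text, which is exactly why Turing's test sidesteps the question of what intelligence "really" is.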
💡 The intuition behind this test is that, even if we cannot precisely define what intelligence is, we can all recognize it in others.
⚠️ There has been a lot of criticism of the Turing Test, citing, among other issues:
1️⃣ The judge may be easily fooled by cheap tricks unless well trained. There are plenty of examples of simple chatbots that appear human because we tend to anthropomorphise things.
2️⃣ Human intelligence might not be equivalent to general intelligence, so an intelligent computer (or an alien individual) might not be recognized as such, simply because it doesn't fit our narrow conception of intelligence.
3️⃣ And there is the issue of whether "simulating intelligence" is the same as "being intelligent". This is John Searle's argument, commonly known as the "Chinese Room", which takes a philosophical stance against the computational nature of intelligence.
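To see how cheap the tricks in point 1️⃣ can be, here is a minimal ELIZA-style chatbot, a sketch in the spirit of Weizenbaum's program, not his original: a handful of regex rules (all made up for this example) that reflect the user's words back, which is often enough to feel "human" to an untrained judge.

```python
import re

# Each rule: a pattern to match and a template that echoes the capture back.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

def eliza(message):
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback keeps the conversation moving

print(eliza("I am worried about my exams"))
# → "How long have you been worried about my exams?"
```

There is no understanding anywhere in this code, yet transcripts like this famously convinced people they were talking to a therapist.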
🤔 The philosophical question we have to ask ourselves here is about the nature of intelligence.
❓ Is intelligence a purely computational process, independent of the underlying hardware? Or does it depend on something inherent to biological brains?
🧠 If you believe intelligence requires biological brains, then you are a "biological naturalist", like John Searle.
You don't think it is possible for our computers to become truly intelligent, no matter how powerful they become. At most, they could simulate intelligence.
🤖 Otherwise, you are a "computationalist", or "functionalist", like Alan Turing.
You believe that minds are "just" sufficiently powerful computers, so nothing in principle stops our computers from becoming truly intelligent. It's just a matter of time.
👉 Whatever you choose to believe, some very smart people will agree with you.
Either way, even if there are some claimed examples of "winning the Turing Test", if there is one thing all serious computer scientists agree on, it is that we are still far away from Strong AI.
As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to ❤️ like and 🔁 retweet if you think someone else could benefit from knowing this stuff.
🧵 Read this thread online at https://apiad.net/tweetstorms/philosophyfriday-intelligence
Stay curious 🖖:
- 📚 https://www.csee.umbc.edu/courses/471/papers/turing.pdf
- 📃 https://en.wikipedia.org/wiki/Turing_test
- 📃 https://iep.utm.edu/chineser/
- 🎥 https://www.youtube.com/watch?v=Qbp3LJvcX38
🗨️ You can see this tweetstorm as originally posted in this Twitter thread.