Who are You?

Today is #PhilosophyFriday ๐Ÿค”!

Let's take a break from pragmatism and discuss one of the deepest philosophical questions:

โ“ Who are you?

Are you your body, your brain, your thoughts, your actions? And how does this question connect with AI?


Almost all of us agree on some basic assumptions about individuality.

We all have the perception of free will, the sense that we make at least some decisions by ourselves.

And we all have this notion of a uniqueness property that applies to the self. There is but one "me".

We know we are ourselves, in part, because of the continuity of our conscious experience.

โ˜๏ธ I am the same person I was a fraction of a second ago, only a fraction of a second older.

And I know this because I haven't stopped thinking during that time.

But every night when you go to sleep, you pause your conscious experience for a few hours.

โ“ How do you know the person that wakes up the next day is actually you, and not just someone that happens to believe it's you?

And yet, we all go to sleep happily every night.

But what if I ask you to go to sleep for a thousand years?

๐Ÿฅถ We would freeze your body to a state in which all cellular processes stop, including synapses, and wake you up when there's technology to ensure you can be safely brought back to life.

โ“ Is it you who wakes up?

Once there is a discontinuity in our conscious experience of reality, we might no longer be sure that we are actually the same person, rather than someone else with that person's memories.

Let's stretch this idea to see where it breaks for you. ๐Ÿ‘‡

Suppose there is a technology to transplant the brain, physically moving it from your old dying body to a younger clone of yourself that has been grown in vitro and kept in a coma.

โ“ Would you say the person that wakes up is actually you?

Now, instead of actually moving the brain, suppose we can brain-scan you and "move" whatever information is in there to a new brain in a new body. Let's assume the old brain gets irreversibly destroyed in the process, and copying is physically impossible.

โ“ Is it still you?

Now, instead of a biological body, suppose we can move that information to a computer program running on sufficiently powerful hardware, one that is somehow capable of "holding" consciousness in all its complexity.

โ“ Would you still do it?

The purpose of these questions is to understand what your notion of "self" is.

๐Ÿค” At which point do you think the discontinuity becomes large enough that a consciousness simply ceases to exist if it makes that jump?

The reason this is relevant for AI is that it shapes whether you think it is possible to create a fully conscious artificial intelligence or not.

โ˜๏ธ We will be able move that intelligence from an old hardware to a new hardware.

Depending on what you answered before, you have to agree with one of these:

1๏ธโƒฃ There is no way to achieve artificial consciousness.

2๏ธโƒฃ Artificial consciousness dies when moved to a new hardware.

3๏ธโƒฃ You can be uploaded and live in a virtual simulation.

If you believe number 1๏ธโƒฃ then you have to consider there is something special about biological brains that cannot be replicated in silicon.

If you believe number 2๏ธโƒฃ then you have to consider the morals of upgrading an AI, since that would mean killing it.

And if you believe number 3๏ธโƒฃ then you have to consider the possibility that this reality could very well be just a sufficiently advanced simulation.

And that has a bunch of crazy implications as well.

As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to โค๏ธ like and ๐Ÿ” retweet if you think someone else could benefit from knowing this stuff.

๐Ÿงต Read this thread online at https://apiad.net/tweetstorms/philosophyfriday-consciousness