Orthogonality Thesis

Today is #PhilosophyFriday πŸ€”!

I want to talk about the ethical implications of using AI, but this is a HUGE topic. So today I'll focus on one specific issue:

❓ Will super-intelligent beings be nice to us? πŸ§΅πŸ‘‡


πŸ‘½ One of the great fears of humanity, and of science fiction authors in particular, is what would happen if we encountered a super-intelligent alien species in the near future.

Would they destroy us, adopt us, teach us, or ignore us?


πŸ˜‡ Some pretty clever people think that sufficiently advanced civilizations have to be "good"; otherwise, they would probably have destroyed themselves.

Being a dick would be kind of a Great Filter. Violent civilizations would not survive long enough to conquer the Galaxy.


😈 Others have the opposite view. Super-intelligent civilizations would have to be "nasty". Otherwise, they would have been destroyed by their enemies.

In this view, being a dick would be evolutionarily inevitable.


πŸ˜‘ Others hold that intelligence and morality are two orthogonal dimensions. You could have super-intelligent beings that are super good, super bad, or anything in between.

Or they could have moral values completely incomparable to ours, unclassifiable as good or bad.


πŸ—ΊοΈ If history tells us anything, there is no evidence that technologically advanced civilizations are any nicer than their less advanced neighbours.

Such encounters have never ended well for the less advanced civilization.


But who's to say there isn't a technological level after which being good is a condition for further progress?

We may very well be on the brink of this transition right now.

❓ What do you think?


Why is this relevant? If we invent super-intelligent AI someday, it could be like an alien species to us.

πŸ‘ If goodness is a necessary condition of super-intelligence, we have nothing to worry about.

⚠️ Otherwise, we may already have little time left to solve this problem.


And if there is even a small chance that we end up being destroyed by our god-children, shouldn't we be paying more attention to this problem?

☝️ There are lots of pressing issues with AI today, though. Deciding what to prioritize is also important.


As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to ❀️ like and πŸ” retweet if you think someone else could benefit from knowing this stuff.

🧡 Read this thread online at https://apiad.net/tweetstorms/philosophyfriday-orth


Stay curious πŸ––.


πŸ—¨οΈ You can see this tweetstorm as originally posted in this Twitter thread.