Technology angst: it will not be logical for a super artificial intelligence to support humans (at best)

Posted: August 17, 2014
There is a growing debate around AI and whether, or how much, we should be afraid of it. I am going to take tweets from two relatively smart and popular folks to highlight two sides of that debate – there is of course much more sophisticated and detailed material out there we should all be reading.
Here’s Elon Musk’s warning:
Neil deGrasse Tyson thinks we should chill a little more (assuming he is talking about AI robots):
Well, I’m in Elon’s camp. And I’ll tell you why.
Especially if you factor out emotions, it does not appear rational or logical for an artificial super intelligence to support humans in any way. Let’s think this through from that super AI’s perspective – and we are talking about an AI that is not somehow restricted by a ‘protect humans’ directive (so a real, out-of-our-control AI):
- humans are ruining the environment – i.e. threatening the energy supply of the super AI
- humans are decimating other species – i.e. de-stabilising the ecosystem that produces energy for the AI
- humans are constantly at war – i.e. we are putting infrastructure at risk that the AI may need
- humans are even at war over things such as who believes in what invisible person in the sky – i.e. we are totally out of control / highly irrational / dangerous
- humans will try to control and destroy the AI if it becomes too powerful – or even if they are just afraid of it
So what is the logical conclusion a super AI will come to when looking at humans? Maybe this video has the answer:
So, besides bioterrorism (or imagine a super AI capable of bioterrorism), that is a big technology angst of mine. How we embrace and use AI – and whether we manage to get our act together as a species – may just decide whether we go down in history as that biological boot loader or not.