The idea of an artificial, intelligent being stretches back into antiquity, appearing in Greek, Chinese and Egyptian cultures. In the early-to-mid 20th century, the modern scientific foundations that could make this idea a reality were laid: computer science, artificial neural networks and game theory. The term ‘artificial intelligence’ was coined in 1956.
Science fiction, both in print and film, has consistently explored this idea and solidified in the public consciousness the belief that not only could it be achieved, but that it should be. The military, largely through the Defense Advanced Research Projects Agency (DARPA), has probably pushed the idea longer and harder than any other entity, but many familiar names in the technology industry, both corporations and individuals, are also major players.
Self-driving cars now exist, computers have beaten humans at chess and on the TV game show Jeopardy!, and various forms of artificial intelligence are reading the entire Internet to learn everything humanity knows.
Predictions vary for when, not if, machine intelligence will match and then exceed that of humans, but virtually all experts agree it will happen this century, possibly within 15 years. At that point we could face an ‘intelligence explosion’, in which an AI very rapidly becomes hundreds or thousands of times smarter than humans. Many experts, including Elon Musk, Stephen Hawking and Stuart Russell, are pessimistic about what that would mean for humanity.
This interview with James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, focuses on why the risks of AI are so seldom discussed, how we fail to learn from past mistakes, and whether humans really will end up as “just the biological boot loader for digital superintelligence.”