This summer, I decided to forgo classes and instead spend my time working, watching movies and reading books. For most, this might be a time to discover some of the timeless classics, to finally read Twain or Tolstoy, or perhaps to check out that foreign film that won at the Cannes Film Festival.
I went the opposite way. Science fiction and the future have been the focus of my summer. I mean, who doesn’t get excited about the prospect of the singularity, an artificial intelligence revolution or the possibility of quantum computing?
Perhaps it’s not a huge group. But nevertheless, I’ve found some interesting ideas about what the future will bring. What sparked this foray into the world of speculative nerding out? For me, it was the massively underrated movie “Transcendence” starring Johnny Depp and the book “Average is Over” by Tyler Cowen.
In “Transcendence,” Johnny Depp’s character, Dr. Will Caster, uploads his consciousness onto a quantum computer in order to use the Internet to solve world problems and make humans immortal. Ultimately, after trying to control people’s minds, he’s killed off by a computer virus, but the movie brings up a few interesting questions.
In an era when we have handheld computers, unmanned drones and the possibility of quantum computers accelerating technological advancement, should we keep moving forward?
Many futurists, including Eliezer Yudkowsky, have taken these concerns seriously. A research fellow at the Machine Intelligence Research Institute, Yudkowsky is a strong advocate for considering the potential drawbacks that technology, especially artificial intelligence, may bring.
The thought experiment known as Roko’s Basilisk first appeared on the discussion board of his community blog, LessWrong, which focuses on “refining the art of human rationality,” and it eventually became the subject of controversy among futurists. The basic premise is that a future super-intelligent AI, dubbed the basilisk, could one day punish anyone who knew it was possible but did nothing to bring it into existence, perhaps by tormenting simulated copies of them in a matrix-like simulation.
That leaves us with a few choices: We devote our lives to creating the basilisk, a creation that could solve many of humanity’s problems through continued technological advancement; we suffer eternal torment; or perhaps nothing happens. It depends on what the AI decides. As weird as it may sound, it’s eerily similar to humanity’s current predicament.
While this seems like a harmless thought experiment, it freaked out many techies, including Yudkowsky. The main issue for them is that if AI reaches the point where it can make its own decisions — even if it’s programmed to help humanity — it may decide that harming some people is acceptable if it benefits the majority.
At the other end of the spectrum is the book “Average is Over” by Cowen, an economist. His main argument is that with the advancement of technology, the average person must learn how to work with technology and take advantage of it or they will soon be out of a job. He predicts that the wage gap between the rich and poor will continue unless all young people learn basic coding and other computer-related skills. Instead of being afraid of advancement, he feels that everyone should embrace it and learn how to use it.
Regardless of how you feel about the future, it’s worth considering the potential threats to humanity. On the other hand, part of me doesn’t care — I’m just wondering how long it will take before I can buy a flying car.