Robert Oppenheimer, the famous physicist who led the effort to create the atomic bomb in the United States, said, “In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin, and this is a knowledge which they cannot lose.”
Assessing technological advancement and its impact on the world today, we see a two-sided coin: history has shown us both the helping hand and the flying punch of this progress. That duality is becoming increasingly prominent in the discussion of artificial intelligence, or AI.
In late July, more than 1,000 leading experts, including theoretical physicist Stephen Hawking and tech entrepreneur Elon Musk, signed an open letter discouraging the pursuit of AI in warfare and offensive autonomous weapons. They argued that while AI has tremendous applications for the future, developing artificially intelligent weapons would start an arms race that would outpace our capacity to stay safe.
In their letter, the experts argue that AI weapons would be easy to produce and would “become ubiquitous and cheap for all significant military powers to mass-produce,” making it far easier for terrorists or oppressive dictators to use them as tools of genocide, ethnic cleansing and war.
While I agree wholeheartedly with their concerns, enforcing such a ban would be nearly impossible. Instead, the letter serves more as a warning of what’s to come in the next several decades. It would take only one country breaking ranks for the world to become entrenched in a global arms race.
In fact, some countries are already developing automated weapons. Israel’s Iron Dome missile defense system automatically shoots down incoming rockets before they strike the ground. The Israeli government has used the system for many years, assisted financially by the U.S.
On that note, the U.S. has prototypes of its own. The X-47B is a drone that can carry out missions with little to no human involvement, including refueling in midair and landing on aircraft carriers autonomously.
Furthermore, the United Kingdom has opposed international bans on autonomous weapons, raising the likelihood that AI-driven war machines will arrive sooner than expected.
The International Atomic Energy Agency was created to regulate nuclear technology and inspect nuclear sites because no country would agree to give up its weapons completely. Similarly, we can’t stop the rise of AI weapons, but we can and should regulate their use internationally and enforce strict penalties for their misuse.
Doing this would be tough, but not impossible. Just as with rogue nuclear states, we could apply political and economic pressure to nations that wield weaponized AI irresponsibly. But first, we must agree on the level of danger these weapons pose. At the end of the day, scientists may have come to know new sins in the effort to create a safer world.