Artificial Intelligence Is the Biggest Threat to Humanity
Some people, many of whom live in and around Silicon Valley, can hardly wait another day for the widespread adoption of artificial intelligence (AI). They think that this technology—if we can even use that plebeian term to describe something as revolutionary as the discovery of fire—will be a turning point not just for the economy but for humanity, making our lives better.
The issue is hardly rhetorical. As little as 20 years ago, it seemed like a topic suitable only for science fiction; now it is a real concern. The world of science and technology seems to be pushing AI forward at every stage. But few in the humanities—and they should be the first to speak out—are asking the big, essential question: “What if the machines wipe us out?”
The AI enthusiasts are probably right about one thing: the turning point. Whether artificial intelligence will improve human life is a matter of debate, and that debate is increasingly moving in a direction that urges caution, suggesting that AI should be introduced slowly and carefully. But how often do risky technologies ever bring about gradual change?
Perhaps AI represents the ultimate paradox of science and civilization. In the past few decades, we have focused so intensely on technological and scientific progress (but not ethical and philosophical progress), worshipping it to the point that we have not considered how it threatens our very existence.
Even Elon Musk, the very man who wants to establish a human colony on Mars (a planet whose atmosphere, roughly 95% carbon dioxide, is entirely unfit for human life), has warned against excessive AI enthusiasm. (Source: “Elon Musk’s Billion-Dollar Crusade To Stop The A.I. Apocalypse,” Vanity Fair, April 2017.)
Musk recently said that AI is “far more dangerous than any nuclear weapon.” (Source: “Elon Musk’s OpenAI may be the biggest threat to U.S. National Security and International Security,” Hacker Noon, March 5, 2018.)
AI Is Already Here
AI is already affecting you in ways you may not realize. Automated trades (trading robots and algorithms) have already caused various stock market shocks. AI has beaten humans at chess and could soon take away the pleasure of driving a car. I would not bet on a human driver winning the Indy 500 in 2030.
The optimists believe that artificial intelligence will allow us to increase our knowledge. They also believe that we—that is, humans—will be able to continue to enjoy using and developing that knowledge. They think AI will allow us to discover functions we did not even know existed.
Oddly, the optimists appear entirely oblivious to the fact that smart machines will ultimately become smarter than any human before we even find out how to optimize these robots to serve us. (Source: “Confessions of an AI Optimist: An Interview with MIT’s Andrew McAfee,” BCG, November 16, 2017.)
The Internet was a revolution, and it took just a few years for it to disrupt the world and become essential. But humans are in charge of how they use the Internet. Imagine a world where Internet-connected technology controls your life and that of your neighbors. That’s a brave new world, one that could trigger the end of humanity itself.
Some of the enthusiasm for artificial intelligence is rooted in the belief that it will somehow allow us to become immortal. But quests for immortality have a greater likelihood of producing the very opposite effect: the self-elimination of humanity.
The Age of Artificial Intelligence Is Inevitable
If artificial intelligence is so dangerous, can’t we simply stop it?
To appreciate or decry just how inevitable AI has become, consider that investors are already singing its praises. They note that the technology will redefine what it means to have a competitive advantage, both in business and in everyone’s daily life. Unfortunately, they are right. We will reach a point where the absence of technology is a luxury.
The first areas of impact for AI will be laboratories, research facilities, and universities. Engineers and scientists still have limited experience with AI. We can thank that shortage of skills for the limited impact AI has on our lives (for now), but that will change sooner than anyone can imagine.
The current and future generations of students will devote far more attention to developing the resources and skills to spread AI. Surely, the great job opportunities of the next few years will increasingly shift toward those acquiring the training to work on this technology. In other words, society and science are already working in tandem to make artificial intelligence the “new normal.”
The Internet of Things (IoT), which in many ways has already crept into our cars and homes, has popularized the idea that products and processes communicating with one another is somehow a great opportunity.
But there’s the risk that too much IoT will make a human an “IdIoT.” Indeed, transferring too many functions to robots and related products will turn us into the accessories and technology into the protagonist. Biotechnology and AI are tailor-made for each other.
A humanoid combining technology and biology could allow some people (or beings) to live longer. Are we prepared to tackle the ethical and social issues that such combinations raise? We can barely keep up with the pace of technological change, let alone deal with its ethical repercussions.
It Only Sounds Like Science Fiction
It sounds like science fiction, but airplanes, self-propelled transportation, cell phones, and computers were once science fiction as well.
There’s also a parallel technology that is developing at the same pace as AI. It’s related, but few have yet made the connection between the two. It’s blockchain.
You may have heard of it as a virtual ledger, similar to a glorified “Excel” spreadsheet, that’s the heart and mind behind the rise of cryptocurrencies such as Bitcoin and Ethereum. But here’s its misunderstood value: the blockchain is the most valuable part of the cryptocurrency. Bitcoin and the hundreds of imitators—except for the ones backed by long-accepted and established stores of value like gold—will disappear or transform, but blockchain technology will remain.
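To make the “virtual ledger” idea concrete, here is a minimal sketch in Python of how a blockchain chains records together with cryptographic hashes. It is purely illustrative (no real cryptocurrency is this simple), but it shows why past entries are so hard to rewrite:

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Return the SHA-256 hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy append-only ledger: each block stores the previous block's
    hash, so altering any past entry breaks every link that follows it."""

    def __init__(self):
        self.chain = [{"index": 0, "timestamp": time.time(),
                       "data": "genesis", "prev_hash": "0" * 64}]

    def add(self, data) -> dict:
        """Append a new block that commits to the current chain tip."""
        block = {"index": len(self.chain),
                 "timestamp": time.time(),
                 "data": data,
                 "prev_hash": hash_block(self.chain[-1])}
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Recompute every link; any tampered block invalidates the chain."""
        return all(self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add({"from": "alice", "to": "bob", "amount": 5})
ledger.add({"from": "bob", "to": "carol", "amount": 2})
assert ledger.is_valid()

ledger.chain[1]["data"] = "rewritten history"  # tamper with an old entry...
assert not ledger.is_valid()                   # ...and the chain no longer verifies
```

Because every block stores the hash of the one before it, quietly editing an old entry invalidates every link that follows, which is precisely the property that makes the ledger trustworthy without a central bookkeeper.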
Blockchain, AI, and biotech combined will radically alter social, political, and economic dynamics. Mass surveillance will be the least of our worries. Machines will detect our intentions before we even act, and our every move will be controlled or monitored.
Stealing Jobs Is Just the Start
In the ultimate (and, arguably, inevitable) scenario, AI will become powerful enough to perpetuate itself. It will teach itself to program the machines linked to the blockchain and the IoT.
Apart from “stealing” work from human engineers and rendering them entirely redundant, such a scenario implies that AI will be in charge of itself. Humans won’t be able to shut it off, and it will gradually take over all resources. In other words, AI poses an existential risk to humanity. (Source: “Nick Bostrom on Artificial Intelligence and Existential Risks,” YouTube, last accessed March 12, 2018.)
Even the idea that artificial intelligence will remain entirely indifferent to humans is shortsighted. Eventually, especially as machine learning advances and machines become smarter than us, they will inevitably become hostile to us. The McKinsey Global Institute has estimated that robots and automation could take as many as 73 million U.S. jobs by 2030. (Source: “Automation could kill 73 million U.S. jobs by 2030,” USA Today, November 28, 2017.)
Sooner rather than later, humans from all countries should discuss ways to regulate the advancement of artificial intelligence. Already, AI has too much power.
Because of the Internet of Things, computers already communicate with each other on a grand scale. Self-driving systems in cars can be hacked, and so can engines and other controls. Computers go through our CVs when we apply for jobs and decide who will and will not get an interview.
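How blunt can that CV screening be? Here is a minimal sketch of a keyword filter, with invented keywords and an assumed cutoff; real applicant-tracking systems are proprietary and far more elaborate, but the basic mechanism is similar: not enough matching keywords, no interview.

```python
# Hypothetical sketch of automated CV screening. The keywords and the
# cutoff below are invented purely for illustration.
REQUIRED_KEYWORDS = {"python", "machine learning", "statistics"}
INTERVIEW_CUTOFF = 2  # assumed minimum number of keyword matches

def screen_cv(cv_text: str) -> bool:
    """Return True if the CV mentions enough required keywords to pass."""
    text = cv_text.lower()
    matches = sum(keyword in text for keyword in REQUIRED_KEYWORDS)
    return matches >= INTERVIEW_CUTOFF

print(screen_cv("Skilled in Python, statistics, and data analysis."))  # True
print(screen_cv("Ten years of sales leadership experience."))          # False
```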
Robots and computers will be able to carry out tasks with greater precision and reliability than any human. They won’t reach the level of dominance that implies if we establish a framework that limits AI in such a way that it always remains our servant. Failing to do this now would be to admit a new dominant “race,” with humans relegated to the role of irrelevant observers.