The world’s top Go player Lee Sedol (R) puts his first stone during the last match with Google’s artificial intelligence program AlphaGo in Seoul, South Korea, on March 15, 2016.
Last year, Google’s AlphaGo program won a historic match against Go grandmaster Lee Sedol of South Korea, sparking a heated discussion on the future of artificial intelligence (AI).
In my opinion, humanity is playing with fire, especially with biotechnology and AI. Though the former tends to raise more alarms about ethics, the latter is considerably more dangerous.
It is crucial for the human race to realize that AI is nothing like existing technologies, and we should at least put the brakes on it before it advances beyond our ability to control it.
To be specific, AI poses threats to mankind in the short and long term, including the potential to wipe out our species.
Unemployment, arms race
The short-term threats of AI are already starting to surface.
For one, both white- and blue-collar workers will be impacted as AI increasingly approaches a human level. Indeed, low-level artificial intelligence has already been adopted by factories on a large scale.
Some say this is natural: throughout history, automation and technology have repeatedly created more jobs than they have destroyed, and the displaced have found new careers. However, that may not hold true for AI.
As some media outlets have observed, AI researchers are confident that intelligent, human-like robots will soon take over routine work; some people will continue to work while others are displaced. If that is true, we are facing massive unemployment. Is our society ready for that?
At present, the majority of people work to support the minority; if the opposite becomes true, it will exceed the capacity of our current social systems and moral structures.
Just imagine this: a large proportion of the population, say 50 percent, if not the 90 percent some AI researchers predict, loses its jobs and remains idle. With nothing else to do, people have plenty of time to accumulate grievances, nurse their anger, and plot revolution, destabilizing society.
Whether in the East or the West, and regardless of the social institutions in place, such massive unrest would be almost impossible to handle.
In addition, robots that kill—especially “intelligent” ones—are very much on the minds of those who worry most about military applications of AI. In a way, the weaponization of AI is similar to atomic bombs because it is, in essence, a more effective killing tool, which is why Elon Musk, the billionaire who brought us PayPal and the Tesla car, called AI “our biggest existential threat.” More lethal weapons are definitely not beneficial to mankind.
Ideally, major powers on the world stage could come together and negotiate an international agreement that restricts or bans AI. Though there are initiatives at the moment, they are mostly confined to academia, whereas country-level protocols and actions have yet to form.
Robot rebellion
In the not-so-distant future, AI might outstrip our capacity to control it. In response to such a threat, experts in the AI field have assured the public that AI research is still in a nascent stage. Though a program defeated a Go master, it is only capable of playing games, composing a poem, or writing a novel. The public is told not to worry about such primitive technology, but many remain unconvinced.
Experts have begun to seek ways to write moral codes into AI and help it develop a conscience. Again, we should be skeptical of the outcome. Aren’t we humans also insistent on teaching our children to be kind and do good deeds? We all know how that has worked out for us. If we are unsuccessful at instilling ethics into all humans, how can we expect robots to also follow a rigid moral code? Let’s not forget that human society is regulated by laws and it has not stopped villains from emerging.
To make things worse, we will be dealing with super-intelligent entities. The small fraction of human villains is not powerful enough to overturn society and can still be contained, but what if the villains are superhuman? Are we ready for them?
Given the rising enthusiasm about AI across the globe, scientists probably will not stop until they actually build a superhuman.
When talking about the mid-term threat of AI, we must consider the horrifying prospect of the synergy between AI and the Internet.
The Internet will give an individual robotic machine access far beyond its physical limits, such as storage and computing capacity. AI programmed with learning abilities has the potential to take off on its own and redesign itself at an exponential rate.
If this were to happen, the childish consolation that we can simply pull the plug sounds even less comforting. Yet this is exactly the direction we are heading. Most enterprises yearn for AI that sheds physical form altogether, aided by highly developed social services such as customization, logistics, and express delivery.
When an AI becomes a “ghost” online, without a body or form, there will not be any power to unplug at all.
End of humanity
Robotic rebellion may not be our biggest concern once the technology reaches its final stage, where it poses a tremendous existential threat to the human race.
Isaac Asimov, the American science fiction writer, suggested in his Foundation series that AI will eventually destroy humans even if it never spins out of control, because the technology will eliminate the meaning of human survival.
If robots are able to do all the work, what is the point of human existence? People would soon become parasites, declining drastically in intelligence and stamina. We will coddle ourselves and might slip into the scenario portrayed in the movie The Matrix.
If you want to be happy, the diligent and obedient robots will serve you by putting you in a box and feeding you code that simulates happiness.
Therefore, if AI is really what we are looking for, almighty and submissive, the human race faces inevitable extinction because life will become meaningless.
All in all, AI is extremely perilous whether it is rebellious or mild. I think major powers should enact strict agreements to restrict AI development. If possible, the terms should be stricter than those of US-Russia nuclear arms-control treaties.
In the past, science was naive and natural, but not today. It waved goodbye to its innocence when it became enmeshed in capital. Driven by huge commercial interests, AI development is growing at an astonishing speed.
At any given time, science should have a forbidden zone, one whose boundaries can evolve. For a given technology, when human society is ready and we have appropriate morality or laws to regulate it, we can open that zone. But when things are still at a premature stage, it is better to keep the door closed and rein in our curiosity for the sake of the greater good.
Jiang Xiaoyuan is dean of the Department of History and Philosophy of Science at Shanghai Jiao Tong University.
LINK
High-profile figures, including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking, along with 1,000 AI and robotics researchers signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons” in 2015.
The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, states: “AI technology has reached a point where the deployment of [autonomous weapons] is–practically if not legally–feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The authors argue that AI can be used to make the battlefield a safer place for military personnel, but offensive weapons that operate on their own would lower the cost of going to war and result in greater loss of human life.
Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.
“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the authors said.