Ethics, democracy should guide AI development

By Wang Youran / 09-11-2015 / (Chinese Social Sciences Today)

Robert Sparrow is a professor of philosophy at Monash University. His research interests cover the fields of bioethics, political philosophy and applied ethics. His representative works include "Drones, Courage and Military Culture" and "The Ethical Challenges of Military Robots."



The US Close-In Weapon System can already shoot down incoming missiles autonomously, without human assistance. But scholars attending the 24th International Joint Conference on Artificial Intelligence called attention to the potential risks of autonomous weapons.

 

From July 25 to 31, the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015) was held in Buenos Aires, Argentina. At the opening of the conference on July 28, there was broad support for an open letter from the Future of Life Institute in the US calling for a prohibition on autonomous weapons. The letter recognizes the potential benefits of artificial intelligence (AI) but argues that a ban is needed to prevent an AI arms race. Recently, Sparrow shared his views on AI in an interview with a CSST reporter.


CSST: What do you think are the chances that the development of AI weapons will lead to a global arms race? What impact is this open letter likely to have on the scientific community and beyond?

 

Sparrow: I think it's inevitable that the development of autonomous weapon systems (AWS) will lead to a global arms race unless governments and the international community can agree on a global treaty regime prohibiting the weaponizing of AI systems. The open letter presented at the IJCAI 2015 conference lends weight to the calls to develop such a treaty issued by organizations such as the International Committee for Robot Arms Control, of which I was a co-founder, and the Campaign to Stop Killer Robots. It demonstrates that the AI community is taking the threat posed by the development of AWS seriously, as does the status and very high profile of some of the signatories. It places political pressure on governments to endorse the project of developing such a treaty. Finally, it provides resources for researchers in the AI community to resist the temptation to work on military projects, where funding tends to be more readily available.
 

CSST: In your opinion, what are the most significant risks and benefits that progress in AI and robotics could bring to mankind?

 

Sparrow: I am actually pretty cynical about the pursuit of technological solutions to the problems facing humanity today. The time remaining for humanity to prevent further, even more dangerous, climate change as a result of our CO2 and methane emissions is frighteningly short and arguably less than the lead-time required to realize some of the purported benefits of AI and robotics.
 

Moreover, both the origins of many of our most pressing problems, including anthropogenic climate change, and their solutions are ultimately social and political rather than technological. In many cases we already know what we need to do to solve them. That's not to say that technological development doesn't have a role to play but to insist that, without social and political change, new technologies are equally—perhaps more—likely to exacerbate these challenges rather than contribute to solving them.
 

Finally, the fantasy that some new technology, such as AI, is going to solve all of our problems often serves to allow people to stick their heads in the sand rather than take the actions necessary to begin dealing with the current crisis.
 

Having said that, it is important to acknowledge that AI systems may well produce significant social benefits by helping to solve problems that are currently beyond the capacities of human beings operating without the assistance of such tools. In particular, pharmaceutical development and genomics are areas where neural networks and deep learning systems might make a very valuable contribution.
 

CSST: Do academics in the natural sciences, humanities and social sciences have a rough idea of how to minimize the risks and maximize the benefits?

 

Sparrow: One of the strange aspects of conversations about technology and the future is that most of the participants typically don't think that it's possible to prevent technologies from being invented and adopted. That is, they believe any technology that can be developed will be developed. Were this true, you might wonder if it was worth having the conversation at all.
 

I think the history of technology proves this to be false. However, it must be admitted that contemporary societies typically struggle to regulate technology and have few mechanisms available to allow democratic input into the decisions that are shaping the technologies that will in turn shape society in the future. So, the first thing we need to do to ensure that new technologies, including AI, benefit rather than harm people is to allow people themselves to have a say in the decision as to what kind of things get funded and developed.
 

Dedicating research to meeting genuine human needs rather than military ends is also an obvious way in which we might maximize the benefits we receive from technological development. Even those developing military systems admit that the best possible result is that they go unused. In the worst case, they are used to kill and maim people. It would be much better to devote the resources currently dedicated to the military to civilian ends. The nasty thing about arms races is that where they occur, they can effectively guarantee that both sides will do things that they themselves would rather not be doing. So trying to pre-empt an arms race with AI by enacting an international ban on weaponizing it is in the interests of all parties.
 

Finally, it's worth observing that debating the ethics of technology solely in terms of "risks" and "benefits" already artificially narrows the scope of the conversation and prejudices its conclusions. Technologies reshape relations of power and have political implications that cannot simply be fleshed out in terms of risks and benefits. Technologies also shape the way we perceive the world, which in turn alters what we think of as a risk or benefit, with the result that our current thinking about risks and benefits is an unreliable guide to the world we will find ourselves in if we adopt these technologies.


Last but not least, technologies have implications at the level of meaning. Technological change can alter how we understand the world and our place in it. Answering the question of whether we want our understandings changed in this way requires a richer conversation about our ideas and values, which tends to be cut short by reducing everything to risk or benefit.


CSST: As technology advances, AI will probably become increasingly smart and powerful, capable of causing many kinds of harm to humans. But we will also be able to use more advanced technologies to better control AI. So in some sense, the real problem is always with us, the humans, rather than certain types of technology or products. Given that ethical guidelines for research in AI and robotics may not be enough to prevent the potential dangers of AI to our society, do we need specific laws and regulations, international conventions, and independent supervisory bodies as well? 

 

Sparrow: If, by AI becoming increasingly smart and powerful, you mean that AI systems will be capable of thinking for themselves and acting in their own interests, then I worry that we will have no way to control them. If they are thousands of times smarter than we are, we may not even be able to understand them. There are some researchers working on what is called the "friendly AI problem," which is the task of trying to ensure that whatever super-intelligent machines we do bring into existence are disposed to think kindly of us. Unfortunately, the friendly AI problem turns out to be very difficult, not least because we don't really understand what consciousness consists of or how we might create it, let alone how we could control what conscious machines felt about us. I don't think we can afford to find ourselves in the position of waiting to see whether super-intelligent machines want to pet us or experiment on us. So, yes, regulation, laws and international agencies are absolutely necessary if we think that there is any danger of this.


Even if one believes, as I (mostly) do, that for the foreseeable future AI research will simply give us faster computers and more efficient search engines, there is a need for regulation and oversight to try to make sure that ordinary people are not left more impoverished, and with fewer democratic rights, when these technologies are used by corporations and dictatorships to enhance their wealth, social power and control. The open letter presented at the IJCAI 2015 conference calling for a ban on offensive autonomous weapons is a small but important step in the project of developing AI to enrich and benefit ordinary people rather than threaten them.

 

Wang Youran is a reporter at Chinese Social Sciences Today.