A smart industrial park in Zhangye City, Gansu Province, Mar. 2 Photo: CFP
AI governance re-emphasizes the need for humans to retain control over the autonomous decision-making behavior of intelligent machines. But when humans no longer operate such systems directly, how should the control relationship be defined? How can ethical norms and human responsibilities be maintained? To answer these questions, an ethical governance principle for AI known as “Meaningful Human Control” (MHC) has emerged to guide practice.
Theoretical framework
In recent years, the ethical design concept of embedding values in the research and development stage, emphasized in fields such as responsible innovation and value sensitive design, has dominated global AI governance. However, even if values are embedded in the early stages of technology development, potential risks arising from AI applications may still be unavoidable. On one hand, AI is a revolutionary technology with the potential to fully penetrate all aspects of life and reshape social relations of production; the values guiding AI applications are therefore dynamically evolving. On the other hand, AI technologies iterate rapidly, with particularly far-reaching implications as they gradually assume decision-making power once held by humans. Nevertheless, humans and institutions, rather than artificial technological entities and their algorithms, should ultimately control the decision-making behavior of highly autonomous systems and bear the ethical and legal responsibilities.
The fundamental ethical challenge of AI is the issue of the responsibility gap. The overall relationship between humans and intelligent machines is a novel “distributed control” relationship, which requires a new theoretical framework as the basis for attribution of responsibility.
Future improvements
The concept of MHC has two basic components: “meaningful” and “control.” “Meaningful” refers to the basic conditions under which humans can control the behavior of intelligent machines and assume moral responsibility. “Control” involves the diverse forms of control humans exercise over intelligent machines. Combining these different forms of control and responsibility facilitates mechanisms for monitoring the behavior of intelligent machines, providing a theoretical entry point for resolving the responsibility gap. Filippo Santoni de Sio and Jeroen van den Hoven made a groundbreaking argument for the moral philosophical basis of MHC, providing a regulatory framework built on the two conditions of “tracking” and “tracing” from the perspective of socio-technical systems.
The “tracking” principle concerns the relational nature of control between human controllers and the systems they control: the system should respond to the relevant reasons of its human controllers. “Tracing” supplements “tracking,” referring to the ability to trace system behaviors back to at least one responsible human agent.
However, translating MHC’s normative requirements into feasible operations presents difficulties: operationalizing the tracking principle requires determining who the relevant agents are and what reasons motivate their actions; operationalizing the tracing principle requires identifying whether these agents possess both technical expertise and ethical awareness.
In response to the operability dilemma of the tracking principle, Giulio Mecacci et al. proposed a “proximity scale of reasons” that explains the relationship between the reasons behind human actions and system behaviors.
Going forward, the types of technical expertise and ethical awareness required within controlled systems need to be refined. Furthermore, it is necessary to survey perceptions of system control capability across the dimensions of knowledge, embodiment, and morality.
In general, the significance of MHC as an ethical principle lies in reaffirming humans’ right to security in AI applications. The rapid advancement of MHC research indicates that global AI governance has entered a new stage. China is in a period of vigorous AI development and opportunity, and it should attend to cutting-edge international research on AI ethics while improving applicable local policies and regulations. Within MHC’s moral philosophical framework, local ethical elements and accountability mechanisms can be incorporated to expand the theoretical foundation, thereby guiding and supervising the entire AI life cycle.
Yu Ding is from the School of Social Sciences at Tsinghua University.
Edited by ZHAO YUAN