Law-based governance of recommendation algorithms
A display area for Hangzhou Court of the Internet at the 8th China International Copyright Expo held in Hangzhou on Oct 16, 2021. Photo: CFP
As AI advances by the day, algorithms have become woven into the digital economy and digital life. The Communist Party of China (CPC) Central Committee published an outline for building a law-based society (2020–2025), which called for new regulations governing algorithmic recommendation, deep fakes, and other new applications. Against this backdrop, a new set of rules on algorithmic recommendation services, jointly issued by the Cyberspace Administration of China and three other departments, came into effect on March 1 (hereinafter referred to as the regulation).
The regulation sets norms for the design of recommendation algorithms, covering important issues such as algorithmic bias, big data-enabled price discrimination against existing customers, and addiction-inducing design. It stands as China’s contribution to the comprehensive governance of recommendation algorithms worldwide. The regulation conveys a clear vision: encourage technology to do good, make algorithmic behavior traceable, and ensure that algorithmic supervision is enforceable and algorithmic regulation effective, so as to modernize the systematic governance of algorithms and cyberspace.
More ethical technology
The regulation calls on algorithmic recommendation service providers to promote mainstream values and spread positive energy while optimizing their services and making algorithms more ethical, a philosophy that should run through the entire process. It stipulates that providers shall not use the technology to engage in illegal activities or spread illegal information, and shall take measures to prevent the dissemination of harmful online content. It prohibits providers from generating fake news or disseminating information from unauthorized sources, and states that algorithmic technology shall not be used to influence online public opinion, evade supervision and management, or engage in monopolistic practices and unfair competition.
Algorithmic recommendation services can give users more personalized experiences and easier access to the information they need, based on individual users’ interests, hobbies, and needs. However, this is not without problems. Without proper regulation, algorithmic recommendation can easily give rise to black-box algorithms, algorithmic bias, addictive design, and other issues. It is necessary to chart the right course for technological innovation within the framework of the law, balancing the need to innovate against the protection of rights and interests.
When handling its first case of “cell phone internet traffic hijacking,” Hangzhou Court of the Internet (HCI) took full account of the subjective intent behind an algorithmic recommendation service that used an app wake-up strategy to hijack users’ mobile traffic. Weighing the different legal interests at stake, including online users’ right to know about and choose apps as well as the interest in fair market competition, the court ordered the defendant to cease the unfair competition practices involved and to compensate users for their financial losses. The order delineated a reasonable legal boundary for technological innovation.
The case is a vivid illustration of Article 4 of the regulation, which stipulates that in providing algorithmic recommendation services, providers shall comply with laws and regulations, respect social morality and ethics, and observe business and professional ethics. It may even inspire further ways to make algorithms more ethical going forward.
Rule of law, self-governance
It is important to recognize platforms’ dual identity as both the subject and the object of governance. That is, efforts to improve algorithmic governance must reach as deep as the underlying architecture of platforms, so that platforms can act as both rule-makers and rule-executors under the supervision of the law.
One of the biggest highlights of the regulation is that it explicitly establishes an accountability mechanism for algorithms in the platform economy, grounded in the logic of algorithmic technology itself. Through legislation, the regulation reveals that “technological neutrality” in automated algorithmic decision-making is an illusion.
For instance, Article 8 stipulates that algorithmic recommendation service providers shall regularly review, assess, and verify their algorithm mechanisms, models, data, and application results, among other things, and shall not set up algorithm models that induce users to become addicted or to overconsume, or that otherwise violate laws, regulations, or ethics. This makes algorithms, the underlying logic of platforms, the direct target of regulation, while algorithm designers and deployers, whose subjective intent is examined, are likely to become the accountable subjects.
The first case HCI heard regarding public data utilization concerned a platform that published, without verification, information claiming the complainant was undergoing bankruptcy liquidation. Even though the information was automatically extracted and generated by an algorithm, the platform still had to fulfill its duty of care by supervising algorithmic design and operation and by reviewing released content.
“Technological neutrality” is not the same as “value neutrality.” Technology itself may be neutral, but it becomes subject to accountability once shaped by human design. The duty for algorithm design, deployment, and application lies with algorithmic recommendation service providers. With that specified, it is important to fully examine a platform’s actual capacity to contain infringement, foresee potential results, and make adjustments afterwards, as well as its subjective intent. On this basis we can establish a doctrine of liability for algorithms in which duties and responsibilities are balanced, so that platform self-governance improves within the framework of the law.
For example, with regard to AI algorithms in media, we could consider requiring platforms that provide algorithmic recommendation services to continuously optimize their algorithms and to reconcile trending algorithms with personalized ones, both of which are closely linked to public discourse. This would help reduce the information-cocoon effect when providing services.
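As a purely illustrative sketch of what “reconciling” the two signals might look like, the Python snippet below blends a user-specific relevance score with a platform-wide trending score. All names and the weighting scheme are assumptions of this editor, not anything prescribed by the regulation.

```python
# Illustrative only: blend a user-specific relevance score with a
# platform-wide trending score to dampen the information-cocoon effect.
# All names and the weighting scheme are assumptions, not from the regulation.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    personalized_score: float  # relevance to this user's interest profile
    trending_score: float      # platform-wide popularity, user-independent


def blended_rank(items: list[Item], diversity_weight: float = 0.3) -> list[Item]:
    """Rank items by a convex mix of personalized and trending scores.

    A larger diversity_weight pulls results toward what the wider public
    is seeing, exposing the user to content outside their usual bubble.
    """
    def score(item: Item) -> float:
        return ((1 - diversity_weight) * item.personalized_score
                + diversity_weight * item.trending_score)

    return sorted(items, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Item("niche-clip", personalized_score=0.9, trending_score=0.1),
        Item("public-news", personalized_score=0.3, trending_score=0.95),
    ]
    print([item.item_id for item in blended_rank(feed, diversity_weight=0.4)])
```

The single weight is the simplest possible knob; the point is that a regulator or auditor can ask how such a trade-off is set, which is exactly the kind of reviewable design choice the regulation contemplates.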
Classified and targeted regulation
The regulation demonstrates a legislative philosophy of grading and classification combined with targeted governance. It adopts classified management based on services’ public opinion properties and social mobilization capabilities, content categories, user scale, the importance of the data handled by the algorithmic recommendation, and the degree of intervention in user behavior. Meanwhile, the regulation also sets different standards for algorithmic recommendation service providers on different platforms.
The regulation introduces the concept of “providers of algorithmic recommendation services with public opinion properties or social mobilization capabilities.” It clearly states that such providers shall report the provider’s name, form of service, application domain, algorithm type, algorithm self-assessment report, content to be publicized, and other such information through the internet information service algorithm filing system, and complete the filing formalities.
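For illustration only, the sketch below models a filing record containing the fields the regulation names. The schema, field names, and output format are assumptions; they are not the actual interface of the filing system.

```python
# Illustrative only: a record mirroring the fields the regulation requires
# providers to file. The schema and field names are assumptions; they are
# not the actual interface of the algorithm filing system.
import json
from dataclasses import asdict, dataclass


@dataclass
class AlgorithmFiling:
    provider_name: str
    form_of_service: str
    application_domain: str
    algorithm_type: str
    self_assessment_report: str  # e.g., a reference to the report document
    content_to_publicize: str


filing = AlgorithmFiling(
    provider_name="Example Platform Co.",  # hypothetical provider
    form_of_service="short-video feed",
    application_domain="entertainment",
    algorithm_type="collaborative-filtering recommendation",
    self_assessment_report="2022-q1-self-assessment.pdf",
    content_to_publicize="basic principles, purpose, and operating mechanism",
)

print(json.dumps(asdict(filing), indent=2, ensure_ascii=False))
```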
One way to facilitate classified governance is to classify service providers according to their powers, functions, and audience size. However, the exact definition and logical boundary of “providers of algorithmic recommendation services with public opinion properties or social mobilization capabilities” await further clarification in judicial practice. Ambiguities remain to be resolved, including how the nature of these service providers may shift across application scenarios, how the grading and classification management system can connect effectively with the substance of algorithms, and how dynamic management can be strengthened. These are the spaces that legislation has reserved for judicial innovation further down the road.
In classifying the targets of judicial protection, the regulation provides more targeted rules for minors, the elderly, workers, and consumers. For minors, it emphasizes that recommendation services shall not be used to induce online addiction. For the elderly, it emphasizes safety and convenience.
HCI explored ways to encourage algorithmic recommendation services suited to these two age groups when it handled a public welfare case involving an app’s infringement of minors’ online rights and interests. During the hearing, the court found that the app’s design may have infringed upon the privacy and personal information rights of unspecified minors, with the infringement spanning the entire process of collecting, storing, using, and sharing personal information.
The court not only ordered the platform to compensate for the damage to public interests and issue an apology, but also urged it to carry out a series of rectifications, including shielding minors’ private information from algorithmic recommendation services. In addition, HCI took the lead in establishing non-profit foundations to protect minors’ online rights and interests.
Driven by various rights
As society transitions to an internet-based, digitized, and intelligent landscape, algorithms will become an important factor shaping how people think, decide, and form values. The regulation stipulates the duty of care and the protection of user rights for “algorithm gatekeepers,” namely algorithmic recommendation service providers. It also puts forward multi-dimensional management and control requirements covering algorithmic recommendation technology, information content management, recommendation page displays, and specific user groups. However, beyond establishing more rules, governance also needs to highlight the pursuit of “digital rights” in the digital era through judicial rule-making, so as to achieve a dynamic balance among social welfare, technological innovation, and users’ rights via reasonable user empowerment.
Take the protection of digital human rights and the balancing of legal interests as an example. Judicial practice shows that some food delivery platforms, in their algorithmic order dispatch design, remuneration structure, payment system, working hours, and rewards and punishments, have pursued delivery efficiency excessively in the service of their own economic benefit and user demand. To a certain extent, they have neglected the protection of delivery riders’ rights and interests and other digital-era labor rights.
In the first case of a dispute between a consumer and an e-commerce platform over search services, HCI found that the platform’s search service need not be held to the “exact match” standard expected of general search engines. The ruling not only accounted for consumers’ reliance on the platform’s recommendation results by strictly examining the logic of the search algorithm, but also respected the platform’s right to operate independently.
The regulation systematically grants users the right to choose, the right to know, and the right to obtain explanation and remedy throughout the entire life cycle of an algorithmic recommendation system, allowing individuals to counteract algorithmic control through digital rights embedded in legal mechanisms. However, the forms of public participation could be more diverse. Beyond the current provisions, consideration could be given to embedding public comment mechanisms and expert reviews of technical ethics, creating opportunities for the public to participate effectively in the deployment of platform algorithm systems, evaluate the consequences of their use, and promote the continuous optimization of platform algorithm architectures in line with public values.
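A minimal sketch, with hypothetical names, of the user-facing controls such rights imply: switching off personalized recommendation and removing the interest tags used for profiling. Nothing here reflects any actual platform’s implementation.

```python
# A minimal sketch, with hypothetical names, of the user-facing controls the
# regulation's rights imply: opting out of personalized recommendation and
# deleting the interest tags used for profiling.
class UserPreferences:
    def __init__(self) -> None:
        self.recommendations_enabled = True
        self.interest_tags: set[str] = set()

    def opt_out_of_recommendations(self) -> None:
        """Right to choose: turn off personalized ranking entirely."""
        self.recommendations_enabled = False

    def delete_interest_tag(self, tag: str) -> None:
        """Right to remedy: remove a tag used to profile the user."""
        self.interest_tags.discard(tag)


prefs = UserPreferences()
prefs.interest_tags.update({"travel", "finance"})
prefs.opt_out_of_recommendations()
prefs.delete_interest_tag("finance")
print(prefs.recommendations_enabled, sorted(prefs.interest_tags))
```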
Hong Xuejun is president of Hangzhou Court of the Internet.
Edited by WENG RONG