Guiding algorithms with mainstream values
Passengers on a metro indulge in their own information bubbles. Photo: CFP
Smart recommendation algorithms still face many problems, such as value deviations, discrimination, and the non-neutrality of artificial intelligence. Other examples include aggravated information cocoons and information addiction; threats to data security, user privacy, and copyright; and the asymmetric conflict between a rapidly expanding smart media industry and its underdeveloped regulation. It is therefore imperative to guide algorithms using mainstream values.
Problems
The values of algorithms deviate when they lack the guidance of a mainstream value system. Society functions well and develops healthily only when guided by shared values. As a technology, an algorithm has no values and no ability to make moral choices, but the people who build recommendation algorithms do. The values and choices embedded in recommendation algorithms therefore depend on the motivations and morality of their builders. For example, if it became acceptable to hold that “people with mental issues should be locked up because of their violent tendencies,” an algorithm’s invisible support would reinforce the inclination to mistreat mental patients. The algorithm, like an invisible hand, conveys a bias that runs counter to mainstream values and shapes people’s thinking.
For example, some Western media outlets have long regarded smearing China as politically correct and have steered international opinion in the wrong direction. Major Western media platforms amplify this smearing trend, framed around topics such as human rights and freedom, with the help of recommendation algorithms. Consequently, there is a growing amount of negative press that is difficult to distinguish from real news. Recommendation algorithms are, in essence, man-made: if the initial reporting is biased, artificial intelligence will be misled and its algorithmic models will make biased decisions.
Personalized recommendations exacerbate information cocoons, information addiction, and information bias. After sustained exposure to homogeneous information, the public may become polarized. In an age of smart media dominated by recommendation algorithms, problems such as information cocoons, addiction to homogeneous information, and information bias escalate. For example, platforms such as TikTok and Taobao designed their recommendation algorithms to maximize user attention and dependency. As a result, active information acquisition has turned into passive information feeding; internet information is no longer three-dimensional and diverse but flat and unipolar; and users experience diminished self-awareness and narrower information choices. If recommendation algorithms continue to dictate public opinion, the result will be a one-sided, homogeneous knowledge structure that makes it difficult to accept different viewpoints. In the long run, the public will be trapped in a narrow “information cocoon” with information bias.
Data security, user privacy, and copyright protection are all under serious threat. To some extent, our attention and personal data have become products sold by media platforms. Recently, the State Internet Information Office examined the collection and use of personal information by short video apps, browsers, and job-seeking apps in accordance with relevant regulations. The results revealed that short video apps such as TikTok and Kuaishou, browser apps such as 360 and Baidu, and job-seeking apps such as Zhilian Zhaopin had “violated the necessary principles to various degrees by collecting information irrelevant to the services they provided.” Privacy security is under serious threat from manipulation by recommendation algorithms.
The quality of algorithm-recommended content cannot be guaranteed, as the amount of tasteless information keeps growing. Customized recommendations generate information feeds based on user preferences. However, owing to skewed preferences, information cocoons, information addiction, and information bias, the quality of recommended information varies. If users are fed a constant flow of popular yet false and tasteless information, media values will deviate from mainstream values. For example, Bilibili, an internet community and video platform where Generation Z gathers, was reprimanded by the Cyberspace Administration of China over tasteless content on the platform. According to user complaints, some hosts on Bilibili’s live streaming channel show too much skin and use tasteless language to lure users into clicking, browsing, and spending money. Bilibili was founded to supply quality content unimpeded, but it has since deviated from that original intention.
There is a conflict between the rapid expansion of the smart media industry and its underdeveloped regulation. The application of recommendation algorithms to smart media has created an asymmetry with the regulatory structure built for the traditional media industry: algorithm-based smart media platforms are already ubiquitous, but regulation remains fragmented. Algorithms depend heavily on the structural characteristics of their input data, so, based on big data analysis, they promote whatever receives more clicks and forwards. The result is the phenomenon of “bad money driving out good” in the media industry.
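The “bad money driving out good” dynamic can be illustrated with a minimal sketch: a ranker that scores items only by clicks and forwards will surface a low-quality but clickable item above higher-quality reporting. All item names, counts, and weights below are illustrative assumptions, not any platform’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    clicks: int
    forwards: int
    quality: float  # editorial quality score; the ranker never looks at it

def engagement_score(item: Item, w_click: float = 1.0, w_forward: float = 2.0) -> float:
    """Popularity-only score: counts clicks and forwards, ignores quality."""
    return w_click * item.clicks + w_forward * item.forwards

feed = [
    Item("In-depth policy analysis", clicks=1_200, forwards=150, quality=0.9),
    Item("Sensational rumor", clicks=9_000, forwards=3_000, quality=0.2),
    Item("Investigative report", clicks=2_500, forwards=400, quality=0.8),
]

# Ranking purely by engagement pushes the low-quality item to the top --
# the "bad money drives out good" effect described in the text.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):>8.0f}  {item.title}")
```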
Meanwhile, due to lax content copyright regulations, some media platforms aggregate content in a predatory manner, limiting the flow of high-quality information. Researchers who analyzed algorithm-recommended news from 12 platforms in the US found inconsistencies in which reporters’ work was recommended: high-quality information cannot be recommended by the algorithms if it never enters the algorithmic pool. In the face of this algorithmic dilemma, a corresponding regulatory system should be established for the smart media industry.
Methods for improvement
We should harness the logic of recommendation algorithms, which collect information about users’ interests and automatically recommend popular content, to raise the proportion of content underpinned by mainstream values on short video platforms and maintain the strength of mainstream values. To that end, mainstream media should leverage their advantages in content creation, generate more content that reflects mainstream values and public consensus, and increase the frequency with which content displaying mainstream values is recommended, for instance through re-ranking as sketched below. In doing so, mainstream content would be recommended more often and brought onto the fast track of new technologies. In addition, in the face of algorithm anxiety, content should adhere to mainstream values and play an exemplary leading role.
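One hedged reading of “increase the frequency of recommendation content that displays mainstream values” is a re-ranking step that blends an engagement score with a value-alignment score. The blend weight alpha, the candidate items, and their scores below are illustrative assumptions, not any platform’s documented mechanism.

```python
def blended_score(engagement: float, mainstream: float, alpha: float = 0.5) -> float:
    """Blend normalized engagement with a [0, 1] mainstream-value score.

    alpha is the policy knob: 0 means pure engagement ranking,
    1 means pure value-orientation ranking. The 0.5 default is an
    illustrative assumption chosen so the effect is visible.
    """
    return (1 - alpha) * engagement + alpha * mainstream

# Hypothetical candidates: (title, normalized engagement, mainstream-value score)
candidates = [
    ("Viral stunt clip", 0.95, 0.10),
    ("Public-service explainer", 0.60, 0.90),
    ("Celebrity gossip", 0.80, 0.30),
]

# With the blend, the explainer (0.75) outranks the stunt clip (0.525).
for title, eng, val in sorted(candidates,
                              key=lambda c: blended_score(c[1], c[2]),
                              reverse=True):
    print(f"{blended_score(eng, val):.2f}  {title}")
```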
We should make full use of recommendation algorithms’ characteristic of “not pre-determining values and offering no judgment,” reinforce manual oversight, and refine “recommendation algorithm + reviewer” models. Smart media recommendation algorithms have greatly reduced the manual screening workload, but their capability to understand language and analyze emotions remains very limited, so some content still requires manual review.
In this regard, platforms should be guided to set up additional “manual supervision posts” to make value judgments. For example, Toutiao, China’s popular news app, added more than 2,000 positions in early 2018 to manually review its information content. TikTok has likewise increased the number of manual reviewers, strengthened supervision of sensitive content and recommendation feeds, and enhanced human-machine coordination.
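A minimal sketch of the “recommendation algorithm + reviewer” model described above: because the algorithm’s language and emotion understanding is limited, content it cannot score confidently is routed to a manual review queue instead of being handled automatically. The thresholds, classifier scores, and examples are illustrative assumptions.

```python
def route(content: str, risk_score: float,
          auto_block: float = 0.9, auto_pass: float = 0.2) -> str:
    """Human-machine coordination gate.

    risk_score is a model's estimate in [0, 1] that the content violates
    policy. Anything between the two thresholds -- the cases machines
    handle poorly -- goes to a human reviewer for a value judgment.
    """
    if risk_score >= auto_block:
        return "blocked automatically"
    if risk_score <= auto_pass:
        return "published automatically"
    return "queued for manual review"

for text, score in [("ordinary video", 0.05),
                    ("sarcastic commentary", 0.55),   # hard for machines
                    ("clear violation", 0.97)]:
    print(f"{text!r}: {route(text, score)}")
```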
We should manage algorithms’ vulnerability to law-breaking behavior and improve regulations on algorithm discrimination, algorithm-enabled price discrimination, and personal data protection. In May 2021, the State Internet Information Office reported that 33 apps, including Sogou Pinyin, iFlytek Input, Baidu Input, Gaode Map, and Baidu Map, had collected and used personal information illegally. To address the worsening trends of algorithm discrimination and algorithm abuse, it is urgent to improve the laws and regulations on price discrimination, algorithm discrimination, and personal information protection. When formulating or amending laws and regulations on privacy and personal data protection, we should pay particular attention to preventing algorithmic infringement and balance algorithms against market efficiency and rights protection.
We must hold social media platforms accountable for the values shaped by their algorithms, encourage platforms to intervene positively, ensure that they shoulder their due responsibilities, and curb tasteless content. First, positive attempts by media platforms should be recognized and encouraged. For example, Tencent launched an anti-pandemic campaign, an anti-pandemic channel, a “Classes at Home” program, and “Cloud Recording.” It has continued to ensure the supply of quality content and to uplift users through positive content, championing mainstream values while generating profit. Second, the algorithmic mechanism for governing bad information should be optimized, and users should be given the right to “downgrade the channel” and choose not to watch it, as sketched below.
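Giving users the right to “downgrade the channel” amounts to a negative-feedback signal that the ranker must honor. Below is a minimal sketch, assuming a hypothetical per-user multiplicative penalty on a channel’s score; the penalty factor and channel names are illustrative assumptions.

```python
from collections import defaultdict

class FeedbackRanker:
    """Applies user 'downgrade the channel' signals as score penalties."""

    def __init__(self, penalty: float = 0.5):
        self.penalty = penalty  # illustrative: halve a channel's score per downgrade
        self.downgrades = defaultdict(int)  # (user, channel) -> downgrade count

    def downgrade(self, user: str, channel: str) -> None:
        """Record a user's explicit 'not interested' signal for a channel."""
        self.downgrades[(user, channel)] += 1

    def score(self, user: str, channel: str, base_score: float) -> float:
        """Attenuate the base score once per recorded downgrade."""
        return base_score * (self.penalty ** self.downgrades[(user, channel)])

ranker = FeedbackRanker()
ranker.downgrade("alice", "tasteless_channel")
print(ranker.score("alice", "tasteless_channel", 0.8))  # 0.4 after one downgrade
print(ranker.score("alice", "quality_channel", 0.6))    # unaffected: 0.6
```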
We should focus on the interactive nature of algorithms, which recommend based on user behavior, and enhance users’ capability to identify quality information and, in turn, shape algorithms. The values of algorithms depend on the values of their users, so we should promote user self-discipline and information literacy in the internet era. An “algorithmic portrait” should be generated for each user, allowing them to “see” their algorithmic image and motivating them to correct and optimize that portrait through behavioral feedback. In particular, it is important to improve young people’s smart media literacy, enhance their ability to search for, screen, and select digital information, and introduce smart media literacy education into primary and secondary school curricula. Parents and teachers should be encouraged to model good media literacy.
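An “algorithmic portrait” can be read as the interest profile a platform infers from a user’s behavior, surfaced back to the user so they can see and adjust it. A minimal sketch, assuming a simple weighted tally of behavior events by topic; the event weights and log are illustrative assumptions.

```python
from collections import Counter

# Illustrative weights: stronger actions say more about interest.
EVENT_WEIGHTS = {"view": 1, "like": 3, "share": 5}

def algorithmic_portrait(events: list[tuple[str, str]]) -> list[tuple[str, float]]:
    """Turn (action, topic) behavior logs into a normalized interest profile."""
    tally = Counter()
    for action, topic in events:
        tally[topic] += EVENT_WEIGHTS.get(action, 0)
    total = sum(tally.values()) or 1
    return sorted(((t, w / total) for t, w in tally.items()),
                  key=lambda x: x[1], reverse=True)

log = [("view", "gaming"), ("view", "gaming"), ("like", "gaming"),
       ("view", "science"), ("share", "science")]
for topic, share in algorithmic_portrait(log):
    print(f"{topic}: {share:.0%}")  # the user can "see" their inferred portrait
```

Note that a single share outweighs repeated views here, which is exactly the kind of inference a user could correct once the portrait is made visible to them.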
The government should create closed-loop algorithm regulation. First, multi-party regulation should be implemented with the participation of smart media platform companies and other stakeholders. Meanwhile, algorithm regulations should be improved to make the governance of recommendation algorithms more targeted.
Second, an effort typified by “Operation Qinglang” should be made to regulate the dissemination of news and information driven by recommendation algorithms, guide internet platforms to optimize information filtering, ranking, and recommendation mechanisms, and carry out technical inspections and assessments of algorithm security.
Third, relevant ethical or value guidelines should be issued to guide the formulation of laws and rules on algorithm governance. Technical rules should be incorporated into the social norm system and integrated across the whole chain of the big data industry.
Fourth, with a focus on the “transparency” and “interpretability” of algorithms, a mechanism should be established for data review, accountability, and post-event relief. This would enable regulators to read and review algorithms, incorporate data transparency and review into the regulatory toolkit for algorithm governance, and improve the post-event relief and accountability system.
Jiang Ling is deputy director of the Department of Cultural Industry and Management at the School of Media and Communication at Shanghai Jiao Tong University.
Edited by WENG RONG