Four misconceptions about AI
More social scientists should be involved in AI design. Photo: TUCHONG
In a broad sense, artificial intelligence (AI) refers to a socio-technical system, facilitated by computers and algorithms, that aims to imitate, enhance, or replace humans. The societal impact of AI remains undetermined. Although AI practitioners, including technology companies and designers, have embraced the slogan "AI for Good," opinions still differ on what constitutes "social goodness." One way the social sciences can contribute to this discussion is to promote a shift in the design and governance principles of AI product development from "human-centered" to "society-centered." Before making this shift, it is helpful to clarify four common misconceptions about AI.
The first misconception is that AI is inherently neutral, when in reality it is biased. Bias in AI arises from its data and algorithms. The outputs of machine learning depend heavily on the training data, which includes both existing data from various social institutions and real-time daily data. Regardless of type, this data is deeply marked by social inequality, and when it is used to train large models, it reinforces existing inequalities. Even if the data itself were unbiased, design choices in algorithms, the presentation of information in interfaces, and other factors can still exacerbate inherent biases. Concerns about the fairness of AI have given rise to various countermeasures, including AI ethics alliances, open-source AI fairness toolkits, and government regulatory frameworks. Implementing these initiatives effectively requires integrating existing social science research on social structures and the mechanisms of inequality. Doing so moves beyond today's superficial demographic categorization toward more effective, mechanism-led intervention frameworks.
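To make concrete both what a demographic fairness check measures and why it remains superficial, consider the minimal Python sketch below. The function, data, and group labels are hypothetical illustrations, not drawn from any named fairness toolkit.

    # A minimal sketch of a demographic fairness audit. All data below
    # is invented for illustration; real audits use actual model outputs.
    from collections import defaultdict

    def demographic_parity_gap(decisions, groups):
        """Largest difference in positive-decision rates across groups.

        decisions: list of 0/1 model outputs (e.g., loan approved = 1)
        groups:    list of group labels aligned with decisions
        """
        totals, positives = defaultdict(int), defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            positives[g] += d
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical audit: a nonzero gap flags unequal approval rates,
    # but the metric cannot say *why* they arise -- the mechanism-led
    # question the social sciences are better placed to answer.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"approval rates: {rates}, parity gap: {gap:.2f}")

Such a metric can flag a disparity between groups, but interpreting it and intervening on it requires exactly the kind of structural analysis described above.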
The second misconception is that AI appears accurate, when in fact it is often erroneous. Generative AI is widely used to produce text, images, audio, and video, and AI-generated content is increasingly prevalent in media, business, and even academic platforms. However, limitations stemming from the technology itself and from computing costs mean that AI-generated content is not as accurate as it appears. Models often present false information in a highly credible form, a phenomenon known as "AI hallucination," which misleads audiences who lack the means to discern it. Developers have devised countermeasures, such as writing more precise prompts, incorporating human supervision, adopting retrieval-augmented generation, and labeling AI-generated content. Beyond these technological efforts to improve accuracy, however, we also need to address the social impact of AI misinformation: improving the discernment and verification abilities of recipients, establishing relevant regulations, and clarifying responsibility for the societal consequences of collaborative human-machine content generation.
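As an illustration of one of these countermeasures, the hedged sketch below assembles a retrieval-augmented prompt from a tiny invented corpus: instead of letting a model answer from free recall, the prompt grounds it in retrieved text. The corpus, the word-overlap retriever, and the prompt wording are simplified assumptions, not a production pipeline.

    # A minimal, self-contained sketch of retrieval-augmented generation
    # (RAG), one hallucination countermeasure named above. Real systems
    # use vector embeddings and an actual language model.

    CORPUS = {
        "doc1": "The journal was founded in 1995 and publishes quarterly.",
        "doc2": "Submissions are reviewed by at least two anonymous referees.",
    }

    def retrieve(query, corpus, k=1):
        """Rank documents by naive word overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(corpus.items(),
                        key=lambda kv: len(q & set(kv[1].lower().split())),
                        reverse=True)
        return [text for _, text in scored[:k]]

    def build_prompt(query, corpus):
        """Ground the answer in retrieved text instead of free recall."""
        context = "\n".join(retrieve(query, corpus))
        return (f"Answer using ONLY the context below; "
                f"say 'unknown' if it is not covered.\n"
                f"Context: {context}\nQuestion: {query}")

    print(build_prompt("When was the journal founded?", CORPUS))

The design choice is simple: by instructing the model to answer only from retrieved context and to admit ignorance otherwise, such prompts narrow the space in which hallucination can occur, though they cannot eliminate it.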
The third misconception is that AI appears omnipotent, when it is actually rather limited. In discussions of AI and human labor, a core debate is whether AI replaces or augments human capabilities. On one hand, AI is indeed proficient at tasks that can be translated into clear parameters. On the other hand, its limitations are obvious, especially in highly specialized tasks requiring extensive tacit knowledge, complex reasoning, emotional perception, and social interaction. Existing studies have shown, for example, that AI risk assessment tools in the judicial field are no more accurate than humans. While AI may replace some jobs, it also creates new ones, such as prompt engineers, robot application technicians, AI engineering technicians, and AI trainers. Incorporating the skills these emerging occupations require into the design of vocational education frameworks would be highly beneficial. Rather than worrying about complete replacement by AI, we should actively plan for human-AI collaboration, clarify what AI can and cannot do, and guide AI toward augmenting human capabilities. This approach can enhance overall productivity and efficiency by leveraging AI's strengths while acknowledging its limitations.
The fourth misconception is that AI seems inclusive, when in reality it is unequal. Some describe AI as a general-purpose technological innovation that can enhance human productivity as steam engines, electricity, and computers once did. While we welcome AI's enormous productivity potential, we must also acknowledge its potential to perpetuate or even deepen existing structures of inequality. These inequalities manifest among regions, among companies, and among different groups within the labor force. The development and application of AI require substantial capital and technological investment, which tend to concentrate in certain countries and regions, creating spatial inequalities in access to new technologies. In addition, whether in acquiring the data that serves as raw material for large models' predictive tasks, in developing algorithms, or in supplying computing power, companies with digital infrastructure platform advantages are better positioned to engage deeply in AI research, development, and application. This concentration can lead to polarization in market share, economic influence, and political power. Nor is the impact of AI on the labor market homogeneous: digital control over low-skilled workers and the rise of AI-related positions requiring specialized skills often occur simultaneously, exacerbating existing inequalities in the traditional labor market. These inequalities predate AI and are rooted in social structures that lie outside large-scale models. Existing social science research helps us understand how these inequalities form, paving the way for more equitable applications of AI.
This article has outlined four misconceptions about AI and called for more in-depth social science research. In the intelligent era, social scientists should engage actively in the debate on new technologies and society, draw on long-term research into social inequality and social structural change, and promote the transformation of AI design and service principles from "human-centered" to "society-centered."
Wu Fei is an associate professor in the Department of Sociology at Fudan University.
Edited by ZHAO YUAN