Artificial consciousness and biological naturalism
Biological naturalism is a theory about the mind-body problem, which holds that mental phenomena such as consciousness and intentionality are biological phenomena that only exist in certain living systems. Although initially overlooked in the philosophy of mind, the theory has recently returned to the public eye amid heated discussions on artificial general intelligence (AGI) and artificial consciousness. Some critics have drawn on biological naturalism to cast doubt on the feasibility of artificial consciousness, while others have invoked it as a potential framework for advancing such a project. In this debate, biological naturalism appears to support both sides. Yet this very flexibility reveals its underlying weaknesses: key concepts such as “life” and “ontological reduction” are left poorly defined. Once these concepts are clarified, the theory’s core claims become untenable. What this debate ultimately reveals is that any framework that simultaneously affirms the reality of consciousness and seeks to naturalize it will inevitably encounter a contradiction between these aims—and thus is doomed to fail. Two alternatives that avoid this dilemma are panpsychism and illusionism, with illusionism offering a more viable theoretical path toward artificial consciousness.
Biological naturalism
The first step in consciousness research is to clarify those facts about consciousness that cannot be reasonably doubted. American philosopher John R. Searle claims that clarifying these facts naturally leads to biological naturalism, which he describes as “scientifically refined common sense.” He summarizes the theory in four main propositions. (1) Realism: Consciousness is a real phenomenon in the real world; its reality is beyond dispute. (2) Causality and sufficiency of neurophysiology: Consciousness arises entirely from low-level neurophysiological processes in the brain, and the causally sufficient conditions for any conscious phenomenon are located within the brain. (3) Neurophysiological realization: Consciousness is a higher-level feature of the brain system, existing above the level of individual neurons or synapses; single neurons cannot be conscious. (4) Psychological causation: Consciousness has causal power—intentional actions are driven by conscious states. These four propositions constitute the foundational elements of biological naturalism.
The prospect of artificial consciousness exposes the theoretical tension within biological naturalism. Artificial consciousness can be divided into two types: strong and weak. Weak artificial consciousness aims to replicate certain recognizable features of human consciousness in machines, while strong artificial consciousness aspires to generate genuine consciousness in artificial systems. The former is relatively uncontroversial; the latter is far more contentious. From the standpoint of biological naturalism, one view holds that consciousness necessarily depends on a biological substrate, and thus artificial systems cannot be truly conscious. A second view accepts the same premise—consciousness requires a biological substrate—but concludes that to realize strong artificial consciousness, machines must incorporate such a substrate. The first view falters if one accepts consciousness as a natural phenomenon: the supposed special status of the biological substrate becomes difficult to uphold, as it is merely one causal entity among many within the natural world and could, in principle, be replaced by other substrates (e.g., artificial neurons). The second view, meanwhile, faces a deeper ontological challenge. If the biological substrate is just another natural cause, it becomes difficult to sustain the ontological status of consciousness itself, since a fundamental gap remains between the third-person ontology of the causal world and the first-person ontology of conscious experience. Searle sought to bridge this ontological divide by introducing a descriptive level of explanation, but in doing so, he effectively stripped consciousness of its first-person ontological status. The failure of Searle's biological naturalism illustrates an irreconcilable conflict between Proposition (1) and Proposition (2). This suggests that any attempt to affirm both the reality of consciousness and its naturalization will inevitably run aground on their mutual incompatibility.
Escaping this impasse requires giving up either realism or naturalization. This leaves two theoretical options: illusionism and panpsychism, with illusionism offering the more promising route to conceptualizing artificial consciousness.
Panpsychism
Naturalization rests on a core presupposition that consciousness does not fundamentally exist in the natural world, and therefore must be situated within nature by identifying the basic constituents from which it is derived. This is referred to as the derivativeness presupposition. Proposition (2) exemplifies this view, holding that consciousness is derived from the brain, which serves as a biological substrate. However, this presupposition is not beyond dispute. Substance dualism, for instance, rejects it outright—but at the cost of incompatibility with natural science. If it were possible to reject the derivativeness presupposition in a manner consistent with scientific understanding, we would arrive at a coherent alternative. One such emerging alternative is new panpsychism.
Unlike traditional panpsychism, which simply posits that all things possess minds, new panpsychism is seen as compatible with physicalism. Physicalism maintains that everything is physical, yet it struggles to provide a precise definition of what "physical" means. One interpretation suggests that the physical is what physical theories describe. However, as British philosopher Bertrand Russell noted, physical theories merely articulate dispositional properties—based on the world's spatiotemporal structure and dynamics—without addressing the intrinsic nature underlying those structures and dynamics. Panpsychism steps in here, claiming that this intrinsic nature is phenomenal. In this way, panpsychism can explain consciousness without conflicting with mainstream physicalist views: Our consciousness emerges from the intricate combination of micro-consciousnesses inherent in physical entities.
But how do micro-consciousnesses combine into a macro-consciousness? This is the combination problem that troubles constitutive panpsychism. In the realm of artificial consciousness, the challenge becomes even more acute, giving rise to what is known as the "AGI combination problem": Even if we can explain the emergence of human consciousness, what reason do we have to believe that micro-consciousnesses within AGI systems would likewise combine into a coherent center of awareness? To resolve this, proponents of panpsychism must offer a compelling argument showing that AGI systems resemble human cognizers in all relevant respects—without relying on consciousness itself as one of those respects, to avoid circular reasoning.
Another prominent version of panpsychism is cosmopsychism, which posits that the conscious universe itself is the sole fundamental entity. At first glance, this view sidesteps the combination problem: Human and animal minds are not seen as composed of micro-subjects but instead as directly grounded in the consciousness of the universe. However, cosmopsychism faces a reverse challenge, the decomposition problem: How does a vast subject like the conscious universe give rise to smaller subjects such as individual human and animal minds?
This decomposition problem becomes even more complex in the domain of artificial consciousness, where it takes the form of the "AGI decomposition problem": How can human-level consciousness be individualized from the consciousness of the universe, and further, why should we believe that AGI systems likewise possess individualized conscious experiences? When it comes to human consciousness, this problem is somewhat more manageable—we already know that humans are conscious, so the explanandum is well defined. But in the case of AGI, we do not know whether any given system is in fact conscious, which renders the explanandum uncertain. More critically, artificial consciousness research must tackle not only the general decomposition problem but also a specific decomposition problem: Why would the consciousness of the universe differentiate into the kind of artificial consciousness we aim to create, rather than into forms of "consciousness" akin to that of trees or stones? Unless and until these questions are adequately addressed, panpsychism remains an unlikely candidate for explaining artificial consciousness.
Illusionism
Let us now consider the illusionist approach, which rejects Proposition (1). Eliminativism also rejects Proposition (1), arguing that our common-sense understanding of consciousness is fundamentally mistaken and should be discarded in favor of a purely physical explanation. However, this amounts to what is often called a "denial of the obvious"—that is, a refusal to acknowledge the experiential phenomena that everyone directly feels. As a subjective phenomenon, consciousness appears to demand an explanation that goes beyond mere physical processes. Eliminativism, however, fails to provide such an account. Illusionism takes a different path: It does not deny the existence of conscious phenomena. If consciousness appears to possess qualitative features, then there must be phenomena with apparent phenomenality—phenomena that present as conscious. What illusionism denies is that the appearance of qualitative features entails their actual existence. In reality, what exists are the brain's information-processing activities, which, for some reason, are misrepresented as conscious experience.
Illusionism opens up a new path toward the realization of strong artificial consciousness. According to this view, the goal is not, as many existing theories assume, to generate genuine consciousness in artificial systems, but to construct machines capable of producing the illusion of consciousness. This can be achieved entirely through functional means. In the field of artificial consciousness, a longstanding dichotomy exists between “function” and “feeling.” Researchers have been confident in functional implementation, believing that learning and behavioral adaptation can be realized in machines without the presence of real feelings. For instance, artificial functional pain can be defined as the full realization of pain-related functions in a system—perhaps in a simple machine or computational simulation. Similarly, artificial functional consciousness refers to the process of realizing all the functions typically associated with consciousness. However, such implementations yield only weak artificial consciousness. Illusionism collapses the function-feeling divide. Designers of artificial consciousness need not grapple with the philosophical burden of explaining or reproducing phenomenal consciousness. Instead, they can focus on simulating the cognitive structures and behaviors that give rise to the illusion of consciousness.
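To make the function-feeling distinction concrete, the notion of artificial functional pain sketched above can be illustrated with a toy program. The class and its behaviors below are hypothetical, not drawn from the article or from any real research system; the sketch only shows how pain-related functions (damage detection, withdrawal, learned aversion) can be realized computationally without any claim that the system feels anything.

```python
# A minimal sketch of "artificial functional pain": the system realizes
# pain-related *functions* -- detecting damage, withdrawing, and learning
# to avoid harmful stimuli -- with no phenomenal feeling anywhere involved.

class FunctionalPainAgent:
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # damage level that triggers withdrawal
        self.aversions = {}         # stimulus -> learned avoidance strength

    def sense(self, stimulus, damage):
        """Register a stimulus and its damage signal; update aversions."""
        if damage >= self.threshold:
            # Strengthen the learned aversion to this stimulus.
            self.aversions[stimulus] = self.aversions.get(stimulus, 0.0) + damage
            return "withdraw"       # immediate avoidance behavior
        return "continue"

    def avoids(self, stimulus):
        """Behavioral adaptation: avoid stimuli with a learned aversion."""
        return self.aversions.get(stimulus, 0.0) > 0.0


agent = FunctionalPainAgent()
agent.sense("hot_plate", damage=0.9)  # high damage triggers withdrawal
print(agent.avoids("hot_plate"))      # True: the pain-function is realized
print(agent.avoids("cool_water"))     # False: no aversion was learned
```

On the traditional dichotomy, such a system exhibits only weak artificial consciousness, since the functions are realized without feeling; on the illusionist view, there is no further "feeling" left over to implement, so richer versions of this functional strategy are the whole research program.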
Panpsychism and illusionism thus represent alternatives to biological naturalism in the quest for artificial consciousness. Biological naturalism attempts to reconcile Propositions (1) through (4), whereas panpsychism denies Proposition (2), and illusionism denies Proposition (1). Although panpsychism provides a physicalist-compatible explanation for consciousness, it faces substantial, perhaps insurmountable, challenges—particularly in the domain of artificial consciousness, where the difficulties only multiply. Illusionism, by contrast, sidesteps these challenges. It not only offers a workable route to the realization of strong artificial consciousness, but also simplifies the research agenda. In this sense, illusionism may represent the more compelling theoretical framework.
Wang Huaping is a professor in the Department of Philosophy (Zhuhai) at Sun Yat-sen University.
Edited by ZHAO YUAN