Can consciousness exist in a computer simulation?

Would it be desirable for artificial intelligence to develop consciousness? Not really, for a variety of reasons, according to Dr. Wanja Wiese of the Institute for Philosophy II at Ruhr University Bochum, Germany. In an essay, he examines the conditions that must be met for consciousness to exist and compares brains to computers. He has identified significant differences between humans and machines, most notably in the organization of brain areas and in the separation of memory and processing units. “The causal structure could be a relevant difference for consciousness,” he argues. The essay was published on June 26, 2024 in the journal Philosophical Studies.

Two different approaches

When considering the possibility of artificial systems becoming conscious, there are at least two different approaches. One asks: how likely is it that current AI systems are conscious, and what would have to be added to existing systems to make them more plausibly capable of consciousness? The other asks: which types of AI systems are unlikely to be conscious, and how can we rule out the possibility that certain types of systems become conscious?

In his research, Wanja Wiese follows the second path. “My aim is to contribute to two goals: Firstly, to reduce the risk of creating artificial consciousness by mistake – which is desirable, as it is currently unclear under which conditions the creation of artificial consciousness is morally permissible. Secondly, this approach should help to rule out deception by seemingly conscious AI systems that only appear to be conscious,” he explains. This is especially important because there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, the consensus among experts is that current AI systems are not conscious.

The free energy principle

In his essay, Wiese asks: How can we find out whether there are essential conditions for consciousness that are not met by, for example, conventional computers? One feature that all conscious animals share is that they are alive. Being alive, however, is such a strict requirement that many researchers do not consider it a plausible candidate for a necessary condition of consciousness. But perhaps some of the conditions that are necessary for being alive are also necessary for consciousness.

In his article, Wanja Wiese refers to the free energy principle of the British neuroscientist Karl Friston. The principle states: The processes that ensure the continued existence of a self-organizing system such as a living organism can be described as a type of information processing. In humans, these include processes that regulate vital parameters such as body temperature, blood oxygen content, and blood sugar. The same type of information processing could also be performed by a computer. However, the computer would not regulate its temperature or blood sugar levels; it would merely simulate these processes.
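To make the idea concrete, here is a minimal Python sketch of regulation as prediction-error minimization. It is an illustration only, not code from Wiese's essay or from Friston's formal framework, and all names and values are made up: a simulated organism keeps its “body temperature” near an expected set point by acting to reduce the deviation between its state and its expectation. Note that the loop only simulates a temperature; nothing in the machine actually warms or cools.

```python
import random

# Toy sketch (illustrative only): homeostatic regulation as
# prediction-error minimization. The "organism" expects a body
# temperature of 37.0 °C and acts to reduce deviations from that
# expectation -- a crude stand-in for the kind of information
# processing the free energy principle describes.

SET_POINT = 37.0   # expected body temperature (°C); illustrative value
GAIN = 0.2         # strength of the corrective action; illustrative value

def regulate(temperature: float, steps: int = 50) -> float:
    for _ in range(steps):
        temperature += random.gauss(0.0, 0.3)  # environment perturbs the state
        error = temperature - SET_POINT        # prediction error
        temperature -= GAIN * error            # act to reduce the error
    return temperature

print(f"simulated temperature: {regulate(39.0):.2f} °C")  # settles near 37.0
```

Starting off-balance at 39.0 °C, the corrective loop pulls the simulated value back toward the set point, holding it there against the random perturbations.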

Most differences are not relevant to consciousness

The researcher suggests that the same might be true for consciousness. If consciousness contributes to the survival of a conscious organism, then, according to the free energy principle, the physiological processes that maintain the organism must carry a trace of conscious experience, and this trace can be described as a form of information processing. This can be called the “computational correlate of consciousness,” and it could in principle be realized in a computer. However, additional conditions may need to be met for a computer not only to simulate but also to replicate conscious experience.

In his article, Wanja Wiese discusses the differences between the way conscious beings realize the computational correlate of consciousness and the way a computer would realize it in a simulation. He argues that most of these differences are not relevant to consciousness. For example, unlike an electronic computer, our brain is very energy efficient, but this is unlikely to be a requirement for consciousness.

Another difference, however, lies in the causal structure of computers and brains: In a conventional computer, data must always first be loaded from memory, then processed in the central processing unit, and finally stored back into memory. In the brain, there is no such separation, which means that the causal connectivity of different brain areas takes a different form. Wanja Wiese argues that this could be a difference between brains and conventional computers that is relevant to consciousness.
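The architectural contrast can be sketched schematically. The following snippet is again an illustration of the general point rather than anything from the essay, and the class and function names are invented for the example: in a von Neumann machine every update is an explicit load-process-store round trip through separate components, whereas in a brain-like network the “memory” (a synaptic weight) and the processing that uses it are the same physical structure.

```python
# Schematic contrast between the two causal structures (illustrative only).

# 1) Von Neumann style: state lives in a separate memory and must be
#    loaded, processed in a distinct compute step, and stored back.
memory = {"x": 1.0}

def von_neumann_update() -> None:
    x = memory["x"]       # load from memory
    x = x * 2.0 + 1.0     # process in the "CPU"
    memory["x"] = x       # store back to memory

# 2) Brain-like style: memory and processing are co-located. The weight
#    both stores the connection's state and performs the computation,
#    and computing can change it in place (a Hebbian-style update).
class Synapse:
    def __init__(self, weight: float):
        self.weight = weight                 # the "memory" lives here

    def transmit(self, signal: float) -> float:
        out = self.weight * signal           # computation happens at the storage site
        self.weight += 0.01 * signal * out   # and modifies that storage in place
        return out

von_neumann_update()
synapse = Synapse(0.5)
print(memory["x"], synapse.transmit(1.0))
```

In the first pattern, every step is causally routed through the memory-CPU-memory cycle; in the second, there is no such separation, which is the structural difference Wiese singles out as potentially relevant to consciousness.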

“In my opinion, the perspective offered by the free energy principle is particularly interesting, because it allows us to describe the characteristics of conscious living beings in such a way that they can in principle be realized in artificial systems, but are not present in large classes of artificial systems (such as computer simulations),” explains Wanja Wiese. “This means that the prerequisites for consciousness in artificial systems can be captured in a more detailed and precise manner.”