by Robert Schreiber
Berlin, Germany (SPX) Jul 22, 2024
Research on consciousness in artificial systems can take various approaches. One approach asks how likely it is that current AI systems are, or could become, conscious, and what requirements would have to be met to make this more likely. Another approach, taken by researcher Wanja Wiese, examines which kinds of AI systems are unlikely to become conscious, with the aim of preventing the unintended creation of artificial consciousness.
Wiese's research addresses two primary goals: "Firstly, to reduce the risk of inadvertently creating artificial consciousness; this is a desirable outcome, as it's currently unclear under what conditions the creation of artificial consciousness is morally permissible. Secondly, this approach should help rule out deception by ostensibly conscious AI systems that only appear to be conscious," he explains. This concern is significant, given that many individuals interacting with chatbots attribute consciousness to these systems, despite expert consensus that current AI systems lack consciousness.
The Free Energy Principle
In his essay, Wiese asks how we can determine whether there are essential conditions for consciousness that conventional computers cannot meet. One trait shared by all conscious animals is that they are alive. Being alive, however, is a very demanding requirement, and it is not widely accepted as necessary for consciousness. Yet some of the conditions required for being alive may also be necessary for consciousness.
Wiese refers to British neuroscientist Karl Friston's free energy principle, which describes the processes ensuring the continued existence of a self-organizing system, such as a living organism, as a form of information processing. For humans, these processes regulate vital parameters like body temperature, blood oxygen content, and blood sugar levels. Although a computer could simulate these processes, it would not regulate its temperature or blood sugar levels in reality.
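The article itself does not state the formula. As a brief sketch of the standard formulation from Friston's work (the symbols o for sensory observations, s for hidden bodily states, and the distributions p and q are conventions from the literature, not introduced in this article), variational free energy can be written as:

```latex
% Variational free energy F (standard formulation; not stated in the article).
% o: sensory observations, s: hidden states (e.g., body temperature),
% p: the organism's generative model, q: its approximate posterior over s.
F \;=\; \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(s, o)\bigr]
  \;=\; \underbrace{D_{\mathrm{KL}}\bigl[q(s)\,\|\,p(s \mid o)\bigr]}_{\ge\, 0} \;-\; \ln p(o)
```

Because the divergence term is never negative, F is an upper bound on the "surprise" -ln p(o); a system that keeps F low therefore keeps its sensory states, and with them parameters such as body temperature or blood sugar, within expected ranges.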
Differences Between Biological and Simulated Consciousness
Wiese suggests that consciousness might follow a similar pattern. If consciousness contributes to an organism's survival, the physiological processes involved must leave an information-processing trace, known as the "computational correlate of consciousness." This could be replicated in a computer, but additional conditions might be necessary for a computer to truly replicate, rather than merely simulate, conscious experience.
Wiese analyzes the differences between how conscious creatures and computers realize the computational correlate of consciousness. He argues that most differences are not relevant to consciousness. For instance, the brain's energy efficiency compared to electronic computers is unlikely to be a requirement for consciousness.
A significant difference lies in the causal structure of computers and brains. In a conventional computer, data is first loaded from memory, then processed in the central processing unit, and finally written back to memory. The brain has no such separation between storage and processing, and this different causal connectivity might be relevant to consciousness.
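To make the contrast concrete, here is a purely illustrative sketch (not from Wiese's paper; the variable names, values, and the crude temperature rule are invented for this example) of the load-process-store cycle that characterizes a conventional machine:

```python
# Toy illustration of the separation Wiese describes in conventional computers:
# data rests in a passive memory, is moved into a processing unit, transformed,
# and written back. Storage and processing remain strictly separate stages.

memory = {"body_temp": 39.2, "setpoint": 37.0}  # passive store (invented values)

def processing_unit(temp, setpoint):
    """The only place where any computation happens."""
    error = temp - setpoint
    return temp - 0.5 * error  # crude corrective step toward the setpoint

# The repeated load -> process -> store cycle of a von Neumann machine.
for _ in range(5):
    temp = memory["body_temp"]                            # 1. load from memory
    new_temp = processing_unit(temp, memory["setpoint"])  # 2. process in the "CPU"
    memory["body_temp"] = new_temp                        # 3. store back to memory

print(round(memory["body_temp"], 3))  # value has drifted toward the setpoint

# In the brain there is no such separation: the units that store information
# are the same units that transform it, so the causal structure differs.
```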
"As I see it, the perspective offered by the free energy principle is particularly interesting, because it allows us to describe characteristics of conscious living beings in such a way that they can be realized in artificial systems in principle, but aren't present in large classes of artificial systems (such as computer simulations)," explains Wiese. "This means that the prerequisites for consciousness in artificial systems can be captured in a more detailed and precise way."
Research Report: Artificial Consciousness: A Perspective From the Free Energy Principle
Related Links
Ruhr-University Bochum