In light of the continued development and growing use of large language models (e.g., ChatGPT), other kinds of neural networks, generative agents, and the like, a group of scientists, mathematicians, philosophers, and other researchers have signed an open letter intended as a “wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.”
The letter was published by the Association for Mathematical Consciousness Science at the end of April. It says, in part:
AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind.
Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.
As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence. [citations omitted]
The letter emphasizes the role of science and mathematics in the study of consciousness:
To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mathematical tools to model consciousness and its relationship to physical systems. In conjunction with empirical and experimental methods to measure consciousness, questions of AI consciousness must be tackled…
Considerable research is required if consciousness science is to align with advancements in AI and other brain-related technologies. With sufficient support, the international scientific communities are prepared to undertake this task.
Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and one of the signatories of the letter, is himself skeptical that conscious artificial intelligence will be developed anytime soon. Still, he argues for more research on the matter. In an article at Nautilus, he writes:
[M]any in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware—at which the inner lights come on for them. Last March, OpenAI’s chief scientist Ilya Sutskever tweeted, “It may be that today’s large language models are slightly conscious.” Not long after, Google Research vice president Blaise Agüera y Arcas suggested that AI was making strides toward consciousness.
These assumptions and suggestions are poorly founded. It is by no means clear that a system will become conscious simply by virtue of becoming more intelligent. Indeed, the assumption that consciousness will just come along for the ride as AI gets smarter echoes a kind of human exceptionalism that we’d do well to see the back of. We think we’re intelligent, and we know we’re conscious, so we assume the two go together.
Recognizing the weakness of this assumption might seem comforting because there would be less reason to think that conscious machines are just around the corner. Unfortunately, things are not so simple. Even if AI by itself won’t do the trick, engineers might make deliberate attempts to build conscious machines—indeed, some already are.
Here, there is a lot more uncertainty. Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness, much remains unknown. My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living. If I’m right, the prospect of conscious AI remains reassuringly remote.
But I may be wrong, and other theories are a lot less restrictive, with some proposing that consciousness could arise in computers that process information in particular ways or are wired up according to specific architectures. If these theories are on track, conscious AI may be uncomfortably close—or perhaps even among us already.
This lack of consensus about consciousness, when set against the rapidly changing landscape of AI, highlights the need for more research into consciousness itself.
Seth goes on to argue that “trouble is on the way whether emerging AI merely seems conscious, or actually is conscious.”
Those curious about these and other possible troubles may be interested in checking out the AI Safety Newsletter, from the Center for AI Safety:*
(* The Center for AI Safety is a paying advertiser at Daily Nous; they did not pay to be mentioned specifically in this post, but their newsletter is relevant to the topic.)