Researchers Call for More Work on Consciousness


In light of the continued development and growing use of large language models (e.g., ChatGPT), other kinds of neural networks, generative agents, and the like, a group of scientists, mathematicians, philosophers, and other researchers have signed an open letter intended as a “wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.”

The letter was published by the Association for Mathematical Consciousness Science at the end of April. It says, in part:

AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind.

Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.

As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence. [citations omitted]

The letter emphasizes the role of science and mathematics in the study of consciousness:

To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mathematical tools to model consciousness and its relationship to physical systems. In conjunction with empirical and experimental methods to measure consciousness, questions of AI consciousness must be tackled…

Considerable research is required if consciousness science is to align with advancements in AI and other brain-related technologies. With sufficient support, the international scientific communities are prepared to undertake this task.

Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and one of the signatories of the letter, is himself skeptical that conscious artificial intelligence will be developed anytime soon. Still, he argues for more research on the matter. In an article at Nautilus, he writes:

[M]any in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware—at which the inner lights come on for them. Last March, OpenAI’s chief scientist Ilya Sutskever tweeted, “It may be that today’s large language models are slightly conscious.” Not long after, Google Research vice president Blaise Agüera y Arcas suggested that AI was making strides toward consciousness.

These assumptions and suggestions are poorly founded. It is by no means clear that a system will become conscious simply by virtue of becoming more intelligent. Indeed, the assumption that consciousness will just come along for the ride as AI gets smarter echoes a kind of human exceptionalism that we’d do well to see the back of. We think we’re intelligent, and we know we’re conscious, so we assume the two go together.

Recognizing the weakness of this assumption might seem comforting because there would be less reason to think that conscious machines are just around the corner. Unfortunately, things are not so simple. Even if AI by itself won’t do the trick, engineers might make deliberate attempts to build conscious machines—indeed, some already are.

Here, there is a lot more uncertainty. Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness, much remains unknown. My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living. If I’m right, the prospect of conscious AI remains reassuringly remote.

But I may be wrong, and other theories are a lot less restrictive, with some proposing that consciousness could arise in computers that process information in particular ways or are wired up according to specific architectures. If these theories are on track, conscious AI may be uncomfortably close—or perhaps even among us already.

This lack of consensus about consciousness, when set against the rapidly changing landscape of AI, highlights the need for more research into consciousness itself.

Seth goes on to argue that “trouble is on the way whether emerging AI merely seems conscious, or actually is conscious.”

Those interested in reading about these and other possible troubles may want to check out the AI Safety Newsletter, from the Center for AI Safety:*

AI Safety Newsletter

(* The Center for AI Safety is a paying advertiser at Daily Nous; they did not pay to be mentioned specifically in this post, but their newsletter is relevant to the topic.)


6 Comments
Patrick S. O'Donnell
11 months ago

From my vantage point, one I’m well aware is not only on the outside looking in but also that of a very small minority, I believe this “call for more work on consciousness” is, at bottom, yet another attempt to attract funding for researchers (and, in its wake, capital investment, including venture capital) for what are the science fiction daydreams and fantasies of computer scientists, neuroscientists, computer engineers, philosophers (especially those prone to ‘scientism’), and researchers in the AI community. It is the work of folly in pursuit of an enchanting chimera by otherwise very smart folks. By way of providing a different and vigorously dissenting take on such things (not all of the particulars of which I share, but it is no less sound and persuasive for all that), and perhaps a pause and deep breath as well, one could do worse than read three items by P.M.S. Hacker (one co-written with M.R. Bennett), which make more or less the same argument in different ways: (i) “The Sad and Sorry History of Consciousness, being, among other things, a Challenge to the ‘Consciousness-studies Community,’” in Constantine Sandis and M.J. Cain, eds., Human Nature (Royal Institute of Philosophy Supplement 70) (Cambridge University Press, 2012): 149-168; (ii) M.R. Bennett and P.M.S. Hacker, Philosophical Foundations of Neuroscience (Blackwell, 2003), Sec. III, 237-351 (although I think one should read the book in toto); and (iii) Hacker’s The Intellectual Powers: A Study of Human Nature (John Wiley & Sons, 2013): 11-59. I anticipate, perhaps wrongly, being accused of merely and annoyingly kicking up dust, but that is of little consequence when the smog is thick and ubiquitous and Santa Ana winds are not in the forecast.

Daniel Cappell
Reply to  Patrick S. O'Donnell
11 months ago

Plausibly, the moral and practical dimensions of an object of a science should influence funding. For example, cryptography advanced rapidly when lives were on the line. Even if AI sentience is a daydream, it is good to have scientific backing showing so. In any case, there are myriad other reasons consciousness science has practical import. For example, we need to know whether patients in certain kinds of vegetative states are conscious. There is a growing contingent of psychologists trying to understand diseases like depression or dementia as disorders of consciousness. And we need to know more about the neural bases of conscious states to develop an adequate ethic of brain organoid research.

The Hacker dialectic is quite sui generis. The book “Neuroscience and Philosophy” (by Bennett, Dennett, Hacker, and Searle) gives a more thorough look at it because it includes two critics.

Patrick S. O'Donnell
Reply to  Daniel Cappell
11 months ago

Hacker would remind us that the central questions are not answered by science/scientists if only because they are enmeshed in conceptual confusion of a very basic kind. It would be, for example, far better to speak of the mind, and that in a way not reducible to brain states or functions. “Consciousness science” makes questionable assumptions or rests on eminently arguable presuppositions about human nature, personal identity, our mental lives and the associated myriad capacities and capabilities (including the nature of action and agency in the case of human animals or persons; mind is but a façon de parler for speaking of uniquely human powers and their exercise). Finally, I am one of those who think depression, be it commonplace or clinical, is not necessarily a disease (any more than alcoholism) in a biomedical sense (although sometimes it is tied to organic maladies of one kind or another, and pharmacology may provide some relief for its more severe manifestations or symptoms), while dementia can be, and has been, more clearly defined in biomedical terms and thus is not, in your words or sense, fundamentally a “disorder of consciousness” except perhaps somehow figuratively. (Anecdotally, I routinely interact with a woman roughly my age with, at least for now, mild dementia, and I cannot imagine a good reason to warrant describing her affliction as a ‘disorder of consciousness’). And I do not think any ethic(s) need depend on more knowledge of the neural bases of conscious states of awareness if only because that could be said to be only a necessary (basis) yet not sufficient condition of what is meant by “conscious states” (that something has a neural basis can only go so far when it comes to explanation, meaning, and so forth).

That said, my concern about funding was specific, whereas you raise it in domains outside the target or remit of my original and principal concern. My interest in the instant case, however, is part of a more general concern, one this case could be said to illustrate or exemplify: the proverbial big picture in the natural sciences and the sciences surrounding the latest technologies, that is, the changes in the conduct of science since roughly the last quarter of the 20th century, with the emergence of what the late physicist John Ziman called “post-academic” science. Post-academic science effectively replaced an “academic” science captured to a greater or lesser degree by the ethos of Mertonian norms (after Robert Merton): the “prescriptions, proscriptions, preferences and permissions” he believed scientists came to feel bound to [in part, a result of training/socialization/peer pressure] as intrinsic to the disciplinary character of their research. This scientific ethos or ethic was more or less captured by five fundamental norms or regulative principles: Communalism, Universalism, Disinterestedness, Originality, and Skepticism (‘CUDOS’). These are no doubt idealized, and there were invariably violations of or exceptions to these norms, ranging from minor to gross, at least in some sciences, but their intent and function can be appreciated in contrast to the post-academic world of science (the term itself presumes some continuity with what preceded it); in Ziman’s words, “In less than a generation we have witnessed a radical, irreversible, worldwide transformation in the way that science is organized, managed and performed.” Thus,

“[A] norm of utility is being injected into every joint of the research culture. Discoveries are evaluated commercially before they have been validated scientifically. [….] Scientists themselves are seldom in a good position to assess the utility of their work, so expert peer review is enlarged into ‘merit review’ by non-specialist ‘users.’ … [A]s researchers become more dependent on project grants, the ‘Matthew Effect’ is enhanced. Competition for real money takes precedence over competition for scientific credibility as the driving force of science. With so many researchers relying completely on research grants or contracts for their personal livelihood, winning these become an end in itself. Research groups are transformed into small business enterprises. The metaphorical forum of scientific opinion is turned into an actual market in research services.”

Post-academic science, or what I would call hyper-technological, industrialized science, by contrast,

“contravenes these norms at almost every point. [….] Very schematically, industrial science is Proprietary, Local, Authoritarian, Commissioned, and Expert. It produces proprietary knowledge that is not necessarily made public. It is focused on local technical problems rather than on general understanding. Industrial researchers act under managerial authority rather than as individuals. Their research is commissioned to achieve practical goals, rather than undertaken in the pursuit of knowledge. They are employed as expert problem-solvers, rather than for their personal creativity. It is no accident, moreover, that these attributes spell out ‘PLACE.’ That, rather than ‘CUDOS,’ is what you get for doing good industrial science.”

The following, I think, can help us tease out provocative similarities to what is occurring today in the AI research community:

“Post-academic science is organized on market principles. One of the consequences of this is that the post-academic research project is subordinate to the sphere of influence of bodies with the corresponding material interests. Thus, for example, basic research findings in molecular genetics have potential applications in plant breeding. Agro-chemical firms and farmers are therefore deemed to have a legitimate right to influence the course of this research, from the formulation of projects to the interpretation of outcomes.
 
“In general … post-academic natural scientists can usually be trusted to tell ‘nothing but the truth,’ on matters about which they are knowledgeable. But unlike academic scientists, they are not bound to tell ‘the whole truth.’ They are often prevented, in the interests of their employers, clients or patrons, from revealing discoveries or expressing doubts that would put a very different complexion on their testimony. The meaning of what is said is secretly undermined by what is not said. This proprietorial attitude to the results of research has become so familiar that we have forgotten how damaging it is to the credibility of scientists and their institutions. This is one result of the fact that ‘the context of application’ is largely defined by the material interests of bodies outside science.” [This is certainly the case with AI.]

For better on occasion, and more often for worse,

“the problems that activate post-academic science are often deeply rooted in history, and are typically ‘owned’ by well-established institutions, such as pharmaceutical companies, arms procurement agencies, associations of engineering and medical practitioners, environmental protection commissions, economic advisory councils, and so on. This elaborate social structure is associated with an equally elaborate epistemic structure, where the ‘problem areas’ are differentiated much more arbitrarily, and are often narrow and specialized [despite the well-known fact that many of the issues tackled by science and society demand a ‘transdisciplinary’ approach], than they are in academic science.”
 
In short, writes Ziman, we have “increasing subordination to corporate and political interests that do not put a high value on the production of knowledge for the benefit of society at large.” Put differently, philosophers can help science by being not only under-laborers, but philosophers in a more critical, big-picture sense, with values and ethical dispositions oriented around sensibilities common to the greater social, political, and would-be (Liberal-) democratic world (this Liberalism, after J.S. Mill, John Dewey and even John Rawls, is perfectly compatible with socialism). This world is universalizable; thus questions of distributive justice are often unavoidably global.
 

Peter Jones
Reply to  Patrick S. O'Donnell
10 months ago

I don’t think the ‘small minority’ of sceptics is as small as you fear. The trouble is that it lives largely outside the field. I struggle to see the point of the field in its current form; I’m not even sure why it’s called ‘scientific’.

Grant Castillou
11 months ago

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Peter Jones
10 months ago

Well, good luck to anyone trying to accelerate research in modern consciousness studies. I suspect the inertial forces may never be overcome. To me it seems the most ideologically hidebound area of research in all of academia.

This remark struck me: “Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness…” I see no evidence for this, and feel some should have been provided.