Dennett on AI: We Must Protect Ourselves Against “Counterfeit People”


“Creating counterfeit digital people risks destroying our civilization.”

[from a “portrait” of Daniel Dennett made with DALL-E-2]

That’s Daniel Dennett (Tufts, emeritus), writing in The Atlantic. He thinks person-impersonating AIs are extraordinarily dangerous and that our legal institutions should recognize that:

Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people.

Why are they so dangerous? Dennett says:

Even if (for the time being) we are able to teach one another reliable methods of exposing counterfeit people, the cost of such deepfakes to human trust will be enormous…

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.

The key design innovation in the technology that makes losing control of these systems a real possibility is that, unlike nuclear bombs, these weapons can reproduce. Evolution is not restricted to living organisms.

Dennett mentions only briefly the effects of such technology on our personal and social lives, but that is worth considering too, alongside its possible effects on our self-conceptions and on our mental health.

Adequately addressing the dangers of AI, says Dennett, will require changes to the law and the cooperation of industry and science. One technique would be to adopt a

high-tech ‘watermark’ system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning. Some computer scientists are already working on such measures, but unless we act swiftly, they will arrive too late to save us from drowning in the flood of counterfeits.

“Horrific penalties” for disabling watermarks or passing along watermark-removed technology, combined with liability “for any misuse of the products (and of the products of their products)”, Dennett says, are urgently needed.

The whole article is here. Discussion welcome.

(via Eric Schliesser)

AI Safety Newsletter

Hedgehog Review
Subscribe
Notify of
guest

14 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
ZeusBoanerges
11 months ago

I think I might be a person-impersonating AI

Marc Champagne
11 months ago

I know we can’t expect this from an article in The Atlantic, but I want to know what Dennett would say to a person who, having read his Intentional Stance, replies: “But it is USEFUL for me to treat these counterfeit people AS IF they are real.” It looks to me like Dennett hasn’t left himself much principled ground to stand on…

Patrick S. O'Donnell
Reply to  Marc Champagne
11 months ago

I have often found myself disagreeing with this or that feature or proposition or argument of Dennett’s philosophy, but that does not preclude me from agreeing with the bulk of the concerns noted here. See too: https://www.c-span.org/video/?528117-1/openai-ceo-testifies-artificial-intelligence

Last edited 11 months ago by Patrick S. O'Donnell
Animal Symbolicum
Reply to  Marc Champagne
11 months ago

Isn’t Dennett’s argument there, at least implicitly, an argument for usefulness or fruitfulness as a criterion for attributing consciousness? And isn’t Dennett’s argument in the Atlantic article an argument applying that criterion, with the conclusion that it is not useful to treat counterfeit people as if they’re real?

Derek Bowman
Derek Bowman
Reply to  Marc Champagne
11 months ago

I haven’t had a chance to read the full version yet, but isn’t the danger rooted in precisely the fact that it IS useful for certain people to (have us) treat the counterfeit people as real? The objection is that those are bad uses, so we shouldn’t allow them to be used in that way, right?

krell_154
krell_154
Reply to  Marc Champagne
11 months ago

I think that was the idea of everyone who read this news (and knows about Intentional Stance)

Justin Fisher
Reply to  Marc Champagne
9 months ago

Dennett’s prior work commits him to saying that a sophisticated AI that acts as though it has beliefs really does have beliefs. But that’s a separate question from the question of whether we should prohibit AI’s, including many other less sophisticated AI’s that he probably wouldn’t count as “true believers”, to masquerade as humans. Seems to me that there’s plenty of principled ground for holding both of these.

Lance Winslow
11 months ago

Humans should be warned when they are interacting with or being marketed to by a ‘AI created persons’ – whether it be a call-center, company, influencer, social media bot, or a political candidate, or cause, agenda, or persuasive propaganda, A virtual block-chain type digital certificate (aka digital watermark) could really solve a lot of future potential problems – and those problems are in-coming at hypersonic speed – the world is changing, let’s make sure we allow the change to benefit us, not destroy the life experience, or what it means to be human.

Omar
11 months ago

counterfeit people“? Isn’t that already us? per Dennett, he talks about humans as if they are already counterfeit people in his work. All this talk of “people”, “freedom”, etc. should be deflated via the intentional stance. Ethics? Isn’t that just the utility/pragmatics of the stance and technology originator? What is that old saying? You made your bed, now sleep in it…

David Wallace
11 months ago

I think quite a lot of the discussions of Dennett’s views here are oversimplified. Some quick thoughts:

1) Dennett’s philosophy of mind assumes that content is prior to consciousness, and he analyzes content via the intentional stance. But there’s nothing particularly deflationary about the intentional stance (except maybe relative to some mystical framework where intentionality is magic fairy dust). Systems are intentional to the degree that the intentional stance is effective with respect to them. Humans are very thoroughly intentional. Thermostats are hardly intentional at all. Chatbots are a bit intentional: it’s kind of helpful to adopt the intentional stance towards ChatGPT but it gives out pretty quickly.

2) In any case, there are plenty of cognitively and morally salient features of humans that aren’t captured by the description of humans as intentional systems. Pain is one example; consciousness is another. Dennett’s account of these things requires his theory of content to be applied to sub-personal cognitive systems in the brain, but it doesn’t reduce to whole-body intentional-stance psychology. It is perfectly compatible with Dennett’s framework – indeed, implied by that framework – that we could have two systems that are comparably intentional – the intentional stance is comparably useful for both – but where one of them is conscious and the other isn’t. (They couldn’t be behaviorally identical tout court, but that’s a much finer grain.)

3) Dennett’s metaethics isn’t morally realist, in the strong sense of that term: he doesn’t think there are ineffable moral truths that would remain true even if totally inaccessible to humans and whose morality is prior to any human take on them. But he’s scarcely alone in that position, which arguably goes all the way back to the Euthyphro. And providing a causal history of where our moral beliefs come from isn’t a bar to taking morality seriously, as more or less any study of metaethics shows. You can’t derive the absence of an ought from an is, any more than you can derive an ought from an is, 

EuroProf
EuroProf
Reply to  David Wallace
11 months ago

Excellent commentary on Dennett – many thanks.

But if we endorse some version of ought implies can it seems we can easily derive the absence of an ought from an is. If phi is totally impossible for S, then there is no obligation for S to phi. (Apologies in advance if my flat-footed objection is totally missing your point…)

David Wallace
Reply to  EuroProf
11 months ago

Fair enough (though ‘impossible’ has to be cashed out on compatibilist lines, I guess). I just meant that it doesn’t follow from the fact that you can give a causal-historical account of the origin of morality that moral obligations are thereby annulled.

Timothy Sommers
Reply to  David Wallace
11 months ago

But there’s nothing particularly deflationary about the intentional stance except maybe relative to some mystical framework where intentionality is magic fairy dust.

Regarding something as intentional is simply a pragmatic decision based on the utility/predictiveness of the stance since nothing is intrinsically intentional or we must believe that intentionality is created by magic fairy dust. That’s some kind of dilemma. I wonder what kind?

Dennett gives us the same criteria for intentionality for 40 years. A system is intentional just to the extent that it is predictive to take an intentional stance towards it. Now suddenly we are supposed to distinguish real from counterfeit intentional systems. Unless the counterfeit one’s are just not as intentional, he owes us some criteria.

David Wallace
Reply to  Timothy Sommers
11 months ago

Dennett says absolutely nothing about counterfeit intentional systems. He is concerned about counterfeit people. It is not and never was part of his philosophy of mind that ‘person’ is identical with ‘intentional system’.