“Creating counterfeit digital people risks destroying our civilization.”
That’s Daniel Dennett (Tufts, emeritus), writing in The Atlantic. He thinks person-impersonating AIs are extraordinarily dangerous and that our legal institutions should recognize this:
Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people.
Why are they so dangerous? Dennett says:
Even if (for the time being) we are able to teach one another reliable methods of exposing counterfeit people, the cost of such deepfakes to human trust will be enormous…
Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.
The key design innovation in the technology that makes losing control of these systems a real possibility is that, unlike nuclear bombs, these weapons can reproduce. Evolution is not restricted to living organisms.
Dennett only briefly mentions the effects of such technology on our personal and social lives, but those are worth considering too, alongside its possible effects on our self-conceptions and our mental health.
Adequately addressing the dangers of AI, says Dennett, will require changes to the law and the cooperation of industry and science. One technique would be to adopt a
high-tech ‘watermark’ system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning. Some computer scientists are already working on such measures, but unless we act swiftly, they will arrive too late to save us from drowning in the flood of counterfeits.
“Horrific penalties” for disabling watermarks or passing along watermark-removed technology, combined with liability “for any misuse of the products (and of the products of their products)”, Dennett says, are urgently needed.
The whole article is here. Discussion welcome.
(via Eric Schliesser)