These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place…
What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.
This, I think, is my point: The system is complicit in the abuse.
And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale…
…I have no idea how they can respond without shutting down the service itself, and most systems which resemble it.
We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the [problem].
That’s from “Something Is Wrong On The Internet,” an essay by artist James Bridle, published at Medium. It’s long and rambling and mashes together too many ideas for my taste (and at times seems to rely on what may be an overly sensitive conception of what counts as abuse), but it is very interesting.
The core idea is that an enormous amount of online content and its promotion—his main examples are videos aimed at children and the playlists and suggestions that pop up upon watching them—is generated “algorithmically,” producing countless permutations of highly viewed material in ways that exploit how information is organized and presented on the internet. The result is that there’s a lot of really bizarre and disturbing material out there in places we don’t expect.
On the internet, more views means more money. If a computer-animated video of a cartoon figure moving a certain way with a certain song in the background gets watched enough times, other videos that are slight variations on it will then appear (created either by machines or by people; it’s sometimes unclear which), attached to a similar and growing string of search terms, and served up in playlists. These, in turn, will get views, and further variations of them will get made. So even if the initial video wasn’t particularly disturbing, there’s a chance that eventually something disturbing will get made. That’s in addition to the intentionally disturbing content that gets automatically lumped in with superficially similar but substantively different content by services like Google and YouTube.
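The feedback loop described above—views beget variations, which beget views—can be sketched as a toy simulation. To be clear, nothing in Bridle’s essay specifies any numbers; the mutation rate, audience inheritance, and round counts below are all made up purely for illustration:

```python
import random

# Toy model (illustrative only, not from Bridle's essay): videos are dicts
# with a view count and a "disturbing" flag. Each round, the most-viewed
# videos spawn slight variations; a small fraction of variations on benign
# videos mutate into disturbing content, and every variant inherits part of
# its parent's audience by riding the same search terms and playlists.

random.seed(0)  # deterministic for the example

def run_simulation(rounds=20, mutation_rate=0.05):
    # Start with one benign, popular video.
    videos = [{"disturbing": False, "views": 1000}]
    for _ in range(rounds):
        # The top few videos by views each spawn one variant this round.
        top = sorted(videos, key=lambda v: v["views"], reverse=True)[:5]
        for parent in top:
            videos.append({
                # A variant of disturbing content stays disturbing; a benign
                # video occasionally mutates into something disturbing.
                "disturbing": parent["disturbing"]
                              or random.random() < mutation_rate,
                # The variant inherits a chunk of the parent's audience.
                "views": int(parent["views"] * 0.8),
            })
    share = sum(v["disturbing"] for v in videos) / len(videos)
    return len(videos), share

n, share = run_simulation()
print(f"{n} videos, {share:.0%} disturbing")
```

Even in this crude sketch, disturbing variants, once created, keep spawning further disturbing variants as long as their view counts keep them in the top set—a miniature version of the dynamic Bridle describes.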
Bridle’s essay is focused on disturbing content served up to children. Weary parents may set their toddler in front of an iPad with a “Peppa Pig” YouTube playlist going so they can actually cook dinner or fold the laundry or if they’re lucky sit down to have a drink and an adult conversation, unaware that some of the videos in that playlist are of Peppa being “basically tortured.”
Throughout his essay, Bridle gives a few examples of the disturbing videos served up on kid-oriented playlists, examples which he stresses are on the very tame side. Here’s one of them:
What Bridle conveys is that the combination of incentives, automation, and content provision on the internet generates an ever-growing uncontrollable monster. It’s appropriate that the above video is entitled “Buried Alive,” because that’s a good metaphor for what he thinks is happening to children and other vulnerable parties on the internet.
I don’t believe I know enough about what’s on the internet, how it’s made, and why it shows up where it does to say whether Bridle is really onto something, but the article has been widely shared, including by prominent long-time bloggers (at kottke.org, among other places), and I thought it was interesting.
I’m inclined to wonder a few things:
- Isn’t there an upside to this, and if so, shouldn’t we take that into account in our assessment of the situation? That is, if a lot of disturbing content is being created automatically (or seemingly so), isn’t there the possibility of a lot of delightful content being created by the same means? (Think of Google Poetics, for example.) I ask this sincerely. Perhaps the current automation and incentives are more likely to generate disturbing rather than delightful content. Or perhaps there are more ways in which variations on a theme can be disturbing than ways in which they can be delightful. (This may largely be a function of people’s tastes, I suppose.)
- Assume that the current state of affairs is how Bridle describes it. Do we have reason to think it will get worse or better? Shouldn’t improvements in machine learning, information gathering, and other forms of technology help us better create and filter online content?
- Bridle says human oversight is “simply impossible,” by which he means oversight of the production and filtering of massive amounts of online content. But aren’t there other forms of human oversight that are not impossible, such as preventing one’s young child from using the internet unsupervised? That won’t solve all of the problems related to the emergent horrors of information technology he describes, but it will help a little.
- The objectionable content is produced because decision makers don’t know it is being consumed (parents don’t know their kids are watching these videos). Are there lessons about how to deal with it, then, from other contexts in which this phenomenon occurs? Think of air pollution, lead paint, or bisphenol A (BPA) in plastic products, for example. In a way, the content Bridle is referring to seems like a kind of pollution.
- To what extent is the phenomenon Bridle draws attention to different from what happens more generally in market economies? If the answer is “not so much,” then is the appropriate response to worry less about what Bridle’s worried about or to worry more about how it happens offline?
- Who is responsible for which parts of this phenomenon? And how, if at all, should they be held responsible? And by whom?
and, of course:
- What other questions does this raise?
I’d be curious to hear others’ thoughts about this, especially those who work on philosophical questions related to technology, art, economics, psychology, or children.