Emergent Horrors (?) of Information Technology


These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place… 

What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale…

…I have no idea how they can respond without shutting down the service itself, and most systems which resemble it.

We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the [problem].

That’s from “Something Is Wrong On The Internet,” an essay by artist James Bridle at Medium. It’s long and rambling and mashes together too many ideas for my taste (and at times seems to rely on what may be an overly sensitive conception of what counts as abuse), but it is very interesting.

The core idea is that an enormous amount of online content and its promotion—his main examples are videos aimed at children and the playlists and suggestions that pop up upon watching them—is generated “algorithmically” to produce countless permutations of highly viewed material, in a way that takes advantage of how information is organized and presented on the internet. The result is that there’s a lot of really bizarre and disturbing material out there in places we don’t expect.

On the internet, more views means more money. If a computer-animated video of a cartoon figure moving a certain way with a certain song in the background gets watched enough times, other videos that are slight variations on it will then appear (created either by machines or by people; it’s sometimes not clear which), attached to a similar and growing string of search terms, and served up in playlists. These, in turn, will get views, and further variations of them will get made. So even if the initial video wasn’t particularly disturbing, there’s a chance that something disturbing will eventually get made. That’s in addition to the intentionally disturbing content that gets automatically lumped in with superficially similar but substantively different content by internet services like Google and YouTube.
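To make that feedback loop concrete, here is a toy simulation. It is a sketch of my own, not anything from Bridle’s essay: the keyword vocabulary, the popularity model, and every parameter below are invented for illustration. Each round, views accrue with a rich-get-richer bias, and the most-viewed videos spawn keyword-swapped near-copies:

```python
import random

# Hypothetical vocabulary of kid-video search terms (illustrative only).
KEYWORDS = ["peppa", "surprise", "eggs", "colors", "dentist",
            "learn", "scary", "doctor", "finger", "family"]

def make_variation(video):
    """Copy a popular video, randomly swapping one keyword for another."""
    keywords = list(video["keywords"])
    keywords[random.randrange(len(keywords))] = random.choice(KEYWORDS)
    return {"keywords": keywords, "views": 0}

def simulate(rounds=5, seed_count=3, spawn_per_hit=2):
    # Seed the catalogue with a few hand-made videos.
    videos = [{"keywords": random.sample(KEYWORDS, 4), "views": 0}
              for _ in range(seed_count)]
    for _ in range(rounds):
        # Views accrue with a rich-get-richer bias: playlists and
        # search keep surfacing whatever already has views.
        for v in videos:
            v["views"] += random.randint(0, 10) + v["views"] // 2
        # The current hits get imitated; the catalogue only ever grows.
        hits = sorted(videos, key=lambda v: v["views"], reverse=True)[:3]
        for hit in hits:
            videos.extend(make_variation(hit) for _ in range(spawn_per_hit))
    return videos

if __name__ == "__main__":
    catalogue = simulate()
    print(f"{len(catalogue)} videos after 5 rounds")
    for v in sorted(catalogue, key=lambda v: v["views"], reverse=True)[:5]:
        print(v["views"], "|", " ".join(v["keywords"]))
```

Even in this crude model, the catalogue grows roughly tenfold in a handful of rounds, and the top entries are all minor permutations of one another. No one decides to make any particular video; the pile of near-duplicates falls mechanically out of the view-to-imitation loop, which is roughly the dynamic Bridle is pointing at.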

Bridle’s essay is focused on disturbing content served up to children. Weary parents may set their toddler in front of an iPad with a “Peppa Pig” YouTube playlist going so they can actually cook dinner or fold the laundry or, if they’re lucky, sit down to have a drink and an adult conversation, unaware that some of the videos in that playlist are of Peppa being “basically tortured.”

Throughout his essay, Bridle gives a few examples of the disturbing videos served up on kid-oriented playlists, examples which he stresses are on the very tame side. Here’s one of them:

[embedded video: “Buried Alive”]

What Bridle conveys is that the combination of incentives, automation, and content provision on the internet generates an ever-growing uncontrollable monster. It’s appropriate that the above video is entitled “Buried Alive,” because that’s a good metaphor for what he thinks is happening to children and other vulnerable parties on the internet.

I don’t believe I know enough about what’s on the internet, how it’s made, and why it shows up where it does to know whether Bridle is really onto something, but the article has been widely shared, including by prominent long-time bloggers (at kottke.org, among other places), and I thought it was interesting.

I’m inclined to wonder a few things:

  1. Isn’t there an upside to this, and if so, shouldn’t we take that into account in our assessment of the situation? That is, if there is a lot of disturbing content being created automatically (or seemingly so), isn’t there the possibility of a lot of delightful content being created by the same means? (Think of Google Poetics, for example.) I ask this sincerely. Perhaps the current automation and incentives are more likely to generate disturbing rather than delightful content. Or perhaps there are more ways in which variations on a theme can be disturbing than there are ways in which they can be delightful. (This may largely be a function of people’s tastes, I suppose.)
  2. Assume that the current state of affairs is how Bridle describes it. Do we have reason to think it will get worse or better? Shouldn’t improvements in machine learning, information gathering, and other forms of technology help us better create and filter online content?
  3. Bridle says human oversight is “simply impossible,” by which he means oversight of the production and filtering of massive amounts of online content. But aren’t there other forms of human oversight that are not impossible, such as preventing one’s young child from using the internet unsupervised? That won’t solve all of the problems related to the emergent horrors of information technology he describes, but it will help a little.
  4. The objectionable content is produced because decision makers don’t know it is being consumed (parents don’t know their kids are watching these videos). Are there lessons about how to deal with it, then, from other contexts in which this phenomenon occurs? Think of air pollution, lead paint, or bisphenol A (BPA) in plastic products, for example. In a way, the content Bridle is referring to seems like a kind of pollution.
  5. To what extent is the phenomenon Bridle draws attention to different from what happens more generally in market economies? If the answer is “not so much,” then is the appropriate response to worry less about what Bridle’s worried about, or to worry more about how it happens offline?
  6. Who is responsible for which parts of this phenomenon? And how, if at all, should they be held responsible? And by whom?

    and, of course:

  7. What other questions does this raise?

I’d be curious to hear others’ thoughts about this, especially those who work on philosophical questions related to technology, art, economics, psychology, or children.

 

3 Comments

SomePhilosopher
6 years ago

Re: (1), the following thesis is too strong, but I wanted to put it out there for discussion in case anyone wants to pick it up:

Strong Aesthetic Claim About Automation: Automated content’s only possible aesthetic virtues are (a) the humor derived from being nonsensical or (b) the humor derived from the mocking of overly predictable, stale aesthetic features.

Children can derive enjoyment from (a), but I doubt they can from (b) because they lack the experience. This would put a serious limit on how valuable this stuff could turn out to be.

legal philosopher
6 years ago

Another question to add to the mix here: assuming at least some of this amounts to a form of pollution, or dangerous content, from which consumers need to be protected, what are the practically feasible legal avenues for preventing, disincentivizing, or regulating it? Are there any?

One thought would be a class action against those posting this stuff, or even YouTube directly, for negligent (or perhaps intentional) infliction of emotional distress. But this would be tricky because it’ll be difficult to prove causation of harm to identifiable victims (and even if you could, those posting this stuff are likely to be organized as thinly capitalized corporations that are effectively judgment proof). Another avenue might be to try to make use of consumer protection laws, though I’m not sure how that would play out in practice. Anyway, I’m curious to hear others’ takes on how we might regulate this stuff as well (assuming that’s something we want to do, which might be debated further).

Jonathan Reid Surovell
6 years ago

“Assume that the current state of affairs is how Bridle describes it. Do we have reason to think it will get worse or better? Shouldn’t improvements in machine learning, information gathering, and other forms of technology help us better create and filter online content?”

Most of the developments in machine learning will be driven by the same incentive Bridle and others describe: to maximize the amount of time users spend looking at screens. Because profits track engagement rather than well-being, negative health consequences have so far done little to diminish digital technologies’ profitability. The discovery that Facebook is seriously psychologically harmful (https://hbr.org/2017/04/a-new-more-rigorous-study-confirms-the-more-you-use-facebook-the-worse-you-feel) hasn’t really hurt its bottom line or induced any health-based adjustments in its architecture. Tristan Harris explains the underlying incentive structure: https://www.youtube.com/watch?v=C74amJRp730

Also, given the current state of knowledge about the effects of electronic devices on developing brains, we shouldn’t be looking for better algorithms for children’s content, but rather trying to get children (mostly) off screens. The American Academy of Pediatrics recommends no screen time at all for children under 18 months. Beyond that age, the effects of screen time on developing brains are hard to measure and their magnitude is up for debate. But there’s plenty of evidence that screens pose serious mental health risks, and the benefits are too meager to outweigh those risks.

Here’s the American College of Pediatricians’ most recent report on children and screen time: https://www.acpeds.org/the-college-speaks/position-statements/parenting-issues/the-impact-of-media-use-and-screen-time-on-children-adolescents-and-families

And an interview with Kardaras, a partisan against exposing children to screens: https://www.youtube.com/watch?v=MQMlOjOPsKg