“What kind of civilization is it that turns away from the challenge of dealing with more… intelligence?”
That’s Tyler Cowen (GMU), writing at Marginal Revolution. He is addressing the “radical uncertainty” we should acknowledge regarding a future in which we’ve developed artificial intelligence (AI). Even if one does not believe that large language models (LLMs) could be a form of AI (recall the possible architectural limitation noted in the paper discussed last week), it does seem that at least something AI-like is here, that it will only become more convincing in functionality, and that it will likely bring substantial changes to our lives.
Cowen’s targets are those who are making broad judgments about the goodness and badness of these technological developments. He thinks we’re living in a transformational period—he calls it “moving history”—and our predictions about it should be informed by an appropriate degree of epistemic humility. He says:
Since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum. We don’t know how to respond psychologically, or for that matter substantively. And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists… No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.
Of course, that AI is “very likely to overturn most of our apple carts” and will ultimately be as unpredictable in its effects as the invention of fire or the printing press is itself a bold prediction. But suppose we accept it. That we can’t be certain of what might happen doesn’t render speculation random or pointless.
So let’s speculate. I’m curious what changes, if any, you think we might be in for.
And let’s talk about how to speculate. I’m curious about how to think about these changes.
We might learn something from paleo-futurology, the study of past predictions of the future. One lesson appears to be that while some technological advances may be easy to predict, social changes are less so. Futurists of the 1950s, thinking about life in the year 2000, were able to anticipate, in some form, developments such as video calls, the increased use of plastics, and easier-to-clean fabrics:
Yet apparently it was not as easy to predict how odd it would be to relegate the shopping and cleaning to “the housewife of 2000”.
Technological changes affect attitudes and norms, which in turn shape our expectations for various aspects of our lives; those expectations then affect how we live, what we think, the kinds of individual and collective problems we recognize, what else we are spurred to change, and so on.
So it is complicated, and so yes, let’s be epistemically humble. But let’s let our imaginations roam a bit, too, to explore the possibilities.