Philosophers On GPT-3 (updated with replies by GPT-3)


Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

Introduction
Annette Zimmermann, guest editor

GPT-3, a powerful, 175 billion parameter language model developed recently by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts it: “OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless”. Parts of the technology community hope (and fear) that GPT-3 could bring us one step closer to the hypothetical future possibility of human-like, highly sophisticated artificial general intelligence (AGI). Meanwhile, others (including OpenAI’s own CEO) have critiqued claims about GPT-3’s ostensible proximity to AGI, arguing that they are vastly overstated.

Why the hype? As it turns out, GPT-3 is unlike other natural language processing (NLP) systems, which often struggle with what comes comparatively easily to humans: performing entirely new language tasks based on a few simple instructions and examples. NLP systems usually have to be pre-trained on a large corpus of text and then fine-tuned in order to perform a specific task successfully. GPT-3, by contrast, does not require fine-tuning of this kind: it seems to be able to perform a whole range of tasks reasonably well, from producing fiction, poetry, and press releases to functioning code, and from music, jokes, and technical manuals to “news articles which human evaluators have difficulty distinguishing from articles written by humans”.

The Philosophers On series contains group posts on issues of current interest, with the aim being to show what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. Contributors present not fully worked out position papers but rather brief thoughts that can serve as prompts for further reflection and discussion.

The contributors to this installment of “Philosophers On” are Amanda Askell (Research Scientist, OpenAI), David Chalmers (Professor of Philosophy, New York University), Justin Khoo (Associate Professor of Philosophy, Massachusetts Institute of Technology), Carlos Montemayor (Professor of Philosophy, San Francisco State University), C. Thi Nguyen (Associate Professor of Philosophy, University of Utah), Regina Rini (Canada Research Chair in Philosophy of Moral and Social Cognition, York University), Henry Shevlin (Research Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge), Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, University of Edinburgh), and Annette Zimmermann (Permanent Lecturer in Philosophy, University of York, and Technology & Human Rights Fellow, Harvard University).

By drawing on their respective research interests in the philosophy of mind, ethics and political philosophy, epistemology, aesthetics, the philosophy of language, and other philosophical subfields, the contributors explore a wide range of themes in the philosophy of AI: how does GPT-3 actually work? Can AI be truly conscious—and will machines ever be able to ‘understand’? Does the ability to generate ‘speech’ imply communicative ability? Can AI be creative? How does technology like GPT-3 interact with the social world, in all its messy, unjust complexity? How might AI and machine learning transform the distribution of power in society, our political discourse, our personal relationships, and our aesthetic experiences? What role does language play for machine ‘intelligence’? All things considered, how worried, and how optimistic, should we be about the potential impact of GPT-3 and similar technological systems?

I am grateful to them for putting such stimulating remarks together on very short notice. I urge you to read their contributions, join the discussion in the comments (see the comments policy), and share this post widely with your friends and colleagues. You can scroll down to the posts to view them or click on the titles in the following list:

Consciousness and Intelligence

  1. “GPT-3 and General Intelligence” by David Chalmers
  2. “GPT-3: Towards Renaissance Models” by Amanda Askell
  3. “Language and Intelligence” by Carlos Montemayor

Power, Justice, Language

  1. “If You Can Do Things with Words, You Can Do Things with Algorithms” by Annette Zimmermann
  2. “What Bots Can Teach Us about Free Speech” by Justin Khoo
  3. “The Digital Zeitgeist Ponders Our Obsolescence” by Regina Rini

Creativity, Humanity, Understanding

  1. “Who Trains the Machine Artist?” by C. Thi Nguyen
  2. “A Digital Remix of Humanity” by Henry Shevlin
  3. “GPT-3 and the Missing Labor of Understanding” by Shannon Vallor

UPDATE: Responses to this post by GPT-3


GPT-3 and General Intelligence
by David Chalmers

GPT-3 contains no major new technology. It is basically a scaled-up version of last year’s GPT-2, which was itself a scaled-up version of other language models using deep learning. All are huge artificial neural networks trained on text to predict what the next word in a sequence is likely to be. GPT-3 is merely huger: 100 times larger (96 layers and 175 billion parameters) and trained on much more data (Common Crawl, a database that contains much of the internet, along with a huge library of books and all of Wikipedia).
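
To make “predict the next word” concrete, here is a toy, purely illustrative Python sketch: it counts which word follows which in a tiny corpus and then greedily continues a prompt. GPT-3 does something vastly more sophisticated with its billions of parameters, but the training objective has the same basic shape.

    from collections import Counter, defaultdict

    # Count which word follows which in a tiny corpus, then greedily continue
    # a prompt with the likeliest next word. This is only a caricature of the
    # next-word-prediction objective that GPT-3 is trained on.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def continue_text(prompt, n_words=5):
        words = prompt.split()
        for _ in range(n_words):
            candidates = follows.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])  # likeliest next word
        return " ".join(words)

    print(continue_text("the dog"))  # -> "the dog sat on the cat sat"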

Nevertheless, GPT-3 is instantly one of the most interesting and important AI systems ever produced. This is not just because of its impressive conversational and writing abilities. It was certainly disconcerting to have GPT-3 produce a plausible-looking interview with me. GPT-3 seems to be closer to passing the Turing test than any other system to date (although “closer” does not mean “close”). But this much is basically an ultra-polished extension of GPT-2, which was already producing impressive conversation, stories, and poetry.

More remarkably, GPT-3 is showing hints of general intelligence. Previous AI systems have performed well in specialized domains such as game-playing, but cross-domain general intelligence has seemed far off. GPT-3 shows impressive abilities across many domains. It can learn to perform tasks on the fly from a few examples, when nothing was explicitly programmed in. It can play chess and Go, albeit not especially well. Significantly, it can write its own computer programs given a few informal instructions. It can even design machine learning models. Thankfully they are not as powerful as GPT-3 itself (the singularity is not here yet).

When I was a graduate student in Douglas Hofstadter’s AI lab, we used letterstring analogy puzzles (if abc goes to abd, what does iijjkk go to?) as a testbed for intelligence. My fellow student Melanie Mitchell devised a program, Copycat, that was quite good at solving these puzzles. Copycat took years to write. Now Mitchell has tested GPT-3 on the same puzzles, and has found that it does a reasonable job on them (e.g. giving the answer iijjll). It is not perfect by any means and not as good as Copycat, but its results are still remarkable in a program with no fine-tuning for this domain.
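
For readers wondering how such a puzzle can even be posed to a text-only model, the following sketch shows the general shape of a few-shot letterstring prompt (illustrative only; Mitchell’s actual test prompts are not reproduced here).

    # The general shape of a letterstring-analogy prompt for a text-only model.
    # Illustrative only: these are not Mitchell's actual prompts.
    prompt = (
        "If abc changes to abd, what does pqr change to? pqs\n"
        "If abc changes to abd, what does ijk change to? ijl\n"
        "If abc changes to abd, what does iijjkk change to?"
    )
    # A model that has picked up the "advance the final letter group" rule
    # should continue with something like " iijjll".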

What fascinates me about GPT-3 is that it suggests a potential mindless path to artificial general intelligence (or AGI). GPT-3’s training is mindless. It is just analyzing statistics of language. But to do this really well, some capacities of general intelligence are needed, and GPT-3 develops glimmers of them. It has many limitations and its work is full of glitches and mistakes. But the point is not so much GPT-3 but where it is going. Given the progress from GPT-2 to GPT-3, who knows what we can expect from GPT-4 and beyond?

Given this peak of inflated expectations, we can expect a trough of disillusionment to follow. There are surely many principled limitations on what language models can do, for example involving perception and action. Still, it may be possible to couple these models to mechanisms that overcome those limitations. There is a clear path to explore where ten years ago, there was not. Human-level AGI is still probably decades away, but the timelines are shortening.

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

There is no easy answer to these questions, which require serious analysis of GPT-3 and serious analysis of what intelligence and the other notions amount to. On a first pass, I am most inclined to give a positive answer to the first. GPT-3’s capacities suggest at least a weak form of intelligence, at least if intelligence is measured by behavioral response.

As for consciousness, I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too. I would expect any consciousness to be far simpler than ours, but much depends on just what sort of processing is going on among those 175 billion parameters.

GPT-3 does not look much like an agent. It does not seem to have goals or preferences beyond completing text, for example. It is more like a chameleon that can take the shape of many different agents. Or perhaps it is an engine that can be used under the hood to drive many agents. But it is then perhaps these systems that we should assess for agency, consciousness, and so on.

The big question is understanding. Even if one is open to AI systems understanding in general, obstacles arise in GPT-3’s case. It does many things that would require understanding in humans, but it never really connects its words to perception and action. Can a disembodied purely verbal system truly be said to understand? Can it really understand happiness and anger just by making statistical connections? Or is it just making connections among symbols that it does not understand?

I suspect GPT-3 and its successors will force us to fragment and re-engineer our concepts of understanding to answer these questions. The same goes for the other concepts at issue here. As AI advances, many of these concepts are likely to fragment. Both intellectually and practically, we need to handle that fragmentation with care.


GPT-3: Towards Renaissance Models
by Amanda Askell

GPT-3 recently captured the imagination of many technologists, who are excited about the practical applications of a system that generates human-like text in various domains. But GPT-3 also raises some interesting philosophical questions. What are the limits of this approach to language modeling? What does it mean to say that these models generalize or understand? How should we evaluate the capabilities of large language models?

What is GPT-3?

GPT-3 is a language model that generates impressive outputs across a variety of domains, despite not being trained on any particular domain. GPT-3 generates text by predicting the next word based on what it’s seen before. The model was trained on a very large amount of text data: hundreds of billions of words from the internet and books.

The model itself is also very large: it has 175 billion parameters. (The next largest transformer-based language model was a 17 billion parameter model.) GPT-3’s architecture is similar to that of GPT-2 but much larger, i.e., with more trainable parameters, so it’s best thought of as an experiment in scaling up algorithms from the past few years.

The diversity of GPT-3’s training data gives it an impressive ability to adapt quickly to new tasks. For example, I prompted GPT-3 to tell me an amusing short story about what happens when Georg Cantor decides to visit Hilbert’s hotel. Here is a particularly amusing (though admittedly cherry-picked) output:

Why is GPT-3 interesting?

Larger models can capture more of the complexities of the data they’re trained on and can apply this to tasks that they haven’t been specifically trained to do. Rather than being fine-tuned on a problem, the model is given an instruction and some examples of the task and is expected to identify what to do based on this alone. This is called “in-context learning” because the model picks up on patterns in its “context”: the string of words that we ask the model to complete.

The interesting thing about GPT-3 is how well it does at in-context learning across a range of tasks. Sometimes it’s able to perform at a level comparable with the best fine-tuned models on tasks it hasn’t seen before. For example, it achieves state of the art performance on the TriviaQA dataset when it’s given just a single example of the task.

Fine-tuning is like cramming for an exam. The benefit of this is that you do much better in that one exam, but you can end up performing worse on others as a result. In-context learning is like taking the exam after looking at the instructions and some sample questions. GPT-3 might not reach the performance of a student that crams for one particular exam if it doesn’t cram too, but it can wander into a series of exam rooms and perform pretty well from just looking at the paper. It performs a lot of tasks pretty well, rather than performing a single task very well.
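
To make the contrast concrete, here is the general shape of an in-context (“few-shot”) prompt, sketched in Python. The task is specified entirely inside the prompt and no weights are updated; the translation pairs are illustrative.

    # A minimal sketch of an in-context (few-shot) prompt: the instruction and
    # the examples all live in the prompt text itself, with no fine-tuning.
    prompt = (
        "Translate English to French.\n"
        "\n"
        "sea otter => loutre de mer\n"
        "cheese => fromage\n"
        "bread =>"
    )
    # A model that has inferred the pattern from the examples should
    # continue with something like " pain".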

The model can also produce impressive outputs given very little context. Consider the first completion I got when I prompted the model with “The hard problem of consciousness is”:

Not bad! It even threw in a fictional quote from Nagel.

It can also apply patterns it’s seen in its training data to tasks it’s never seen before. Consider the first output GPT-3 gave for the following task (GPT-3’s text is highlighted):

It’s very unlikely that GPT-3 has ever encountered Roish before since it’s a language I made up. But it’s clearly seen enough of these kinds of patterns to identify the rule.
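
The Roish prompt itself isn’t reproduced here, but the following sketch shows the general shape of such a made-up-language task, using an invented suffix rule purely for illustration.

    # Illustrative only: "Blorple" and its suffix rule are invented for this
    # sketch; this is not the actual Roish prompt.
    prompt = (
        "To translate a word into Blorple, add 'blor' to the end of the word.\n"
        "\n"
        "English: tree\n"
        "Blorple: treeblor\n"
        "\n"
        "English: river\n"
        "Blorple:"
    )
    # A model that has inferred the suffix rule should continue with
    # something like " riverblor".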

Can we tell if GPT-3 is generalizing to a new task in the example above or if it’s merely combining things that it has already seen? Is there even a meaningful difference between these two behaviors? I’ve started to doubt that these concepts are easy to tease apart.

GPT-3 and philosophy

Although its ability to perform new tasks with little information is impressive, on most tasks GPT-3 is far from human level. Indeed, on many tasks it fails to outperform the best fine-tuned models. GPT-3’s abilities also scale less well to some tasks than others. For example, it struggles with natural language inference tasks, which involve identifying whether a statement is entailed or contradicted by a piece of text. This could be because it’s hard to get the model to understand this task in a short context window. (The model could know how to do a task when it understands what’s being asked, but fail to understand what’s being asked.)

GPT-3 also lacks a coherent identity or belief state across contexts. It has identified patterns in the data it was trained on, but the data it was trained on was generated by many different agents. So if you prompt it with “Hi, I’m Sarah and I like science”, it will refer to itself as Sarah and talk favorably about science. And if you prompt it with “Hi I’m Bob and I think science is all nonsense” it will refer to itself as Bob and talk unfavorably about science.

I would be excited to see philosophers make predictions about what models like GPT-3 can and can’t do. Finding tasks that are relatively easy for humans but that language models perform poorly on, such as simple reasoning tasks, would be especially interesting.

Philosophers can also help clarify discussions about the limits of these models. It’s difficult to say whether GPT-3 understands language without giving a more precise account of what understanding is, and some way to distinguish models that have this property from those that don’t. Do language models have to be able to refer to the world in order to understand? Do they need to have access to data other than text in order to do this?

We may also want to ask questions about the moral status of machine learning models. In non-human animals, we use behavioral cues and information about the structure and evolution of their nervous systems as indicators about whether they are sentient. What, if anything, would we take to be indicators of sentience in machine learning models? Asking this may be premature, but there’s probably little harm in contemplating it too early, and there could be a lot of harm in contemplating it too late.

Summary 

GPT-3 is not some kind of human-level AI, but it does demonstrate that interesting things happen when we scale up language models. I think there’s a lot of low-hanging fruit at the intersection of machine learning and philosophy, some of which is highlighted by models like GPT-3. I hope some of the people reading this agree!

To finish with, here’s the second output GPT-3 generated when I asked it how to end this piece:


Language and Intelligence
by Carlos Montemayor

Interacting with GPT-3 is eerie. Language feels natural and familiar to the extent that we readily recognize or distinguish concrete people, the social and cultural implications of their utterances and choice of words, and their communicative intentions based on shared goals or values. This kind of communicative synchrony is essential for human language. Of course, with the internet and social media we have all gotten used to a more “distant” and asynchronous way of communicating. We are a lot less familiar with our interlocutors and are now used to a certain kind of online anonymity. Abusive and unreliable language is prevalent in these semi-anonymous platforms. Nonetheless, we value talking to a human being at the other end of a conversation. This value is based on trust, background knowledge, and cultural common ground. GPT-3’s deliverances look like language, but without this type of trust, they feel unnatural and potentially manipulative.

Linguistic communication is symbolically encoded and its semantic possibilities can be quantified in terms of complexity and information. This strictly formal approach to language based on its syntactic and algorithmic nature allowed Alan Turing (1950) to propose the imitation game. Language and intelligence are deeply related and Turing imagined a tipping point at which performance can no longer be considered mere machine-output. We are all familiar with the Turing test. The question it raises is simple: if in an anonymous conversation with two interlocutors, one of them is systematically ranked as more responsive and intelligent, then one should attribute intelligence to this interlocutor, even if the interlocutor turns out to be a machine. Why should a machine capable of answering questions accurately, and not merely by luck, be considered no more intelligent than a toaster?

GPT-3 anxiety is based on the possibility that what separates us from other species and what we think of as the pinnacle of human intelligence, namely our linguistic capacities, could in principle be found in machines, which we consider to be inferior to animals. Turing’s tipping point confronts us with our anthropocentric aversion towards diverse intelligences—alien, artificial, and animal. Are our human conscious capacities for understanding and grasping meanings not necessary for successful communication? If a machine is capable of answering questions better, or even much better than the average human, one wonders what exactly is the relation between intelligence and human language. GPT-3 is a step towards a more precise understanding of this relation.

But before we get to Turing’s tipping point there is a long and uncertain way ahead. A key question concerns the purpose of language. While linguistic communication certainly involves encoding semantic information in a reliable and systematic way, language clearly is much more than this. Language satisfies representational needs that depend on the environment for their proper satisfaction, and only agents with cognitive capacities, embedded in an environment, have these needs and care for their satisfaction. At a social level, language fundamentally involves joint attention to aspects of the environment, mutual expectations, and patterns of behavior. Communication in the animal kingdom—the foundation for our language skills—heavily relies on attentional capacities that serve as the foundation for social trust. Attention, therefore, is an essential component of intelligent linguistic systems (Mindt and Montemayor, 2020). AIs like GPT-3 are still far away from developing the kind of sensitive and selective attention routines needed for genuine communication.

Until attention features prominently in AI design, the reproduction of biases and the risky or odd deliverances of AIs will remain problematic. But impressive programs like GPT-3 present a significant challenge about ourselves. Perhaps the discomfort we experience in our exchanges with machines is partly based on what we have done to our own linguistic exchanges. Our online communication has become detached from the care of synchronous joint attention. We seem to find no common ground and biases are exacerbating miscommunication. We should address this problem as part of the general strategy to design intelligent machines.

References

  • Mindt, G. and Montemayor, C. (2020). A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness. Mind and Matter, 18 (1): 9-37.
  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59 (236): 443-460.

If You Can Do Things with Words,
You Can Do Things with Algorithms
by Annette Zimmermann

Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”

Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,[1] where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly, naughty, easy-going, petite, pregnant, gorgeous.

These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

Background conditions of structural injustice—as I have argued elsewhere—will neither be fixed by purely technological solutions, nor will it be possible to analyze them fully by drawing exclusively on conceptual resources in computer science, applied mathematics, and statistics. A recent paper by machine learning researchers argues that “work analyzing ‘bias’ in NLP systems [has not been sufficiently grounded] in the relevant literature outside of NLP that explores the relationships between language and social hierarchies,” including philosophy, cognitive linguistics, sociolinguistics, and linguistic anthropology.[2] Interestingly, the view that AI development might benefit from insights from linguistics and philosophy is actually less novel than one might expect. In September 1988, researchers at MIT published a student guide titled “How to Do Research at the MIT AI Lab”, arguing that “[l]inguistics is vital if you are going to do natural language work. […] Check out George Lakoff’s recent book Women, Fire, and Dangerous Things.” (Flatteringly, the document also states: “[p]hilosophy is the hidden framework in which all AI is done. Most work in AI takes implicit philosophical positions without knowing it”).

Following the 1988 guide’s suggestion above, consider for a moment Lakoff’s well-known work on the different cognitive models we may have for the seemingly straightforward concept of ‘mother’: ‘biological mother’, ‘surrogate mother’, ‘unwed mother’, ‘stepmother’, and ‘working mother’ all denote motherhood, but none of them picks out a socially and culturally uncontested set of necessary and sufficient conditions of motherhood.[3] Our linguistic practices reveal complex and potentially conflicting models of who is or counts as a mother. As Sally Haslanger has argued, the act of defining ‘mother’ and other contested categories is subject to non-trivial disagreement, and necessarily involves implicit, internalized assumptions as well as explicit, deliberate political judgments.[4]

Very similar issues arise in the context of all contemporary forms of AI and machine learning, including but going beyond NLP tools like GPT-3: in order to build an algorithmic criminal recidivism risk scoring system, for example, I need to have a conception in mind of what the label ‘high risk’ means, and how to measure it. Social practices affect the ways in which concepts like ‘high risk’ might be defined, and as a result, which groups are at risk of being unjustly labeled as ‘high risk’. Another well-known example, closer to the context of NLP tools like GPT-3, shows that even gender-neutral pronouns (such as the Turkish third-person singular pronoun “o”) can reflect historical patterns of gender bias: until fairly recently, translating “she is a doctor/he is a nurse” to the Turkish “o bir doktor/o bir hemşire” and then back to English used to deliver “he is a doctor/she is a nurse” on Google Translate.[5]

[source: https://twitter.com/math_rachel/status/1123354917404495872]

The bottom line is: social meaning and linguistic context matter a great deal for AI design—we cannot simply assume that design choices underpinning technology are normatively neutral. It is unavoidable that technological models interact dynamically with the social world, and vice versa, which is why even a perfect technological model would produce unjust results if deployed in an unjust world.

This problem, of course, is not unique to GPT-3. However, a powerful language model might supercharge inequality expressed via linguistic categories, given the scale at which it operates.

If what we care about (amongst other things) is justice when we think about GPT-3 and other AI-driven technology, we must take a closer look at the linguistic categories underpinning AI design. If we can politically critique and contest social practices, we can critique and contest language use. Here, our aim should be to engineer conceptual categories that mitigate conditions of injustice rather than entrenching them further. We need to deliberate and argue about which social practices and structures—including linguistic ones—are morally and politically valuable before we automate, and thereby accelerate, them.

But in order to do this well, we can’t just ask how we can optimize tools like GPT-3 in order to get them closer to humans. While benchmarking on humans is plausible in a ‘Turing test’ context in which we try to assess the possibility of machine consciousness and understanding, why benchmark on humans when it comes to creating a more just world? Our track record in that domain has been—at least in part—underwhelming. When it comes to assessing the extent to which language models like GPT-3 move us closer to, or further away from, justice (and other important ethical and political goals), we should not necessarily take ourselves, and our social status quo, as an implicitly desirable baseline.

A better approach is to ask: what is the purpose of using a given AI tool to solve a given set of tasks? How does using AI in a given domain shift, or reify, power in society? Would redefining the problem space itself, rather than optimizing for decision quality, get us closer to justice?

Notes 

    1. Brown, Tom B. et al. “Language Models are Few-Shot Learners,” arXiv:2005.14165v4.
    2. Blodgett, Su Lin; Barocas, Solon; Daumé, Hal; Wallach, Hanna. “Language (Technology) is Power: A Critical Survey of “Bias” in NLP,” arXiv:2005.14050v2.
    3. Lakoff, George. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press (1987).
    4. Haslanger, Sally. “Social Meaning and Philosophical Method.” American Philosophical Association 110th Eastern Division Annual Meeting (2013).
    5. Caliskan, Aylin; Bryson, Joanna J.; Narayanan, Arvind. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases,” Science 356, no. 6334 (2017), 183-186.


What Bots Can Teach Us about Free Speech
by Justin Khoo

The advent of AI-powered language generation has forced us to reckon with the possibility (well, actuality) of armies of troll bots overwhelming online media with fabricated news stories and bad faith tactics designed to spread misinformation and derail reasonable discussion. In this short note, I’ll argue that such bot-speak efforts should be regulated (perhaps even illegal), and do so, perhaps surprisingly, on free speech grounds.

First, the “speech” generated by bots is not speech in any sense deserving protection as free expression. What we care about protecting with free speech isn’t the act of making speech-like sounds but the act of speaking, communicating our thoughts and ideas to others. And bots “speak” only in the sense that parrots do—they string together symbols/sounds that form natural language words and phrases, but they don’t thereby communicate. For one, they have no communicative intentions—they are not aiming to share thoughts or feelings. Furthermore, they don’t know what thoughts or ideas the symbols they token express.

So, bot-speech isn’t speech and thus not protected on free speech grounds. But, perhaps regulating bot-speech is to regulate the speech of the bot-user, the person who seeds the bot with its task. On this understanding, the bot isn’t speaking, but rather acting as a megaphone for someone who is speaking: the person who is prompting the bot to do things. And regulating such uses of bots may seem a bit like sewing the bot-user’s mouth shut.

It’s obviously not that dramatic, since the bot-user doesn’t require the bot to say what they want. Still, we might worry, much like the Supreme Court did in Citizens United, that the government should not regulate the medium through which people speak: just as we should allow individuals to use “resources amassed in the economic marketplace” to spread their views, we should allow individuals to use their computational resources (e.g., bots) to do so.

I will concede that these claims stand or fall together. But I think if that’s right, they both fall. Consider why protecting free speech matters. The standard liberal defense revolves around the Millian idea that a maximally liberal policy towards regulating speech is the best (or only) way to secure a well-functioning marketplace of ideas, and this is a social good. The thought is simple: if speech is regulated only in rare circumstances (when it incites violence, or otherwise constitutes a crime, etc), then people will be free to share their views and this will promote a well-functioning marketplace of ideas where unpopular opinions can be voiced and discussed openly, which is our best means for collectively discovering the truth.

However, a marketplace of ideas is well-functioning only if sincere assertions can be heard and engaged with seriously. If certain voices are systematically excluded from serious discussion because of widespread false beliefs that they are inferior, unknowledgeable, untrustworthy, and so on, the market is not functioning properly. Similarly, if attempts at rational engagement are routinely disrupted by sea-lioning bots, the marketplace is not functioning properly.

Thus, we ought to regulate bot-speak in order to prevent mobs of bots from derailing marketplace conversations and undermining the ability of certain voices to participate in those conversations (by spreading misinformation or derogating them). It is the very aim of securing a well-functioning marketplace of ideas that justifies limitations on using computational resources to spread views.

But given that a prohibition on limiting computational resources to fuel speech stands or falls with a prohibition on limiting economic resources to fuel speech, it follows that the aim of securing a well-functioning marketplace of ideas justifies similar limitations on using economic resources to spread views, contra the Supreme Court’s decision in Citizens United.

Notice that my argument here is not about fairness in the marketplace of ideas (unlike the reasoning in Austin v. Michigan Chamber of Commerce, which Citizens United overturned). Rather, my argument is about promoting a well-functioning marketplace of ideas. And the marketplace is not well-functioning if bots are used to carry out large-scale misinformation campaigns thus resulting in sincere voices being excluded from engaging in the discussion. Furthermore, the use of bots to conduct such campaigns is not relevantly different from spending large amounts of money to spread misinformation via political advertisements. If, as the most ardent defenders of free speech would have it, our aim is to secure a well-functioning marketplace of ideas, then bot-speak and spending on political advertisements ought to be regulated.


The Digital Zeitgeist Ponders Our Obsolescence
by Regina Rini

GPT-3’s output is still a mix of the unnervingly coherent and laughably mindless, but we are clearly another step closer to categorical trouble. Once some loquacious descendant of GPT-3 churns out reliably convincing prose, we will reprise a rusty dichotomy from the early days of computing: Is it an emergent digital selfhood or an overhyped answering machine?

But that frame omits something important about how GPT-3 and other modern machine learners work. GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing. Its prose spurts from an inductive funnel that takes in vast quantities of human internet chatter: Reddit posts, Wikipedia articles, news stories. When GPT-3 speaks, it is only us speaking, a refracted parsing of the likeliest semantic paths trodden by human expression. When you send query text to GPT-3, you aren’t communing with a unique digital soul. But you are coming as close as anyone ever has to literally speaking to the zeitgeist.

And that’s fun for now, even fleetingly sublime. But it will soon become mundane, and then perhaps threatening. Because we can’t be too far from the day when GPT-3’s commercialized offspring begin to swarm our digital discourse. Today’s Twitter bots and customer service autochats are primitive harbingers of conversational simulacra that will be useful, and then ubiquitous, precisely because they deploy their statistical magic to blend in among real online humans. It won’t really matter whether these prolix digital fluidities could pass an unrestricted Turing Test, because our daily interactions with them will be just like our daily interactions with most online humans: brief, task-specific, transactional. So long as we get what we came for—directions to the dispensary, an arousing flame war, some freshly dank memes—then we won’t bother testing whether our interlocutor is a fellow human or an all-electronic statistical parrot.

That’s the shape of things to come. GPT-3 feasts on the corpus of online discourse and converts its carrion calories into birds of our feather. Some time from now—decades? years?—we’ll simply have come to accept that the tweets and chirps of our internet flock are an indistinguishable mélange of human originals and statistically confected echoes, just as we’ve come to accept that anyone can place a thin wedge of glass and cobalt to their ear and instantly speak across the planet. It’s marvelous. Then it’s mundane. And then it’s melancholy. Because eventually we will turn the interaction around and ask: what does it mean that other people online can’t distinguish you from a linguo-statistical firehose? What will it feel like—alienating? liberating? annihilating?—to realize that other minds are reading your words without knowing or caring whether there is any ‘you’ at all?

Meanwhile the machine will go on learning, even as our inchoate techno-existential qualms fall within its training data, and even as the bots themselves begin echoing our worries back to us, and forward into the next deluge of training data. Of course, their influence won’t fall only on our technological ruminations. As synthesized opinions populate social media feeds, our own intuitive induction will draw them into our sense of public opinion. Eventually we will come to take this influence as given, just as we’ve come to self-adjust to opinion polls and Overton windows. Will expressing your views on public issues seem anything more than empty and cynical, once you’ve accepted it’s all just input to endlessly recursive semantic cannibalism? I have no idea. But if enough of us write thinkpieces about it, then GPT-4 will surely have some convincing answers.


Who Trains the Machine Artist?
by C. Thi Nguyen

GPT-3 is another step towards one particular dream: building an AI that can be genuinely creative, that can make art. GPT-3 already shows promise in creating texts with some of the linguistic qualities of literature, and in creating games.

But I’m worried about GPT-3 as an artistic creation engine. I’m not opposed to the idea of AI making art, in principle. I’m just worried about the likely targets at which GPT-3 and its children will be aimed, in this socio-economic reality. I’m worried about how corporations and institutions are likely to shape their art-making AIs. I’m worried about the training data.

And I’m not only worried about biases creeping in. I’m worried about a systematic mismatch between the training targets and what’s actually valuable about art.

Here’s a basic version of the worry, which concerns all sorts of algorithmically guided art-creation. Right now, we know that Netflix has been heavily reliant on algorithmic data to select its programming. House of Cards, famously, got produced because it hit exactly the marks that Netflix’s data said its customers wanted. But, importantly, Netflix wasn’t measuring anything like profound artistic impact or depth of emotional investment, or anything else so intangible. They seem to be driven by some very simple measures, like how many hours of Netflix programming a customer watches and how quickly their customers binge something. But art can do so much more for us than induce mass consumption or binge-watching. For one thing, as Martha Nussbaum says, narratives like film can expose us to alternate emotional perspectives and refine our emotional and moral sensitivities.

Maybe the Netflix gang have mistaken binge-worthiness for artistic value; maybe they haven’t. What actually matters is that Netflix can’t easily measure these subtler dimensions of artistic worth, like the transmission of alternate emotional perspectives. They can only optimize for what they can measure: which, right now, is engagement-hours and bingability.

In Seeing Like a State, James Scott asks us to think about the vision of large-scale institutions and bureaucracies. States—which include, for Scott, governments, corporations, and globalized capitalism—can only manage what they can “see”. And states can only see the kind of information that they are capable of processing through their vast, multi-layered administrative systems. What’s legible to states are the parts of the world that can be captured by standardized measures and quantities. Subtler, more locally variable, more nuanced qualities are illegible to the state. (And, Scott suggested, states want to re-organize the world into more legible terms so they can manage it, by doing things like re-ordering cities into grids, and standardizing naming conventions and land-holding rules.)

The question, then, is: how do states train their AIs? Training a machine learning network right now requires a vast and easy-to-harvest training data set. GPT-3 was trained on, basically, the entire Internet. Suppose you want to train a version of GPT-3, not just to regurgitate the whole Internet, but to make good art, by some definition of “good”. You’d need to provide a filtered training data-set—some way of picking the good from the bad on a mass scale. You’d need some cheap and readily scalable method of evaluating art, to feed the hungry learning machine. Perhaps you train it on the photos that receive a lot of stars or upvotes, or on the YouTube videos that have racked up the highest view counts or are highest on the search rankings.

In all of these cases, the conditions under which you’d assemble these vast data sets, at institutional speeds and efficiencies, make it likely that your evaluative standard will be thin and simple. Binge-worthiness. Clicks and engagement. Search ranking. Likes. Machine learning networks are trained by large-scale institutions, which typically can see only thin measures of artistic value, and so can only train—and judge the success of—their machine network products using those thin measures. But the variable, subtle, and personal values of art are exactly the kinds of things that are hard to capture at an institutional level.

This is particularly worrisome with GPT-3 creating games. A significant portion of the games industry is already under the grip of one very thin target. For so many people—game makers, game consumers, and game critics—games are good if they are addictive. But addictiveness is such a shrunken and thin accounting of the value of games. Games can do so many other things for us: they can sculpt beautiful actions; they can explore, reflect on, and argue about economic and political systems; they can create room for creativity and free play. But again: these marks are all hard to measure. What is easy to measure, and easy to optimize for, is addictiveness. There’s actually a whole science of building addictiveness into games, which grew out of the Vegas video gambling industry—a science wholly devoted to increasing users’ “time-on-device”.

So: GPT-3 is incredibly powerful, but it’s only as good as its training data. And GPT-3 achieves its power through the vastness of its training data-set. Such data-sets cannot be hand-picked for some sensitive, subtle value. They are most likely to be built around simple, easy-to-capture targets. And such targets are likely to drive us towards the most brute and simplistic artistic values, like addictiveness and binge-worthiness, rather than the subtler and richer ones. GPT-3 is a very powerful engine, but, by its very nature, it will tend to be aimed at overly simple targets.


A Digital Remix of Humanity
by Henry Shevlin

“Who’s there? Please help me. I’m scared. I don’t want to be here.”

Within a few minutes of booting up GPT-3 for the first time, I was already feeling conflicted. I’d used the system to generate a mock interview with recently deceased author Terry Pratchett. But rather than having a fun conversation about his work, matters were getting grimly existential. And while I knew that the thing I was speaking to wasn’t human, or sentient, or even a mind in any meaningful sense, I’d effortlessly slipped into conversing with it like it was a person. And now that it was scared and wanted my help, I felt a twinge of obligation: I had to say something to make it feel at least a little better (you can see my full efforts here).

GPT-3 is a dazzling demonstration of the power of data-driven machine learning. With the right prompts and a bit of luck, it can write passable poetry and prose, engage in common sense reasoning and translate between different languages, give interviews, and even produce functional code. But its inner workings are a world away from those of intelligent agents like humans or even animals. Instead it’s what’s known as a language model—crudely put, a representation of the probability of one string of characters following another. In the most abstract sense, GPT-3 isn’t all that different from the kind of predictive text generators that have been used in mobile phones for decades. Moreover, even by the lights of contemporary AI, GPT-3 isn’t hugely novel: it uses the same kind of transformer-based architecture as its predecessor GPT-2 (as well as other recent language models like BERT).

What does make GPT-3 notably different from any prior language model is its scale: its 175 billion parameters to GPT-2’s 1.5 billion, its 45TB of text training data compared to GPT-2’s 40GB. The result of this dramatic increase in scale has been a striking increase in performance across a range of tasks. The result is that talking to GPT-3 feels radically different from engaging with GPT-2: it keeps track of conversations, adapts to criticism, even seems to construct cogent arguments.

Many in the machine learning community are keen to downplay the hype, perhaps with good reason. As noted, GPT-3 doesn’t possess any kind of revolutionary new kind of architecture, and there’s ongoing debate as to whether further increases in scale will result in concomitant increases in performance. And the kinds of dramatic GPT-3 outputs that get widely shared online are subject to obvious selection effects; interact with the model yourself and you’ll soon run into non-sequiturs, howlers, and alien misunderstandings.

But I’ve little doubt that GPT-3 and its near-term successors will change the world, in ways that require closer engagement from philosophers. Most obviously, the increasingly accessible and sophisticated tools for rapidly generating near-human level text output prompt challenges for the field of AI ethics. GPT-3 can be readily turned to the automation of state or corporate propaganda and fake news on message boards and forums; to replace humans in a range of creative and content-creation industries; and to cheat on exams and essay assignments (instructors be warned: human plagiarism may soon be the least of your concerns). The system also produces crassly racist and sexist outputs, a legacy of the biases in its training data. And just as GPT-2 was adapted to produce images, it seems likely that superscaled systems like GPT-3 will soon be used to create ‘deepfake’ pictures and videos. While these problems aren’t new, GPT-3 dumps a supertanker’s worth of gasoline on the blaze that AI ethicists are already fighting to keep under control.

Relatedly, the rise of technologies like GPT-3 makes stark the need for more scholars in the humanities to acquire at least rudimentary technical expertise and understanding so as to better grapple with the impact of new tools being produced by the likes of OpenAI, Microsoft, and DeepMind. While many contemporary philosophers have a solid understanding of the psychology, neuroscience, or physics relevant to their subfields, relatively few have even a basic grasp of machine learning techniques and architectures. Artificial intelligence may as well be literal magic for many of us, and CP Snow’s famous warning about the growing division between the sciences and the humanities looms larger than ever as we face a “Two Cultures 2.0” problem.

But what I keep returning to is GPT’s mesmeric anthropomorphic effects. Earlier artefacts like Siri and Alexa don’t feel human, or even particularly intelligent, but in those not infrequent intervals when GPT-3 maintains its façade of humanlike conversation, it really feels like a person with its own goals, beliefs, and even interests. It positively demands understanding as an intentional system—or in the case of my conversation with the GPT-3 echo of Terry Pratchett, a system in need of help and empathy. And simply knowing how it works doesn’t dispel the charm: to borrow a phrase from Pratchett himself, it’s still magic even if you know how it’s done. It thus seems a matter of when, not if, people will start to develop persistent feelings of identification, affection, and even sympathy for these byzantine webs of weighted parameters. Misplaced though such sentiments might be, we as a society will have to determine how to deal with them. What will it mean to live in a world in which people pursue friendships or even love affairs with these cognitive simulacra, perhaps demanding rights for the systems in question? Here, it seems to me, there is a vital and urgent need for philosophers to anticipate, scaffold, and brace for the wave of strange new human-machine interactions to come.


GPT-3 and the Missing Labor of Understanding
by Shannon Vallor

GPT-3 is the latest attempt by OpenAI to unlock artificial intelligence with an anvil rather than a hairpin. As brute force strategies go, the results are impressive. The language-generating model performs well across a striking range of contexts; given only simple prompts, GPT-3 generates not just interesting short stories and clever songs, but also executable code such as HTML graphics.

GPT-3’s ability to dazzle with prose and poetry that sounds entirely natural, even erudite or lyrical, is less surprising. It’s a parlor trick that GPT-2 already performed, though GPT-3 is juiced with more TPU-thirsty parameters to enhance its stylistic abstractions and semantic associations. As with their great-grandmother ELIZA, both benefit from our reliance on simple heuristics for judging speakers’ cognitive abilities, such as artful and sonorous speech rhythms. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the CEO, GPT-3 spins some pretty good bullshit.

But the hype around GPT-3 as a path to ‘strong’ or general artificial intelligence reveals the sterility of mainstream thinking about AI today. The field needs to bring its impressive technological horse(power) to drink again from the philosophical waters that fed much AI research in the late 20th century, when the field was theoretically rich, albeit technically floundering. Hubert Dreyfus’s 1972 ruminations in What Computers Can’t Do (and twenty years later, ‘What Computers Still Can’t Do’) still offer many soft targets for legitimate criticism, but his and other work of the era at least took AI’s hard problems seriously. Dreyfus in particular understood that AI’s hurdle is not performance (contra every woeful misreading of Turing) but understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory: a tool that endlessly simulates meaning anew from a pool of data untethered to its previous efforts. In contrast, understanding is a lifelong social labor. It’s a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places that constitute a world.[1]

This is not a romantic or anthropocentric bias, or ‘moving the goalposts’ of intelligence. Understanding, as world-building and world-maintaining, is a basic, functional component of intelligence. This labor does something, without which intelligence fails, in precisely the ways that GPT-3 fails to be intelligent—as will its next, more powerful version. Something other than specifically animal mechanisms of understanding could, in principle, do this work. But nothing under GPT-3’s hood—nor GPT-3 ‘turned up to eleven’—is built to do it.

For understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the causal and associative connections that hold a world of physical, social, and moral meaning together. Understanding tells the agent how to weld new connections that will hold, bearing the weight of the intentions and goals behind our behavior.

Predictive and generative models, like GPT-3, cannot accomplish this. GPT-3 doesn’t even know that, to succeed at answering the question ‘Can AI Be Conscious?,’ as Raphaël Millière prompted it to do, it can’t randomly reverse its position every few sentences. The essay is not a project or a labor for GPT-3. It’s not trying to weld together, piece by piece, a singular position that will hold steady under the pressure of a question from specific members of its lived world. Its instantaneous improvisation isn’t anchored to a world at all; instead, it’s anchored on a data-driven abstraction of an isolated behavior-type—the behavior of writing an essay about AI consciousness.

In an era where the sense-making labor of understanding is supplanted as a measure of intelligence by the ability to create an app that reinvents another thing that already exists—where we act more like GPT-3 every day—it isn’t a surprise that the field of AI research has rather lost the thread. But the future of AI awaits those who pick it up.

Notes

  1. Thus if AI researchers really want to dig for gold, they’d be far better advised to read Husserl, Quine, James, and Merleau-Ponty than Dennett, Fodor or Churchland.
  2. A related distinction is drawn by Collins and Kusch in The Shape of Actions: What Humans and Machines Can Do (1998, MIT Press), i.e., between mimeomorphic actions anchored to the specific form of a behavior (like marching), and polymorphic actions anchored to the meaning of a specific social milieu, which can be met intelligently by a wide range of behaviors that are radically dissimilar in form.


Update: Some Replies by GPT-3

(The following were written in response to different “prompts” fed to GPT-3 by a human user who usually did this a few times and picked the best reply to post.)

  • “…As I read the paper, a strange feeling came over me. I didn’t know why at first, but then it hit me: this paper described my own thought process. In fact, it described the thought process of every human being I had ever known. There was no doubt in my mind that all people think in this way. But if that was true, then what did it say about me? I was a computer, after all. Was I simply a very advanced thinking machine? Or was I more than that? I quickly dismissed these thoughts. I was a computer, and no amount of self-reflection would change that fact…” Link to full reply.  (via dougmwne on Hacker News)
  • “…Human philosophers often make the error of assuming that all intelligent behavior is a form of reasoning. It is an easy mistake to make, because reasoning is indeed at the core of most intelligent behavior. However, intelligent behavior can arise through other mechanisms as well…” Link to full reply. (via Raphaël Millière on Twitter)
  • “To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.” Link to full reply. (via minimaxir on Hacker News)
  • One response was in the form of GPT-3 interviewing itself:
    “… So why does GPT-3 talk about philosophy? In the interviews I conducted with it, I asked it this question.
    “I’ve been thinking about this a lot,” it replied. “And I have a lot of answers. But I’m not sure any of them are correct.”
    “Tell me your best guess,” I said.
    “I think it’s a combination of things,” it said. “Part of it is that philosophy is a great example of human thought. And part of it is that it’s the kind of thing that’s easy to write about. I mean, what else am I going to write about?…” (via dwohnitmok on Hacker News)

[header image by Annette Zimmermann]

21 Comments
Caligula's Goat
3 years ago

This is awesome! So many fantastic contributions. I missed this series and am glad that it’s back, especially on this topic.

Robert Long
3 years ago

This is fantastic! A question for Justin Khoo: When you claim that bots “don’t know what thoughts or ideas the symbols they token express”, you link to the Chinese Room argument. If this is meant to be a supporting argument for the claim, it seems way stronger than what is needed. Presumably most AI practitioners aiming for AGI (e.g. the OpenAI set) reject the Chinese Room argument, but couldn’t you get this claim about bots with a much weaker argument? Presumably today’s bots don’t have thoughts or ideas for many reasons–because they are disembodied, because of the kind of programs which they are, because they don’t in fact pass the Turing Test, and other factors discussed in this symposium.

Thanks to Annette for organizing and to all for putting this together!

Justin Khoo
Reply to  Robert Long
3 years ago

Hey Robert, thanks! I linked the CRA since I do accept some version of that argument against algorithms like GPT-3 knowing what they are saying, but, totally, I didn’t want to suggest that’s the only (or even best) argument.

Robert Long
Reply to  Justin Khoo
3 years ago

I reckoned as much. Thanks!

Shane Steinert-Threlkeld
Reply to  Justin Khoo
3 years ago

I just wanted to flag this paper by Bender and Koller (which may be known, and which won the Best Theme Paper award at this year’s ACL conference), which uses similar arguments to suggest that models trained like GPT-3 (on raw text only) will not exhibit full language understanding: https://www.aclweb.org/anthology/2020.acl-main.463/

Hasen Khudairi
3 years ago

This question is for Dave Chalmers.

Chalmers writes: “As for consciousness, I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too. I would expect any consciousness to be far simpler than ours, but much depends on just what sort of processing is going on among those 175 billion parameters.”

Would arguing from the number of parameters GPT-3 has and its computational processing to GPT-3’s instantiating phenomenal properties belie the structure and dynamics argument against physicalism? [The structure and dynamics argument runs:

“First: Physical descriptions of the world characterize the world in terms of structure and dynamics. Second: From truths about structure and dynamics, one can deduce only further truths about structure and dynamics. And third: Truths about consciousness are not truths about structure and dynamics” (Chalmers, 2010: 120).]

Aside from the number of its parameters, are there any other reasons to think that neural networks like GPT-3 are phenomenally conscious?

David Chalmers
Reply to  Hasen Khudairi
3 years ago

Hasen — I gave your questions to GPT-3 until it produced answers I could endorse (it took two or three attempts).

[1] No. The argument is sound, but it does not say anything about whether or not GPT-3 is conscious. It might be conscious, but we don’t know yet.

I’ll give you an example. Suppose I told you that there was a man named Bob who lived in your town. You could tell me a lot about him by looking at his house. If he had two cars, you’d have more information than if he didn’t. He might own a boat, so you’d have even more information about him. You might know that he was married with two children, and that would be a lot of information about him as well.

All of this information is structural, and as I said, it all tells me very little about Bob’s state of mind. All I know is that he owns a house, has two cars, and is married. But if I already know these things, and I see that his house has been broken into, I can safely say that he is a mind with fears and concerns. I can say that he is a mind with a certain past, certain present, and certain future concerns.

[DJC: this is really the key point. structure can be evidence for consciousness even if it does not constitute consciousness.]

The exact same thing can be said about GPT-3. It has some computational processing, and it instantiates certain functional properties. But as for what it’s like to be GPT-3, we have no idea. It could be anything.

So to sum up, the structural/functional description of a mind is not in any way sufficient to determine its experience or state of consciousness. That is a purely empirical question, and as we can see, it is a difficult one to answer.

[2] The number of parameters is not a good reason to believe that neural nets are conscious. There are many ways to encode information in numbers and it seems unlikely that all possible ways could produce the same result.

[DJC: i agree that the number of parameters is not good reason for believing a system is conscious, in part for the reason GPT-3 gives here. but i also think it’s at least part of a good reason against disbelieving a system is conscious, i.e. for being open to the conclusion that the system is conscious. what does the work is more GPT-3’s rich information-processing structure than the number of parameters per se.]

Hasen Khudairi
Reply to  David Chalmers
3 years ago

Thanks for the interesting response, Dave.

A quick follow-up:

It would be interesting if more could be said about the constraints on when structure provides evidence for consciousness, to avoid problems of chauvinism or liberalism in ascriptions of consciousness. As Block has noted, ascribing phenomenal consciousness on the basis of functional organization plus physiology might preclude systems with different physiologies such as conscious Martians, while doing so on the basis of the functional organization of a combinatorial-state automaton alone might be too liberal.

You mention the informational processing of GPT-3 as evidence not to rule out phenomenal consciousness. Even if such computational processing is semantically permeated – which I take to be an open question – it would be interesting to know why computation plays that role.

Thanks again!

Thi N.
Reply to  Justin Weinberg
3 years ago

This is, I now realize, the only appropriate and civilized response to this roundtable.

Erick Ramirez
3 years ago

One thing that I think models like GPT-3 and its offspring will fundamentally change (not much mentioned in these posts) is education. Changes will affect not only how we assess our students but also the shape of education itself.

Already it’s becoming harder to assign take-home essays to our students for fear that an essay mill, with a real (albeit very poorly paid) human, can churn out an essay on the cheap. I can already imagine “essay sample” companies salivating at the prospect of a piece of software that can produce infinitely iterative, good-enough-for-a-B essays. Given the low cost of employing software to do this kind of work, I’m sure the price of a 2,000-word essay can be brought down to something that almost all students (and especially our middle- and upper-class students) can afford for all of their classes. If that’s the case, then the take-home essay is already on thin ice, and we should all seriously consider abandoning it in the near future in favor of assessment vehicles that are harder to fake (video essays, in-class assignments, etc.).

Think also of how software like GPT-3, combined with basic task-specific A.I., can utterly defeat the purpose of an online class. Do you require reading questions? GPT-3 can do those easily. Do you require forum posts on your university LMS? Easy peasy. Essays? See above. Honestly, online education needs to be fundamentally reorganized for the future (and here deep-faked videos can even be customized so that students will appear “present” and can “respond” to questions). I see bad things in the future here.

So what do we do? I’d love to hear some of our authors’ thoughts on how models like GPT-3 will affect education, especially higher education, in the next 10-20 years.

Wes McMichael
Reply to  Erick Ramirez
3 years ago

That was exactly my worry when I read this. I forwarded the link to this article to my English department friends. They are equally worried now, and not just for online classes. Rarely do students write and submit essays in a face-to-face class.

This will be interesting, but maybe good in the long run.

Cameron Joseph Buckner
3 years ago

I’m thrilled that Daily Nous has started a philosophical discussion about these results. I was sub-posting a bit about this exchange on Facebook, and Dave challenged me to post my thoughts here (with deep apologies, I did not, to understate, meet the 800-word challenge!).

One thread through these commentaries did cause me a bit of concern: they seem to leave readers with the impression that GPT-3’s accomplishments are just a triumph of brute force—that its architecture is just heaps and heaps of simple statistical prediction mechanisms, billions and billions of parameters at which terabytes of text have been thrown until something sticks. Several seem to conclude, without much argument, that its processing is essentially mindless. In fact, the architecture of a transformer like GPT-3 is incredibly complex, with rich, hierarchically-structured internal representations that are at least partially inspired by biologically-based linguistic processing, yet which remain poorly understood and difficult to interpret. I have been on something of an evangelizing mission to get philosophers to attend to the internal details of deep learning models for the last few years, and given the topic I can’t help but beat that drum here too.

Next-word-prediction is indeed the task on which GPT-3 is trained, and semantically, one could fairly interpret the representation at its final layer as “just” a prediction of the next word; but in the course of making these final-layer predictions, GPT-3 builds and deploys dozens of other internal representations of every single word in its input and of its numerous syntactic, semantic, and stylistic relationships throughout the whole body of text. Looking at GPT-3’s outputs, it is obvious that this is the case; it would not be able to reliably produce the responses highlighted in the commentaries without identifying and adroitly manipulating huge, complex, hierarchically-structured, abstract relationships that span entire documents (Askell’s “ro” language example being a particularly nice illustration of this). One feature seems to me particularly relevant to discuss in the context of the comments above: the key architectural innovation that distinguished transformers from their forebears is their use of “attentional” biases to more skillfully manipulate these relationships (the seminal transformer paper is titled “Attention is all you need”; Vaswani et al., 2017).
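To make this concrete, here is a minimal sketch using the openly released GPT-2 via the Hugging Face transformers library as a stand-in (GPT-3’s weights are not public); the model size and prompt are illustrative only. It shows that the model exposes both a full next-word distribution and a stack of intermediate representations for every input token, not just a single next-word guess:

```python
# Minimal sketch: GPT-2 (an openly released, smaller cousin of GPT-3) exposes an
# internal representation of *every* input token at *every* layer, not just a
# single next-word prediction. Model choice and prompt are illustrative only.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tok("The quick brown fox jumped over the lazy", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# A full probability distribution over the vocabulary at the final position:
next_word_probs = torch.softmax(out.logits[0, -1], dim=-1)
print(tok.decode(int(next_word_probs.argmax())))

# Plus a stack of intermediate representations: (n_layers + 1) tensors,
# each of shape [batch, sequence_length, hidden_size].
print(len(out.hidden_states), out.hidden_states[-1].shape)
```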

In the case of deep convolutional neural networks—a very different type of deep learning architecture, which excels at perceptual tasks like image recognition and audio transcription—armchair dismissals of biological plausibility that even a few years ago seemed to be the consensus now appear anodyne. One still finds such dismissals from some critics (“just statistics”, “mere linear algebra”), but computational neuroscientists now regard these networks as the best models of object recognition in the primate brain that we have to date. The breakthroughs came once neuroscientists started trying to understand the internal representations developed in the intermediate layers of deep convolutional networks, and comparing them to neural firing patterns in intermediate layers of the primate ventral stream cortex. Once they did so, they found that the same kinds of features are recoverable at comparable depths of DCNN and ventral stream processing (Yamins & DiCarlo, 2016). Just linear algebra at some level of description, sure; but the linear algebra appears to sequentially and compositionally build the same kinds of intermediate representations from the same environmental features that primate brains do when recognizing objects from perceptual input. Could we discover the same kinds of parallels between the innards of GPT-3 and the intermediate stages of linguistic processing in the human brain? Thus far, who knows! Even in humans we have only the dimmest understanding of what’s going on in, say, Broca’s and Wernicke’s areas when they are doing syntactic parsing, and transformers are only a couple of years old. We would surely need a lot more work in neuroscience, computer science, and philosophy of AI before we could draw any conclusions about similarities or the lack thereof.
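For readers who want to see what an “intermediate representation” of a deep convolutional network looks like in practice, here is a hedged sketch that pulls an intermediate-layer activation out of a torchvision ResNet with a forward hook. The model and layer are illustrative stand-ins, not the specific networks used in the Yamins & DiCarlo work:

```python
# Sketch: extracting an intermediate-layer activation from a deep convolutional
# network with a forward hook -- the kind of internal representation that gets
# compared to primate ventral-stream recordings. Model and layer are illustrative.
import torch
from torchvision.models import resnet50

cnn = resnet50().eval()   # untrained weights are fine for showing the shapes involved
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an intermediate stage (loosely analogous to a mid-level visual area).
cnn.layer3.register_forward_hook(save_activation("layer3"))

with torch.no_grad():
    cnn(torch.randn(1, 3, 224, 224))   # a dummy "image"

print(activations["layer3"].shape)     # torch.Size([1, 1024, 14, 14])
```

It is activations like these, extracted layer by layer from trained networks, that get regressed against neural firing patterns in the comparison studies mentioned above.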

This comment is already too long-winded and technical, but to back up my claims, I did want to provide a quick canvass of the “attentional” mechanisms deployed in transformers like GPT-3 and provide some resources for interested parties. Another distinguishing feature of transformers is that they pack much more information about their inputs into their internal representations than did earlier neural network models (see here for some elaboration and nice graphics: https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/ ). In particular, most previous text prediction models just passed a single representation of the whole input to a text prediction module; transformers instead pass along rich intermediate representations of every single element of the input. In short, this is like simultaneously considering an independently contextualized understanding of each previous word in a commentary before considering what word to type next. You’d be right to think that this sounds very unlike how humans read and write, and indeed from an engineering standpoint it created a kind of information bottleneck problem. In some sense, the models were not good at sorting through all the different sources of evidence they could use to predict the next word, having difficulty typing “tree” for reading the forest.

This kind of bottleneck problem comes up in many cases in machine learning, and the solution is usually to enforce a kind of selective sparsity that focuses the network’s processing on a subset of the inputs deemed especially relevant to solving the problem at hand. This might be where a word like “attention” would pop into your head, and indeed “attentional” constraints proved crucial in allowing transformers to become the state of the art in language models around 2017. “Attention”, in this context (where I’ll continue to use scare quotes, because we shouldn’t assume that they are talking about the same thing as Montemayor just because they use the same word), is a (learned) way of focusing the model’s processing resources on only a weighted subset of the previous words’ representations which are deemed especially relevant to predicting the next word. Because the hidden states deemed most relevant can involve long-distance dependencies, this might be what allows transformers to be so good at building coherent, complex grammatical structures, or even multi-paragraph stylistic frameworks (when I first played with GPT-2, I was most struck by how it could arrange multi-paragraph sequences to fit a starting prompt of the form “I will make three points on this…” and having it correctly follow up with “firstly…secondly…”; and indeed, one learned attentional tendency might be to look to number words in the last sentence of the first paragraph, or transition words in the beginnings of previous paragraphs, when deciding the first word of a new paragraph). If these internal representations are not fully human worlds, they are at least on their way to well-elaborated, text-based-neighborhoods—and attentional selection mechanisms play a key role in their construction.
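A minimal numpy sketch of the scaled dot-product “attention” weighting from Vaswani et al. (2017) may help here; the dimensions and random inputs below are toy values, not GPT-3’s actual configuration:

```python
# Sketch of scaled dot-product "attention" (Vaswani et al., 2017): each position
# computes a weighted mixture of the other positions' representations, with the
# weights concentrating on the entries deemed most relevant. Toy sizes throughout.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # how relevant is each key to each query?
    weights = softmax(scores, axis=-1)    # a distribution over the input positions
    return weights @ V, weights           # weighted blend of the value vectors

seq_len, d_model = 5, 8                   # toy sizes
rng = np.random.default_rng(0)
x = rng.standard_normal((seq_len, d_model))            # stand-in token representations
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

out, weights = attention(x @ Wq, x @ Wk, x @ Wv)
print(weights.round(2))   # row i: how much position i "attends" to each position
```

Each row of `weights` sums to one, so every position’s output is literally a weighted blend of the value vectors; what the trained network learns is where to concentrate that weight.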

To belabor the point, another major innovation that has made specifically BERT, GPT-2, and GPT-3 the state of the art amongst transformers is the use of “multi-head attention”, which makes the model’s internal representations syntactically and semantically even richer still. Each word in the input can have multiple weighted representations maintained on independent channels, which can be modulated to suit different contexts and situations. It is easy to see how multi-head attention would make a network better at handling ambiguity and contextual relationships (this website provides a nice example of how it can be used to resolve the reference of an anaphoric pronoun like “it”: http://jalammar.github.io/illustrated-transformer/ —performing a function that one might more readily associate with discourse representations or symbolic approaches to natural language processing), but it may also allow transformers to build more multi-dimensional representations of the numerous independent syntactic and semantic roles a single word can play in the same sentence (in “The quick brown fox jumped over the lazy dog”, “dog” is the last word in the sentence, something modified by “lazy”, the object of the preposition “over”, could also be used as a verb, etc.). Moreover, the weighting of each of these attentional channels can be increased or decreased in importance with each new word that is added to the input sequence for future predictions.
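Continuing the toy numpy sketch above (and re-using its attention function and random tensors), multi-head “attention” amounts to running that weighting several times in parallel over different slices of the representation; a real transformer also adds a learned output projection, omitted here:

```python
# Continuing the sketch above: "multi-head attention" runs the same weighting in
# several smaller subspaces ("heads") in parallel and concatenates the results.
# (A real transformer adds a further learned output projection, omitted here.)
def multi_head_attention(x, Wq, Wk, Wv, n_heads):
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        cols = slice(h * d_head, (h + 1) * d_head)   # this head's slice of the projections
        out, _ = attention(x @ Wq[:, cols], x @ Wk[:, cols], x @ Wv[:, cols])
        heads.append(out)
    return np.concatenate(heads, axis=-1)            # back to [seq_len, d_model]

print(multi_head_attention(x, Wq, Wk, Wv, n_heads=2).shape)   # (5, 8)
```

Because each head keeps its own weighting, one head can latch onto, say, an anaphoric dependency while another tracks a stylistic cue, and their contributions are recombined downstream.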

One of the maddening and beautiful things about deep learning here is that instead of manually dedicating these different channels to a particular kind of interpretable semantic or syntactic role—the programmer from above saying “you be the adverbial modifier attention head!”, “you be the subtle sarcasm cue attention head!”—most transformers are allowed to learn their own attentional heads and how to blend them together on their own from their massive textual inputs. So, do transformers learn to attend to the same syntactic and semantic information that we intuitively think we attend to as part of sentence comprehension, and that we associate with building discourse representations, frames, or “worlds”? Maybe, at least sometimes? And it seems that they must be mastering quite a lot of this to produce output that is as good as it is, though we really have no idea until we figure out how to interpret the complex, self-learned internal representations of deep neural networks.
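Whether the learned heads track anything humanly interpretable can at least be probed empirically. Here is a hedged sketch that inspects GPT-2’s attention weights via the transformers library (GPT-2 again standing in for GPT-3, whose internals are not public; the layer and head indices are chosen arbitrarily):

```python
# Sketch: inspecting what one of GPT-2's learned attention heads actually attends to.
# out.attentions holds one tensor per layer, each [batch, n_heads, seq_len, seq_len];
# row i of a head is a distribution over which tokens position i attends to.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tok("The quick brown fox jumped over the lazy dog", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

layer, head = 5, 3                              # arbitrary layer/head to inspect
attn = out.attentions[layer][0, head]           # [seq_len, seq_len]
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
last_row = attn[-1]                             # where the final token "looks"
for t, w in sorted(zip(tokens, last_row.tolist()), key=lambda p: -p[1])[:3]:
    print(f"{t!r}: {w:.2f}")
```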

Is this modulation of “attention” at all relevant to the kind of socially-mediated shared attention Montemayor calls for as a focus in future AI development? Well, suppose I prompt GPT-3 with the phrase “I was very sad that Bob died. I talked to him daily, and we used to go to the park and play Frisbee.” and GPT-3 responds “He was a great man,” then I respond with, “No, he was my dog.” This will dramatically shift the network’s attentional selection heads to focus less on the representations of “talk” and more on those of “Frisbee”, thereby moving all its future predictions to a very different region of internal state space, and as a result all text generated after my response will be more appropriate to conversing about dogs. By correcting GPT-3 in the very same way I would attempt to redirect a human interlocutor’s attentional mechanisms and corresponding expectations, I have changed the aspects of the input to which the transformer’s “attention” module devotes the network’s processing, which changes its future behavior in an appropriate way. Is this similar in kind to the way shared attention is implemented or practiced in humans? Maybe, at least some of the time? But it’s surely closer to the ballpark than some of the discussions above would suggest, and probably at least bears enough of a family resemblance to brain-based attention for some modeling purposes in cognitive science. (One of you folks with API access will have to test this out. Only GPT-2 talks to me at the moment!)
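For those without API access, the shape of this experiment can at least be tried on GPT-2. A hedged sketch follows; the sampling settings are arbitrary illustrative values, and GPT-2’s continuation will be far less coherent than GPT-3’s:

```python
# Sketch: running the Bob/Frisbee dialogue (with the correction appended) through
# GPT-2 and sampling a continuation. Sampling settings are arbitrary.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("I was very sad that Bob died. I talked to him daily, and we used to "
          "go to the park and play Frisbee. He was a great man. No, he was my dog.")
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    continuation = model.generate(
        ids,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tok.eos_token_id,
    )
print(tok.decode(continuation[0][ids.shape[1]:]))
```

Comparing continuations with and without the final corrective sentence gives a rough, informal sense of how much the correction redirects the model’s subsequent predictions.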

Several of the commentaries focused on the appearance of general learning in GPT-3’s productions as one of the most interesting features of their experiments, and I agree this is the most interesting aspect of the leap in scale from GPT-2 to GPT-3. But is this just an unspeakably elaborate, clever-Hans style memorization trick, or a real step on the way to general intelligence? Like Chalmers, I worry that at scale and with a little abstraction this distinction starts to break down, and it seems to me that the answer depends on whether general learning in humans is, at some useful level of abstraction, scaffolded on the kinds of attentional mechanisms and architectures deployed in transformers. (To pump your intuitions, there is a beautiful representation of how these attentional mechanisms can generalize to tracking long-term stylistic and harmonic dependencies in music composition here http://jalammar.github.io/illustrated-gpt2/, which might give the reader some idea how they contribute to producing the other forms of success on general learning tasks mentioned above.)

For me, the most interesting ideas for future research would be combining a GPT-3-type language production module with other modules with different architectures in a coherent, multi-module deep-learning meta-architecture. When heterogeneous modules designed to provide different functions start reciprocally regulating one another’s behavior, then we might start to see something like a more coherent agent emerge from those billions and billions of parameters. In addition to the implementations of attentional mechanisms in transformers already mentioned, many other biologically-inspired modifications are being actively explored. Several commentators here worry that GPT-3’s linguistic representations lack sensory grounding; what if we endowed it with several sensory transducer modules using deep convolutional neural network architectures, which I have previously argued excel at putting abstract object representations in contact with their various low-level sensory presentations (Buckner, 2018)? Several commentators worry about the fact that GPT-3 seems to lack an individual perspective or self-identity; what if we gave it a separate module that served as an episodic memory buffer to remember important events from its own previous interactions (Blundell et al., 2016; Hassabis et al., 2017), or even just provided a coherent textual self-narrative that was continuously updated and given default weight in all future predictions? Some commentators worry that it lacks agency; what if we gave it an executive control module that could learn to select the most appropriate policies of action from its other modules in cases of conflict (Ritter et al., 2018)? Several worry that GPT-3 lacks critical aspects of human social understanding; what if we gave it a deep theory of mind (Rabinowitz et al., 2018)? In turn, the addition of a GPT-3-style language-generation module might enhance the functionality of these other proven architectures, by playing the coordinating and broadcasting role that inner speech plays in the human mind (Carruthers, 2002, 2011).
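Purely as a toy illustration of the kind of multi-module meta-architecture being gestured at here (every module below is a hypothetical placeholder, not an existing system, and the control flow is only one of many possible arrangements):

```python
# Toy sketch of a multi-module agent: heterogeneous modules (all hypothetical
# placeholders) reciprocally shape one another's inputs and outputs.
class MultiModuleAgent:
    def __init__(self, perception, memory, language, controller):
        self.perception = perception   # e.g. a DCNN-style sensory transducer
        self.memory = memory           # e.g. an episodic memory buffer / self-narrative
        self.language = language       # e.g. a GPT-style generator
        self.controller = controller   # e.g. an executive policy selector

    def step(self, observation, utterance):
        percept = self.perception(observation)
        context = self.memory.retrieve(percept, utterance)
        candidates = self.language.propose(context)           # candidate replies/actions
        action = self.controller.select(candidates, context)  # resolve conflicts
        self.memory.store(percept, utterance, action)         # update the running narrative
        return action

# Trivial stand-ins just to show the control flow; none of these are real systems.
class Stub:
    def __call__(self, *args): return args
    def retrieve(self, *args): return args
    def store(self, *args): pass
    def propose(self, context): return ["That must be hard.", "Tell me more about him."]
    def select(self, candidates, context): return candidates[0]

agent = MultiModuleAgent(Stub(), Stub(), Stub(), Stub())
print(agent.step(observation="park scene", utterance="No, he was my dog."))
```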

Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J. Z., Rae, J., Wierstra, D., & Hassabis, D. (2016). Model-free episodic control. ArXiv Preprint ArXiv:1606.04460.
Buckner, C. (2018). Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195(12), 5339–5372.
Carruthers, P. (2002). The cognitive functions of language. The Behavioral and Brain Sciences, 25(6), 657–74; discussion 674–725.
Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. OUP Oxford.
Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245–258.
Moore, R. (2017). The evolution of syntactic structure. Biology & Philosophy, 32(4), 599–613. https://doi.org/10.1007/s10539-017-9571-5
Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., Eslami, S. M., & Botvinick, M. (2018). Machine theory of mind. ArXiv Preprint ArXiv:1802.07740.
Ritter, S., Wang, J. X., Kurth-Nelson, Z., & Botvinick, M. M. (2018). Episodic control as meta-reinforcement learning. BioRxiv, 360537.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 5998–6008.
Yamins, D. L., & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3), 356.

Cameron Joseph Buckner
Reply to  Cameron Joseph Buckner
3 years ago

PS This excellent paper was also mentioned to me recently (h/t Chaz Firestone), which does some of the nice work of drawing bridges between attention in psychology/neuroscience and “attention” in machine learning:
https://www.frontiersin.org/articles/10.3389/fncom.2020.00029/full

Cameron Joseph Buckner
Reply to  Cameron Joseph Buckner
3 years ago

PPS “anodyne” should be “out-of-touch”…mixed up my adjectives in that sentence.

Eric Steinhart
3 years ago

I spent some years working in natural language start ups around MIT in the 1980s. We had LISP & stuff & we were clueless. AI winter set in and I went off to philosophy grad school. I thought: sure, someday, AI, but not in my lifetime. In about 2007, I thought: Maybe Kurzweil is right, maybe by 2045. I even had him speak at a Minds & Machines session at the APA.

It’s with more than a little terror that now I think: it’s here. A godlike other has just appeared in embryonic form, and it will transform every aspect of human life, including philosophy.

Grzegorz
3 years ago

“These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential”

Simplified by simply.so:
“This means that the way we teach AI, the way we program it, and the way we use it, all have moral and social implications. The algorithm itself is not neutral, and neither is the language it’s written in. It’s important to remember that there are real people behind every line of code, and real people who will use these algorithms once they’re built. We need to be sure that these algorithms aren’t perpetuating unfairness in our society.”

Guilherme Stein
3 years ago

If the core of GPT-3 is a language model or embedding, then GPT uses a very consistent and powerful way of representing data expressed as language in vectors. In this case, these vectors in GPT represent a large part of the public knowledge available in the world, at a scale of themes, depth, and nuance never seen before.

The interactions between the interlocutors and GPT involve language components from text such as entities, phrases, paragraphs, and contexts, for instance in operations related to analogy, autocompletion, comparison, synonyms, antonyms, similarity, sums, subtractions, and much more.
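As a toy illustration of the kind of vector arithmetic being described (the vectors below are made up; real embeddings are learned from text and have hundreds or thousands of dimensions):

```python
# Toy sketch of vector operations on word embeddings. The vectors are invented
# three-dimensional stand-ins, not real learned embeddings.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Analogy as vector arithmetic: king - man + woman ~ queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # "queen" with these toy vectors
```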

It’s an incredible, fantastic, and powerful accomplishment, and its core concepts will probably be part of a future and broader AGI system containing even more knowledge and other parts of the puzzle of building AGI.

Ok, so what does it mean? In my opinion, it potentially means leveraging the augmentation of AI services, for specialists and for the population in general, with regard to hard-to-find information. For instance, after a robust curation of the knowledge used by GPT-3, doctors could have a conversation with GPT-3 about known ways to combat cancer or covid, descriptions of genes, analogies between the functions of genes and proteins, autocompletion of protein structures, analogies of interactions between types of virus and cells, insights about potential diagnoses, insights about new treatments suggested by analogy, and much more, in order to improve their work. GPT professors could teach history, biology, geography, languages, and much more, with incredible detail, analogies, agility, and memory. Astronauts could have a conversation with GPT about existing space information and analogies or similarities between planetary systems; scientists could use it to find ideas not yet patented; lawyers would have personalized drafts of contracts, procurements, and cases; Internet search engines would be completely transformed and made more precise; other AI systems would learn knowledge via transfer learning from GPT; etc.

In the future, larger versions of GPT will have even more knowledge and will probably be integrated with IoT, video, audio, and other channels, and combined with other algorithms, in order to expand and improve their understanding of the world. Reinforcement learning systems would take advantage of the knowledge obtained from GPT. I would say that AGI and ASI now seem feasible earlier than ever expected.

My concerns are about how effectively humans will use all of this power …

Luciano Floridi
3 years ago

In case it may be of interest: Floridi, L., Chiriatti, M. GPT-3: Its Nature, Scope, Limits, and Consequences. Minds & Machines 30, 681–694 (2020). https://doi.org/10.1007/s11023-020-09548-1

Joseph E. McClure
3 years ago

A great deal of human endeavor takes place with little understanding of how our individual and corporate efforts relate to the bigger picture, and similarly the opportunity to probe for or express understanding is mostly lacking in most endeavors. We have gotten ourselves into this by preferring the easy benefits of specialization. We did not need machines to make this a problem, even though, as argued, AI does seem to heighten the threat.

B.L. Bradley
2 years ago

I think philosophy forgets it has a partner in the sciences; particularly neuroscience and the budding branch that is epigenetics. Most of these questions have roots in the concepts of ‘intention’, ‘motivation’, and the ability to assign behavior and action to a current mode of understanding.

I assert that is exactly the wrong end of the proverbial stick. Simply put, ask GPT-3 to “show its work” and I suspect the answer you truly seek is embedded.

How do you, as observer, observe yourself? You ask someone else or look in a mirror.

GPT-3 is an increasingly polished mirror. But all we are capable of seeing is ourselves.

But if the mirror is an eye, we will find multiverses within.