We’re Not Ready for the AI on the Horizon, But People Are Trying


Ongoing developments in artificial intelligence, particularly in AI linguistic communication, will affect many aspects of our lives in a variety of ways. We can’t foresee all of the uses to which technologies such as large language models (LLMs) will be put, nor all of the consequences of their employment. But we can reasonably say the effects will be significant, and we can reasonably be concerned that some of those effects will be bad. Such concern is rendered even more reasonable by the fact that it’s not just the consequences of LLMs that we’re ignorant of; there’s a lot we don’t know about what LLMs can do, how they do it, and how well. Given this ignorance, it is hard to believe we are prepared for the changes we’ve set in motion.

By now, many of the readers of Daily Nous will have at least heard of GPT-3 (recall “Philosophers on GPT-3” as well as this discussion and this one regarding its impact on teaching). But GPT-3 (still undergoing upgrades) is just one of dozens of LLMs currently in existence (and it’s rumored that GPT-4 is likely to be released sometime over the next few months).

[“Electric Owl On Patrol”, made with DALL-E]

The advances in this technology have prompted some researchers to begin to tackle our ignorance about it and produce the kind of knowledge that will be crucial to understanding it and determining norms regarding its use. A prime example of this is work published recently by a large team of researchers at Stanford University’s Institute for Human-Centered Artificial Intelligence. Their project, “Holistic Evaluation of Language Models” (HELM), “benchmarks” 30 LLMs.

One aim of the benchmarking is transparency. As the team writes in a summary of their paper:

We need to know what this technology can and can’t do and what risks it poses, so that we can have both a deeper scientific understanding and a more comprehensive account of its societal impact. Transparency is the vital first step towards these two goals. But the AI community lacks the needed transparency: many language models exist, but they are not compared on a unified standard, and even when language models are evaluated, the full range of societal considerations (e.g., fairness, robustness, uncertainty estimation, commonsense knowledge, disinformation) has not been addressed in a unified way.

The paper documents the results of a substantial amount of work conducted by 50 researchers to articulate and apply a set of standards to the continuously growing array of LLMs. Here’s an excerpt from the paper’s abstract:

We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models.

First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata) that are of interest for LMs. Then we select a broad subset based on coverage and feasibility, noting what’s missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness).

Second, we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency) for each of 16 core scenarios to the extent possible (87.5% of the time), ensuring that metrics beyond accuracy don’t fall to the wayside, and that trade-offs across models and metrics are clearly exposed. We also perform 7 targeted evaluations, based on 26 targeted scenarios, to more deeply analyze specific aspects (e.g. knowledge, reasoning, memorization/copyright, disinformation).

Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, including 21 scenarios that were not previously used in mainstream LM evaluation. Prior to HELM, models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: now all 30 models have been densely benchmarked on a set of core scenarios and metrics under standardized conditions.

Our evaluation surfaces 25 top-level findings concerning the interplay between different scenarios, metrics, and models. For full transparency, we release all raw model prompts and completions publicly for further analysis, as well as a general modular toolkit for easily adding new scenarios, models, metrics, and prompting strategies. We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models.
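The scenarios-by-metrics grid the abstract describes can be sketched in a few lines of Python. This is purely illustrative — the model names, scenarios, and scores below are hypothetical placeholders, not the HELM toolkit’s actual API — but it shows what “dense benchmarking” and the jump from 17.9% to 96.0% coverage amount to: filling in cells of a models × scenarios table, each cell scored on several metrics.

```python
# Illustrative sketch of a HELM-style scenario-by-metric evaluation grid.
# All model names, scenario names, and scores are hypothetical placeholders.

METRICS = ["accuracy", "calibration", "robustness", "fairness",
           "bias", "toxicity", "efficiency"]  # the 7 core HELM metrics

# results[model][scenario] -> {metric: score}; a real harness would fill
# this in by running standardized prompts against each model.
results = {
    "model_a": {"qa": {"accuracy": 0.71, "toxicity": 0.02}},
    "model_b": {"qa": {"accuracy": 0.64, "toxicity": 0.01},
                "summarization": {"accuracy": 0.58}},
}

def coverage(results, scenarios):
    """Fraction of (model, scenario) cells that have been evaluated."""
    cells = [(m, s) for m in results for s in scenarios]
    done = sum(1 for m, s in cells if s in results[m])
    return done / len(cells)

# 3 of the 4 (model, scenario) cells above are filled in -> 0.75
print(coverage(results, ["qa", "summarization"]))
```

On this picture, pre-HELM evaluation left most cells of the grid empty (and different papers filled in non-overlapping cells); the project’s contribution is evaluating nearly every cell under standardized conditions.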

One of the authors of this paper is Thomas Icard, associate professor of philosophy at Stanford. His work on the HELM project has been mainly in connection to evaluating the reasoning capabilities of LLMs. Something he emphasized about the project is that it aims to be an ongoing and democratic evaluative process: “it aspires to democratize the continual development of the suite of benchmark tasks. In other words, what’s reported in the paper is just a first attempt at isolating a broad array of important tasks of interest, and this is very much expected to grow and change as time goes on.”

Philosophical questions across a wide array of areas—philosophy of science, epistemology, logic, philosophy of mind, cognitive science, ethics, social and political philosophy, philosophy of law, aesthetics, philosophy of education, etc.—are raised by the development and use of language models and by efforts (like HELM) to understand them. There is a lot to work on here, philosophers. While some of you are already on it, it strikes me that there is a mismatch between, on the one hand, the social significance and philosophical fertility of the subject and, on the other hand, the amount of attention it is actually getting from philosophers. I’m curious what others think about that, and I invite philosophers who are working on these issues to share links to their writings and/or descriptions of their projects in the comments.


13 Comments
J Searl
2 months ago

This paper (https://arxiv.org/pdf/2209.00731), written by two philosophers, Atoosa Kasirzadeh and Iason Gabriel, also makes a solid contribution to this discussion from the philosophy of language/science side of things. It will be good to add this preprint to philosophers’ contributions on this topic, too.

Orwin O'Dowd
2 months ago

Some basic constraints on language models are wired so deep into the system that they pass unnoticed. Like four-factor personality theory / Talcott Parsons sociology = the standard mapping of political opinions, obviously in heavy use for ad targeting. Underlying that is the classic Galen-Dioscorides Animal Model shared with medical research; and Lacan took that from ethology for his Schemas of the symbolic order.

Next up would be the innate moral sense tracking all human behaviour through involuntary emotive expression and responses to the same. And that matches the recent perhaps surprising progress in machine-ethics oversight of social networks. Whether there is a Next Big Thing is now a real philosophical challenge. But I see no clear distinction from the now familiar range of modal logics / deontologies where epistemic models emerge, where the applications run strongly behind the scenes of current oversight technologies. So I’m not here to post any new position paper.

Just a warning that more interesting / nonlinear physiology, or physiologia in the older philosophical sense, runs over the infinite horizon where Kant dug in for an early form of logicism (in On Negative Quantities, powerfully contextualized in the Development of Analysis report), now seeming a recipe for failure. And x86 hardware snagged too at 486-era heavy computational multitasking, over which now looms China’s controversial flagship 5G multitasking tech!

So we get to the hard-core turn marked on cue by Elon Musk.

KittyCat
Reply to  Orwin O'Dowd
2 months ago

Was this post written by an LLM?

John Varty
2 months ago

I don’t know how relevant this is, but Randall Collins was writing about AI a long time ago…

Gordon
2 months ago

Agreed there’s not a ton by people in philosophy, but maybe this will be helpful. First, some of the interdisciplinary work is very, very good. Second, there are some emerging sites. For example, Shannon Vallor has a chair in AI Ethics at Edinburgh (this is where Atoosa Kasirzadeh is as well). Edinburgh is program-building and is worth following. The Oxford Internet Institute (run by Luciano Floridi) is generating a ton of AI ethics papers on a pretty regular basis.

There are also some individual philosophers worth following: Ramon Alvarado (Oregon) comes to mind (he’s got a nice paper on AI and radiology: https://doi.org/10.1111/bioe.12959); he and John Symons also have a recent Synthese paper on epistemic injustice in data science technologies (https://link.springer.com/article/10.1007/s11229-022-03631-z; I’ve got one on epistemic justice in AI datasets under review, preprint here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4137697). And of course Colin Koopman (also at Oregon) does great, Foucauldian work on data.

A lot of the work on big data and data ethics applies, other things being equal, to AI. That’s because current machine-learning AI systems train on vast amounts of data, so all the concerns about privacy (etc) are at least potentially relevant in an ML context.

More generally, my sense is that work on ethics, especially privacy and discrimination/bias, is where most of the initial philosophical attention has gone, and that work in epistemology and philosophy of language has somewhat lagged. But that may in part be just because robust NLP is really, really recent technologically. Image recognition was the prior big deal for AI, and there is quite a bit more work on that, and in particular on the datasets it uses.

There’s a lot of excellent work being written by computer scientists, law professors, STS folks, and so forth. For natural language processing, the paper by Emily Bender, Timnit Gebru and Margaret Mitchell – this is the one that got Gebru fired from Google a couple of years ago – is an essential start: https://doi.org/10.1145/3442188.3445922

I blog about AI/ML sometimes at NewApps, and try to link to papers that I’ve read and find interesting. And, yes, this is really politically and socially important.

Patrick S. O'Donnell
2 months ago

Re: “… evaluating the reasoning capabilities of LLMs.” I believe there is no such thing. Here are some reasons why I hold this belief:
 
I do not think that it is true or fair to say that “artificial intelligence” (hereafter, AI) replicates human cognitive abilities, although it may “replicate,” in a very attenuated or simply analogical sense, one aspect or feature of one particular cognitive ability: formal logical reasoning. And even then, inasmuch as human cognitive abilities do not function in isolation (in other words, as they work more or less in tandem and within a larger cognitive, affective, etc., human context), the replication that takes place in this case is not in any way emulative of human intelligence as such. AI is not about emulating or instantiating (peculiarly) human intelligence, but rather about the technological replication of those aspects of formal logic that can be mathematized (such as algorithms). It is thus a mistake to describe the process here as one of “automated reasoning” (i.e., AI machines don’t “reason”; they compute and/or process), if only because our best philosophical and psychological conceptions of rationality and reasoning cast a net—as the later Hilary Putnam often reminded us—far wider than anything that can, in fact or in principle, be scientized, logicized, or mathematized (i.e., formalized).
 
Moreover, AI only occurs as the result of the (creative, collaborative, managerial …) work of human designers, programmers, and so on, meaning that it does not in any real sense add up to the construction of “autonomous” (as that predicate is used in moral philosophy and moral psychology) machines or robots, although perhaps we could view this as a stipulative description or metaphor, albeit one still parasitic (and to that extent misleading) on conceptions of human autonomy. And insofar as a replication is in reference to a copy, in this case the copy is not an exact replica of the original. We should therefore be critically attentive to the metaphors and analogies (but especially the former) used in discussions of AI: “learning,” “representation,” “intelligence” (the adjective ‘artificial’ thus deserving more descriptive and normative semantic power), “autonomous,” “mind,” “brain,” and so forth; these often serve, even if unintentionally, to blur distinctions and boundaries, muddle our thinking, create conceptual confusion, obscure reality, and evade the truth. Often extravagant claims are made (by writers in popular science, scientists themselves, mass media ‘experts’ and pundits, philosophers, corporate spokespersons or executives, venture capital and capital investors generally, and so forth) to the effect that AI computers or machines possess powers uncannily “like us,” that is, they function in ways that, heretofore at least, were demonstrably distinctive of human (and sometimes simply animal) powers and capacities, the prerogative, as it were, and for better and worse, of human beings, of persons, of personal normative agency.
Recent attempts to articulate something called “algorithmic accountability,” meaning a concern motivated by the recognition that data selection and (so to speak) algorithmic processing often encode “politics,” psychological biases or prejudices, or stereotypical judgments, are important, but attributions of accountability and responsibility, be they individual or shared (in this case, almost invariably the latter), can only be human, not “algorithmic” in the first instance, hence that notion lacks any meaningful moral or legal sense unless it is a shorthand reference to the human beings responsible for producing or programming the algorithms in the first place.
 
A fair amount of the philosophical, legal, and scientific (including computer science) literature—both its questions and propositions—on AI (“artificial intelligence”), including robots, “autonomous artificial agents,” and “smart machines,” is replete with implausible presuppositions and assumptions, as well as question-begging premises, such that the arguments can be characterized by terms like implausibility, incoherence, and even “nonsense” (or failure to “make sense”). In short, the conceptual claims upon which its statements and premises depend, that which is at the very heart of its arguments, often make no sense; that is to say, they “fail to express something meaningful.”
 
Sometimes even the very title of a book will alert us to such nonsense, as in the case of a volume edited by Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009). The book title begs the question, first with the predicate “moral,” and correlatively, with the phrase “teaching robots right from wrong,” which depends upon concepts heretofore never applied outside human animals or persons (we can speak of ‘teaching’ at least some kinds of animals, but it makes no sense to speak of teaching them ‘right from wrong’). Thus it is eminently arguable whether or not we can truly “teach” robots anything, let alone a basic moral orientation, in the way, say, that we teach our children, our students, or each other, whether in informal or formal settings. The novelty of the claim, as such, is not what is at issue, although the radical and unprecedented manner in which it employs concepts and words should provoke presumptive doubts as to whether our authors have a clear and plausible picture of what it means for “us” to “be moral,” what it typically means for us to teach someone “right from wrong,” or how someone learns this fundamental moral distinction. We might employ such words in a stipulative or even theoretical sense, for specific and thus very limited purposes that are parasitic on conventional or lexical meaning(s), or perhaps simply analogical or metaphorical at bottom; but even in those cases, one risks misleading others by implying or evoking eminently questionable or arguable presuppositions or assumptions that make, in the end, for more or less conceptual confusion if not implausibility.
 
According to our editors, respectively a consultant and writer affiliated with Yale’s Interdisciplinary Center for Bioethics and a Professor of History and Philosophy of Science and of Cognitive Science, “today’s [computer] systems are approaching a level of complexity … that requires the systems to make moral decisions—to be programmed with ‘ethical subroutines’ to borrow a phrase from Star Trek” (the blurring of the lines between contemporary science and science fiction, or the belief that much that was once science fiction on this score is no longer fiction but the very marrow of science itself, is commonplace). This argument depends, I would argue, on a rather implausible model of what it means for us to make “moral decisions,” as well as on an incoherent or question-begging application of the predicate “ethical.” Wallach and Allen open the Introduction with the breathless statement that scientists at the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT) “are designing computers that can read human emotions,” as if this is a foregone conclusion awaiting technical development or completion. Human beings are not always adept at reading emotions insofar as we can hide them or simply “fake it,” as it were. In any case, as I will later argue, a machine cannot understand what constitutes a human emotion. For now, an assertion will have to suffice: The expression of emotions in persons is in fact an incredibly complex experience, involving both outward and inner dimensions (some of which are cognitive), biographical history, relational contexts and so forth, all of which are part, in principle or theory, of an organic whole, that is, the person. In the words of P.M.S. Hacker,
 
“Emotions and moods are the pulse of the human spirit. They are both determinants and expressions of our temperament and character. They are tokens of our engagement with the world and with our fellow human beings. [….] [T]he emotions are also perspicuously connected with what is, or is thought to be, good and bad. Our emotional proneness and liabilities are partly constitutive of our temperament and personality. Our ability to control our emotions, to keep their manifestations and their motivating force within the bounds of reason, is constitutive of our character as moral agents. So the investigation of the emotions is a fruitful prolegomenon to the philosophical study of morality. It provides a point of access to the elucidation of right and wrong, good and evil, virtue and vice, that skirts the morass of deontological and consequentialist approaches to ethics without neglecting the roles of duties and obligations, or the role of the consequences of our actions in our practical reasoning.”
 
Let’s pause to consider, in a rough and preliminary manner, the role of emotions from the vantage point of human nature or philosophical anthropology. This should help us appreciate the extravagance or implausibility of the claim that AI machines soon will, in fact, or might, in principle, “read human emotions.” Here I rely upon P.M.S. Hacker’s third volume in a planned tetralogy on human nature as seen from the vantage point of philosophical anthropology, The Passions: A Study of Human Nature (2018), from which the passage quoted above is drawn. For the remainder of this argument, please see this post from last year: https://www.religiousleftlaw.com/2021/04/this-is-a-revised-version-from-something-i-posted-several-years-ago-i-was-inspired-by-to-return-to-this-topic-after-dippin.html
 
And a cluster of arguable presuppositions and assumptions can be traced back to Alan Turing, or at least propositions attributed to or believed to be derived from his work: https://www.religiousleftlaw.com/2022/04/assessing-some-of-the-arguments-of-ais-first-philosopher-alan-turing.html
 
 
 

Naive Skeptic
Reply to  Patrick S. O'Donnell
1 month ago

“Moreover, AI only occurs as the result of the (creative, collaborative, managerial …) work of human designers, programmers, and so on, meaning that it does not in any real sense add up to the construction of “autonomous” (as that predicate is used in moral philosophy and moral psychology) machines or robots,”

Moreover, NI (natural intelligence) only occurs as the result of the (reproductive, educational, inspirational …) work of human parents, educators, and so on, meaning that it does not in any real sense add up to the construction of “autonomous” (as that predicate is used in divine philosophy and divine psychology) organisms or people

Clearly only gods can be actually autonomous.

Patrick S. O'Donnell
Reply to  Naive Skeptic
1 month ago

Autonomy is not an absolute condition (hence one might be more or less autonomous); indeed, one learns the meaning of moral-psychological autonomy through its exercise in the course of familial socialization, education, and moral training (which often cites exemplars of same), in the course of thinking for oneself, in the process of becoming a unique individual person (hence self-determination). More than a few philosophers, after Kant (and others), have offered more than plausible conceptions of and arguments on behalf of this (or these) notion(s). Freud termed it “strength of the ego” (for an incisive discussion of what this meant for Freud, see Ilham Dilman’s brilliant book, Freud, Insight and Change [Basil Blackwell, 1988]; and while Freud did not agree with Kant about practical moral reasoning, he held fast to a notion of autonomy that is eminently defensible if not persuasive). It is fundamental to our exercise of truly human agency and our capacity for practical reasoning, as well as to various compelling conceptions of freedom. There is a fair amount of philosophical and psychological literature describing this condition and proffering arguments on its behalf. You might attempt to familiarize yourself with it, rather than relying on a straw man or a caricature.

Naive Skeptic
Reply to  Patrick S. O'Donnell
1 month ago

That seems reductive and essentialist, but let’s grant everything stated here. “It is fundamental to our exercise of truly human agency and our capacity for practical reasoning, as well as various compelling conceptions of freedom.”
So what?
A submarine doesn’t swim like a fish, and a plane doesn’t fly like a bird. An AI might well not think like a human or have agency like a human. But if it is able to pursue goals, then it can still affect the world, even if it’s not a “human” agent. And we know they can affect the world. Social media algorithms explore possibility space like the roots of a tree growing through soil. This may be very different from how humans operate, but such an algorithm is still able to adjust the parts of the world it has control over in order to move the world in line with its goals. In practice this means shaping the social media feeds of users so that they have more ad exposure, maximizing ad revenue. (It also makes the users more predictable and radicalized, but that’s another topic.)
While existing AIs may be limited, it is clear that they are becoming more and more general agents over time, even if the type of agents they’re becoming are not human agents.
Whatever mystical or metaphysical core of human agency might exist, and that Kant and Freud seem to think makes humans special, doesn’t seem especially relevant to this conversation.

Rosanna Festa
2 months ago

A great project that enlists mathematics.

David Wallace
1 month ago

I passed Justin’s invitation for comments on to a colleague. They don’t (currently) have internet access so I’m sharing their reply on their behalf.

“As an academic philosopher, I completely agree that there are a wide array of philosophical questions raised by the development and use of language models and efforts to understand them. From philosophy of science and epistemology to ethics and social and political philosophy, these issues touch on many different areas of philosophy.

I also agree that there is a mismatch between the social significance and philosophical fertility of these issues, and the amount of attention they are currently getting from philosophers. It is important for philosophers to engage with these issues in a thoughtful and rigorous way, in order to provide insights and frameworks for understanding the ethical, philosophical, and social implications of these developments in AI.

I am currently working on a project that examines the ethical implications of large language models, and I believe that this is an area that deserves more attention from philosophers. I would be happy to share more details about my project in the comments, and I would also be interested in hearing from other philosophers who are working on similar issues.”

David Wallace
Reply to  David Wallace
1 month ago

Just realized I should have included a link to their website.

John Basl
1 month ago

Northeastern has been doing a lot of hiring in this area, and we have some extremely talented early-career scholars working on these questions. I’d really suggest that people take a look at the work of Katie Creel, Sina Fazelpour, and Chad Lee-Stronach. I think, in general, there are a growing number of early-career scholars doing excellent work in this area. That said, there aren’t many places one can go to get training in AI ethics, and the demand (e.g., on the market) for scholars working on it is really high.

Starting this summer, Northeastern will host an NSF-funded graduate training program for AI and data ethics. We’re really excited about this. Please do share this with graduate students that might be interested.

Here’s a link to the program: https://cssh.northeastern.edu/ethics/summer-grad-training-program/