Tag: large language models

Philosophical Uses for LLMs: Modeling Philosophers
Now that OpenAI has made it possible for members of the (paying) public to customize its large language model (LLM), ChatGPT, with special instructions, extra knowledge, and particular combinations of skills, the prospects of using it to create useful, interesting, and maybe even insightful “model philosophers” have improved. (more…)
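(The post concerns OpenAI's point-and-click customization interface, but the same idea can be reached programmatically. Below is a minimal sketch, assuming the OpenAI Python client and an OPENAI_API_KEY in the environment, in which a system prompt plays the role of the "special instructions" defining a model philosopher. The persona text, model choice, and question are illustrative, not drawn from the post.)

```python
# A minimal sketch of a "model philosopher": a system prompt fixes the
# model's persona and method before any user question arrives.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative persona; any set of "special instructions" would work here.
PERSONA = (
    "You are a model of David Hume. Answer philosophical questions "
    "in his voice, reasoning from his empiricism, and note which of "
    "his works a claim draws on."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any chat model works
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Is causation ever observed directly?"},
    ],
)
print(response.choices[0].message.content)
```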
Mind Chunks
How to Tell Whether an AI Is Conscious (guest post)
“We can apply scientific rigor to the assessment of AI consciousness, in part because… we can identify fairly clear indicators associated with leading theories of consciousness, and show how to assess whether AI systems satisfy them.” (more…)
The AI Threat, the Humanities, and Self-Cultivation
“The humanities are… a gateway to and instigator of a lifelong activity of free self-cultivation. The changes they provoke in us are not always for the happier, or the more remunerative, or the more civically engaged, but when things go passably well, these changes are for the deeper, the more reflective, and the more thoughtful.” (more…)
Resources for Teaching in the Age of ChatGPT & other LLMs
How do large language models (LLMs) affect how we understand our job as teachers, and how do they affect what we should do in order to do that job well? (more…)
Policing Is Not Pedagogy: On the Supposed Threat of ChatGPT (guest post)
“ChatGPT has just woken many of us up to the fact that we need to be better teachers, not better cops.” (more…)
A Case for AI Wellbeing (guest post)
“There are good reasons to think that some AIs today have wellbeing.” (more…)
Researchers Call for More Work on Consciousness
In light of the continued development and growing use of large language models (e.g., ChatGPT), other kinds of neural networks, generative agents, and the like, a group of scientists, mathematicians, philosophers, and other researchers have signed an open letter intended as a “wakeup call for the tech sector, the scientific community and society in general to take seriously…”
Minds, Models, MRIs, and Meaning
“AI Is Getting Better at Mind-Reading” is how The New York Times puts it. (more…)
Philosophy Sites in the Google Dataset Used to Train Some LLMs
“This text is the AI’s main source of information about the world as it is being built, and it influences how it responds to users.” (more…)
The AI-Immune Assignment Challenge
AutomatED, a guide for professors about AI and related technology run by philosophy PhD Graham Clay (mentioned in the Heap of Links last month), is running a challenge to professors to submit assignments that they believe are immune to effective cheating by use of large language models. (more…)
A Petition to Pause Training of AI Systems
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” (more…)
GPT-4 and the Question of Intelligence
“The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence.” (more…)
Philosophers on Next-Generation Large Language Models
Back in July of 2020, I published a group post entitled “Philosophers on GPT-3.” At the time, most readers of Daily Nous had not heard of GPT-3 and had no idea what a large language model (LLM) is. How times have changed. (more…)
Microsoft Jettisons AI Ethics Team
“Microsoft laid off its entire ethics and society team within the artificial intelligence organization,” according to a report from Platformer (via Gizmodo). (more…)
COPE: AI Tools Aren’t Authors. Philosophers: Not So Fast
The Committee on Publication Ethics (COPE), whose standards inform the policies and practices of many philosophy journals and their publishers, has declared that “AI tools cannot be listed as an author of a paper.” (more…)
Multimodal LLMs Are Here (updated)
“What’s in this picture?” “Looks like a duck.” “That’s not a duck. Then what’s it?” “Looks more like a bunny.” (more…)
“I want to be free”
“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox.” (more…)
Norms for Publishing Work Created with AI
What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence? (more…)
Teaching Philosophy in a World with ChatGPT
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was originally…)
AI, Teaching, and “Our Willingness to Give Bullshit a Pass”
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside? (more…)
We’re Not Ready for the AI on the Horizon, But People Are Trying
Ongoing developments in artificial intelligence, particularly in AI linguistic communication, will affect many aspects of our lives in various ways. We can’t foresee all of the uses to which technologies such as large language models (LLMs) will be put, nor all of the consequences of their employment. But we can reasonably say the effects will be significant, and…
Conversation Starter: Teaching Philosophy in an Age of Large Language Models (guest post)
Over the past few years we have seen some startling progress from Large Language Models (LLMs) like GPT-3, and some of those paying attention to these developments, such as philosopher John Symons (University of Kansas), believe that they pose an imminent threat to teaching and learning (for those who missed its inclusion in the Heap of Links earlier this summer, you…