Tag: large language models
Now that OpenAI has made it possible for members of the (paying) public to customize its large language model (LLM), ChatGPT, with special instructions, extra knowledge, and particular combinations of skills, the prospects of using it to create useful, interesting, and maybe even insightful “model philosophers” have improved. (more…)
“We can apply scientific rigor to the assessment of AI consciousness, in part because… we can identify fairly clear indicators associated with leading theories of consciousness, and show how to assess whether AI systems satisfy them.” (more…)
“The humanities are… a gateway to and instigator of a lifelong activity of free self-cultivation. The changes they provoke in us are not always for the happier, or the more remunerative, or the more civically engaged, but when things go passably well, these changes are for the deeper, the more reflective, and the more thoughtful.” (more…)
How do large language models (LLMs) affect how we understand our job as teachers, and how do they affect what we should do in order to do that job well? (more…)
“ChatGPT has just woken many of us up to the fact that we need to be better teachers, not better cops.” (more…)
“There are good reasons to think that some AIs today have wellbeing.” (more…)
In light of the continued development and growing use of large language models (e.g., ChatGPT), other kinds of neural networks, generative agents, and the like, a group of scientists, mathematicians, philosophers, and other researchers have signed an open letter intended as a “wakeup call for the tech sector, the scientific community and society in general to take s…
“This text is the AI’s main source of information about the world as it is being built, and it influences how it responds to users.” (more…)
AutomatED, a guide for professors about AI and related technology run by philosophy PhD Graham Clay (mentioned in the Heap of Links last month), is running a challenge to professors to submit assignments that they believe are immune to effective cheating by use of large language models. (more…)
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” (more…)
“The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence.” (more…)
Back in July of 2020, I published a group post entitled “Philosophers on GPT-3.” At the time, most readers of Daily Nous had not heard of GPT-3 and had no idea what a large language model (LLM) is. How times have changed. (more…)
The Committee on Publication Ethics (COPE), whose standards inform the policies and practices of many philosophy journals and their publishers, has declared that “AI tools cannot be listed as an author of a paper.” (more…)
“What’s in this picture?” “Looks like a duck.” “That’s not a duck. Then what’s it?” “Looks more like a bunny.” (more…)
“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox.” (more…)
What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence? (more…)
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was origina…
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside? (more…)
Ongoing developments in artificial intelligence, particularly in AI linguistic communication, will affect many aspects of our lives in many ways. We can’t foresee all of the uses to which technologies such as large language models (LLMs) will be put, nor all of the consequences of their employment. But we can reasonably say the effects will be significant, and …
Over the past few years we have seen some startling progress from large language models (LLMs) like GPT-3, and some of those paying attention to these developments, such as philosopher John Symons (University of Kansas), believe that they pose an imminent threat to teaching and learning (for those who missed its inclusion in the Heap of Links earlier this summer, yo…