Back in July of 2020, I published a group post entitled “Philosophers on GPT-3.” At the time, most readers of Daily Nous had not heard of GPT-3 and had no idea what a large language model (LLM) was. How times have changed. (more…)
“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox.” (more…)
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT…)
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside? (more…)
“How to deal with GPT-3-written essays? Instead of scolding students not to use it, we ask them to generate ten, choose the best one, and explain why. Unless they have a paid account, the word-count limit would make it impossible to use GPT-3 to also generate the explanation…” (more…)
Over the past few years we have seen some startling progress from large language models (LLMs) like GPT-3, and some of those paying attention to these developments, such as philosopher John Symons (University of Kansas), believe that they pose an imminent threat to teaching and learning (for those who missed its inclusion in the Heap of Links earlier this summer…)
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann. (more…)