Say Hello to this Philosopher’s ExTRA


Appropriately enough, Luciano Floridi (Yale), known for his work in the philosophy of information and technology, may be the first philosopher with a… well, what should we call this thing?

It’s an AI chatbot trained on his works that can then answer questions about what he says in them, but also can extrapolate somewhat to offer suggestions as to what he might think about topics not covered in those works.

“AI chatbot” doesn’t quite capture the connection it has to the person whose thoughts it is trained on, though. Its creator gave it the name “LuFlot.” But we need a name for the kind of thing LuFlot is, since surely there will end up being many more of them, used for more than just academic purposes.

My suggestion: “Extended Thought and Response Agent”, or “ExTRA” (henceforth, just “extra”).

Floridi’s extra was developed by Nicolas Gertler, a first-year student at Yale, and Rithvik “Ricky” Sabnekar, a high school student, “to foster engagement” with Floridi’s ideas, according to a press release:

Meant to facilitate teaching and learning, the chatbot is trained on all the books that Floridi has published over his more than 30-year academic career. Within seconds of receiving a query, it provides users detailed and easily digestible answers drawn from this vast work. It’s able to synthesize information from multiple sources, finding links between works that even Floridi might not have considered.

In part, it’s like a version of “Hey Sophi”, discussed here three years ago, except that it’s publicly accessible, and not just a personal research tool.

Gertler and Sabnekar founded Mylon Education, “a startup company seeking to transform the educational landscape by reconstructing the systems through which individuals generate and develop their ideas,” according to the press release. “LuFlot is the startup’s first project.”

You can try out Floridi’s extra here.

7 Comments
Michae A Giove
1 month ago

Great tool, so much better than the traditional philosopher’s website!

Jake Wright
1 month ago

But as with all AI, how can we be confident the AI will do what it claims? Namely, that it “provides users detailed and easily digestible answers drawn from this vast work. It’s able to synthesize information from multiple sources, finding links between works that even Floridi might not have considered.” It’s prone to the same biases and hallucinations as any LLM because, as the developers themselves note in the app’s introductory text, it’s a stochastic calculator. It’s a more advanced version of hitting the predictive text button on your Messages app.

Like other uses of AI LLMs discussed on this website and elsewhere, this seems much more prone to convincing administrators and higher-ups that a computer can do our jobs for us at much lower cost than actually helping us understand and do philosophy well.

Slacker
1 month ago

Yep, this is going to be a thing. I made this point in another comment section, but here I go again. I suspect corporate entities may do this with their employees.

Feed a foundation model, or one that has already been fine-tuned a bit to fit the needs of the company (e.g., a model trained for coding at a software company), the text produced by employee A over their 10 years at the company. That text will be easy enough to scrape from Slack, GitHub, Teams, Zoom transcriptions, etc. Now you have an AI facsimile of that employee that you can just integrate into Slack.

This raises broadly metaphysical questions (personal identity, extended mind) but also raises obvious ethical and practical ones. Legal ones too. I work for a tech company and I have no clue if they are the de facto owners of all the text I produce at work (they probably are).

Marc Champagne
1 month ago

“In pro wrestling, working a gimmick is […] the commitment to character that propels the most gifted fabulists into superstardom.”

Daniel Weltman
1 month ago

I propose calling them PURE SHIT: Principally Untrustworthy Recreations Evincing Specious Hallucinations and Illusory Truths.

sahpa
1 month ago

Why hire your own philosopher of information, when you can just get Floridi for free?

Meme
1 month ago

This is obviously a horrific nightmare portending the future hellscape of academia. But it also makes me wonder whether professionalization—which involves mass publication, codification of writing styles and norms, a narrowing of focus and exclusion of various topics, etc.—was a mistake for philosophy. Of course, maybe we had no choice. It just seems to leave us more vulnerable to AI commodification, which in retrospect might have been inevitable for many professions.