Online Philosophy Resources Weekly Update
The weekly report on new and revised entries at online philosophy resources and new reviews of philosophy books…
- Propositional Logic by Curtis Franks.
- Xenophanes by James Lesher.
- Walter Chatton by Rondo Keele and Jenny Pelletier.
- Constitutionalism by Wil Waluchow and Dimitrios Kyritsis.
- Structural Realism by James Ladyman.
- The Compactness Theorem by Robert Leek and A. C. Paseau. (Revised)
- Imitation of Rigor: An Alternative History of Analytic Philosophy by Mark Wilson is reviewed by Katherine Brading.
Open-Access Book Reviews in Academic Philosophy Journals
- None this week.
Recent Philosophy Book Reviews in Non-Academic Media
- Plato of Athens: A Life in Philosophy by Robin Waterfield is reviewed by Jane O’Grady at The Telegraph.
- Free and Equal: What Would a Fair Society Look Like? by Daniel Chandler is reviewed by Jonathan Wolff at the Times Literary Supplement.
Compiled by Michael Glawson
BONUS: Can we create conscious machines?
Those interested in the revised constitutionalism entry in the SEP might also want to look at my recently updated bibliography for constitutionalism.
As for the bonus question, the short (but no less emphatic) answer is “no.” For some of the various reasons for reaching that conclusion, I recommend a book by a computer scientist who appears to have some background in or at least knowledge of philosophy and, even better, a philosophical temperament or disposition, and whose book covers an immense amount of territory while being clearly written: Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Belknap Press of Harvard University Press, 2021). If more of the people who often weigh in on this and related questions (about general intelligence, etc.) read this work, they might be a bit less confident, cocky, or adamant about their beliefs, intuitions, feelings, predictions, and the like. The exception, perhaps, is their more warranted fears about how big corporations will continue (or plan) to exploit AI for their own ends, in conjunction with heavily funded Big Science/Big Data endeavors along the lines of the Human Brain Project. Those ends are at once formed and constrained by capitalist imperatives that tend to run roughshod over democratic values and principles, as well as over ethical orientations that clash with, or critique, the corresponding techno-science and scientism wrapped in the shimmering guise of myths and sci-fi fantasies with all-too-Promethean roots.
> the short (but no less emphatic) answer is “no.” […] If more people who often weigh in on this and related questions (about general intelligence, etc.) read this work they might be a bit less confident or cocky or adamant about their beliefs, intuitions, feelings, predictions, and the like
Mostly in jest, but… that’s a pretty confident (and maybe just a bit cocky) “no.” Maybe you mean that other people, who disagree with you, will be less confident, cocky, or adamant after reading the book?