Logic Courseware, Surveyed (guest post)


What materials exist for teaching large introductory logic courses, and how do they compare?

This summer, Curtis Franks, associate professor of philosophy at the University of Notre Dame, has taken up the task of examining a wide range of logic texts and software, many of them free and open-access. He has long made use of the popular Language, Proof and Logic package, but, as he explains in the first part of this guest post, he has some complaints about it and is looking for an alternative. In the second part of the post, he shares his reviews of over 20 other logic courseware options.


[Illustrations of logicians by Matt Leadbetter for the Open Logic Project.]

Logic Courseware, Surveyed
by Curtis Franks

Like many logic instructors, I am a long-time user of the multimedia package Language, Proof and Logic (LPL) by Jon Barwise, John Etchemendy, and Dave Barker-Plummer. I believe I taught a full course based on this package at least once in each of the past twenty years. It has its merits, including a lightly-worded, easily read and easily navigated textbook, a user-friendly graphical interface for model construction, a point-and-click natural deduction proof editor with drop-down menus for rule templates, and auto-generated grade reports. Students and teachers alike love these things, and LPL was a pioneer in most of them. Some experts have complained about the very features many others adore. Van Benthem had LPL and similar packages in mind when he wrote: “Sometimes the new technology is used to create high-tech versions of largely standard fare in the traditional curriculum with, say, sophisticated graphics interfaces for classical natural deduction proof systems, like a Latin Mass with rock guitars.”* But this put-down might not resonate for someone who has listened to much zeuhl or who has had to grade for enrollments greater than 50 students.

Van Benthem has another gripe with LPL: “[S]ometimes also, there is ideological fervor behind the effort: the course designers have a special research agenda with their own view of logic, modifying or changing existing curricula, and they want to export their revolution by by-passing the academic colleagues and instead of that, influencing the youth.” As one might have guessed, this is only a criticism if you do not share the course designers’ ideology. Van Benthem admits that his own “course Logic in Action falls in the second activist category.” More on Logic in Action anon.

1. Complaints about Language, Proof, and Logic (LPL)

My own complaints with LPL are different from these. If they resonate with you, then you might be wondering what other options for teaching large introductory logic courses are out there and how they compare on these measures. So I’ll first list my grievances with LPL in order to frame the discussion of alternative logic packages I have surveyed. In increasing order of toxicity:

(i) Inevitably between 1 and 5 percent of students experience some sort of software installation glitch, delaying their engagement with the course. There doesn’t seem to be anything wrong with the software, and these students seem to be pretty technologically engaged otherwise, but I have not once taught a section of students none of whom experienced this sort of problem.

(ii) Dorothy Edgington calls the material conditional analysis of indicative mood conditional statements “logic’s first surprise.” She’s right that “it does not strike students as obviously correct.” I do not know which is worse: declaring that indicative mood conditional statements are expressions of implication relations or insisting in a pedagogical setting that the classical, bivalent, truth-functional analysis of such statements is correct. The latter is surely more disingenuous. The authors of LPL keep an admirable distance from the first obscenity but crash head-on into the second one. I do not think it is conscionable to teach with the LPL package without issuing a disclaimer about the storied history of thoughtful reasons to question the “material” analysis of conditionals. My own disclaimer has expanded over the years to include many of Edgington’s own examples and her brilliantly simple proof of Lewis’s theorem about conditional probabilities.
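If you have not seen it, the argument runs as follows in one standard form (I won’t vouch that this is exactly Edgington’s presentation). Write A → B for the indicative conditional, whatever its semantics. Suppose P(A → B) = P(B | A) whenever P(A) > 0, both for a probability function P and for the functions obtained from P by conditionalizing on B and on ¬B, and suppose P(A ∧ B) and P(A ∧ ¬B) are both positive. Expanding by cases:

P(A → B) = P(A → B | B)·P(B) + P(A → B | ¬B)·P(¬B)
= P(B | A ∧ B)·P(B) + P(B | A ∧ ¬B)·P(¬B)
= 1·P(B) + 0·P(¬B)
= P(B)

By the hypothesis the left-hand side equals P(B | A), so P(B | A) = P(B): the equation forces antecedent and consequent to be probabilistically independent whenever the relevant probabilities are positive, which no nontrivial probability function can sustain.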

(iii) LPL leans very heavily on a “standard interpretation” of a collection of predicate symbols from what is called “the blocks language.” This sets up the arena described above as a very attractive user interface for model construction: a rotating chessboard on which one can position blocks of various shapes and sizes in order to construct a “Tarski world” that satisfies or provides a counterexample to a claim like “Every large cube is larger than a small dodecahedron to its left.”

While the exercises involving proof construction are standard fare, the Tarski’s World environment is the site of LPL’s most creative problem solving challenges. And it does foster a good grasp of concepts like satisfiability. But its pervasiveness in the course definitely implants a systematic confusion in some students between the concepts of “truth in all first-order models” and “truth in all possible Tarski worlds.” Compare the distinction between pure first-order logic and “first-order logic with identity.” In pure first-order logic, the equals sign gets treated as a binary relation symbol like any other. Expressions like “a=a” are not valid, because in the interpretation where candidate objects are people who have lived on Earth and the symbol “=” is associated with the relation “… is the grandfather of …,” it is false on nearly all assignments of objects to the constant symbol “a.” In first-order logic with identity, by contrast, “a=a” is valid, because here the equals sign is always interpreted as literal identity among objects. It is important in logic to have on some occasions standard interpretations of some designated vocabulary, like the equals sign. But pedagogically it is unclear that using a language in which the entire vocabulary is so designated is advisable. Instructors’ words of caution and occasional reminders do little to counteract the habituation that hundreds of weekly practice problems instill.
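To make the contrast concrete, here is a minimal sketch (my illustration, not LPL’s; the domain and names are made up) of evaluating “a=a” under the grandfather interpretation:

# Minimal sketch: in pure first-order logic "=" is just another binary
# relation symbol. Interpret it as "... is the grandfather of ..." and
# "a=a" fails on every assignment. Domain and names are illustrative.

domain = {"abe", "homer", "bart"}
grandfather_of = {("abe", "bart")}  # abe is bart's grandfather

def satisfies_a_equals_a(referent):
    # "a=a" holds iff the pair (referent, referent) is in the relation
    return (referent, referent) in grandfather_of

# Prints False: no assignment to "a" makes "a=a" true here,
# since no one is their own grandfather.
print(any(satisfies_a_equals_a(d) for d in domain))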

(There is also an oddity about the standard interpretation itself. Can 9 distinct objects be in the same column? “Yes.” Apparently contingent features of the chessboard are not built into the meanings of the predicates. If a is neither a tetrahedron nor a cube, must it be a dodecahedron? “Yes.” Apparently the fact that Tarski’s world only hosts three shapes of block is. Can two large objects be adjacent? “Too hard to check.”)

(iv) One of the most powerful features of LPL is a verifier for logical consequence. In the Fitch program, whose main purpose is the construction of natural deduction proofs, it is possible to run queries of the form, “Is sentence C a classical first-order logical consequence of sentences A and B?” Not even Alonzo Church could write a program that answers all queries of this form correctly, so as one expects, LPL often responds with a question mark instead of an affirmative checkmark or repudiating X. But it does provide a correct answer for a surprisingly large class of sentences, even ones with a bit of quantifier complexity. It detects the classical validity of the “drinker sentence” in a second, and more generally it handles most of the exercises from part 2 of the text. A typical exercise is of the form “If C follows from A and B, build a proof of it. Otherwise build a counterexample world.” Perceptive students quickly catch on to the idea of first running a first-order consequence query so that they will know which of these two tasks to bother with.
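For the curious, the drinker sentence is ∃x(D(x) ⊃ ∀y D(y)), and its classical validity is easy to confirm by brute force over small finite domains; a sketch of my own, not part of LPL or its exercises:

from itertools import product

# Check the drinker sentence on every interpretation of D over small
# nonempty finite domains. This confirms validity only for the sizes
# tested; the general classical proof covers all models.

def drinker_holds(domain, D):
    # ∃x: D(x) ⊃ ∀y D(y), with ⊃ read materially
    return any((x not in D) or all(y in D for y in domain) for x in domain)

for n in range(1, 6):
    domain = range(n)
    for bits in product((0, 1), repeat=n):
        D = {x for x, bit in zip(domain, bits) if bit}
        assert drinker_holds(domain, D)
print("drinker sentence holds in every tested interpretation")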

But what if you run a query about sentences that aren’t from the text? One of my favorite basic logic questions is to determine of each of these formulas whether it is unsatisfiable, finitely satisfiable, or satisfiable but false in every finite model:

(a) ∃y∀x∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))

(b) ∀x∃y∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))

(c) ∀x∀z∃y(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))
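If you want to experiment with these yourself, a brute-force search over small finite models is easy to write. Here is a rough sketch of my own (not tied to any of the packages discussed): it can at best witness finite satisfiability, silence from it settles nothing about larger or infinite domains, and I leave the statuses of (a) through (c) for the reader to work out.

from itertools import product

# Enumerate every binary relation R on domains of size 1 through 4 and
# test each of the three quantifier variants. Finding a model witnesses
# finite satisfiability; finding none proves nothing beyond size 4.

def body(R, x, y, z):
    # R(x,y) ∧ ¬R(x,x) ∧ (R(y,z) ⊃ R(x,z))
    return (x, y) in R and (x, x) not in R and ((y, z) not in R or (x, z) in R)

def a(D, R):  # ∃y ∀x ∀z
    return any(all(body(R, x, y, z) for x in D for z in D) for y in D)

def b(D, R):  # ∀x ∃y ∀z
    return all(any(all(body(R, x, y, z) for z in D) for y in D) for x in D)

def c(D, R):  # ∀x ∀z ∃y
    return all(any(body(R, x, y, z) for y in D) for x in D for z in D)

def find_model(formula, max_size=4):
    """Return (size, R) for the first model found, else None."""
    for n in range(1, max_size + 1):
        D = range(n)
        pairs = [(x, y) for x in D for y in D]
        for bits in product((0, 1), repeat=len(pairs)):
            R = {p for p, bit in zip(pairs, bits) if bit}
            if formula(D, R):
                return n, R
    return None

for name, formula in (("a", a), ("b", b), ("c", c)):
    print(name, find_model(formula) or "no model on domains of size 1-4")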

Several years back, a very good student got this problem wrong because he fed the sentences and their negations into the LPL software to see if any of them were unsatisfiable and trusted the answers that the software provided. After he brought this to my attention, we poked around some more, “discovering,” among other oddities, that (b) implies (c), that (c) implies a contradiction, and that (b) does not imply a contradiction. I don’t want to spoil the problem for interested readers, but I don’t think I am revealing too much to point out that these three things cannot all be correct. In the eight years or so that have passed, as LPL has been upgraded, I have checked periodically to see that these erroneous assessments remain in place, and I have noted several other similar errors. It seems bad that this educational package would produce errors of this sort. Nothing in the text or software documentation warns users about the problem.

(v) But the thing that weighs most heavily on me when I teach from LPL is its bizarre nomenclature for the natural deduction rules. There is a history behind the prominence of natural deduction in basic logic curricula. In fact, F.J. Pelletier wrote such a history, and one can see already in its abstract that it is a history “from the view that [its] founders embraced to the widespread acceptance of the method in the 1960s” (emphasis added). Gerhard Gentzen‘s ideas are what were left behind in the series of accidents that led to the appearance in logic textbooks of the technology he invented.

What ideas do I have in mind? Well, Gentzen said many things about natural deduction. It is modeled on the way people actually reason. Its individual rules were abstracted from the inferences that appear in published mathematical manuscripts. It frees formal logic from the need for axiomatic foundations, embracing instead the hypothetical nature of logical inference. For these reasons and a few others, it came to be believed that the value of natural deduction lay in its ability to facilitate the teaching of logic. Because they emulate patterns of natural inference, students will take to its rules and the method of constructing proofs with relative ease.

If this is one’s attitude, then of course there is always room to do Gentzen one better. For example, students do not typically find it intuitive to create an elaborate network of subproofs and to invoke the principle ex falso quodlibet, as Gentzen’s system demands, in order to establish the truth of B from the premises “A or B” and “not A.” They want to establish the inference forthright, as it seems more natural than any one of the steps in Gentzen’s system that are meant to justify it. So why not make this “disjunctive syllogism rule” a primitive component of the proof system? A more natural natural deduction.

Gentzen had another idea that is seldom encountered in textbooks. The way he put it is unfortunately a bit clumsy, but the idea has been refined and clarified into one of the major conceptual developments of 20th century logic, so we can see retrospectively just what he meant. It is well known that Gentzen’s system had, for each logical constant, an introduction rule and an elimination rule. His clumsy statement was that the introduction rules “provide the definition of” the logical constants and that the elimination rules are “consequences of” the introduction rules.

Consider the rule modus ponens. It might strike you as something more than one among the countless valid inferences one can draw from a statement governed by the conditional operator. To believe “if A, then B” is to be prepared to believe B as soon as you come to know A. That’s what “if … then” means. And that’s what modus ponens ( A ⊃ B, A ⊢ B ) says. Now, the rule itself doesn’t say that it is the definition of the conditional. So when I say that it is the definition of the conditional, I am saying something more than what the rule says. I am saying something about the rule. I am saying both that A ⊃ B, A ⊢ B is correct and that if any other sentence can stand in the same inferential relationships that A ⊃ B stands in—i.e., can fill the role of A ⊃ B by being put in the blank in this sequent ____, A ⊢ B to say something correct—then that sentence has the meaning of the conditional “built into it.” That other sentence must be at least as strong as A ⊃ B, in the logical sense of standing in the premise position of at least as many inferential relations. Another way of putting this is to say that, for all C, if C, A ⊢ B, then C ⊢ A ⊃ B.

So what we have just shown is that the rule

for all C, if C, A ⊢ B, then C ⊢ A ⊃ B

although it does not follow from the rule

A ⊃ B, A ⊢ B

does follow from the conception of that rule as definitive of the ⊃ symbol. You probably recognize these two rules as the introduction and elimination rules for the ⊃ symbol in natural deduction.
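Under the Curry-Howard correspondence (my gloss, found in neither LPL nor Gentzen), these two rules are simply function application and abstraction; a minimal Lean sketch:

-- ⊃-Elim (modus ponens) is function application: from A ⊃ B and A, get B.
example (A B : Prop) (h : A → B) (a : A) : B := h a

-- ⊃-Intro says that any C standing in the same inferential relation,
-- i.e., with C, A ⊢ B, is at least as strong as A → B.
example (A B C : Prop) (h : C ∧ A → B) : C → (A → B) :=
  fun c a => h ⟨c, a⟩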

One should have a reason for introducing university students to logic with a natural deduction style proof system. There are alternatives. Axiomatic systems are far easier to describe and to reason about, even if impractical to use. Semantic tableaux or truth trees are simpler and lead to extremely straightforward arguments for the completeness theorem and related results. One bad reason to introduce students to natural deduction instead of these alternative frameworks is that it is what the instructor happens to be most familiar with. A better, though debatable, reason is that it will be easiest for the student to use in the construction of proofs.

My reason, especially in the context of a philosophy curriculum, is that the idea of defining a logical connective with an intro/elim pair of rules is a profound discovery, one of the great 20th century conceptual breakthroughs in logic. It is an idea that generalizes throughout modern mathematics (and beyond) in the concept of a “universal property.” Natural deduction is an occasion for students to experience the phenomenon of defining something in terms of its situation in the network of relations it bears to other things of the same type, just as truth-functional semantics is an occasion for students to experience the phenomenon of defining something in terms of the abstract object it refers to or its internal constitution. You can think of these as the relational and essential styles of definition.

When Gentzen tabulated his set of introduction and elimination rules for the standard connectives ∧, ∨, ⊃, ¬, ∀, ∃, he drew our attention to two other inference figures that do not fall into this scheme: ex falso quodlibet, and the double negation rule. Concerning the latter he emphasized that although it is a method of reasoning with negations, it cannot be justified by virtue of the intro/elim rules for that connective. In other words, the double negation rule is not licensed by the meaning of the negation symbol, when one thinks of definition relationally. But of course, the double negation rule is justified on the classical truth-functional definition of the symbol. So apparently, the relational and essential schemes of definition come apart.

Now consider the names of natural deduction rules in LPL. For conjunction, disjunction, and the quantifiers, the text adheres to Gentzen’s original taxonomy. But it is clear that the whole idea of relational definition is lost on the authors when one sees that the double negation rule is there labeled “¬-Elim.” Gentzen’s whole point had been that this rule “falls outside the Intro/Elim scheme.” Similarly, the rule Gentzen called “¬-Elim” (because it, together with the rule he called “¬-Intro”, provides a relational definition of the negation symbol) is labeled in LPL “⊥-Intro.” LPL calls the ex falso quodlibet rule “⊥-Elim”. Never mind that it does not pair with the previous rule to provide the relational definition of anything. Again, Gentzen took it to be a theoretical discovery of great moment that ex falso quodlibet does not fall under and cannot be justified within the intro/elim framework. This means, among other things, that the disjunctive syllogism rule cannot be justified by relational definitions of the ¬ and ∨ connectives alone. Who would have thought?!
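To keep the double-talk straight, the correspondence just described can be tabulated (my summary of the preceding paragraph, nothing more):

Rule                               Gentzen's taxonomy     LPL's label
double negation (¬¬A ⊢ A)          outside the scheme     ¬-Elim
A, ¬A ⊢ ⊥                          ¬-Elim                 ⊥-Intro
ex falso quodlibet (⊥ ⊢ A)         outside the scheme     ⊥-Elim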

For years I taught with the LPL package using its nomenclature. But teaching natural deduction without mentioning anything about the theory of relational definition is like teaching music theory from the stave without ever discussing what anything sounds like. A lot of the terms are Italian anyway, so one could shuffle them randomly without anyone catching on (well, maybe not at my institution). I prefer to play the “New World Symphony” very loudly in class at least twice by mid-term. So over time I began referring to the real names of the natural deduction rules in my lectures, “and here you need a ‘¬-Elim’—what the book and software call ‘⊥-Intro’.” In part because the double-talk is so exhausting, and in part because proof construction strategy is actually hindered by the mislabeling, in recent years I have just made a one-time disclaimer after which I use the real names exclusively.

This problem is far from unique to LPL. Early in Pelletier’s survey, he declares that he will call the double negation rule “¬-Elim” “without deviating too much from Gentzen, but being more in line with later developments” (p. 9). Why? Because “most textbook writers do not follow the int-elim ideal, but instead have a large number of ‘overlapping’ rules (presumably for pedagogical reasons)” (p. 10). He is correct about most textbooks. But it cannot be maintained that this is a minor deviation from Gentzen’s thought if the idea of relational definition has ever sunk in. And whatever pedagogical reasons might lead one to innovative relabeling and supplementing of Gentzen’s scheme are arguably offset by the pedagogical good of introducing students to the beautiful theoretical idea behind natural deduction. So I am making the argument. (I am not alone. Richard Zach told me just this week: “The desire to have ND with standard rule names was one of the reasons to make my own version of [one of the open textbooks surveyed below] forall x.”)

Those are my main gripes with LPL. As I said, though, I really like the project. I am not a multi-media sort of person, but the software interfaces are great for students, steering them away from sloppy formatting errors that hinder many students and have little to do with the serious ideas we want them to focus on. And of course I like the automated grading. So I am interested in projects of a similar spirit for classroom use. It shouldn’t be surprising that there is stuff going on in the vein LPL opened over two decades ago. I dug around a little. The next section covers what I found.

2. Reviews of Other Logic Courseware

First of all, there is an old blog post by Richard Zach here. You will see that most of the links are dead and some of the information is otherwise outdated. Thankfully, there is this page hosted by the Open Logic Project, most recently updated by… Richard Zach. Here the focus is on freely available (and openly licensed) texts. There are several reasons why one might want to prioritize these: price, customizability, integration. Here is what I mean by integration: People like Sean Walsh are writing books specifically to work with the Carnap software. People like Graham Leach-Krouse are writing procedures for Carnap called .ZachTFL2019 specifically to format proofs according to the style of the Calgary “remix” of P. D. Magnus‘s open textbook. I will add a reason that might not have occurred to you: this is where a lot of the action is right now.

So, let me first cover some texts that are both free and open access.

A Concise Introduction to Logic
by Craig DeLancey

This is an elegantly and carefully worded textbook.

1. It does not have its own software, but it could be used together with other software.

2. Discussion of the conditional is nuanced. The bivalent truth-functional analysis is never called correct, so students who don’t accept it as a full specification of the truth conditions of conditional claims (or precocious students who wonder whether indicative mood conditionals even have truth conditions) won’t feel like they are being told they are wrong. Unfortunately, on pages 20–21, the expressions “P implies Q” and “If P, then Q” are said to be equivalent.

3. A limit of the text is that its scope is pretty elementary. “Advanced topics” are relegated to a final chapter rather than being touched on along the way. This doesn’t devalue what is there, but it would only be suitable in courses that don’t aspire to introduce much reasoning about logical systems and their properties.

4. Pushing in the opposite direction of having a standard interpretation of a common vocabulary, the book turns to a wide range of applications, from analysis of important historical texts to reasoning about cards.

5. The formal proof system is in the natural deduction style, but it is not suitable to introduce the theoretical ideas behind natural deduction. Disjunctive syllogism, modus tollens, etc., are primitive rules.

Elements of Deductive Logic
by Antony Eagle

1. No software.

2. The treatment of conditionals is fine. One finds the claim that the bivalent truth-functional analysis of indicative conditionals does indeed capture such expressions’ truth conditions, but it is backed by an argument (following Jackson) in a way that acknowledges that others have disagreed about this. That may be too heavy-handed for some instructors who suppose that at least some indicative mood conditionals have stronger than truth-functional truth conditions or who, following Ramsey, Rhinelander, Adams, Kratzer, and others, think that conditional probabilities do a better job, but it sure beats more cavalier statements found in other texts. And Eagle goes on to distinguish truth conditions from assertibility conditions in this discussion, so he gives plenty of space for those who think that there’s more to meaning. Nowhere are conditionals taken to be expressions of implication claims.

3. The text is designed for a more advanced course. Choices are made in the basic framework to facilitate a gentle entry into basic metatheory. For example, the formal proof systems are semantic tableaux and sequent calculi with an eye towards a demonstration of completeness via Hintikka sets. (Do not expect much proof theory: The sequent system here is a “semantic” one: instead of operational rules, there is a primitive rule of “equivalents” allowing the simultaneous substitution of subformulas in a sequent with formulas true in the same structures. Things like modus ponens are justified against this backdrop. Again: all sights are on streamlined completeness results and the like.)

4. There are no oddities in the way satisfaction is discussed, but the framework is more mathematical both notationally and in its precision than what one introduces in a typical first course in logic.

5. Natural deduction is not discussed, so even if you are teaching a course at a more advanced level for which this book is well-suited, if one of your ambitions is to teach about relational definition via universal properties, it is not the right text.

The Carnap Book & Carnap (software)
by Graham Leach-Krouse and Jake Ehrlich

1. Carnap is a free, on-line, open source proof writer and evaluator, model constructor, translation checker, and grading system. It is versatile and powerful, with functionality extending to such things as proof trees and modal logic. I have been playing around with it these past two weeks and believe that its set-up obviates the sort of start-up problems that many students report with software that has to be installed and registered locally. And I am impressed with the sort of creative functionality evident in the ability to create timed assignments, randomized problems, etc. One of the best things about the software is that it is customizable, not only in that an instructor can write their own problems and create their own assignments, but in that it can be configured to match the syntax of a wide range of proof systems and thus be used in tandem with any textbook. There is also a candidate accompanying text, the Carnap Book, to which I now turn.

2. The treatment of conditionals is great so far as it goes. Less heavy-handed than most, the Carnap Book simply says that the classical truth-functional analysis is far simpler than alternatives and that understanding the alternatives is enhanced by familiarity with this simple starting point. We are told that, “by and large, it works.” There is no conflation with implication, but that is in part because implication isn’t introduced. The entire business with truth-functional analysis of language comes late, and the examples of truth-table evaluation are all singular (i.e., no “joint truth tables”). Nowhere is the idea of a set of sentences truth-functionally implying another sentence introduced. Even in the discussion of the proof system, where the method of conditional proof is featured prominently, there doesn’t seem to be an occasion for comparing the claims P ⊃ Q, ⊢ P ⊃ Q, and P ⊢ Q.
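Since the text never puts them side by side, it may help to spell out how the three claims differ (my gloss, using the standard classical readings):

P ⊃ Q: a sentence of the object language, true or false on a given valuation.
⊢ P ⊃ Q: the metalinguistic claim that this sentence is provable (equivalently, for classical logic, that it is valid).
P ⊢ Q: the metalinguistic claim that Q is derivable from the premise P.

Conditional proof is precisely the bridge between the last two: P ⊢ Q holds just in case ⊢ P ⊃ Q does.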

3. There are no problems with the treatment of satisfaction, other than that the development is probably more elementary than many instructors want. We mentioned already that the treatment of the semantics for propositional logic is abbreviated. The semantics for the quantificational language is not even introduced.

4. I have not yet encountered errors in the software. Compared to what one gets in LPL, the user interface is less graphical. The absence of point-and-click functionality is felt. The graphical model building environment is missed. The syntax of proof writing will take students longer to pick up. The instructor’s end takes much longer to master. In part because the software is so versatile, with specifications for a wide range of vocabularies, proof styles, etc., the documentation is difficult to navigate. As with many open-source endeavors, Carnap developers and users are true enthusiasts who are eager to help when you get stuck or confused. There is even a mailing list.

The trade-off is that once one overcomes the learning curve, the greater flexibility and customizability are amazing. One can do more. Even the automation of grading seems enhanced. One does not need to collect grade reports and tabulate progress by hand: Carnap does everything.

I entered the sentences ∃y∀x∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))), ∀x∃y∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))), and ∀x∀z∃y(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))) that LPL couldn’t handle into the “countermodeler” that Carnap is equipped with. I wanted to make sure that this software can identify models of complex sentences like these. It doesn’t (yet) have a feature to search for models, but it allows you to describe models of your own construction and to evaluate sentences in them. On these tricksters and on some even more complex ones that I tested, Carnap passed with flying colors.

5. The proof system in the Carnap Book is in the natural deduction style, but it doesn’t attempt at all to capture the ideas behind natural deduction emphasized above. Again, this is a comment only about the Carnap Book. The Carnap software is ready to run with a wide range of proof systems, even including the tree style natural deduction system straight out of Gentzen’s thesis.

forall x: an introduction to formal logic (and its derivatives)
by P. D. Magnus

1. This is an open source textbook introduced in 2005 and continuously revised. But beyond being revised, it has fallen into the hands of many instructors who have seen in its open license an opportunity to rewrite, expand, or otherwise tweak its contents to suit their own purposes. And this was Magnus’s original vision. It does not come with software, but Carnap has already been customized with settings specific to the original text as well as several of its “remixes.”

2. The treatment of the conditional varies from version to version. The Calgary remix stands out as having the most careful discussion of all the texts we surveyed: implication and indicative mood conditionals are explicitly contrasted, and the relationship between them is made clear in a succinct discussion of the use/mention distinction. The material conditional analysis is called “the best candidate for a truth-functional conditional,” nothing stronger. There is a whole section about examples of nested conditionals that give rise to ridiculous conclusions on the material analysis, including Edgington’s proof of the existence of God.

3. There is nothing peculiar about the treatment of satisfaction. It should be noted that the several derivatives vary considerably. Magnus’s original text is fairly elementary. The Cambridge remix pushes towards a more advanced treatment; its author Tim Button also wrote an entire sequel text called Metatheory which is among the least intimidating treatments of basic metatheory of classical logic I have seen. In this same direction, the Calgary remix incorporates parts of Button’s Metatheory as well as material on modal logic. The UConn remix treats some issues in nonclassical logic. The Leeds remix has a chapter on probability theory (that stops short of the Lewis/Edgington result, mentioned above, about the lack of truth conditions for indicative conditionals as interpreted by Ramsey et al.). In the other direction, the Lorain County remix is designed “to move more slowly” even than the original “and be more accessible.” Speaking of accessibility, there is a version of the Calgary remix reworded and typeset for dyslexic readers.

4. I have not detected any errors in the versions of the texts I have looked at.

5. The proof system across most versions is in the natural deduction style, and the presentation adheres pretty closely to Gentzen’s scheme in most. The U. British Columbia remix has tableaux in addition to a natural deduction style system, but its version of ND is the same deviant one as in Magnus’s original. The Auckland version is the only one I noticed that doesn’t use natural deduction at all: only truth-trees. In the Cambridge remix, one finds a TND rule to characterize classical logic. In Calgary, they are using an IP rule. My own preference would be a general rule of double negation (DN), so that the sufficiency of IP (essentially, a heavily constrained use of DN) can be presented as a surprising proof-theoretical result. This is a first step towards a verification of Glivenko’s theorem (essentially the sufficiency of a maximally constrained use of DN)—beginning students can handle this. But these are cosmetic differences that do not interfere with the presentation of the main ideas behind natural deduction.
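For readers keeping score, the three classical rules being compared, in my formulation rather than any remix’s exact wording, are:

DN: from Γ ⊢ ¬¬A, infer Γ ⊢ A
IP: from Γ, ¬A ⊢ ⊥, infer Γ ⊢ A
TND: from Γ, A ⊢ C and Γ, ¬A ⊢ C, infer Γ ⊢ C

In these terms, Glivenko’s theorem says that a propositional formula A is classically provable exactly when ¬¬A is intuitionistically provable, so full classical propositional logic is recovered by a single application of DN at the very end of a proof.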

Introduction to Logic
by Matthias Felleisen, John Grenier, Moshe Vardi, Phokion Kolaitis, and Ian Barland

1. There is no autograding software, but the course is designed with some very specific applications in place, including engagement with the downloadable game Water World and with computer programs as such. I can imagine that at Rice (where this was developed) there are assignments designed to be submitted and evaluated mechanically.

2. A single connective is called “the conditional” in places and is said to be pronounced “implies” in other places. But the text is very unconcerned about translation of natural language. Instead the symbolism is always applied to one of three tasks: navigation in the game Water World, circuit verification, and type error detection in the Scheme programming language. For these applications, as in evaluation of claims in mathematics, the distinction collapses: There are no true conditionals that are not logically true, i.e., true because their antecedents do in fact imply their consequents.

3. You’ve surmised already that the book is quite peculiar. It won’t be suitable for most philosophy courses. But it really is a nice book, written in a straightforward manner, and bringing logic to life. Among its oddities is the introduction of the concepts of soundness and completeness in terms of the relationship between truth tables and Boolean algebras, with a proof sketch. The correspondence between these environments and the Water World proof system is considered but declared “beyond the scope” of the book. Still, the idea is there.

4. It is unusual to have a logical system that is never encountered outside of specific applications. But it seems to enliven the formalism rather than to implant a false sense of its meaning. In any event, students taking this course will already be familiar with programming in high-level functional languages like Scheme, so they are immune to misconceptions of that sort.

5. The proof system is in the natural deduction style, but the rules are quite oddly labeled, and it is fair to say that the authors’ choice of proof system was not influenced in any way by Gentzen’s ideas. It is great, though, that the idea of the logical strength of a proposition is presented and emphasized. This is the right framework for discussing relational definition: ⊃-Elim tells us about the strength of conditional claims, and ⊃-Intro tells us that any other claim standing in the same sort of relation must be at least as strong as the conditional.

Logic: The Art of Persuasion and the Science of Truth
by Vann McGee

1. This is a relatively more advanced text, covering such topics as the decidability of the monadic fragment of first order quantification theory, the compactness of the propositional language, algebraic semantics, and the sense/reference distinction.

2. The treatment of conditionals is accurate and thorough, if possibly too ideological for some readers: The material analysis is flatly said to be “not very good,” and there is an entire chapter called “Trouble with ‘if’s” that introduces Adams’s and Stalnaker’s analyses without a formal presentation of Stalnaker semantics or anything like that. This is what one might expect from an author possibly most famous for his counterexample to modus ponens. There is no conflation of conditionals and implication claims.

3. There is nothing unusual or misleading about the treatment of satisfaction …

4. … nor are there any evident mistakes.

5. Proof systems are not presented for their own sake or because of any theoretical properties they might exhibit. There are none for propositional logic, where other decision procedures abound. They are introduced for predicate logic as means for recursively enumerating theorems, in a tradition inaugurated by the “main method” in Quine’s Methods of Logic. Instructors interested in relational definitions of connectives à la natural deduction could simply supplement the material here with another unit of their own design.

Form and Content: An Introduction to Formal Logic
by Derek Turner

1. No software. This is a fairly elementary book. It is written in fine style, but quite poorly typeset. Despite being on the less advanced end of the spectrum, it does touch on some topics that other less advanced texts ignore, such as truth-functional completeness.

2. The text says explicitly that “If P, then Q” says “exactly the same thing” as “P implies Q.” By now I am accustomed to encountering the conflation of these ideas, but it is unusual to read what appears to be a fully deliberate embrace of it. What is more unusual still is the appearance of this remark in a textbook with an entire section on the use/mention distinction. I believe that the fixture of that distinction in logic curricula is due to Quine and others emphasizing it in the 1950s–70s precisely in order to distinguish the indicative conditional operator from the implication relation.

3. There is nothing about the book that I think might mislead a student about the generality of interpretation or satisfaction or anything like that.

4. And I did not detect any errors.

5. The natural deduction style proof system is not at all suitable for a presentation of the idea of relational definition of the connectives.

Introduction to Logic and Critical Thinking
by Matthew van Cleave

1. No software. This option stands out from the others. It is quite elementary as a formal logic textbook, but it is also really extensive in its treatment of logical fallacies, informal logic, and basic probability theory. It is heartening to see something in this genre also being prepared under a free and open license.

2. The treatment of conditionals rubs me wrong on both scores: the same symbol is used to translate indicative mood conditionals and implication claims, and the argument that this symbol be interpreted truth-functionally is presented as conclusive.

3. The infrastructure stops well short of a fully developed quantification theory, so there is no notion of satisfaction to be misleading about.

4. I didn’t look closely enough to detect errors.

5. The proof system is only partially modeled on natural deduction. For example, there are no subproofs.

Introductory Logic
by Sean Walsh

1. This book was written explicitly to be integrated with the Carnap software and has Carnap evaluation widgets embedded in its pages (as well as short instructional videos). And the text itself refers to what “the software will do” in various cases. So it is an alternative to the Carnap Book, even hosted on the Carnap domain. It is a considerably more advanced text, with more scope and depth, building up to such topics as the BHK interpretation of intuitionistic logic, completeness, basic probability theory, normalization, etc.

2. Here we are explicitly told in a couple of places that the same symbol translates both indicative mood conditionals and implication claims, although in the translation exercises and discussion of the proof apparatus, conditionals are always the focus. The good news is that in the very well designed chapter on probability, one finds the remark: “Perhaps when we are asserting ‘if p then q’ we are not asserting the truth (or high probability) of p ⊃ q, but rather we are asserting that the probability of q conditional on p is high.” Perhaps!

3. No misleading aspects of the treatment of interpretation or satisfaction.

4. No errors detected.

5. The treatment of natural deduction is excellent, and this is the only text we surveyed that discusses the relational definition of logical particles (in somewhat different terms than we use). There is an entire section called “Natural deduction, definitions, and normalization.”

Turning now to the texts labeled “free, not open,” I will dispense with the five-item enumeration and say only what I think is pertinent.

Notes on the Art of Logic
by Nuel Belnap

This is a slow, plodding text. Though written with great care, it is not the sort of reading that today’s students will relish (nor does it have a table of contents). It is very good on conditionals, although maybe too dogmatic in the opposite direction (Belnap “asks” whether ordinary language “if … then” expressions are truth-functional and provides this “hint”: “No; but philosophers differ in their opinions on this matter.”). Well, it is Belnap. Pelletier identified Fitch as the author of one of the few prominent textbooks that preserves Gentzen’s intro/elim scheme. His student, Belnap, here took the other extreme with natural deduction “rules” like “tautological implication” (I am an academic descendant in this lineage who humbly aims to restore its original legacy). The scope is wide, covering a bit of set theory, generalized quantifiers, and things like that.

Logic in Action
by Johan van Benthem, Hans van Ditmarsch, Jan van Eijck, Jan Jaspars

This is an amazing and almost unique book. The material on deductive machinery and basic metatheory at the end of the book appears to be still in progress. (It is included there to show that, despite the piles of insults found in the earlier chapters and in the framing of the project (e.g., “the usual emphasis on formal proof somehow suggests that mathematical activities are the highest point of logical intellectual skills, a claim as debatable as thinking that the best test of someone’s moral fiber is her behavior in church.”), the authors “have no quarrel with standard curricula.”) But the presentation of natural deduction there is almost perfectly suited for showcasing the idea behind the system (it is unfortunate that the rule of indirect proof, used here instead of DN, is called “⊥-Elim”). It is even extended to cover identity and mathematical induction in natural deduction style, which is not common in textbooks.

What is great about this book, though, is its extensive focus, at a fully accurate but still introductory level, on the logical analysis of information flow, games, and computation. There is nothing else quite like this except for …

Get Rational with these Four Weird Tricks
by Josh Dever

Falling into your virtual lap in over 1000 pages, this book-in-progress covers just about everything. The long chapters on epistemic logic, decision theory, social choice theory, and game theory are all excellent and well-integrated into the broader development of formal logic. The presentation of conditionals is not just fair and accurate; it is carefully worded and developed. Almost uniquely, it recognizes that the case of a conditional with true antecedent and consequent is as debatable as any other. The natural deduction style proof system is a departure from the intro/elim paradigm like many of the others we’ve seen. The double negation rule is called “¬-Elim”. (I would also prefer seeing a LaTeX style package for formatting the “Fitch style” natural deduction proofs, as these table templates are somewhat gangly in an otherwise nicely typeset text.)

Symbolic Logic: A First Course
by Gary Hardegree

A pretty standard treatment. It handles conditionals with a succinct observation that “we have to make a choice” between treating conditionals with false antecedents as true or abandoning the truth-functional analysis of conditional expressions altogether, proceeding to do the former. But like many other texts that recognize a subtlety here, it does not think there is room to question the truth of conditional expressions with true antecedents and consequents. The natural deduction style proof system adheres in no way to the Gentzen paradigm.

An Exposition of Symbolic Logic
by Terence Parsons

I have known of people using this textbook together with Kaplan’s software Logic 2010. The two were designed to work together. I can no longer find the textbook anywhere. Brian Rabern appears to be using it together with his own software package elogic. elogic is a proof writer/evaluator, hint giver, model checker, and automatic grader very much in the vein of LPL and Carnap. Currently it supports only the style of logic present in Parsons’s text, but other formats are works in progress. Not being able to find the text, I am not certain about its contents. The Kalish and Montague style package in Carnap, which presumably represents what’s going on in this book too, is a radical deviation from Gentzen’s taxonomy and ideas.

Symbolic Logic: An Accessible Introduction to Serious Mathematical Logic, vol. I
by Tony Roy

This book is what the title suggests: more accessible than most, but aiming at something different (“serious mathematical logic”) than the typical introductory college class. Comparable in scope to Eagle’s text, but quite different in style.

There is really no discussion of the suitability of the material conditional as an analysis of natural language expressions. Again, the focus here is mathematical logic. Natural deduction is presented in a very careful, almost painstaking way, but without attention to or compatibility with Gentzen’s paradigm of relational definition. The method of indirect proof is called “¬-Elim”.

The Logic Notes
by John Slaney

This is a nicely written and easily navigated text. The treatment of conditionals is a bit idiosyncratic. Natural deduction is presented before truth-functional interpretations, and there the point is made that the ⊃ symbol can be called an implication sign because the only time it appears in a proof is when in fact it has been possible to derive the consequent from the antecedent, in which case the truth of the conditional is due to its parts standing in an implication relation. And this is fine (as well as being an explanation for why intuitionists and relevant logicians tend towards the implication reading of this connective). But then when the legitimacy of the truth-functional analysis of conditionals is brought up, the claim is made that while it is not obvious, it can be justified by the natural deduction rules (via a sort of rough soundness argument), which might strike some readers as question-begging. The presentation of natural deduction adheres very closely to the intro/elim scheme for all the connectives, although this might be a happy accident: the double negation rule is called “¬¬-Elim”, and it is paired with a (redundant) “¬¬-Int” rule, despite these rules having nothing to do with introduction and elimination as Gentzen conceived them.

The most compelling things about the text are the “challenges to the paradigm” chapter, which contains some of the finest pedagogical discussion of constructive and relevant logic I have seen, and the gentle build to some basic metatheory. For example, two separate completeness proofs are given for propositional logic, one extendable to quantification theory and the other simpler but not so extendable. (Because of the development of tableaux systems, everything is in place actually for a simpler completeness argument, modeled on the trivial one Samuel Buss provides for the cut-free sequent calculus in his Handbook of Proof Theory chapter.)

Speaking of Slaney, one really ought to check out his Logic4Fun site. This is an amazing, creative, and powerful logic problem solver. It is versatile and very well presented. It is not, so far as I can tell, suited for building an introductory formal logic class around. For example, it is not coordinated with any sort of enrollment roster, grade report generator, vel sim. And its design from the bottom up is more about creating logic puzzles that can have solutions computed when they are properly encoded, not a standard run through formal logic. And I can’t imagine trying to teach a group of introductory students how to encode all the parameters to even describe a problem in the interface. It is possible that this could be put to use together with Logic in Action. The database of prewritten problems even concludes with an “insane” example of an epistemic hat problem.

blogic
by J. David Velleman

This is an interactive text with widgets embedded in the webpages for exercises. Conditionals are treated well. There is an appendix in which the truth-functional paradigm is both given support and challenged, and the truth of such conditionals is never conflated with implication claims. The treatment of natural deduction doesn’t serve my purposes: the rule called “∨-Elim” is actually disjunctive syllogism!

Introduction to Logic
by Stefan Waner and Steven R. Costenoble

This is a very simple text/course. There are drop-down menus for questions and examples embedded in the webpage. The treatment of conditionals is somewhat cavalier (it is claimed that the material analysis is unproblematic unless a causal connection is intended), but there is no conflation with implication. The natural deduction style proof system is very far from Gentzen’s thought: the De Morgan inferences are all primitive.

Arguments: Deductive Logic Exercises
by Howard Pospesel and David Marans

This text doesn’t develop a system of formal logic at all, but it is a fine repository of exercises, especially strong in that it draws, like DeLancey’s book, from actual passages from history and literature.

A Modern Formal Logic Primer
by Paul Teller

Perfect on conditionals, joining the ranks of Velleman, Dever, and forall x: Calgary. Natural deduction in name only: the label “∨-Elim” is slapped onto the disjunctive syllogism.

Another piece of software available for free online is Martin Frické’s Softoption. The website embeds this software together with instructional and tutorial material which some users might find useful. A lot of it is organized like course notes, with assigned readings from another textbook (Bergmann’s The Logic Book). In the instructional notes, the conditional is gently (i.e., in name only) conflated with the implication relation, but there is a section on “the paradoxes of material implication” with a single example and a couple of links (only one of which works). The example is presented as self-explanatory and exemplifies only one of the many things people have found wanting about the material analysis of conditional expressions. Somewhat like Carnap, the software is pretty versatile, covering propositional and predicate logic, identity, lambda calculus, induction, and other things. There are truth trees as well as a natural deduction system. The propositional natural deduction system is coded just like that of LPL: the double negation rule is called “¬-Elim” and what Gentzen called “¬-Elim” is called “Absurdity-Intro”. There are notational options to toggle. One has to dig around a bit to find how to use the software to write one’s own problems. And there doesn’t seem to be any autograding facility: All users have the same username and password, and one is advised to have students print and hand in the work they complete with the software.

I mentioned Rabern’s elogic above. What I said about it is based on inferences from screenshots and what I know about the Kalish and Montague text. But the documentation says that other versions of the deduction system are on the way. The documentation also says that “elogic follows the Software as a Service model and is available to educational institutions.” I sent a request from an institutional e-mail address for an account so that I could test the software as an option for use at my university but received no reply after several days.

Finally, because so few of these texts use natural deduction in a way that allows a presentation of the style of relational definition, I will mention three non-free books that do. An older one,

Logic, Language and Meaning, vol 1
by LTF Gamut

and two newer ones,

Logic in Computer Science: Modelling and Reasoning About Systems, Second Edition
by Michael Huth and Mark Ryan

and

Elements of Logical Reasoning
by Jan von Plato

All three of these come close to conflating the conditional operator and implication relation. The Gamut and von Plato are launched from within the constructivist/intuitionist/verificationist theory of meaning, so the elision is somewhat more excusable (see the entry on Slaney’s text above). Huth and Ryan just never mention conditionals and call the arrow symbol an implication connective whose semantics is “not quite as intuitive” as that of the other connectives, something that a reader might “feel slightly uncomfortable with.” Carnap is equipped with style packages for the formalisms in each of these texts.

Different instructors will have different priorities in textbook selection. As a scorned lover of LPL, I am perhaps overly sensitive to some of its bad habits. But maybe the descriptions I have put together can be helpful for people considering these options for their own classrooms.


* The entire volume, accessible here, is highly recommended. In it one finds, in addition to van Benthem’s wonderful paper, several excellent essays about logic pedagogy. None of the authors seem to have quite the same agenda as I do, but Thompson and Seligman’s paper on teaching natural deduction uses a system perfectly suited to explain relational definition, as does Villadsen, Birch Jensen, and Schlichtkrull’s description of the proof assistant NaDeA for Isabelle (which references Huth and Ryan). Slaney’s description of Logic4Fun appears here, too. And there are other treats for logic pedagogues.


 

32 Comments
Richard Zach
9 months ago

FWIW there soon will be an HTML version (that should work with screen readers) of forall x: Calgary as well; a prototype is at https://forallx.openlogicproject.org/bookml/ and the accessibility issues I’m working on are listed at https://forallx.openlogicproject.org/a11y/bookml/

Ian S
9 months ago

Don’t forget mine: The Logic Course Adventure: https://logiccourse.com/

David
9 months ago

This is a really great survey, but I’m mainly commenting just to say that zeuhl rocks and to thank you for spreading the word.

Curtis Franks
9 months ago

A ‘blog reader sent me PDF images of Kaplan’s book, which I had not managed to locate on my own. As I expected, it handles conditionals very well. And as I had surmised, it presents a proof system in natural deduction style without an eye on the idea of relational definition of connectives.

David M Braun
Reply to  Curtis Franks
9 months ago

I believe you mean Terry Parsons’s book, which is meant to be used in tandem with Kaplan’s software Logic 2010.

Curtis Franks
Reply to  David M Braun
9 months ago

Correct.

Kenny Easwaran
9 months ago

I’m a little surprised that this didn’t touch on what I consider the most idiosyncratic aspect of Barwise and Etchemendy, which is their insistence on Etchemendy’s view that logical consequence *is* analytic consequence in an interpreted language, and that purely formal consequence relations like first-order consequence are mere approximations to the central (but vague) concept of analytic consequence.

But all the other points that are mentioned are definitely ones that I have dealt with as well over the years.

One other point I would mention is that in addition to a few students having issues with installing the LPL software, there have been bigger issues with students getting financial aid payments on time to purchase the registration key in the first few weeks. Over the past few years I’ve regularly had a few students who couldn’t get started on any of the assignments until at least a month into the semester, and this is what persuaded me to finally go for a free version last semester. (I made my own version of Forall x, combining several elements of the ones mentioned, and integrating a bit more probability.)

Curtis Franks
Reply to  Kenny Easwaran
9 months ago

Hi, Kenny! That is certainly an idiosyncrasy of LPL, but not one that ever proved to be an annoyance in my classes. I always read B&E as saying that there is some one real logical consequence relation “out there” based on which things are actually possible and necessary, which determination cannot be made with logic itself — a claim that can thereafter be ignored as it isn’t reflected in the subject matter. Now that you bring my attention to it, I think you are likely correct that this view is the explanation for the prevalence of “analytic consequence” and the Tarski’s World environment throughout the course, which has the ripple effect I described in (iii).

Christopher Gauker
Reply to  Kenny Easwaran
9 months ago

I don’t share the B&E philosophy of logic, but I always thought it was a good thing that B&E present a coherent philosophy of logic. Before we learn to ask questions about logic, we are taught to evaluate validity by quantifying over all structures (or “models”) for a language, and then it never occurs to us to ask questions like: How could there be any interest in an “interpretation” of English that assigns the set of gorillas to the predicate “is yellow”? B&E give a clear answer: Those structures are merely formally tractable approximations to possible worlds. (You can prove things about them.) To the extent that we can restrict the structures that we quantify over to better approximations to possible worlds, the more accurate our account of validity will be. In illustrating the possibility of better approximations, it is possible to go too far, e.g., in assuming that everything is large, medium or small. The trouble, of course, is that we don’t want to say that “2 + 2 = 4” is logically valid just because it is true in every possible world.

Ant Eagle
9 months ago

This is wonderfully thorough. I was quite anxious when I saw my name appear but glad to emerge largely unscathed.

My efforts have turned away from the Elements of Deductive Logic book (now I teach different students) and instead towards my own version of forallx (like many others): https://github.com/antonyeagle/forallx-adl/ I hope too that its treatment of conditionals, implication, and non-overlapping natural deduction rules is also going to be ‘fine’!

Curtis Franks
Reply to  Ant Eagle
9 months ago

I’m sure it will be. And I hope you get a chance to use EoDL again — it is unique among textbooks in its focus on facilitating students’ competence with basic metatheory, and I was glad to learn about it.

Sam Duncan
9 months ago

This is really helpful. I’m really glad that these are mostly free for students, which is a huge deal. The cost of most logic textbooks is really unconscionable. Take some of the ones that community college teachers like me often use, such as Copi ($155), Kelley ($75), or Sinnott-Armstrong ($65). Even $65 is a lot of money for many of my students. Heck, it’s a decent chunk of change for me. Besides being free, there are a few other things I’d like to see in a logic book:

1. It has the right level of difficulty and progresses in difficulty at the right pace. “Language, Proof, and Logic” or McGee’s book would be way too hard for my students, and that’s fine, but it’s good to have options like Van Cleve that might challenge them but not so much that they give up in despair. I’ve found that more than a few books marketed to community college students are so easy that they just waste students’ time. I’ll add that one huge problem I have with logic textbooks is that there’s often a point where there’s a huge jump in difficulty. I tried Smith’s book this semester, and it’s very much guilty of that. The first few chapters do an excellent job of slowly and gently introducing validity and the basic idea of proofs, but then there’s a gigantic jump in difficulty very soon after it starts in on formal methods. I literally lost three students when this jump happened. And I mean lost as in they just stopped turning stuff in.

2. This might seem minor, but it really helps if the book uses symbols that students can type on a keyboard (~, v, &, ->) as far as that’s possible.

3. I’d really like it to at least introduce the tableaux/truth tree method. A lot of my students find natural deduction wildly hard, and having another method they can use to prove validity is good. Having more tools in one’s kit is good for everyone, in fact. Plus, getting in the habit of drawing a diagram to solve a problem is very good, I think.

4. It actually does some philosophy. There’s a real opportunity to examine interesting larger philosophical questions with some logical concepts, but I rarely see books do this. For instance, how hard would it be to, say, introduce Taylor’s argument for fatalism when covering excluded middle?

5. It covers some actual inductive logic and connects it to the other material on formal logic, and doesn’t just throw in some random stuff like Mill’s methods or arguments by analogy with no obvious connection to anything that’s come before. On that score I’d add that Weisberg’s “Odds and Ends” is just incredible for inductive logic. I usually teach the first few chapters of it when I do logic.

Craig Agule
Reply to  Sam Duncan
9 months ago

I want to echo this—the typeable symbols, the tableaux/tree, and the no-big-jumps are all important to me as well!

I only have so many days for big conceptual leaps, and I’d rather spend them on interpretation, theory, the operators, etc. than on a complex deduction system — and students really take to tableaux/trees.

Harriet Baber
9 months ago

I love Gregory’s Formal Logic. It’s straightforward, no gimmickry; it’s simple and cheap: $35 for the PDF version. I don’t know why it’s not more popular. I do supplement it with my own stuff, including a class website.

HRL
9 months ago

Another cheap ($17 on Amazon) textbook that follows Gentzen’s system of natural deduction (proof trees with introduction and elimination rules that relationally define the logical constants) is Volker Halbach’s Logic Manual. The first chapter also facilitates basic literacy with the various possible properties of binary relations that readers of philosophy papers are usually expected to understand.
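
For readers who haven’t encountered Gentzen’s format, here is a small illustrative sample, in my own gloss rather than Halbach’s notation: the rules governing conjunction.

$$\frac{A \qquad B}{A \wedge B}\ {\wedge}\mathrm{I} \qquad\qquad \frac{A \wedge B}{A}\ {\wedge}\mathrm{E} \qquad\qquad \frac{A \wedge B}{B}\ {\wedge}\mathrm{E}$$

Each connective gets its own introduction and elimination rules, and the thought is that these rules jointly fix the meaning of the connective.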

Curtis Franks
Reply to  HRL
9 months ago

Yes, that’s a great example. One could write a sequel to Pelletier’s paper about how in the 21st century the idea behind natural deduction has made a comeback. Pelletier identifies some accidental trends in teaching and publishing as the reason that these ideas got obfuscated for most of the 20th century. (I don’t think Pelletier points it out, but perhaps the most hilarious exhibit in this museum is Quine’s 1950 “On Natural Deduction,” where he introduces so much strange apparatus, which he says “enhances” the advantages that Gentzen’s framework has over the axiomatic approach, that there are no longer any rules specific to any individual propositional connective except modus ponens, and then bemoans the fact that natural deduction “lacks certain traits of elegance which grace” axiomatic systems.) I wonder what explains the recent resurgence of these ideas in textbook presentations of natural deduction. Perhaps the growing popularity of category theory among philosophers, or influences from computer science?

Jill Hernandez
9 months ago

Putting in a plug for Gensler & LogiCola. It’s been around for decades (I learned logic as an undergrad with a floppy-disk version of LogiCola), but I’ve used it with undergrads and graduate students who lacked a background in logic. Plus, c’mon, LogiCola is addictive :).

Selmer Bringsjord
9 months ago

Thx for your very stimulating post. But are you sure, Curtis, that your trio of (a), (b), and (c) is well-formed? I don’t think your parentheses match. We can do this manually, in keeping with the s-expressions much used in AI since the lambda calculus, courtesy of McCarthy, was “ported” to Lisp [count parens left to right, incrementing by 1 for ( and decrementing by 1 for )], in which case I don’t end up with 0: ungrammatical. Or, since my eyes may be deceiving me, we can use modern computational-logic systems, which should be used to teach logic anyway. I did that for corresponding s-expressions in HyperSlate** and also get failure to automatically parse against a standard formal grammar of FOL. Are you perhaps missing a final )? I want to make absolutely sure I have the formulae you want before diagnosing relative to the attributes you discuss in connection with the trio (i.e., provability, unsatisfiability, etc.) that you press against B&E tech.
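
A minimal sketch of that manual count in Python, for anyone who wants to run it against their own formulae; the example string below is a placeholder, not the actual (a) from the post:

```python
def paren_balance(s: str) -> int:
    """Scan left to right: +1 for '(', -1 for ')'.
    A well-formed s-expression ends at 0 and never dips below 0."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return depth  # a ')' with no matching '(' -- ungrammatical
    return depth  # nonzero means unmatched parentheses

# A placeholder formula missing its closing parens:
print(paren_balance("(forall x (exists y (R x y)"))  # prints 2, not 0
```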

**HyperSlate has been under development for rather a long time, is wired to the absolute state-of-the-art in 21st-century AI/automated reasoning (and large language models like GPT-4), and uses /hypergraphical/ natural deduction, a novel approach I suspect (for better or worse I don’t know) Gentzen would find rather interesting. I presented on the under-development 3D version of such natural deduction in Milan @ the ASL summer mtg (Logic Colloquium 2023); book of abstracts here: https://lc2023.unimi.it/wp-content/uploads/2023/06/book-of-abstracts-LC2023.pdf.

To move from the mundane (counting parens) to rather more weighty matters, I claim to have found Leibniz’s Art of Infallibility (L’art de l’infaillibilité; aka his “Universal Logic”), with HyperSlate being the /calculus ratiocinator/ component of this Art. (Also needed is the second component: the /characteristica universalis/.) Outside of “paper-and-pencil” philosophy/logic (or the use of simple computer programs for teaching logic), a surprising number of folks in computational logic/AI/automated reasoning have been steadily and quietly searching for Leibniz’s Art in earnest for a long, long time. The nature of that search, and the painstaking consideration of implemented computation, pay homage to Leibniz and his willingness to grapple with physical computing machines.

Curtis Franks
Reply to  Selmer Bringsjord
9 months ago

Yeah, those sentences each need a final right parenthesis.

Selmer Bringsjord
Reply to  Justin Weinberg
9 months ago

Ok. Thx Chris and Justin. Then below’s a link that takes one directly into a simple HyperSlate file I created for that trio. You don’t need an account. The verdict from the FOL oracle, suitably called, is, as I recall, this:

~ 7 milliseconds for a proof that (a) is inconsistent (proof of absurdum from it alone).
~ 7 milliseconds for a proof that (a) entails (b).
~ 9 milliseconds for a proof that (b) entails (c).
~ Two proof-theoretic confirmations that both (b) and (c) are not inconsistent/are satisfiable.

Since hypergraphical proofs/arguments are morphable to semantic graphs that subsume truth trees, that could be done as well to answer your questions about satisfiability.

While I don’t provide public access, I played around with that trio in SOL as well. SOL augmentation of (a)–(c) would be something motivated students of Stanford’s caliber would, I submit, find interesting.

The link:

https://world.logicamodernapproach.com/hyperslate/public/S[email protected]/FranksTrio

Curtis Franks
Reply to  Selmer Bringsjord
9 months ago

HyperSlate looks very interesting, but it appears from my end that I do need an account to use it.

I was able to see the file you created, though.

If HyperSlate does answer my questions about satisfiability, that would interest me. A colleague asked the SAT solver Z3-Prover to try to find a finite model for (b) and got an error message.

Selmer Bringsjord
Reply to  Curtis Franks
9 months ago

Yes, it’s perhaps not infelicitous to view HyperSlate as the analogue of Mathematica for computational logic. You have to have an account. Are you familiar with Mathematica?

From the teaching point of view, an account allows for use of patented problem generation, grading, tracking of performance, etc. The system is HyperGrader (which includes access to HyperSlate). All the problems in fixed textbooks, unless very, very advanced, end up online and accessible to students for cheating. So our AI is used to solve this problem as well, courtesy of auto-generated, personalized, timed problems. That, we claim, is the only way to go in the 21st century, alas. This is all the more true because GPT-4 and its successors will have all the trivial/standard problems in the prop calc and FOL digested, and will be able to regurgitate answers.

As I recall, Z3 is an SMT solver. (If it were a SAT solver, it couldn’t handle more than the prop calc, by definition.) Z3 would be the wrong kind of system to consult, I should think. Why not ask a FOL model finder? These have been around for decades and could handle these formulae long ago. But, fundamentally, model-based approaches are fatally flawed at present because only /finite/ models can be found. That is one of the reasons why our team is mostly inferential. I believe that only proof-theoretic/argument-theoretic semantics is feasible in light of what has happened in AI (though that’s a long story). After all, many of the interesting logics here are quantified multi-modal logics, and you can’t have a fixed semantics for that, because for all the main verbs you have a modal operator! HyperSlate offers e.g. the Deontic Cognitive Event Calculus, featured, specified, and used in many publications in computational formal ethics. A short-paper intro is https://www.ijcai.org/proceedings/2017/0658.pdf.
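
To make the limitation concrete, here is a sketch, under my own assumptions rather than anything from this thread, of posing a formula with only infinite models to Z3’s Python bindings. The formula is the classic serial-transitive-irreflexive example, not the (b) from the post, and the expected verdict is `unknown` rather than `sat`, since the solver’s model search is finite:

```python
# pip install z3-solver
from z3 import (Solver, DeclareSort, Function, BoolSort, Const,
                ForAll, Exists, And, Not, Implies)

D = DeclareSort('D')                     # an uninterpreted domain
R = Function('R', D, D, BoolSort())      # a single binary relation on D
x, y, z = Const('x', D), Const('y', D), Const('z', D)

s = Solver()
s.set("timeout", 5000)                   # ms; without a timeout the search may not terminate
s.add(ForAll(x, Exists(y, R(x, y))))                                # seriality
s.add(ForAll([x, y, z], Implies(And(R(x, y), R(y, z)), R(x, z))))   # transitivity
s.add(ForAll(x, Not(R(x, x))))                                      # irreflexivity
print(s.check())                         # typically: unknown (no finite model exists)
```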

I’m giving demos of HyperSlate not in machine ethics in The Netherlands, oddly enough …

Selmer Bringsjord
Reply to  Selmer Bringsjord
9 months ago

I.e. not in machine ethics => in machine ethics :).

Curtis Franks
Reply to  Selmer Bringsjord
9 months ago

Thanks for this information. The point about the sentences in my example is that it is very easy to determine of one of them that it is satisfiable and very hard (worth tasking a student with) to determine that it even has finite models.

Selmer Bringsjord
Reply to  Curtis Franks
9 months ago

Hi Curtis,

I see.

It strikes me that the constituents of your three (a)–(c) formulae are in general syntactically reflective of the famous, old Kleene example of a formula (F, let’s label it) in FOL that is satisfiable only in infinite domains. Only a single binary relation is used by Kleene, R as I recall, in fact. There are three conjuncts in F, and it’s quick pointing and clicking in HyperSlate to explore the provability of the conjuncts from your (b) and (c); (a) is of course irrelevant. So this is one trajectory of exploration for students (assuming they understand F, which is pretty simple; it is written out just below). Once they explore inferentially, they can build semantic hypergraphs in HyperSlate on their own, if they forgo an oracle. Then the satisfying interpretations are there in front of one in the workspace. Gentzen’s tree format was a seminal start in graphical natural deduction, but these trees are subsumed by hypergraphs, which are much more inferentially expressive. And the model-finding side is there as well, in the very same unitary format. One distressing thing about Jaskowski-Fitch is that there’s no semantic side available at all. Hypergraphical natural reasoning changes this once and for all. Gentzen, as I’ve said, would be happy.
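
The usual form of that three-conjunct example, reconstructed here from memory rather than quoted from Kleene, is

$$F \;=\; \forall x\,\exists y\,R(x,y) \;\wedge\; \forall x\,\forall y\,\forall z\,\big((R(x,y)\wedge R(y,z))\rightarrow R(x,z)\big) \;\wedge\; \forall x\,\neg R(x,x).$$

Any model of F must be infinite: the first conjunct forces an endless R-chain, and transitivity plus irreflexivity prevent that chain from ever looping back on itself.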

(I taught using MACE and Paradox for model finding for a long time. You may want to look at the feature of Paradox from long ago that, as I recall, flagged inputs as likely requiring infinite models. Infinite model finding for HyperSlate is under active development.)

Yrs, //Selmer

Selmer Bringsjord
Reply to  Curtis Franks
9 months ago

Chris => Curtis in my previous post re using oracular reasoning in HyperSlate. Autocomplete. My apologies! Yrs, //Selmer

Matt LaVine
9 months ago

Thanks for this! It’s a very helpful post and discussion!

For those who might be interested, I wrote a longish review of the Carnap book and software for Teaching Philosophy two years ago. https://philpapers.org/rec/LAVTCB

I sum it up as follows:

Graham Leach-Krouse’s Carnap is much more than just an ordinary logic text. That said, since Carnap, the textbook, is a rather ordinary logic text (even by its author’s own admission), it must be made clear that the Carnap project is much more than just Carnap, the textbook. Most obviously, it also contains “a software framework for creating and manipulating languages” (Leach-Krouse 2018a) which provides immediate feedback for students and automatic grading for teachers on well-formedness of formulas, validity of derivations, and truth-value assignments for truth tables. But it also involves an attempt to create a unique type of community around the software and text. Furthermore, each portion of the Carnap project—textbook, software, and community—has mathematical, pedagogical, philosophical, and sociological aspects to it. With each of these many aspects in mind, my judgments range from extremely positive and encouraged to extremely critical and concerned. While it is a bit of an oversimplification, the orders in the previous two sentences generally map onto each other—my judgments of the mathematical aspects being universally positive, getting more and more critical, until my judgments of the sociological aspects reach significant levels of concern. In what follows, I will discuss each portion of the Carnap project—textbook, software, and community—and try to explain the wide range of my evaluations.

Dave Barker-Plummer
9 months ago

FWIW, last week we made a browser-based version of the LPL applications available on our website. They are in open beta and will become the official versions at the end of this year. No more installation issues.

Curtis Franks
Reply to  Dave Barker-Plummer
9 months ago

That is great news.

Dave Barker-Plummer
9 months ago

It is perhaps also worth pointing out that if you want Disjunctive Syllogism as a rule in LPL’s Fitch natural deduction system, you can have it. If you prove Disjunctive Syllogism using our basic system, and save the resulting proof, then that proof can be used as a lemma elsewhere. This feature has been available in the application since the second edition of the text.

Richard Zach
9 months ago

Just wanted to mention that forall x: Calgary now also exists as HTML, with proofs that are screen-reader optimized: https://openlogicproject.org/2023/07/27/forall-x-now-in-html-for-extra-accessibility/