What materials exist for teaching large introductory logic courses, and how do they compare?
This summer, Curtis Franks, associate professor of philosophy at the University of Notre Dame, has taken up the task of examining a wide range of logic texts and software, many of them free and open-access. He has long made use of the popular Language, Proof and Logic package, but, as he explains in the first part of this guest post, he has some complaints about it and is looking for an alternative. In the second part of the post, he shares his reviews of over 20 other logic courseware options.
Logic Courseware, Surveyed
by Curtis Franks
Like many logic instructors, I am a long-time user of the multimedia package Language, Proof and Logic (LPL) by Jon Barwise, John Etchemendy, and Dave Barker-Plummer. I believe I taught a full course based on this package at least once in each of the past twenty years. It has its merits, including a lightly-worded, easily read and easily navigated textbook, a user-friendly graphical interface for model construction, a point-and-click natural deduction proof editor with drop-down menus for rule templates, and auto-generated grade reports. Students and teachers alike love these things, and LPL was a pioneer in most of it. Some experts have complained about the very features many others adore. Van Benthem had LPL and similar packages in mind when he wrote: “Sometimes the new technology is used to create high-tech versions of largely standard fare in the traditional curriculum with, say, sophisticated graphics interfaces for classical natural deduction proof systems, like a Latin Mass with rock guitars.”* But this put-down might not resonate for someone who has listened to much zeuhl or who has had to grade for enrollments greater than 50 students.
Van Benthem has another gripe with LPL: “[S]ometimes also, there is ideological fervor behind the effort: the course designers have a special research agenda with their own view of logic, modifying or changing existing curricula, and they want to export their revolution by by-passing the academic colleagues and instead of that, influencing the youth.” As one might have guessed, this is only a criticism if you do not share the course designers’ ideology. Van Benthem admits that his own “course Logic in Action falls in the second activist category.” More on Logic in Action anon.
1. Complaints about Language, Proof and Logic (LPL)
My own complaints with LPL are different from these. If they resonate with you, then you might be wondering what other options for teaching large introductory logic courses are out there and how they compare on these measures. So I’ll first list my grievances with LPL in order to frame the discussion of alternative logic packages I have surveyed. In increasing order of toxicity:
(i) Inevitably between 1 and 5 percent of students experience some sort of software installation glitch, delaying their engagement with the course. There doesn’t seem to be anything wrong with the software, and these students seem otherwise pretty technologically adept, but not once have I taught a section in which no student ran into this sort of problem.
(ii) Dorothy Edgington calls the material conditional analysis of indicative mood conditional statements “logic’s first surprise.” She’s right that “it does not strike students as obviously correct.” I do not know which is worse: declaring that indicative mood conditional statements are expressions of implication relations or insisting in a pedagogical setting that the classical, bivalent, truth-functional analysis of such statements is correct. The latter is surely more disingenuous. The authors of LPL keep an admirable distance from the first obscenity but crash head-on into the second one. I do not think it is conscionable to teach with the LPL package without issuing a disclaimer about the storied history of thoughtful reasons to question the “material” analysis of conditionals. My own disclaimer has expanded over the years to include many of Edgington’s own examples and her brilliantly simple proof of Lewis’s theorem about conditional probabilities.
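For orientation, here is the skeleton of Lewis’s result in its standard form (this compressed derivation is my own summary, not necessarily Edgington’s presentation). Suppose a conditional connective → satisfied P(A → B) = P(B | A) for every probability function P in a class closed under conditionalization, and suppose the relevant probabilities are positive. Then:

```latex
\begin{aligned}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B)\\
           &= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B)\\
           &= 1 \cdot P(B) + 0 \cdot P(\neg B)\\
           &= P(B).
\end{aligned}
```

So P(B | A) = P(A → B) = P(B): every pair of propositions comes out probabilistically independent, which only trivial probability functions allow. (The second line applies the hypothesis to the conditionalized functions P(· | B) and P(· | ¬B).)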
(iii) LPL leans very heavily on a “standard interpretation” of a collection of predicate symbols from what is called “the blocks language.” This sets up the arena described above as a very attractive user interface for model construction: a rotating chessboard on which one can position blocks of various shapes and sizes in order to construct a “Tarski world” that satisfies or provides a counterexample to a claim like “Every large cube is larger than a small dodecahedron to its left.”
While the exercises involving proof construction are standard fare, the Tarski’s World environment is the site of LPL’s most creative problem solving challenges. And it does foster a good grasp of concepts like satisfiability. But its pervasiveness in the course definitely implants a systematic confusion in some students between the concepts of “truth in all first-order models” and “truth in all possible Tarski worlds.” Compare the distinction between pure first-order logic and “first-order logic with identity.” In pure first-order logic, the equals sign gets treated as a binary relation symbol like any other. Expressions like “a=a” are not valid, because in the interpretation where candidate objects are people who have lived on Earth and the symbol “=” is associated with the relation “… is the grandfather of …,” it is false on nearly all assignments of objects to the constant symbol “a.” In first-order logic with identity, by contrast, “a=a” is valid, because here the equals sign is always interpreted as literal identity among objects. It is important in logic to have on some occasions standard interpretations of some designated vocabulary, like the equals sign. But pedagogically it is unclear that using a language in which the entire vocabulary is so designated is advisable. Instructors’ words of caution and occasional reminders do little to counteract the habituation that hundreds of weekly practice problems instill.
(There is also an oddity about the standard interpretation itself. Can 9 distinct objects be in the same column? “Yes.” Apparently contingent features of the chessboard are not built into the meanings of the predicates. If a is neither a tetrahedron nor a cube, must it be a dodecahedron? “Yes.” Apparently the fact that Tarski’s world only hosts three shapes of block is. Can two large objects be adjacent? “Too hard to check.”)
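The grandfather example can be made concrete in a few lines of code (a toy of my own, not anything in LPL): the same sentence “a = a” flips truth value depending on whether “=” is an arbitrary binary relation or is forced to be literal identity.

```python
def holds_a_eq_a(relation, referent_of_a):
    """Truth of the sentence 'a = a' when '=' is assigned `relation`
    and the constant 'a' is assigned `referent_of_a`."""
    return (referent_of_a, referent_of_a) in relation

people = {"abe", "homer", "bart"}

# Pure first-order logic: "=" may denote any binary relation, e.g.
# "... is the grandfather of ...". Then "a = a" comes out false
# on nearly every assignment to "a":
grandfather = {("abe", "bart")}

# First-order logic with identity: "=" always denotes literal identity,
# so "a = a" is true on every assignment:
identity = {(p, p) for p in people}
```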
(iv) One of the most powerful features of LPL is a verifier for logical consequence. In the Fitch program, whose main purpose is the construction of natural deduction proofs, it is possible to run queries of the form, “Is sentence C a classical first-order logical consequence of sentences A and B?” Not even Alonzo Church could write a program that answers all queries of this form correctly, so as one expects, LPL often responds with a question mark instead of an affirmative checkmark or repudiating X. But it does provide a correct answer for a surprisingly large class of sentences, even ones with a bit of quantifier complexity. It detects the classical validity of the “drinker sentence” in a second, and more generally it handles most of the exercises from part 2 of the text. A typical exercise is of the form “If C follows from A and B, build a proof of it. Otherwise build a counterexample world.” Perceptive students quickly catch on to the idea of first running a first-order consequence query so that they will know which of these two tasks to bother with.
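The drinker sentence’s classical validity is easy to confirm with a proof assistant too; here is a rendering in Lean (my own illustration, offered just to fix ideas):

```lean
-- The "drinker sentence": in any nonempty domain, there is someone such that
-- if they drink, everyone drinks. Classically valid.
example (α : Type) [Inhabited α] (D : α → Prop) : ∃ x, D x → ∀ y, D y :=
  Classical.byContradiction fun h =>
    -- If no witness works, then every y must drink:
    -- otherwise y itself would be a (vacuous) witness.
    have all : ∀ y, D y := fun y =>
      Classical.byContradiction fun hy => h ⟨y, fun dy => absurd dy hy⟩
    -- But then any element at all is a witness. Contradiction.
    h ⟨default, fun _ => all⟩
```

Note that the proof leans twice on the classical rule of indirect proof, which is as it should be: the sentence is not intuitionistically valid.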
But what if you run a query about sentences that aren’t from the text? One of my favorite basic logic questions is to determine of each of these formulas whether it is unsatisfiable, finitely satisfiable, or satisfiable but false in every finite model:
(a) ∃y∀x∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))
(b) ∀x∃y∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))
(c) ∀x∀z∃y(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz)))
Several years back, a very good student got this problem wrong because he fed the sentences and their negations into the LPL software to see if any of them were unsatisfiable and trusted the answers that the software provided. After he brought this to my attention, we poked around some more, “discovering,” among other oddities, that (b) implies (c), that (c) implies a contradiction, and that (b) does not imply a contradiction. I don’t want to spoil the problem for interested readers, but I don’t think I am revealing too much to point out that these three things cannot all be correct. In the eight years or so that have passed, as LPL has been upgraded, I have checked periodically to see that these erroneous assessments remain in place, and I have noted several other similar errors. It seems bad that this educational package would produce errors of this sort. Nothing in the text or software documentation warns users about the problem.
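For instructors who want to sanity-check assessments like these on their own, a brute-force finite-model search is easy to write. The sketch below (all names mine, not part of LPL or any other package) encodes the shared matrix of (a)–(c) and searches every binary relation on small domains; I deliberately don’t run it on (a)–(c) here, so as not to spoil the problem.

```python
from itertools import product

def body(R, x, y, z):
    """The shared matrix: R(x,y) ∧ ¬R(x,x) ∧ (R(y,z) ⊃ R(x,z))."""
    return ((x, y) in R) and ((x, x) not in R) and \
           (((y, z) not in R) or ((x, z) in R))

def sat_a(R, D):  # (a) ∃y∀x∀z
    return any(all(body(R, x, y, z) for x in D for z in D) for y in D)

def sat_b(R, D):  # (b) ∀x∃y∀z
    return all(any(all(body(R, x, y, z) for z in D) for y in D) for x in D)

def sat_c(R, D):  # (c) ∀x∀z∃y
    return all(any(body(R, x, y, z) for y in D) for x in D for z in D)

def finitely_satisfiable(sat, max_size=3):
    """Search every binary relation on domains of size 1..max_size."""
    for n in range(1, max_size + 1):
        D = range(n)
        pairs = list(product(D, D))
        for bits in product([False, True], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, bits) if b}
            if sat(R, D):
                return True
    return False  # only rules out models up to max_size, of course
```

A search like this can certify finite satisfiability but never refute satisfiability outright, which is exactly the boundary where, as the episode above shows, automated verdicts deserve suspicion.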
(v) But the thing that weighs most heavily on me when I teach from LPL is its bizarre nomenclature for the natural deduction rules. There is a history behind the prominence of natural deduction in basic logic curricula. In fact, F.J. Pelletier wrote such a history, and one can see already in its abstract that it is a history “from the view that [its] founders embraced to the widespread acceptance of the method in the 1960s” (emphasis added). Gerhard Gentzen’s ideas are what were left behind in the series of accidents that led to the appearance in logic textbooks of the technology he invented.
What ideas do I have in mind? Well, Gentzen said many things about natural deduction. It is modeled on the way people actually reason. Its individual rules were abstracted from the inferences that appear in published mathematical manuscripts. It frees formal logic from the need of axiomatic foundations, embracing instead the hypothetical nature of logical inference. For these reasons and a few others, it came to be believed that the value of natural deduction lay in its ability to facilitate the teaching of logic. Because they emulate patterns of natural inference, students will take to its rules and the method of constructing proofs with relative ease.
If this is one’s attitude, then of course there is always room to do Gentzen one better. For example, students do not typically find it intuitive to create an elaborate network of subproofs and to invoke the principle ex falso quodlibet, as Gentzen’s system demands, in order to establish the truth of B from the premises “A or B” and “not A.” They want to establish the inference forthright, as it seems more natural than any one of the steps in Gentzen’s system that are meant to justify it. So why not make this “disjunctive syllogism rule” a primitive component of the proof system? A more natural natural deduction.
Gentzen had another idea that is seldom encountered in textbooks. The way he put it is unfortunately a bit clumsy, but the idea has been refined and clarified into one of the major conceptual developments of 20th century logic, so we can see retrospectively just what he meant. It is well known that Gentzen’s system had, for each logical constant, an introduction rule and an elimination rule. His clumsy statement was that the introduction rules “provide the definition of” the logical constants and that the elimination rules are “consequences of” the introduction rules.
Consider the rule modus ponens. It might strike you as something more than one among the countless valid inferences one can draw from a statement governed by the conditional operator. To believe “if A, then B” is to be prepared to believe B as soon as you come to know A. That’s what “if … then” means. And that’s what modus ponens (A ⊃ B, A ⊢ B) says. Now, the rule itself doesn’t say that it is the definition of the conditional. So when I say that it is the definition of the conditional, I am saying something more than what the rule says. I am saying something about the rule. I am saying both that A ⊃ B, A ⊢ B is correct and that if any other sentence can stand in the same inferential relationships that A ⊃ B stands in—i.e., can fill the role of A ⊃ B by being put in the blank in this sequent ____, A ⊢ B to say something correct—then that sentence has the meaning of the conditional “built into it.” That other sentence must be at least as strong as A ⊃ B, in the logical sense of standing in the premise position of at least as many inferential relations. Another way of putting this is to say that, for all C, If C, A ⊢ B, then C ⊢ A ⊃ B.
So what we have just shown is that the rule
for all C, If C, A ⊢ B, then C ⊢ A ⊃ B
although it does not follow from the rule
A ⊃ B, A ⊢ B
does follow from the conception of that rule as definitive of the ⊃ symbol. You probably recognize these two rules as the introduction and elimination rules for the ⊃ symbol in natural deduction.
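The two halves of this relational definition can even be checked mechanically. Here is a sketch in Lean (my own illustration, in a system none of the texts below use), reading “C, A ⊢ B” as C ∧ A → B:

```lean
-- ⊃-Intro: for all C, if C, A ⊢ B, then C ⊢ A ⊃ B.
example (A B C : Prop) (h : C ∧ A → B) : C → (A → B) :=
  fun c a => h ⟨c, a⟩

-- ⊃-Elim follows in the other direction: if C ⊢ A ⊃ B, then C, A ⊢ B.
example (A B C : Prop) (h : C → (A → B)) : C ∧ A → B :=
  fun ca => h ca.1 ca.2
```

That the two directions are mutually inverse is, in the jargon of category theory, exactly the universal property of the conditional.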
One should have a reason for introducing university students to logic with a natural deduction style proof system. There are alternatives. Axiomatic systems are far easier to describe and to reason about, even if impractical to use. Semantic tableaux or truth trees are simpler and lead to extremely straightforward arguments for the completeness theorem and related results. One bad reason to introduce students to natural deduction instead of these alternative frameworks is that it is what the instructor happens to be most familiar with. A better, though debatable, reason is that it will be easiest for the student to use in the construction of proofs.
My reason, especially in the context of a philosophy curriculum, is that the idea of defining a logical connective with an intro/elim pair of rules is a profound discovery, one of the great 20th century conceptual breakthroughs in logic. It is an idea that generalizes throughout modern mathematics (and beyond) in the concept of a “universal property.” Natural deduction is an occasion for students to experience the phenomenon of defining something in terms of its situation in the network of relations it bears to other things of the same type, just as truth-functional semantics is an occasion for students to experience the phenomenon of defining something in terms of the abstract object it refers to or its internal constitution. You can think of these as the relational and essential styles of definition.
When Gentzen tabulated his set of introduction and elimination rules for the standard connectives ∧, ∨, ⊃, ¬, ∀, ∃, he drew our attention to two other inference figures that do not fall into this scheme: ex falso quodlibet, and the double negation rule. Concerning the latter he emphasized that although it is a method of reasoning with negations, it cannot be justified by virtue of the intro/elim rules for that connective. In other words, the double negation rule is not licensed by the meaning of the negation symbol, when one thinks of definition relationally. But of course, the double negation rule is justified on the classical truth-functional definition of the symbol. So apparently, the relational and essential schemes of definition come apart.
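Proof assistants make Gentzen’s point vivid. In Lean, for instance (my illustration, not anything in the packages discussed here), the intro/elim behavior of ¬ is available with no axioms at all, while the double negation rule demands a classical one:

```lean
-- ¬-Intro and ¬-Elim need no axioms (¬A just abbreviates A → False):
example (A : Prop) (h : A → False) : ¬A := h
example (A : Prop) (na : ¬A) (a : A) : False := na a

-- Ex falso quodlibet also holds constructively, but, as Gentzen noted,
-- it is not generated by any intro/elim pair:
example (A : Prop) (f : False) : A := f.elim

-- Double negation elimination is different: it needs a classical axiom.
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h
```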
Now consider the names of natural deduction rules in LPL. For conjunction, disjunction, and the quantifiers, the text adheres to Gentzen’s original taxonomy. But it is clear that the whole idea of relational definition is lost on the authors when one sees that the double negation rule is there labeled “¬-Elim.” Gentzen’s whole point had been that this rule “falls outside the Intro/Elim scheme.” Similarly, the rule Gentzen called “¬-Elim” (because it, together with the rule he called “¬-Intro”, provides a relational definition of the negation symbol) is labeled in LPL “⊥-Intro.” LPL calls the ex falso quodlibet rule “⊥-Elim”. Never mind that it does not pair with the previous rule to provide the relational definition of anything. Again, Gentzen took it to be a theoretical discovery of great moment that ex falso quodlibet does not fall under and cannot be justified within the intro/elim framework. This means, among other things, that the disjunctive syllogism rule cannot be justified by relational definitions of the ¬ and ∨ connectives alone. Who would have thought?!
For years I taught with the LPL package using its nomenclature. But teaching natural deduction without mentioning anything about the theory of relational definition is like teaching music theory from the stave without ever discussing what anything sounds like. A lot of the terms are Italian anyway, so one could shuffle them randomly without anyone catching on (well, maybe not at my institution). I prefer to play the “New World Symphony” very loudly in class at least twice by mid-term. So over time I began referring to the real names of the natural deduction rules in my lectures, “and here you need a ‘¬-Elim’—what the book and software call ‘⊥-Intro’.” In part because the double-talk is so exhausting, and in part because proof construction strategy is actually hindered by the mislabeling, in recent years I have just made a one-time disclaimer after which I use the real names exclusively.
This problem is far from unique to LPL. Early in Pelletier’s survey, he declares that he will call the double negation rule “¬-Elim” “without deviating too much from Gentzen, but being more in line with later developments” (p. 9). Why? Because “most textbook writers do not follow the int-elim ideal, but instead have a large number of ‘overlapping’ rules (presumably for pedagogical reasons)” (p. 10). He is correct about most textbooks. But it cannot be maintained that this is a minor deviation from Gentzen’s thought if the idea of relational definition has ever sunk in. And whatever pedagogical reasons might lead one to innovative relabeling and supplementing of Gentzen’s scheme are arguably offset by the pedagogical good of introducing students to the beautiful theoretical idea behind natural deduction. So I am making the argument. (I am not alone. Richard Zach told me just this week: “The desire to have ND with standard rule names was one of the reasons to make my own version of [one of the open textbooks surveyed below] forall x.”)
Those are my main gripes with LPL. As I said, though, I really like the project. I am not a multimedia sort of person, but the software interfaces are great for students, steering them away from sloppy formatting errors that hinder many students and have little to do with the serious ideas we want them to focus on. And of course I like the automated grading. So I am interested in projects of a similar spirit for classroom use. It shouldn’t be surprising that there is stuff going on in the vein LPL opened over two decades ago. I dug around a little. The next section covers what I found.
2. Reviews of Other Logic Courseware
First of all, there is an old blog post by Richard Zach here. You will see that most of the links are dead and in other ways some of the information is outdated. Thankfully, there is this page hosted by the Open Logic Project, most recently updated by… Richard Zach. Here the focus is on freely available (and openly licensed) texts. There are several reasons why one might want to prioritize these: price, customizability, integration. Here is what I mean by integration: People like Sean Walsh are writing books specifically to work with the Carnap software. People like Graham Leach-Krouse are writing procedures for Carnap called .ZachTFL2019 specifically to format proofs according to the style of the Calgary “remix” of P. D. Magnus‘s open textbook. I will add a reason that might not have occurred to you: this is where a lot of the action is right now.
So, let me first cover some texts that are both free and open access.
This is an elegantly and carefully worded textbook.
1. It does not have its own software, but it could be used together with other software.
2. Discussion of the conditional is nuanced. The bivalent truth-functional analysis is never called correct, so students who don’t accept it as a full specification of the truth conditions of conditional claims (or prodigious students who wonder whether indicative mood conditionals even have truth conditions) won’t feel like they are being told they are wrong. Unfortunately, on pages 20–21, the expressions “P implies Q” and “If P, then Q” are said to be equivalent.
3. A limit of the text is that its scope is pretty elementary. “Advanced topics” are relegated to a final chapter rather than being touched on along the way. This doesn’t devalue what is there, but it would only be suitable in courses that don’t aspire to introduce much reasoning about logical systems and their properties.
4. Pushing in the opposite direction of having a standard interpretation of a common vocabulary, the book turns to a wide range of applications, from analysis of important historical texts to reasoning about cards.
5. The formal proof system is in the natural deduction style, but it is not suitable to introduce the theoretical ideas behind natural deduction. Disjunctive syllogism, modus tollens, etc., are primitive rules.
1. No software.
2. The treatment of conditionals is fine. One finds the claim that the bivalent truth-functional analysis of indicative conditionals does indeed capture such expressions’ truth conditions, but it is backed by an argument (following Jackson) in a way that acknowledges that others have disagreed about this. That may be too heavy-handed for some instructors who suppose that at least some indicative mood conditionals have stronger than truth-functional truth conditions or who, following Ramsey, Rhinelander, Adams, Kratzer, and others, think that conditional probabilities do a better job, but it sure beats more cavalier statements found in other texts. And Eagle goes on to distinguish truth conditions from assertibility conditions in this discussion, so he gives plenty of space for those who think that there’s more to meaning. Nowhere are conditionals taken to be expressions of implication claims.
3. The text is designed for a more advanced course. Choices are made in the basic framework to facilitate a gentle entry into basic metatheory. For example, the formal proof systems are semantic tableaux and sequent calculi with an eye towards a demonstration of completeness via Hintikka sets. (Do not expect much proof theory: The sequent system here is a “semantic” one: instead of operational rules, there is a primitive rule of “equivalents” allowing the simultaneous substitution of subformulas in a sequent with formulas true in the same structures. Things like modus ponens are justified against this backdrop. Again: all sights are on streamlined completeness results and the like.)
4. There are no oddities in the way satisfaction is discussed, but the framework is more mathematical both notationally and in its precision than what one introduces in a typical first course in logic.
5. Natural deduction is not discussed, so even if you are teaching a course at a more advanced level for which this book is well-suited, if one of your ambitions is to teach about relational definition via universal properties, it is not the right text.
1. Carnap is a free, on-line, open source proof writer and evaluator, model constructor, translation checker, and grading system. It is versatile and powerful, with functionality extending to such things as proof trees and modal logic. I have been playing around with it these past two weeks and believe that its set-up obviates the sort of start-up problems that many students report with software that has to be installed and registered locally. And I am impressed with the sort of creative functionality evident in the ability to create timed assignments, randomized problems, etc. One of the best things about the software is that it is customizable, not only in that an instructor can write their own problems and create their own assignments, but in that it can be configured to match the syntax of a wide range of proof systems and thus be used in tandem with any textbook. There is also a candidate accompanying text, the Carnap Book, to which I now turn.
2. The treatment of conditionals is great so far as it goes. Less heavy-handed than most, the Carnap Book simply says that the classical truth-functional analysis is far simpler than alternatives and that understanding the alternatives is enhanced with familiarity with this simple starting point. We are told that, “by and large, it works.” There is no conflation with implication, but that is in part because implication isn’t introduced. The entire business with truth-functional analysis of language comes late, and the examples of truth-table evaluation are all singular (i.e., no “joint truth tables”). Nowhere is the idea of a set of sentences truth-functionally implying another sentence introduced. Even in the discussion of the proof system, where the method of conditional proof is featured prominently, there doesn’t seem to be an occasion for comparing the claims P ⊃ Q, ⊢ P ⊃ Q, and P ⊢ Q.
3. There are no problems with the treatment of satisfaction, other than that the development is probably more elementary than many instructors will want. We mentioned already that the treatment of the semantics for propositional logic is abbreviated. The semantics for the quantificational language is not even introduced.
4. I have not yet encountered errors in the software. Compared to what one gets in LPL, the user interface is less graphical. The point-and-click functionality is missed. The graphical model building environment is missed. The syntax of proof writing will take students longer to pick up. The instructor’s end takes much longer to master. In part due to the great versatility of the software, leading to it having specifications for a wide range of vocabularies, proof styles, etc., the documentation is difficult to navigate. As with many open-source endeavors, Carnap developers and users are true enthusiasts who are eager to help when you get stuck or confused. There is even a mailing list.
The trade-off is that once one overcomes the learning curve, the greater flexibility and customizability are amazing. One can do more. It seems that the automation of grading is even enhanced. One does not need to collect grade reports and tabulate progress by hand: Carnap does everything.
I entered the sentences ∃y∀x∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))), ∀x∃y∀z(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))), and ∀x∀z∃y(R(xy) ∧ ¬R(xx) ∧ (R(yz) ⊃ R(xz))) that LPL couldn’t handle into the “countermodeler” that Carnap is equipped with. I wanted to make sure that this software can identify models of complex sentences like these. It doesn’t (yet) have a feature to search for models, but it allows you to describe models of your own construction and to evaluate sentences in them. On these tricksters and on some even more complex ones that I tested, Carnap passed with flying colors.
5. The proof system in the Carnap Book is in the natural deduction style, but it doesn’t attempt at all to capture the ideas behind natural deduction emphasized above. Again, this is a comment only about the Carnap Book. The Carnap software is ready to run with a wide range of proof systems, even including the tree style natural deduction system straight out of Gentzen’s thesis.
1. This is an open source textbook introduced in 2005 and continuously revised. But beyond being revised, it has fallen into the hands of many instructors who have seen in its open license an opportunity to rewrite, expand, or otherwise tweak its contents to suit their own purposes. And this was Magnus’s original vision. It does not come with software, but Carnap has already been customized with settings specific to the original text as well as several of its “remixes.”
2. The treatment of the conditional varies from version to version. The Calgary remix stands out as having the most careful discussion of all the texts we surveyed: implication and indicative mood conditionals are explicitly contrasted, and the relationship between them is made clear in a succinct discussion of the use/mention distinction. The material conditional analysis is called “the best candidate for a truth-functional conditional,” nothing stronger. There is a whole section about examples of nested conditionals that give rise to ridiculous conclusions on the material analysis, including Edgington’s proof of the existence of God.
3. There is nothing peculiar about the treatment of satisfaction. It should be noted that the several derivatives vary considerably. Magnus’s original text is fairly elementary. The Cambridge remix pushes towards a more advanced treatment; its author Tim Button also wrote an entire sequel text called Metatheory which is among the least intimidating treatments of basic metatheory of classical logic I have seen. In this same direction, the Calgary remix incorporates parts of Button’s Metatheory as well as material on modal logic. The UConn remix treats some issues in nonclassical logic. The Leeds remix has a chapter on probability theory (that stops short of the Lewis/Edgington result, mentioned above, about the lack of truth conditions for indicative conditionals as interpreted by Ramsey et al.). In the other direction, the Lorain County remix is designed “to move more slowly” even than the original “and be more accessible.” Speaking of accessibility, there is a version of the Calgary remix reworded and typeset for dyslexic readers.
4. I have not detected any errors in the versions of the texts I have looked at.
5. The proof system across most versions is in the natural deduction style, and the presentation adheres pretty closely to Gentzen’s scheme in most. The U. British Columbia remix has tableaux in addition to a natural deduction style system, but its version of ND is the same deviant one as in Magnus’s original. The Auckland version is the only one I noticed that doesn’t use natural deduction at all: only truth-trees. In the Cambridge remix, one finds a TND rule to characterize classical logic. In Calgary, they are using an IP rule. My own preference would be a general rule of double negation (DN), so that the sufficiency of IP (essentially, a heavily constrained use of DN) can be presented as a surprising proof-theoretical result. This is a first step towards a verification of Glivenko’s theorem (essentially the sufficiency of a maximally constrained use of DN)—beginning students can handle this. But these are cosmetic differences that do not interfere with the presentation of the main ideas behind natural deduction.
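For anyone sketching that unit, the interderivability of DN and TND can be checked in a few lines in, e.g., Lean (my own illustration; IP is then just DN routed through a subproof):

```lean
-- TND follows from DN taken as a rule available for every proposition:
example (A : Prop) (dn : ∀ P : Prop, ¬¬P → P) : A ∨ ¬A :=
  dn _ (fun h => h (Or.inr fun a => h (Or.inl a)))

-- and DN follows from TND:
example (A : Prop) (tnd : A ∨ ¬A) (h : ¬¬A) : A :=
  tnd.elim id fun na => absurd na h
```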
1. There is no autograding software, but the course is designed with some very specific applications in place, including engagement with the downloadable game Water World and with computer programs as such. I can imagine that at Rice (where this was developed) there are assignments designed to be submitted and evaluated mechanically.
2. A single connective is called “the conditional” in places and is said to be pronounced “implies” in other places. But the text is very unconcerned about translation of natural language. Instead the symbolism is always applied to one of three tasks: navigation in the game Water World, circuit verification, and type error detection in the Scheme programming language. For these applications, as in evaluation of claims in mathematics, the distinction collapses: There are no true conditionals that are not logically true, i.e., true because their antecedents do in fact imply their consequents.
3. You’ve surmised already that the book is quite peculiar. It won’t be suitable for most philosophy courses. But it really is a nice book, written in a straightforward manner, and bringing logic to life. Among its oddities is the introduction of the concepts of soundness and completeness in terms of the relationship between truth tables and Boolean algebras, with a proof sketch. The correspondence between these environments and the Water World proof system is considered but declared “beyond the scope” of the book. Still, the idea is there.
4. It is unusual to have a logical system that is never encountered outside of specific applications. But this seems to enliven the formalism rather than to implant a false sense of its meaning. In any event, students taking this course will already be familiar with programming in high-level functional languages like Scheme, so they are immune to misconceptions of that sort.
5. The proof system is in the natural deduction style, but the rules are quite oddly labeled, and it is fair to say that the authors’ choice of proof system was not influenced in any way by Gentzen’s ideas. It is great, though, that the idea of the logical strength of a proposition is presented and emphasized. This is the right framework for discussing relational definition: ⊃-Elim tells us about the strength of conditional claims, and ⊃-Intro tells us that any other claim standing in the same sort of relation must be at least as strong as the conditional.
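Put schematically (my formulation, not the authors’), the relational point is that ⊃-Elim fixes what the conditional is strong enough to do, while ⊃-Intro guarantees it is the weakest claim that does it:

```latex
\[
\frac{\varphi \supset \psi \qquad \varphi}{\psi}\;\supset\text{-Elim}
\qquad\qquad
\frac{\begin{array}{c}[\varphi]\\ \vdots\\ \psi\end{array}}{\varphi \supset \psi}\;\supset\text{-Intro}
\]
% If some claim chi also yields psi when combined with phi
% (the behavior ⊃-Elim attributes to phi ⊃ psi), then ⊃-Intro
% gives a derivation of phi ⊃ psi from chi: chi is at least
% as strong as the conditional.
```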
1. This is a relatively more advanced text, covering such topics as the decidability of the monadic fragment of first order quantification theory, the compactness of the propositional language, algebraic semantics, and the sense/reference distinction.
2. The treatment of conditionals is accurate and thorough, if possibly too ideological for some readers: The material analysis is flatly said to be “not very good,” and there is an entire chapter called “Trouble with ‘if’s” that introduces Adams’s and Stalnaker’s analyses without a formal presentation of Stalnaker semantics or anything like that. This is what one might expect from an author possibly most famous for his counterexample to modus ponens. There is no conflation of conditionals and implication claims.
3. There is nothing unusual or misleading about the treatment of satisfaction …
4. … nor are there any evident mistakes.
5. Proof systems are not presented for their own sake or because of any theoretical properties they might exhibit. There are none for propositional logic, where other decision procedures abound. They are introduced for predicate logic as means for recursively enumerating theorems, in a tradition inaugurated by the “main method” in Quine’s Methods of Logic. Instructors interested in relational definitions of connectives à la natural deduction could simply supplement the material here with another unit of their own design.
1. No software. This is a fairly elementary book. It is written in fine style, but quite poorly typeset. Despite being on the less advanced end of the spectrum, it does touch on some topics that other less advanced texts ignore, such as truth-functional completeness.
2. The text says explicitly that “If P, then Q” says “exactly the same thing” as “P implies Q.” By now I am accustomed to encountering the conflation of these ideas, but it is unusual to read what appears to be a fully deliberate embrace of it. What is more unusual still is the appearance of this remark in a textbook with an entire section on the use/mention distinction. I believe that the fixture of that distinction in logic curricula is due to Quine and others emphasizing it in the 1950s—70s precisely in order to distinguish the indicative conditional operator from the implication relation.
3. There is nothing about the book that I think might mislead a student about the generality of interpretation or satisfaction or anything like that.
4. And I did not detect any errors.
5. The natural deduction style proof system is not at all suitable for a presentation of the idea of relational definition of the connectives.
1. No software. This option stands out from the others. It is quite elementary as a formal logic textbook, but it is also really extensive in its treatment of logical fallacies, informal logic, and basic probability theory. It is heartening to see something in this genre also being prepared under a free and open license.
2. The treatment of conditionals rubs me wrong on both scores: the same symbol is used to translate indicative mood conditionals and implication claims, and the argument that this symbol be interpreted truth-functionally is presented as conclusive.
3. The infrastructure stops well short of a fully developed quantification theory, so there is no notion of satisfaction to be misleading about.
4. I didn’t look closely enough to detect errors.
5. The proof system is only partially modeled on natural deduction. For example, there are no subproofs.
1. This book was written explicitly to be integrated with the Carnap software and has Carnap evaluation widgets embedded in its pages (as well as short instructional videos). And the text itself refers to what “the software will do” in various cases. So it is an alternative to the Carnap Book, even hosted on the Carnap domain. It is a considerably more advanced text, with more scope and depth, building up to such topics as the BHK interpretation of intuitionistic logic, completeness, basic probability theory, normalization, etc.
2. Here we are explicitly told in a couple of places that the same symbol translates both indicative mood conditionals and implication claims, although in the translation exercises and discussion of the proof apparatus, conditionals are always the focus. The good news is that in the very well designed chapter on probability, one finds the remark: “Perhaps when we are asserting ‘if p then q’ we are not asserting the truth (or high probability) of p ⊃ q, but rather we are asserting that the probability of p conditional on q is high.” Perhaps!
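The probabilistic contrast the chapter gestures at can be made concrete with a toy computation (the example is mine, not the book’s): over a uniform distribution on a small space of worlds, the probability of the material conditional p ⊃ q can be high while the probability of the consequent conditional on the antecedent is low.

```python
from fractions import Fraction

# Toy probability space: 100 equally likely "worlds".
# p holds in 10 worlds; q holds in only 1 of those 10.
worlds = range(100)
p = lambda w: w < 10          # p true in worlds 0..9
q = lambda w: w == 0          # q true only in world 0

def prob(event):
    """Probability of an event under the uniform distribution."""
    return Fraction(sum(1 for w in worlds if event(w)), len(worlds))

# Material conditional p ⊃ q: true wherever p is false or q is true.
prob_material = prob(lambda w: (not p(w)) or q(w))

# Conditional probability P(q | p) = P(p and q) / P(p).
prob_conditional = prob(lambda w: p(w) and q(w)) / prob(p)

print(prob_material)      # 91/100 -- high
print(prob_conditional)   # 1/10   -- low
```

So asserting the high probability of p ⊃ q and asserting that q is probable given p come apart dramatically, which is the point of the quoted remark.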
3. No misleading aspects of the treatment of interpretation or satisfaction.
4. No errors detected.
5. The treatment of natural deduction is excellent, and this is the only text we surveyed that discusses the relational definition of logical particles (in somewhat different terms than we use). There is an entire section called “Natural deduction, definitions, and normalization.”
Turning now to the texts labeled “free, not open” I will dispense with the five item enumeration and say only what I think is pertinent.
This is a slow, plodding text. Though written with great care, it is not the sort of reading that today’s students will relish (nor does it have a table of contents). It is very good on conditionals, although maybe too dogmatic in the opposite direction (Belnap “asks” whether ordinary language “if … then” expressions are truth-functional and provides this “hint”: “No; but philosophers differ in their opinions on this matter.”). Well, it is Belnap. Pelletier identified Fitch as the author of one of the few prominent textbooks that preserves Gentzen’s intro/elim scheme. His student, Belnap, here took the other extreme with natural deduction “rules” like “tautological implication” (I am an academic descendant in this lineage who humbly aims to restore its original legacy). The scope is wide, covering a bit of set theory, generalized quantifiers, and things like that.
This is an amazing and almost unique book. The material on deductive machinery and basic metatheory at the end appears still to be in progress; it is included there to show that, despite the piles of insults found in the earlier chapters and the framing of the project (e.g., “the usual emphasis on formal proof somehow suggests that mathematical activities are the highest point of logical intellectual skills, a claim as debatable as thinking that the best test of someone’s moral fiber is her behavior in church.”), the authors “have no quarrel with standard curricula.” But the presentation of natural deduction there is almost perfectly suited for showcasing the idea behind the system (it is unfortunate that the rule of indirect proof, used here instead of DN, is called “⊥-Elim”). It is even extended to cover identity and mathematical induction in natural deduction style, which is not common in textbooks.
What is great about this book, though, is its extensive focus, at a fully accurate but still introductory level, on the logical analysis of information flow, games, and computation. There is nothing else quite like this except for …
Falling into your virtual lap in over 1000 pages, this book-in-progress covers just about everything. The long chapters on epistemic logic, decision theory, social choice theory, and game theory are all excellent and well-integrated into the broader development of formal logic. The presentation of conditionals is not just fair and accurate, it is carefully worded and developed. Almost uniquely, it recognizes that the case of a conditional with true antecedent and consequent is as debatable as any other. The natural deduction style proof system is a departure from the intro/elim paradigm like many of the others we’ve seen. The double negation rule is called “¬-Elim”. (I would also prefer seeing a LaTeX style package for formatting the “Fitch style” natural deduction proofs, as these table templates are somewhat gangly in an otherwise nicely typeset text.)
A pretty standard treatment. It handles conditionals with a succinct observation that “we have to make a choice” between treating conditionals with false antecedents as true and abandoning the truth-functional analysis of conditional expressions altogether, proceeding to do the former. But like many other texts that recognize a subtlety here, it does not think there is room to question the truth of conditional expressions with true antecedents and consequents. The natural deduction style proof system adheres in no way to the Gentzen paradigm.
I have known of people using this textbook together with Kaplan’s software Logic 2010. The two were designed to work together. I cannot now find the textbook anywhere. Brian Rabern appears to be using it together with his own software package elogic. elogic is a proof writer/evaluator, hint giver, model checker, and automatic grader very much in the vein of LPL and Carnap. Currently it supports only the style of logic present in Parsons’s text, but other formats are works in progress. Not being able to find the text, I am not certain about its contents. The Kalish and Montague style package in Carnap, which presumably represents what’s going on in this book too, is a radical deviation from Gentzen’s taxonomy and ideas.
This book is what the title suggests: more accessible than most, but aiming at something different (“serious mathematical logic”) than the typical introductory college class. Comparable in scope to Eagle’s text, but quite different in style.
There is really no discussion about the suitability of the material conditional as an analysis of natural language expressions. Again, the focus here is mathematical logic. Natural deduction is presented in a very careful, almost painstaking way, but without attention to or compatibility with Gentzen’s paradigm of relational definition. The method of indirect proof is called “¬-Elim”.
This is a nicely written and easily navigated text. The treatment of conditionals is a bit idiosyncratic. Natural deduction is presented before truth-functional interpretations, and there the point is made that the ⊃ symbol can be called an implication sign because the only time it appears in a proof is when in fact it has been possible to derive the consequent from the antecedent, in which case the truth of the conditional is due to its parts standing in an implication relation. And this is fine (as well as being an explanation for why intuitionists and relevant logicians tend towards the implication reading of this connective). But then when the legitimacy of the truth-functional analysis of conditionals is brought up, the claim is made that while it is not obvious, it can be justified by the natural deduction rules (via a sort of rough soundness argument), which might strike some readers as question-begging. The presentation of natural deduction adheres very closely to the intro/elim scheme for all the connectives, although this might be a happy accident: the double negation rule is called “¬¬-Elim”, and it is paired with a (redundant) “¬¬-Int” rule, despite these rules having nothing to do with introduction and elimination as Gentzen conceived them.
The most compelling things about the text are the “challenges to the paradigm” chapter, which contains some of the finest pedagogical discussion of constructive and relevant logic I have seen, and the gentle build to some basic metatheory. For example, two separate completeness proofs are given for propositional logic, one extendable to quantification theory and the other simpler but not so extendable. (Because of the development of tableaux systems, everything is actually in place for a simpler completeness argument, modeled on the trivial one Samuel Buss provides for the cut-free sequent calculus in his Handbook of Proof Theory chapter.)
Speaking of Slaney, one really ought to check out his Logic4Fun site. This is an amazing, creative, and powerful logic problem solver. It is versatile and very well presented. It is not, so far as I can tell, suited for building an introductory formal logic class around. For example, it is not coordinated with any sort of enrollment roster, grade report generator, vel sim. And its design from the bottom up is more about creating logic puzzles that can have solutions computed when they are properly encoded, not a standard run through formal logic. And I can’t imagine trying to teach a group of introductory students how to encode all the parameters to even describe a problem in the interface. It is possible that this could be put to use together with Logic in Action. The database of prewritten problems even concludes with an “insane” example of an epistemic hat problem.
This is an interactive text with widgets embedded in the webpages for exercises. Conditionals are treated well. There is an appendix in which the truth-functional paradigm is both given support and challenged, and the truth of conditionals is never conflated with implication claims. The treatment of natural deduction doesn’t serve my purposes: the rule called “∨-Elim” is actually disjunctive syllogism!
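For contrast (the schemata here are mine): Gentzen’s ∨-Elim is argument by cases, with two discharged case assumptions, while disjunctive syllogism (DS) is a different rule altogether, one that only becomes derivable with the help of the negation rules.

```latex
\[
\frac{\varphi \lor \psi
\qquad \begin{array}{c}[\varphi]\\ \vdots\\ \chi\end{array}
\qquad \begin{array}{c}[\psi]\\ \vdots\\ \chi\end{array}}
{\chi}\;\lor\text{-Elim}
\qquad\qquad
\frac{\varphi \lor \psi \qquad \neg\varphi}{\psi}\;\mathrm{DS}
\]
```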
This is a very simple text/course. There are drop down menus for questions and examples embedded in the webpage. The treatment of conditionals is somewhat cavalier (it is claimed that the material analysis is unproblematic unless a causal connection is intended), but there is no conflation with implication. The natural deduction style proof system is very far from Gentzen’s thought: the De Morgan inferences are all primitive.
This text doesn’t develop a system of formal logic at all, but it is a fine repository of exercises, especially strong in that it draws, like DeLancey’s book, from actual passages from history and literature.
Perfect on conditionals, joining the ranks of Velleman, Dever, and forall x: Calgary. Natural deduction in name only: the label “∨-Elim” is slapped onto the disjunctive syllogism.
Another piece of software available for free online is Martin Frické’s Softoption. The website embeds this software together with instructional and tutorial material which some users might find useful. A lot of it is organized like course notes, with assigned readings from another textbook (Bergmann’s The Logic Book). In the instructional notes, the conditional is gently (i.e., in name only) conflated with the implication relation, but there is a section on “the paradoxes of material implication” with a single example and a couple of links (only one of which works). The example is presented as self-explanatory and exemplifies only one of the many things people have found wanting about the material analysis of conditional expressions. Somewhat like Carnap, the software is pretty versatile, covering propositional and predicate logic, identity, lambda calculus, induction, and other things. There are truth trees as well as a natural deduction system. The propositional natural deduction system is coded just like that of LPL: the double negation rule is called “¬-Elim” and what Gentzen called “¬-Elim” is called “Absurdity-Intro”. There are notational options to toggle. One has to dig around a bit to find how to use the software to write one’s own problems. And there doesn’t seem to be any autograding facility: All users have the same username and password, and one is advised to have students print and hand in the work they complete with the software.
I mentioned Rabern’s elogic above. What I said about it is based on inferences from screenshots and what I know about the Kalish and Montague text. But the documentation says that other versions of the deduction system are on the way. The documentation also says that “elogic follows the Software as a Service model and is available to educational institutions.” I sent a request from an institutional e-mail address for an account so that I could test the software as an option for use at my university but received no reply after several days.
Finally, because so few of these texts use natural deduction in a way that allows a presentation of the style of relational definition, I will mention three non-free books that do. An older one,
and two newer ones,
All three of these come close to conflating the conditional operator and implication relation. The Gamut and von Plato are launched from within the constructivist/intuitionist/verificationist theory of meaning, so the elision is somewhat more excusable (see the entry on Slaney’s text above). Huth and Ryan just never mention conditionals and call the arrow symbol an implication connective whose semantics is “not quite as intuitive” as that of the other connectives, something that a reader might “feel slightly uncomfortable with.” Carnap is equipped with style packages for the formalisms in each of these texts.
Different instructors will have different priorities in textbook selection. As a scorned lover of LPL, I am perhaps overly sensitive to some of its bad habits. But maybe the descriptions I have put together can be helpful for people considering these options for their own classrooms.
* The entire volume, accessible here, is highly recommended. In it one finds in addition to van Benthem’s wonderful paper, several excellent essays about logic pedagogy. None of the authors seem to have quite the same agenda as me, but Thompson and Seligman’s paper on teaching natural deduction uses a system perfectly suited to explain relational definition, as does Villadsen, Birch Jensen, and Schlichtkrull’s description of the proof assistant NaDeA for Isabelle (which references Huth and Ryan). Slaney’s description of Logic4Fun appears here, too. And there are other treats for logic pedagogues.