
The origin and nature of logic

In today’s blog we explore logic with philosopher Dr. Gillian Russell, who holds a PhD from Princeton University and is Professor of Philosophy at the Dianoia Institute of Philosophy at the Australian Catholic University.


David: Does logic originate with Aristotle or is there more to the story that takes us further back in time and perhaps across cultures?


Gillian: That depends a bit on whether you are thinking of logic as something that all adult humans (and maybe some animals) use in everyday life, when they’re, say, thinking about how to get food or avoid predators, or whether you’re thinking of it as a theoretical discipline that studies things like truth-preservation, mathematical proof, paradoxes and argument forms. On the first way of thinking about it: of course logic originates way before Aristotle, and people from all cultures, including those that pre-date the ancient Greeks, have used logic.


If you are thinking about it as a theoretical discipline, Aristotle still wasn’t the first. There is logic in the pre-Socratic philosophers in the Greek tradition - Eubulides of Miletus is supposed to have invented the Liar Paradox, for example - and even earlier in China and India. There’s also an important tradition of logic in Islamic philosophy, in which Avicenna is a central figure.


But Aristotle was a big milestone. He brought a lot of the work of his predecessors together in a systematic fashion and his work on syllogistic forms dominated the subject for millennia. It’s pretty striking that in The Critique of Pure Reason (1787) Kant was able to write: “Since Aristotle . . . logic has not been able to advance a single step, and is thus to all appearance a closed and completed doctrine.” It’s especially striking because no-one could write that now. Logic went through a period of intense development at the end of the 19th and in the 20th centuries, with the work of logicians like Frege, Peirce, Russell and Whitehead, Gödel and Tarski. From a modern perspective it can easily feel as if logic didn’t really get started until the end of the 19th century.


David: On closer inspection, the anchor or foundation to logic seems elusive. Has our exploration not hit bedrock or is logic a fluid or emergent thing that need not ubiquitously apply?


Gillian: “Foundations of logic” covers a lot of different issues. I think it would help to make things more concrete if we talk about what logic is first. That’s still a big question, but if we ask about foundations with a particular conception of logic in the background, it will help clarify the questions.


Logic is the study of the entailment relation on sentences, and what matters to entailment is truth-preservation: we’re interested in whether certain sentences—say, the premises of an argument—could be true without other sentences—such as the conclusion of that argument—being true.


We usually do this in an artificial language where we stipulate the meanings of the expressions we use, because natural languages are really complicated and have features that can be tricky to handle—like ambiguity and context-sensitivity. So let me use this arrow: -> to mean the conditional ‘if … then’, and the letters p and q as short for sentences that can be true or false (it doesn’t really matter which sentences we use, but it could be something like “snow is white” for p and “grass is green” for q). So then the sentence “p->q” means “if snow is white then grass is green”. We’ll also write “-p” for the negation of p (so you could think of “-p“ as short for something like “it is not the case that snow is white”).


So here are two arguments using our very simple language (I’ll write “therefore” as |= ).


  1. p->q, q |= p

  2. p->q, -q |= -p


If we wrote these out longhand they’d say something like (1) “If snow is white then grass is green. Grass is green. Therefore, snow is white”, and (2) “If snow is white, then grass is green. Grass is not green. Therefore, snow is not white.” You can probably see why we tend not to do that… Just one of the advantages of formal languages is concision.


Classical logic is something like the Newtonian mechanics of logic—it’s the logical theory that everyone learns first and it does a great job on a whole lot of problems—certainly way better than an unsystematic collection of intuitions. But there are some questions about whether it holds quite generally. Here’s what classical logic says about the two arguments above. First, the two different sentences p and q could each have either one of two truth-values: true or false. That means that when we’re thinking about whether the premises could be true without the conclusion being true, we have four possibilities to consider: p and q both true, p true but q false, p false but q true, and both p and q false.


The classical truth-table for the conditional tells us how to evaluate the truth of a conditional based on the truth-values of its parts. It says that a conditional is true if and only if either its antecedent (the left hand sentence) is false, or its consequent (the right hand sentence) is true.


And similarly, the classical table for negation tells us how to evaluate the truth of a negated sentence: -p is true if and only if p is false.
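To make the two truth tables concrete, here is a minimal sketch in Python. It is my own illustration rather than anything from the interview, and the function names `conditional` and `negation` are just labels I have chosen; it encodes the classical clauses just described and prints the table for p->q.

```python
# A sketch of the classical truth-functions described above.
# The function names are illustrative, not standard notation.

def conditional(p, q):
    # p -> q is true iff the antecedent p is false or the consequent q is true
    return (not p) or q

def negation(p):
    # -p is true iff p is false
    return not p

# Print the classical truth table for p -> q.
for p in (True, False):
    for q in (True, False):
        print(f"p={p}\tq={q}\tp->q={conditional(p, q)}")
```

Running it gives the familiar four-row table: the only row where p->q comes out false is the one with p true and q false.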


So now we can apply this to 1. and we immediately see that there is an assignment of values to p and q on which the premises of the argument are true but the conclusion is false: if p is false and q is true, then p->q is true and q is true (so both premises are true) but the conclusion p is false. That means the argument doesn’t preserve truth, or as we can also express it: the premises don’t entail the conclusion. 1. is in fact a famous fallacy with a name: Affirming the Consequent.


2. on the other hand does preserve truth. We can see this by considering each of the four cases and noting that none of them are cases where all the premises are true and the conclusion is false. Suppose both p and q are true - then the conclusion -p is false, but that’s ok because so is one of the premises (-q). If p is true and q is false, then the conditional p->q is false, and so again one of the premises is false. If p is false and q is true, then the premise -q is false, so once more not all the premises are true. And finally, if both p and q are false, then the conclusion -p is true. So on all four scenarios you don’t get all true premises with a false conclusion—and so the argument is valid: if the premises are all true, so is the conclusion. This argument is also famous - it’s standardly known as modus tollens.
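The same bookkeeping can be automated. Here is a small sketch of my own (reusing the `conditional` function from the earlier snippet, with an illustrative helper I have called `entails`) that checks both arguments by running through the four assignments, exactly as the paragraph above does by hand.

```python
from itertools import product

def conditional(p, q):
    return (not p) or q  # classical truth table for ->

def entails(premises, conclusion):
    # Valid iff no assignment of truth-values to p and q makes every
    # premise true while the conclusion is false.
    for p, q in product((True, False), repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            print(f"counterexample: p={p}, q={q}")
            return False
    return True

# 1. p->q, q  |= p   (affirming the consequent)
print(entails([conditional, lambda p, q: q], lambda p, q: p))

# 2. p->q, -q |= -p  (modus tollens)
print(entails([conditional, lambda p, q: not q], lambda p, q: not p))
```

The first call prints the countermodel (p false, q true) and returns False; the second returns True, matching the verdicts above.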


There are two main things you might have in mind when you ask about the foundations of logic. One is epistemological: how do we know the most basic logical truths—such as that affirming the consequent isn’t truth-preserving but that modus tollens is? The other is about the metaphysics of logic: in virtue of what are entailment claims true?


These are really questions in the philosophy of logic (much as the foundational questions “Do imaginary numbers really exist?” and “How do we know that 1+1=2?” are questions in the philosophy of arithmetic).


There are a lot of different answers proposed to them. Some people think that we know logical truths through a special a priori talent (the word “faculty” tends to get used here) for figuring them out. Others - like the anti-exceptionalists (more about them below) - think logicians formulate complicated theories designed to explain large swathes of data (including what is going on with various paradoxes like the Liar and Sorites) and that we should accept all and only the entailments endorsed by the best theories.


When it comes to the metaphysics, again, there are different views. One associated with the logical positivists (like Rudolf Carnap and other members of the Vienna Circle) is that logical truths are analytic or true in virtue of meaning. Much like the sentence “All bachelors are unmarried”, we can tell that they are true without doing any experiments or making any observations, simply on the basis of our linguistic understanding. But a different view is that entailment claims are no different from other very general claims about the world, and they are therefore made true by what goes on (or doesn’t go on) in the ordinary world. (Imagine having a similar view about “all bachelors are unmarried” - what makes that true? Possible answer: the actually existing bachelors and their worldly histories.)


Anyway, the foundations of logic is a fascinating and very complicated and controversial topic. I don’t think we’ve tapped it out just yet.


David: Has modern physics had an influence on how philosophy looks at logic?


Gillian: Two things spring to mind here. One is an old issue about whether quantum mechanics might motivate a change to logic. Hilary Putnam, in particular, entertained the idea that it might lead us to give up on the classical distributive laws. [For example, the law that A&(BVC) is equivalent to (A&B)V(A&C).]
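For readers who want to see what the classical commitment amounts to, here is a tiny sketch of my own (not Putnam’s argument, and not from the interview) confirming that two-valued semantics validates the distributive law in question on all eight assignments to A, B and C.

```python
from itertools import product

# Check A & (B v C) against (A & B) v (A & C) on every classical assignment.
for a, b, c in product((True, False), repeat=3):
    left = a and (b or c)
    right = (a and b) or (a and c)
    assert left == right
print("The two sides agree on all eight classical assignments.")
```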


The thing is, almost no-one believes this. It’s mostly brought up as an example when we’re entertaining the idea that some scientific discovery could in principle bring us to reject a logic. But usually when people look at that particular proposal in detail they end up thinking that it isn’t a particularly appropriate revision - just an interesting idea.


But the other thing that comes to mind is the idea that the epistemology of logic might be a lot more like the epistemology of physics than is generally thought. For example, it’s traditional to say in philosophy that logic is special in lots of different ways. It’s a priori (knowable independently of experience), analytic (true in virtue of meaning), unrevisable, and necessary, and its epistemology is PROOF—like in mathematics. Whereas we generally say that the claims of the empirical sciences, including physics, are a posteriori, revisable in response to experience, contingent, and synthetic (not true in virtue of meanings alone, but also in virtue of the way the empirical world is arranged.) The epistemology of science is about collecting data, formulating theories that explain the data, and choosing the best (simplest, most elegant, unified etc.) theory that does.


But some philosophers - today they tend to get called “anti-exceptionalists” - argue that logic is much more like science than was traditionally thought. Logicians also collect data (including the various paradoxes and consequences of formulating logics and theories the way we do) and generate theories (alternative logics - competing theories of entailment) to explain that data.


Anyway, if the anti-exceptionalists are right, logicians have a lot to learn from physics about good scientific methodology.


David: Have we made progress on, or learned anything about, the is/ought distinction that we could teach Hume?


Gillian: I think so, yes! Some of my own work is on proving things like Hume’s Law - the principle that descriptive premises never entail normative conclusions - as metatheorems about deontic logics—logics that deal with operators like O (it ought to be the case that q) and P (it is permissible that q). Hume’s Law is named after a famous passage from Hume’s Treatise of Human Nature (1740):


“In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it's necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.“ (my bold)


One thing I take myself to have shown is that Hume’s Law is just one of a small family of similar principles, which I’ll call Barriers to Entailment. Here are some others:


No universal conclusion from particular premises: for example, no matter how many ravens you examine and discover to be black, it will never follow from a set of particular claims (e.g. that particular ravens are black) that ALL ravens are black, because you can’t get universal claims from particular ones using logic. (There’s a toy illustration of this barrier sketched after this list.)


No conclusions about the future from premises about the past. (This is also a principle we associate with Hume!) No matter how many observations you make about the past, no claim about the future follows - for that you would always need some claim about the future in your premises.


No conclusions about how things have to be follow from premises about how they are. (If you like, from observations about the actual world, nothing follows logically about what it is like in other possible worlds.)


And of course: No normative conclusions from descriptive premises. (Hume’s Law)
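To illustrate the first barrier concretely, here is a minimal sketch of my own (a made-up three-raven “domain”, not Professor Russell’s formal proof) in which the particular premises are true but the universal conclusion is false.

```python
# A toy countermodel: two observed black ravens, one unobserved white raven.
ravens = {"a": "black", "b": "black", "c": "white"}

particular_premises = [ravens["a"] == "black", ravens["b"] == "black"]
universal_conclusion = all(colour == "black" for colour in ravens.values())

print(all(particular_premises), universal_conclusion)  # True False
```

Since there is a way for the particular premises to be true while the universal conclusion fails, the former do not entail the latter.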


My work shows how to formulate each of these precisely and prove them in the same way about standard logics. That, I think, would have been news to Hume. We could also show him how to respond to the various counterexamples that have been proposed to Hume’s law over the years, including by Kant and by the famous New Zealand logician A.N. Prior. I hope that would have been welcome news to Hume!


David: Thank you Professor!

