A Programmer's Discrete Mathematics

Prologue

As programming students, you have been using Discrete Mathematics, well, forever. Every time you have evaluated an IF statement, every time you have counted something, every time you have used some primitive data type, every time you have thought about the elements of a set, every time you have used a function, you have been using Discrete Mathematics. Over your career you will learn a lot about programming and computer systems, but, perhaps unbeknownst to you, underlying and unifying it all are concepts found in Discrete Mathematics. Once you know the basis for most anything, programming ceases to be mere memorization of a bunch of programming tricks and techniques and instead becomes an understanding of how it is all glued together. Real ingenuity and inventiveness come from that. Do you really need it? No, not strictly, but as with so many other fields, knowing what is really under the hood makes the rest of the craziness that we programmers deal with make more sense; it helps you think more easily about what you are doing while programming.

To provide some structure for what we will be looking at, this tutorial can be thought of as covering the following concepts:

So, let's get started, starting with the notion of Logic, Chapter 1.

Within Chapter 1 you will find the following sections:


Chapter 1: Logic

Specifically, Binary Logic. Or more formally, Boolean Algebra, an algebra of logic. It starts with two discrete states:

all meaning the same thing, each having symbols representing one of two states. That is all. And yet the whole of computer science as we have known it for decades is not much without that as a basis.

With it you already know that you can ask questions like:

Within an IF statement, you do not normally think in terms of that IF statement testing for TRUE or FALSE, but that is exactly what it is doing. For you it often looks like some kind of comparison, as in

All those comparisons, including the last, are evaluated as being TRUE or FALSE. In the last, FALSE is the case where X is 0; any other value of X is understood to be TRUE. Or, still looking at the last bullet, X could be of the Boolean data type, which really does directly mean TRUE or FALSE.
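To make that concrete, here is a minimal sketch (in Python, chosen here purely for illustration) showing that a condition is nothing more than a value that comes out TRUE or FALSE:

```python
x = 5

# A comparison evaluates to a Boolean value before the IF ever acts on it.
condition = x > 3
print(condition)     # True

# A bare integer used as a condition: 0 counts as FALSE, anything else as TRUE.
print(bool(x))       # True
print(bool(0))       # False
```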

Nothing particularly special, right? All that we are going to do in what follows is formalize stuff like that and broaden it, A LOT.

So, where to start? Let's start with a BIT. It is just enough memory in your computer to represent one binary state, a 0 or a 1. Indeed, most everything within your physical computer is based on only that.

Fortunately, you do not normally need to care how a 1 or 0 is physically represented. All that, and a lot more, is in the realm of the electrical engineers and physicists to determine. They know how to abstract away, for you, what it is that you care about. From your point of view, it is just TRUE or FALSE, YES or NO, 1 or 0. A very real abstraction, but nonetheless a very real primitive concept upon which much else that you use is based. A Bit.

Next? Truth Tables, or if you like Logic Operations.


Logical Operations: AND, OR, Exclusive OR, and NOT

For now, let's talk about only three of these logic operations, all of which you are hopefully familiar with: AND, OR, and NOT. The following represent the truth tables for each. We will look at a few more logic operations shortly.

The AND and the OR operations require two inputs, the NOT, just one. And, with you being programmers, for consistency we will be using 1 and 0, but you know that this also represents TRUE and FALSE, YES and NO, as well. With two inputs, there are of course four possible states, with one input, two possible states.

These are exactly as you would expect them to be.
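As a sketch (again in Python, purely for illustration), the three truth tables can be generated by brute force, using `&`, `|`, and `1 - a` on single bits:

```python
# Enumerate the AND and OR truth tables over all four input pairs.
print("A B | AND OR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", a & b, " ", a | b)

# NOT takes a single input bit and flips it.
print("A | NOT")
for a in (0, 1):
    print(a, "|", 1 - a)
```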

Example:


When computer engineers design their logic using AND, OR, or NOT, they use diagrams like the following. Each diagram represents the corresponding truth table shown above. For AND and OR, two different bits are the input and one bit is the output. This is just nomenclature, symbolism. And, again, the bits are 0 / 1, but their meaning could just as easily be FALSE / TRUE. These are just symbols meaning the same thing, with different electrical circuits implementing each of them.



Here, in Discrete Mathematics, we have our own nomenclature as in the following:

Each is just a symbol, a shorthand, part of the language used in Discrete Mathematics. Rather than using the terms AND, OR, or NOT in the truth tables above, we could have just as well used ⋀, ⋁, and ¬ with exactly the same meaning. As a mental shortcut to help remember these symbols, it might help to think of ⋀ as the A in AND but lacking the crossbar, and ⋁ as a U as in Union.

So, with that in mind, let's define a few more logic operations, their symbols, and their associated concepts. We will start with Exclusive OR. Its logic symbol, engineering symbol, and truth table are as follows:



In short, the result is TRUE only if the inputs are different.

Notice in the truth table that we have also changed the names of the inputs, now called A and B. So, we would say that the output (OUT) is

  1. A ⊕ B for Exclusive OR (or XOR)
  2. A ⋀ B for AND
  3. A ⋁ B for OR
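A quick Python sketch of that rule: the built-in `^` operator is Exclusive OR on bits, and its output is 1 exactly when A and B differ.

```python
# Exclusive OR: 1 when the inputs differ, 0 when they match.
for a in (0, 1):
    for b in (0, 1):
        assert (a ^ b) == (1 if a != b else 0)
        print(a, b, "->", a ^ b)
```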

Logical Operations: Implication, If And Only If

The previous three (AND, OR, XOR) are just a few of the possible truth tables. Two inputs, A and B, and for the full set of values of those two inputs, we get a corresponding output. AND, OR, and Exclusive OR each had differing outputs. That is just three of the possible ways to mix and match outputs to define a logical operation. So, next we delve into two more, the logic operations of Implication and If And Only If.

For these we show in the following two more truth tables and their symbols.

  1. A→B for Implication
  2. A⟷B for If And Only If (IFF)

An Implication ( A→B ) is read as the proposition "If A, then B". Notice that it is False only when A is True and B is False. The order matters: A implying B is not the same as B implying A.

For Implication, do not think of this "then" as the THEN in your programs as in

Instead, think of both of these as the binary logic (i.e., the truth table) used in evaluating the IF itself, the operation(s) fitting within the parentheses.

I know of no single operation in any processor hardware or in any programming language that exactly implements Implication. But as a hint for later, we will be seeing that A→B has the same truth table as (¬A) ⋁ B. If you do need to program logic like that of Implication, you can use the latter identical operation.
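Taking that hint at face value, here is a small Python sketch (the helper name `implies` is my own) that builds Implication out of NOT and OR and walks its truth table:

```python
def implies(a, b):
    """A -> B, built from the equivalent operation (NOT A) OR B."""
    return (not a) or b

# The only False row is A True with B False.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", implies(a, b))
```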

Similarly, there is no programming language or processor hardware which does If And Only If (A⟷B) as a single operation. Even so, the truth table for it has some meaning. So, how would you implement it? Taking a good hard look at its truth table and that of Exclusive OR, you will notice that it is no different than Exclusive OR with the output inverted. Said differently, it is the NOT of the Exclusive OR: it is False only if the inputs are different, True if the inputs are the same. If there were hardware logic for doing it, its symbol would likely be the Exclusive OR gate with an inverted (circled) output.
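Whatever the symbol, the NOT-of-Exclusive-OR relationship is easy to check in code (a Python sketch; the helper name `iff` is mine):

```python
def iff(a, b):
    """A <-> B, built as the NOT of Exclusive OR."""
    return not (a ^ b)   # Python's ^ works on bools as XOR

# True exactly when the two inputs are the same.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", iff(a, b), "(same as a == b:", a == b, ")")
```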



Propositions and Propositional Variables

Bear with me in what is to come. And, for now, all of the above are only just the truth tables shown with associated symbols. We are going to next make a subtle addition.

So, first, a reminder. Recall that A and B are essentially each just separate questions, each with a binary answer.

Ask your question with a True/False as an answer; each such question is your A or B.

So, let's define a couple.

  1. For an A, let's make it "Is X greater than 10?"
  2. For a B, let's make it "Is X less than -10?"

We can OR those with this being represented as A ⋁ B, meaning "Is X either greater than 10 or less than -10?" Or programmatically,

A is the test of X exceeding 10 and B is the test of X being less than -10. This basic form holds not only for the OR, but for all the truth tables we have looked at as well.
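In Python, that might be sketched like so (the variable names are mine):

```python
x = 42

a = x > 10    # proposition A: "Is X greater than 10?"
b = x < -10   # proposition B: "Is X less than -10?"

# A OR B: "Is X either greater than 10 or less than -10?"
if a or b:
    print("X is outside the range -10 through 10")
```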

Those questions get formalized in Discrete Mathematics as the term Proposition. A and B are then called Propositional Variables. These, really more functions than variables, are anything that evaluates to a Boolean.

Propositional Variables; just another term used in Discrete Mathematics, and you are already familiar with the concept. You have also likely run across these in your programs as Boolean variables as in something like

Notice how we are slowly abstracting stuff that you already know?

(As a quick aside, the term Boolean comes from George Boole who is credited with defining the foundations for Boolean algebra.)

Next we look at the meaning of a couple new terms, Tautology and Contradiction.


Tautology

A Tautology is a formula, not really a truth table: one which is always True for every value of its propositional variables. Shown as a truth table, a Tautology is nothing more than the following:

This does not really look all that useful on its own, but it is useful as the target when proving that some formula is a tautology. (Yes, I know, it seems circular for now.) A simple example of a tautology is

where the latter comparison means NOT EQUAL. Or, said slightly differently,

Either way, the results are always TRUE.

For more complex examples, consider the following formulas which happen to each be tautologies:

  1. ( A ⊕ B ) ⋁ ( A⟷B )
  2. [ ( A→B ) ⋀ A ] → B

We are going to prove that they are, by way of a couple of truth tables, ones where – as usual – all possible pairs of A and B are used. Read these from left to right, with each formula broken down into its component pieces. Result, the formula, is in green at the right.

In this second proof, keep in mind that the order for Implication matters, A→B.

Notice, again, that since both results are fully True for all input values A and B, these formulae are Tautologies.
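The same proofs can be run by brute force in code. This Python sketch (the helper name `is_tautology` is my own) evaluates each formula for all four (A, B) pairs, exactly as the truth tables do:

```python
def is_tautology(formula):
    """A two-variable formula is a tautology if it is True for all inputs."""
    return all(formula(a, b) for a in (False, True) for b in (False, True))

# 1. (A XOR B) OR (A IFF B)
print(is_tautology(lambda a, b: (a ^ b) or (a == b)))                # True

# 2. [(A -> B) AND A] -> B, with A -> B written as (NOT A) OR B
print(is_tautology(lambda a, b: (not (((not a) or b) and a)) or b))  # True
```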

Yes, I know, not particularly interesting from a programming point of view. So far all we are really doing are defining terms.


Contradiction

A Contradiction is the opposite of a Tautology. A Contradiction is a formula which is always False for every value of its propositional variables. Perhaps obvious here, the truth table would look like the following:

And, yes, the means of proving that a formula is a Contradiction is similar to what we did for Tautology. The following are Contradictions (all outputs are False):

  1. A ⋀ (¬A)
  2. ( A ⊕ B ) ⋀ ( A⟷B)
  3. ( A ⋁ B ) ⋀ [( ¬A ) ⋀ ( ¬B )]

The first - A ⋀ (¬A) - almost goes without saying, but let's say it anyway.

As you would expect, all False.

As a proof of the latter two, which also use truth tables, consider the following:

Notice in the above that this is proof that Exclusive OR is the inverse of If And Only If.
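Again by brute force, a Python sketch (the helper name is mine) confirms all three:

```python
def is_contradiction(formula):
    """A formula is a contradiction if it is False for every input pair."""
    return not any(formula(a, b) for a in (False, True) for b in (False, True))

print(is_contradiction(lambda a, b: a and (not a)))                       # 1.
print(is_contradiction(lambda a, b: (a ^ b) and (a == b)))                # 2.
print(is_contradiction(lambda a, b: (a or b) and ((not a) and (not b))))  # 3.
```

All three print True, matching the truth-table proofs.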



Contingency

A Contingency is, well, everything else. Like a Tautology and a Contradiction, a Contingency is a formula, though one whose output is a mix of True and False across the values of its propositional variables. By this definition, even a formula of two input variables A and B containing only AND ( A ⋀ B ) or OR ( A ⋁ B ) would be a Contingency. Still, let's look at whether the following is a Contingency:

As you can see, it is a mix of True and False, making this formula a Contingency.
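In code the mix is plain to see; this Python sketch tabulates A ⋀ B over all four input pairs:

```python
# A AND B over all four (A, B) pairs: three False rows, one True row.
outputs = [a and b for a in (False, True) for b in (False, True)]
print(outputs)   # [False, False, False, True]

# A Contingency: at least one True output AND at least one False output.
print(any(outputs) and not all(outputs))   # True
```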

For now, just treat these – Tautology, Contradiction, and Contingency – as three definitions of the words. Clearly enough, the reason for them is not yet obvious.


Propositional Equivalences

Now for something that you can use in your programming even now. Think of the following as different ways to write the same logic in an IF statement. Picture two IF statements, or two different statements defining two Boolean variables. We will call them X and Y. These two statements are Logically Equivalent if either of the following two conditions holds:

For example, picture two IF statements you are considering writing as in the following. Are these two producing the same results?

  1. ¬( A ⋁ B)
    IF ( ! ( A OR B ) ) { ⋯ }
  2. ( ¬A ) ⋀ ( ¬B )
    IF ( ( !A ) AND ( !B) ) { ⋯ }

So, yes, based on the truth table, the two statements are equivalent because they produce the same results. Why does it matter? In the first, there are two operations to create the final Boolean value to test, in the second there are three. So, you could write your IF statement either way and it has exactly the same meaning. Or, perhaps one seems more readable to you than the other; go with that!
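You can also let the machine do the truth-table check; this Python sketch asserts that the two conditions agree on every (A, B) pair:

```python
# NOT (A OR B) versus (NOT A) AND (NOT B), checked over all input pairs.
for a in (False, True):
    for b in (False, True):
        assert (not (a or b)) == ((not a) and (not b))
print("the two IF conditions are logically equivalent")
```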

Still, is this also a tautology? To check, go back and review what produces the operation X ⟷ Y (If And Only If).

Yes, a tautology, making these two statements equivalent for this reason as well. Bottom line, you could code this either way with the same results.

As in the above, given two propositional variables p and q, you denote them as being Logically Equivalent by writing p ≡ q.

Or, in that example where we have

then

Let's try another. Given statements

are p ≡ q?

What makes this comparison interesting is that physical processors don't have any ALU (Arithmetic Logic Unit) operation supporting an instruction which executes an Implication operation A→B on bits. But there is a logical equivalency which you can execute on the hardware quickly enough. The Implication operation itself has a logical equivalence; it is OR( NOT(A), B ), the second of these two operations. Let's take a look at that to be sure.

Yes, same results.

Hopefully you see the thought process. If two statements are logically equivalent, either will produce the same results in your Boolean operations such as IF statements. If you see the need for a truth table like that of Implication, use this instead.


Inverse, Converse, and Contra-positive

In this section we get even more abstract, adding still more terminology. We start with the following:

We return also to the notion of Implication and expand on it.

Repeating the Implication truth table here:

Implication: A→B

Here, we are going to try to further explain the If-Then nature of Implication, using a hypothesis (p) and a conclusion (q), as in p→q.

This IF-Then, though, should not be read the way that you do in your programming languages. There you read it as

It is not that. Here we really are dealing with the test of a Hypothesis and an associated Conclusion, one which may or may not be True. For example, normally you might have a Hypothesis p reading as follows:

Yes, perhaps, normally, but not necessarily always. Conclusion q could be True or False, as could Hypothesis p.

Some examples of the meaning of this are shown in a subsequent section.

But before going there, let's consider Inverse in this light.


Inverse

The proposition

(¬p) → (¬q)

is called the Inverse of p→q. It requires the NOT of both p and q, not just p.

Where Implication is the statement "If p, then q", the Inverse will be "If not p, then not q".

Notice that these are not what you might think of as opposite.

We will see in a moment that the truth table for the Implication part of these remains the same; we are just going to use the NOT of p and q as the inputs.

But first, a couple more definitions of terms:


Converse and Contrapositive

The Converse of p→q swaps the two: q→p. The Contrapositive negates and swaps them: (¬q)→(¬p). In the following truth table, we are seeing that an Implication is logically equivalent to its Contrapositive, p→q ≡ (¬q)→(¬p), while the Inverse is logically equivalent to the Converse.
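Taking the standard definitions (the Converse of p→q is q→p; the Contrapositive is (¬q)→(¬p)), a Python sketch (the helper name `implies` is mine) lays the four truth tables side by side and checks the two equivalences:

```python
def implies(p, q):
    """p -> q, using the equivalent form (NOT p) OR q."""
    return (not p) or q

print("p q | p->q inverse converse contrapositive")
for p in (False, True):
    for q in (False, True):
        impl = implies(p, q)
        inv = implies(not p, not q)     # inverse
        conv = implies(q, p)            # converse
        contra = implies(not q, not p)  # contrapositive
        print(p, q, "|", impl, inv, conv, contra)
        assert impl == contra           # implication == its contrapositive
        assert inv == conv              # inverse == converse
```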



Boolean Algebra Properties

From the algebra you have known for, like, forever, you are familiar with the concepts of the distributive, associative, and commutative properties. You know that these hold for addition and multiplication, for example, but not for subtraction and division. Boolean algebra has similar concepts and, yes, different terminology.

Consider AND (⋀) and OR (⋁) for example. The statement (A ⋀ B) produces the same result as (B ⋀ A). (A ⋁ B) produces the same result as (B ⋁ A). The symmetry holds for Exclusive OR (⊕) as well.

You likely know already what this means to your IF statement in your programming. It relates to the order of the execution of the logical operations. For example, does it really matter whether you say

versus

Not really, only to performance if you suspect that X is typically greater than 10. (With that knowledge, you can avoid executing the second comparison.)

Similarly, expanding to more propositional variables, each of the following statements produces the same result, with the () representing higher precedence and order otherwise executed from left to right.

And, of course, more. Order is irrelevant. The situation is the same whichever of these operators ( ⋀, ⋁, or ⊕ ) the statements use.

But you cannot say the same thing for Implication. Just looking at the earlier truth tables, you can tell that for Implication (A→B) produces a different result than (B→A).

So, being a bit more formal, ⋯

Let p, q, and r be Boolean proposition variables. We can define logical equivalencies for quite a few sets of operations as in the following:

What this all means to you as a programmer is that you could write your IF statements in either of these two ways with identical results, whatever reads better to you.

As an engineer or working with engineers, you might also have the opportunity of taking the step-by-step logic of your programs and mapping it into hardware and have it be executed in picoseconds. So, let's again flip back to the symbols used by engineers and take a look at such equivalencies from their point of view. In the following, both figures represent the same logic; they are logically equivalent.


p ⋀ ( q ⋁ r ) ≡ ( p ⋀ q ) ⋁ ( p ⋀ r )


¬( p ⋀ q ) ≡ (¬p) ⋁ (¬q)
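Both of these equivalencies can also be machine-checked; this Python sketch walks all eight (p, q, r) combinations:

```python
# Verify the distributive law and De Morgan's law over every input triple.
for p in (False, True):
    for q in (False, True):
        for r in (False, True):
            assert (p and (q or r)) == ((p and q) or (p and r))  # distributive
            assert (not (p and q)) == ((not p) or (not q))       # De Morgan
print("both equivalences hold for all eight input combinations")
```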