Recall that in module 0144 we explored the addition and subtraction operations of binary numbers. While we can certainly appreciate the simplicity of binary addition and subtraction, how do we perform them with a computer?
The key is to look at the result of addition/subtraction and the result of carry/borrow as two different functions. Let us review the addition rules: $0 + 0 = 0$ (carry 0), $0 + 1 = 1$ (carry 0), $1 + 0 = 1$ (carry 0), and $1 + 1 = 0$ (carry 1).
Likewise, we can look at subtraction similarly: $0 - 0 = 0$ (borrow 0), $0 - 1 = 1$ (borrow 1), $1 - 0 = 1$ (borrow 0), and $1 - 1 = 0$ (borrow 0).
To make the notation more convenient, let us define $x \oplus y$ to be the result of adding or subtracting two single-digit binary numbers $x$ and $y$. This means that $0 \oplus 0 = 0$, $0 \oplus 1 = 1$, $1 \oplus 0 = 1$, and $1 \oplus 1 = 0$.
We can also define $C(x, y)$ to represent the carry of adding two single-bit numbers $x$ and $y$. The definition is as follows: $C(0, 0) = 0$, $C(0, 1) = 0$, $C(1, 0) = 0$, and $C(1, 1) = 1$.
Last, we define $B(x, y)$ to represent the borrow of subtracting a single-bit number $y$ from another single-bit number $x$: $B(0, 0) = 0$, $B(0, 1) = 1$, $B(1, 0) = 0$, and $B(1, 1) = 0$.
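To make these definitions concrete, here is a minimal Python sketch; the names result_bit, carry, and borrow are mine, standing in for $\oplus$, $C$, and $B$:

```python
# Truth-table definitions of the three single-bit functions.

def result_bit(x, y):
    # x ⊕ y: the result of adding OR subtracting two bits; same table either way.
    return (x + y) % 2

def carry(x, y):
    # C(x, y): only 1 + 1 produces a carry.
    return 1 if (x, y) == (1, 1) else 0

def borrow(x, y):
    # B(x, y): only 0 - 1 needs to borrow.
    return 1 if (x, y) == (0, 1) else 0

# Print all four input combinations for each function.
for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  result={result_bit(x, y)}  "
              f"carry={carry(x, y)}  borrow={borrow(x, y)}")
```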
Of course, so far, we have not made much progress toward implementing addition or subtraction using transistors.
Because the functions $\oplus$, $C$, and $B$ all take zero and one as parameters, and they only return zero and one as results, we suspect that they can be implemented using boolean operations. This is done by looking at 0 as false and 1 as true.
Let us pick the easiest one, $C$. $C(1, 1) = 1$, whereas all other inputs have a result of 0. Doesn’t this remind you of conjunction? Indeed, $C(x, y) = x \wedge y$. In electrical engineering, the conjunction operator is often omitted so that we can say $C(x, y) = xy$.
The next one to implement is $B$. Only one input combination yields a result of 1, namely $B(0, 1) = 1$. There is no single boolean operation to do this, but we can define $B(x, y) = (\neg x) \wedge y$. It is a little more complex than $C$, but still doable using boolean operations! Anything that can be done using boolean operations can also be done using logic gates.
In electrical engineering, the notation is often written as $\bar{x}y$, where the bar on top means negation.
The last one is $\oplus$. We notice that this function returns a value of 1 in two instances: $0 \oplus 1$ and $1 \oplus 0$. We can capture each instance with a conjunction, then use a disjunction to connect the two instances. This means $x \oplus y = ((\neg x) \wedge y) \vee (x \wedge (\neg y))$. In shorthand, the disjunction is represented by a plus symbol, and so $x \oplus y = \bar{x}y + x\bar{y}$.
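Since the formulas are tiny, we can verify them exhaustively. A minimal Python sketch (the use of &, |, and 1 - x for conjunction, disjunction, and negation is my choice):

```python
# Exhaustively verify the boolean formulas against the truth tables.
for x in (0, 1):
    for y in (0, 1):
        nx, ny = 1 - x, 1 - y                                # negated bits
        assert (x & y) == (1 if (x, y) == (1, 1) else 0)     # C(x, y) = xy
        assert (nx & y) == (1 if (x, y) == (0, 1) else 0)    # B(x, y) = x̄y
        assert ((nx & y) | (x & ny)) == (x + y) % 2          # x ⊕ y = x̄y + xȳ
print("all three boolean formulas verified")
```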
Let us consider the following addition. Letters are used to represent individual bits:

        s
        m n
    +   o p
    -------
      c a b

In this example, “m n” is a two-bit number, and “o p” is another two-bit number. “a b” is the sum. “c” is a leftover carry because we want the result to use the same number of bits as the numbers being added. “s” is the carry between the two columns, and we will use “q” and “r” for the intermediate (carry-less) sums of the columns “m+o” and “n+p” respectively.
Immediately, we know that $r = n \oplus p$ because it is the result of adding those two bits. We also know that $b = r$ because the least significant column has no carry from another column.
Another easy one is $q = m \oplus o$, the intermediate sum of the other column. “s” is the carry of the addition of the least significant column, which means $s = C(n, p)$. The result bit “a” is then $a = q \oplus s$.
Now we have the last carry “c” to resolve. This time it gets a little more tricky because a carry can result from “m+o” or from “q+s”. As a result, $c = C(m, o) + C(q, s)$.
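We can brute-force all 16 input combinations to confirm these five equations. A sketch, assuming nothing beyond the formulas just derived:

```python
from itertools import product

def C(x, y):                 # the carry function: C(x, y) = xy
    return x & y

# Try every combination of the 2-bit numbers "m n" and "o p".
for m, n, o, p in product((0, 1), repeat=4):
    r = n ^ p                # intermediate sum of the least significant column
    s = C(n, p)              # carry out of the least significant column
    b = r                    # b = r, since no carry enters that column
    q = m ^ o                # intermediate sum of the other column
    a = q ^ s                # a = q ⊕ s
    c = C(m, o) | C(q, s)    # c = C(m, o) + C(q, s)
    assert c * 4 + a * 2 + b == (m * 2 + n) + (o * 2 + p)
print("two-bit addition verified for all 16 combinations")
```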
Let $x_i$ denote bit $i$ (least significant bit is bit 0) of a multi-bit value $x$, and $y_i$ denote bit $i$ of $y$. Let $u_i$ be bit $i$ of the intermediate sum (“q” and “r” in the previous example), and let $c_i$ be the carry into bit $i$ (“c”, “s” and 0 in the previous example). Let $s_i$ be bit $i$ of the sum.
In general, we can express $u_i = x_i \oplus y_i$, $c_{i+1} = C(x_i, y_i) + C(u_i, c_i)$, and $s_i = u_i \oplus c_i$. We can assume $c_0 = 0$.
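Translated literally into Python, the three equations give a working (if slow) multi-bit adder. This is a sketch with names of my own choosing, not hardware:

```python
from itertools import product

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists, index 0 = least significant bit."""
    c = 0                                  # we can assume c_0 = 0
    s_bits = []
    for xi, yi in zip(x_bits, y_bits):
        u = xi ^ yi                        # u_i = x_i ⊕ y_i
        s_bits.append(u ^ c)               # s_i = u_i ⊕ c_i
        c = (xi & yi) | (u & c)            # c_{i+1} = C(x_i, y_i) + C(u_i, c_i)
    return s_bits, c                       # sum bits and the leftover carry

def value(bits):
    # Interpret a bit list (least significant first) as an unsigned integer.
    return sum(bit << i for i, bit in enumerate(bits))

# Exhaustive check over all pairs of 3-bit numbers.
for bits in product((0, 1), repeat=6):
    x, y = list(bits[:3]), list(bits[3:])
    s, c = ripple_add(x, y)
    assert value(s) + (c << 3) == value(x) + value(y)
print("ripple-carry adder verified for all 3-bit pairs")
```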
This is a rather simple way to add numbers of multiple bits. However, there is one major drawback to this approach.
Note how the final answer (the sum) is $s_i = u_i \oplus c_i$. $u_i$ is not a problem because it can be computed using just the supplied numbers. $c_{i+1} = C(x_i, y_i) + C(u_i, c_i)$, however, depends on $c_i$. This means that the carry of the previous digit is needed.
Logic gates do not change their output instantaneously. There is a propagation delay involved. This is the amount of time it takes, after the inputs stabilize, for the output to become valid. Let us examine the propagation delay of an adder like this.
$u_i = x_i \oplus y_i$ involves just one gate. However, $c_{i+1} = C(x_i, y_i) + C(u_i, c_i)$. We don’t have to count the delay of $C(x_i, y_i)$ or even $u_i$ because these are computed from the given numbers. There is an or-gate involved in the computation of $c_{i+1}$, in addition to the and-gate used to compute $C(u_i, c_i)$.
This means that each bit of a multi-bit adder adds a delay of two gates. For a 64-bit integer, the total propagation delay is 128 times the propagation delay of one gate.
Generally, this method of rippling the carry linearly is not considered acceptable in commercial processors.
Carry lookahead is a method to “cut the ripple”. Let us reexamine the equations. $c_{i+1} = C(x_i, y_i) + C(u_i, c_i)$. Expanding the functions (to boolean operations) results in $c_{i+1} = x_i y_i + u_i c_i$. Then we expand $u_i$, resulting in $c_{i+1} = x_i y_i + (\bar{x}_i y_i + x_i \bar{y}_i) c_i$.
At this point, we can simplify the equation a little bit: whenever $x_i y_i = 1$, the first term is already 1, so the parenthesized factor can be relaxed from $\bar{x}_i y_i + x_i \bar{y}_i$ to $x_i + y_i$ without changing the result. This gives $c_{i+1} = x_i y_i + (x_i + y_i) c_i$.
Now we examine some individual terms. $c_0$ is given either as 0 or as some input value, but in either case it has no equation.
$c_1 = x_0 y_0 + (x_0 + y_0) c_0$; that is an easy one!
At this point, it is handy to introduce the following definitions. Let $g_i = x_i y_i$ be the “generate term” and $p_i = x_i + y_i$ be the “propagate term”.
We can then rewrite $c_1 = g_0 + p_0 c_0$ and $c_2 = g_1 + p_1 c_1 = g_1 + p_1 g_0 + p_1 p_0 c_0$.
In general, $c_{i+1} = g_i + p_i c_i$. This is a recursive definition, so we can theorize that $c_{i+1} = \sum_{j=0}^{i} \left( \prod_{k=j+1}^{i} p_k \right) g_j + \left( \prod_{k=0}^{i} p_k \right) c_0$, where $\sum$ stands for repeated disjunction and $\prod$ for repeated conjunction. This can be proven by induction, but that is a topic in CISP440. Nonetheless, here is the proof:
It is easier to prove by looking at the base case first. With $i = 0$, the formula reads $c_1 = g_0 + p_0 c_0$, which is exactly the recursive definition applied once. As such, the base case holds.
Now on to the induction step. The induction step states: assume the formula is true when $i = n$. Now we need to prove the case when $i = n + 1$.
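Filling in the algebra (using only the recursive definition $c_{n+2} = g_{n+1} + p_{n+1} c_{n+1}$ and the induction hypothesis for $c_{n+1}$):

$$
\begin{aligned}
c_{n+2} &= g_{n+1} + p_{n+1} c_{n+1} \\
&= g_{n+1} + p_{n+1} \left( \sum_{j=0}^{n} \left( \prod_{k=j+1}^{n} p_k \right) g_j + \left( \prod_{k=0}^{n} p_k \right) c_0 \right) \\
&= g_{n+1} + \sum_{j=0}^{n} \left( \prod_{k=j+1}^{n+1} p_k \right) g_j + \left( \prod_{k=0}^{n+1} p_k \right) c_0 \\
&= \sum_{j=0}^{n+1} \left( \prod_{k=j+1}^{n+1} p_k \right) g_j + \left( \prod_{k=0}^{n+1} p_k \right) c_0.
\end{aligned}
$$

The last step absorbs $g_{n+1}$ as the $j = n + 1$ term of the summation, whose product $\prod_{k=n+2}^{n+1} p_k$ is empty and therefore 1. This is exactly the theorized formula for $i = n + 1$, completing the induction.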
The bottom line is that we now have a fast way to come up with all the $c_i$ terms. Recall that $s_i = u_i \oplus c_i$ is easy to compute. This means the sum itself is now much faster to compute.
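As an illustration, the following sketch computes every carry directly from the closed form and checks the result against the rippled recursion. All names are mine, and real hardware would evaluate the and-terms and the or-layer in parallel rather than with Python loops:

```python
from itertools import product

def lookahead_carries(x_bits, y_bits, c0=0):
    """Every carry c_1 .. c_n straight from the closed form, without rippling."""
    g = [xi & yi for xi, yi in zip(x_bits, y_bits)]   # generate terms g_i = x_i y_i
    p = [xi | yi for xi, yi in zip(x_bits, y_bits)]   # propagate terms p_i = x_i + y_i
    carries = [c0]
    for i in range(len(g)):
        c = 0
        for j in range(i + 1):            # one and-term (p_i ... p_{j+1}) g_j per j
            term = g[j]
            for k in range(j + 1, i + 1):
                term &= p[k]
            c |= term                     # the or-layer combines the and-terms
        term = c0                         # the final term (p_i ... p_0) c_0
        for k in range(i + 1):
            term &= p[k]
        carries.append(c | term)
    return carries

# Sanity check against the recursion c_{i+1} = g_i + p_i c_i, for all 3-bit pairs.
for bits in product((0, 1), repeat=6):
    x, y = list(bits[:3]), list(bits[3:])
    g = [a & b for a, b in zip(x, y)]
    p = [a | b for a, b in zip(x, y)]
    rippled = [0]
    for gi, pi in zip(g, p):
        rippled.append(gi | (pi & rippled[-1]))
    assert lookahead_carries(x, y) == rippled
print("closed-form carries agree with rippled carries")
```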
How much faster?
It depends on the actual gates available. If we can have gates that take as many inputs as we need, then the $c_i$ terms can all be computed using one layer of and-gates followed by a layer of or-gates, a total of two gate propagations. The $s_i$ terms still require the $u_i$ and the $c_i$. Note that $\oplus$ is just exclusive-or, which can be done by a gate. This means the sum can be computed in 6 gate propagations.
In reality, there is a limit to the number of inputs a gate can take. By structuring the gates in a hierarchy, we can still manage to make the propagation delay proportional to the logarithm of the number of bits. This is still far better than the linear carry-rippling method!
Subtraction works the same way in the “ripple” method, except the borrow is rippled instead of the carry. Let us concentrate on the “lookahead” method here.
First, we have to relate the various parts of a subtraction. Assuming $x_i$ and $y_i$ are bit $i$ of two numbers $x$ and $y$, and we are computing $x - y$, we use $u_i$ to represent the intermediate difference of bit $i$. $b_i$ is the borrow into bit $i$. Last, let $d_i$ be bit $i$ of the difference.
$u_i = x_i \oplus y_i$ is the same as in the adder. $d_i = u_i \oplus b_i$ is also similar to addition. The main difference is the borrow: $b_{i+1} = B(x_i, y_i) + B(u_i, b_i)$. This equation expands differently: $b_{i+1} = \bar{x}_i y_i + \bar{u}_i b_i$, and because $\bar{u}_i = x_i y_i + \bar{x}_i \bar{y}_i$, it simplifies to $b_{i+1} = \bar{x}_i y_i + (\bar{x}_i + y_i) b_i$.
This means that we can redefine $g_i = \bar{x}_i y_i$ and $p_i = \bar{x}_i + y_i$, then rewrite $b_{i+1} = g_i + p_i b_i$. There is no need to prove the closed form again because the proof did not rely on the definitions of $g_i$ and $p_i$!
Once we have all the $b_i$ figured out, $d_i = u_i \oplus b_i$ and we are done!
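Putting the redefined terms to work, here is a hedged sketch of a borrow-lookahead subtractor; the bit-list representation and names are again my own:

```python
from itertools import product

def lookahead_subtract(x_bits, y_bits):
    """Compute x - y on bit lists (least significant bit first)."""
    n = len(x_bits)
    g = [(1 - xi) & yi for xi, yi in zip(x_bits, y_bits)]   # g_i = x̄_i y_i
    p = [(1 - xi) | yi for xi, yi in zip(x_bits, y_bits)]   # p_i = x̄_i + y_i
    b = [0]                                                 # b_0 = 0
    for gi, pi in zip(g, p):
        b.append(gi | (pi & b[-1]))                         # b_{i+1} = g_i + p_i b_i
    d = [xi ^ yi ^ bi for xi, yi, bi in zip(x_bits, y_bits, b)]   # d_i = u_i ⊕ b_i
    return d, b[n]                                          # difference, final borrow

def value(bits):
    return sum(bit << i for i, bit in enumerate(bits))

# Exhaustive check: result is x - y modulo 2^3; borrow set exactly when x < y.
for bits in product((0, 1), repeat=6):
    x, y = list(bits[:3]), list(bits[3:])
    d, borrow_out = lookahead_subtract(x, y)
    assert value(d) == (value(x) - value(y)) % 8
    assert borrow_out == (1 if value(x) < value(y) else 0)
print("borrow-lookahead subtraction verified for all 3-bit pairs")
```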
After an n-bit computation involving $x$ and $y$, with a result of $r$, we define some “flags” as follows:

- The zero flag $Z$ is 1 if and only if every bit of $r$ is 0.
- The sign flag $S$ is $r_{n-1}$, the most significant bit of the result.
- The carry flag $C$ is the leftover carry $c_n$ (or, after a subtraction, the leftover borrow $b_n$).
- The overflow flag $V$ is $c_n \oplus c_{n-1}$; it indicates that the result is wrong when the operands are interpreted as signed numbers.
These flags indicate important aspects of a computation, especially after a subtraction. We will revisit the flags when we need to implement instructions that make decisions, as these flags are the only basic mechanisms (at the lowest level) for any code to make decisions!
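To see the flags in action, here is a sketch computing one typical flag set after an n-bit subtraction. The names Z, S, C, and V follow common convention; real processors differ in details, so treat these definitions as illustrative assumptions:

```python
def flags_after_subtract(x, y, n=8):
    """A common set of status flags after computing x - y on n bits."""
    r = (x - y) % (2 ** n)               # the n-bit result, wrapped around
    z = 1 if r == 0 else 0               # Z: every bit of the result is 0
    s = (r >> (n - 1)) & 1               # S: most significant bit of the result
    c = 1 if x < y else 0                # C: the leftover borrow b_n
    sx, sy = (x >> (n - 1)) & 1, (y >> (n - 1)) & 1
    v = 1 if (sx != sy and s != sx) else 0   # V: result wrong as a signed number
    return {"Z": z, "S": s, "C": c, "V": v}

print(flags_after_subtract(5, 5))     # equal operands: Z = 1
print(flags_after_subtract(3, 5))     # 3 < 5: borrow set, result looks negative
print(flags_after_subtract(128, 1))   # -128 - 1 overflows 8 signed bits: V = 1
```

A “branch if equal” instruction, for example, simply tests Z after a comparison (a subtraction performed only for its flags).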