Module 0255: Binary addition and subtraction as boolean operations

Tak Auyeung, Ph.D.

January 26, 2017

1 About this module

2 Arithmetic operations

Recall that in module 0144 we explored the addition and subtraction of binary numbers. While we can certainly appreciate the simple nature of binary addition and subtraction, how do we perform these operations with a computer?

The key is to look at the result of addition/subtraction and the carry/borrow as two different functions. Let us review the addition rules:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (a result of 0 with a carry of 1)

Likewise, we can look at subtraction similarly:

0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1 (a result of 1 with a borrow of 1)

To make the notation more convenient, let us define R(x,y) to be the result of adding or subtracting two single-digit binary numbers x and y. This means that

R(0,0) = 0, R(0,1) = 1, R(1,0) = 1, R(1,1) = 0

We can also define C(x,y) to represent the carry of adding two single-bit numbers x and y. The definition is as follows:

C(0,0) = 0, C(0,1) = 0, C(1,0) = 0, C(1,1) = 1

Last, we define B(x,y) to represent the borrow of subtracting a single-bit number y from another single-bit number x:

B(0,0) = 0, B(0,1) = 1, B(1,0) = 0, B(1,1) = 0

Of course, so far, we have not made much progress toward implementing addition or subtraction using transistors.

3 Recognizing boolean operations

Because the functions R, C and B all take zero and one as parameters, and they only return zero and one as results, we suspect that they can be implemented using boolean operations, treating 0 as false and 1 as true.

Let us pick the easiest one, C. C(1,1) = 1, whereas all other inputs have a result of 0. Doesn’t this remind you of conjunction? Indeed, C(x,y) = x ∧ y. In electrical engineering, the conjunction operator is often omitted, so we can write C(x,y) = xy.

The next one to implement is B. Only one input combination yields a result of 1, namely B(0,1) = 1. There is no single boolean operation that does this, but we can define B(x,y) = (¬x) ∧ y. It is a little more complex than C, but still doable using boolean operations! Anything that can be done using boolean operations can also be done using logic gates.

In electrical engineering, this is often written as B(x,y) = x̄y, where the bar on top means negation.

The last one is R(x,y). We notice that this function returns a value of 1 in two instances: R(0,1) and R(1,0). We can capture each instance with a conjunction, then use a disjunction to connect the two instances. This means R(x,y) = ((¬x) ∧ y) ∨ (x ∧ (¬y)). In shorthand, the disjunction is represented by a plus symbol, so R(x,y) = x̄y + xȳ.
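These three functions can be checked exhaustively with a short Python sketch. The function names mirror the text; the code is an illustration, not part of any library:

```python
# Single-bit result, carry, and borrow functions from the text,
# written with boolean operations on 0/1 values.

def R(x, y):
    """Result bit: ((not x) and y) or (x and (not y)), i.e. exclusive-or."""
    return ((1 - x) & y) | (x & (1 - y))

def C(x, y):
    """Carry of x + y: conjunction."""
    return x & y

def B(x, y):
    """Borrow of x - y: (not x) and y."""
    return (1 - x) & y

# Check all four input combinations against the arithmetic definitions.
for x in (0, 1):
    for y in (0, 1):
        assert R(x, y) == (x + y) % 2               # result bit of addition
        assert C(x, y) == (1 if x + y > 1 else 0)   # carry of addition
        assert B(x, y) == (1 if x - y < 0 else 0)   # borrow of subtraction
```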

4 But that’s only half the operation!

Let us consider the following addition. Letters are used to represent individual bits:

    m n
+   o p
-------
    q r    (intermediate sum bits)
+ c s 0    (carry bits)
-------
    a b


In this example, “m n” is a two-bit number, and “o p” is another two-bit number. “a b” is the sum. “c” is a leftover carry because we want the result to use the same number of bits as the numbers being added.

Immediately, we know that r = R(n,p) because it is the result of adding those two bits. We also know that b = r = R(n,p) because the least significant column has no carry coming in from another column.

Another easy one is q = R(m,o). “s” is the carry of the addition of the least significant column, which means s = C(n,p). The result bit “a” is then a = R(q,s) = R(R(m,o),C(n,p)).

Now we have the last carry “c” to resolve. This one is a little trickier because a carry can result from either “m+o” or “q+s”. As a result, c = C(m,o) + C(q,s) = C(m,o) + C(R(m,o),C(n,p)).
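The whole two-bit addition can then be composed from R and C. Here is a minimal Python sketch (add2 is an illustrative name, not from the module):

```python
def R(x, y): return x ^ y   # result bit (exclusive-or)
def C(x, y): return x & y   # carry bit (conjunction)

def add2(m, n, o, p):
    """Add the two-bit numbers "m n" and "o p"; return (c, a, b)."""
    r = R(n, p)               # result of the least significant column
    b = r                     # no carry enters the least significant column
    q = R(m, o)               # intermediate result of the other column
    s = C(n, p)               # carry out of the least significant column
    a = R(q, s)               # a = R(R(m,o), C(n,p))
    c = C(m, o) | C(q, s)     # leftover carry: C(m,o) + C(q,s)
    return c, a, b
```

For example, 11 + 01 (3 + 1) gives add2(1, 1, 0, 1) == (1, 0, 0), which reads as 100 in binary.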

5 Carry ripple

Let xi denote bit i (the least significant bit is bit 0) of a multi-bit value x, and yi denote bit i of y. Let wi be bit i of the intermediate sum (“q” and “r” in the previous example), and let ci be the carry into bit i (“c”, “s” and 0 in the previous example). Let si be bit i of the sum.

In general, we can express wi = R(xi,yi), ci+1 = C(xi,yi) + C(wi,ci), and si = R(wi,ci). We can assume c0 = 0.
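These equations translate directly into a loop. Below is a Python sketch of a ripple-carry adder (ripple_add is an illustrative name; bit lists are least-significant-bit first):

```python
def ripple_add(xbits, ybits):
    """Add two equal-length bit lists (index 0 = least significant bit)."""
    c = 0                          # c0 = 0
    sbits = []
    for xi, yi in zip(xbits, ybits):
        wi = xi ^ yi               # wi = R(xi, yi)
        sbits.append(wi ^ c)       # si = R(wi, ci)
        c = (xi & yi) | (wi & c)   # ci+1 = C(xi, yi) + C(wi, ci)
    return sbits, c                # sum bits and the final carry out

# 6 + 7 = 13: [0,1,1] + [1,1,1] -> ([1,0,1], 1), i.e. 1101 in binary
```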

This is a rather simple way to add numbers of multiple bits. However, there is one major drawback to this approach.

Note how the final answer (the sum) is si = R(wi,ci). wi is not a problem because it can be computed using just the supplied numbers. However, ci = C(xi-1,yi-1) + C(wi-1,ci-1). This means that the carry of the previous digit is needed.

Logic gates do not change their outputs instantaneously; there is a propagation delay involved. This is the amount of time it takes for the output to become valid after the inputs stabilize. Let us examine the propagation delay of an adder like this.

C(x,y) = xy involves just one gate. However, ci+1 = C(xi,yi) + C(wi,ci). We do not have to count the delay of C(xi,yi) or even wi = R(xi,yi) because these are computed directly from the given numbers. There is an OR gate involved in the computation of ci+1 in addition to the AND gate used to compute C(wi,ci).

This means that each bit of a multi-bit adder contributes a delay of two gates. For a 64-bit integer, the total propagation delay is 128 times the propagation delay of one gate.

Generally, this method of rippling carry linearly is not considered acceptable in commercial processors.

6 Carry lookahead

Carry lookahead is a method to “cut the ripple”. Let us reexamine the equations: ci+1 = C(xi,yi) + C(wi,ci). Expanding the functions into boolean operations results in ci+1 = xiyi + wici. Then we expand wi, resulting in ci+1 = xiyi + (x̄iyi + xiȳi)ci.

At this point, we can simplify the equation a little bit.

ci+1 = xiyi + (x̄iyi + xiȳi)ci
     = xiyi + x̄iyici + xiȳici (distribute)
     = xiyi + xiyici + x̄iyici + xiyici + xiȳici (introduce redundant terms)
     = xiyi + (xi + x̄i)yici + xi(yi + ȳi)ci (factoring)
     = xiyi + yici + xici (simplification)
     = xiyi + (xi + yi)ci (factoring)

Now we examine some individual ci terms. c0 is given either as 0 or as some input value; in either case, it has no equation.

c1 = x0y0 + (x0 + y0)c0, and that is an easy one!

c2 = x1y1 + (x1 + y1)c1
   = x1y1 + (x1 + y1)(x0y0 + (x0 + y0)c0) (expand)
   = x1y1 + (x1 + y1)x0y0 + (x1 + y1)(x0 + y0)c0 (distribute)

At this point, it is handy to introduce the following definitions. Let gi = xiyi be the “generate term” and pi = (xi + yi) be the “propagate term”.

We can then rewrite c1 = g0 + p0c0 and c2 = g1 + p1g0 + p1p0c0.
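As a sanity check, these rewritten forms can be verified against ordinary arithmetic. This illustrative Python sketch checks the c1 and c2 formulas for all 32 combinations of the four input bits and c0:

```python
# Exhaustively verify that c1 = g0 + p0 c0 and c2 = g1 + p1 g0 + p1 p0 c0
# match the true carries of 2-bit addition.
from itertools import product

for x0, x1, y0, y1, c0 in product((0, 1), repeat=5):
    g0, p0 = x0 & y0, x0 | y0   # generate and propagate terms for bit 0
    g1, p1 = x1 & y1, x1 | y1   # generate and propagate terms for bit 1
    c1 = g0 | (p0 & c0)
    c2 = g1 | (p1 & g0) | (p1 & p0 & c0)
    # The true carries: carry out of bit 0, and carry out of the 2-bit sum.
    assert c1 == (1 if x0 + y0 + c0 >= 2 else 0)
    assert c2 == (1 if x0 + 2 * x1 + y0 + 2 * y1 + c0 >= 4 else 0)
```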

In general, ci+1 = gi + pici. This is a recursive definition, so we can theorize that ci+1 = Σ_{j=0}^{i} gj (Π_{k=j+1}^{i} pk) + c0 Π_{j=0}^{i} pj. This can be proven by induction, a topic in CISP440. Nonetheless, here is the proof:

It is easier to prove the equivalent form ci = Σ_{j=0}^{i-1} gj (Π_{k=j+1}^{i-1} pk) + c0 Π_{j=0}^{i-1} pj. For the base case i = 0, the sum is empty and the product is empty (an empty product equals 1), so the equation reduces to c0 = c0.

Now on to the induction step. Assume ci = Σ_{j=0}^{i-1} gj (Π_{k=j+1}^{i-1} pk) + c0 Π_{j=0}^{i-1} pj is true when i = n. We now need to prove the case when i = n + 1.

cn+1 = xnyn + (xn + yn)cn (by definition)
     = gn + pncn (by definition of the g and p terms)
     = gn + pn(Σ_{j=0}^{n-1} gj (Π_{k=j+1}^{n-1} pk) + c0 Π_{j=0}^{n-1} pj) (induction assumption)
     = gn + pn Σ_{j=0}^{n-1} gj (Π_{k=j+1}^{n-1} pk) + pnc0 Π_{j=0}^{n-1} pj (distribution)
     = gn + Σ_{j=0}^{n-1} gj (Π_{k=j+1}^{n} pk) + c0 Π_{j=0}^{n} pj (absorb pn into the products)
     = Σ_{j=0}^{n} gj (Π_{k=j+1}^{n} pk) + c0 Π_{j=0}^{n} pj (absorb the standalone gn, whose product Π_{k=n+1}^{n} pk is empty and equals 1)

The bottom line is that we now have a fast way to compute all the ci terms. Recall that wi is easy to compute. This means the sum itself is now much faster to compute.
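The closed form can be sketched in Python as follows (lookahead_carries is an illustrative name). In software this still uses loops, but the point for hardware is that each ci now depends only on the input bits and c0, so all of them can be computed in parallel:

```python
def lookahead_carries(xbits, ybits, c0=0):
    """Compute [c0, c1, ..., cn] directly from generate/propagate terms."""
    g = [x & y for x, y in zip(xbits, ybits)]   # gi = xi yi
    p = [x | y for x, y in zip(xbits, ybits)]   # pi = xi + yi
    carries = [c0]
    for i in range(len(xbits)):
        # ci+1 = sum over j of gj * (product of pk, k = j+1..i)
        ci1 = 0
        for j in range(i + 1):
            term = g[j]
            for k in range(j + 1, i + 1):
                term &= p[k]
            ci1 |= term
        # plus c0 times the product of all pj, j = 0..i
        allp = c0
        for j in range(i + 1):
            allp &= p[j]
        carries.append(ci1 | allp)
    return carries

# For 6 + 7 ([0,1,1] + [1,1,1]) the carries come out as [0, 0, 1, 1],
# matching what the ripple method produces one bit at a time.
```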

How much faster?

It depends on the actual gates available. If we can have gates with as many inputs as we need, then the ci terms can all be computed using one layer of AND gates followed by a layer of OR gates, a total of two gate delays. The si terms still require wi and R(wi,ci). Note that R(x,y) is just exclusive-or, which can be done by a single gate. This means the sum can be computed in 6 gate propagations.

In reality, there is a limit to the number of inputs a gate can have. By structuring gates in a hierarchy, we can still make the propagation delay proportional to the logarithm of the number of bits. This is still far better than the linear carry-rippling method!

7 What about subtraction?

Subtraction works the same way in the “ripple” method, except that the borrow is rippled instead of the carry. Let us concentrate on the “lookahead” method here.

First, we have to relate the various parts of a subtraction. Assuming xi and yi are bit i of two numbers x and y, we use wi to represent the intermediate difference of bit i. bi is the borrow into bit i. Last, let di be bit i of the difference.

wi = R(xi,yi) is the same as an adder. di = R(wi,bi) is also similar to addition. The main difference is bi+1 = B(xi,yi) + B(wi,bi). This equation expands differently:

bi+1 = B(xi,yi) + B(wi,bi)
     = x̄iyi + w̄ibi
     = x̄iyi + ¬(x̄iyi + xiȳi)bi (expand wi)
     = x̄iyi + ¬(x̄iyi)¬(xiȳi)bi (De Morgan)
     = x̄iyi + (xi + ȳi)(x̄i + yi)bi (De Morgan again)
     = x̄iyi + (xix̄i + xiyi + x̄iȳi + ȳiyi)bi (expand)
     = x̄iyi + (xiyi + x̄iȳi)bi (drop the contradictory terms)
     = x̄iyi + xiyibi + x̄iȳibi (distribute)
     = x̄iyi + x̄iyibi + xiyibi + x̄iyibi + x̄iȳibi (introduce redundant terms)
     = x̄iyi + (x̄i + xi)yibi + x̄i(yi + ȳi)bi (factoring)
     = x̄iyi + yibi + x̄ibi (simplification)
     = x̄iyi + (x̄i + yi)bi (factoring)

This means that we can redefine gi = x̄iyi and pi = x̄i + yi, then rewrite bi = Σ_{j=0}^{i-1} gj (Π_{k=j+1}^{i-1} pk) + b0 Π_{j=0}^{i-1} pj. There is no need to prove this again because the proof did not rely on the definitions of gi and pi!

Once we have bi figured out, di = R(wi,bi) and we are done!
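The whole lookahead subtractor can be sketched in Python with the redefined terms gi = x̄iyi and pi = x̄i + yi (the function names are illustrative; bit lists are least-significant-bit first):

```python
def lookahead_borrows(xbits, ybits, b0=0):
    """Compute [b0, b1, ..., bn] for xbits - ybits using borrow lookahead."""
    g = [(1 - x) & y for x, y in zip(xbits, ybits)]   # gi = (not xi) yi
    p = [(1 - x) | y for x, y in zip(xbits, ybits)]   # pi = (not xi) + yi
    borrows = [b0]
    for i in range(len(xbits)):
        bi1 = 0
        for j in range(i + 1):
            term = g[j]
            for k in range(j + 1, i + 1):
                term &= p[k]
            bi1 |= term
        allp = b0
        for j in range(i + 1):
            allp &= p[j]
        borrows.append(bi1 | allp)
    return borrows

def subtract(xbits, ybits):
    """di = R(wi, bi) where wi = R(xi, yi)."""
    b = lookahead_borrows(xbits, ybits)
    return [(x ^ y) ^ bi for x, y, bi in zip(xbits, ybits, b)]
```

For example, 110 - 011 (6 - 3) yields subtract([0, 1, 1], [1, 1, 0]) == [1, 1, 0], which reads as 011 = 3.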

8 Defining some “flags”

After an n-bit computation involving x and y with a result of r, we define some “flags” as follows:

These flags indicate important aspects of a computation, especially after a subtraction. We will revisit the flags when we need to implement instructions that make decisions, as these flags are the only basic mechanisms (at the lowest level) for any code to make decisions!