In programming, functions and modules are simple ideas but the systems built from them are often complex. The same is true in digital electronics: logic gates like AND and OR are simple but, as in software, systems built from them are unavoidably complex. To understand the large, complex systems, we begin with their simple components.
Like any engineering discipline, electrical engineering has its own technical terminology, applied mathematics, conventions for diagrams, and so on. In this basic introduction, we dispense with most of those details (and all of the formal mathematics), instead relying on familiar ideas from the world of programming. For accessibility, many of the ideas here will be presented both in code and in equivalent digital circuit diagrams, which will be built up from their simplest forms.
We'll begin with logic gates, the building blocks of digital systems, expanding until we have a circuit that adds two four-bit numbers.
Logic Gates
Logic gates are the electrical equivalent of logical operators like AND in our programming languages.
The most familiar are AND, OR, and NOT (&&, ||, and ! in many languages). There are others as well, some of which we'll get to later. If "logic gate" is off-putting, it can always be read as synonymous with "logical operator" in this article: these are functions of two Boolean arguments, like && in a language like JavaScript.
We use truth tables to show the inputs and outputs of logic gates and logical operators. The table below describes the AND operator present in every programming language. For example, it shows that true AND false == false. To save space, we'll write 0 and 1 instead of false and true, but the meaning is the same for our purposes. (0 and 1 are also the convention in electrical engineering.)
x | y | x AND y |
---|---|---|
0 | 0 | 0 |
0 | 1 | 0 |
1 | 0 | 0 |
1 | 1 | 1 |
We can write our own implementation of AND without using any of the language's built-in logical operators.
Here's a naive version in Python. All code examples here are shown as interactive REPL sessions, so Python statements are prefixed by >>> and ..., which aren't part of Python's syntax.
>>> def AND(x, y):
... if x == 0 and y == 0: return 0
... if x == 0 and y == 1: return 0
... if x == 1 and y == 0: return 0
... if x == 1 and y == 1: return 1
To write OR from scratch, we'd have to do the same thing, even though we already have AND. There's no way to write OR in terms of AND, nor AND in terms of OR. However, there are some universal gates, from which we can build any other gate. We'll get to those soon.
De Morgan's Laws
There are eight possible AND expressions with two variables, shown in the table below. It's not necessary to examine each one closely, but note that there are no possibilities left out: if we add a ! anywhere in one of these expressions, it will be equivalent to one of the other expressions. For example, one of the entries is x AND !y. If we add another ! to the y, making it x AND !!y, the !! will cancel and give us x AND y, which is already in the table.
AND expressions |
---|
x AND y |
!x AND y |
x AND !y |
!x AND !y |
!(x AND y) |
!(!x AND y) |
!(x AND !y) |
!(!x AND !y) |
(Incidentally, there are 8 entries because there are three positions where a ! can go. Each of those can have a ! or not, giving two possibilities per position. The positions can have a ! or not independently of the other positions, so we have 2*2*2 = 8 expressions in total.)
Some of these AND expressions are confusing – the last one, for example, with three NOTs. De Morgan's laws are two tools for transforming complex boolean expressions into simpler ones. We can also transform simple expressions into complex ones if we like.
Our table is repeated below with equivalent OR expressions included for each AND expression. Again, there's no need to exhaustively read it, but notice that simple expressions in one column correspond to complex expressions in the other. If this idea is new, try choosing one line of the table, assigning a 0 or 1 value to x and y, and working out the results of both the AND expression and the OR expression. They should be the same!
AND version | OR version |
---|---|
x AND y | !(!x OR !y) |
!x AND y | !(x OR !y) |
x AND !y | !(!x OR y) |
!x AND !y | !(x OR y) |
!(x AND y) | !x OR !y |
!(!x AND y) | x OR !y |
!(x AND !y) | !x OR y |
!(!x AND !y) | x OR y |
Memorizing this table would be difficult, but fortunately no one does that. Instead, there's a process to perform this transformation on any AND or OR expression. Here are the steps, using the second expression above, !x AND y, as an example.
- Negate both arguments to the AND or OR. This turns !x AND y into x AND !y.
- Negate the whole expression, adding or removing parens. This gives us !(x AND !y).
- Switch the AND to an OR or vice-versa. This gives us !(x OR !y).
Looking in the table, that's exactly what we see: !x AND y is equivalent to !(x OR !y).
That's all there is to De Morgan's laws: to get an equivalent expression, we negate the arguments, negate the expression, then flip the operator. (They're called "laws", plural, only because they work for both AND and OR.)
This is occasionally useful in programming. For example, we might end up with the expression below through a series of code changes, months apart, each sensible on its own.
!(!user.is_admin AND !user.is_moderator)
It's much easier to understand if we De Morgan it. With a little practice, it doesn't even require much thought: we notice that there's a lot of negation in the expression, then we mechanically apply the rules above.
user.is_admin OR user.is_moderator
In truth, this doesn't come up as often as computer science professors might wish, but the time saved more than justifies the small time investment to learn it.
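If the simplification looks too convenient, a quick brute-force check over all four input combinations confirms the equivalence. (This snippet is only a sanity check using Python's built-in operators, with plain Boolean variables standing in for the user attributes above; it isn't part of the circuit-building code.)
>>> for is_admin in [False, True]:
...     for is_moderator in [False, True]:
...         original = not (not is_admin and not is_moderator)
...         simplified = is_admin or is_moderator
...         print(original == simplified)
True
True
True
True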
Building gates from NAND
We can't use AND to build OR or vice-versa. However, we can use NAND to build any logic gate: AND, OR, NOT, etc. This is surprising at first, but there's no esoteric reason underneath; it falls out of the truth tables.
NAND is "NOT-AND", so it's true unless x and y are both true.
In programming terms, x NAND y
, means !(x AND y)
.
Here's a truth table for both AND and NAND.
x | y | x AND y | x NAND y |
---|---|---|---|
0 | 0 | 0 | 1 |
0 | 1 | 0 | 1 |
1 | 0 | 0 | 1 |
1 | 1 | 1 | 0 |
And, for completeness, the corresponding Python function:
>>> def NAND(x, y):
... if x == 0 and y == 0: return 1
... if x == 0 and y == 1: return 1
... if x == 1 and y == 0: return 1
... if x == 1 and y == 1: return 0
This function will serve as the basis for everything in this article; we'll eventually build up our four-bit adder using this primitive gate and no others.
First we build NOT, which can be defined as NAND(x, x). The truth table above shows that NAND(1, 1) == 0 and NAND(0, 0) == 1, which is exactly the behavior of NOT!
Here it is in Python:
>>> def NOT(x):
... return NAND(x, x)
>>> NOT(1)
0
>>> NOT(0)
1
NAND and AND are opposites, so we can NOT either one to get the other, allowing us to build AND using our two existing gates:
>>> def AND(x, y):
... return NOT(NAND(x, y))
# Check
>>> [AND(0, 0), AND(0, 1), AND(1, 0), AND(1, 1)]
[0, 0, 0, 1]
With NOT and AND, we can use De Morgan's laws to build OR. To implement x OR y, we can do !(!x AND !y), which is equivalent by De Morgan. (This was the last row of our table in the previous section.)
>>> def OR(x, y):
... return NOT(AND(NOT(x),
... NOT(y)))
# Check
>>> [OR(0, 0), OR(0, 1), OR(1, 0), OR(1, 1)]
[0, 1, 1, 1]
Note that only NAND and NOR are universal in this way, allowing us to build all other gates from them. To make this concrete, try to define NOT using only AND. It won't work!
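As a hint at why that fails: with only AND and a single input x, everything you can build collapses back to x itself, so there's no way to turn a 1 into a 0. A quick illustration (not a full proof), using the AND we defined above:
# AND(x, x) returns x unchanged, never NOT(x)
>>> [AND(0, 0), AND(1, 1)]
[0, 1]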
The Universality of NAND
Here's the OR function again:
>>> def OR(x, y):
... return NOT(AND(NOT(x),
... NOT(y)))
To make NAND's universality a bit more tangible, we'll translate the OR function into NAND calls. We expand x OR y piece by piece:
Expression | Notes |
---|---|
x OR y | |
!(!x AND !y) | By De Morgan's laws. |
!x NAND !y | We had a NOT-AND, which is exactly what NAND is! |
(x NAND x) NAND (y NAND y) | !x can be rewritten as x NAND x. |
Here it is written in Python:
>>> def OR(x, y):
... return NAND(NAND(x, x),
... NAND(y, y))
# Check
>>> [OR(0, 0), OR(0, 1), OR(1, 0), OR(1, 1)]
[0, 1, 1, 1]
Using Python is a fine way to understand logical operations. However, unfortunately for programmers, hardware isn't built in Python. Still, it's all ones and zeros, so digital electronics often map closely to our understanding of Boolean expressions in programming.
Here are four digital logic diagrams as an electrical engineer would draw them on paper. First, a buffer, which outputs whatever value is sent to it, and thus has no implications for us in this article. It's included only because it shows that a little circle on the right side of a gate indicates negation: the NOT symbol is a buffer symbol with the little circle added. Next comes AND, then NAND, which is drawn as an AND plus the negation circle. Each symbol includes labels with programmer-friendly boolean expressions for the outputs.
Things get more interesting when we combine gates.
Here's a diagram for our Python OR function above. It's a direct translation of the Python expression NAND(NAND(x, x), NAND(y, y)). Each NAND gate has two inputs and one output, corresponding to the NAND function's two arguments and one return value. Some intermediate values in the diagram are also called out for clarity.
(In English: the input x is sent to both inputs of NAND1, outputting !x. The input y is sent to both inputs of NAND2, outputting !y. Those two outputs are NANDed together, giving !(!x AND !y), which is x OR y.)
Translating logical operators like OR into NAND gates is strange at first, but there are reasons to do it in the real world. For circuits built out of individual logic gate chips, it's easier and cheaper to buy many of one type of chip than to buy smaller numbers of different chips. In many situations, NAND is also electrically simpler than other gates, requiring fewer transistors. Finally, complex digital logic circuits usually go through an automated simplification step anyway, so that step may as well target one type of universal gate.
To make this a bit more concrete, take a look at this hobbyist computer built out of 352 NAND gates.
It has one minor complication: many of those gates have more than two inputs.
But that's an extension of the two-input NAND gates we've already seen, just as x AND y AND z is an extension of the more basic x AND y.
For example, the "133" row in the NAND gate usage table on that page refers to the Texas Instruments SN74ALS133 chip, which is a 13-input NAND gate.
As another real-world example, the Apollo Guidance Computer was built from 2,800 three-input NOR gates, which are universal in the same way that NAND is. (A useful exercise: write a naive NOR (NOT-OR) function. Use it to build AND, OR, and NOT. The process will be similar to our NAND process, with slight differences.)
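If you'd like a head start on that exercise, here's one possible shape for the naive NOR function and a NOT built from it, mirroring our NAND versions above; building AND and OR is left to you. (The name NOT_FROM_NOR is ours, chosen so it doesn't clobber the NOT we already defined.)
>>> def NOR(x, y):
...     if x == 0 and y == 0: return 1
...     if x == 0 and y == 1: return 0
...     if x == 1 and y == 0: return 0
...     if x == 1 and y == 1: return 0
>>> def NOT_FROM_NOR(x):
...     return NOR(x, x)
# Check
>>> [NOT_FROM_NOR(0), NOT_FROM_NOR(1)]
[1, 0]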
Adding numbers in computers
Our goal is to build a four-bit adder. Its inputs are two four-bit numbers, so anything from 0 to 15 in base 10. Its output is another four-bit number. We'll approach it in three steps, following a classical progression used to teach digital electronics since the days when a computer took up a whole room. First we build a "half adder" that can't do much on its own, then a "full adder" that does a single column of addition, then finally the four-bit adder.
In primary school, we learn to do multi-digit addition by writing the numbers in columns:
33
+ 49
----
82
When a column adds up to 10 or more, we carry the one into the next column:
1 <- carried 1 because 3 + 9 >= 10 in the right column
33
+ 49
----
82
Computers do addition in essentially the same way, adding up columns one at a time. The numbers are in binary, though, so each column is a zero or a one. Here's the binary version of 33 + 49 == 82, the same example as above:
1 1 <- carried ones
0100001
+ 0110001
---------
1010010
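We can double-check that binary arithmetic with Python's built-in bin function. This has nothing to do with the adder we're about to build; it's only a quick confirmation of the sum above.
>>> bin(33), bin(49)
('0b100001', '0b110001')
>>> bin(33 + 49)
'0b1010010'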
In base 10, base 2, or any other base, adding two digits gives us both a sum and a carried value. For each column, we need to produce a sum digit plus a carry. We also need to consider the previous column's carry, adding it into the current column's sum.
Putting that together, each column has three inputs: the two digits being added, plus the previous column's carry; and it has two outputs: the sum digit plus a new carry. These three-input-two-output columns are full adders: they can do everything required for one column of binary addition. To get there, we first build a simpler device called a half adder that only does half of the work.
Half adders
A half adder has two inputs (the two bits to be added) and two outputs (the sum and the carry).
Here's our target truth table for a half adder computing x + y.
inputs (x and y) | carry | sum |
---|---|---|
0 + 0 | 0 | 0 |
0 + 1 | 0 | 1 |
1 + 0 | 0 | 1 |
1 + 1 | 1 | 0 |
The sum column is 1 when exactly one input is 1.
This is exclusive or, another logic gate. We don't use it often, but most programming languages support it with the ^ operator, including Python. Here are the four possible binary inputs to Python's exclusive or. Notice that they exactly match the sum column in the truth table above.
>>> [0^0, 0^1, 1^0, 1^1]
[0, 1, 1, 0]
The Python function below is an XOR gate built from our existing gates, all of which are ultimately built from NAND. In English, this code reads: "XOR is true when at least one of the bits is true, but they're not both true."
>>> def XOR(x, y):
... return AND(OR(x, y),
... NOT(AND(x, y)))
# Check
>>> [XOR(0, 0), XOR(0, 1), XOR(1, 0), XOR(1, 1)]
[0, 1, 1, 0]
We can also represent this as a circuit diagram. Every logic gate here corresponds directly to an AND, OR, or NOT in the Python code. As before, all intermediate values are labeled with logical expressions.
(We could use NAND instead of the AND followed by a NOT, but we'll usually stick to NOT and AND because they're more familiar.)
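For the curious, here's what that substitution looks like: the NOT-of-AND collapses into a single NAND call, and the outputs are identical. (The name XOR_WITH_NAND is ours, used here so we don't redefine the XOR above.)
>>> def XOR_WITH_NAND(x, y):
...     return AND(OR(x, y),
...                NAND(x, y))
# Check: same outputs as XOR
>>> [XOR_WITH_NAND(0, 0), XOR_WITH_NAND(0, 1), XOR_WITH_NAND(1, 0), XOR_WITH_NAND(1, 1)]
[0, 1, 1, 0]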
That takes care of the sum, but we still need to compute the carry. In our table, the carry column is only 1 when both inputs are 1, so we can use AND: carry = x AND y.
Putting the sum and carry together, we can write a half adder in Python:
>>> def HALF(x, y):
... carry = AND(x, y)
... sum = XOR(x, y)
... return (carry, sum)
>>> [HALF(0, 0), HALF(0, 1), HALF(1, 0), HALF(1, 1)]
[(0, 0), (0, 1), (0, 1), (1, 0)]
The output of this function correctly computes both the sum and the carry, matching the truth table that we started with. We can also express it as a circuit diagram, which takes up more space but also makes it easier to see how data flows through the system. To make reading easier, the XOR gate is shown as a single unit, as is traditional in electronics.
Full adders
To add multi-bit numbers, we use the primary school process: first add the digits in the rightmost column, giving a sum and a carry. Then, add the digits in the second-rightmost column, plus the previous column's carry bit. There are three total inputs for each column: one from each number, plus the previous column's carry.
Our half adders can produce a carry, but they can't accept incoming carries from the previous column. Fortunately, we can combine two half adders to build a full adder that both accepts and produces a carry. The truth table is below, but it's twice as long as the previous one because there are three inputs instead of two. There's no need to read this in its entirety, but it can be handy to refer back to when working out specific cases.
x | y | carry_in | carry_out | sum |
---|---|---|---|---|
0 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 0 | 1 |
1 | 0 | 0 | 0 | 1 |
1 | 1 | 0 | 1 | 0 |
0 | 0 | 1 | 0 | 1 |
0 | 1 | 1 | 1 | 0 |
1 | 0 | 1 | 1 | 0 |
1 | 1 | 1 | 1 | 1 |
We could turn this table into a series of logic gates using one of the many automated methods available. Like generated code, those automatically generated gates are often needlessly complex, so there are additional methods to simplify them into compact expressions like those below. These methods are out of scope for this article, though. In truth, the full adder function below was written by translating a circuit diagram into code.
>>> def FULL(x, y, carry_in):
... carry1, sum1 = HALF(x, y)
... carry2, sum2 = HALF(carry_in, sum1)
... carry_out = OR(carry1, carry2)
... return (carry_out, sum2)
# Check
>>> for x in [0, 1]:
... for y in [0, 1]:
... for carry_in in [0, 1]:
... print(FULL(x, y, carry_in))
(0, 0)
(0, 1)
(0, 1)
(1, 0)
(0, 1)
(1, 0)
(1, 0)
(1, 1)
The circuit diagram below shows the relationships between FULL's half adders more directly. Half adders are represented as a single unit, even though they each have a complex internal structure; abstraction is as important in hardware as it is in software. Like before, the intermediate values are labeled, this time using the names of the intermediate variables in the Python function.
If the Python function or this diagram is unclear, tracing the correspondences between those intermediate variables in the function and the diagram should help. Working through some entries in the truth table by setting the inputs and calculating the intermediate values will also help. It's a perennial involuntary pastime of first-year electrical engineering students.
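As an example of that kind of tracing, here's the last row of the truth table (x = 1, y = 1, carry_in = 1) worked out call by call, using the same functions:
>>> HALF(1, 1)    # carry1, sum1
(1, 0)
>>> HALF(1, 0)    # carry2, sum2 (carry_in=1, sum1=0)
(0, 1)
>>> OR(1, 0)      # carry_out = OR(carry1, carry2)
1
>>> FULL(1, 1, 1)
(1, 1)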
We can now add one bit, including incoming and outgoing carry bits. This doesn't sound like a big achievement, but we're close to adding arbitrarily large binary numbers. The only remaining step is to combine these full adders to form the "columns" of the addition process, each one feeding into the next.
Multi-bit addition
First, a note about representation: we'll represent multi-bit numbers with arrays of 0s and 1s. We could use Python's native integers, but keeping the bits explicit makes it clear that there's no magic involved.
To add multi-bit numbers, we need one full adder per bit. In our case, that's four full adders. The full adders will cascade into each other: the carry output of each full adder feeds the carry input of the next. This is still the same process from primary school: add each column of digits, possibly carrying a 1 over to the next column.
1 1 <- carried ones
0100001
+ 0110001
---------
1010010
Our adder begins with the rightmost, least significant digit in the numbers. For example, the number 1 (in base 10) is 0001 (in base 2), which we represent as [0, 0, 0, 1]. The number 9 (in base 10) is 1001 (in base 2), which we represent as [1, 0, 0, 1].
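To make the bit lists below easier to check against base 10, here's a small helper that converts a bit list back into a Python integer. It's only for verifying results and isn't part of the adder; the name bits_to_int is ours.
>>> def bits_to_int(bits):
...     # Most significant bit first: [1, 0, 0, 1] -> 9
...     result = 0
...     for bit in bits:
...         result = result * 2 + bit
...     return result
# Check
>>> bits_to_int([1, 0, 0, 1])
9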
>>> def ADD4(left, right):
... carry3, sum3 = FULL(left[3], right[3], 0)
... carry2, sum2 = FULL(left[2], right[2], carry3)
... carry1, sum1 = FULL(left[1], right[1], carry2)
... carry0, sum0 = FULL(left[0], right[0], carry1)
... return [sum0, sum1, sum2, sum3]
# Check: 0 + 0 == 0
>>> print(ADD4([0, 0, 0, 0], [0, 0, 0, 0]))
[0, 0, 0, 0]
# Check: 1 + 1 == 2
>>> print(ADD4([0, 0, 0, 1], [0, 0, 0, 1]))
[0, 0, 1, 0]
# Check: 7 + 1 == 8
>>> print(ADD4([0, 1, 1, 1], [0, 0, 0, 1]))
[1, 0, 0, 0]
# Check: 7 + 8 == 15 (the largest 4-bit number)
>>> print(ADD4([0, 1, 1, 1], [1, 0, 0, 0]))
[1, 1, 1, 1]
The problem with fixed-size adders like this one (and the ones in all of our computers) is that they stop being "adders" in the everyday sense when numbers get too large. For example, if we try to add 15 + 1, which should give us 16, we get 0 instead.
>>> print(ADD4([1, 1, 1, 1], [0, 0, 0, 1]))
[0, 0, 0, 0]
The adder worked correctly here: it set the final carry variable to 1, but that value was discarded because our function didn't do anything with it. If we built a five-bit adder, that final carry would be an input to the fifth full adder, and we'd get [1, 0, 0, 0, 0], which is 16 in base 10.
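If we wanted to surface that lost bit without going to five bits, one option (sketched below using the functions we already have) is a variant of ADD4 that also returns the final carry. This isn't used elsewhere in the article; it only shows where the overflow bit goes.
>>> def ADD4_WITH_CARRY(left, right):
...     carry3, sum3 = FULL(left[3], right[3], 0)
...     carry2, sum2 = FULL(left[2], right[2], carry3)
...     carry1, sum1 = FULL(left[1], right[1], carry2)
...     carry0, sum0 = FULL(left[0], right[0], carry1)
...     return (carry0, [sum0, sum1, sum2, sum3])
# Check: 15 + 1 overflows, but now we can see the carried 1
>>> print(ADD4_WITH_CARRY([1, 1, 1, 1], [0, 0, 0, 1]))
(1, [0, 0, 0, 0])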
As usual, here's the four-bit adder as a circuit diagram made out of our full adder components. The first carry_in and last carry_out are intentionally unused.
Our 4-bit adder is a specific type called a ripple adder because the carry bit "ripples" through the four full adder stages, one by one. This is roughly how integers worked in many early microprocessors. Later, in more advanced processors like Intel's 8008 CPU, faster carry-lookahead designs took over. Modern CPUs contain several adders doing different jobs, each specialized for its task, all of which are more complex than this basic design. Still, ripple adders are both the historical and conceptual basis for modern adders.
The diagram below shows the full four-bit adder with all of the individual AND, OR, and XOR gates shown. We could simplify it in several ways, but some redundancy is left in to show the repetitive structures that we've built up incrementally by reusing gates, half adders, and full adders.
It's not important to read and understand this diagram in detail by tracing each connection. Instead, compare it to the more familiar process of programming. We write high-level code broken into self-contained functions and modules, just as the full adders in the diagram above act as "black boxes" with inputs and outputs. Ultimately, our high-level code gets compiled down into native code that would be difficult to read, just as the full diagram below is more difficult than the one above.
Instead of tracing each connection, here are a few salient points to note. Half adders are always one XOR and one AND, with the XOR placed above the AND. (The first half adder is highlighted with a box.) Full adders are always two half adders plus an OR, so a total of two XORs, two ANDs, and one OR. (The second full adder is highlighted with a box.) The four-bit adder is made of four full adders stacked vertically here due to screen width limitations, but they'd be a bit easier to read if they were arranged horizontally.
All of this was built from our original NAND implementation. (Remember: we wrote all of the other gates in terms of NAND!)
We could break the above discrete four-bit adder diagram down even further, showing how each gate can be made from NAND gates. But, like in programming, the final stages of optimization are subtle, repetitive, and better left to automated tools. Our compilers automatically optimize our code when compiling it. Digital electrical engineers' tools automatically optimize their designs as well.
For reference, here are all of the functions from this article in one place, showing the full four-bit adder built from NAND gates. This code has no external dependencies and uses Python as little more than a calculator.
>>> def NAND(x, y):
... if x == 0 and y == 0: return 1
... if x == 0 and y == 1: return 1
... if x == 1 and y == 0: return 1
... if x == 1 and y == 1: return 0
>>> def NOT(x):
... return NAND(x, x)
>>> def AND(x, y):
... return NOT(NAND(x, y))
>>> def OR(x, y):
... return NAND(NAND(x, x), NAND(y, y))
>>> def XOR(x, y):
... return AND(OR(x, y),
... NOT(AND(x, y)))
>>> def HALF(x, y):
... carry = AND(x, y)
... sum = XOR(x, y)
... return (carry, sum)
>>> def FULL(x, y, carry_in):
... carry1, sum1 = HALF(x, y)
... carry2, sum2 = HALF(carry_in, sum1)
... carry_out = OR(carry1, carry2)
... return (carry_out, sum2)
>>> def ADD4(left, right):
... carry3, sum3 = FULL(left[3], right[3], 0)
... carry2, sum2 = FULL(left[2], right[2], carry3)
... carry1, sum1 = FULL(left[1], right[1], carry2)
... carry0, sum0 = FULL(left[0], right[0], carry1)
... return [sum0, sum1, sum2, sum3]
>>> print(ADD4([0, 0, 0, 1], [1, 0, 0, 1]))
[1, 0, 1, 0]