Truth tables can appear in two ways depending on your mental model.
Without context, they look like grids of 0s and 1s with patterns to learn. The relationship between inputs and outputs feels abstract.
With a mental model of switches, voltage, and physical state, the same grid becomes transparent. The truth table documents something you already understand at a conceptual level.
The difference is context. Truth tables are logical abstractions that can be physically implemented. Once you see how physical systems reliably implement binary state, the grid stops being mysterious and starts being inevitable. This post provides that context.
To read the adversarial perspective, see Exploiting Truth Tables: Side Channel Attacks.
How I Understood Truth Tables
I had years of professional experience writing code, worked with bitwise operations in security contexts, and built circuits on breadboards. When I took a logic gates class, the instructor wrote patterns like 0000, 0100, and 1110 on the board without explaining what they represented.
The symbol-first approach works well for many people. It did not work for me. The material only clicked after I mapped it back to electronics. Once I connected the symbols to voltage states and switching behavior I already understood, the truth tables became transparent.
For me, the physical model came first. The symbols described it. That mapping transformed truth tables from abstract patterns into documentation of something concrete.
Truth tables are compression tools. They summarize behavior efficiently. But I needed to understand what they were summarizing before the notation made sense.
What 1 and 0 Actually Mean
In digital logic, 1 and 0 are not numbers. They are symbols in a logical abstraction that map to physical voltage states.
At the hardware level, these symbols correspond to voltage ranges:
- 0 represents LOW voltage, typically close to 0 volts
- 1 represents HIGH voltage, typically in a higher voltage range
For classic TTL logic running on a 5V supply:
- Logic LOW is 0V to approximately 0.8V
- Logic HIGH is approximately 2.0V to 5V
CMOS behaves closer to the rail voltages (near 0V and near supply voltage). The exact thresholds vary by technology, but the principle remains constant at the digital abstraction level: a wire is interpreted as being in one of two stable logical states under normal operating conditions.
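The threshold ranges above can be sketched as a small classifier. This is an illustration only, using the classic 5V TTL input ranges stated in the text; real designs also account for noise margins and the undefined region between the two ranges.

```python
# Classify a measured voltage using the classic 5V TTL input
# thresholds described above. Voltages between the two ranges
# are not a valid logic level.
def ttl_logic_level(volts):
    if 0.0 <= volts <= 0.8:
        return 0      # logic LOW
    if 2.0 <= volts <= 5.0:
        return 1      # logic HIGH
    return None       # undefined region: neither LOW nor HIGH

print(ttl_logic_level(0.3))   # 0
print(ttl_logic_level(3.3))   # 1
print(ttl_logic_level(1.4))   # None
```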
Truth tables describe the logical behavior of circuits, which is enforced by how those circuits react to voltage states.
What Bit Patterns Actually Represent
When you see patterns in a truth table like:
0000
0100
1110
These are not inherently numbers. They are not memory addresses. They are not byte values. They represent simultaneous input states on multiple wires.
Note: Bit patterns can be interpreted as numbers, and CPUs deliberately exploit that dual interpretation in ALUs and counters. But that is a layer on top. At the truth table level, each position represents a logical input, and each value represents one of two states. The logic abstraction maps to voltage in physical implementations. Treating bit patterns as numbers first obscures this foundation.
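The two views can be made explicit in code: the pattern as simultaneous wire states first, with the numeric reading as a separate layer on top.

```python
# A 4-bit pattern modeled as simultaneous wire states, not a number.
# Each position is one input line; each value is one of two states.
pattern = (1, 1, 1, 0)   # the "1110" example: wires A, B, C, D

# Interpreting the same pattern as an unsigned integer is a separate,
# optional layer on top of the wire-state view.
as_number = int("".join(str(bit) for bit in pattern), 2)

print(pattern)     # (1, 1, 1, 0) -- four independent wire states
print(as_number)   # 14 -- one possible numeric interpretation
```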
How Bit Patterns Scale
2 inputs (2^2 = 4 states):
If a circuit has two inputs A and B:
A B
0 0 - both inputs LOW
0 1 - A is LOW, B is HIGH
1 0 - A is HIGH, B is LOW
1 1 - both inputs HIGH
This is manageable. Four configurations.
3 inputs (2^3 = 8 states):
Add a third input C:
A B C
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
Still reasonable to write out. Eight configurations. You are just counting in binary from 000 to 111.
4 inputs (2^4 = 16 states):
Add a fourth input D:
A B C D
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1 - only B is HIGH
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0 - A, B, and C are HIGH, D is LOW
1 1 1 1
Sixteen configurations. The enumeration follows binary counting order. With 4 bits, you count from 0000 to 1111. That is 0 to 15 in decimal. This is a convenient ordering convention for listing all possible input combinations.
With N inputs, there are exactly 2^N possible configurations. A truth table enumerates all of them exhaustively. A 4-input circuit has 16 possible states. A 2-input gate has only 4 states (00, 01, 10, 11). As you add more inputs, the table grows exponentially, but the principle remains the same: every possible combination of input states gets documented.
Nothing is arbitrary. It is complete enumeration of possible input combinations. The binary counting order is simply a systematic way to ensure all combinations are listed.
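The enumeration itself can be generated mechanically, which underlines that nothing about it is arbitrary:

```python
from itertools import product

# Enumerate every input combination for N wires in binary counting
# order -- the same systematic ordering used in the tables above.
def all_input_states(n):
    return list(product((0, 1), repeat=n))

states = all_input_states(4)
print(len(states))    # 16, i.e. 2**4
print(states[0])      # (0, 0, 0, 0)
print(states[-1])     # (1, 1, 1, 1)
```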
What Logic Gates Actually Are
A logic gate is not fundamentally a symbol on a diagram. It is a physical arrangement of transistors that act as controlled switches.
AND gate: Conceptually, switches arranged in series. Current flows only if all switches close. Output is HIGH only when all inputs are HIGH.
OR gate: Conceptually, switches arranged in parallel. Current flows if any switch closes. Output is HIGH if any input is HIGH.
NOT gate: Inverts the control signal. Output is HIGH when input is LOW and vice versa.
NAND gate: AND followed by NOT. Output is LOW only when all inputs are HIGH. Everything else is HIGH.
NOR gate: OR followed by NOT. Output is HIGH only when all inputs are LOW. Everything else is LOW.
XOR gate: Combines arrangements so output is HIGH only when inputs differ. This gate is particularly interesting because it detects difference rather than presence or absence. Two switches in different positions produce a HIGH output. Two switches in the same position (both on or both off) produce a LOW output. This makes XOR essential for comparison operations, parity checking, and basic arithmetic. In a half-adder circuit, XOR produces the sum bit while AND produces the carry bit. XOR also appears in encryption algorithms, checksums, and error detection because XORing a value with itself produces zero, which makes applying the same XOR twice reversible.
XNOR gate: The inverse of XOR. Output is HIGH when inputs match (both 0 or both 1), and LOW when inputs differ. This gate is useful for equality checking and detecting when two signals are in the same state. XNOR is also called the equivalence gate because it returns true when inputs are equivalent.
Note: The series/parallel switch model is a mental model for understanding gate behavior, not a literal description of how gates are built. In actual CMOS technology (the basis for modern CPUs and digital circuits), NAND and NOR gates are the fundamental building blocks because they require fewer transistors than AND and OR gates. To build an AND gate in CMOS, you actually construct a NAND gate and then invert its output with a NOT gate. Similarly, OR gates are built from NOR gates plus a NOT gate. The switch analogy captures what the gate does logically. The transistor implementation is how it's actually built in silicon. Both perspectives are valid. This post focuses on the logical behavior.
These gates implement Boolean functions. The behavior is enforced by physics. The truth table describes the Boolean function that the physical circuit implements.
The Truth Tables
For two inputs A and B:
AND
A B OUT
0 0 0
0 1 0
1 0 0
1 1 1
Output is HIGH only when both inputs are HIGH.
OR
A B OUT
0 0 0
0 1 1
1 0 1
1 1 1
Output is HIGH when any input is HIGH.
NAND
A B OUT
0 0 1
0 1 1
1 0 1
1 1 0
Output is LOW only when both inputs are HIGH. This is AND inverted. In CMOS, this is the native primitive. AND gates are built from NAND plus NOT.
NOR
A B OUT
0 0 1
0 1 0
1 0 0
1 1 0
Output is HIGH only when both inputs are LOW. This is OR inverted. Like NAND, NOR is a native CMOS primitive. OR gates are built from NOR plus NOT.
XOR
A B OUT
0 0 0
0 1 1
1 0 1
1 1 0
Output is HIGH when inputs differ. Output is LOW when inputs match.
This behavior makes XOR unique among the basic gates. While AND checks for joint presence and OR checks for any presence, XOR checks for difference. When both inputs are the same (both 0 or both 1), the output is 0. When the inputs differ, the output is 1.
This difference-detection property gives XOR several important applications:
Comparison: XOR reveals whether two bits are different. In wider circuits, XORing corresponding bits of two values produces a result where each 1 indicates a difference at that position.
Parity: XOR chains produce even parity bits. XORing multiple bits together yields 1 if an odd number of inputs are 1, and 0 if an even number are 1. This forms the basis of simple error detection.
Arithmetic: In binary addition, XOR produces the sum bit. A half-adder uses XOR for the sum and AND for the carry. A full-adder extends this with additional XOR gates to handle the carry-in.
Reversibility: XOR has a unique reversible property. If C = A XOR B, then A = C XOR B and B = C XOR A. This reversibility makes XOR central to many encryption schemes and data manipulation techniques. XORing a value with a key produces ciphertext. XORing the ciphertext with the same key recovers the original value.
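The parity, arithmetic, and reversibility properties above are straightforward to confirm in a few lines of Python (a toy illustration of the mechanics, not real encryption):

```python
from functools import reduce
from operator import xor

# Parity: XOR chain over a group of bits -- yields 1 if an odd
# number of inputs are 1, 0 if an even number are.
def parity(bits):
    return reduce(xor, bits, 0)

# Arithmetic: half-adder -- XOR produces the sum bit, AND the carry.
def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

# Reversibility: applying the same XOR key twice recovers the value,
# because (A ^ B) ^ B == A.
value, key = 0b1011, 0b0110
ciphertext = value ^ key

print(parity([1, 0, 1, 1]))        # 1 (three ones: odd)
print(half_adder(1, 1))            # (0, 1) -> 1 + 1 = binary 10
print(ciphertext ^ key == value)   # True
```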
XNOR
A B OUT
0 0 1
0 1 0
1 0 0
1 1 1
Output is HIGH when inputs match. Output is LOW when inputs differ.
XNOR is the inverse of XOR. It functions as an equality checker, returning 1 when both inputs are in the same state. This makes it useful for comparison circuits and detecting signal equivalence.
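Since XNOR is just XOR inverted, a single-bit version can be written directly from that definition:

```python
# XNOR on a single bit, built as NOT(XOR): 1 when the bits match.
# The ^ 1 flips the low bit, inverting the XOR result.
def xnor(a, b):
    return (a ^ b) ^ 1

print(xnor(0, 0))  # 1 -- inputs match
print(xnor(0, 1))  # 0 -- inputs differ
print(xnor(1, 1))  # 1 -- inputs match
```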
NOT
A OUT
0 1
1 0
Output is the inverse of input.
These truth tables describe Boolean functions. Physical circuits implement these functions through transistor arrangements that respond to voltage.
Why the Physical Model Matters
Once you understand how logic is physically implemented, the abstraction becomes transparent.
A wire carrying a signal is logically interpreted as one of two states: ON or OFF. Inputs combine in a finite number of ways (2^N for N inputs). The circuit responds deterministically based on its transistor arrangement. The truth table describes that deterministic Boolean function.
Note for the technically curious: Real wires can also be high-impedance (tri-state) or metastable during transitions. This post deliberately stays in the two-state model because that is the foundation. Tri-state and metastability are real, but they are edge cases on top of this model, not alternatives to it. Additionally, truth tables describe logical behavior, not physical characteristics like rise/fall times, noise margins, fan-out, or drive strength.
Once this connection between logic and implementation is clear, memorization becomes minimal. You can derive the table from the Boolean function or verify it by building the circuit on a breadboard and measuring it.
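Deriving a table from its Boolean function is mechanical. A small sketch, using AND as the example:

```python
from itertools import product

# Derive a gate's truth table directly from its Boolean function,
# rather than memorizing it.
def truth_table(fn, n_inputs=2):
    return [(inputs, fn(*inputs))
            for inputs in product((0, 1), repeat=n_inputs)]

# Reproduces the AND table above, row by row.
for inputs, out in truth_table(lambda a, b: a & b):
    print(*inputs, out)
```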
Understanding how physical systems implement binary state transforms truth tables from patterns you memorize into logical functions you recognize as inevitable.
The Same Logic at Different Abstraction Levels
The truth tables do not change as you move between abstraction levels. What changes is how explicitly you deal with physical state.
Python: Highest Abstraction
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def XOR(a, b):
    return a ^ b

def NOT(a):
    return (~a) & 1
Python operates at the semantic level. You describe intent. The interpreter handles everything else. Variables a and b are abstract truth values. There is no visible notion of voltage, registers, or instructions. The mapping to hardware exists but is almost completely hidden.
Important note: Python's and and or are control-flow operators, not direct logic gate analogs. They short-circuit and return one of their operands rather than strictly 0 or 1. This works here because we assume a and b are already normalized to 0 or 1. The NOT function uses bitwise inversion with a mask (& 1) to stay consistent with the single-bit semantics used in the C and assembly examples below. Without the mask, ~a yields a negative integer (~0 is -1), not a single bit.
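The short-circuit behavior is easy to observe directly:

```python
# Python's `and`/`or` return one of their operands, not a normalized bit.
print(2 and 5)   # 5 -- first operand truthy, so `and` returns the second
print(0 and 5)   # 0 -- short-circuits on the falsy first operand
print(2 or 5)    # 2 -- short-circuits on the truthy first operand

# With inputs already normalized to 0 or 1, the results coincide with
# the AND/OR truth tables, which is why the definitions above work.
print(1 and 1)   # 1
print(1 or 0)    # 1
```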
C: Mid-Level Abstraction
int AND(int a, int b) {
    return a & b;
}

int OR(int a, int b) {
    return a | b;
}

int XOR(int a, int b) {
    return a ^ b;
}

int NOT(int a) {
    return ~a & 1;
}
C sits closer to the machine. Symbols map more directly to CPU instructions. You deal with bits rather than truth objects. Operators like &, |, and ^ correspond closely to CPU instructions. You must manage bit width and masking yourself. The abstraction is thinner but still present.
ARM64 Assembly: Lowest Software Abstraction
Assume register x0 holds input A, x1 holds input B, and result goes into x2:
AND:
    AND x2, x0, x1
OR:
    ORR x2, x0, x1
XOR:
    EOR x2, x0, x1
NOT:
    MVN x2, x0
    AND x2, x2, #1
Assembly instructions directly correspond to hardware logic units. There is no interpretation layer. Each instruction configures transistor-level logic inside the CPU. You are no longer describing logic. You are commanding it.
Note: Microarchitecture (scheduling, reordering, execution units) sits below this and does real work between the instruction and the transistor. Assembly is technically not the bottom. It is, however, the lowest level that developers actually work at. Everything below is the CPU's internal business.
How the Layers Connect
The abstraction stack:
- Python expresses intent using symbols
- C expresses bitwise operations with minimal abstraction
- Assembly directly drives logic hardware
- Microarchitecture schedules and routes instructions to execution units
- Hardware enforces truth tables using transistors
- Transistors enforce state using voltage
When you write if (A and B) in Python, you trigger:
- Bitwise operations in compiled code
- Logical instructions in assembly
- Gate-level evaluation in silicon
- Voltage-dependent switching in transistors
The truth table remains invariant across all layers. The abstraction ladder exists to hide complexity, not to change behavior.
The Software Connection
This understanding applies directly to software.
When you write:
if (A and B):
# ...
You are invoking a Boolean function that, at the lowest level, maps to transistor-level switching behavior conditioned on voltage.
Bitwise operators, CPU flags, branch conditions, protocol state machines, and access control rules all rely on the same foundation: Boolean logic implemented through physical state transitions.
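As one concrete instance of that foundation, access control rules are often implemented as bit flags checked with AND. The flag names and values below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical access-control sketch: permissions as bit flags,
# checked with the same AND logic the gates above implement.
READ  = 0b001   # assumed flag values, for illustration only
WRITE = 0b010
EXEC  = 0b100

user_perms = READ | WRITE   # grant two permissions via OR

def can(perms, flag):
    # AND isolates the flag's bit; nonzero means the permission is set
    return (perms & flag) != 0

print(can(user_perms, READ))   # True
print(can(user_perms, EXEC))   # False
```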
When the connection between logical abstraction and physical implementation is missing, symbols feel arbitrary. When that connection is clear, symbols become shorthand for predictable behavior.
Conclusion
Truth tables are not mysterious. They are not arbitrary grids of digits.
They describe Boolean functions, which are logical abstractions that exist independently of any particular implementation. Physical circuits are one way to realize these functions. Electronics is a particularly efficient implementation, but the logic precedes the physics.
Understanding how physical systems implement binary state makes the logical abstraction transparent. The truth table describes what the function does. The circuit enforces that behavior through voltage-dependent switching.
Teach symbols without connecting them to any implementation model and students memorize patterns.
Teach how logic maps to physical systems and the symbols explain themselves.
Understanding that 1 and 0 represent logical states implemented through physical voltage levels transforms truth tables from something you memorize into logical functions you recognize as inevitable.
Note on scope: This post focuses on combinational logic, where outputs depend only on current inputs. Truth tables alone do not describe sequential logic involving state, memory, time, or feedback (flip-flops, FSMs, etc.). That requires additional formalism.