Unit II — Combinational Circuits



2.1 Introduction to Combinational Logic

What is Combinational Logic?

Think of a vending machine’s coin display: you insert coins (inputs) and the screen shows the total amount (output) based only on what you inserted right now. It doesn’t “remember” yesterday’s coins. That’s the essence of combinational logic.

A combinational circuit is one whose output depends only on the present inputs — no memory, no feedback, no stored state. Given the same inputs, you always get the same output.

Combinational vs Sequential — The Big Distinction

This is one of the most fundamental distinctions in digital design:

| Feature | Combinational Circuit | Sequential Circuit |
|---|---|---|
| Memory | None — outputs depend only on current inputs | Has memory — outputs depend on current inputs AND previous state |
| Feedback | No feedback loops | Has feedback (flip-flops, latches) |
| Clock | Not needed | Usually needs a clock signal |
| Example | Adder, MUX, Decoder | Counter, Register, Flip-Flop |
| Think of it as | A pure math function: $f(x) = y$ | A function with memory: $f(x, \text{previous state}) = y$ |

Simple test: Can you describe the circuit’s behavior using only a truth table (without any time/clock concept)? If yes → combinational. If you need to know “what happened before” → sequential.

The 4-Step Design Process

Every combinational circuit follows the same design recipe:

  1. Understand the problem → Translate the English/real-world description into a precise truth table. This is the hardest step — get this wrong and everything else fails.
  2. Write the Boolean expression → From the truth table, write the SOP (sum of minterms for 1-outputs) or POS (product of maxterms for 0-outputs).
  3. Simplify → Use K-Map or Boolean algebra to minimize the expression. Fewer terms = fewer gates = cheaper, faster circuit.
  4. Draw the logic circuit → Convert the simplified expression directly into gates.

Real-world design example: You need a circuit that turns on a car alarm (output Y) when: the car is locked (A=1) AND either a door opens (B=1) OR the window breaks (C=1).

  1. Truth table: Y = 1 when A=1 AND (B=1 OR C=1)
  2. Expression: $Y = A(B + C) = AB + AC$
  3. Simplify: Already minimal
  4. Circuit: One OR gate (B+C) feeding into an AND gate with A
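The truth-table step can be checked mechanically by enumerating all inputs in code. A minimal Python sketch of the car-alarm function (the name `alarm` is mine):

```python
# Car alarm: Y = A AND (B OR C) over 1-bit inputs.
def alarm(a: int, b: int, c: int) -> int:
    return a & (b | c)

# Enumerate the full truth table and check it against the spec:
# Y = 1 exactly when the car is locked AND (door opens OR window breaks).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            expected = 1 if (a == 1 and (b == 1 or c == 1)) else 0
            assert alarm(a, b, c) == expected
```

Enumerating like this is exactly step 1 of the design process, just automated.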

What’s Coming in Unit II

Everything in Unit II is a combinational circuit. Here’s the roadmap:

Section Circuit What it does
2.2 Code Converters Translate between binary codes (Binary ↔ Gray)
2.3 Comparators Compare two numbers (A > B? A = B? A < B?)
2.4–2.5 Adders & Subtractors Arithmetic: addition and subtraction
2.6 Serial/Parallel/BCD Adders Faster/specialized addition circuits
2.7 Multiplexers (MUX) Select one of many inputs — a “data switch”
2.8 Demultiplexers (DEMUX) Route one input to one of many outputs
2.9 Encoders Convert one-hot to binary code
2.10 Decoders Convert binary code to one-hot

Each of these follows the same 4-step design process. Once you master the process, every new circuit is just a new truth table.


2.2 Code Converters

Why Do We Need Code Converters?

Different parts of a digital system use different binary codes for different reasons: plain binary suits arithmetic, BCD suits decimal displays, and Gray code suits position encoders (adjacent values differ in only one bit, so a misaligned read is off by at most one step).

When these different subsystems need to talk to each other, you need code converters — circuits that translate one code to another.

Real-world analogy: A code converter is like a language translator. A French subsystem produces data in “French” (Gray code). The “English” (Binary) subsystem can’t understand it. You need a translator circuit in between.

Code converters translate one binary code to another.

Binary to Gray Code Converter

The Conversion Rule — Intuitive Understanding

Rule: $G_i = B_{i+1} \oplus B_i$ (XOR adjacent bits), $G_{MSB} = B_{MSB}$

In plain English: copy the MSB straight across; every other Gray bit answers the question "is this binary bit different from the bit to its left?"

Why XOR? XOR tells you “are these two bits different?” A Gray bit is 1 when the two adjacent binary bits differ, and 0 when they’re the same. That’s literally what XOR computes.

Truth Table (4-bit):

B₃ B₂ B₁ B₀ G₃ G₂ G₁ G₀
0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 1
0 0 1 0 0 0 1 1
0 0 1 1 0 0 1 0
0 1 0 0 0 1 1 0
0 1 0 1 0 1 1 1
0 1 1 0 0 1 0 1
0 1 1 1 0 1 0 0
1 0 0 0 1 1 0 0
1 0 0 1 1 1 0 1
1 0 1 0 1 1 1 1
1 0 1 1 1 1 1 0
1 1 0 0 1 0 1 0
1 1 0 1 1 0 1 1
1 1 1 0 1 0 0 1
1 1 1 1 1 0 0 0

Expressions:

\[G_3 = B_3, \quad G_2 = B_3 \oplus B_2, \quad G_1 = B_2 \oplus B_1, \quad G_0 = B_1 \oplus B_0\]

Circuit: 3 XOR gates in parallel.

Convert Binary 1010 to Gray Code using the circuit:

  • $G_3 = B_3 = 1$
  • $G_2 = 1 \oplus 0 = 1$
  • $G_1 = 0 \oplus 1 = 1$
  • $G_0 = 1 \oplus 0 = 1$

Result: 1010 (Binary) = 1111 (Gray)

Convert Binary 0110 to Gray Code:

  • $G_3 = 0$, $G_2 = 0 \oplus 1 = 1$, $G_1 = 1 \oplus 1 = 0$, $G_0 = 1 \oplus 0 = 1$

Result: 0110 (Binary) = 0101 (Gray)

Convert Binary 1001 to Gray Code:

  • $G_3 = 1$, $G_2 = 1 \oplus 0 = 1$, $G_1 = 0 \oplus 0 = 0$, $G_0 = 0 \oplus 1 = 1$

Result: 1001 (Binary) = 1101 (Gray)
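The per-bit XOR rule collapses into a one-line shift-and-XOR when the binary word is held as an integer. A minimal Python sketch (function name mine):

```python
def bin_to_gray(b: int) -> int:
    """G_i = B_{i+1} XOR B_i for every bit at once: b XOR (b >> 1)."""
    return b ^ (b >> 1)

assert bin_to_gray(0b1010) == 0b1111  # matches the worked example above
assert bin_to_gray(0b0110) == 0b0101
assert bin_to_gray(0b1001) == 0b1101
```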

Gray to Binary Code Converter

The Reverse Process — Why It’s Trickier

Going back from Gray to Binary is slightly more complex because each step depends on the previous result, not just the input.

Rule: $B_{MSB} = G_{MSB}$, then $B_i = B_{i+1} \oplus G_i$

In plain English: copy the MSB straight across, then build each remaining binary bit by XORing the binary bit you just computed with the next Gray bit.

Expressions:

\[B_3 = G_3, \quad B_2 = B_3 \oplus G_2, \quad B_1 = B_2 \oplus G_1, \quad B_0 = B_1 \oplus G_0\]

Key difference — this is an exam favorite:

  • Binary → Gray: Uses XOR of input bits only — all XOR gates work in parallel → fast (1 gate delay)
  • Gray → Binary: Each output depends on the previous output — XOR gates must work sequentially (cascaded) → slower (n-1 gate delays for n bits)

This is why Binary→Gray converter is a purely parallel circuit, but Gray→Binary has propagation delay that grows with the number of bits.

Summary: Binary ↔ Gray Conversion at a Glance

| Direction | MSB Rule | Other Bits Rule | Circuit Type | Speed |
|---|---|---|---|---|
| Binary → Gray | $G_{MSB} = B_{MSB}$ | $G_i = B_{i+1} \oplus B_i$ | Parallel (all XOR at once) | Fast — 1 gate delay |
| Gray → Binary | $B_{MSB} = G_{MSB}$ | $B_i = B_{i+1} \oplus G_i$ | Cascaded (each depends on previous) | Slower — (n−1) gate delays |

Convert Gray 1100 to Binary:

  • $B_3 = G_3 = 1$
  • $B_2 = B_3 \oplus G_2 = 1 \oplus 1 = 0$
  • $B_1 = B_2 \oplus G_1 = 0 \oplus 0 = 0$
  • $B_0 = B_1 \oplus G_0 = 0 \oplus 0 = 0$

Result: 1100 (Gray) = 1000 (Binary)

Convert Gray 0111 to Binary:

  • $B_3 = 0$, $B_2 = 0 \oplus 1 = 1$, $B_1 = 1 \oplus 1 = 0$, $B_0 = 0 \oplus 1 = 1$

Result: 0111 (Gray) = 0101 (Binary) = 5
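The cascaded dependence shows up directly in code: each binary bit needs the previous binary bit, so the loop must run MSB-first. A minimal Python sketch (names mine):

```python
def gray_to_bin(g: int, n: int = 4) -> int:
    """B_MSB = G_MSB, then B_i = B_{i+1} XOR G_i, working MSB -> LSB."""
    b_prev = 0               # binary bit computed in the previous iteration
    result = 0
    for i in range(n - 1, -1, -1):
        b_prev ^= (g >> i) & 1        # B_i = B_{i+1} XOR G_i
        result |= b_prev << i
    return result

assert gray_to_bin(0b1100) == 0b1000  # matches the worked example above
assert gray_to_bin(0b0111) == 0b0101  # = 5
```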


2.3 Binary Comparators

What is a Comparator?

A comparator answers the most basic question in computing: “How do two numbers relate to each other?” Is A bigger? Is B bigger? Are they equal?

Real-world analogy: Think of comparing two people’s heights. You start by looking at the most significant feature first (overall height), and only look at finer details (centimeters) if the overall measurement is the same.

A comparator compares two binary numbers and produces three outputs: $A > B$, $A = B$, and $A < B$; the output matching the actual relationship goes HIGH.

Important: Exactly one of these three outputs is 1 at any time (they’re mutually exclusive).

1-Bit Comparator

The simplest comparator compares two single bits. Let’s think about it logically:

A B A>B A=B A<B
0 0 0 1 0
0 1 0 0 1
1 0 1 0 0
1 1 0 1 0

Expressions:

\[A > B: \; A\overline{B} \qquad A = B: \; \overline{A \oplus B} = A \odot B \qquad A < B: \; \overline{A}B\]

2-Bit Comparator

Now things get interesting. Comparing 2-bit numbers $A = A_1A_0$ with $B = B_1B_0$ uses the cascading principle — exactly how you compare numbers in real life:

Step 1: Compare the MSBs first ($A_1$ vs $B_1$)

Step 2: Only if MSBs are equal, compare $A_0$ vs $B_0$

This gives us the expressions:

\[A = B : (A_1 \odot B_1) \cdot (A_0 \odot B_0)\]

Read this as: “A equals B when MSBs are equal AND LSBs are equal.”

\[A > B : A_1\overline{B_1} + (A_1 \odot B_1) \cdot A_0\overline{B_0}\]

Read this as: “A is greater when MSB of A wins OR (MSBs tie AND LSB of A wins).”

\[A < B : \overline{A_1}B_1 + (A_1 \odot B_1) \cdot \overline{A_0}B_0\]

Read this as: “A is less when MSB of B wins OR (MSBs tie AND LSB of B wins).”

The Cascading Principle: This MSB-first comparison extends to any number of bits. A 4-bit comparator (like IC 7485) uses the same idea: compare bit by bit from MSB to LSB, stop as soon as you find a difference. The 7485 also has cascade inputs ($A>B_{in}, A=B_{in}, A<B_{in}$) to chain multiple chips for larger comparisons (e.g., two 7485s → 8-bit comparator).

Compare 2-bit numbers A=10 and B=01:

  • MSB: $A_1=1, B_1=0$ → $A_1 > B_1$ → immediately, $A > B$
  • Result: $A > B = 1$, $A = B = 0$, $A < B = 0$

Compare A=11 and B=11:

  • MSB: $A_1=1, B_1=1$ → Equal → check LSB
  • LSB: $A_0=1, B_0=1$ → Equal
  • Result: $A > B = 0$, $A = B = 1$, $A < B = 0$

Compare A=01 and B=11:

  • MSB: $A_1=0, B_1=1$ → $A_1 < B_1$ → immediately, $A < B$
  • Result: $A > B = 0$, $A = B = 0$, $A < B = 1$
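The MSB-first rule translates line-for-line into code. A minimal Python sketch of the 2-bit comparator expressions (function name mine):

```python
def compare2(a1: int, a0: int, b1: int, b0: int):
    """Returns (A>B, A=B, A<B); exactly one of the three is 1."""
    eq_msb = 1 - (a1 ^ b1)                           # A1 XNOR B1
    gt = (a1 & (1 - b1)) | (eq_msb & a0 & (1 - b0))  # MSB wins, or tie + LSB wins
    lt = ((1 - a1) & b1) | (eq_msb & (1 - a0) & b0)
    eq = eq_msb & (1 - (a0 ^ b0))
    return gt, eq, lt

assert compare2(1, 0, 0, 1) == (1, 0, 0)   # A=10 > B=01
assert compare2(1, 1, 1, 1) == (0, 1, 0)   # A=11 = B=11
assert compare2(0, 1, 1, 1) == (0, 0, 1)   # A=01 < B=11
```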

2.4 Half & Full Adders

The Building Blocks of All Arithmetic

Every calculator, every CPU, every phone does arithmetic. And at the very bottom, all arithmetic is built from adders — tiny circuits that add single bits. These are the most fundamental arithmetic building blocks in digital design.

Half Adder

What does “half” mean?

A half adder adds 2 single-bit inputs (A and B) and produces a Sum and a Carry. It’s called “half” because it only handles half the job — it can’t accept a carry from a previous stage. It’s like adding the rightmost column of a multi-digit addition (where there’s no carry coming in).

Think about it: What happens when you add two single bits?

A B Sum (S) Carry (C)
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
\[S = A \oplus B\] \[C = A \cdot B\]

Why these gates? XOR outputs 1 exactly when one of the two bits is 1 (the sum digit), and AND outputs 1 only when both bits are 1 (the only case that carries).

Circuit: Just 1 XOR gate + 1 AND gate — the simplest arithmetic circuit possible!
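In code, the half adder really is just two operators. A minimal Python sketch (function name mine):

```python
def half_adder(a: int, b: int):
    """Sum = XOR, Carry = AND."""
    return a ^ b, a & b

# All four rows of the truth table:
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
```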

Full Adder

Why do we need the “full” version?

A half adder can only add 2 bits. But when adding multi-bit numbers (like adding two 8-bit numbers), each column after the first might have a carry coming in from the previous column. A full adder handles this by accepting 3 inputs: A, B, and Carry-in ($C_{in}$).

Think of it like decimal addition:

  1 ← carry from previous column
  47
+ 35
----
  82

When adding the tens column: 4 + 3 + 1(carry) = 8. That “adding three things” is what a full adder does.

Adds 3 single-bit inputs (A, B, and Carry-in $C_{in}$).

A B Cᵢₙ Sum Cₒᵤₜ
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
\[S = A \oplus B \oplus C_{in}\] \[C_{out} = AB + C_{in}(A \oplus B)\]

Understanding the formulas: the Sum is the XOR of all three bits (1 when an odd number of inputs are 1), and a carry-out occurs when A and B are both 1, or when exactly one of them is 1 ($A \oplus B$) and a carry comes in.

Exam trap: Don’t confuse the carry formula! $C_{out} = AB + C_{in}(A \oplus B)$ is not the same as $C_{out} = AB + AC_{in} + BC_{in}$… actually, they ARE equivalent! The second form is the “majority function” (output is 1 when the majority of inputs are 1). Both are correct, but the first form maps more naturally to the 2-half-adder implementation.

Full Adder: Compute A=1, B=1, Cᵢₙ=1:

  • $S = 1 \oplus 1 \oplus 1 = 0 \oplus 1 = 1$
  • $C_{out} = (1 \cdot 1) + 1 \cdot (1 \oplus 1) = 1 + 1 \cdot 0 = 1$
  • Sum = 1, Carry = 1 (1+1+1 = 3 = 11₂) ✓

Full Adder: Compute A=0, B=1, Cᵢₙ=1:

  • $S = 0 \oplus 1 \oplus 1 = 1 \oplus 1 = 0$
  • $C_{out} = (0 \cdot 1) + 1 \cdot (0 \oplus 1) = 0 + 1 \cdot 1 = 1$
  • Sum = 0, Carry = 1 (0+1+1 = 2 = 10₂) ✓

Add 4-bit numbers using Ripple Carry Adder: 1011 + 0110

Bit A B Cᵢₙ S Cₒᵤₜ
0 1 0 0 1 0
1 1 1 0 0 1
2 0 1 1 0 1
3 1 0 1 0 1

Result: 1011 + 0110 = 10001 (11 + 6 = 17) ✓

Full Adder = 2 Half Adders + 1 OR gate — Here’s Why:

Think of it as a two-step addition:

  • Step 1 (HA1): Add A + B → get partial sum $S_1 = A \oplus B$ and partial carry $C_1 = AB$
  • Step 2 (HA2): Add $S_1 + C_{in}$ → get final sum $S = S_1 \oplus C_{in}$ and second partial carry $C_2 = S_1 \cdot C_{in}$
  • Combine carries (OR): $C_{out} = C_1 + C_2$ (carry occurs if EITHER step generated a carry)

Verification: $C_{out} = AB + (A \oplus B) \cdot C_{in}$ ← matches our formula! ✓

This is important because it means you don’t need to design a full adder from scratch — you can build it from two simpler half adders.
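The two-half-adders construction can be verified exhaustively against plain integer addition. A minimal Python sketch (function names mine):

```python
def half_adder(a: int, b: int):
    return a ^ b, a & b

def full_adder(a: int, b: int, cin: int):
    """FA = HA1(A, B), then HA2(partial sum, Cin), then OR the two carries."""
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

# 2*Cout + S must equal A + B + Cin for all 8 input rows.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```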


2.5 Half & Full Subtractors

Subtraction: The Mirror of Addition

Just as adders handle addition, subtractors handle subtraction. The structure is almost identical to adders, but instead of a “carry,” we have a “borrow.”

Key insight: In practice, most digital systems don’t use dedicated subtractors. They use adders + 2’s complement instead (subtracting B is the same as adding −B). But understanding subtractors is essential for exams and for understanding the underlying logic.

Half Subtractor

What does it do?

Subtracts B from A (two 1-bit inputs), producing a Difference and a Borrow. No borrow-in (that’s why it’s “half”).

Think about each case:

A B Difference (D) Borrow (Bₒᵤₜ)
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
\[D = A \oplus B\] \[B_{out} = \overline{A} \cdot B\]

Why these formulas? The difference of two bits is their XOR (identical to the half adder's sum), and a borrow is needed in exactly one case, 0 − 1, that is, when A = 0 and B = 1.

Half Subtractor vs Half Adder — spot the pattern:

| | Half Adder | Half Subtractor |
|---|---|---|
| Output 1 | $S = A \oplus B$ | $D = A \oplus B$ (identical!) |
| Output 2 | $C = A \cdot B$ | $B_{out} = \overline{A} \cdot B$ (only A is inverted) |
| Circuit | XOR + AND | XOR + AND (with A inverted) |

You can convert a half adder into a half subtractor by just adding one NOT gate on the A input to the AND gate!

Full Subtractor

Why do we need it?

Same reason we need a full adder: when subtracting multi-bit numbers, each column (except the first) might have a borrow coming in from the previous column. A full subtractor handles three inputs: A, B, and Borrow-in ($B_{in}$).

Think of decimal subtraction with borrowing:

   4 12       ← units column: 2 borrows 10 from the tens, becoming 12
-  3  5
-------
   0  7       (42 - 35 = 07)

When subtracting the tens column: 4 - 3 - 0(borrow) = 1… but wait, we had a borrow from the units column, so it’s actually 4 - 3 - 1 = 0. That “three-way subtraction” is what a full subtractor computes.

Three inputs: A, B, and Borrow-in ($B_{in}$).

A B Bᵢₙ D Bₒᵤₜ
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
\[D = A \oplus B \oplus B_{in}\] \[B_{out} = \overline{A}B + B_{in}(\overline{A \oplus B})\]

Understanding $B_{out}$: A borrow-out occurs when B alone exceeds A on this bit ($\overline{A}B$), or when A and B are equal and a borrow comes in ($B_{in} \cdot \overline{A \oplus B}$).

Notice the beautiful symmetry: The Difference (D) formula is identical to the Sum formula in a Full Adder ($S = A \oplus B \oplus C_{in}$)! The Borrow expression is what distinguishes them. Just like the half subtractor, the full subtractor is essentially a full adder with inversions on A.

Full Subtractor = 2 Half Subtractors + 1 OR gate:

  • HS1: $D_1 = A \oplus B$, $B_1 = \overline{A} \cdot B$
  • HS2: $D = D_1 \oplus B_{in}$, $B_2 = \overline{D_1} \cdot B_{in}$
  • $B_{out} = B_1 + B_2$

Same structure as “Full Adder = 2 Half Adders + OR”!

Full Subtractor: Compute A=1, B=1, Bᵢₙ=1 (i.e., 1 - 1 - 1 = -1):

  • $D = 1 \oplus 1 \oplus 1 = 0 \oplus 1 = 1$
  • $B_{out} = \overline{1} \cdot 1 + 1 \cdot (\overline{1 \oplus 1}) = 0 + 1 \cdot 1 = 1$
  • Difference = 1, Borrow = 1 → In context of multi-bit subtraction, the result digit is 1 with a borrow to the next column ✓
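The same construction works for subtraction, and it can be checked with the weight identity A − B − Bᵢₙ = D − 2·Bₒᵤₜ. A minimal Python sketch (function names mine):

```python
def half_subtractor(a: int, b: int):
    """D = A XOR B; borrow when A = 0 and B = 1."""
    return a ^ b, (1 - a) & b

def full_subtractor(a: int, b: int, bin_: int):
    """FS = HS1(A, B), then HS2(partial difference, Bin), OR the borrows."""
    d1, b1 = half_subtractor(a, b)
    d, b2 = half_subtractor(d1, bin_)
    return d, b1 | b2

# Check: A - B - Bin == D - 2*Bout for all 8 input rows.
for a in (0, 1):
    for b in (0, 1):
        for bn in (0, 1):
            d, bout = full_subtractor(a, b, bn)
            assert a - b - bn == d - 2 * bout
```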

2.6 Serial & Parallel Adders, BCD Adder

Now that we have the full adder as a building block, the question is: how do we add multi-bit numbers? There are two fundamental approaches, representing the classic engineering trade-off between speed and cost.

Ripple Carry (Parallel) Adder

The Idea: Line Up Full Adders Side by Side

Think of how you add two numbers by hand: you line up the columns, start from the rightmost digit, and carry to the left. A ripple carry adder does exactly this — it chains $n$ full adders in parallel, one for each bit.

Chain $n$ full adders in parallel. The carry output of each stage feeds into the carry input of the next stage.

   A₃ B₃    A₂ B₂    A₁ B₁    A₀ B₀
    │  │      │  │      │  │      │  │
   ┌▼──▼┐   ┌▼──▼┐   ┌▼──▼┐   ┌▼──▼┐
C₄←│ FA │←C₃│ FA │←C₂│ FA │←C₁│ FA │←C₀=0
   └──┬─┘   └──┬─┘   └──┬─┘   └──┬─┘
      S₃       S₂       S₁       S₀

Disadvantage: The Ripple Problem

Propagation delay — the carry must ripple through all stages from right to left before the final answer is ready. Total delay = $n \times t_{FA}$ where $t_{FA}$ is the delay of one full adder.

Why this matters: For a 32-bit adder, the carry might need to ripple through all 32 stages. FA₃₁ can’t produce its correct sum until it receives the carry from FA₃₀, which needs the carry from FA₂₉, and so on. The entire chain must settle before the answer is valid.

Solution (not in syllabus but good to know): Carry Lookahead Adders (CLA) compute all carries simultaneously using extra logic, reducing delay from $O(n)$ to $O(\log n)$. This is what real CPUs use.
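The ripple structure is a simple loop in code: one carry variable is the "ripple". A minimal Python sketch (function names mine):

```python
def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(a: int, b: int, n: int = 4):
    """n chained full adders, LSB first; each stage's carry feeds the next."""
    carry, total = 0, 0
    for i in range(n):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        total |= s << i
    return total, carry                    # (n-bit sum, final carry)

s, c4 = ripple_add(0b1011, 0b0110)         # 11 + 6
assert (c4 << 4) | s == 0b10001            # = 17
```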

Serial Adder

The Cheap Alternative

What if you can’t afford $n$ full adders? Use just one and process bits one pair at a time!

Uses a single full adder + a flip-flop (memory element) for the carry. Bits are fed one pair at a time (LSB first). The flip-flop stores the carry for the next clock cycle.

How it works:

  1. Clock cycle 1: Feed $A_0, B_0$, $C_{in}=0$ → get $S_0$, store carry in flip-flop
  2. Clock cycle 2: Feed $A_1, B_1$, $C_{in}$ from flip-flop → get $S_1$, store new carry
  3. … repeat for all n bits
  4. After $n$ clock cycles, you have the complete sum

| Feature | Serial Adder | Parallel Adder |
|---|---|---|
| Hardware | 1 FA + 1 FF | $n$ FAs |
| Speed | Slow ($n$ clock cycles) | Fast (combinational) |
| Cost | Low | High |

BCD Adder

The Problem: Binary Adders Don’t Speak BCD

This is one of the most interesting circuits in the course. Here’s the issue:

A 4-bit binary adder can produce results from 0 to 31 (adding two 4-bit numbers plus a carry: 15 + 15 + 1 = 31). But BCD only allows 0–9 for each digit. So when the sum exceeds 9, we get an invalid BCD result that needs correction.

Adds two BCD digits (0–9). Since BCD is valid only for 0–9 and a 4-bit binary adder can produce 0–19 (max: 9+9+1=19), a correction is needed.

The “Add 6” Correction — Why 6?

When is correction needed? If the 4-bit sum > 9 (i.e., 1010–1111) or if carry out = 1, add 6 (0110) to the result.

But WHY add 6? A 4-bit code has 16 states but BCD uses only 10, leaving 6 unused codes (1010–1111). When the sum overshoots 9, adding 6 skips past those 6 invalid codes, which both produces the correct BCD digit and pushes a carry into the next digit position.

\[\text{Correction condition: } C_{out} = 1 \text{ OR } (S_3 \cdot S_2) \text{ OR } (S_3 \cdot S_1)\]

Understanding the condition: The sum > 9 when the adder carries out ($C_{out} = 1$, i.e. sum ≥ 16), or when $S_3 S_2 = 1$ (sums 12–15), or when $S_3 S_1 = 1$ (sums 10–11).

Add BCD: 7 + 6

  • $0111 + 0110 = 1101$ (=13, which is > 9)
  • Correction: $1101 + 0110 = 10011$
  • Result: 0001 0011 → BCD for 13

Add BCD: 4 + 3

  • $0100 + 0011 = 0111$ (=7, which is ≤ 9)
  • No correction needed
  • Result: 0111 → BCD for 7

Add BCD: 8 + 9

  • $1000 + 1001 = 10001$ (=17, carry out = 1)
  • Correction needed (carry out = 1): $0001 + 0110 = 0111$, carry = 1
  • Result: 0001 0111 → BCD for 17
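The add-6 correction is easy to mirror in code; note that the BCD carry is 1 exactly when the correction fires. A minimal Python sketch of one BCD digit stage (function name mine):

```python
def bcd_digit_add(a: int, b: int, cin: int = 0):
    """Add two BCD digits: raw binary add, then +6 when sum > 9 or C4 = 1."""
    raw = a + b + cin
    s, c4 = raw & 0b1111, raw >> 4
    if c4 or s > 9:                      # correction condition
        s = (s + 0b0110) & 0b1111
        return s, 1                      # BCD carry = "correction was needed"
    return s, 0

assert bcd_digit_add(7, 6) == (3, 1)     # 13 -> digit 3, carry 1
assert bcd_digit_add(4, 3) == (7, 0)     # no correction
assert bcd_digit_add(8, 9) == (7, 1)     # 17 -> digit 7, carry 1
```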

BCD Adder Circuit — Block Diagram:

A₃A₂A₁A₀  B₃B₂B₁B₀
    │           │
    └─────┬─────┘
    ┌─────▼─────┐
    │ 4-bit     │
    │ Binary    │←── C_in
    │ Adder #1  │
    └─────┬─────┘
     C₄  S₃S₂S₁S₀
      │    │
      ├────▼───┐
      │ Detect │ ← Is sum > 9?
      │ >9 ?   │ ← C₄ OR (S₃·S₂) OR (S₃·S₁)
      └───┬────┘
     Correction = 0 or 0110
          │
    ┌─────▼─────┐
    │ 4-bit     │
    │ Binary    │ ← Adds 0000 or 0110
    │ Adder #2  │
    └─────┬─────┘
   BCD Carry  BCD Sum

The circuit uses two 4-bit adders: the first does the raw addition, and the second adds the correction (0110) only when needed. The correction logic between them is just a few AND/OR gates.

Common exam mistake: Students forget that the BCD carry-out is not just the carry from adder #1. The final BCD carry = (correction was needed). If the correction condition is true, the carry to the next BCD digit is 1, regardless of whether the second adder generates a carry.


2.7 Multiplexers (MUX)

What is a Multiplexer? — The Data Selector

Real-world analogy: Think of a TV channel selector. Your TV antenna receives 100 channels simultaneously, but you can only watch one at a time. The channel selector (remote control) picks which channel’s data reaches your screen. That’s exactly what a MUX does!

A multiplexer is a combinational circuit that selects one of $2^n$ inputs and forwards it to a single output, using $n$ select lines to choose which input wins.

Key relationship: $n$ select lines → $2^n$ possible inputs → 1 output

Why MUXes matter: They’re incredibly versatile. Besides data selection, MUXes can implement any Boolean function — this makes them a “universal” building block, similar to how NAND gates are universal. In exams, MUX-based function implementation is one of the most frequently tested topics.

2:1 MUX — The Simplest Case

The most basic MUX: one select line (S) chooses between two inputs ($I_0$ and $I_1$).

S Y
0 $I_0$
1 $I_1$
\[Y = \overline{S} \cdot I_0 + S \cdot I_1\]

Read the formula: “When S=0, the output equals $I_0$. When S=1, the output equals $I_1$.”

How the circuit enforces this: The $\overline{S}$ term “enables” $I_0$ when S=0 (AND gate passes $I_0$). The $S$ term “enables” $I_1$ when S=1. The OR gate combines them. Only one AND gate is active at a time, so only one input reaches the output.

4:1 MUX — The Workhorse

Select lines: $S_1, S_0$. Inputs: $I_0, I_1, I_2, I_3$. The two select lines form a 2-bit binary number that picks which input to pass through.

\[Y = \overline{S_1}\overline{S_0} \cdot I_0 + \overline{S_1}S_0 \cdot I_1 + S_1\overline{S_0} \cdot I_2 + S_1 S_0 \cdot I_3\]

Pattern: Each term is a minterm of the select lines ANDed with the corresponding input. The select lines act like an “address” that identifies which input to route to the output.

S₁ S₀ Y
0 0 I₀
0 1 I₁
1 0 I₂
1 1 I₃
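The 4:1 MUX equation maps one-to-one onto code: each AND term gates one data input, and the OR merges them. A minimal Python sketch (function name mine):

```python
def mux4(i0: int, i1: int, i2: int, i3: int, s1: int, s0: int) -> int:
    """Y = S1'S0'·I0 + S1'S0·I1 + S1S0'·I2 + S1S0·I3."""
    return ((1 - s1) & (1 - s0) & i0) | ((1 - s1) & s0 & i1) \
         | (s1 & (1 - s0) & i2) | (s1 & s0 & i3)

assert mux4(1, 0, 0, 0, 0, 0) == 1   # select 00 passes I0
assert mux4(0, 0, 1, 0, 1, 0) == 1   # select 10 passes I2
assert mux4(1, 1, 1, 0, 1, 1) == 0   # select 11 passes I3 = 0
```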

Implementing Boolean Functions with MUX

This is one of the most important exam topics. The key insight is:

An n-variable function can be implemented with a $2^{n-1}$:1 MUX (one size smaller than you’d expect) or a $2^n$:1 MUX directly.

Method: Using a $2^{n-1}$:1 MUX for an $n$-variable function

  1. Choose n-1 variables as select lines (usually the first variables: A, B, …)
  2. The remaining variable (usually the last one: C, D, …) appears at the data inputs
  3. For each combination of select lines, figure out what F equals in terms of the remaining variable
  4. The data input will be one of: 0, 1, X, or $\overline{X}$ (where X is the remaining variable)

Why this works: The select lines of the MUX partition the truth table into groups. Each group has only 2 rows (for the remaining variable = 0 and = 1). The behavior of F within each group matches one of the four options (always 0, always 1, equals X, or equals $\overline{X}$).

Implement $F(A,B,C) = \sum m(1, 2, 6, 7)$ using 4:1 MUX:

Use A, B as select lines. For each combination of A, B, express F in terms of C:

| A | B | F for C=0 | F for C=1 | Input |
|---|---|---|---|---|
| 0 | 0 | F(0,0,0)=0 | F(0,0,1)=1 | C |
| 0 | 1 | F(0,1,0)=1 | F(0,1,1)=0 | $\overline{C}$ |
| 1 | 0 | 0 | 0 | 0 |
| 1 | 1 | F(1,1,0)=1 | F(1,1,1)=1 | 1 |

Connect: $I_0 = C$, $I_1 = \overline{C}$, $I_2 = 0$, $I_3 = 1$, $S_1 = A$, $S_0 = B$

Implement $F(A,B,C) = \sum m(0, 3, 5, 6)$ using 4:1 MUX:

Use A, B as select lines:

| A | B | F for C=0 | F for C=1 | Input |
|---|---|---|---|---|
| 0 | 0 | F(0,0,0)=1 | F(0,0,1)=0 | $\overline{C}$ |
| 0 | 1 | F(0,1,0)=0 | F(0,1,1)=1 | C |
| 1 | 0 | F(1,0,0)=0 | F(1,0,1)=1 | C |
| 1 | 1 | F(1,1,0)=1 | F(1,1,1)=0 | $\overline{C}$ |

Connect: $I_0 = \overline{C}$, $I_1 = C$, $I_2 = C$, $I_3 = \overline{C}$

Implement $F(A,B,C) = \sum m(0, 1, 2, 3)$ using 4:1 MUX:

| A | B | F for C=0 | F for C=1 | Input |
|---|---|---|---|---|
| 0 | 0 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 |

Connect: $I_0 = I_1 = 1$, $I_2 = I_3 = 0$ → This simplifies to $F = \overline{A}$
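The residue method can be verified exhaustively against the minterm list. A minimal Python sketch for the first example, $F(A,B,C) = \sum m(1, 2, 6, 7)$ (helper name mine):

```python
def mux4(i0, i1, i2, i3, s1, s0):
    # Y = S1'S0'·I0 + S1'S0·I1 + S1S0'·I2 + S1S0·I3
    return ((1 - s1) & (1 - s0) & i0) | ((1 - s1) & s0 & i1) \
         | (s1 & (1 - s0) & i2) | (s1 & s0 & i3)

# A, B on the select lines; data inputs from the residue table:
# I0 = C, I1 = C', I2 = 0, I3 = 1.
for m in range(8):
    a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1
    f = mux4(c, 1 - c, 0, 1, a, b)
    assert f == (1 if m in (1, 2, 6, 7) else 0)
```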

Building larger MUX from smaller — Hierarchical Design:

8:1 MUX = two 4:1 MUX + one 2:1 MUX

How it works:

  • Split the 8 inputs into two groups of 4: $I_0$–$I_3$ go to MUX-A, $I_4$–$I_7$ go to MUX-B
  • The lower-order select lines ($S_0, S_1$) control both 4:1 MUXes simultaneously
  • The higher-order select line ($S_2$) controls the final 2:1 MUX that chooses between MUX-A’s output and MUX-B’s output

General rule: To build a $2^n$:1 MUX, you need two $2^{n-1}$:1 MUXes + one 2:1 MUX. This is a recursive construction that chips like the 74151 (8:1 MUX) use internally.


2.8 Demultiplexers (DEMUX)

What is a DEMUX? — The Reverse of MUX

Real-world analogy: A MUX is like choosing which of 8 microphones to listen to. A DEMUX is like a speaker system where you have one audio source and you route it to one of 8 rooms. You’re sending one input to one of many outputs.

A demultiplexer takes one input (data line D) and routes it to one of $2^n$ outputs based on $n$ select lines. All other outputs stay at 0.

MUX vs DEMUX — Mirror Images:

| | MUX | DEMUX |
|---|---|---|
| Direction | Many inputs → 1 output | 1 input → many outputs |
| Role | Data selector | Data distributor |
| Select lines | Choose which input to read | Choose which output to send to |
| Analogy | TV channel selector | Mail sorter (one letter → one mailbox) |

1:4 DEMUX

One data input D, two select lines ($S_1, S_0$), four outputs ($Y_0$–$Y_3$).

The select lines form a 2-bit “address” telling the DEMUX which output to activate:

S₁ S₀ Y₀ Y₁ Y₂ Y₃
0 0 D 0 0 0
0 1 0 D 0 0
1 0 0 0 D 0
1 1 0 0 0 D

\(Y_0 = D \cdot \overline{S_1} \cdot \overline{S_0}\) \(Y_1 = D \cdot \overline{S_1} \cdot S_0\) \(Y_2 = D \cdot S_1 \cdot \overline{S_0}\) \(Y_3 = D \cdot S_1 \cdot S_0\)

Pattern: Each output = Data input AND one minterm of the select lines. Only the output whose “address” matches the select line combination gets the data; all others receive 0.

How to read the formula: $Y_2 = D \cdot S_1 \cdot \overline{S_0}$ means “output Y₂ receives the data D only when S₁=1 and S₀=0 (i.e., select = 10 = 2).”

DEMUX = Decoder with Enable — The Key Connection:

Compare the DEMUX outputs with a 2:4 decoder:

  • Decoder: $Y_0 = \overline{S_1}\overline{S_0}$, $Y_1 = \overline{S_1}S_0$, etc.
  • DEMUX: $Y_0 = D \cdot \overline{S_1}\overline{S_0}$, $Y_1 = D \cdot \overline{S_1}S_0$, etc.

The DEMUX is literally a decoder with each output ANDed with D! If you use the data input D as the “enable” of a decoder, you get a DEMUX. This is why the same IC can serve as both a decoder and a DEMUX.

1:4 DEMUX with D=1, S₁=1, S₀=0:

  • $Y_0 = 1 \cdot 0 \cdot 1 = 0$
  • $Y_1 = 1 \cdot 0 \cdot 0 = 0$
  • $Y_2 = 1 \cdot 1 \cdot 1 = 1$ ← data routed here!
  • $Y_3 = 1 \cdot 1 \cdot 0 = 0$

Only Y₂ gets the data because the select address is “10” = 2.
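A minimal Python sketch of the 1:4 DEMUX equations (function name mine):

```python
def demux4(d: int, s1: int, s0: int):
    """Returns [Y0, Y1, Y2, Y3]; D lands only on the addressed output."""
    return [d & (1 - s1) & (1 - s0), d & (1 - s1) & s0,
            d & s1 & (1 - s0),       d & s1 & s0]

assert demux4(1, 1, 0) == [0, 0, 1, 0]   # address 10: data routed to Y2
assert demux4(0, 1, 0) == [0, 0, 0, 0]   # D = 0: every output stays 0
```

Setting `d = 1` permanently turns this into the 2:4 decoder of section 2.10, which is exactly the "decoder with enable" connection described above.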


2.9 Encoders

What is an Encoder? — The Reverse of a Decoder

Real-world analogy: Imagine a calculator keypad with 10 buttons (0–9). When you press button “7”, the calculator internally needs to store $0111$ (the binary code for 7). The circuit that converts “button 7 is pressed” into the binary code $0111$ is an encoder.

An encoder converts $2^n$ input lines to $n$ output lines. At any time, one input is active (HIGH), and the encoder outputs the binary code corresponding to which input is active. It’s the reverse of a decoder.

Key relationships: $2^n$ input lines → $n$ output lines (the mirror of the decoder's $n \to 2^n$). A 4:2 encoder has 4 inputs and 2 outputs; an 8:3 encoder has 8 inputs and 3 outputs.

4:2 Encoder (Simple)

4 inputs ($I_0$–$I_3$), 2 outputs ($Y_1, Y_0$). The encoder outputs the 2-bit binary number of whichever input is HIGH.

I₃ I₂ I₁ I₀ Y₁ Y₀
0 0 0 1 0 0
0 0 1 0 0 1
0 1 0 0 1 0
1 0 0 0 1 1

Critical Assumption: Only ONE input is active at a time. If multiple inputs are active simultaneously, the output is undefined (garbage). This is the fundamental limitation of a simple encoder.

\[Y_1 = I_2 + I_3\] \[Y_0 = I_1 + I_3\]

How to derive these: Look at which inputs make each output 1. $Y_1 = 1$ when $I_2$ or $I_3$ is active (their codes 10 and 11 have the high bit set), and $Y_0 = 1$ when $I_1$ or $I_3$ is active (codes 01 and 11).

So each output is just an OR of the inputs that have a 1 in that bit position.

Problem with simple encoders:

  1. If no input is active, the output is 00 — same as if $I_0$ is active! You can’t tell “no input” from “input 0.”
  2. If multiple inputs are active (say $I_1$ and $I_2$), the output is $Y_1Y_0 = 11$ (code for $I_3$!) — completely wrong.

Both problems are solved by the priority encoder.

8:3 Priority Encoder — The Practical Version

Why priority?

When multiple inputs can be active simultaneously, a priority encoder has a built-in hierarchy — it outputs the code for the highest-priority (usually highest-numbered) active input, ignoring all lower-priority ones.

Real-world analogy: In a hospital emergency room, if patients with severity levels 2, 5, and 7 all arrive at once, the system reports level 7 (highest priority) first.

I₇ I₆ I₅ I₄ I₃ I₂ I₁ I₀ Y₂ Y₁ Y₀ V
0 0 0 0 0 0 0 0 X X X 0
0 0 0 0 0 0 0 1 0 0 0 1
0 0 0 0 0 0 1 X 0 0 1 1
⋮ (rows for I₂–I₆ follow the same pattern: the highest active input sets the code) ⋮
1 X X X X X X X 1 1 1 1

$V$ (valid bit) = 1 when at least one input is active. This solves the "no input vs $I_0$" ambiguity: output 000 with $V = 1$ means "$I_0$ is active," while $V = 0$ means "nothing is active."

The X entries in the table mean “don’t care” — when a higher-priority input is active, it doesn’t matter what the lower-priority inputs are doing.

IC: 74148 (8:3 priority encoder) — a commonly used chip.

Priority encoder with $I_7=0, I_6=0, I_5=1, I_4=0, I_3=1, I_2=1, I_1=0, I_0=1$:

Multiple inputs are active ($I_5, I_3, I_2, I_0$). The priority encoder outputs the code for the highest active input:

  • Highest active input = $I_5$ → Output = $101_2$ = 5
  • $V = 1$ (valid)

The lower-priority inputs ($I_3, I_2, I_0$) are completely ignored.
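Priority is just a highest-index-first scan. A minimal Python sketch of the 8:3 priority encoder's behavior (function name mine; this models the function only, not the 74148's active-low pinout):

```python
def priority_encode8(inputs):
    """inputs[i] = I_i. Returns (output code as an int, valid bit V)."""
    for i in range(7, -1, -1):       # scan from highest priority down
        if inputs[i]:
            return i, 1
    return 0, 0                      # V = 0: no input active

ins = [1, 0, 1, 1, 0, 1, 0, 0]       # I0, I2, I3, I5 active
assert priority_encode8(ins) == (5, 1)       # I5 wins; the rest are ignored
assert priority_encode8([0] * 8) == (0, 0)   # distinguishable from "I0 active"
```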

Simple Encoder vs Priority Encoder:

| Feature | Simple Encoder | Priority Encoder |
|---|---|---|
| Multiple inputs | Undefined output | Outputs highest priority |
| No input active | Output = 000 (ambiguous) | V=0 indicates "no valid input" |
| Circuit complexity | Simple OR gates | More complex (cascaded logic) |
| Practical use | Rarely used alone | Used in interrupt controllers, keyboards |

2.10 Decoders

What is a Decoder? — Binary Code to One-Hot

Real-world analogy: Think of an elevator floor selector. You press button “3” (a binary code input), and the elevator activates floor 3’s relay (one specific output out of many). The button panel is the encoder (you→code), and the floor relay system is the decoder (code→action).

A decoder converts $n$ input lines to $2^n$ output lines. For each input combination (binary code), exactly one output is activated (set to 1), and all others remain 0. This “one-hot” output is why decoders are so useful — each output corresponds to exactly one minterm!

Key insight: Each decoder output is a minterm of the input variables. This makes decoders incredibly powerful for implementing Boolean functions.

2:4 Decoder — The Basic Case

2 inputs ($A_1, A_0$), 4 outputs ($Y_0$–$Y_3$). The input forms a 2-bit binary number that selects which output to activate.

A₁ A₀ Y₀ Y₁ Y₂ Y₃
0 0 1 0 0 0
0 1 0 1 0 0
1 0 0 0 1 0
1 1 0 0 0 1
\[Y_0 = \overline{A_1} \cdot \overline{A_0}, \quad Y_1 = \overline{A_1} \cdot A_0, \quad Y_2 = A_1 \cdot \overline{A_0}, \quad Y_3 = A_1 \cdot A_0\]

Notice: Each output is literally a minterm!

Circuit: Each output just needs one AND gate (with appropriate inversions). A 2:4 decoder is just 4 AND gates + 2 NOT gates.

3:8 Decoder (IC 74138)

3 inputs ($A_2, A_1, A_0$) → 8 outputs ($Y_0$–$Y_7$). Each output corresponds to a minterm of the 3 input variables.

Why 3:8? 3 input bits → $2^3 = 8$ possible combinations → 8 outputs. The decoder “decodes” a 3-bit binary number into which of the 8 outputs to activate.

Implementing Boolean Functions with Decoders

This is the most powerful application of decoders and a major exam topic.

Since each decoder output = one minterm, implementing any function is trivial — just OR together the outputs corresponding to the minterms you need.

Why This is Brilliant

One decoder generates all $2^n$ minterms simultaneously, so any number of functions of the same variables can share a single decoder; each additional function costs only one extra OR gate.

Implement $F(A,B,C) = \sum m(1, 3, 5, 7)$ using a 3:8 decoder:

Simply: $F = Y_1 + Y_3 + Y_5 + Y_7$ (OR the required minterm outputs)

Use a 3:8 decoder + 1 OR gate.

Implement $F_1(A,B,C) = \sum m(0, 2, 5, 7)$ and $F_2(A,B,C) = \sum m(1, 3, 6)$ using a single 3:8 decoder:

  • $F_1 = Y_0 + Y_2 + Y_5 + Y_7$ (4-input OR gate)
  • $F_2 = Y_1 + Y_3 + Y_6$ (3-input OR gate)

Both share the same decoder — only the OR gates differ. This is a major advantage of decoder-based implementation for multiple functions.

Implement a full adder using a 3:8 decoder:

A full adder has inputs A, B, Cᵢₙ and outputs Sum and Cₒᵤₜ:

  • $Sum = \sum m(1, 2, 4, 7)$ → $Sum = Y_1 + Y_2 + Y_4 + Y_7$
  • $C_{out} = \sum m(3, 5, 6, 7)$ → $C_{out} = Y_3 + Y_5 + Y_6 + Y_7$

One 3:8 decoder + two OR gates → complete full adder!
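The decoder-based full adder can be verified in a few lines. A minimal Python sketch (function name mine):

```python
def decode3(a2: int, a1: int, a0: int):
    """3:8 decoder: output i is 1 iff i equals the input's binary value."""
    n = (a2 << 2) | (a1 << 1) | a0
    return [1 if i == n else 0 for i in range(8)]

# One decoder + two OR gates: Sum = m1+m2+m4+m7, Cout = m3+m5+m6+m7.
for m in range(8):
    a, b, cin = (m >> 2) & 1, (m >> 1) & 1, m & 1
    y = decode3(a, b, cin)
    s = y[1] | y[2] | y[4] | y[7]
    cout = y[3] | y[5] | y[6] | y[7]
    assert 2 * cout + s == a + b + cin   # matches integer addition
```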

Building Larger Decoders from Smaller Ones

4:16 decoder from two 3:8 decoders: Use the 4th input ($A_3$) as an enable signal: $A_3$ drives decoder #2's enable directly and decoder #1's enable through a NOT gate, so $A_3 = 0$ activates outputs $Y_0$–$Y_7$ and $A_3 = 1$ activates $Y_8$–$Y_{15}$.

The lower 3 inputs ($A_2, A_1, A_0$) go to both decoders simultaneously. The enable line ensures only one decoder is active, giving you 16 mutually exclusive outputs.

General rule: To build a $n$-to-$2^n$ decoder, use two $(n-1)$-to-$2^{n-1}$ decoders + 1 NOT gate (for the enable). This is the same hierarchical construction idea as building larger MUXes.

Decoder vs Demux vs Encoder vs MUX — The Complete Picture:

These four circuits are deeply related. Understanding their connections is key to DLD mastery:

| Component | Function | Inputs → Outputs | Analogy |
|---|---|---|---|
| Decoder | $n$ input → $2^n$ output (one-hot) | Code → selecting one line | Elevator: press "3" → floor 3 activates |
| Encoder | $2^n$ input → $n$ output | One-hot → code | Calculator: press button 7 → output 0111 |
| MUX | $2^n$ data + $n$ select → 1 output | Many-to-one | TV remote: pick 1 channel from 100 |
| DEMUX | 1 data + $n$ select → $2^n$ output | One-to-many | Mail sorter: 1 letter → 1 of many mailboxes |

Relationships:

  • Encoder ↔ Decoder are inverses (one-hot ↔ binary code)
  • MUX ↔ DEMUX are inverses (many-to-one ↔ one-to-many)
  • Decoder with enable = DEMUX (use enable as data input)
  • MUX = Encoder + Enable logic (conceptually)

These four components, together with adders and comparators, form the complete toolkit for combinational circuit design.


Unit II — Practice Questions & PYQs

Q1. Design a Binary to Gray code converter for 4 bits. Show the truth table, K-Map simplification for each output, and the final circuit diagram. (IIIT PYQ-type)

Q2. Design a Full Adder using: (Standard exam)

  • (a) Two half adders and an OR gate
  • (b) NAND gates only
  • (c) Two 4:1 MUXes

Q3. Design a BCD Adder circuit. Explain when and why the correction of adding 0110 is needed. Show an example of adding BCD 8 + 9. (Frequently asked)

Q4. Implement the following using an 8:1 MUX: (IIIT PYQ-type)

  • $F(A,B,C) = \sum m(0, 2, 3, 5, 7)$
  • $F(A,B,C) = \sum m(1, 3, 4, 6)$

Q5. Implement $F(A,B,C,D) = \sum m(0, 1, 3, 4, 8, 9, 15)$ using a 4:1 MUX (use A, B as select lines). (Important)

Q6. Design a 2-bit binary comparator. Show truth table, derive expressions for $A>B$, $A=B$, $A < B$, and draw the circuit. (IIIT PYQ-type)

Q7. Using a 3:8 decoder and OR gates, implement: (Standard exam)

  • $F_1(A,B,C) = \sum m(0, 1, 4, 7)$
  • $F_2(A,B,C) = \sum m(2, 3, 5, 6)$

Show how both functions share the same decoder.

Q8. Compare: (Theory)

  • (a) Half Adder vs Full Adder
  • (b) Half Subtractor vs Full Subtractor
  • (c) Encoder vs Decoder
  • (d) MUX vs DEMUX
  • (e) Serial Adder vs Parallel Adder

Q9. Design a Full Subtractor circuit. Derive $D$ and $B_{out}$ using K-Maps. Show how it can be built using two half subtractors and an OR gate. (Important)

Q10. A 4:1 MUX has inputs $I_0 = 1$, $I_1 = 0$, $I_2 = B$, $I_3 = \overline{B}$, select lines $S_1 = A$, $S_0 = C$. Find the Boolean function $F(A,B,C)$ and express as a sum of minterms. (Tricky)

Q11. Design a BCD to Excess-3 code converter using logic gates. Show the complete truth table and K-Map simplification for each output bit. (IIIT PYQ-type)

Q12. Explain priority encoder with a truth table. Why is it preferred over a simple encoder? Implement an 8:3 priority encoder. (Theory + Design)

Q13. Construct a 16:1 MUX using five 4:1 MUXes. Draw the block diagram. (Design)

Q14. Construct a 4:16 decoder using two 3:8 decoders. Explain the enable line usage. (Design)

