Coding and decoding with convolutional codes. The Viterbi Algorithm.


Block codes: main ideas. Outline: principles; 1st point of view: infinite-length block code; 2nd point of view: convolutions; some examples.

Repetition code. TX: CODING THEORY, RX: CPDING TOEORY. There is no way to recover from transmission errors: we need to add some redundancy at the transmitter side. Repetition of the transmitted symbols makes detection and correction possible:
TX: CCC OOO DDD III NNN GGG TTT HHH EEE OOO RRR YYY
RX: CCC OPO DDD III NNN GGD TTT OHO EEE OOO RRR YYY
Decoded: C O D I N G T O E O R Y. Majority voting corrects the single errors (OPO, GGD) but fails on the double error (OHO decodes to O): corrections and detection. Beyond repetition... better codes exist.
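The triple-repetition scheme above can be sketched in a few lines of Python; the message and the error pattern mirror the slide's example:

```python
from collections import Counter

def encode_rep3(msg):
    # Repeat each symbol three times: "COD" -> "CCCOOODDD"
    return "".join(ch * 3 for ch in msg)

def decode_rep3(coded):
    # Majority vote over each group of three received symbols
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(Counter(group).most_common(1)[0][0])
    return "".join(out)

tx = encode_rep3("CODINGTHEORY")
# Received word from the slide: OPO and GGD are corrected, OHO is not
rx = "CCCOPODDDIIINNNGGDTTTOHOEEEOOORRRYYY"
print(decode_rep3(rx))  # CODINGTOEORY: single errors fixed, double error survives
```

Majority voting over three repetitions corrects any single error per group; two errors in one group (OHO) outvote the true symbol.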


Block codes: main ideas. Geometric view: k = 1 bit of information, two codewords (e.g. of length n = 3 for the triple repetition code): (000) and (111). RX: how can the receiver decide about the transmitted word? Received words close to (000) are mapped to (000), received words close to (111) are mapped to (111): detection + correction. The receiver picks the (probably) right codeword, the closest one.

Block codes: main ideas. Linear block codes, e.g. Hamming codes. A binary linear block code takes k information bits at its input and calculates n bits. If the 2^k codewords are few enough and well spaced in the n-dimensional space, it is possible to detect or even correct errors. In 1950, Hamming introduced the (7,4) Hamming code. It encodes 4 data bits into 7 bits by adding three parity bits. It can detect and correct single-bit errors but can only detect double-bit errors. The code parity-check matrix (in one standard form, with columns the binary representations of 1..7) is:
H = [0 0 0 1 1 1 1; 0 1 1 0 0 1 1; 1 0 1 0 1 0 1]

Convolutional encoding: main ideas. In convolutional codes, each block of k bits is mapped into a block of n bits, BUT these n bits are not determined by the present k information bits only: they also depend on the previous information bits. This dependence can be captured by a finite state machine. It is achieved using several linear filtering operations: each convolution imposes a constraint between bits, and several convolutions together introduce the redundancy.
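As a sketch of this idea, here is a small rate-1/2 convolutional encoder written as two parallel mod-2 convolutions. The taps g1 = 1+D+D^2 and g2 = 1+D^2 are a common textbook choice assumed here for illustration, not necessarily the slides' own encoder:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 encoder: the state is the last two input bits."""
    state = [0, 0]  # shift register contents
    out = []
    for b in bits:
        window = [b] + state  # present input plus memorized bits
        # Each output bit is one convolution: mod-2 inner product with the taps
        out.append(sum(x * g for x, g in zip(window, g1)) % 2)
        out.append(sum(x * g for x, g in zip(window, g2)) % 2)
        state = [b] + state[:-1]  # shift: the new input enters the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two output bits, and those outputs depend on the present bit and the two memorized bits, exactly the finite-state dependence described above.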

Infinite generator matrix. A convolutional code can be described by an infinite matrix:

G = [ G_0 G_1 ... G_M
          G_0 G_1 ... G_M
              G_0 G_1 ... G_M
                  ...           ]

where each G_i is a k x n sub-matrix and each block-row is shifted by one block. This matrix depends on the M + 1 sub-matrices {G_i}, i = 0..M. K = M + 1 is known as the constraint length of the code.

Infinite generator matrix. With the information sequence written as blocks (I_0, I_1, ...) and the coded sequence as blocks (C_0, C_1, ...), the banded matrix above gives (C_0, C_1, ...) = (I_0, I_1, ...) G. It looks like block coding: C = I G.

Infinite generator matrix. Denoting by I_j = (I_{j1} ... I_{jk}) the j-th block of k information bits and by C_j = (C_{j1} ... C_{jn}) a block of n coded bits at the output, coding an infinite sequence of blocks (length k) I = (I_0 I_1 ...) produces an infinite sequence C = (C_0 C_1 ...) of coded blocks (length n):
C_0 = I_0 G_0
C_1 = I_0 G_1 + I_1 G_0
...
C_M = I_0 G_M + I_1 G_{M-1} + ... + I_M G_0
...
C_j = I_{j-M} G_M + ... + I_j G_0 for j >= M.
Block form of the coding scheme: it looks like block coding, C = I G.

The infinite generator matrix performs a convolution. Using the convention I_i = 0 for i < 0, the encoding structure C = I G is clearly a convolution:
C_j = sum_{l=0}^{M} I_{j-l} G_l.
For an information bit sequence I of finite length, only L < +infinity blocks of k bits are different from zero at the input of the coder: I = (I_0 ... I_{L-1}). The sequence C = (C_0 ... C_{L-1+M}) at the coder output is finite too. This truncated coded sequence is generated by a linear block code whose generator matrix is a kL x n(L + M) sub-matrix of G.
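The block convolution C_j = sum_l I_{j-l} G_l can be written directly with NumPy. The 1x2 sub-matrices below (k = 1, n = 2, M = 2) are illustrative values, not the ones from the slides:

```python
import numpy as np

# Illustrative 1x2 sub-matrices G_0, G_1, G_2 (so K = M + 1 = 3)
G = [np.array([[1, 1]]), np.array([[1, 0]]), np.array([[1, 1]])]
M = len(G) - 1

def conv_code(I_blocks):
    """C_j = sum_{l=0..M} I_{j-l} G_l over GF(2), with I_i = 0 for i < 0.
    The output has L + M blocks (truncation/closure included)."""
    L = len(I_blocks)
    C = []
    for j in range(L + M):
        acc = np.zeros((1, G[0].shape[1]), dtype=int)
        for l in range(M + 1):
            if 0 <= j - l < L:
                acc = (acc + I_blocks[j - l] @ G[l]) % 2
        C.append(acc)
    return C

I = [np.array([[1]]), np.array([[0]]), np.array([[1]])]
print([c.tolist() for c in conv_code(I)])
```

With L = 3 input blocks the output has L + M = 5 blocks, matching the kL x n(L + M) truncated generator matrix described above.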

Shift-register-based realization. Let g^(l)_{ab} denote the elements of matrix G_l. We now expand the convolution C_j = sum_{l=0}^{M} I_{j-l} G_l to make explicit the n components C_{j1}, ..., C_{jn} of each output block C_j:
C_j = [C_{j1}, ..., C_{jn}] = [ sum_{l=0}^{M} sum_{a=1}^{k} I_{j-l,a} g^(l)_{a1}, ..., sum_{l=0}^{M} sum_{a=1}^{k} I_{j-l,a} g^(l)_{an} ]
If the total length of the shift registers is L memory bits, there are 2^L different internal configurations: the behavior of the convolutional coder can be captured by a 2^L-state machine.

Shift-register-based realization (continued).
C_{jb} = sum_{a=1}^{k} sum_{l=0}^{M} I_{j-l,a} g^(l)_{ab}
depends on: the present input block I_j, and the M previous input blocks I_{j-1}, ..., I_{j-M}. C_{jb} can therefore be calculated by memorizing M past values in shift registers, one shift register per input line a = 1..k. For register a, only the memory cells for which g^(l)_{ab} = 1 are connected to adder b (b = 1..n).

Rate of a convolutional code: asymptotic rate. For each k-bit block at the input, an n-bit block is generated at the output. At the coder output, the ratio [number of information bits] / [total number of bits] is given by R = k/n. This quantity is called the rate of the code.

Rate of a convolutional code: finite-length rate. For a finite-length input sequence, the truncation reduces the rate. The exact finite-length rate is:
r_L = R * L / (L + M).
For L >> M, this rate is almost equal to the asymptotic rate R.
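A quick numeric check of the finite-length rate, with illustrative parameters k = 1, n = 2, M = 2 and L = 100 input blocks:

```python
k, n, M, L = 1, 2, 2, 100        # illustrative parameters, not from the slides
R = k / n                        # asymptotic rate
r_L = (k * L) / (n * (L + M))    # kL info bits over n(L + M) coded bits
assert abs(r_L - R * L / (L + M)) < 1e-12  # identical to R * L/(L+M)
print(R, round(r_L, 4))          # 0.5 0.4902
```

The two-block closure costs only about 2% of rate here; the penalty vanishes as L grows.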

Shift-register-based realization: a rate 1/2 encoder. One input (k = 1), two outputs (n = 2). The impulse responses are two polynomials P(D) and Q(D) in the delay operator D, one per output. Two convolutions are evaluated in parallel: the output of each convolution depends on the input and on the values memorized in the shift register. At each step, the values at the output depend on the input and on the internal state.

Shift-register-based realization: a rate 1/2 encoder (parameters). Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 4. Four 1x2 sub-matrices G_0, G_1, G_2, G_3, read off the taps of P(D) and Q(D).

Shift-register-based realization: a rate 2/3 encoder. Input block I_j = (I_{j,1}, I_{j,2}) (k = 2), three outputs C_{j,1}, C_{j,2}, C_{j,3} (n = 3). Three convolutions are evaluated in parallel: the output of each convolution depends on the two inputs and on the 4 values memorized in the shift registers. At each step, the values at the output depend on the inputs and on the internal state.

Shift-register-based realization: a rate 2/3 encoder (parameters). Rate 2/3 (k = 2, n = 3). Constraint length K = M + 1 = 2. Two 2x3 sub-matrices G_0 and G_1.

Shift-register-based realization: a rate 1/2, four-state encoder. Input I_j (k = 1), two outputs C_{j,1} and C_{j,2} (n = 2). Two convolutions are evaluated in parallel: the output of each convolution depends on the input I_j and on the two values memorized in the shift register, I_{j-1} and I_{j-2}. At each step, the values at the output depend on the input and on the internal state.

Shift-register-based realization: a rate 1/2, four-state encoder (parameters). Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 3. Three 1x2 sub-matrices G_0, G_1 and G_2.

This rate 1/2 (k = 1, n = 2) code is used in the sequel to explain the Viterbi algorithm.

A convolutional encoder is a FSM. Coding a sequence with the coder FSM involves: a state space; an input that activates the transition from one state to another; an output that is generated during the transition. Usual representations: the transition diagram and the lattice (trellis) diagram.

Transition diagram: a simple FSM. The state space is composed of 4 elements: 00, 01, 10, 11. Each state is represented by a node. The input is binary-valued, so two arrows start at each node. Arrows are indexed by a couple (input, output): the input that activates the transition and the output that is generated by this transition.

Lattice diagram: one slice of a lattice diagram shows the transitions from time j to time j+1. A new input triggers a transition from the present state to a one-step-future state. The lattice diagram unwraps the behavior of the FSM as a function of time.

FSM of a convolutional encoder. A simple encoder (the rate 1/2, four-state encoder above)... and the FSM of this encoder: four states, each with two outgoing transitions labeled (input, output).
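The FSM of such an encoder can be tabulated programmatically. The state is the pair of memorized bits; the taps are again the illustrative (1+D+D^2, 1+D^2) pair, not necessarily the slides' encoder:

```python
def build_fsm(g1=(1, 1, 1), g2=(1, 0, 1)):
    """Return {(state, input): (next_state, output)} for a 4-state rate-1/2 coder."""
    fsm = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for b in (0, 1):
                window = (b, s1, s2)  # present input plus the two memorized bits
                out = (sum(x * g for x, g in zip(window, g1)) % 2,
                       sum(x * g for x, g in zip(window, g2)) % 2)
                fsm[((s1, s2), b)] = ((b, s1), out)  # shift register update
    return fsm

fsm = build_fsm()
for (state, b), (nxt, out) in sorted(fsm.items()):
    print(f"state {state} --({b},{out})--> {nxt}")
```

The printed table is exactly the transition diagram: 4 states, two labeled outgoing edges per state.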

Coding a sequence using the coder FSM: coding 3 bits with a known automaton. Initial coder state: 00. Information-bearing bits enter the coder. The first value at the coder input is I_0; according to the transition diagram, this input activates the transition indexed by (I_0, C_0): the input generates the output block C_0, and the transition moves the coder from state 00 to the next state.

The second value at the coder input is I_1; according to the transition diagram, it activates the transition indexed by (I_1, C_1): the input generates the output block C_1 and moves the coder to the next state.

The last information bit I_2 enters the coder; according to the transition diagram, it activates the transition indexed by (I_2, C_2), generates the output block C_2 and moves the coder to the next state.

Lattice closure: zeros at the coder input reset its state. The first 0 activates a transition, generates the output block C_3 and moves the state towards 00. The second 0 generates the output block C_4 and resets the state to 00.

Received sequence. In fine, the information sequence is encoded as [C_0, C_1, C_2, C_3, C_4].

Noisy received coded sequence. The coded sequence [C_0, C_1, C_2, C_3, C_4] is transmitted over a Binary Symmetric Channel. Let us assume that two errors occur, so that the received sequence is [y_0, y_1, y_2, y_3, y_4].

Binary Symmetric Channel. Diagram of the BSC: input 0 goes to output 0 with probability 1-p and to 1 with probability p; input 1 goes to 1 with probability 1-p and to 0 with probability p. Characteristics of a Binary Symmetric Channel: memoryless channel (the output only depends on the present input, no internal state); two possible inputs (0, 1) and two possible outputs (0, 1); 0 and 1 are equally affected by errors (error probability p).

Branch metric of a Binary Symmetric Channel. The transition probabilities are: p(0|0) = p(1|1) = 1 - p and p(1|0) = p(0|1) = p. The Hamming distance between the received value y and the coder output C_rs (generated by the transition from state r to state s) is the number of bits that differ between the two vectors.

Branch metric of a Binary Symmetric Channel: calculation. The likelihood is written:
p(y | C_rs) = (1 - p)^n * (p / (1 - p))^(d_H(y, C_rs))
log p(y | C_rs) = d_H(y, C_rs) * log(p / (1 - p)) + n * log(1 - p)
n log(1 - p) is a constant and log(p / (1 - p)) < 0 for p < 1/2: maximizing the likelihood is equivalent to minimizing d_H(y, C_rs). The Hamming branch metric d_H(y, C_rs) between the observation and the output of the FSM is the metric adapted to the BS channel.
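The equivalence between maximizing the BSC likelihood and minimizing the Hamming distance can be verified numerically; p and the candidate words below are arbitrary illustrative choices (with p < 1/2):

```python
import math

def hamming(y, c):
    # Number of positions where the two binary vectors differ
    return sum(a != b for a, b in zip(y, c))

def log_lik_bsc(y, c, p=0.1):
    # log p(y|c) = d_H * log(p/(1-p)) + n * log(1-p)
    d = hamming(y, c)
    return d * math.log(p / (1 - p)) + len(y) * math.log(1 - p)

y = [1, 0, 1, 1]
candidates = [[1, 0, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]
best_ml = max(candidates, key=lambda c: log_lik_bsc(y, c))
best_dh = min(candidates, key=lambda c: hamming(y, c))
print(best_ml == best_dh)  # True: same decision either way
```

Because the log-likelihood is an affine, decreasing function of d_H, the argmax over candidates never changes when we switch to the Hamming metric.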

Additive White Gaussian Noise Channel. Diagram of the AWGN channel: a_k -> a_k + n_k. Characteristics of an AWGN channel: memoryless channel (the output only depends on the present input, no internal state); two possible inputs (-1, +1) and a real- (or even complex-) valued output; the output is a superposition of the input and a Gaussian noise; -1 and +1 are equally affected by errors.

Branch metric of an AWGN channel. The probability density function of a Gaussian noise is given by:
p(x) = (1 / (sigma * sqrt(2 pi))) * exp(-x^2 / (2 sigma^2))
The Euclidean distance between the analog received value y and the coder output C_rs (generated by the transition from state r to state s) is the sum of squared errors between the two vectors.

Branch metric of an AWGN channel: calculation. The likelihood is written:
p(y | C_rs) = (1 / (sigma * sqrt(2 pi)))^n * exp(-||y - C_rs||^2 / (2 sigma^2)) = (1 / (sigma * sqrt(2 pi)))^n * exp(-d_E^2(y, C_rs) / (2 sigma^2))
Since log p(y | C_rs) = constant - d_E^2(y, C_rs) / (2 sigma^2), maximizing the likelihood is equivalent to minimizing the Euclidean distance.
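The same check for the AWGN channel: the Gaussian log-likelihood differs from -d_E^2/(2 sigma^2) only by a constant, so the two orderings agree. Sigma, the received samples and the candidate points below are illustrative values:

```python
import math

def log_lik_awgn(y, c, sigma=0.8):
    # log p(y|c) = -n*log(sigma*sqrt(2*pi)) - ||y - c||^2 / (2 sigma^2)
    n = len(y)
    d2 = sum((a - b) ** 2 for a, b in zip(y, c))
    return -n * math.log(sigma * math.sqrt(2 * math.pi)) - d2 / (2 * sigma ** 2)

def sq_euclid(y, c):
    # Squared Euclidean distance d_E^2 between the two vectors
    return sum((a - b) ** 2 for a, b in zip(y, c))

y = [0.9, -1.2, 0.3]                       # noisy received samples (illustrative)
candidates = [[1, -1, 1], [1, -1, -1], [-1, -1, 1]]
best_ml = max(candidates, key=lambda c: log_lik_awgn(y, c))
best_de = min(candidates, key=lambda c: sq_euclid(y, c))
print(best_ml == best_de)  # True
```

Note that the soft values in y are used directly; quantizing them to bits first would turn this into the (coarser) BSC model of the next slide.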

Binary Symmetric Channel as an approximation of the Gaussian channel. Note that the BSC (hard decisions) is a coarse approximation of the AWGN channel, with an error probability p given by the Gaussian tail:
p = (1 / (sigma * sqrt(2 pi))) * integral from 1 to +infinity of exp(-x^2 / (2 sigma^2)) dx

Maximum Likelihood Sequence Estimation (MLSE). Outline: problem statement; main ideas of the Viterbi algorithm; notations; running a Viterbi algorithm.

Problem statement: a finite-length sequence (L blocks) drawn from a finite alphabet enters a FSM. The FSM output goes through a channel that corrupts the signal with some kind of noise, and the noisy channel output is observed. ML estimation of the transmitted sequence: how can we find the input sequence that maximizes the likelihood of the observations? Testing all 2^(kL) input sequences is exponentially costly; the Viterbi algorithm determines this sequence with minimal complexity.


In search of the optimal complexity: parameterize the optimization by the state of the FSM rather than by the input sequence; remove sub-optimal sequences on the fly; minimize a distance, which is simpler than maximizing a likelihood.

Likelihood. The information sequence is made of L blocks I_j of k bits each: I = [I_0, ..., I_{L-1}]. This sequence is completed by M blocks of k zeros to close the lattice. The coded sequence is: C = [C_0, ..., C_{L-1}, C_L, ..., C_{L+M-1}]. For a memoryless channel, the received sequence is: y = [y_0, ..., y_{L-1}, y_L, ..., y_{L+M-1}].

Notations: s_j denotes the state of the machine at time j; the initial state s_0 is zero. s_j -> s_{j+1} denotes the edge between states s_j and s_{j+1}. There exists a one-to-one relationship between a path s_0, ..., s_{L+M} in the lattice and a coded sequence C_0, ..., C_{L+M-1}.

Since observation y_j depends on s_j and C_j, or equivalently on the edge s_j -> s_{j+1}, the log-likelihood function is given by:
log P(y | C) = sum_j log P(y_j | s_j, C_j) = sum_j log P(y_j | s_j -> s_{j+1})
l_j(s_j, s_{j+1}) = -log P(y_j | s_j -> s_{j+1}) is called a branch metric (the minus sign makes the best path the one of smallest metric). Sums of branch metrics sum_{j<=k} l_j(s_j, s_{j+1}) are called path (or cumulative) metrics (from the initial node to s_{k+1}). Searching for the optimal input sequence is equivalent to finding the path in the lattice whose final cumulative metric is minimal.
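The heart of the algorithm is the add-compare-select step over one trellis slice: for each arrival node, add the branch metric to each predecessor's path metric and keep only the minimum. A minimal sketch, with arbitrary metric values:

```python
def acs(path_metrics, edges):
    """One add-compare-select step.
    path_metrics: {state: cumulative metric so far}
    edges: list of (from_state, to_state, branch_metric)
    Returns the new path metrics and, per state, the surviving predecessor."""
    new_metrics, survivor = {}, {}
    for frm, to, bm in edges:
        cand = path_metrics[frm] + bm      # add
        if to not in new_metrics or cand < new_metrics[to]:  # compare
            new_metrics[to] = cand         # select: keep the best path only
            survivor[to] = frm
    return new_metrics, survivor

pm = {"00": 0, "10": 2}
edges = [("00", "00", 1), ("10", "00", 0), ("00", "10", 2), ("10", "10", 1)]
print(acs(pm, edges))  # ({'00': 1, '10': 2}, {'00': '00', '10': '00'})
```

Discarding the losing branch at each node is what removes sub-optimal sequences "on the fly" and keeps the complexity linear in the sequence length.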

Notations for transitions on the lattice diagram. An edge is indexed by the couple (input, output). q denotes the weight of the optimal path that reaches the previous state, b the branch metric; the cumulative (path) metric of the candidate is q + b. At the arrival node, the weight of the optimal path is p = min(q + b, q' + b'): the competing branch is removed if q' + b' > q + b.

Viterbi for decoding the output of a BSC. Viterbi in progress... open the lattice. The FSM memorizes two past values, so it takes two steps to reach the steady-state behavior. During these first steps, only a fraction of the impulse response is used: the lattice is being opened during this transient period. The initial state is 00; since the input is binary-valued, two edges start from state 00 and only two states are reachable.

Computation of branch metrics: for each of the two branches, the FSM generates an output block, and the branch metric is the Hamming distance between this FSM output and the corresponding channel output. Each branch metric is surrounded by a circle on the figure.

Computation of cumulative (or path) metrics: a cumulative metric is the sum of the branch metrics along the path reaching a given node. Its value is printed on the edge just before the node. The path metric of the best path that reaches a node is printed in the black square representing this node. The initial path metric is set to 0.

For each of the two branches, the path metric of the reached state is equal to the previous path metric (0 at the initial node) plus the branch metric.

Viterbi in progress... open the lattice (continued). [Lattice figure: branch metrics (circled) and path metrics for the opening transitions.]

Viterbi in progress... steady-state behavior. The transient is finished: two edges now start from each node and two edges reach each node.

Steady-state behavior: computation of branch metrics, as usual.

Steady-state behavior: for each arrival node, the algorithm must select the path whose metric is minimal.

Viterbi in progress... close the lattice. The information sequence is finished: two zero inputs reset the state to its initial value.

Close the lattice: computation of branch metrics, as usual.

Close the lattice (continued). [Lattice figures: branch metrics, path metrics and surviving paths for the two closing transitions; at each arrival node, the competing branch with the larger cumulative metric is removed.]

Viterbi done: backtracking. [Final lattice figure: starting from the final state, the surviving edges are followed backwards to the initial state; this path carries the minimal cumulative metric and yields the decoded sequence.]
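Putting the whole lecture together: a complete hard-decision Viterbi decoder for a four-state rate-1/2 code, terminated with M = 2 zeros as in the example. The taps (1+D+D^2, 1+D^2) are again an illustrative textbook choice, not necessarily the slides' encoder; with them, encoding a message, flipping two bits, and decoding recovers the message:

```python
G1, G2 = (1, 1, 1), (1, 0, 1)   # illustrative taps: four-state rate-1/2 code

def step(state, b):
    """One FSM transition: next state and output pair for input bit b."""
    w = (b,) + state
    out = (sum(x * g for x, g in zip(w, G1)) % 2,
           sum(x * g for x, g in zip(w, G2)) % 2)
    return (b,) + state[:-1], out

def encode(bits):
    state, out = (0, 0), []
    for b in list(bits) + [0, 0]:      # lattice closure with M = 2 zeros
        state, o = step(state, b)
        out.extend(o)
    return out

def viterbi(received):
    pm = {(0, 0): (0, [])}             # only state 00 is a valid start
    for j in range(0, len(received), 2):
        y = received[j:j + 2]
        new = {}
        for state, (metric, bits) in pm.items():
            for b in (0, 1):
                nxt, out = step(state, b)
                # Hamming branch metric, then add-compare-select
                m = metric + sum(a != c for a, c in zip(y, out))
                if nxt not in new or m < new[nxt][0]:
                    new[nxt] = (m, bits + [b])
        pm = new
    return pm[(0, 0)][1][:-2]          # end in state 00, drop the closing zeros

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg)
coded[2] ^= 1; coded[9] ^= 1           # two channel errors, as in the slides
print(viterbi(coded) == msg)
```

For simplicity this sketch carries the surviving bit sequence along with each state (register exchange) instead of storing predecessor pointers and backtracking at the end; both yield the same maximum-likelihood path.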