ECEN 5682 Theory and Practice of Error Control Codes
Convolutional Code Performance
University of Colorado, Spring 2007

Definition: A convolutional encoder which maps one or more data sequences of infinite weight into code sequences of finite weight is called a catastrophic encoder. Example: Encoder #5. The binary R = 1/2, K = 3 convolutional encoder with transfer function matrix G(D) = [1 + D   1 + D^2] has the encoder state diagram shown in Figure 15, with states S0 = 00, S1 = 10, S2 = 01, and S3 = 11.

Fig. 15: Encoder State Diagram for Catastrophic R = 1/2, K = 3 Encoder (branches labeled input/output).
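The catastrophic behavior of Encoder #5 can be checked directly: the all-ones data sequence has unbounded weight, yet its code sequence has weight 3, because both generators share the factor 1 + D over GF(2). A minimal sketch (the function name `encode_catastrophic` is illustrative, not from the notes):

```python
def encode_catastrophic(bits):
    """Encode with G(D) = [1+D, 1+D^2] (Encoder #5), zero initial state."""
    s1 = s2 = 0                       # s1 = u_{t-1}, s2 = u_{t-2}
    out = []
    for u in bits:
        out += [u ^ s1, u ^ s2]       # c1 = u + u_{t-1}, c2 = u + u_{t-2}
        s1, s2 = u, s1
    return out

data = [1] * 20                       # data weight 20, growing without bound
code = encode_catastrophic(data)
print(sum(code))                      # code weight stays at 3
```

A ML decoder that confuses this finite-weight code sequence with the all-zero sequence therefore makes infinitely many data errors, which is exactly why such encoders are avoided.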

Fig. 16: A Detour of Weight w = 7 and i = 3, Starting at Time t = 0 (trellis diagram).

Definition: The complete weight distribution {A(w, i, l)} of a convolutional code is defined as the number of detours (or codewords), beginning at time 0 in the all-zero state S0 of the encoder, returning for the first time to S0 after l time units, and having code (Hamming) weight w and data (Hamming) weight i.

Definition: The extended weight distribution {A(w, i)} of a convolutional code is defined by

    A(w, i) = sum_{l=1}^∞ A(w, i, l).

That is, A(w, i) is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w and corresponding data sequence (Hamming) weight i.

Definition: The weight distribution {A_w} of a convolutional code is defined by

    A_w = sum_{i=1}^∞ A(w, i).

That is, A_w is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w.

Theorem: The probability of an error event (or decoding error) P_E for a convolutional code with weight distribution {A_w}, decoded by an ML decoder, at any given time t (measured in frames) is upper bounded by

    P_E ≤ sum_{w=d_free}^∞ A_w P_w(E),

where P_w(E) = P{ML decoder makes detour with weight w}.
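These definitions can be made concrete with a small search over an encoder state diagram. The sketch below (helper names are illustrative) enumerates detours for the non-catastrophic R = 1/2, K = 3 encoder G(D) = [1+D^2  1+D+D^2] used later in these notes, counting first returns to S0 by code weight; the length cap makes the counts complete only for small w:

```python
from collections import Counter

def step(state, u):
    """One branch of the G(D) = [1+D^2, 1+D+D^2] encoder.

    state = (u_{t-1}, u_{t-2}); returns (next state, branch Hamming weight)."""
    s1, s2 = state
    c1, c2 = u ^ s2, u ^ s1 ^ s2
    return (u, s1), c1 + c2

def detour_weights(max_len):
    """Count detours from S0 back to S0 with at most max_len branches."""
    counts = Counter()
    start, w0 = step((0, 0), 1)            # a detour must leave S0 with u = 1
    stack = [(start, w0, 1)]               # (state, weight so far, length so far)
    while stack:
        state, w, l = stack.pop()
        for u in (0, 1):
            ns, bw = step(state, u)
            if ns == (0, 0):
                counts[w + bw] += 1        # first return to the all-zero state
            elif l + 1 < max_len:
                stack.append((ns, w + bw, l + 1))
    return counts

A = detour_weights(12)
print(A[5], A[6], A[7])                    # A_5 = 1, A_6 = 2, A_7 = 4
```

For this code the search confirms the familiar pattern A_w = 2^(w-5) for w ≥ 5 (at least for the small weights fully captured by the length cap).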

Theorem: On a memoryless BSC with transition probability ε < 0.5, the probability of error P_d(E) between two detours or codewords distance d apart is given by

    P_d(E) = sum_{e=(d+1)/2}^{d} C(d, e) ε^e (1−ε)^(d−e),                                          d odd,

    P_d(E) = (1/2) C(d, d/2) ε^(d/2) (1−ε)^(d/2) + sum_{e=d/2+1}^{d} C(d, e) ε^e (1−ε)^(d−e),      d even,

where C(d, e) denotes the binomial coefficient.

Proof: Under the Hamming distance measure, an error between two binary codewords distance d apart is made if more than d/2 of the bits in which the codewords differ are in error. If d is even and exactly d/2 bits are in error, then an error is made with probability 1/2. QED

Note: A somewhat simpler but less tight bound is obtained by dropping the factor of 1/2 in the first term for d even, as follows:

    P_d(E) ≤ sum_{e=⌈d/2⌉}^{d} C(d, e) ε^e (1−ε)^(d−e).

A much simpler, but often also much looser, bound is the Bhattacharyya bound

    P_d(E) ≤ (1/2) [4ε(1−ε)]^(d/2).
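As a numerical check of the theorem and the two bounds above, the sketch below evaluates the exact P_d(E), the simpler bound (factor 1/2 dropped for even d), and the Bhattacharyya bound on a BSC (function names and the ε = 0.05 sample point are illustrative):

```python
import math

def p_exact(d, eps):
    """Exact P_d(E) from the theorem; odd and even d handled separately."""
    tail = sum(math.comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(d // 2 + 1, d + 1))
    if d % 2 == 1:
        return tail
    return tail + 0.5 * math.comb(d, d // 2) * (eps * (1 - eps))**(d // 2)

def p_simple(d, eps):
    """Bound with the 1/2 factor dropped on the e = d/2 term (even d)."""
    return sum(math.comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(math.ceil(d / 2), d + 1))

def p_bhattacharyya(d, eps):
    return 0.5 * (4 * eps * (1 - eps))**(d / 2)

for d in (5, 6):
    print(d, p_exact(d, 0.05), p_simple(d, 0.05), p_bhattacharyya(d, 0.05))
```

For odd d the exact expression and the simpler bound coincide; for even d the three values illustrate the ordering exact ≤ simple ≤ Bhattacharyya.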

Probability of Symbol Error. Suppose now that A_w = sum_{i=1}^∞ A(w, i) is substituted in the bound for P_E. Then

    P_E ≤ sum_{w=d_free}^∞ sum_{i=1}^∞ A(w, i) P_w(E).

Multiplying A(w, i) by i and summing over all i then yields the total number of data symbol errors that result from all detours of weight w as sum_{i=1}^∞ i A(w, i). Dividing by k, the number of data symbols per frame, thus leads to the following theorem.

Theorem: The probability of a symbol error P_s(E) at any given time t (measured in frames) for a convolutional code with rate R = k/n and extended weight distribution {A(w, i)}, when decoded by an ML decoder, is upper bounded by

    P_s(E) ≤ (1/k) sum_{w=d_free}^∞ sum_{i=1}^∞ i A(w, i) P_w(E),

where P_w(E) is the probability of error between the all-zero path and a detour of weight w.

The graph on the next slide shows different bounds for the probability of a bit error on a BSC for a binary rate R = 1/2, K = 3 convolutional encoder with transfer function matrix G(D) = [1 + D^2   1 + D + D^2].
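A hedged sketch of how such a bound curve can be computed for this encoder. It assumes the standard total-information-weight enumerator for G(D) = [1+D^2  1+D+D^2], namely sum_i i A(w, i) = (w−4) 2^(w−5) for w ≥ 5 (which follows from this code's transfer function T(D, N) = D^5 N / (1 − 2DN)); the truncation point w_max is arbitrary:

```python
import math

def pairwise_error(d, eps):
    """Exact P_d(E) on a BSC, from the theorem above."""
    tail = sum(math.comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(d // 2 + 1, d + 1))
    if d % 2 == 0:
        tail += 0.5 * math.comb(d, d // 2) * (eps * (1 - eps))**(d // 2)
    return tail

def pb_union_bound(eps, w_max=40):
    """Union bound on P_b(E) for the R=1/2, K=3, d_free=5 code (k = 1)."""
    return sum((w - 4) * 2**(w - 5) * pairwise_error(w, eps)
               for w in range(5, w_max + 1))

print(pb_union_bound(0.01))
```

The terms decay geometrically for small ε, so truncating the infinite union-bound sum at a moderate w_max changes the result negligibly.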

Figure: Bit error probability bounds for the binary R = 1/2, K = 3, d_free = 5 convolutional code, plotted versus log10(ε): the union bound P_b(E) on the BSC, the Bhattacharyya bound P_b(E) on the BSC, and P_b(E) on the AWGN channel with soft decisions.

Figure: Upper Bounds on P_b(E) for Convolutional Codes on BSC (Hard Decisions), plotted versus log10(ε), for R = 1/2, K = 3, d_free = 5; R = 2/3, K = 3, d_free = 5; R = 3/4, K = 3, d_free = 5; R = 1/2, K = 5, d_free = 7; and R = 1/2, K = 7, d_free = 10.

Transmission Over AWGN Channel. The following figure shows a one-shot model for transmitting a data symbol with value a_0 over an additive Gaussian noise (AGN) waveform channel using pulse amplitude modulation (PAM) of a pulse p(t) and a matched filter (MF) receiver. The main reason for using a one-shot model for performance evaluation with respect to channel noise is that it avoids intersymbol interference (ISI).

Figure: One-shot model: the channel adds noise n(t) with PSD S_n(f) to s(t) = a_0 p(t); the receiver filters the result r(t) with h_R(t) and samples the filter output b(t) at t = 0 to obtain b_0.

If the noise is white with power spectral density (PSD) S_n(f) = N_0/2 for all f, the channel model is called the additive white Gaussian noise (AWGN) model. In this case the matched filter (which maximizes the SNR at its output at t = 0) is

    h_R(t) = p*(−t) / ∫ |p(µ)|^2 dµ   <=>   H_R(f) = P*(f) / ∫ |P(ν)|^2 dν,

where * denotes complex conjugation. If the PAM pulse p(t) is normalized so that

    E_p = ∫ |p(µ)|^2 dµ = 1,

then the symbol energy at the input of the MF is

    E_s = E[∫ |s(µ)|^2 dµ] = E[|a_0|^2],

where the expectation is necessary since a_0 is a random variable.
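A discrete-time sketch of the normalization and the MF sampling: with E_p = 1, the noiseless MF output at t = 0 recovers a_0 exactly (the pulse shape below is an arbitrary stand-in, not from the notes):

```python
import math

p = [1.0, 2.0, 2.0, 1.0]                      # stand-in PAM pulse shape
Ep = sum(x * x for x in p)
p = [x / math.sqrt(Ep) for x in p]            # normalize so E_p = 1

Es = 2.0
a0 = -math.sqrt(Es)                           # antipodal symbol value
r = [a0 * x for x in p]                       # noiseless received signal
b0 = sum(ri * pi for ri, pi in zip(r, p))     # MF output sampled at t = 0
print(b0)                                     # equals a0 when E_p = 1
```

With noise added, the same inner product would yield a0 plus a zero-mean Gaussian term of variance N_0/2, which is the model used on the next slides.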

When the AWGN model with S_n(f) = N_0/2 is used and a_0 = α is transmitted, the received symbol b_0 at the sampler after the output of the MF is a Gaussian random variable with mean α and variance σ_b^2 = N_0/2. For antipodal binary signaling (e.g., using BPSK), a_0 ∈ {−√E_s, +√E_s}, where E_s is the (average) energy per symbol. Thus, b_0 is characterized by the conditional pdf's

    f_b0(β | a_0 = −√E_s) = (1/√(πN_0)) e^(−(β+√E_s)^2/N_0),

and

    f_b0(β | a_0 = +√E_s) = (1/√(πN_0)) e^(−(β−√E_s)^2/N_0).

These pdf's are shown graphically on the following slide.

Figure: The two conditional pdf's f_b0(β | a_0 = −√E_s) and f_b0(β | a_0 = +√E_s), centered at −√E_s and +√E_s and separated by 2√E_s, with the decision regions â_0 = −√E_s (β < 0) and â_0 = +√E_s (β > 0).

If the two values of a_0 are equally likely, or if a ML decoding rule is used, then the (hard) decision rule per symbol is to decide â_0 = +√E_s if β > 0 and â_0 = −√E_s otherwise.

The probability of a symbol error when hard decisions are used is

    P(E | a_0 = −√E_s) = (1/√(πN_0)) ∫_0^∞ e^(−(β+√E_s)^2/N_0) dβ = (1/2) erfc(√(E_s/N_0)),

where

    erfc(x) = (2/√π) ∫_x^∞ e^(−γ^2) dγ ≤ e^(−x^2).

Because of the symmetry of antipodal signaling, the same result is obtained for P(E | a_0 = +√E_s), and thus a BSC derived from an AWGN channel used with antipodal signaling has transition probability

    ε = (1/2) erfc(√(E_s/N_0)),

where E_s is the energy received per transmitted symbol.
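The induced BSC crossover probability is easy to tabulate with the standard-library erfc (a small sketch; the SNR sample points are arbitrary):

```python
import math

def bsc_epsilon(es_over_n0):
    """Crossover probability of the BSC induced by antipodal signaling on AWGN."""
    return 0.5 * math.erfc(math.sqrt(es_over_n0))

for snr_db in (0, 3, 6):
    snr = 10 ** (snr_db / 10)
    print(snr_db, bsc_epsilon(snr))
```

At E_s/N_0 = 0 dB this gives ε = (1/2) erfc(1) ≈ 0.0786, and ε decreases rapidly as the SNR grows.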

To make a fair comparison in terms of signal-to-noise ratio (SNR) of the transmitted information symbols between coded and uncoded systems, the energy per code symbol of the coded system needs to be scaled by the rate R of the code. Thus, when hard decisions and coding are used in a binary system, the transition probability of the BSC model becomes

    ε_c = (1/2) erfc(√(R E_s/N_0)),

where R = k/n is the rate of the code. The figure on the next slide compares P_b(E) versus E_b/N_0 for an uncoded and a coded binary system. The coded system uses an R = 1/2, K = 3 convolutional encoder with G(D) = [1 + D^2   1 + D + D^2].
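The rate penalty can be seen numerically: at the same information-symbol SNR, the coded system's BSC is strictly noisier per code symbol. A small sketch (the 4 dB operating point is chosen arbitrarily):

```python
import math

def bsc_eps(snr):
    return 0.5 * math.erfc(math.sqrt(snr))

es_over_n0 = 10 ** (4 / 10)           # 4 dB per information symbol
R = 0.5                               # rate k/n of the code
eps_uncoded = bsc_eps(es_over_n0)
eps_coded = bsc_eps(R * es_over_n0)   # energy per code symbol scaled by R
print(eps_uncoded, eps_coded)         # eps_coded > eps_uncoded
```

The coded system must therefore earn back this per-symbol penalty, and more, through its error-correcting power before it shows a net coding gain.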

Figure: P_b(E) versus E_b/N_0 [dB] (E_b: info bit energy) for the binary R = 1/2, K = 3, d_free = 5 convolutional code with hard decisions on the AWGN channel: uncoded P_b(E), the union bound, and the Bhattacharyya bound.

Definition: Coding Gain. Coding gain is defined as the reduction in E_s/N_0 permissible for a coded communication system to obtain the same probability of error (P_s(E) or P_b(E)) as an uncoded system, both using the same average energy per transmitted information symbol.

Definition: Coding Threshold. The value of E_s/N_0 (where E_s is the energy per transmitted information symbol) for which the coding gain becomes zero is called the coding threshold.

The graphs on the following slide show P_b(E) (computed using the union bound) versus E_b/N_0 for a number of different binary convolutional encoders.

Figure: Upper Bounds on P_b(E) for Convolutional Codes on AWGN Channel, Hard Decisions, plotted versus E_b/N_0 [dB] (E_b: info bit energy): uncoded; R = 1/2, K = 3, d_free = 5; R = 2/3, K = 3, d_free = 5; R = 3/4, K = 3, d_free = 5; R = 1/2, K = 5, d_free = 7; R = 1/2, K = 7, d_free = 10.

Soft Decisions and AWGN Channel. Assuming a memoryless channel model used without feedback, the ML decoding rule after the MF and the sampler is: output the code sequence estimate ĉ = c_i iff i maximizes

    f_b(β | a = c_i) = prod_{j=0}^{N−1} f_bj(β_j | a_j = c_ij),

over all code sequences c_i = (c_i0, c_i1, c_i2, ...), for i = 0, 1, 2, .... If the mapping 0 → −1 and 1 → +1 is used, so that c_ij ∈ {−1, +1}, then f_bj(β_j | a_j = c_ij) can be written as

    f_bj(β_j | a_j = c_ij) = (1/√(πN_0)) e^(−(β_j − c_ij √E_s)^2/N_0).

Taking (natural) logarithms and defining v_j = β_j/√E_s yields

    ln f_b(β | a = c_i) = sum_{j=0}^{N−1} ln f_bj(β_j | a_j = c_ij)
                        = − sum_{j=0}^{N−1} (β_j − c_ij √E_s)^2/N_0 − (N/2) ln(πN_0)
                        = −(E_s/N_0) sum_{j=0}^{N−1} (v_j^2 − 2 v_j c_ij + c_ij^2) − (N/2) ln(πN_0)
                        = (2E_s/N_0) sum_{j=0}^{N−1} v_j c_ij − ( ||β||^2/N_0 + N E_s/N_0 + (N/2) ln(πN_0) )
                        = K_1 sum_{j=0}^{N−1} v_j c_ij − K_2,

where K_1 = 2E_s/N_0 and K_2 are constants independent of the codeword c_i and thus irrelevant for ML decoding.
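The conclusion, that maximizing the correlation sum_j v_j c_ij is equivalent to ML decoding, can be verified by brute force over a tiny set of ±1 sequences (the names and the sample values below are illustrative):

```python
import itertools, math

Es, N0 = 1.0, 0.5
v = [0.8, -1.3, 0.2, -0.1]                       # normalized samples v_j = beta_j/sqrt(Es)
candidates = list(itertools.product((-1, 1), repeat=len(v)))

def log_likelihood(c):
    """ln f_b(beta | a = c) for the AWGN model, with beta_j = v_j*sqrt(Es)."""
    return sum(-(vj * math.sqrt(Es) - cj * math.sqrt(Es))**2 / N0
               - 0.5 * math.log(math.pi * N0) for vj, cj in zip(v, c))

def correlation(c):
    return sum(vj * cj for vj, cj in zip(v, c))

best_ml = max(candidates, key=log_likelihood)
best_corr = max(candidates, key=correlation)
print(best_ml == best_corr)                      # True: the two metrics agree
```

Since K_1 > 0, both criteria rank every candidate sequence identically, so the agreement holds for any received vector v.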

Example: Suppose the convolutional encoder with G(D) = [1   1 + D] is used and the received (normalized) data is

    v = −0.4, −1.7, 0.1, 0.3, −1.1, 1.2, 1.2, 0.0, 0.3, 0.2, −0.2, 0.7, ...
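This example can be carried through with a small soft-decision Viterbi search that maximizes the correlation metric derived above, assuming a zero initial state and the mapping 0 → −1, 1 → +1 (the implementation details are a sketch, not from the notes):

```python
import itertools, math

v = [-0.4, -1.7, 0.1, 0.3, -1.1, 1.2, 1.2, 0.0, 0.3, 0.2, -0.2, 0.7]
pairs = [(v[2*t], v[2*t + 1]) for t in range(len(v) // 2)]

def encode(bits):
    """G(D) = [1, 1+D], zero initial state; code bits mapped to +-1."""
    s, out = 0, []
    for u in bits:
        out += [u, u ^ s]
        s = u
    return [2*c - 1 for c in out]

def viterbi(pairs):
    """Two-state soft-decision Viterbi maximizing sum_j v_j c_ij."""
    metric = {0: 0.0, 1: -math.inf}              # start in state 0 (state = previous bit)
    path = {0: [], 1: []}
    for v1, v2 in pairs:
        new_metric, new_path = {}, {}
        for u in (0, 1):                         # input bit u; next state is u
            best, frm = -math.inf, 0
            for s in (0, 1):
                x1, x2 = 2*u - 1, 2*(u ^ s) - 1  # branch outputs for G(D) = [1, 1+D]
                m = metric[s] + v1*x1 + v2*x2
                if m > best:
                    best, frm = m, s
            new_metric[u], new_path[u] = best, path[frm] + [u]
        metric, path = new_metric, new_path
    end = max(metric, key=metric.get)
    return path[end], metric[end]

bits, m = viterbi(pairs)
# sanity check against an exhaustive search over all 2^6 data sequences
best = max(sum(a * b for a, b in zip(encode(c), v))
           for c in itertools.product((0, 1), repeat=6))
print(bits, abs(m - best) < 1e-9)
```

The add-compare-select recursion visits only 2 states per step, yet its surviving metric matches the exhaustive maximum, which is the point of the Viterbi algorithm.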

Soft Decisions versus Hard Decisions. To compare the performance of coded binary systems on an AWGN channel when the decoder performs either hard or soft decisions, the energy E_c per coded bit is fixed and P_b(E) is plotted versus the ε of the hard-decision BSC model, where ε = (1/2) erfc(√(E_c/N_0)) as before. For soft decisions the expression

    P_w(E) = (1/2) erfc(√(w E_c/N_0))

is used for the probability that the ML decoder makes a detour with weight w from the correct path. Thus, for soft decisions with fixed SNR per code symbol,

    P_b(E) ≤ (1/(2k)) sum_{w=d_free}^∞ D_w erfc(√(w E_c/N_0)),

where D_w = sum_{i=1}^∞ i A(w, i). Examples are shown on the next slide.

Figure: Upper Bounds on P_b(E) for Convolutional Codes with Soft Decisions (Dashed: Hard Decisions), plotted versus log10(ε) of the equivalent BSC, for R = 1/2, K = 3, d_free = 5; R = 2/3, K = 3, d_free = 5; R = 3/4, K = 3, d_free = 5; R = 1/2, K = 5, d_free = 7; R = 1/2, K = 7, d_free = 10.

Coding Gain for Soft Decisions. To compare the performance of uncoded and coded binary systems with soft decisions on an AWGN channel, the energy E_b per information bit is fixed and P_b(E) is plotted versus the signal-to-noise ratio (SNR) E_b/N_0. For an uncoded system,

    P_b(E) = (1/2) erfc(√(E_b/N_0))   (uncoded).

For a coded system with soft-decision ML decoding on an AWGN channel,

    P_b(E) ≤ (1/(2k)) sum_{w=d_free}^∞ D_w erfc(√(w R E_b/N_0)),

where R = k/n is the rate of the code. Examples are shown in the graph on the next slide.
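The soft-decision coding gain at a given SNR can be checked with the two expressions above. The sketch assumes the R = 1/2, K = 3, d_free = 5 code with D_w = (w − 4) 2^(w−5) (its information-weight enumerator, as used earlier for this encoder) and k = 1; the 6 dB point and the truncation are arbitrary:

```python
import math

def pb_uncoded(eb_over_n0):
    return 0.5 * math.erfc(math.sqrt(eb_over_n0))

def pb_coded_soft(eb_over_n0, R=0.5, k=1, d_free=5, w_max=60):
    """Union bound with D_w = (w-4)*2^(w-5), valid for the R=1/2, K=3 code."""
    return sum((w - 4) * 2**(w - d_free) / (2 * k)
               * math.erfc(math.sqrt(w * R * eb_over_n0))
               for w in range(d_free, w_max + 1))

snr = 10 ** (6 / 10)                  # E_b/N_0 = 6 dB
print(pb_uncoded(snr), pb_coded_soft(snr))
```

At 6 dB the coded bound lies orders of magnitude below the uncoded curve, i.e., the code operates well above its coding threshold there.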

Figure: Upper Bounds on P_b(E) for Convolutional Codes on AWGN Channel, Soft Decisions, plotted versus E_b/N_0 [dB] (E_b: info bit energy): uncoded; R = 1/2, K = 3, d_free = 5; R = 2/3, K = 3, d_free = 5; R = 3/4, K = 3, d_free = 5; R = 1/2, K = 5, d_free = 7; R = 1/2, K = 7, d_free = 10.