1 Coding and decoding with convolutional codes. The Viterbi Algorithm.

2 Block codes: main ideas
Outline: Principles. 1st point of view: infinite-length block code. 2nd point of view: convolutions. Some examples.
Repetition code
TX: CODING THEORY
RX: CPDING TOEORY
There is no way to recover from transmission errors, so we need to add some redundancy at the transmitter side. Repetition of the transmitted symbols makes detection and correction possible:
TX: CCC OOO DDD III NNN GGG TTT HHH EEE OOO RRR YYY
RX: CCC OPO DDD III NNN GGD TTT OHO EEE OOO RRR YYY
Decoding by majority vote gives C O D I N G T O E O R Y: the single errors (OPO, GGD) are corrected, but the double error (HHH received as OHO) is miscorrected. Beyond repetition... better codes exist.
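The repetition scheme above can be sketched as a majority-vote decoder (a minimal illustration; the function names are ours):

```python
from collections import Counter

def repetition_encode(text, r=3):
    """Repeat every symbol r times."""
    return "".join(ch * r for ch in text)

def repetition_decode(rx, r=3):
    """Majority vote over each group of r received symbols."""
    out = []
    for i in range(0, len(rx), r):
        block = rx[i:i + r]
        out.append(Counter(block).most_common(1)[0][0])
    return "".join(out)

# Single errors inside a group are corrected:
print(repetition_decode("CCCOPODDDIIINNNGGD"))  # CODING
# A double error in one group (HHH received as OHO) is miscorrected:
print(repetition_decode("OHO"))                 # O
```

The second call shows exactly the failure mode of the slide: a majority of wrong symbols votes the wrong letter in.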

5 Block codes: main ideas
Geometric view: k = 1 bit of information is mapped to a codeword of length n. RX: how can the receiver decide which word was transmitted? A received word is decoded to the nearest codeword, which makes detection and correction possible as long as the codewords are far enough apart; the nearest codeword is (probably) the right one.

6 Block codes: main ideas
Linear block codes, e.g. Hamming codes. A binary linear block code takes k information bits at its input and computes n output bits. If the 2^k codewords are well separated in the n-dimensional space, it is possible to detect or even correct errors. In 1950, Hamming introduced the (7,4) Hamming code. It encodes 4 data bits into 7 bits by adding three parity bits. It can detect and correct single-bit errors but can only detect double-bit errors. One standard form of the code parity-check matrix, in which column i is the binary representation of i, is:
    H = [ 0 0 0 1 1 1 1 ]
        [ 0 1 1 0 0 1 1 ]
        [ 1 0 1 0 1 0 1 ]
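Syndrome decoding with this (7,4) code can be sketched as follows, assuming the standard form of H in which column i is the binary representation of i, so the syndrome of a single error directly names the flipped position:

```python
# Parity-check matrix: 3 rows x 7 columns, column i = binary of i.
H = [[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)]

def syndrome(word):
    """3-bit syndrome of a 7-bit word (list of 0/1); all zeros if valid."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Correct a single flipped bit, if any, and return the fixed word."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]     # syndrome as an integer, 0 = no error
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1              # the syndrome names the flipped position
    return fixed

codeword = [0, 0, 1, 0, 1, 1, 0]         # satisfies H c = 0 (mod 2)
received = [0, 1, 1, 0, 1, 1, 0]         # bit 2 flipped by the channel
assert correct(received) == codeword
```

A double-bit error produces a nonzero syndrome too, but it points at the wrong position, which is why the code only detects (not corrects) double errors.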

7 Convolutional encoding: main ideas
In convolutional codes, each block of k bits is mapped to a block of n bits, BUT these n bits are not determined by the present k information bits alone: they also depend on the previous information bits. This dependence can be captured by a finite state machine. Encoding is achieved using several linear filtering operations: each convolution imposes a constraint between bits, and several convolutions together introduce the redundancy.

8 Infinite generator matrix
A convolutional code can be described by an infinite matrix:
    G = [ G_0 G_1 ... G_M                 ]
        [     G_0 G_1 ... G_M             ]
        [         G_0 G_1 ... G_M         ]
        [                 ...             ]
where each G_i is a k x n binary sub-matrix. The matrix depends on the M + 1 sub-matrices {G_i}, i = 0..M. K = M + 1 is known as the constraint length of the code.

9 Infinite generator matrix
With G the infinite band matrix above, encoding reads:
    (C_1, C_2, ...) = (I_1, I_2, ...) G
It looks like block coding: C = I G.

10 Infinite generator matrix
Denoting by I_j = (I_j1 ... I_jk) the j-th block of k information bits and by C_j = (C_j1 ... C_jn) a block of n coded bits at the output, coding an infinite sequence of blocks (length k) I = (I_1 I_2 ...) produces an infinite sequence C = (C_1 C_2 ...) of coded blocks (length n). Block form of the coding scheme (it looks like block coding, C = I G):
    C_1 = I_1 G_0
    C_2 = I_1 G_1 + I_2 G_0
    ...
    C_M = I_1 G_{M-1} + ... + I_M G_0
    C_j = I_{j-M} G_M + ... + I_j G_0 for j > M.

11 Infinite generator matrix performs a convolution
Using the convention I_i = 0 for i < 1, the encoding structure C = I G is clearly a convolution:
    C_j = sum_{l=0}^{M} I_{j-l} G_l
For an information sequence I of finite length, only L < +inf blocks of k bits are different from zero at the input of the coder: I = (I_1 ... I_L). The sequence C = (C_1 ... C_{L+M}) at the coder output is finite too. This truncated coded sequence is generated by a linear block code whose generator matrix is a size kL x n(L+M) sub-matrix of G.
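The block convolution C_j = sum_{l=0..M} I_{j-l} G_l can be sketched directly over GF(2). The sub-matrices used in the example call are an assumed rate-1/2 illustration (generator polynomials 1 + D + D^2 and 1 + D^2), not necessarily those of the figures:

```python
def conv_encode(info, G):
    """C_j = sum_{l=0}^{M} I_{j-l} G_l over GF(2), with I_i = 0 for i < 1.
    info: list of k-bit blocks; G: list of M+1 binary k x n sub-matrices.
    M zero blocks are appended to close the lattice."""
    M, k, n = len(G) - 1, len(G[0]), len(G[0][0])
    seq = info + [[0] * k] * M
    coded = []
    for j in range(len(seq)):
        c = [0] * n
        for l in range(M + 1):
            if j - l < 0:
                continue                 # convention: zero blocks before the start
            block = seq[j - l]
            for b in range(n):
                c[b] ^= sum(block[a] & G[l][a][b] for a in range(k)) & 1
        coded.append(c)
    return coded

# Assumed example: k = 1, n = 2, generators 1 + D + D^2 and 1 + D^2.
G75 = [[[1, 1]], [[1, 0]], [[1, 1]]]
print(conv_encode([[1], [0], [1]], G75))  # [[1, 1], [1, 0], [0, 0], [1, 0], [1, 1]]
```

Note that 3 information blocks produce 3 + M = 5 coded blocks, which is exactly the finite-length rate reduction discussed below.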

12 Shift-register based realization
Let us write g(l)_ab for the elements of matrix G_l. We now expand the convolution C_j = sum_{l=0}^{M} I_{j-l} G_l to make explicit the n components C_j1, ..., C_jn of each output block C_j:
    C_j = [C_j1, ..., C_jn] = [ sum_{l=0}^{M} sum_{a=1}^{k} I_{j-l,a} g(l)_a1, ..., sum_{l=0}^{M} sum_{a=1}^{k} I_{j-l,a} g(l)_an ]
The encoder memory holds the kM previous input bits, so there are 2^{kM} different internal configurations: the behavior of the convolutional coder can be captured by a state machine with 2^{kM} states.

13 Shift-register based realization
    C_jb = sum_{a=1}^{k} sum_{l=0}^{M} I_{j-l,a} g(l)_ab
depends on: the present input I_j and the M previous input blocks I_{j-1}, ..., I_{j-M}. C_jb can be calculated by memorizing M input values in shift registers: one shift register per input bit a = 1..k. For register a, only the memory cells for which g(l)_ab = 1 are connected to adder b (b = 1..n).

14 Rate of a convolutional code
Asymptotic rate
For each k-bit block at the input, an n-bit block is generated at the output. At the coder output, the ratio [number of information bits] over [total number of bits] is given by:
    R = k / n
This quantity is called the rate of the code.

15 Rate of a convolutional code
Finite-length rate
For a finite-length input sequence, the truncation reduces the rate: kL information bits produce n(L + M) coded bits. The exact finite-length rate is:
    r_L = R L / (L + M)
For L >> M, this rate is almost equal to the asymptotic rate R.

16 Shift-register based realization
Rate-1/2 encoder: 1 input (k = 1), 2 outputs (n = 2). The input feeds a 3-cell shift register; modulo-2 adders form the two outputs C_j1 and C_j2. Impulse responses are P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D^2 + D^3. The 2 convolutions are evaluated in parallel. The output of each convolution depends on one input and on the 3 values memorized in the shift register. At each step, the values at the output depend on the input and the internal state.

17 Shift-register based realization
Rate-1/2 encoder: 1 input (k = 1), 2 outputs (n = 2). Impulse responses are P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D^2 + D^3.
Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 4. Sub-matrices: G_0 = [1 1], G_1 = [1 0], G_2 = [1 1], G_3 = [1 1].
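A shift-register realization of such a rate-1/2 encoder can be sketched as below; the default tap vectors assume P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D^2 + D^3, which is only one plausible reading of the figure:

```python
def sr_encode(bits, taps_p=(1, 1, 1, 1), taps_q=(1, 0, 1, 1)):
    """Rate-1/2 shift-register encoder. The tap vectors are the
    coefficients of P(D) and Q(D) from lowest to highest power
    (here assumed P = 1 + D + D^2 + D^3, Q = 1 + D^2 + D^3, K = 4)."""
    M = len(taps_p) - 1
    reg = [0] * M                        # shift register, reg[0] = most recent
    out = []
    for b in list(bits) + [0] * M:       # M tail zeros close the lattice
        window = [b] + reg               # current input followed by the memory
        out.append((sum(t & x for t, x in zip(taps_p, window)) % 2,
                    sum(t & x for t, x in zip(taps_q, window)) % 2))
        reg = [b] + reg[:-1]             # shift the register
    return out

# The impulse response reproduces the sub-matrices G_l = [p_l q_l]:
print(sr_encode([1]))  # [(1, 1), (1, 0), (1, 1), (1, 1)]
```

Feeding a single 1 recovers exactly G_0 = [1 1], G_1 = [1 0], G_2 = [1 1], G_3 = [1 1]: the sub-matrices are the columns of the impulse response.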

18 Shift-register based realization
Rate-2/3 encoder: the input block I_j = (I_j1, I_j2) feeds two shift registers; modulo-2 adders form the three outputs. The 3 convolutions are evaluated in parallel. The output of each convolution depends on the two inputs and on the 4 values memorized in the shift registers. At each step, the values at the output depend on the inputs and the internal state.

19 Shift-register based realization
Rate-2/3 encoder. Rate 2/3 (k = 2, n = 3). Constraint length K = M + 1. Sub-matrices: G_0 and G_1, each a 2 x 3 binary matrix.

20 Shift-register based realization
Rate-1/2 encoder: the input I_j feeds a 2-cell shift register holding I_{j-1} and I_{j-2}; modulo-2 adders form the two outputs. The 2 convolutions are evaluated in parallel. The output of each convolution depends on one input, I_j, and on the 2 values memorized in the shift register, I_{j-1} and I_{j-2}. At each step, the values at the output depend on the input and the internal state.

21 Shift-register based realization
Rate-1/2 encoder. Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 3. Sub-matrices: G_0 = (1 1), G_1 = (1 0) and G_2 = (1 1), i.e. generator polynomials 1 + D + D^2 and 1 + D^2.

22 Shift-register based realization
This rate-1/2 (k = 1, n = 2) code is used in the sequel to explain the Viterbi algorithm.

23 Convolutional encoding
Outline: Transition diagram. Lattice diagram. A convolutional encoder is a FSM. Coding a sequence using the coder FSM.
A convolutional encoder is a finite state machine (FSM): a state space, an input that activates the transition from one state to another, and an output that is generated during the transition. Usual representations: transition diagram, lattice diagram.

24 Transition diagram: a simple FSM
The state space is composed of 4 elements: 00, 01, 10, 11. Each state is represented by a node. The input is binary valued: 2 arrows start at each node. Arrows are indexed by a couple of values (input, output): the input that activates the transition and the output that is generated by this transition.

25 Lattice diagram
One slice of a lattice diagram, from time j to time j+1. A new input triggers a transition from the present state to a one-step-future state. A lattice diagram unwraps the behavior of the FSM as a function of time.

26 FSM of a convolutional encoder
A simple encoder (the rate-1/2 encoder above, whose shift register holds I_{j-1} and I_{j-2})... and the FSM of this encoder: 4 states, each with 2 outgoing transitions labeled (input, output).
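The FSM of this encoder can be tabulated explicitly. The sketch assumes the classic generators 1 + D + D^2 and 1 + D^2, with state (I_{j-1}, I_{j-2}):

```python
def build_fsm():
    """Transition table of the 4-state encoder, assuming the classic
    rate-1/2, K = 3 generators 1 + D + D^2 and 1 + D^2.
    Maps (state, input) -> (next_state, output)."""
    fsm = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for i in (0, 1):
                output = (i ^ s1 ^ s2, i ^ s2)   # the two coded bits
                fsm[((s1, s2), i)] = ((i, s1), output)
    return fsm

fsm = build_fsm()
# Two edges leave every state, as in the transition diagram:
assert fsm[((0, 0), 1)] == ((1, 0), (1, 1))
assert fsm[((1, 0), 0)] == ((0, 1), (1, 0))
```

The table has 4 states x 2 inputs = 8 entries, one per arrow of the transition diagram.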

27 Coding a sequence using the coder FSM
Coding bits with a known automaton. Initial coder state: 00.
Information-bearing bits enter the coder
The first value at the coder input is I_0. According to the transition diagram, this input activates the transition indexed by (I_0, C_0): the input generates the output block C_0, and the transition moves the coder from state 00 to the next state.

28 Coding a sequence using the coder FSM
Coding bits with a known automaton. Initial coder state: 00.
Information-bearing bits enter the coder
The second value at the coder input is I_1. According to the transition diagram, this input activates the transition indexed by (I_1, C_1): it generates the output block C_1 and moves the coder to the next state.

29 Coding a sequence using the coder FSM
Coding bits with a known automaton. Initial coder state: 00.
Information-bearing bits enter the coder
The last information bit, I_2, enters the coder. According to the transition diagram, it activates the transition indexed by (I_2, C_2): it generates the output block C_2 and moves the coder to the next state.

30 Coding a sequence using the coder FSM
Coding bits with a known automaton.
Lattice closure: 2 zeros at the coder input reset its state
The first 0 activates a transition, generates output C_3 and updates the state. The second 0 activates a transition, generates output C_4 and resets the state to 00.

31 Coding a sequence using the coder FSM
Transmitted sequence
In fine, the information sequence is encoded into the five coded blocks [C_0, C_1, C_2, C_3, C_4].
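The whole encoding walk, including the two tail zeros that close the lattice, can be sketched as follows (same assumed generators 1 + D + D^2 and 1 + D^2):

```python
def fsm_encode(bits):
    """Drive the encoder FSM from the all-zero state and append two
    zeros to close the lattice back to state (0, 0). Assumes the
    classic generators 1 + D + D^2 and 1 + D^2."""
    state, coded = (0, 0), []
    for i in list(bits) + [0, 0]:        # two tail zeros (M = 2)
        s1, s2 = state
        coded.append((i ^ s1 ^ s2, i ^ s2))
        state = (i, s1)                  # shift the register
    assert state == (0, 0)               # lattice closed
    return coded

print(fsm_encode([1, 0, 1]))  # [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]
```

Three information bits yield five coded blocks [C_0, ..., C_4], as on the slide; the final assertion checks that the tail really returns the coder to its initial state.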

32 Coding a sequence using the coder FSM
Noisy received coded sequence
The coded sequence [C_0, C_1, C_2, C_3, C_4] is transmitted over a Binary Symmetric Channel. Let us assume that two errors occur, so the received sequence [y_0, y_1, y_2, y_3, y_4] differs from the transmitted one in two bits.

33 Binary Symmetric Channel
Outline: Binary Symmetric Channel. Additive White Gaussian Noise Channel.
Diagram of the BSC: each input bit is received correctly with probability 1 - p and flipped with probability p.
Characteristics of a Binary Symmetric Channel
Memoryless channel: the output only depends on the present input (no internal state). Two possible inputs (0, 1), two possible outputs (0, 1). 0 and 1 are equally affected by errors (error probability p).

34 Branch metric of a Binary Symmetric Channel
Calculation of the branch metric
The transition probabilities are:
    p(0|0) = p(1|1) = 1 - p
    p(1|0) = p(0|1) = p
The Hamming distance between the received value y and the coder output C_rs (generated by the transition from state r to state s) is the number of bits that differ between the two vectors.

35 Branch metric of a Binary Symmetric Channel
Calculation of the branch metric
The likelihood is written:
    p(y | C_rs) = (1 - p)^n (p / (1 - p))^(d_H(y, C_rs))
    log p(y | C_rs) = d_H(y, C_rs) log(p / (1 - p)) + n log(1 - p)
n log(1 - p) is a constant and, for p < 1/2, log(p / (1 - p)) < 0: maximizing the likelihood is equivalent to minimizing d_H(y, C_rs). The Hamming branch metric d_H(y, C_rs) between the observation and the output of the FSM is adapted to the BS channel.
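A small numerical check of this equivalence (the helper names are ours):

```python
import math

def hamming(y, c):
    """Hamming distance between two equal-length bit vectors."""
    return sum(a != b for a, b in zip(y, c))

def log_likelihood(y, c, p):
    """log P(y | c) for a BSC with crossover probability p:
    d log p + (n - d) log(1 - p), with d the Hamming distance."""
    d = hamming(y, c)
    return d * math.log(p) + (len(y) - d) * math.log(1 - p)

# For p < 1/2, smaller Hamming distance means larger likelihood:
y = (1, 0, 1)
assert hamming(y, (1, 0, 0)) < hamming(y, (0, 1, 0))
assert log_likelihood(y, (1, 0, 0), 0.1) > log_likelihood(y, (0, 1, 0), 0.1)
```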

36 Additive White Gaussian Noise Channel
Diagram of the AWGN channel: the received sample is a_k + n_k, the transmitted symbol a_k plus a Gaussian noise sample n_k.
Characteristics of an AWGN channel
Memoryless channel: the output only depends on the present input (no internal state). Two possible inputs (-1, +1), real (or even complex) valued output. The output is a superposition of the input and a Gaussian noise. -1 and +1 are equally affected by errors.

37 Branch metric of an AWGN channel
Calculation of the branch metric
The probability density function of a Gaussian noise is given by:
    p(x) = 1 / (sigma sqrt(2 pi)) exp(-x^2 / (2 sigma^2))
The Euclidean distance between the analog received value y and the coder output C_rs (generated by the transition from state r to state s) is the sum of squared errors between the two vectors.

38 Branch metric of an AWGN channel
Calculation of the branch metric
The likelihood is written:
    p(y | C_rs) = (1 / (sigma sqrt(2 pi)))^n exp(-d_E^2(y, C_rs) / (2 sigma^2))
Since
    log p(y | C_rs) = constant - d_E^2(y, C_rs) / (2 sigma^2),
maximizing the likelihood is equivalent to minimizing the Euclidean distance.
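The Euclidean branch metric can be sketched the same way, with bits mapped to the +/-1 channel alphabet:

```python
def euclid2(y, c):
    """Squared Euclidean distance between received samples y and a
    codeword c of bits, mapped to the +/-1 channel alphabet."""
    return sum((yi - (2 * ci - 1)) ** 2 for yi, ci in zip(y, c))

# log p(y | c) = constant - d_E^2(y, c) / (2 sigma^2): the most likely
# codeword is the one at minimal squared Euclidean distance.
y = (0.9, -1.2, 0.3)
assert euclid2(y, (1, 0, 1)) < euclid2(y, (0, 1, 0))
```

Using the real-valued samples directly (soft decision) preserves reliability information that the hard-decision BSC approximation below throws away.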

39 Branch metric of an AWGN channel
Binary Symmetric Channel as an approximation of the Gaussian channel
Note that the BS channel is a coarse (hard-decision) approximation of the AWGN channel, with an error probability p given by:
    p = 1 / (sigma sqrt(2 pi)) Integral_{1}^{+inf} exp(-x^2 / (2 sigma^2)) dx

40 Maximum Likelihood Sequence Estimation (MLSE)
Outline: Problem statement. Main ideas of the Viterbi algorithm. Notations. Running a Viterbi algorithm.
Problem statement
A finite-length sequence (length L) drawn from a finite alphabet (here binary) enters a FSM. The FSM output goes through a channel that corrupts the signal with some kind of noise. The noisy channel output is observed.
ML estimation of the transmitted sequence
How can we find the input sequence that maximizes the likelihood of the observations? Test all 2^L input sequences! Or use the Viterbi algorithm to determine this sequence with a minimal complexity.

42 Maximum Likelihood Sequence Estimation (MLSE)
In search of the optimal complexity
Parameterize the optimization using the state of the FSM rather than the input sequence. Remove sub-optimal sequences on the fly. Minimizing a distance is simpler than maximizing a likelihood.

43 Maximum Likelihood Sequence Estimation (MLSE)
Likelihood
The information sequence is made of L blocks I_j of k bits each: I = [I_1, ..., I_L]. This sequence is completed by M blocks of k zeros to close the lattice. The coded sequence is:
    C = [C_1, ..., C_L, C_{L+1}, ..., C_{L+M}]
For a memoryless channel, the received sequence is:
    y = [y_1, ..., y_L, y_{L+1}, ..., y_{L+M}]

44 Maximum Likelihood Sequence Estimation (MLSE)
Likelihood
Notations: s_j denotes the state of the machine at time j; the initial state is zero. s_j -> s_{j+1} denotes the edge between states s_j and s_{j+1}. There exists a one-to-one relationship between a path s_1, ..., s_{L+M+1} in the lattice and a coded sequence C_1, ..., C_{L+M}.

45 Maximum Likelihood Sequence Estimation (MLSE)
Likelihood
Since observation y_j depends on s_j and C_j, or equivalently on the edge s_j -> s_{j+1}, the likelihood function is given by:
    log P(y | C) = sum_{j=1}^{L+M} log P(y_j | s_j, C_j) = sum_{j=1}^{L+M} log P(y_j | s_j -> s_{j+1})
l_j(s_j, s_{j+1}) = -log P(y_j | s_j -> s_{j+1}) is called a branch metric. Sums of branch metrics, sum_{j=1}^{k} l_j(s_j, s_{j+1}), are called path (or cumulative) metrics (from the initial node to s_{k+1}). Searching for the optimal input sequence is equivalent to finding the path in the lattice whose final cumulative metric is minimal.
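Before introducing the Viterbi algorithm, the brute-force MLSE baseline (test all 2^L sequences) can be sketched for the small assumed rate-1/2 example; the path metric of each candidate is the sum of its branch metrics:

```python
from itertools import product

def encode(bits):
    """Reference encoder (assumed generators 1 + D + D^2 and 1 + D^2),
    zero-terminated with two tail bits."""
    state, coded = (0, 0), []
    for i in list(bits) + [0, 0]:
        s1, s2 = state
        coded.append((i ^ s1 ^ s2, i ^ s2))
        state = (i, s1)
    return coded

def brute_force_mlse(received, L):
    """Try all 2^L input sequences; the path metric of a candidate is
    the sum of per-branch Hamming distances to the received blocks."""
    def metric(cand):
        return sum(sum(a != b for a, b in zip(cb, yb))
                   for cb, yb in zip(encode(cand), received))
    return min(product((0, 1), repeat=L), key=metric)

tx = encode([1, 0, 1])                           # transmitted blocks
rx = [(0, 1), (1, 0), (0, 0), (1, 1), (1, 1)]   # tx with two bit errors
assert brute_force_mlse(rx, 3) == (1, 0, 1)
```

This exhaustive search costs O(2^L) encodings; the Viterbi algorithm finds the same minimal-metric path with a cost linear in L.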

46 Notations for transitions on the lattice diagram
Each edge is indexed by the couple (input, output). If q is the weight of the optimal path that reaches the previous state and b is the branch metric of the edge (e, s), the cumulative (path) metric of this extension is q + b. At the arrival node, two extensions compete; the surviving weight is p = min(q + b, q' + b'), and the competing branch is removed if q' + b' > q + b.

47 Viterbi for decoding the output of a BSC
Viterbi in progress... open the lattice
The FSM memorizes two past values: it takes two steps to reach the steady-state behavior. During the first two steps, only a fraction of the impulse response is used; the lattice is being opened during this transient period. The initial state is 00; since the input is binary valued, two edges start from state 00 and only 2 states are reachable: 00 and 10.

48 Viterbi for decoding the output of a BSC
Viterbi in progress... open the lattice
Computation of branch metrics: for each branch, the FSM generates an output, and the branch metric is the Hamming distance between this FSM output and the received channel output. Each branch metric is surrounded by a circle on the figure.

49 Viterbi for decoding the output of a BSC
Viterbi in progress... open the lattice
Computation of cumulative (or path) metrics: a cumulative metric is the sum of the branch metrics along the path that reaches a given node. Its value is printed on the edge just before the node. The path metric of the best path that reaches a node is printed in the black square representing this node. The initial path metric is set to 0.

50 Viterbi for decoding the output of a BSC
Viterbi in progress... open the lattice
Computation of cumulative (or path) metrics: for each of the two opening branches, the path metric of the reached state is equal to the previous path metric (0 at the initial node) plus the branch metric of the branch.

51 Viterbi for decoding the output of a BSC
Viterbi in progress... open the lattice (figure: branch and path metrics of the second opening step).

52 Viterbi for decoding the output of a BSC
Viterbi in progress... steady-state behavior
The transient is finished: two edges now start from each node and two edges reach each node.

53 Viterbi for decoding the output of a BSC
Viterbi in progress... steady-state behavior
Computation of branch metrics as usual.

54 Viterbi for decoding the output of a BSC
Viterbi in progress... steady-state behavior
For each arrival node, the algorithm must select the path whose metric is minimal.

55 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice
The information sequence is finished: two zero inputs reset the state to its initial value.

56 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice
Computation of branch metrics as usual.

57 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice (figure: cumulative metrics after the first closing step).

58 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice (figure: path metric update during the closing steps).

59 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice (figure: surviving paths during the closing steps).

60 Viterbi for decoding the output of a BSC
Viterbi in progress... close the lattice (figure: final cumulative metrics at the terminal node).

61 Viterbi for decoding the output of a BSC
Viterbi done. Backtracking: starting from the final all-zero state, follow the surviving branches backwards to the initial node; the inputs that label the branches of this optimal path form the decoded information sequence.
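The complete procedure (open the lattice, keep survivors, close the lattice, backtrack) can be sketched as a hard-decision Viterbi decoder for the assumed 4-state example code:

```python
def viterbi_decode(received, L):
    """Hard-decision Viterbi decoding over the 4-state lattice of the
    assumed rate-1/2, K = 3 encoder (generators 1 + D + D^2, 1 + D^2).
    Branch metric = Hamming distance; each node keeps the surviving
    path of minimal cumulative metric; backtracking from the all-zero
    final state recovers the minimal-metric (ML) input sequence."""
    def step(state, i):
        s1, s2 = state
        return (i, s1), (i ^ s1 ^ s2, i ^ s2)   # next state, output

    INF = float("inf")
    metric = {(0, 0): 0}                 # lattice opens at state (0, 0)
    history = []                         # per-step back-pointers
    for t, y in enumerate(received):
        inputs = (0, 1) if t < L else (0,)   # tail: only zeros close the lattice
        new, back = {}, {}
        for s, m in metric.items():
            for i in inputs:
                ns, out = step(s, i)
                bm = sum(a != b for a, b in zip(out, y))   # branch metric
                if m + bm < new.get(ns, INF):              # keep the survivor
                    new[ns] = m + bm
                    back[ns] = (s, i)
        metric = new
        history.append(back)
    state, bits = (0, 0), []             # backtrack from the final state
    for back in reversed(history):
        state, i = back[state]
        bits.append(i)
    return tuple(reversed(bits))[:L]     # drop the tail zeros

rx = [(0, 1), (1, 0), (0, 0), (1, 1), (1, 1)]   # coded [1, 0, 1] with 2 errors
assert viterbi_decode(rx, 3) == (1, 0, 1)
```

On this example the two channel errors are corrected: the survivor reaching the terminal node has cumulative metric 2, and backtracking returns the transmitted information bits, matching the brute-force MLSE result at a fraction of the cost.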


More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning LU 2 - Markov Decision Problems and Dynamic Programming Dr. Martin Lauer AG Maschinelles Lernen und Natürlichsprachliche Systeme Albert-Ludwigs-Universität Freiburg martin.lauer@kit.edu

More information

Coded Bidirectional Relaying in Wireless Networks

Coded Bidirectional Relaying in Wireless Networks Coded Bidirectional Relaying in Wireless Networks Petar Popovski and Toshiaki Koike - Akino Abstract The communication strategies for coded bidirectional (two way) relaying emerge as a result of successful

More information

CODING THEORY a first course. Henk C.A. van Tilborg

CODING THEORY a first course. Henk C.A. van Tilborg CODING THEORY a first course Henk C.A. van Tilborg Contents Contents Preface i iv 1 A communication system 1 1.1 Introduction 1 1.2 The channel 1 1.3 Shannon theory and codes 3 1.4 Problems 7 2 Linear

More information

8 MIMO II: capacity and multiplexing

8 MIMO II: capacity and multiplexing CHAPTER 8 MIMO II: capacity and multiplexing architectures In this chapter, we will look at the capacity of MIMO fading channels and discuss transceiver architectures that extract the promised multiplexing

More information

TTT4110 Information and Signal Theory Solution to exam

TTT4110 Information and Signal Theory Solution to exam Norwegian University of Science and Technology Department of Electronics and Telecommunications TTT4 Information and Signal Theory Solution to exam Problem I (a The frequency response is found by taking

More information

Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska

Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska PROBLEM STATEMENT A ROBUST COMPRESSION SYSTEM FOR LOW BIT RATE TELEMETRY - TEST RESULTS WITH LUNAR DATA Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska The

More information

MOST error-correcting codes are designed for the equal

MOST error-correcting codes are designed for the equal IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 55, NO. 3, MARCH 2007 387 Transactions Letters Unequal Error Protection Using Partially Regular LDPC Codes Nazanin Rahnavard, Member, IEEE, Hossein Pishro-Nik,

More information

ELEC3028 Digital Transmission Overview & Information Theory. Example 1

ELEC3028 Digital Transmission Overview & Information Theory. Example 1 Example. A source emits symbols i, i 6, in the BCD format with probabilities P( i ) as given in Table, at a rate R s = 9.6 kbaud (baud=symbol/second). State (i) the information rate and (ii) the data rate

More information

Polarization codes and the rate of polarization

Polarization codes and the rate of polarization Polarization codes and the rate of polarization Erdal Arıkan, Emre Telatar Bilkent U., EPFL Sept 10, 2008 Channel Polarization Given a binary input DMC W, i.i.d. uniformly distributed inputs (X 1,...,

More information

Exact Maximum-Likelihood Decoding of Large linear Codes

Exact Maximum-Likelihood Decoding of Large linear Codes On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes arxiv:cs/0702147v1 [cs.it] 25 Feb 2007 Abstract Since the classical work of Berlekamp, McEliece

More information

Mathematical Modelling of Computer Networks: Part II. Module 1: Network Coding

Mathematical Modelling of Computer Networks: Part II. Module 1: Network Coding Mathematical Modelling of Computer Networks: Part II Module 1: Network Coding Lecture 3: Network coding and TCP 12th November 2013 Laila Daniel and Krishnan Narayanan Dept. of Computer Science, University

More information

A New Interpretation of Information Rate

A New Interpretation of Information Rate A New Interpretation of Information Rate reproduced with permission of AT&T By J. L. Kelly, jr. (Manuscript received March 2, 956) If the input symbols to a communication channel represent the outcomes

More information

Signal Detection C H A P T E R 14 14.1 SIGNAL DETECTION AS HYPOTHESIS TESTING

Signal Detection C H A P T E R 14 14.1 SIGNAL DETECTION AS HYPOTHESIS TESTING C H A P T E R 4 Signal Detection 4. SIGNAL DETECTION AS HYPOTHESIS TESTING In Chapter 3 we considered hypothesis testing in the context of random variables. The detector resulting in the minimum probability

More information

A Practical Scheme for Wireless Network Operation

A Practical Scheme for Wireless Network Operation A Practical Scheme for Wireless Network Operation Radhika Gowaikar, Amir F. Dana, Babak Hassibi, Michelle Effros June 21, 2004 Abstract In many problems in wireline networks, it is known that achieving

More information

Revision of Lecture Eighteen

Revision of Lecture Eighteen Revision of Lecture Eighteen Previous lecture has discussed equalisation using Viterbi algorithm: Note similarity with channel decoding using maximum likelihood sequence estimation principle It also discusses

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

CCNY. BME I5100: Biomedical Signal Processing. Linear Discrimination. Lucas C. Parra Biomedical Engineering Department City College of New York

CCNY. BME I5100: Biomedical Signal Processing. Linear Discrimination. Lucas C. Parra Biomedical Engineering Department City College of New York BME I5100: Biomedical Signal Processing Linear Discrimination Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not

More information

Towards running complex models on big data

Towards running complex models on big data Towards running complex models on big data Working with all the genomes in the world without changing the model (too much) Daniel Lawson Heilbronn Institute, University of Bristol 2013 1 / 17 Motivation

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical

More information

Log-Likelihood Ratio-based Relay Selection Algorithm in Wireless Network

Log-Likelihood Ratio-based Relay Selection Algorithm in Wireless Network Recent Advances in Electrical Engineering and Electronic Devices Log-Likelihood Ratio-based Relay Selection Algorithm in Wireless Network Ahmed El-Mahdy and Ahmed Walid Faculty of Information Engineering

More information

TTT4120 Digital Signal Processing Suggested Solution to Exam Fall 2008

TTT4120 Digital Signal Processing Suggested Solution to Exam Fall 2008 Norwegian University of Science and Technology Department of Electronics and Telecommunications TTT40 Digital Signal Processing Suggested Solution to Exam Fall 008 Problem (a) The input and the input-output

More information

The Degrees of Freedom of Compute-and-Forward

The Degrees of Freedom of Compute-and-Forward The Degrees of Freedom of Compute-and-Forward Urs Niesen Jointly with Phil Whiting Bell Labs, Alcatel-Lucent Problem Setting m 1 Encoder m 2 Encoder K transmitters, messages m 1,...,m K, power constraint

More information

954 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 3, MARCH 2005

954 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 3, MARCH 2005 954 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 3, MARCH 2005 Using Linear Programming to Decode Binary Linear Codes Jon Feldman, Martin J. Wainwright, Member, IEEE, and David R. Karger, Associate

More information

THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok

THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE Alexer Barvinok Papers are available at http://www.math.lsa.umich.edu/ barvinok/papers.html This is a joint work with J.A. Hartigan

More information

Inference of Probability Distributions for Trust and Security applications

Inference of Probability Distributions for Trust and Security applications Inference of Probability Distributions for Trust and Security applications Vladimiro Sassone Based on joint work with Mogens Nielsen & Catuscia Palamidessi Outline 2 Outline Motivations 2 Outline Motivations

More information

Linear Codes. Chapter 3. 3.1 Basics

Linear Codes. Chapter 3. 3.1 Basics Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

More information

Data Link Layer Overview

Data Link Layer Overview Data Link Layer Overview Date link layer deals with two basic issues: Part I How data frames can be reliably transmitted, and Part II How a shared communication medium can be accessed In many networks,

More information

Design of LDPC codes

Design of LDPC codes Design of LDPC codes Codes from finite geometries Random codes: Determine the connections of the bipartite Tanner graph by using a (pseudo)random algorithm observing the degree distribution of the code

More information

10.2 Series and Convergence

10.2 Series and Convergence 10.2 Series and Convergence Write sums using sigma notation Find the partial sums of series and determine convergence or divergence of infinite series Find the N th partial sums of geometric series and

More information

EE 42/100 Lecture 24: Latches and Flip Flops. Rev B 4/21/2010 (2:04 PM) Prof. Ali M. Niknejad

EE 42/100 Lecture 24: Latches and Flip Flops. Rev B 4/21/2010 (2:04 PM) Prof. Ali M. Niknejad A. M. Niknejad University of California, Berkeley EE 100 / 42 Lecture 24 p. 1/20 EE 42/100 Lecture 24: Latches and Flip Flops ELECTRONICS Rev B 4/21/2010 (2:04 PM) Prof. Ali M. Niknejad University of California,

More information

Digital Communications

Digital Communications Digital Communications Fourth Edition JOHN G. PROAKIS Department of Electrical and Computer Engineering Northeastern University Boston Burr Ridge, IL Dubuque, IA Madison, Wl New York San Francisco St.

More information

MIMO CHANNEL CAPACITY

MIMO CHANNEL CAPACITY MIMO CHANNEL CAPACITY Ochi Laboratory Nguyen Dang Khoa (D1) 1 Contents Introduction Review of information theory Fixed MIMO channel Fading MIMO channel Summary and Conclusions 2 1. Introduction The use

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct

More information

Multiple Connection Telephone System with Voice Messaging

Multiple Connection Telephone System with Voice Messaging Multiple Connection Telephone System with Voice Messaging Rumen Hristov, Alan Medina 6.111 Project Proposal Fall 2015 Introduction We propose building a two-way telephone system. We will utilize two FPGAs,

More information

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE/ACM TRANSACTIONS ON NETWORKING 1 A Greedy Link Scheduler for Wireless Networks With Gaussian Multiple-Access and Broadcast Channels Arun Sridharan, Student Member, IEEE, C Emre Koksal, Member, IEEE,

More information

T = 1 f. Phase. Measure of relative position in time within a single period of a signal For a periodic signal f(t), phase is fractional part t p

T = 1 f. Phase. Measure of relative position in time within a single period of a signal For a periodic signal f(t), phase is fractional part t p Data Transmission Concepts and terminology Transmission terminology Transmission from transmitter to receiver goes over some transmission medium using electromagnetic waves Guided media. Waves are guided

More information

USB 3.0 Jitter Budgeting White Paper Revision 0.5

USB 3.0 Jitter Budgeting White Paper Revision 0.5 USB 3. Jitter Budgeting White Paper Revision.5 INTELLECTUAL PROPERTY DISCLAIMER THIS WHITE PAPER IS PROVIDED TO YOU AS IS WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NON-INFRINGEMENT,

More information

CODED SOQPSK-TG USING THE SOFT OUTPUT VITERBI ALGORITHM

CODED SOQPSK-TG USING THE SOFT OUTPUT VITERBI ALGORITHM CODED SOQPSK-TG USING THE SOFT OUTPUT VITERBI ALGORITHM Daniel Alam Department of Electrical Engineering & Computer Science University of Kansas Lawrence, KS 66045 danich@ku.edu Faculty Advisor: Erik Perrins

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning LU 2 - Markov Decision Problems and Dynamic Programming Dr. Joschka Bödecker AG Maschinelles Lernen und Natürlichsprachliche Systeme Albert-Ludwigs-Universität Freiburg jboedeck@informatik.uni-freiburg.de

More information

plc numbers - 13.1 Encoded values; BCD and ASCII Error detection; parity, gray code and checksums

plc numbers - 13.1 Encoded values; BCD and ASCII Error detection; parity, gray code and checksums plc numbers - 3. Topics: Number bases; binary, octal, decimal, hexadecimal Binary calculations; s compliments, addition, subtraction and Boolean operations Encoded values; BCD and ASCII Error detection;

More information

Capacity of the Multiple Access Channel in Energy Harvesting Wireless Networks

Capacity of the Multiple Access Channel in Energy Harvesting Wireless Networks Capacity of the Multiple Access Channel in Energy Harvesting Wireless Networks R.A. Raghuvir, Dinesh Rajan and M.D. Srinath Department of Electrical Engineering Southern Methodist University Dallas, TX

More information

Factor Graphs and the Sum-Product Algorithm

Factor Graphs and the Sum-Product Algorithm 498 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 2, FEBRUARY 2001 Factor Graphs and the Sum-Product Algorithm Frank R. Kschischang, Senior Member, IEEE, Brendan J. Frey, Member, IEEE, and Hans-Andrea

More information

IN current film media, the increase in areal density has

IN current film media, the increase in areal density has IEEE TRANSACTIONS ON MAGNETICS, VOL. 44, NO. 1, JANUARY 2008 193 A New Read Channel Model for Patterned Media Storage Seyhan Karakulak, Paul H. Siegel, Fellow, IEEE, Jack K. Wolf, Life Fellow, IEEE, and

More information

Performance of Quasi-Constant Envelope Phase Modulation through Nonlinear Radio Channels

Performance of Quasi-Constant Envelope Phase Modulation through Nonlinear Radio Channels Performance of Quasi-Constant Envelope Phase Modulation through Nonlinear Radio Channels Qi Lu, Qingchong Liu Electrical and Systems Engineering Department Oakland University Rochester, MI 48309 USA E-mail:

More information

NRZ Bandwidth - HF Cutoff vs. SNR

NRZ Bandwidth - HF Cutoff vs. SNR Application Note: HFAN-09.0. Rev.2; 04/08 NRZ Bandwidth - HF Cutoff vs. SNR Functional Diagrams Pin Configurations appear at end of data sheet. Functional Diagrams continued at end of data sheet. UCSP

More information

Entropy and Mutual Information

Entropy and Mutual Information ENCYCLOPEDIA OF COGNITIVE SCIENCE 2000 Macmillan Reference Ltd Information Theory information, entropy, communication, coding, bit, learning Ghahramani, Zoubin Zoubin Ghahramani University College London

More information

RS-485 Protocol Manual

RS-485 Protocol Manual RS-485 Protocol Manual Revision: 1.0 January 11, 2000 RS-485 Protocol Guidelines and Description Page i Table of Contents 1.0 COMMUNICATIONS BUS OVERVIEW... 1 2.0 DESIGN GUIDELINES... 1 2.1 Hardware Design

More information

Voice---is analog in character and moves in the form of waves. 3-important wave-characteristics:

Voice---is analog in character and moves in the form of waves. 3-important wave-characteristics: Voice Transmission --Basic Concepts-- Voice---is analog in character and moves in the form of waves. 3-important wave-characteristics: Amplitude Frequency Phase Voice Digitization in the POTS Traditional

More information

Genetic Algorithms commonly used selection, replacement, and variation operators Fernando Lobo University of Algarve

Genetic Algorithms commonly used selection, replacement, and variation operators Fernando Lobo University of Algarve Genetic Algorithms commonly used selection, replacement, and variation operators Fernando Lobo University of Algarve Outline Selection methods Replacement methods Variation operators Selection Methods

More information

Logistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression

Logistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression Logistic Regression Department of Statistics The Pennsylvania State University Email: jiali@stat.psu.edu Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max

More information

5 Signal Design for Bandlimited Channels

5 Signal Design for Bandlimited Channels 225 5 Signal Design for Bandlimited Channels So far, we have not imposed any bandwidth constraints on the transmitted passband signal, or equivalently, on the transmitted baseband signal s b (t) I[k]g

More information

Adaptive Linear Programming Decoding

Adaptive Linear Programming Decoding Adaptive Linear Programming Decoding Mohammad H. Taghavi and Paul H. Siegel ECE Department, University of California, San Diego Email: (mtaghavi, psiegel)@ucsd.edu ISIT 2006, Seattle, USA, July 9 14, 2006

More information

Reliability Level List Based Direct Target Codeword Identification Algorithm for Binary BCH Codes

Reliability Level List Based Direct Target Codeword Identification Algorithm for Binary BCH Codes Reliability Level List Based Direct Target Codeword Identification Algorithm for Binary BCH Codes B.YAMUNA, T.R.PADMANABHAN 2 Department of ECE, 2 Department of IT Amrita Vishwa Vidyapeetham. Amrita School

More information

Lecture 2 Linear functions and examples

Lecture 2 Linear functions and examples EE263 Autumn 2007-08 Stephen Boyd Lecture 2 Linear functions and examples linear equations and functions engineering examples interpretations 2 1 Linear equations consider system of linear equations y

More information

Largest Fixed-Aspect, Axis-Aligned Rectangle

Largest Fixed-Aspect, Axis-Aligned Rectangle Largest Fixed-Aspect, Axis-Aligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: February 21, 2004 Last Modified: February

More information

Signal Detection. Outline. Detection Theory. Example Applications of Detection Theory

Signal Detection. Outline. Detection Theory. Example Applications of Detection Theory Outline Signal Detection M. Sami Fadali Professor of lectrical ngineering University of Nevada, Reno Hypothesis testing. Neyman-Pearson (NP) detector for a known signal in white Gaussian noise (WGN). Matched

More information

Low Delay Network Streaming Under Burst Losses

Low Delay Network Streaming Under Burst Losses Low Delay Network Streaming Under Burst Losses Rafid Mahmood, Ahmed Badr, and Ashish Khisti School of Electrical and Computer Engineering University of Toronto Toronto, ON, M5S 3G4, Canada {rmahmood, abadr,

More information

Introduction to Algebraic Coding Theory

Introduction to Algebraic Coding Theory Introduction to Algebraic Coding Theory Supplementary material for Math 336 Cornell University Sarah A. Spence Contents 1 Introduction 1 2 Basics 2 2.1 Important code parameters..................... 4

More information

Solutions to Exam in Speech Signal Processing EN2300

Solutions to Exam in Speech Signal Processing EN2300 Solutions to Exam in Speech Signal Processing EN23 Date: Thursday, Dec 2, 8: 3: Place: Allowed: Grades: Language: Solutions: Q34, Q36 Beta Math Handbook (or corresponding), calculator with empty memory.

More information

Classification Problems

Classification Problems Classification Read Chapter 4 in the text by Bishop, except omit Sections 4.1.6, 4.1.7, 4.2.4, 4.3.3, 4.3.5, 4.3.6, 4.4, and 4.5. Also, review sections 1.5.1, 1.5.2, 1.5.3, and 1.5.4. Classification Problems

More information

Time and Frequency Domain Equalization

Time and Frequency Domain Equalization Time and Frequency Domain Equalization Presented By: Khaled Shawky Hassan Under Supervision of: Prof. Werner Henkel Introduction to Equalization Non-ideal analog-media such as telephone cables and radio

More information

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29. Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet

More information

Non-Data Aided Carrier Offset Compensation for SDR Implementation

Non-Data Aided Carrier Offset Compensation for SDR Implementation Non-Data Aided Carrier Offset Compensation for SDR Implementation Anders Riis Jensen 1, Niels Terp Kjeldgaard Jørgensen 1 Kim Laugesen 1, Yannick Le Moullec 1,2 1 Department of Electronic Systems, 2 Center

More information

(2) (3) (4) (5) 3 J. M. Whittaker, Interpolatory Function Theory, Cambridge Tracts

(2) (3) (4) (5) 3 J. M. Whittaker, Interpolatory Function Theory, Cambridge Tracts Communication in the Presence of Noise CLAUDE E. SHANNON, MEMBER, IRE Classic Paper A method is developed for representing any communication system geometrically. Messages and the corresponding signals

More information

Single channel data transceiver module WIZ2-434

Single channel data transceiver module WIZ2-434 Single channel data transceiver module WIZ2-434 Available models: WIZ2-434-RS: data input by RS232 (±12V) logic, 9-15V supply WIZ2-434-RSB: same as above, but in a plastic shell. The WIZ2-434-x modules

More information

Applications to Data Smoothing and Image Processing I

Applications to Data Smoothing and Image Processing I Applications to Data Smoothing and Image Processing I MA 348 Kurt Bryan Signals and Images Let t denote time and consider a signal a(t) on some time interval, say t. We ll assume that the signal a(t) is

More information

A Piggybacking Design Framework for Read-and Download-efficient Distributed Storage Codes

A Piggybacking Design Framework for Read-and Download-efficient Distributed Storage Codes A Piggybacing Design Framewor for Read-and Download-efficient Distributed Storage Codes K V Rashmi, Nihar B Shah, Kannan Ramchandran, Fellow, IEEE Department of Electrical Engineering and Computer Sciences

More information

Compression techniques

Compression techniques Compression techniques David Bařina February 22, 2013 David Bařina Compression techniques February 22, 2013 1 / 37 Contents 1 Terminology 2 Simple techniques 3 Entropy coding 4 Dictionary methods 5 Conclusion

More information

How To Find A Nonbinary Code Of A Binary Or Binary Code

How To Find A Nonbinary Code Of A Binary Or Binary Code Notes on Coding Theory J.I.Hall Department of Mathematics Michigan State University East Lansing, MI 48824 USA 9 September 2010 ii Copyright c 2001-2010 Jonathan I. Hall Preface These notes were written

More information

Gamma Distribution Fitting

Gamma Distribution Fitting Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Gaussian Conjugate Prior Cheat Sheet

Gaussian Conjugate Prior Cheat Sheet Gaussian Conjugate Prior Cheat Sheet Tom SF Haines 1 Purpose This document contains notes on how to handle the multivariate Gaussian 1 in a Bayesian setting. It focuses on the conjugate prior, its Bayesian

More information

Coding Theorems for Turbo Code Ensembles

Coding Theorems for Turbo Code Ensembles IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 6, JUNE 2002 1451 Coding Theorems for Turbo Code Ensembles Hui Jin and Robert J. McEliece, Fellow, IEEE Invited Paper Abstract This paper is devoted

More information

General Certificate of Education Advanced Subsidiary Examination June 2015

General Certificate of Education Advanced Subsidiary Examination June 2015 General Certificate of Education Advanced Subsidiary Examination June 2015 Computing COMP1 Unit 1 Problem Solving, Programming, Data Representation and Practical Exercise Monday 1 June 2015 9.00 am to

More information