ECEN 5682 Theory and Practice of Error Control Codes


Convolutional Codes. University of Colorado, Spring 2007.
Linear (n, k) block codes take k data symbols at a time and encode them into n code symbols. Long data sequences are broken up into blocks of k symbols, and each block is encoded independently of all others. Convolutional encoders, on the other hand, convert an entire data sequence, regardless of its length, into a single code sequence by using convolution and multiplexing operations. In general, it is convenient to assume that both the data sequences (u_0, u_1, ...) and the code sequences (c_0, c_1, ...) are semi-infinite sequences and to express them in the form of a power series.
Definition: The power series associated with the data sequence u = (u_0, u_1, u_2, ...) is defined as

    u(D) = u_0 + u_1 D + u_2 D^2 + ... = Σ_{i=0}^∞ u_i D^i,

where u(D) is called the data power series. Similarly, the code power series c(D) associated with the code sequence c = (c_0, c_1, c_2, ...) is defined as

    c(D) = c_0 + c_1 D + c_2 D^2 + ... = Σ_{i=0}^∞ c_i D^i.

The indeterminate D has the meaning of delay, similar to z^{-1} in the z-transform, and D is sometimes called the delay operator.
A general rate R = k/n convolutional encoder converts k data sequences into n code sequences using a k x n transfer function matrix G(D), as shown in the following figure: the serial data stream u(D) is demultiplexed into the k streams u^(1)(D), ..., u^(k)(D), these are fed to the convolutional encoder G(D), and the encoder outputs c^(1)(D), ..., c^(n)(D) are multiplexed into the single stream c(D).
Fig. 1: Block diagram of a k-input, n-output convolutional encoder.
The data power series u(D) is split up into k subsequences, denoted u^(1)(D), u^(2)(D), ..., u^(k)(D) in power series notation, using a demultiplexer: from u = (u_0, u_1, ..., u_{k-1}, u_k, u_{k+1}, ..., u_{2k-1}, u_{2k}, ...), stream u^(h)(D) receives the symbols u^(h)_i = u_{ik+h-1} for i = 0, 1, 2, ..., so that u^(1)_0 = u_0, u^(2)_0 = u_1, ..., u^(k)_0 = u_{k-1}, u^(1)_1 = u_k, and so on.
Fig. 2: Demultiplexing of u(D) into u^(1)(D), ..., u^(k)(D).
The code subsequences, denoted by c^(1)(D), c^(2)(D), ..., c^(n)(D) in power series notation, at the output of the convolutional encoder are multiplexed into a single power series c(D) for transmission over a channel: symbol c^(l)_i becomes c_{in+l-1} in the multiplexed stream c = (c_0, c_1, ..., c_{n-1}, c_n, c_{n+1}, ..., c_{2n-1}, c_{2n}, ...).
Fig. 3: Multiplexing of c^(1)(D), ..., c^(n)(D) into the single output c(D).
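The frame-wise demultiplexing and multiplexing operations above can be sketched in Python (the function names `demux` and `mux` are illustrative, not from any library):

```python
def demux(u, k):
    # Split the serial data stream u into k subsequences:
    # stream h (h = 1..k) receives u_{h-1}, u_{h-1+k}, u_{h-1+2k}, ...
    return [u[h::k] for h in range(k)]

def mux(streams):
    # Interleave n code subsequences into one serial stream,
    # taking one symbol from each stream per code frame.
    n_frames = min(len(s) for s in streams)
    return [s[i] for i in range(n_frames) for s in streams]

u = [1, 1, 0, 1, 0, 0]
subs = demux(u, 2)        # [[1, 0, 0], [1, 1, 0]]
assert mux(subs) == u     # mux undoes demux
```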
Definition: A q-ary generator polynomial of degree m is a polynomial in D of the form

    g(D) = g_0 + g_1 D + g_2 D^2 + ... + g_m D^m = Σ_{i=0}^m g_i D^i,

with m + 1 q-ary coefficients g_i. The degree m is also called the memory order of g(D).
Consider computing the product (using modulo-q arithmetic) c(D) = u(D) g(D). Written out, this looks as follows:

    c_0 + c_1 D + c_2 D^2 + ...
      = (g_0 + g_1 D + g_2 D^2 + ... + g_m D^m)(u_0 + u_1 D + u_2 D^2 + ...)
      =   g_0 u_0 + g_0 u_1 D + g_0 u_2 D^2 + ... + g_0 u_m D^m + g_0 u_{m+1} D^{m+1} + g_0 u_{m+2} D^{m+2} + ...
        + g_1 u_0 D + g_1 u_1 D^2 + ... + g_1 u_{m-1} D^m + g_1 u_m D^{m+1} + g_1 u_{m+1} D^{m+2} + ...
        + g_2 u_0 D^2 + ... + g_2 u_{m-2} D^m + g_2 u_{m-1} D^{m+1} + g_2 u_m D^{m+2} + ...
        + ...
        + g_m u_0 D^m + g_m u_1 D^{m+1} + g_m u_2 D^{m+2} + ...

Thus, the coefficients of c(D) are

    c_j = Σ_{i=0}^m g_i u_{j-i},   j = 0, 1, 2, ...,   where u_l = 0 for l < 0,

i.e., the code sequence (c_0, c_1, c_2, ...) is the convolution of the data sequence (u_0, u_1, u_2, ...) with the generator sequence (g_0, g_1, ..., g_m).
A convenient way to implement the convolution

    c_j = Σ_{i=0}^m g_i u_{j-i},   j = 0, 1, 2, ...,   where u_l = 0 for l < 0,

is to use a shift register with m memory cells (cleared to zero at time t = 0), as shown in the following figure: the incoming symbols ..., u_2, u_1, u_0 are shifted through the m cells, the current input and the cell contents are multiplied by the tap coefficients g_0, g_1, g_2, ..., g_m, and the products are summed (mod q) to produce ..., c_2, c_1, c_0.
Fig. 4: Block diagram for the convolution of u(D) with g(D).
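The convolution c_j = Σ_i g_i u_{j-i} (mod q) can be written directly in Python; this is a minimal sketch, not an optimized shift-register implementation:

```python
def convolve(u, g, q=2):
    # c_j = sum_{i=0}^{m} g_i * u_{j-i} mod q, with u_l = 0 for l < 0;
    # the output is truncated to len(u) symbols.
    m = len(g) - 1
    return [sum(g[i] * u[j - i] for i in range(min(j, m) + 1)) % q
            for j in range(len(u))]

# g(D) = 1 + D + D^2 acting on u = (1, 1, 0, 1):
print(convolve([1, 1, 0, 1], [1, 1, 1]))   # [1, 0, 0, 0]
```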
A general k-input, n-output convolutional encoder consists of k such shift registers, each of which is connected to the outputs via n generator polynomials.

Definition: A q-ary linear and time-invariant convolutional encoder with k inputs and n outputs is specified by a k x n matrix G(D), called the transfer function matrix, which consists of generator polynomials g_h^(l)(D), h = 1, 2, ..., k, l = 1, 2, ..., n, as follows:

    G(D) = [ g_1^(1)(D)  g_1^(2)(D)  ...  g_1^(n)(D)
             g_2^(1)(D)  g_2^(2)(D)  ...  g_2^(n)(D)
             ...
             g_k^(1)(D)  g_k^(2)(D)  ...  g_k^(n)(D) ].

The generator polynomials have q-ary coefficients, degree m_hl, and are of the form

    g_h^(l)(D) = g_{0h}^(l) + g_{1h}^(l) D + g_{2h}^(l) D^2 + ... + g_{m_hl,h}^(l) D^{m_hl}.
Define the power series vectors

    u(D) = [u^(1)(D), u^(2)(D), ..., u^(k)(D)],
    c(D) = [c^(1)(D), c^(2)(D), ..., c^(n)(D)].

The operation of a k-input, n-output convolutional encoder can then be concisely expressed as

    c(D) = u(D) G(D).

Each individual output sequence is obtained as

    c^(l)(D) = Σ_{h=1}^k u^(h)(D) g_h^(l)(D).

Note: By setting u^(h)(D) = 1 in the above equation, it is easily seen that the generator sequence (g_{0h}^(l), g_{1h}^(l), g_{2h}^(l), ..., g_{m_hl,h}^(l)) is the unit impulse response from input h to output l of the convolutional encoder.
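A direct transcription of c^(l)(D) = Σ_h u^(h)(D) g_h^(l)(D) into Python might look as follows (a sketch; G is given as nested coefficient lists, and the outputs are truncated to the length of the inputs):

```python
def poly_mul_trunc(u, g, L, q=2):
    # Coefficient of D^j in u(D) g(D) for j = 0 .. L-1, mod q.
    return [sum(g[i] * u[j - i] for i in range(len(g)) if 0 <= j - i < len(u)) % q
            for j in range(L)]

def encode(G, u_subs, q=2):
    # G[h][l] = coefficient list of g_{h+1}^{(l+1)}(D); u_subs = k input streams.
    # Returns the n output streams c^(l)(D) = sum_h u^(h)(D) g_h^(l)(D).
    k, n, L = len(G), len(G[0]), len(u_subs[0])
    c = []
    for l in range(n):
        acc = [0] * L
        for h in range(k):
            term = poly_mul_trunc(u_subs[h], G[h][l], L, q)
            acc = [(a + b) % q for a, b in zip(acc, term)]
        c.append(acc)
    return c
```

With G = [[[1, 0, 1], [1, 1, 1]]], i.e. G(D) = [1 + D^2  1 + D + D^2] (encoder #1 of these notes), encode(G, [u]) reproduces the worked example given further below.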
Definition: The total memory M of a convolutional encoder is the total number of memory elements in the encoder, i.e.,

    M = Σ_{h=1}^k max_{1≤l≤n} m_hl.

Note that max_{1≤l≤n} m_hl is the number of memory cells, or the memory order, of the shift register for the input with index h.

Definition: The maximal memory order m of a convolutional encoder is the length of the longest input shift register, i.e.,

    m = max_{1≤h≤k} max_{1≤l≤n} m_hl.

Equivalently, m is equal to the highest degree of any of the generator polynomials in G(D).
Definition: The constraint length K of a convolutional encoder is the maximum number of symbols in a single output stream that can be affected by any input symbol, i.e.,

    K = 1 + m = 1 + max_{1≤h≤k} max_{1≤l≤n} m_hl.

Note: This definition of constraint length is not in universal use. Some authors define the constraint length to be the maximum number of symbols in all output streams that can be affected by any input symbol, which is nK in the notation used here.
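Given the degree matrix [m_hl], the three quantities M, m, and K can be computed mechanically; a small sketch (the helper name and input format are ours):

```python
def memory_params(deg):
    # deg[h][l] = degree m_hl of g_h^(l)(D); returns (M, m, K).
    reg_lengths = [max(row) for row in deg]   # shift-register length per input
    M = sum(reg_lengths)                      # total memory
    m = max(reg_lengths)                      # maximal memory order
    return M, m, 1 + m                        # constraint length K = 1 + m

# Encoder with G(D) = [[1+D, D, 1+D], [D, 1, 1]]: degrees [[1,1,1],[1,0,0]]
print(memory_params([[1, 1, 1], [1, 0, 0]]))   # (2, 1, 2)
```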
Example: Encoder #1. Binary rate R = 1/2 encoder with constraint length K = 3 and transfer function matrix

    G(D) = [ g^(1)(D)  g^(2)(D) ] = [ 1 + D^2   1 + D + D^2 ].

A block diagram for this encoder is shown in the figure below: the data stream ..., u_2, u_1, u_0 feeds a shift register with two memory cells; output c^(1) adds (mod 2) the current input and the second cell, and output c^(2) adds the current input and both cells.
Fig. 5: Binary rate 1/2 convolutional encoder with K = 3.
At time t = 0 the contents of the two memory cells are assumed to be zero. Using this encoder, the data sequence u = (u_0, u_1, u_2, ...) = (1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, ...), for example, is encoded as follows:

    u     = 1 1 0 1 0 0 1 1 1 0 1 0 ...
    u D^2 = 0 0 1 1 0 1 0 0 1 1 1 0 ...
    c^(1) = 1 1 1 0 0 1 1 1 0 1 0 0 ...

    u     = 1 1 0 1 0 0 1 1 1 0 1 0 ...
    u D   = 0 1 1 0 1 0 0 1 1 1 0 1 ...
    u D^2 = 0 0 1 1 0 1 0 0 1 1 1 0 ...
    c^(2) = 1 0 0 0 1 1 1 0 1 0 0 1 ...

After multiplexing this becomes

    c = (c_0 c_1, c_2 c_3, c_4 c_5, ...) = (c^(1)_0 c^(2)_0, c^(1)_1 c^(2)_1, c^(1)_2 c^(2)_2, ...)
      = (11, 10, 10, 00, 01, 11, 11, 10, 01, 10, 00, 01, ...).

The pairs of code symbols that each data symbol generates are called code frames.
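The same encoding can be carried out with the two-cell shift register of Figure 5; a Python sketch of encoder #1 (variable names are ours):

```python
def encode_r12(u):
    # Encoder #1: G(D) = [1 + D^2, 1 + D + D^2]; memory cleared at t = 0.
    s1 = s2 = 0                       # the two memory cells
    frames = []
    for ut in u:
        c1 = (ut + s2) % 2            # taps of 1 + D^2
        c2 = (ut + s1 + s2) % 2       # taps of 1 + D + D^2
        frames.append((c1, c2))
        s1, s2 = ut, s1               # shift the register
    return frames

u = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
frames = encode_r12(u)
# frames: 11 10 10 00 01 11 11 10 01 10 00 01, as in the example above
```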
Definition: Consider a rate R = k/n convolutional encoder, let

    u = (u_0 u_1 ... u_{k-1}, u_k u_{k+1} ... u_{2k-1}, u_{2k} u_{2k+1} ... u_{3k-1}, ...)
      = (u^(1)_0 u^(2)_0 ... u^(k)_0, u^(1)_1 u^(2)_1 ... u^(k)_1, u^(1)_2 u^(2)_2 ... u^(k)_2, ...),

and let

    c = (c_0 c_1 ... c_{n-1}, c_n c_{n+1} ... c_{2n-1}, c_{2n} c_{2n+1} ... c_{3n-1}, ...)
      = (c^(1)_0 c^(2)_0 ... c^(n)_0, c^(1)_1 c^(2)_1 ... c^(n)_1, c^(1)_2 c^(2)_2 ... c^(n)_2, ...).

Then the set of data symbols (u_{ik} u_{ik+1} ... u_{(i+1)k-1}) is called the i-th data frame and the corresponding set of code symbols (c_{in} c_{in+1} ... c_{(i+1)n-1}) is called the i-th code frame, for i = 0, 1, 2, ....
Example: Encoder #2. Binary rate R = 2/3 encoder with constraint length K = 2 and transfer function matrix

    G(D) = [ g_1^(1)(D)  g_1^(2)(D)  g_1^(3)(D)     [ 1+D  D  1+D
             g_2^(1)(D)  g_2^(2)(D)  g_2^(3)(D) ] =   D    1  1   ].

A block diagram for this encoder is shown in the figure below: each of the two input streams u^(1) and u^(2) feeds one memory cell, and the three outputs c^(1), c^(2), c^(3) are formed as mod-2 sums of the current inputs and the cell contents according to G(D).
Fig. 6: Binary rate 2/3 convolutional encoder with K = 2.
In this case the data sequence u = (u_0 u_1, u_2 u_3, u_4 u_5, ...) = (11, 01, 00, 11, 10, 10, ...) is first demultiplexed into u^(1) = (1, 0, 0, 1, 1, 1, ...) and u^(2) = (1, 1, 0, 1, 0, 0, ...), and then encoded as follows:

    u^(1)   = 1 0 0 1 1 1 ...
    u^(1) D = 0 1 0 0 1 1 ...
    u^(2) D = 0 1 1 0 1 0 ...
    c^(1)   = 1 0 1 1 1 0 ...

    u^(1) D = 0 1 0 0 1 1 ...
    u^(2)   = 1 1 0 1 0 0 ...
    c^(2)   = 1 0 0 1 1 1 ...

    u^(1)   = 1 0 0 1 1 1 ...
    u^(1) D = 0 1 0 0 1 1 ...
    u^(2)   = 1 1 0 1 0 0 ...
    c^(3)   = 0 0 0 0 0 0 ...

Multiplexing the code sequences c^(1), c^(2), and c^(3) yields the single code sequence

    c = (c_0 c_1 c_2, c_3 c_4 c_5, ...) = (c^(1)_0 c^(2)_0 c^(3)_0, c^(1)_1 c^(2)_1 c^(3)_1, ...)
      = (110, 000, 100, 110, 110, 010, ...).

Because this is a rate 2/3 encoder, data frames of length 2 are encoded into code frames of length 3.
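Encoder #2 can be sketched the same way, with one memory cell per input stream (a and b hold the delayed u^(1) and u^(2); names are ours):

```python
def encode_r23(u1, u2):
    # Encoder #2: G(D) = [[1+D, D, 1+D], [D, 1, 1]]; memory cleared at t = 0.
    a = b = 0                        # delayed u^(1) and u^(2)
    frames = []
    for x, y in zip(u1, u2):
        c1 = (x + a + b) % 2         # (1+D) u1 + D u2
        c2 = (a + y) % 2             # D u1 + u2
        c3 = (x + a + y) % 2         # (1+D) u1 + u2
        frames.append((c1, c2, c3))
        a, b = x, y
    return frames

frames = encode_r23([1, 0, 0, 1, 1, 1], [1, 1, 0, 1, 0, 0])
# frames: 110 000 100 110 110 010, as in the example above
```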
Definition: Let u = (u_0, u_1, u_2, ...) be a data sequence (before demultiplexing) and let c = (c_0, c_1, c_2, ...) be the corresponding code sequence (after multiplexing). Then, in analogy to block codes, the generator matrix G of a convolutional encoder is defined such that c = u G. Note that G for a convolutional encoder has infinitely many rows and columns.

Let G(D) = [g_h^(l)(D)] be the transfer function matrix of a convolutional encoder with generator polynomials

    g_h^(l)(D) = Σ_{i=0}^m g_{ih}^(l) D^i,   h = 1, 2, ..., k,   l = 1, 2, ..., n,

where m is the maximal memory order of the encoder. Define the matrices

    G_i = [ g_{i1}^(1)  g_{i1}^(2)  ...  g_{i1}^(n)
            g_{i2}^(1)  g_{i2}^(2)  ...  g_{i2}^(n)
            ...
            g_{ik}^(1)  g_{ik}^(2)  ...  g_{ik}^(n) ],   i = 0, 1, 2, ..., m.
In terms of these matrices, the generator matrix G can be conveniently expressed as (all entries below the diagonal are zero)

    G = [ G_0 G_1 G_2 ... G_m
              G_0 G_1 ... G_{m-1} G_m
                  G_0 ... G_{m-2} G_{m-1} G_m
                      ...
                      G_0 G_1 G_2 ...
                          G_0 G_1 ...     ].

Note that the first row of this matrix is the unit impulse response (after multiplexing the outputs) from input stream 1, the second row is the unit impulse response (after multiplexing the outputs) from input stream 2, etc.
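For a single-input encoder (k = 1), a finite top-left block of this semi-infinite G is easy to build; a sketch (function name and layout are ours):

```python
def generator_matrix(Gi, rows, q=2):
    # Gi = [G_0, ..., G_m], each a k x n block; k = 1 assumed for brevity.
    m, n = len(Gi) - 1, len(Gi[0][0])
    cols = (rows + m) * n
    G = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for i, block in enumerate(Gi):
            for l in range(n):
                G[r][(r + i) * n + l] = block[0][l]
    return G

# Encoder #1: G_0 = [1 1], G_1 = [0 1], G_2 = [1 1]
G = generator_matrix([[[1, 1]], [[0, 1]], [[1, 1]]], rows=4)
u = [1, 1, 0, 1]
c = [sum(u[r] * G[r][j] for r in range(4)) % 2 for j in range(len(G[0]))]
# c, grouped in pairs: 11 10 10 00 01 11, matching the encoding example above
```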
Example: Encoder #1 has m = 2, G_0 = [1 1], G_1 = [0 1], G_2 = [1 1], and thus generator matrix

    G = [ 11 01 11
             11 01 11
                11 01 11
                   ...    ].

Using this, it is easy to compute, for example, the list of (nonzero) datawords and corresponding codewords shown on the next page.
    u = (u_0, u_1, ...)    c = (c_0 c_1, c_2 c_3, ...)
    1,0,0,0,0,...          11,01,11,00,00,00,00,...
    1,1,0,0,0,...          11,10,10,11,00,00,00,...
    1,0,1,0,0,...          11,01,00,01,11,00,00,...
    1,1,1,0,0,...          11,10,01,10,11,00,00,...
    1,0,0,1,0,...          11,01,11,11,01,11,00,...
    1,1,0,1,0,...          11,10,10,00,01,11,00,...
    1,0,1,1,0,...          11,01,00,10,10,11,00,...
    1,1,1,1,0,...          11,10,01,01,10,11,00,...

One thing that can be deduced from this list is that most likely the minimum weight of any nonzero codeword is 5, and thus, because convolutional codes are linear, the minimum distance, called minimum free distance for convolutional codes for historical reasons, is d_free = 5.
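The guess d_free = 5 can be checked by brute force over longer data sequences; a sketch using encoder #1, terminating each sequence with m = 2 zeros so that the encoder returns to the all-zero state:

```python
from itertools import product

def encode1(u):
    # Encoder #1, G(D) = [1 + D^2, 1 + D + D^2], flattened output bits.
    s1 = s2 = 0
    out = []
    for x in u:
        out += [(x + s2) % 2, (x + s1 + s2) % 2]
        s1, s2 = x, s1
    return out

# Minimum codeword weight over all nonzero length-8 data sequences,
# each driven back to the all-zero state by appending two zeros:
wmin = min(sum(encode1(list(u) + [0, 0]))
           for u in product([0, 1], repeat=8) if any(u))
print(wmin)   # 5, consistent with d_free = 5
```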
Example: Encoder #2 has m = 1,

    G_0 = [ 1 0 1      G_1 = [ 1 1 1
            0 1 1 ],           1 0 0 ],

and therefore generator matrix

    G = [ 101 111
          011 100
              101 111
              011 100
                  ...  ].
The first few nonzero codewords that this encoder produces are

    u = (u_0 u_1, ...)    c = (c_0 c_1 c_2, ...)
    10,00,00,...          101,111,000,000,...
    01,00,00,...          011,100,000,000,...
    11,00,00,...          110,011,000,000,...
    10,10,00,...          101,010,111,000,...
    01,10,00,...          011,001,111,000,...
    11,10,00,...          110,110,111,000,...
    10,01,00,...          101,100,100,000,...
    01,01,00,...          011,111,100,000,...
    11,01,00,...          110,000,100,000,...
    10,11,00,...          101,001,011,000,...
    01,11,00,...          011,010,011,000,...
    11,11,00,...          110,101,011,000,...
Definition: The code generated by a q-ary convolutional encoder with transfer function matrix G(D) is the set of all vectors of semi-infinite sequences of encoded symbols c(D) = u(D) G(D), where u(D) is any vector of q-ary data sequences.

Definition: Two convolutional encoders with transfer function matrices G_1(D) and G_2(D) are said to be equivalent if they generate the same codes.

Definition: A systematic convolutional encoder is a convolutional encoder whose codewords have the property that each data frame appears unaltered in the first k positions of the first code frame that it affects.

Note: When dealing with convolutional codes and encoders it is important to carefully distinguish between the properties of the code (e.g., the minimum distance of a code) and the properties of the encoder (e.g., whether an encoder is systematic or not).
Example: Neither encoder #1 nor encoder #2 is systematic. But the following binary rate 1/3 encoder, which will be called encoder #3, with constraint length K = 4 and transfer function matrix

    G(D) = [ 1   1 + D + D^3   1 + D + D^2 + D^3 ],

is a systematic convolutional encoder. Its generator matrix is

    G = [ 111 011 001 011
              111 011 001 011
                  111 011 001 011
                      ...          ].

Note that the first column of each triplet of columns has only a single 1 in it, so that the first symbol in each code frame is the corresponding data symbol from the data sequence u.
Much more interesting systematic encoders can be obtained if one allows not only FIR (finite impulse response) but also IIR (infinite impulse response) filters in the encoder. In terms of the transfer function matrix G(D), this means that the use of rational polynomial expressions instead of generator polynomials as matrix elements is allowed. The following example illustrates this.

Example: Encoder #4. Binary rate R = 1/3 systematic encoder with constraint length K = 4 and rational transfer function matrix

    G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ].

A block diagram of this encoder is shown in the next figure.
The block diagram realizes the rational G(D) above with a feedback shift register: the data stream u(D) is passed through unchanged as c^(1)(D), while c^(2)(D) and c^(3)(D) are formed as mod-2 sums of the input and the register contents.
Fig. 7: Binary rate 1/3 systematic convolutional encoder with K = 4.
Convolutional encoders have total memory M. Thus, a time-invariant q-ary encoder can be regarded as a finite state machine (FSM) with q^M states, and it can be completely described by a state transition diagram called the encoder state diagram. Such a state diagram can be used to encode a data sequence of arbitrary length. In addition, the encoder state diagram can also be used to obtain important information about the performance of a convolutional code and its associated encoder.
Example: Encoder state diagram for encoder #1. This is a binary encoder with G(D) = [1 + D^2  1 + D + D^2] that uses 2 memory cells and therefore has 2^2 = 4 states. With reference to the block diagram in Figure 5, label the encoder states as follows: S_0 = 00, S_1 = 10, S_2 = 01, S_3 = 11, where the first binary digit corresponds to the content of the first (leftmost) delay cell of the encoder, and the second digit corresponds to the content of the second delay cell. At any given time t (measured in frames), the encoder is in a particular state S^(t). The next state, S^(t+1), at time t + 1 depends on the value of the data frame at time t, which in the case of a rate R = 1/2 encoder is simply u_t. The code frame c^(1)_t c^(2)_t that the encoder outputs at time t depends only on S^(t) and u_t (and the transfer function matrix G(D), of course). Thus, the possible transitions between the states are labeled with u_t / c^(1)_t c^(2)_t. The resulting encoder state diagram for encoder #1 is shown in the following figure.
The transitions are (current state: input/output -> next state):

    S_0: 0/00 -> S_0,  1/11 -> S_1
    S_1: 0/01 -> S_2,  1/10 -> S_3
    S_2: 0/11 -> S_0,  1/00 -> S_1
    S_3: 0/10 -> S_2,  1/01 -> S_3

Fig. 8: Encoder state diagram for the binary rate 1/2 encoder with K = 3.

To encode the data sequence u = (0, 1, 0, 1, 1, 1, 0, 0, 1, ...), for instance, start in S_0 at t = 0, return to S_0 at t = 1 because u_0 = 0, then move on to S_1 at t = 2, S_2 at t = 3, S_1 at t = 4, S_3 at t = 5, S_3 at t = 6 (self loop around S_3), S_2 at t = 7, S_0 at t = 8, and finally S_1 at t = 9. The resulting code sequence (after multiplexing) is c = (00, 11, 01, 00, 10, 01, 10, 11, 11, ...).
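The state diagram is just a lookup table from (state, input) to (next state, output frame); a Python sketch for encoder #1 that reproduces the example above (names are ours):

```python
def fsm_tables():
    # States S0..S3 as (cell1, cell2), following the labeling
    # S0 = 00, S1 = 10, S2 = 01, S3 = 11 used above.
    states = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
    index = {cells: s for s, cells in states.items()}
    nxt, out = {}, {}
    for s, (s1, s2) in states.items():
        for x in (0, 1):
            out[s, x] = ((x + s2) % 2, (x + s1 + s2) % 2)
            nxt[s, x] = index[(x, s1)]
    return nxt, out

def fsm_encode(u):
    nxt, out = fsm_tables()
    s, frames = 0, []
    for x in u:
        frames.append(out[s, x])
        s = nxt[s, x]
    return frames

print(fsm_encode([0, 1, 0, 1, 1, 1, 0, 0, 1]))
# [(0, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
```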
Example: Encoder state diagram for encoder #2 with

    G(D) = [ 1+D  D  1+D
             D    1  1   ]

and block diagram as shown in Figure 6. This encoder also has M = 2, but each of the two memory cells receives its input at time t from a different data stream. The following convention is used to label the 4 possible states (the first bit corresponds to the upper memory cell in Figure 6): S_0 = 00, S_1 = 10, S_2 = 01, S_3 = 11. Because the encoder has rate R = 2/3, the transitions in the encoder state diagram from time t to time t + 1 are now labeled with u^(1)_t u^(2)_t / c^(1)_t c^(2)_t c^(3)_t. The result is shown in the next figure.
The transitions are (current state: input/output -> next state):

    S_0: 00/000 -> S_0,  01/011 -> S_2,  10/101 -> S_1,  11/110 -> S_3
    S_1: 00/111 -> S_0,  01/100 -> S_2,  10/010 -> S_1,  11/001 -> S_3
    S_2: 00/100 -> S_0,  01/111 -> S_2,  10/001 -> S_1,  11/010 -> S_3
    S_3: 00/011 -> S_0,  01/000 -> S_2,  10/110 -> S_1,  11/101 -> S_3

Fig. 9: Encoder state diagram for the binary rate 2/3 encoder with K = 2.
Example: The figure on the next slide shows the encoder state diagram for encoder #4, whose block diagram was given in Figure 7. This encoder has rational transfer function matrix

    G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ],

and M = 3. The encoder states are labeled using the following convention (the leftmost bit corresponds to the leftmost memory cell in Figure 7): S_0 = 000, S_1 = 100, S_2 = 010, S_3 = 110, S_4 = 001, S_5 = 101, S_6 = 011, S_7 = 111.
(Figure: eight states S_0 through S_7 with branches labeled u_t / c^(1)_t c^(2)_t c^(3)_t.)
Fig. 10: Encoder state diagram for the R = 1/3, K = 4 systematic encoder.
Trellis Diagrams

Because the convolutional encoders considered here are time-invariant, the encoder state diagram describes their behavior for all times t. But sometimes, e.g., for decoding convolutional codes, it is convenient to show all possible states of an encoder separately for each time t (measured in frames), together with all possible transitions from states at time t to states at time t + 1. The resulting diagram is called a trellis diagram.

Example: For encoder #1 with G(D) = [1 + D^2  1 + D + D^2] and M = 2 (and thus 4 states) the trellis diagram is shown in the figure on the next slide.
(Figure: four rows of states S_0, S_1, S_2, S_3, with one column for each time t = 0, 1, 2, 3, 4, 5 and branches labeled with the corresponding code frames.)
Fig. 11: Trellis diagram for the binary rate 1/2 encoder with K = 3.
Note that the trellis always starts with the all-zero state S_0 at time t = 0 as the root node. This corresponds to the convention that convolutional encoders must be initialized to the all-zero state before they are first used. The labels on the branches are the code frames that the encoder outputs when that particular transition from a state at time t to a state at time t + 1 is made in response to a data symbol u_t. The highlighted path in Figure 11, for example, corresponds to the data sequence u = (1, 1, 0, 1, 0, ...) and the resulting code sequence c = (11, 10, 10, 00, 01, ...).
In its simplest and most common form, the Viterbi algorithm is a maximum likelihood (ML) decoding algorithm for convolutional codes. Recall that a ML decoder outputs the estimate ĉ = c_i iff i is the index (or one of them selected at random if there are several) which maximizes the expression p_{Y|X}(v | c_i) over all codewords c_0, c_1, c_2, .... The conditional pmf p_{Y|X} defines the channel model with input X and output Y which is used, and v is the received (and possibly corrupted) codeword at the output of the channel. For the important special case of memoryless channels used without feedback, the computation of p_{Y|X} can be considerably simplified and brought into a form where metrics along the branches of a trellis can be added up, so that a ML decision can be obtained by comparing these sums. In a nutshell, this is what the Viterbi algorithm does.
Definition: A channel with input X and output Y is said to be memoryless if

    p(y_j | x_j, x_{j-1}, ..., x_0, y_{j-1}, ..., y_0) = p_{Y|X}(y_j | x_j).

Definition: A channel with input X and output Y is used without feedback if

    p(x_j | x_{j-1}, ..., x_0, y_{j-1}, ..., y_0) = p(x_j | x_{j-1}, ..., x_0).

Theorem: For a memoryless channel used without feedback,

    p_{Y|X}(y | x) = Π_{j=0}^{N-1} p_{Y|X}(y_j | x_j),

where N is the length of the channel input and output vectors x and y.

Proof: Left as an exercise.
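A sketch of the omitted proof (a standard chain-rule argument, written out here for completeness):

```latex
p_{Y|X}(\mathbf{y}\mid\mathbf{x})
  = \prod_{j=0}^{N-1} p\bigl(y_j \mid \mathbf{x},\, y_{j-1},\dots,y_0\bigr)
  = \prod_{j=0}^{N-1} p_{Y|X}(y_j \mid x_j).
```

The first equality is the chain rule of probability. For the second, memorylessness removes the dependence of y_j on earlier inputs and outputs, and the no-feedback condition guarantees that the later inputs x_{j+1}, ..., x_{N-1} carry no additional information about y_j once x_j is given.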
Definition: The ML decoding rule at the output Y of a discrete memoryless channel (DMC) with input X, used without feedback, is: output the code sequence estimate ĉ = c_i iff i maximizes the likelihood function

    p_{Y|X}(v | c_i) = Π_{j=0}^{N-1} p_{Y|X}(v_j | c_ij),

over all code sequences c_i = (c_i0, c_i1, c_i2, ...) for i = 0, 1, 2, .... The pmf p_{Y|X} is given by specifying the transition probabilities of the DMC, and the v_j are the received symbols at the output of the channel. For block codes, N is the blocklength of the code. For convolutional codes we set N = n (L + m), where L is the number of data frames that are encoded and m is the maximal memory order of the encoder.
Definition: The log-likelihood function of a received sequence v at the channel output with respect to code sequence c_i is the expression

    log[ p_{Y|X}(v | c_i) ] = Σ_{j=0}^{N-1} log[ p_{Y|X}(v_j | c_ij) ],

where the logarithm can be taken to any base.
Definition: The path metric µ(v | c_i) for a received sequence v given a code sequence c_i is computed as

    µ(v | c_i) = Σ_{j=0}^{N-1} µ(v_j | c_ij),

where the symbol metrics µ(v_j | c_ij) are defined as

    µ(v_j | c_ij) = α ( log[p_{Y|X}(v_j | c_ij)] + f(v_j) ).

Here α is any positive number and f(v_j) is a completely arbitrary real-valued function defined over the channel output alphabet B. Usually, one selects for every y in B

    f(y) = -log[ min_{x in A} p_{Y|X}(y | x) ],

where A is the channel input alphabet. In this way the smallest symbol metric will always be 0. The quantity α is then adjusted so that all nonzero metrics are (approximated by) small positive integers.
Example: A memoryless BSC with transition probability ɛ < 0.5 is characterized by

    p_{Y|X}(v|c)        v = 0    v = 1
    c = 0               1 - ɛ    ɛ
    c = 1               ɛ        1 - ɛ
    min_c p_{Y|X}(v|c)  ɛ        ɛ

Thus, setting f(v) = -log[min_c p_{Y|X}(v|c)] yields f(0) = f(1) = -log ɛ.
With this, the bit metrics become

    µ(v|c)   v = 0                       v = 1
    c = 0    α(log(1-ɛ) - log ɛ)         0
    c = 1    0                           α(log(1-ɛ) - log ɛ)

Now choose α as

    α = 1 / (log(1-ɛ) - log ɛ),

so that the following simple bit metrics for the BSC with ɛ < 0.5 are obtained:

    µ(v|c)   v = 0   v = 1
    c = 0    1       0
    c = 1    0       1
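The scaling step can be automated; a sketch that computes the table µ(v|c) for a BSC directly from ɛ (function name is ours):

```python
import math

def bsc_metrics(eps):
    # mu(v|c) = alpha * ( log p(v|c) + f(v) ), with f(v) = -log(min_x p(v|x))
    # = -log(eps), and alpha chosen so that the nonzero metric equals 1.
    f = -math.log(eps)
    alpha = 1.0 / (math.log(1 - eps) + f)
    return {(c, v): round(alpha * ((math.log(1 - eps) if c == v else math.log(eps)) + f))
            for c in (0, 1) for v in (0, 1)}

print(bsc_metrics(0.1))   # {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```

Note that the resulting integer table is the same for every ɛ < 0.5, which is why a hard-decision Viterbi decoder for the BSC simply counts agreeing bits.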
Definition: The partial path metric µ^(t)(v | c_i) at time t, t = 1, 2, ..., for a path, a received sequence v, and a given code sequence c_i, is computed as

    µ^(t)(v | c_i) = Σ_{l=0}^{t-1} µ(v^(l) | c_i^(l)) = Σ_{j=0}^{tn-1} µ(v_j | c_ij),

where the branch metrics µ(v^(l) | c_i^(l)) of the l-th branch, l = 0, 1, 2, ..., for v and a given c_i are defined as

    µ(v^(l) | c_i^(l)) = Σ_{j=ln}^{(l+1)n-1} µ(v_j | c_ij).
The Viterbi algorithm makes use of the trellis diagram to compute the partial path metrics µ^(t)(v | c_i) at times t = 1, 2, ... for a received v, given all code sequences c_i that are candidates for a ML decision, in the following well-defined and organized manner.

(1) Every node in the trellis is assigned a number that is equal to the partial path metric of the path that leads to this node. By definition, the trellis starts in state 0 at t = 0 and µ^(0)(v | c_i) = 0.

(2) For every transition from time t to time t + 1, all q^(M+k) (there are q^M states and q^k different input frames at every time t) t-th branch metrics µ(v^(t) | c_i^(t)) for v given all t-th code frames are computed.
(3) The partial path metric µ^(t+1)(v | c_i) is updated by adding the t-th branch metrics to the previous partial path metrics µ^(t)(v | c_i) and keeping only the maximum value of the partial path metric for each node in the trellis at time t + 1. The partial path that yields the maximum value at each node is called the survivor, and all other partial paths leading into the same node are eliminated from further consideration as ML decision candidates. Ties are broken by flipping a coin.

(4) If t + 1 = L + m, so that all N = n(L + m) received symbols have been processed (L is the number of data frames that are encoded and m is the maximal memory order of the encoder), then there is only one survivor with maximum path metric µ(v | c_i) = µ^(L+m)(v | c_i), and thus ĉ = c_i is announced and the decoding algorithm stops. Otherwise, set t <- t + 1 and return to step (2).
Theorem: The path with maximum path metric µ(v | c_i) selected by the Viterbi decoder is the maximum likelihood path.

Proof: Suppose c_i is the ML path, but the decoder outputs ĉ = c_j because c_i is eliminated at some time t. This implies that the partial path metrics satisfy µ^(t)(v | c_j) > µ^(t)(v | c_i) and c_i is not a survivor. Appending the remaining path that corresponds to c_i to the survivor c_j at time t thus results in a larger path metric than the one for the ML path c_i. But this contradicts the assumption that c_i is the ML path. QED

Example: Encoder #1 (the binary R = 1/2, K = 3 encoder with G(D) = [1 + D^2  1 + D + D^2]) was used to generate and transmit a codeword over a BSC with transition probability ɛ < 0.5. The following sequence was received:

    v = (10, 10, 00, 10, 10, 11, 01, 00, ...).

To find the most likely codeword ĉ that corresponds to this v, use the Viterbi algorithm with the trellis diagram shown in Figure 12.
(Figure: trellis with the branch metrics and partial path metrics entered at each node; eliminated branches are marked with an X, ties with a dot.)
Fig. 12: Viterbi decoder for the R = 1/2, K = 3 encoder, transmission over a BSC.

At time zero start in state S_0 with a partial path metric µ^(0)(v | c_i) = 0. Using the bit metrics for the BSC with ɛ < 0.5 given earlier, the branch metrics for each of the first two branches are 1. Thus, the partial path metrics at time t = 1 are µ^(1)(10 | 00) = 1 and µ^(1)(10 | 11) = 1.
Continuing to add the branch metrics µ(v^(1) | c_i^(1)), the partial path metrics µ^(2)((10,10) | (00,00)) = 2, µ^(2)((10,10) | (00,11)) = 2, µ^(2)((10,10) | (11,01)) = 1, and µ^(2)((10,10) | (11,10)) = 3 are obtained at time t = 2. At time t = 3 things become more interesting. Now two branches enter each state and only the one that results in the larger partial path metric is kept; the other one is eliminated (indicated with an X). Thus, for instance, since 2 + 2 = 4 > 1 + 0 = 1, µ^(3)((10,10,00) | (00,00,00)) = 4, whereas the alternative path entering S_0 at t = 3 would only result in µ^(3)((10,10,00) | (11,01,11)) = 1. Similarly, for the two paths entering S_1 at t = 3 one finds either µ^(3)((10,10,00) | (00,00,11)) = 2 or µ^(3)((10,10,00) | (11,01,00)) = 3, and therefore the latter path and the corresponding partial path metric survive. If there is a tie, e.g., as in the case of the two paths entering S_0 at time t = 4, then one of the two paths is selected as survivor at random. In Figure 12 ties are marked with a dot following the value of the partial path metric. Using the partial path metrics at time t = 8, the ML decision at this time is to choose the codeword corresponding to the path with metric 13 (highlighted in Figure 12), i.e.,

    ĉ = (11, 10, 01, 10, 11, 11, 01, 00, ...),   corresponding to   û = (1, 1, 1, 0, 0, 1, 0, 1, ...).
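The whole computation can be replayed in a few lines of Python; a sketch of a hard-decision Viterbi decoder for encoder #1 (names and data layout are ours):

```python
def encode1(u):
    # Encoder #1 frames, used below to check the decoder output.
    s1 = s2 = 0
    frames = []
    for x in u:
        frames.append(((x + s2) % 2, (x + s1 + s2) % 2))
        s1, s2 = x, s1
    return frames

def viterbi_bsc(frames):
    # Maximizes the number of agreeing channel bits (the BSC bit metric
    # above), i.e. minimizes the Hamming distance to the received v.
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else None) for s in states}
    path = {(0, 0): []}
    for v in frames:
        new_metric, new_path = {s: None for s in states}, {}
        for s in states:
            if metric[s] is None:
                continue
            s1, s2 = s
            for x in (0, 1):
                c = ((x + s2) % 2, (x + s1 + s2) % 2)
                m = metric[s] + (c[0] == v[0]) + (c[1] == v[1])
                t = (x, s1)                      # next state
                if new_metric[t] is None or m > new_metric[t]:
                    new_metric[t], new_path[t] = m, path[s] + [x]
        metric, path = new_metric, new_path
    best = max((s for s in states if metric[s] is not None),
               key=lambda s: metric[s])
    return metric[best], path[best]

v = [(1, 0), (1, 0), (0, 0), (1, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
best_metric, u_hat = viterbi_bsc(v)
print(best_metric)   # 13, as in the example
```

Any surviving path with metric 13 differs from v in 16 - 13 = 3 bit positions; because of the random tie-breaking, u_hat may differ from the path highlighted in Figure 12, but its metric is the same.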
Definition: A channel whose output alphabet is the same as its input alphabet is said to make hard decisions, whereas a channel that uses a larger alphabet at the output than at the input is said to make soft decisions.

Note: In general, a channel which provides more differentiated output information is preferable to (and has more capacity than) one which has the same number of output symbols as input symbols, such as the BSC.

Definition: A decoder that operates on hard-decision channel outputs is called a hard decision decoder, and a decoder that operates on soft-decision channel outputs is called a soft decision decoder.
Example: Use again encoder #1, but this time with a soft-decision channel model with 2 inputs and 5 outputs as shown in the following figure. Each of the two inputs, 0 and 1, can be received as one of five output symbols: a good 0, a bad 0, an erasure, a bad 1, or a good 1.
Fig. 13: Discrete memoryless channel (DMC) with 2 inputs and 5 outputs.
The symbols between 0 and 1 at the channel output represent bad 0s and bad 1s (the latter denoted !), whereas the middle symbol is called an erasure (i.e., for an erasure it is uncertain whether the received value is closer to a 0 or a 1, whereas a bad 0, for example, is closer to 0 than to 1). The transition probabilities p_{Y|X}(v|c) of this channel, their base-2 logarithms log_2[p_{Y|X}(v|c)], and the column minima log_2[min_c p_{Y|X}(v|c)] are tabulated for c = 0, 1 and each of the five output symbols.
Using µ(v|c) = α ( log_2[p_{Y|X}(v|c)] - log_2[min_c p_{Y|X}(v|c)] ) with α = 1 and rounding to the nearest integer yields integer bit metrics µ(v|c) for c = 0, 1 and each of the five output symbols. The received sequence v, whose frames contain good symbols, bad symbols (such as the 1! in its second frame), and erasures, can now be decoded using the Viterbi algorithm as shown in Figure 14 on the next page.
(Figure: trellis with soft-decision branch metrics and partial path metrics; eliminated branches are marked with an X.)
Fig. 14: Viterbi decoder for the R = 1/2, K = 3 encoder and the 2-input, 5-output DMC.

Clearly, the Viterbi algorithm can be used either for hard or soft decision decoding by using appropriate bit metrics. In this example the ML decision (up to t = 8) is ĉ = (11, 01, 00, 10, 10, 00, 10, 10, ...), corresponding to û = (1, 0, 1, 1, 0, 1, 1, 0, ...).
FACULTY OF GRADUATE STUDIES On The Performance of MSOVA for UMTS and cdma2000 Turbo Codes By Hani Hashem Mis ef Supervisor Dr. Wasel Ghanem This Thesis was submitted in partial ful llment of the requirements
More informationLinear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More informationLecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
More informationSOLVING LINEAR SYSTEMS
SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis
More informationLecture 3: Linear Programming Relaxations and Rounding
Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can
More informationDiagonal, Symmetric and Triangular Matrices
Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by
More informationApproximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NPCompleteness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms
More informationLINEAR SYSTEMS. Consider the following example of a linear system:
LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in
More informationDERIVATIVES AS MATRICES; CHAIN RULE
DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Realvalued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More informationU.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra
U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationFUNDAMENTALS of INFORMATION THEORY and CODING DESIGN
DISCRETE "ICS AND ITS APPLICATIONS Series Editor KENNETH H. ROSEN FUNDAMENTALS of INFORMATION THEORY and CODING DESIGN Roberto Togneri Christopher J.S. desilva CHAPMAN & HALL/CRC A CRC Press Company Boca
More informationA crash course in probability and Naïve Bayes classification
Probability theory A crash course in probability and Naïve Bayes classification Chapter 9 Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. s: A person s
More informationQuotient Rings and Field Extensions
Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.
More informationprinceton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora
princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora Scribe: One of the running themes in this course is the notion of
More information5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1
5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition
More informationCODING THEORY a first course. Henk C.A. van Tilborg
CODING THEORY a first course Henk C.A. van Tilborg Contents Contents Preface i iv 1 A communication system 1 1.1 Introduction 1 1.2 The channel 1 1.3 Shannon theory and codes 3 1.4 Problems 7 2 Linear
More informationNotes on Complexity Theory Last updated: August, 2011. Lecture 1
Notes on Complexity Theory Last updated: August, 2011 Jonathan Katz Lecture 1 1 Turing Machines I assume that most students have encountered Turing machines before. (Students who have not may want to look
More informationLinear Programming. March 14, 2014
Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1
More informationSolution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
More informationExercises with solutions (1)
Exercises with solutions (). Investigate the relationship between independence and correlation. (a) Two random variables X and Y are said to be correlated if and only if their covariance C XY is not equal
More informationCryptography and Network Security. Prof. D. Mukhopadhyay. Department of Computer Science and Engineering. Indian Institute of Technology, Kharagpur
Cryptography and Network Security Prof. D. Mukhopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Module No. # 01 Lecture No. # 12 Block Cipher Standards
More information! Solve problem to optimality. ! Solve problem in polytime. ! Solve arbitrary instances of the problem. !approximation algorithm.
Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NPhard problem What should I do? A Theory says you're unlikely to find a polytime algorithm Must sacrifice one of
More informationBy choosing to view this document, you agree to all provisions of the copyright laws protecting it.
This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal
More informationThe finite field with 2 elements The simplest finite field is
The finite field with 2 elements The simplest finite field is GF (2) = F 2 = {0, 1} = Z/2 It has addition and multiplication + and defined to be 0 + 0 = 0 0 + 1 = 1 1 + 0 = 1 1 + 1 = 0 0 0 = 0 0 1 = 0
More informationby the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the yaxis We observe that
More informationGuide to SRW Section 1.7: Solving inequalities
Guide to SRW Section 1.7: Solving inequalities When you solve the equation x 2 = 9, the answer is written as two very simple equations: x = 3 (or) x = 3 The diagram of the solution is 65 43 21 0
More information6.02 Fall 2012 Lecture #5
6.2 Fall 22 Lecture #5 Error correction for linear block codes  Syndrome decoding Burst errors and interleaving 6.2 Fall 22 Lecture 5, Slide # Matrix Notation for Linear Block Codes Task: given kbit
More information(x + a) n = x n + a Z n [x]. Proof. If n is prime then the map
22. A quick primality test Prime numbers are one of the most basic objects in mathematics and one of the most basic questions is to decide which numbers are prime (a clearly related problem is to find
More informationLinear Dependence Tests
Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks
More informationCHAPTER 3 Numbers and Numeral Systems
CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural,
More informationChapter 5: Sequential Circuits (LATCHES)
Chapter 5: Sequential Circuits (LATCHES) Latches We focuses on sequential circuits, where we add memory to the hardware that we ve already seen Our schedule will be very similar to before: We first show
More informationCOMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012
Binary numbers The reason humans represent numbers using decimal (the ten digits from 0,1,... 9) is that we have ten fingers. There is no other reason than that. There is nothing special otherwise about
More information9.2 Summation Notation
9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a
More informationWhy? A central concept in Computer Science. Algorithms are ubiquitous.
Analysis of Algorithms: A Brief Introduction Why? A central concept in Computer Science. Algorithms are ubiquitous. Using the Internet (sending email, transferring files, use of search engines, online
More informationCHAPTER 3 Boolean Algebra and Digital Logic
CHAPTER 3 Boolean Algebra and Digital Logic 3.1 Introduction 121 3.2 Boolean Algebra 122 3.2.1 Boolean Expressions 123 3.2.2 Boolean Identities 124 3.2.3 Simplification of Boolean Expressions 126 3.2.4
More informationCHANNEL. 1 Fast encoding of information. 2 Easy transmission of encoded messages. 3 Fast decoding of received messages.
CHAPTER : Basics of coding theory ABSTRACT Part I Basics of coding theory Coding theory  theory of error correcting codes  is one of the most interesting and applied part of mathematics and informatics.
More information2.3 Scheduling jobs on identical parallel machines
2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed
More informationFE670 Algorithmic Trading Strategies. Stevens Institute of Technology
FE670 Algorithmic Trading Strategies Lecture 6. Portfolio Optimization: Basic Theory and Practice Steve Yang Stevens Institute of Technology 10/03/2013 Outline 1 MeanVariance Analysis: Overview 2 Classical
More informationminimal polyonomial Example
Minimal Polynomials Definition Let α be an element in GF(p e ). We call the monic polynomial of smallest degree which has coefficients in GF(p) and α as a root, the minimal polyonomial of α. Example: We
More informationCSE 135: Introduction to Theory of Computation Decidability and Recognizability
CSE 135: Introduction to Theory of Computation Decidability and Recognizability Sungjin Im University of California, Merced 0428, 302014 HighLevel Descriptions of Computation Instead of giving a Turing
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationApplied Algorithm Design Lecture 5
Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design
More informationFactoring. Factoring 1
Factoring Factoring 1 Factoring Security of RSA algorithm depends on (presumed) difficulty of factoring o Given N = pq, find p or q and RSA is broken o Rabin cipher also based on factoring Factoring like
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationNotes 11: List Decoding Folded ReedSolomon Codes
Introduction to Coding Theory CMU: Spring 2010 Notes 11: List Decoding Folded ReedSolomon Codes April 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami At the end of the previous notes,
More informationSYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89. by Joseph Collison
SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections
More informationProject Scheduling: PERT/CPM
Project Scheduling: PERT/CPM Project Scheduling with Known Activity Times (as in exercises 1, 2, 3 and 5 in the handout) and considering TimeCost TradeOffs (as in exercises 4 and 6 in the handout). This
More informationResearch on the UHF RFID Channel Coding Technology based on Simulink
Vol. 6, No. 7, 015 Research on the UHF RFID Channel Coding Technology based on Simulink Changzhi Wang Shanghai 0160, China Zhicai Shi* Shanghai 0160, China Dai Jian Shanghai 0160, China Li Meng Shanghai
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationMATHEMATICS (CLASSES XI XII)
MATHEMATICS (CLASSES XI XII) General Guidelines (i) All concepts/identities must be illustrated by situational examples. (ii) The language of word problems must be clear, simple and unambiguous. (iii)
More informationChapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm.
Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm. We begin by defining the ring of polynomials with coefficients in a ring R. After some preliminary results, we specialize
More informationPolarization codes and the rate of polarization
Polarization codes and the rate of polarization Erdal Arıkan, Emre Telatar Bilkent U., EPFL Sept 10, 2008 Channel Polarization Given a binary input DMC W, i.i.d. uniformly distributed inputs (X 1,...,
More informationArithmetic and Algebra of Matrices
Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational
More informationCHAPTER 2 Estimating Probabilities
CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a
More information! Solve problem to optimality. ! Solve problem in polytime. ! Solve arbitrary instances of the problem. #approximation algorithm.
Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NPhard problem What should I do? A Theory says you're unlikely to find a polytime algorithm Must sacrifice one of three
More informationThe Goldberg Rao Algorithm for the Maximum Flow Problem
The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationIntroduction to Algebraic Coding Theory
Introduction to Algebraic Coding Theory Supplementary material for Math 336 Cornell University Sarah A. Spence Contents 1 Introduction 1 2 Basics 2 2.1 Important code parameters..................... 4
More informationElementary Number Theory We begin with a bit of elementary number theory, which is concerned
CONSTRUCTION OF THE FINITE FIELDS Z p S. R. DOTY Elementary Number Theory We begin with a bit of elementary number theory, which is concerned solely with questions about the set of integers Z = {0, ±1,
More informationHomework 5 Solutions
Homework 5 Solutions 4.2: 2: a. 321 = 256 + 64 + 1 = (01000001) 2 b. 1023 = 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = (1111111111) 2. Note that this is 1 less than the next power of 2, 1024, which
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationSolution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
More informationChapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling
Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NPhard problem. What should I do? A. Theory says you're unlikely to find a polytime algorithm. Must sacrifice one
More informationCOMMUTATIVITY DEGREES OF WREATH PRODUCTS OF FINITE ABELIAN GROUPS
COMMUTATIVITY DEGREES OF WREATH PRODUCTS OF FINITE ABELIAN GROUPS IGOR V. EROVENKO AND B. SURY ABSTRACT. We compute commutativity degrees of wreath products A B of finite abelian groups A and B. When B
More informationCapacity Limits of MIMO Channels
Tutorial and 4G Systems Capacity Limits of MIMO Channels Markku Juntti Contents 1. Introduction. Review of information theory 3. Fixed MIMO channels 4. Fading MIMO channels 5. Summary and Conclusions References
More informationViterbi Decoding of Convolutional Codes
MIT 6.02 DRAFT Lecture Notes Last update: September 22, 2012 CHAPTER 8 Viterbi Decoding of Convolutional Codes This chapter describes an elegant and efficient method to decode convolutional codes, whose
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationProbability Interval Partitioning Entropy Codes
SUBMITTED TO IEEE TRANSACTIONS ON INFORMATION THEORY 1 Probability Interval Partitioning Entropy Codes Detlev Marpe, Senior Member, IEEE, Heiko Schwarz, and Thomas Wiegand, Senior Member, IEEE Abstract
More informationLinear Programming. April 12, 2005
Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first
More information10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method
578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More information. 0 1 10 2 100 11 1000 3 20 1 2 3 4 5 6 7 8 9
Introduction The purpose of this note is to find and study a method for determining and counting all the positive integer divisors of a positive integer Let N be a given positive integer We say d is a
More informationRNcoding of Numbers: New Insights and Some Applications
RNcoding of Numbers: New Insights and Some Applications Peter Kornerup Dept. of Mathematics and Computer Science SDU, Odense, Denmark & JeanMichel Muller LIP/Arénaire (CRNSENS LyonINRIAUCBL) Lyon,
More informationNODAL ANALYSIS. Circuits Nodal Analysis 1 M H Miller
NODAL ANALYSIS A branch of an electric circuit is a connection between two points in the circuit. In general a simple wire connection, i.e., a 'shortcircuit', is not considered a branch since it is known
More informationCoded modulation: What is it?
Coded modulation So far: Binary coding Binary modulation Will send R information bits/symbol (spectral efficiency = R) Constant transmission rate: Requires bandwidth expansion by a factor 1/R Until 1976:
More informationThe Capacity of LowDensity ParityCheck Codes Under MessagePassing Decoding
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 2, FEBRUARY 2001 599 The Capacity of LowDensity ParityCheck Codes Under MessagePassing Decoding Thomas J. Richardson Rüdiger L. Urbanke Abstract
More informationA Practical Scheme for Wireless Network Operation
A Practical Scheme for Wireless Network Operation Radhika Gowaikar, Amir F. Dana, Babak Hassibi, Michelle Effros June 21, 2004 Abstract In many problems in wireline networks, it is known that achieving
More informationChapter 6. Orthogonality
6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be
More information