ECEN 5682 Theory and Practice of Error Control Codes


1 ECEN 5682 Theory and Practice of Error Control Codes Introduction to Block Codes University of Colorado Spring 2007

2 Definition: A block code of length n and size M over an alphabet with q symbols is a set of M q-ary n-tuples called codewords. Example: Code #1. Binary code of length n = 5 with M = 4 codewords given by C = {00000, 01011, 10101, 11110}. Definition: The rate R of a q-ary block code of length n with M codewords is given by

    R = (log_q M) / n.

3 Definition: The redundancy r of a q-ary block code of length n with M codewords is given by

    r = n - log_q M.

Example: Code #1 has rate R = (log_2 4)/5 = 2/5 = 0.4 and redundancy r = 5 - log_2 4 = 3 bits. Example: Code #2. 5-ary code of length n = 4 with M = 5 codewords given by C = {0000, 1342, 2134, 3421, 4213}. This code has rate R = (log_5 5)/4 = 1/4 = 0.25 and redundancy r = 4 - log_5 5 = 3 symbols.
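The rate and redundancy formulas above are easy to check numerically; a minimal Python sketch (function names are ours, not from the notes):

```python
from math import log

def rate(n, M, q):
    """Rate R = log_q(M) / n of a q-ary block code."""
    return log(M, q) / n

def redundancy(n, M, q):
    """Redundancy r = n - log_q(M)."""
    return n - log(M, q)

# Code #1: binary, n = 5, M = 4 codewords
assert abs(rate(5, 4, 2) - 0.4) < 1e-9
assert abs(redundancy(5, 4, 2) - 3) < 1e-9

# Code #2: 5-ary, n = 4, M = 5 codewords
assert abs(rate(4, 5, 5) - 0.25) < 1e-9
assert abs(redundancy(4, 5, 5) - 3) < 1e-9
```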

4 The goal when using error control codes is to detect and/or correct transmission errors. Suppose code #1 is used and the (corrupted) codeword v = (00101) is received. Comparing v with all legal codewords and marking the discrepancies with * yields:

    00000:  ..*.*
    01011:  .***.
    10101:  *....
    11110:  **.**

The discrepancies are the result of transmission errors. If all error positions are marked with one and all other positions with zero, then the received codeword v = (00101) corresponds to the set of possible errors E = {00101, 01110, 10000, 11011}, when code #1 is used. But which of these 4 errors is the right one?
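The candidate error set can be recovered mechanically, assuming an additive (XOR) channel so that e = v + c (mod 2) for each codeword c; a small Python sketch:

```python
# Candidate error patterns for received v under code #1, assuming an
# additive (mod-2, i.e. XOR) channel model: e = v + c for each codeword c.
code1 = ["00000", "01011", "10101", "11110"]
v = "00101"

def add_mod2(a, b):
    """Componentwise mod-2 sum of two bit strings."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

E = {add_mod2(v, c) for c in code1}
assert E == {"00101", "01110", "10000", "11011"}
```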

5 To decide which error out of a set of errors is the right one, one needs to make additional assumptions about the likelihood of errors and error patterns. The two most common models for the occurrence of errors are: (i) Independent and identically distributed (iid) errors with probability ɛ. This requires a memoryless transmission channel model. (ii) Burst errors of length L. If an error occurs, it is very likely that it is followed by L - 1 more errors. Burst errors occur for instance in mobile communications due to fading and in magnetic recording due to media defects. Burst errors can be converted to iid errors by the use of an interleaver.

6 More generally, and especially for non-binary codes, one also needs a model for the error amplitudes. Two possibilities are (i) Uniformly distributed non-zero error amplitudes. This is a good model for orthogonal signaling. (ii) Non-uniformly distributed non-zero error amplitudes with smaller error magnitudes more likely than larger ones. This is a good model for QAM signaling.

7 In addition to the models that describe which error pattern e is most likely, a transmission channel model is also needed that specifies how codewords c and error patterns e are combined to form the received codeword v = f(c, e). The most prevalent model is the additive model: the error vector e is added to the codeword c, often modulo q for a q-ary code, so that the received codeword is

    v = c + e.

A concise graphical way to describe simple error models is in the form of a discrete channel model that shows all possible transitions from the channel input X to the channel output Y, together with the associated transition probabilities p_{Y|X}(y|x).

8 Example: The simplest discrete channel model is the memoryless binary symmetric channel (BSC). This channel is completely described by the set of four transition probabilities:

    p_{Y|X}(0|0) = 1 - ɛ,   p_{Y|X}(1|0) = ɛ,
    p_{Y|X}(0|1) = ɛ,       p_{Y|X}(1|1) = 1 - ɛ.

Clearly P{Y ≠ X} = ɛ and thus the (uncoded) probability of a bit error is P_b(E) = ɛ.
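For a memoryless channel the probability of a whole received word factors over its symbols; a hedged Python sketch for the BSC (function name is ours):

```python
def bsc_word_prob(y, x, eps):
    """Probability of receiving word y given word x was sent over a
    memoryless BSC with crossover probability eps: the per-bit
    probabilities multiply because the channel is memoryless."""
    p = 1.0
    for yi, xi in zip(y, x):
        p *= (1 - eps) if yi == xi else eps
    return p

# two bit errors in five positions: eps^2 * (1 - eps)^3
assert abs(bsc_word_prob("00101", "00000", 0.1) - 0.1**2 * 0.9**3) < 1e-12
```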

9 Thus, if ɛ < 0.5 on a memoryless BSC, fewer errors are more likely and the right error pattern is the one with the fewest number of 1's in it. Note that, since all symbols are binary here, only errors of amplitude 1 are possible and no specification for the distribution of error amplitudes is needed. Example: Suppose q = 5, errors occur iid with Pr{Y ≠ X} = ɛ < 0.5, and the nonzero error amplitudes are uniformly distributed. The corresponding channel model is a memoryless 5-ary symmetric channel (5SC) with transition probabilities

    p_{Y|X}(y|x) = ɛ/4,    if y ≠ x,    x, y ∈ {0, 1, 2, 3, 4},
                   1 - ɛ,  if y = x,    x, y ∈ {0, 1, 2, 3, 4}.

In this case the decoding rule assumes again that the right error pattern is the one with the fewest nonzero symbols in it.

10 Example: Suppose again q = 5 and errors occur iid with P{Y ≠ X} = ɛ < 0.5. But now assume that only errors of magnitude 1 occur, with +1 and -1 being equally likely. This leads to another memoryless 5SC with transition probabilities

    p_{Y|X}(y|x) = ɛ/2,    if y = x ± 1 (mod 5),  x, y ∈ {0, 1, 2, 3, 4},
                   1 - ɛ,  if y = x,              x, y ∈ {0, 1, 2, 3, 4},
                   0,      otherwise.

Now the decoder decides on the error pattern with the fewest number of ±1 (mod 5) symbols as the right error.

11 Once an error and a channel model are defined, a distance measure between codewords can be defined. Then one can determine the minimum distance between any two distinct codewords, and this in turn determines how many errors a code can detect and/or correct under the given error and channel models. For the iid error model with (discrete) uniform error amplitude distribution the most appropriate measure is the Hamming distance, which is defined as follows. Definition: The Hamming distance d^(H)(x, y) (or simply d(x, y)) between two q-ary n-tuples x and y is the number of places in which they differ. Example: d(10221, 20122) = 3.

12 The Hamming distance is probably the most popular distance measure for error control codes. Another measure that is more suitable in cases where smaller error magnitudes are more likely than larger ones is the Lee distance, which is defined next. Definition: The Lee distance d^(L)(x, y) between two q-ary n-tuples x and y is defined as

    d^(L)(x, y) = |x_0 - y_0| + |x_1 - y_1| + ... + |x_{n-1} - y_{n-1}|,

where the magnitude |v| of a q-ary symbol v is computed modulo q as |v| = min{v, q - v}. Definition: The minimum distance d_min of a code C = {c_i, i = 0, 1, ..., M - 1} is the smallest distance between any two distinct codewords of the code, i.e.,

    d_min = min_{i ≠ j} d(c_i, c_j),    c_i, c_j ∈ C.

13 Example: Code #2 has Hamming distance 4 between every pair of distinct codewords, and Lee distance 6 between every pair of distinct codewords. Thus, for code #2, d^(H)_min = 4 and d^(L)_min = 6.
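Both distance measures, and the minimum distances of code #2, can be verified directly; a small Python sketch (function names are ours):

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of places in which two equal-length strings differ."""
    return sum(a != b for a, b in zip(x, y))

def lee_distance(x, y, q):
    """Sum of per-symbol magnitudes |x_i - y_i| with |v| = min(v, q - v)."""
    return sum(min(d, q - d) for d in (((int(a) - int(b)) % q) for a, b in zip(x, y)))

assert hamming_distance("10221", "20122") == 3

code2 = ["0000", "1342", "2134", "3421", "4213"]
dH = min(hamming_distance(x, y) for x, y in combinations(code2, 2))
dL = min(lee_distance(x, y, 5) for x, y in combinations(code2, 2))
assert (dH, dL) == (4, 6)
```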

14 Example: Code #3. Binary code with n = 10 and a set of M = 4 codewords. The Hamming distance between every pair of distinct codewords is at least 6; that is, code #3 has minimum Hamming distance 6.

15 Theorem: A code with minimum Hamming distance d_min can detect all error patterns with d_min - 1 or fewer nonzero components. Proof: The only error patterns that cannot be detected are those that make the transmitted codeword look like another codeword. But because the smallest Hamming distance between any two distinct codewords is d_min, this can only happen if the error pattern affects d_min or more coordinates of the transmitted codeword. QED Definition: The sphere of radius t about codeword c is the set

    S_t(c) = {v : d(c, v) ≤ t},

where d(., .) is the distance measure used (e.g., Hamming distance).

16 Example: Consider the codeword c = 01011 from binary code #1. Then, using Hamming distance as the distance measure,

    S_0(01011) = {01011}
    S_1(01011) = {01011, 11011, 00011, 01111, 01001, 01010}
    S_2(01011) = {01011, 11011, 00011, 01111, 01001, 01010, 10011, 11111,
                  11001, 11010, 00111, 00001, 00010, 01101, 01110, 01000}

Theorem: The Hamming distance satisfies the triangle inequality, i.e., for any 3 n-tuples x, y, z,

    d(x, y) + d(y, z) ≥ d(x, z).
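The spheres above can be enumerated by brute force over all n-tuples; a Python sketch (the function name is ours):

```python
from itertools import product

def hamming_sphere(c, t, q=2):
    """S_t(c): all q-ary n-tuples within Hamming distance t of the
    string c, by brute-force enumeration (fine for small n)."""
    n = len(c)
    return {"".join(map(str, w)) for w in product(range(q), repeat=n)
            if sum(a != int(b) for a, b in zip(w, c)) <= t}

S1 = hamming_sphere("01011", 1)
assert S1 == {"01011", "11011", "00011", "01111", "01001", "01010"}
assert len(hamming_sphere("01011", 2)) == 1 + 5 + 10   # C(5,0)+C(5,1)+C(5,2)
```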

17 d(x, y) + d(y, z) ≥ d(x, z). Proof: First note that for any u = (u_0, u_1, ..., u_{n-1}) and v = (v_0, v_1, ..., v_{n-1}), the Hamming distance satisfies

    d(u, v) = d(u_0, v_0) + d(u_1, v_1) + ... + d(u_{n-1}, v_{n-1}),    d(u_i, v_i) ∈ {0, 1},

where the addition is over the reals. Consider now a coordinate, say j, where x and z differ, i.e., x_j ≠ z_j. Then there are three possible cases for y_j: (i) y_j = x_j, which implies d(y_j, z_j) = 1. (ii) y_j = z_j, which implies d(y_j, x_j) = 1. (iii) y_j ≠ x_j and y_j ≠ z_j, which implies d(y_j, x_j) = 1 and d(y_j, z_j) = 1. Thus, in all three cases d(x, y) + d(y, z) increases by at least one while d(x, z) increases by exactly one. QED

18 Theorem: A code with minimum Hamming distance d_min can correct all error patterns with t or fewer nonzero components as long as 2t < d_min. Proof: Because of the triangle inequality, the spheres S_t(c_i) and S_t(c_j) of any two distinct codewords c_i and c_j contain no common elements as long as 2t < d_min. Theorem: The Hamming or sphere-packing bound for q-ary codes with d_min = 2t + 1 states that the redundancy r must satisfy

    r ≥ log_q [ Σ_{j=0}^{t} (n choose j) (q - 1)^j ].

Proof: Left as an exercise.

19 Definition: A code which satisfies the Hamming bound with equality is called a perfect code. Example: Code #4. The binary code with blocklength n = 7 and M = 16 codewords is a perfect code. By inspection d_min = 3 is found and thus all patterns of t = 1 errors are correctable. Therefore

    r ≥ log_2 [ (7 choose 0) + (7 choose 1) ] = log_2 (1 + 7) = 3.

But r = n - log_2 M = 7 - 4 = 3, i.e., the code satisfies the Hamming bound with equality. This code is known as the binary (7, 4, 3) Hamming code.

20 Example: Probably the most prominent and celebrated perfect code is the binary Golay code with blocklength n = 23, M = 2^12 codewords and minimum Hamming distance d_min = 7. It can correct all error patterns with up to t = 3 errors and thus the Hamming bound requires that

    r ≥ log_2 [ (23 choose 0) + (23 choose 1) + (23 choose 2) + (23 choose 3) ]
      = log_2 (1 + 23 + 253 + 1771) = log_2 (2048) = 11.

The actual redundancy of the code is r = 23 - log_2 2^12 = 23 - 12 = 11 and therefore it is perfect. Note: Don't take the name perfect code too literally. Perfect simply means that the spheres of radius t = (d_min - 1)/2 around all codewords fill out the whole codeword space perfectly. It does not necessarily mean that perfect codes are the best error detecting and/or correcting codes.
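The sphere-volume sums in the Hamming bound are quick to verify for both examples; a Python sketch (function name is ours, binary case only):

```python
from math import comb, log2

def sphere_volume(n, t, q=2):
    """Number of q-ary n-tuples in a Hamming sphere of radius t."""
    return sum(comb(n, j) * (q - 1) ** j for j in range(t + 1))

# (7,4,3) Hamming code, t = 1: r >= log2(1 + 7) = 3, and indeed r = 7 - 4 = 3
assert sphere_volume(7, 1) == 8
assert log2(sphere_volume(7, 1)) == 3.0

# (23,12,7) Golay code, t = 3: r >= log2(2048) = 11, and indeed r = 23 - 12 = 11
assert sphere_volume(23, 3) == 1 + 23 + 253 + 1771 == 2048
assert log2(sphere_volume(23, 3)) == 11.0
```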

21 How easy is it to find a block code by trial and error? Example: Binary rate R = 0.8 code of length n = 10, i.e., M = 2^8 = 256 codewords. There are (1024 choose 256) ways to choose the 256 codewords from the 1024 binary 10-tuples, an astronomically large number.
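Just how astronomical that count is can be checked with Python's exact binomial coefficient:

```python
from math import comb

# number of ways to pick M = 256 codewords from the 2^10 = 1024 binary 10-tuples
count = comb(1024, 256)
assert count > 10 ** 240   # well over 240 decimal digits
```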

22 Definition: A q-ary linear (n, k) block code C is defined as the set of all linear combinations, taken modulo q, of k independent vectors from V, where V is the set of all q-ary n-tuples. If C has minimum distance d_min, C is called a q-ary (n, k, d_min) code. Definition: A generator matrix G of a linear (n, k) code C is a k × n matrix whose rows form a basis for the k-dimensional subspace C of V. Definition: The q-ary k-tuple u = (u_0, u_1, ..., u_{k-1}) is used to denote a dataword. In general, it is assumed that there are no restrictions on u, i.e., it may take on all possible q^k values and, unless otherwise specified, all these values are equally likely. Other names for u are message or information word.

23 Definition: For the encoding operation, any one-to-one association between datawords u and codewords c may be used. For a linear code with generator matrix G, the most natural encoding procedure is to use c = u G. Example: Code #5. Ternary (n = 5, k = 2) code with generator matrix

    G = [ 1 0 2 1 2 ]
        [ 0 1 2 2 1 ]

This defines a linear code C whose codewords

    C = {00000, 01221, 02112, 10212, 11100, 12021, 20121, 21012, 22200},

lie in a 2-dimensional subspace of V, where V is the set of all 3-ary 5-tuples.

24 Example: Code #1 is a linear binary (n, k, d_min) = (5, 2, 3) code with generator matrix

    G = [ 1 0 1 0 1 ]
        [ 0 1 0 1 1 ]

To verify this, generate the set of codewords by taking all possible linear combinations (modulo 2, or modulo q in general) of the rows of G as follows:

    C = { (0, 0) G = (0, 0, 0, 0, 0),   (0, 1) G = (0, 1, 0, 1, 1),
          (1, 0) G = (1, 0, 1, 0, 1),   (1, 1) G = (1, 1, 1, 1, 0) }.
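Generating all codewords from a generator matrix is a few lines of Python; a sketch (function name is ours):

```python
from itertools import product

def generate_code(G, q):
    """All codewords u*G (mod q) of the linear code with generator matrix G."""
    k, n = len(G), len(G[0])
    return {"".join(str(sum(u[i] * G[i][j] for i in range(k)) % q)
                    for j in range(n))
            for u in product(range(q), repeat=k)}

G1 = [[1, 0, 1, 0, 1],
      [0, 1, 0, 1, 1]]
assert generate_code(G1, 2) == {"00000", "01011", "10101", "11110"}
```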

25 Example: Code #2 is a linear 5-ary (4, 1, 4) code with generator matrix G = [1 3 4 2]. The set of codewords is obtained as

    C = { 0 G = (0, 0, 0, 0),   1 G = (1, 3, 4, 2),   2 G = (2, 1, 3, 4),
          3 G = (3, 4, 2, 1),   4 G = (4, 2, 1, 3) }.

Example: Code #3 is a nonlinear binary code with n = 10. The sum of codewords

    (0, 1, 0, 0, 1, 0, 1, 1, 1, 0) + (1, 0, 0, 1, 0, 1, 1, 1, 0, 0) = (1, 1, 0, 1, 1, 1, 0, 0, 1, 0),

for instance, is not a codeword of the code. In fact, the 4 codewords of code #3 are linearly independent and form the basis of a 4-dimensional subspace of the space of all binary 10-tuples.

26 Definition: The Hamming weight w(c) of a codeword c ∈ C is equal to the number of nonzero components of c. The minimum Hamming weight w_min of a code C is equal to the smallest Hamming weight of any nonzero codeword in C. Definition: The Lee weight w^(L)(c) of a codeword c ∈ C is defined as

    w^(L)(c) = |c_0| + |c_1| + ... + |c_{n-1}|,

where the magnitude |v| of a q-ary symbol v is computed modulo q as |v| = min{v, q - v}. The minimum Lee weight w^(L)_min of a code C is equal to the smallest Lee weight of any nonzero codeword in C.

27 Theorem: For a linear code d_min = w_min. Proof: For any x, y ∈ C,

    d_min = min_{x ≠ y} d(x, y) = min_{x ≠ y} d(x - y, 0) = min_{c ≠ 0} w(c),

where c = x - y ∈ C because the code is linear. QED
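The theorem is easy to confirm on the linear code #1: the minimum pairwise distance equals the minimum nonzero weight.

```python
from itertools import combinations

code1 = ["00000", "01011", "10101", "11110"]
# minimum weight over nonzero codewords
w_min = min(sum(s != "0" for s in c) for c in code1 if c != "00000")
# minimum Hamming distance over distinct pairs
d_min = min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(code1, 2))
assert w_min == d_min == 3
```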

28 Standard Array for Decoding. A good conceptual, but practically inefficient, way to visualize the decoding operation for a q-ary linear code under the Hamming distance measure is the so-called standard array. It is set up as follows: (1) The first row of the array consists of all the codewords of the code, starting on the left with the all-zero codeword that must be present in every linear code. (2) The first column starts out with all q-ary n-tuples that are in the decoding sphere S_t(0) of radius t about the all-zero codeword c_0 = 0, where t is the maximum number of errors the code can correct. There are P = |S_t(0)| n-tuples in this sphere, and, assuming an additive (modulo q) error model, all are correctable error patterns (including the all-zero error). The elements in this column are called the coset leaders.

29 Standard Array for Decoding (contd.) (3) Making use of the linearity of the code, anything that applies to the all-zero codeword c_0 also applies to any other codeword c_j by simply translating the origin. Thus, the first P entries of the j-th column make up the decoding sphere S_t(c_j). All entries in the j-th column are obtained by adding (modulo q) the error pattern on the left to the codeword above. (4) Each of the rows in the standard array is called a coset. Altogether, the first P rows contain the M = q^k distinct decoding spheres S_t(c_j) for j = 0, 1, ..., M - 1. If the code is a perfect code, then P M = q^n; else there are (q^n - P M) = (Q - P) M, where Q = q^{n-k}, distinct q-ary n-tuples that do not yet appear in the array. These correspond to error patterns with more than t errors, but in general only a few of these are correctable.

30 Standard Array for Decoding (contd.) (5) To complete the standard array, organize the (Q - P) M q-ary n-tuples that do not yet appear in the array into Q - P cosets. For each coset, select an error pattern of smallest Hamming weight that has not yet appeared anywhere in the array as a coset leader, and complete the coset by adding the error pattern on the left to the codeword above. Because of the linearity of the code it can be shown that it is always possible to fill the bottom Q - P rows in this way with distinct n-tuples and that the set of all Q rows contains all q^n possible q-ary n-tuples.
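Steps (1) through (5) can be sketched in Python: scan all n-tuples in order of increasing weight, so that each new coset automatically gets a minimum-weight leader (function name is ours).

```python
from itertools import product

def standard_array_cosets(code, q=2):
    """Partition all q-ary n-tuples into cosets of a linear code,
    picking a minimum-weight representative of each coset as its leader."""
    n = len(next(iter(code)))
    codeset = {tuple(map(int, c)) for c in code}
    cosets, seen = [], set()
    # scanning in order of increasing weight guarantees minimum-weight leaders
    for e in sorted(product(range(q), repeat=n), key=lambda t: sum(x != 0 for x in t)):
        if e in seen:
            continue
        coset = {tuple((ei + ci) % q for ei, ci in zip(e, c)) for c in codeset}
        seen |= coset
        cosets.append((e, coset))
    return cosets

cosets = standard_array_cosets({"00000", "01011", "10101", "11110"})
assert len(cosets) == 8                           # Q = 2^(5-2) rows
leaders = [ld for ld, _ in cosets]
assert sum(sum(ld) <= 1 for ld in leaders) == 6   # P = 6: zero + 5 single errors
```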

31 Decoding using the standard array consists of looking up the received n-tuple in the array and returning the codeword above it as the result. Definition: A decoder which decodes only received n-tuples within decoding spheres of radius t or less, but not the whole decoding region, is called an incomplete decoder. If one of the n-tuples in the bottom Q - P rows of the standard array is received, an incomplete decoder declares a detected but uncorrectable error pattern. Conversely, a complete decoder assigns a nearby codeword to every received n-tuple. Note: For a perfect t-error correcting code P = Q, and no coset leader has weight greater than t.

32 The following figure shows the contents of the standard array graphically.

                       |<-- decoding sphere of radius t -->|<-- decoding region

    codewords ---->  c0=0   : c1      : c2      ... : cM-1      :
                  /  e1     : c1+e1   : c2+e1   ... : cM-1+e1   :  \
       coset ----:-> e2     : c1+e2   : c2+e2   ... : cM-1+e2   :   | P rows:
    (coset leader    e3     : c1+e3   : c2+e3   ... : cM-1+e3   :   | 1,2,..,t
     = error of       .     :   .     :   .         :   .       :   | error
     weight <= t)     .     :   .     :   .         :   .       :   | patterns
                  \  eP-1   : c1+eP-1 : c2+eP-1 ... : cM-1+eP-1 :  /
                     eP       c1+eP     c2+eP   ...   cM-1+eP      \
                      .         .         .             .           | Q-P rows:
                      .         .         .             .           | more than
                     eQ-1     c1+eQ-1   c2+eQ-1 ...   cM-1+eQ-1    /  t errors

33 Example: Code #1 has minimum (Hamming) distance d_min = 3 and thus is a single error correcting code. The standard array for this code (with one valid choice of the last two coset leaders) is:

                 00000   01011   10101   11110    <- codewords
                 10000   11011   00101   01110   \
                 01000   00011   11101   10110    |
                 00100   01111   10001   11010    | single errors
                 00010   01001   10111   11100    |
                 00001   01010   10100   11111   /
                 11000   10011   01101   00110   \  double errors
                 10010   11001   00111   01100   /

The selection of the coset leaders for the last two rows is not unique. For example, 00110 and 01100 could have been used instead of 11000 and 10010.

34 Because the size of the standard array increases exponentially in the block length n, it is quite clearly not very practical even for moderately large values of n. Therefore the concept of a parity check matrix is introduced, which will lead to a more compact decoding method using syndromes. Definition: Let C ⊆ V, where V is the set of all q-ary n-tuples, be a linear (n, k) code. Then the dual or orthogonal code of C, denoted C⊥ ("C perp"), is defined by

    C⊥ = {u ∈ V : u · w = 0 for all w ∈ C}.

The orthogonal complement of C has dimension n - k and thus C⊥ is a linear (n, n - k) code.

35 Definition: A parity check matrix H of a linear (n, k) code C is an (n - k) × n matrix whose rows form a basis for the (n - k)-dimensional subspace C⊥ of the set of all q-ary n-tuples V. That is, any parity check matrix of C is a generator matrix of C⊥. Theorem: Let C be a q-ary linear (n, k) code with generator matrix G and parity check matrix H. Then, using arithmetic modulo q,

    c_i H^T = 0,

where T denotes transpose, for all c_i ∈ C, and G H^T = 0. Proof: Follows directly from the definition of C⊥ and the definition of H. QED

36 Example: Code #5 (ternary (5, 2, 3) code) has generator matrix G and parity check matrix H given by

    G = [ 1 0 2 1 2 ]        H = [ 1 1 1 0 0 ]
        [ 0 1 2 2 1 ]            [ 2 1 0 1 0 ]
                                 [ 1 2 0 0 1 ]

Thus, C⊥ is the set

    C⊥ = {00000, 12001, 21002, 21010, 00011, 12012, 12020, 21021, 00022,
          11100, 20101, 02102, 02110, 11111, 20112, 20120, 02121, 11122,
          22200, 01201, 10202, 10210, 22211, 01212, 01220, 10221, 22222}

37 Theorem: A linear code C contains a nonzero codeword of Hamming weight w iff a linearly dependent set of w columns of H exists. Proof: Let c ∈ C have weight w. From c H^T = 0 we can thus find a set of w linearly dependent columns of H. Conversely, if H contains a linearly dependent set of w columns, then we can construct a codeword c with nonzero coefficients corresponding to the w columns, such that c H^T = 0. QED Corollary: A linear code C has minimum weight w_min iff every set of w_min - 1 columns of H is linearly independent and at least one set of w_min columns of H is linearly dependent.

38 Definition: Elementary row operations on a matrix are the following: (i) Interchange of any two rows. (ii) Multiplication of any row by a nonzero scalar. (iii) Replacement of any row by the sum of itself and a multiple of any other row. Definition: A matrix is said to be in row-echelon form if it satisfies the following conditions: (i) The leading term of every nonzero row is a one. (ii) Every column with a leading term has all other entries zero. (iii) The leading term of any row is to the right of the leading term in every previous row. All-zero rows (if any) are placed at the bottom. Note: Under modulo q arithmetic, any matrix can be put into row-echelon form by elementary row operations if q is a prime number.

39 Example: Let q = 11 and consider a 3 × 3 matrix A whose first row has leading entry 6. Multiply the first row by 6^{-1} = 2 (mod 11). Then replace the second row by the difference of the second row minus the new first row. Next, subtract 2 times the new first row from the third row and replace the third row with the result. Now multiply the second row by 2^{-1} = 6 (mod 11) and subtract 2 times this row from the first row. Finally, note that the third row is just a multiple of the second row, so it can be replaced by an all-zero row. The result is A in row-echelon form.
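The reduction procedure generalizes to any matrix over GF(p); the matrix of this example did not survive in these notes, so the Python sketch below uses a stand-in matrix over GF(11) whose third row is likewise dependent on the first two (function name and matrix are ours).

```python
def row_echelon_mod_p(A, p):
    """Reduced row-echelon form of A over GF(p), p prime, via
    elementary row operations with modular inverses."""
    A = [row[:] for row in A]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] % p), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        inv = pow(A[r][c], -1, p)          # inverse exists since p is prime
        A[r] = [(x * inv) % p for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A

# stand-in matrix over GF(11); its third row lies in the span of the first two
A = [[6, 1, 2],
     [2, 4, 5],
     [3, 6, 9]]
assert row_echelon_mod_p(A, 11) == [[1, 2, 0], [0, 0, 1], [0, 0, 0]]
```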

40 Definition: Two codes which are the same except for a permutation of codeword components are called equivalent. The generator matrices G and G′ of equivalent codes are related as follows. The code corresponding to G is the set of all linear combinations of rows of G and is thus unchanged under elementary row operations. Permutation of the columns of G corresponds to permutation of codeword components, and therefore two codes are equivalent if their generator matrices G and G′ are related by (i) column permutations, and (ii) elementary row operations.

41 From this it follows that every generator matrix G of a linear code is equivalent to one in row-echelon form. Because G is a k × n matrix whose rows span a k-dimensional subspace, all rows of G must be linearly independent. This proves the following Theorem: Every generator matrix of a q-ary linear code, where q is a prime (or a prime power), is equivalent to one of the form G = [I_k P], or G = [P I_k], where I_k is a k × k identity matrix and P is a k × (n - k) matrix. Definition: A code C with codewords whose first (or last) k components are the unmodified information symbols is called a systematic code. The remaining n - k codeword symbols are called parity symbols. A systematic code has generator matrix G = [I_k P] (or G = [P I_k]).

42 Let u = (u_0, u_1, ..., u_{k-1}) be the information word and let c = (c_0, c_1, ..., c_{n-1}) be the corresponding codeword. If G is in systematic form,

    G = [ 1 0 ... 0  p_{0,k}    p_{0,k+1}    ...  p_{0,n-1}   ]
        [ 0 1 ... 0  p_{1,k}    p_{1,k+1}    ...  p_{1,n-1}   ]
        [ . . ... .     .          .                 .        ]
        [ 0 0 ... 1  p_{k-1,k}  p_{k-1,k+1}  ...  p_{k-1,n-1} ]

then the components of c = u G are c_j = u_j, for 0 ≤ j ≤ k - 1, and

    c_j = u_0 p_{0,j} + u_1 p_{1,j} + ... + u_{k-1} p_{k-1,j},    k ≤ j ≤ n - 1.

This latter set of equations is known as the set of parity-check equations of the code.

43 Theorem: The systematic form of H corresponding to G = [I_k P] is H = [-P^T I_{n-k}]. Proof: Multiplying corresponding submatrices in G and H^T together yields

    G H^T = [I_k P] [ -P      ] = -I_k P + P I_{n-k} = 0.
                    [ I_{n-k} ]

Thus, H = [-P^T I_{n-k}] satisfies G H^T = 0. QED
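The construction H = [-P^T I_{n-k}] is easy to check numerically; a Python sketch with a small ternary P chosen for illustration (function names and P are ours):

```python
def systematic_H(P, q):
    """Build H = [-P^T | I_{n-k}] (mod q) from the P of G = [I_k | P]."""
    k, nk = len(P), len(P[0])
    return [[(-P[i][m]) % q for i in range(k)] +
            [1 if j == m else 0 for j in range(nk)]
            for m in range(nk)]

def matmul_mod(A, B, q):
    """Matrix product A*B with entries reduced mod q."""
    return [[sum(a * b for a, b in zip(row, col)) % q for col in zip(*B)]
            for row in A]

q = 3
P = [[2, 1, 2],
     [2, 2, 1]]
k = len(P)
G = [[1 if j == i else 0 for j in range(k)] + P[i] for i in range(k)]  # [I_k | P]
H = systematic_H(P, q)
HT = [list(col) for col in zip(*H)]
# verify G H^T = 0 (mod q)
assert all(x == 0 for row in matmul_mod(G, HT, q) for x in row)
```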

44 Written out explicitly, the systematic form of H is

    H = [ -p_{0,k}    -p_{1,k}    ...  -p_{k-1,k}    1 0 ... 0 ]
        [ -p_{0,k+1}  -p_{1,k+1}  ...  -p_{k-1,k+1}  0 1 ... 0 ]
        [     .           .                 .        . . ... . ]
        [ -p_{0,n-1}  -p_{1,n-1}  ...  -p_{k-1,n-1}  0 0 ... 1 ]

From c H^T = 0 it thus follows for the m-th row of H that

    -c_0 p_{0,k+m} - c_1 p_{1,k+m} - ... - c_{k-1} p_{k-1,k+m} + c_{k+m} = 0,    0 ≤ m ≤ n - k - 1.

Letting j = k + m and using the fact that for a systematic code c_i = u_i, 0 ≤ i ≤ k - 1, one obtains

    c_j = u_0 p_{0,j} + u_1 p_{1,j} + ... + u_{k-1} p_{k-1,j},    k ≤ j ≤ n - 1,

which is the same set of parity check equations as before. This shows that a systematic linear (n, k) code is completely specified either by its generator matrix G or by its parity check matrix H.

45 Example: Let q = 11 and consider an (8, 2) code with a given generator matrix G. Putting G into row-echelon form yields a matrix G′. Note that G and G′ produce exactly the same set of codewords, but using a different set of basis vectors and therefore a different mapping from datawords u to codewords c. To obtain a generator matrix in systematic form, permute the second and third columns of G′, which gives G_sys = [I_2 P].

46 Now, using H_sys = [-P^T I_{n-k}], one easily finds the systematic parity check matrix H_sys. Finally, to obtain a parity check matrix H for the original generator matrix G, all the column permutations that were necessary to obtain G_sys from G need to be undone; here only columns two and three need to be permuted. A quick check shows that indeed G H^T = 0 modulo 11.

47 Theorem: Singleton bound. The minimum distance of any linear (n, k) code satisfies d_min ≤ n - k + 1. Proof: Any linear code can be converted to systematic form (possibly permuting coordinates, which does not affect d_min) and thus G = [I_k P]. Since P is a k × (n - k) matrix and systematic codewords with only one nonzero information symbol exist, such a codeword has weight at most 1 + (n - k), and the result follows. QED Note: It can be shown that the Singleton bound also applies to non-linear codes.

48 Definition: Any linear code whose d_min satisfies d_min = n - k + 1 is called maximum distance separable (MDS). Note: The name maximum distance separable code comes from the fact that such a code has the maximum possible (Hamming) distance between codewords and that the codeword symbols can be separated into data symbols and parity check symbols (i.e., the code has a systematic encoder). Example: Code #6. The ternary (4, 2) code with generator matrix

    G = [ 1 0 2 2 ]
        [ 0 1 2 1 ]

is an MDS code. The set of codewords is

    C = {0000, 0121, 0212, 1022, 1110, 1201, 2011, 2102, 2220}.

From this it is easily seen that d_min = 3 = n - k + 1, which proves the claim that this code is MDS.
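The MDS claim for code #6 can be checked by brute force; the sketch below uses a systematic generator matrix consistent with the listed codewords (the matrix in these notes was garbled, so this G is a reconstruction).

```python
from itertools import combinations, product

G6 = [[1, 0, 2, 2],   # reconstructed generator matrix matching the listed codewords
      [0, 1, 2, 1]]
q, n, k = 3, 4, 2

code = set()
for u in product(range(q), repeat=k):
    code.add(tuple(sum(u[i] * G6[i][j] for i in range(k)) % q for j in range(n)))

dmin = min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(code, 2))
assert dmin == n - k + 1 == 3   # Singleton bound met with equality => MDS
```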

49 Definition: Let C be a linear (n, k) code and let v = c + e, where c ∈ C is a codeword and e is an error vector of length n, be a received n-tuple. The syndrome s of v is defined by s = v H^T. Theorem: All vectors in the same coset (cf. standard array decomposition of a linear code) have the same syndrome, unique to that coset. Proof: If v and v′ are in the same coset, then v = c_i + e and v′ = c_j + e for some coset leader e and codewords c_i, c_j. But, for any codeword c, c H^T = 0, and therefore

    s = v H^T = c_i H^T + e H^T = e H^T   and   s′ = v′ H^T = c_j H^T + e H^T = e H^T,

so s = s′. Conversely, suppose that s = s′. Then s - s′ = (v - v′) H^T = 0, which implies that v - v′ is a codeword. But that further implies that v and v′ are in the same coset. QED

50 Note: In practice, this theorem has some important consequences. To decode a linear code, one does not need to store the whole standard array. Only the mapping from the syndrome to the most likely error pattern needs to be stored. Example: Syndrome decoding for code #1. This is a binary code with parity check matrix

    H = [ 1 0 1 0 0 ]
        [ 0 1 0 1 0 ]
        [ 1 1 0 0 1 ]

Computing s = e H^T for e = 0, all single error patterns, and some double error patterns, the following table that uniquely relates syndromes to error patterns is obtained.

51
    Error e    Syndrome s
    00000      000
    10000      101
    01000      011
    00100      100
    00010      010
    00001      001
    ---------------------
    11000      110
    10010      111

Note that an incomplete (or bounded distance) decoder would only use the first six entries (above the dividing line) for decoding. The choice of the error patterns for the last two entries is somewhat arbitrary (just as it was in the case of the standard array), and other double error patterns that yield the same syndromes could have been used. Suppose now that v = (11101) was received. To decode v, compute s = v H^T = (011). From the syndrome lookup table the corresponding error pattern is e = (01000). Finally, the corrected codeword c is obtained as c = v - e = (10101).
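The whole syndrome decoding procedure fits in a few lines of Python. The parity check matrix below is the systematic H of code #1, consistent with the worked example (s = 011, e = 01000); the matrix in these notes was garbled, so this H is a reconstruction.

```python
from itertools import product

H = [[1, 0, 1, 0, 0],   # reconstructed systematic parity check matrix of code #1
     [0, 1, 0, 1, 0],
     [1, 1, 0, 0, 1]]

def syndrome(v):
    """s = v H^T over GF(2)."""
    return tuple(sum(vi * hi for vi, hi in zip(v, row)) % 2 for row in H)

# syndrome -> lowest-weight error pattern (scan patterns in order of weight)
table = {}
for e in sorted(product(range(2), repeat=5), key=sum):
    table.setdefault(syndrome(e), e)

v = (1, 1, 1, 0, 1)
s = syndrome(v)
e = table[s]
c = tuple((vi - ei) % 2 for vi, ei in zip(v, e))
assert s == (0, 1, 1) and e == (0, 1, 0, 0, 0) and c == (1, 0, 1, 0, 1)
```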

52 Note: To construct a linear q-ary single error correcting code with redundancy r = n - k, one can start from a parity check matrix H whose columns are q-ary r-tuples with the property that they are all distinct, even when multiplied by an arbitrary nonzero q-ary scalar. The resulting codes are called Hamming codes. Example: Natural parameters (n, k, d_min) of commonly used small binary linear codes are: (7, 4, 3) (15, 11, 3) (15, 7, 5) (15, 5, 7) (23, 12, 7) (31, 26, 3) (31, 21, 5) (31, 16, 7) (31, 11, 11) (63, 57, 3) (63, 51, 5) (63, 45, 7) (63, 39, 9) (63, 36, 11) (127, 120, 3) (127, 113, 5) (127, 106, 7) (127, 99, 9) (127, 92, 11) (255, 247, 3) (255, 239, 5) (255, 231, 7) (255, 223, 9) (255, 215, 11) Note that most of the block lengths of these codes are of the form 2^m - 1 for some integer m. Such block lengths are called primitive block lengths.

53 Modified Linear Block Codes. Often the natural parameters of block codes are not suitable for a particular application; e.g., for computer storage applications the data length is typically a multiple of 8, whereas the natural parameter k of a code may be a crummy number like 113. Thus, it may be necessary to change either k or n or both. To explain the different procedures, code #7, a linear binary (6, 3, 3) code with generator matrix G and parity check matrix H, is used. The six different modifications that can be applied to the parameters n and k of a linear block code are: lengthening (n+, k+), shortening (n-, k-), extending (n+, k=), puncturing (n-, k=), augmenting (n=, k+), and expurgating (n=, k-).

54 Lengthening. Increase blocklength n by adding more data symbols while keeping the redundancy r = n - k fixed. The result is a code that has n and k increased by the same amount. In the best case d_min will be unchanged, but it can drop to as low as 1 and needs to be reexamined carefully. Example: Code #7 lengthened by 1 results in a (7, 4, 3) code.

55 Shortening. Decrease blocklength n by dropping data symbols while keeping the redundancy r fixed. The resulting code has n and k reduced by the same amount. In most cases d_min will be unchanged; occasionally d_min may increase. Example: Code #7 shortened by 1 results in a (5, 2, 3) code.

56 Extending. Increase blocklength n by adding more parity check symbols while keeping k fixed. The result is a code that has n and r = n - k increased by the same amount. The minimum distance may or may not increase and needs to be reexamined. A common method to extend a code from n to n + 1 is to add an overall parity check. Example: Code #7 extended by 1 results in a (7, 3, 4) code.
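The overall parity check mentioned above is easy to sketch in Python. Since code #7's codewords did not survive in these notes, the example below extends code #1 instead, turning the (5, 2, 3) code into a (6, 2, 4) code (function name is ours).

```python
def extend_with_parity(code, q=2):
    """Append an overall parity symbol so every extended codeword sums to 0 mod q."""
    return {cw + ((-sum(cw)) % q,) for cw in code}

# extending the (5,2,3) code #1 raises d_min (= w_min, the code is linear) to 4
code1 = {(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 0, 1), (1, 1, 1, 1, 0)}
ext = extend_with_parity(code1)
w_min = min(sum(c) for c in ext if any(c))
assert w_min == 4
```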

57 Puncturing. Decrease blocklength n by dropping parity check symbols while keeping k fixed. The resulting code has n and r = n - k decreased by the same amount. Except for the trivial case of removing all-zero columns from G, the minimum distance decreases. Example: Code #7 punctured by 1 yields a (5, 3, 2) code.

58 Augmenting. Increase datalength k while keeping n fixed by reducing the redundancy r = n - k. The result is a code which has k increased by the same amount as r = n - k is decreased. Because of the reduction of r, the minimum distance generally decreases. Example: Code #7 augmented by 1 gives a (6, 4, 2) code.

59 Expurgating. Decrease datalength k while keeping n fixed by increasing the redundancy r = n - k. The resulting code has k decreased by the same amount as r = n - k is increased. The increase in r may or may not lead to an increase in d_min. Example: Code #7 expurgated by 1 gives a (6, 2, 4) code.

60 Definition: u|u+v construction. Let u = (u_0, u_1, ..., u_{n-1}) and v = (v_0, v_1, ..., v_{n-1}) be two q-ary n-tuples and define

    u|u+v = (u_0, u_1, ..., u_{n-1}, u_0 + v_0, u_1 + v_1, ..., u_{n-1} + v_{n-1}),

where the addition is modulo q addition. Let C_1 be a q-ary linear (n, k_1, d_min = d_1) code and let C_2 be a q-ary linear (n, k_2, d_min = d_2) code. A new q-ary code C of length 2n is then defined by C = {u|u+v : u ∈ C_1, v ∈ C_2}. The generator matrix of the (2n, k_1 + k_2) code C is

    G = [ G_1  G_1 ]
        [ 0    G_2 ]

where 0 is a k_2 × n all-zero matrix, G_1 is the generator matrix of C_1 and G_2 is the generator matrix of C_2.

61 Theorem: The minimum distance of the code C obtained from the u|u+v construction is d_min(C) = min{2 d_1, d_2}. Proof: Let x = u|u+v and y = u′|u′+v′ be two distinct codewords of C. Then

    d(x, y) = w(u - u′) + w(u - u′ + v - v′),

where d(., .) denotes Hamming distance and w(.) denotes Hamming weight. Case (i): v = v′. Then u ≠ u′ and d(x, y) = 2 w(u - u′) ≥ 2 d_1. Case (ii): v ≠ v′. Since w(a) + w(a + b) ≥ w(b) for any n-tuples a and b, it follows that d(x, y) ≥ w(v - v′) ≥ d_2. Both bounds are attained by suitable codewords (take v = v′ = 0 and u - u′ of weight d_1, or u = u′ = 0 and v - v′ of weight d_2), hence d_min(C) = min{2 d_1, d_2}. QED
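The construction and the theorem can be checked on a tiny binary example, taking C_1 to be the (2, 2, 1) code of all 2-tuples and C_2 the (2, 1, 2) repetition code (function names and the component codes are ours, chosen for illustration).

```python
from itertools import combinations, product

def u_u_plus_v(C1, C2, q=2):
    """All codewords (u | u+v) for u in C1, v in C2, componentwise mod q."""
    return {u + tuple((a + b) % q for a, b in zip(u, v)) for u in C1 for v in C2}

def dmin(code):
    """Minimum Hamming distance over distinct pairs of codewords."""
    return min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(code, 2))

C1 = set(product(range(2), repeat=2))   # (2,2,1): all binary 2-tuples, d1 = 1
C2 = {(0, 0), (1, 1)}                   # (2,1,2): repetition code, d2 = 2
C = u_u_plus_v(C1, C2)
assert len(C) == 2 ** (2 + 1)           # a (4, 3) code
assert dmin(C) == min(2 * dmin(C1), dmin(C2)) == 2
```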


More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

Solving Systems of Linear Equations Using Matrices

Solving Systems of Linear Equations Using Matrices Solving Systems of Linear Equations Using Matrices What is a Matrix? A matrix is a compact grid or array of numbers. It can be created from a system of equations and used to solve the system of equations.

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

CODING THEORY a first course. Henk C.A. van Tilborg

CODING THEORY a first course. Henk C.A. van Tilborg CODING THEORY a first course Henk C.A. van Tilborg Contents Contents Preface i iv 1 A communication system 1 1.1 Introduction 1 1.2 The channel 1 1.3 Shannon theory and codes 3 1.4 Problems 7 2 Linear

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

The four [10,5,4] binary codes

The four [10,5,4] binary codes 1 Preliminaries The four [10,,] binary codes There are four distinct [10; ; ] binary codes. We shall prove this in a moderately elementary way, using the MacWilliams identities as the main tool. (For the

More information

= 2 + 1 2 2 = 3 4, Now assume that P (k) is true for some fixed k 2. This means that

= 2 + 1 2 2 = 3 4, Now assume that P (k) is true for some fixed k 2. This means that Instructions. Answer each of the questions on your own paper, and be sure to show your work so that partial credit can be adequately assessed. Credit will not be given for answers (even correct ones) without

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

6.02 Fall 2012 Lecture #5

6.02 Fall 2012 Lecture #5 6.2 Fall 22 Lecture #5 Error correction for linear block codes - Syndrome decoding Burst errors and interleaving 6.2 Fall 22 Lecture 5, Slide # Matrix Notation for Linear Block Codes Task: given k-bit

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

( ) which must be a vector

( ) which must be a vector MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

More information

Lecture 1: Systems of Linear Equations

Lecture 1: Systems of Linear Equations MTH Elementary Matrix Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Lecture 1 Systems of Linear Equations ² Systems of two linear equations with two variables

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information

Lecture Notes 2: Matrices as Systems of Linear Equations

Lecture Notes 2: Matrices as Systems of Linear Equations 2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

Linearly Independent Sets and Linearly Dependent Sets

Linearly Independent Sets and Linearly Dependent Sets These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Row Echelon Form and Reduced Row Echelon Form

Row Echelon Form and Reduced Row Echelon Form These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

How To Prove The Dirichlet Unit Theorem

How To Prove The Dirichlet Unit Theorem Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

More information

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3 MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

More information

1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0

1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0 Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are

More information

1.2 Solving a System of Linear Equations

1.2 Solving a System of Linear Equations 1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Polarization codes and the rate of polarization

Polarization codes and the rate of polarization Polarization codes and the rate of polarization Erdal Arıkan, Emre Telatar Bilkent U., EPFL Sept 10, 2008 Channel Polarization Given a binary input DMC W, i.i.d. uniformly distributed inputs (X 1,...,

More information

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Quotient Rings and Field Extensions

Quotient Rings and Field Extensions Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

More information

Coping with Bit Errors using Error Correction Codes

Coping with Bit Errors using Error Correction Codes MIT 6.02 DRAFT Lecture Notes Last update: September 23, 2012 CHAPTER 5 Coping with Bit Errors using Error Correction Codes Recall our main goal in designing digital communication networks: to send information

More information

Math 312 Homework 1 Solutions

Math 312 Homework 1 Solutions Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

Chapter 1. Search for Good Linear Codes in the Class of Quasi-Cyclic and Related Codes

Chapter 1. Search for Good Linear Codes in the Class of Quasi-Cyclic and Related Codes Chapter 1 Search for Good Linear Codes in the Class of Quasi-Cyclic and Related Codes Nuh Aydin and Tsvetan Asamov Department of Mathematics, Kenyon College Gambier, OH, USA 43022 {aydinn,asamovt}@kenyon.edu

More information

Binary Adders: Half Adders and Full Adders

Binary Adders: Half Adders and Full Adders Binary Adders: Half Adders and Full Adders In this set of slides, we present the two basic types of adders: 1. Half adders, and 2. Full adders. Each type of adder functions to add two binary bits. In order

More information

COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH. 1. Introduction

COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH. 1. Introduction COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH ZACHARY ABEL 1. Introduction In this survey we discuss properties of the Higman-Sims graph, which has 100 vertices, 1100 edges, and is 22 regular. In fact

More information

Codes for Network Switches

Codes for Network Switches Codes for Network Switches Zhiying Wang, Omer Shaked, Yuval Cassuto, and Jehoshua Bruck Electrical Engineering Department, California Institute of Technology, Pasadena, CA 91125, USA Electrical Engineering

More information

Arithmetic and Algebra of Matrices

Arithmetic and Algebra of Matrices Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational

More information

Matrix Representations of Linear Transformations and Changes of Coordinates

Matrix Representations of Linear Transformations and Changes of Coordinates Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

K80TTQ1EP-??,VO.L,XU0H5BY,_71ZVPKOE678_X,N2Y-8HI4VS,,6Z28DDW5N7ADY013

K80TTQ1EP-??,VO.L,XU0H5BY,_71ZVPKOE678_X,N2Y-8HI4VS,,6Z28DDW5N7ADY013 Hill Cipher Project K80TTQ1EP-??,VO.L,XU0H5BY,_71ZVPKOE678_X,N2Y-8HI4VS,,6Z28DDW5N7ADY013 Directions: Answer all numbered questions completely. Show non-trivial work in the space provided. Non-computational

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

8 Primes and Modular Arithmetic

8 Primes and Modular Arithmetic 8 Primes and Modular Arithmetic 8.1 Primes and Factors Over two millennia ago already, people all over the world were considering the properties of numbers. One of the simplest concepts is prime numbers.

More information

Notes on Factoring. MA 206 Kurt Bryan

Notes on Factoring. MA 206 Kurt Bryan The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

More information

Chapter 3. if 2 a i then location: = i. Page 40

Chapter 3. if 2 a i then location: = i. Page 40 Chapter 3 1. Describe an algorithm that takes a list of n integers a 1,a 2,,a n and finds the number of integers each greater than five in the list. Ans: procedure greaterthanfive(a 1,,a n : integers)

More information

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A = MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

Orthogonal Projections

Orthogonal Projections Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

More information

Two classes of ternary codes and their weight distributions

Two classes of ternary codes and their weight distributions Two classes of ternary codes and their weight distributions Cunsheng Ding, Torleiv Kløve, and Francesco Sica Abstract In this paper we describe two classes of ternary codes, determine their minimum weight

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

Group Theory. Contents

Group Theory. Contents Group Theory Contents Chapter 1: Review... 2 Chapter 2: Permutation Groups and Group Actions... 3 Orbits and Transitivity... 6 Specific Actions The Right regular and coset actions... 8 The Conjugation

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Section 1.7 22 Continued

Section 1.7 22 Continued Section 1.5 23 A homogeneous equation is always consistent. TRUE - The trivial solution is always a solution. The equation Ax = 0 gives an explicit descriptions of its solution set. FALSE - The equation

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Notes 11: List Decoding Folded Reed-Solomon Codes

Notes 11: List Decoding Folded Reed-Solomon Codes Introduction to Coding Theory CMU: Spring 2010 Notes 11: List Decoding Folded Reed-Solomon Codes April 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami At the end of the previous notes,

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6. Cuboids. and. vol(conv(p )) Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

More information

Polynomial Invariants

Polynomial Invariants Polynomial Invariants Dylan Wilson October 9, 2014 (1) Today we will be interested in the following Question 1.1. What are all the possible polynomials in two variables f(x, y) such that f(x, y) = f(y,

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel

Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 2, FEBRUARY 2002 359 Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel Lizhong Zheng, Student

More information

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system 1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

More information

Figure 1.1 Vector A and Vector F

Figure 1.1 Vector A and Vector F CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

More information

The last three chapters introduced three major proof techniques: direct,

The last three chapters introduced three major proof techniques: direct, CHAPTER 7 Proving Non-Conditional Statements The last three chapters introduced three major proof techniques: direct, contrapositive and contradiction. These three techniques are used to prove statements

More information

Vector Spaces 4.4 Spanning and Independence

Vector Spaces 4.4 Spanning and Independence Vector Spaces 4.4 and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear Independence of a set

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 3 Fall 2008 III. Spaces with special properties III.1 : Compact spaces I Problems from Munkres, 26, pp. 170 172 3. Show that a finite union of compact subspaces

More information

Section 8.2 Solving a System of Equations Using Matrices (Guassian Elimination)

Section 8.2 Solving a System of Equations Using Matrices (Guassian Elimination) Section 8. Solving a System of Equations Using Matrices (Guassian Elimination) x + y + z = x y + 4z = x 4y + z = System of Equations x 4 y = 4 z A System in matrix form x A x = b b 4 4 Augmented Matrix

More information

How To Find A Nonbinary Code Of A Binary Or Binary Code

How To Find A Nonbinary Code Of A Binary Or Binary Code Notes on Coding Theory J.I.Hall Department of Mathematics Michigan State University East Lansing, MI 48824 USA 9 September 2010 ii Copyright c 2001-2010 Jonathan I. Hall Preface These notes were written

More information

Settling a Question about Pythagorean Triples

Settling a Question about Pythagorean Triples Settling a Question about Pythagorean Triples TOM VERHOEFF Department of Mathematics and Computing Science Eindhoven University of Technology P.O. Box 513, 5600 MB Eindhoven, The Netherlands E-Mail address:

More information