Chapter 1 Introduction
1. Shannon's Information Theory
2. Source Coding Theorem
3. Channel Coding Theorem
4. Information Capacity Theorem
5. Introduction to Error Control Coding
Appendix A: Historical Notes
1. Shannon's Information Theory
The theory provides answers to two fundamental questions (among others):
(a) What is the irreducible complexity below which a signal cannot be compressed?
(b) What is the ultimate transmission rate for reliable communication over a noisy channel?

2. Source Coding Theorem (Shannon's first theorem)
The theorem can be stated as follows: Given a discrete memoryless source of entropy H(S), the average code-word length $\bar{L}$ for any distortionless source coding is bounded as
$$\bar{L} \ge H(S).$$
This theorem provides the mathematical tool for assessing data compaction, i.e. lossless data compression, of data generated by a discrete memoryless source. The entropy of a source is a function of the probabilities of the source symbols that constitute the alphabet of the source.

Entropy of a Discrete Memoryless Source
Assume that the source output is modeled as a discrete random variable, S, which takes on symbols from a fixed finite alphabet
$$S = \{ s_0, s_1, \ldots, s_{K-1} \}$$
with probabilities
$$P(S = s_k) = p_k, \quad k = 0, 1, 2, \ldots, K-1, \qquad \text{with } \sum_{k=0}^{K-1} p_k = 1.$$
Define the amount of information gained after observing the event $S = s_k$ as the logarithmic function
$$I(s_k) = \log_2\!\left(\frac{1}{p_k}\right) \ \text{bits.}$$
The entropy of the source is defined as the mean of $I(s_k)$ over the source alphabet S, given by
$$H(S) = E[I(s_k)] = \sum_{k=0}^{K-1} p_k I(s_k) = \sum_{k=0}^{K-1} p_k \log_2\!\left(\frac{1}{p_k}\right) \ \text{bits.}$$
The entropy is a measure of the average information content per source symbol. The source coding theorem is also known as the "noiseless coding theorem" in the sense that it establishes the condition for error-free encoding to be possible.
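A minimal sketch of this computation in Python (the helper name entropy and the example probabilities are illustrative, not taken from the notes):

```python
import math

def entropy(probs):
    """H(S) in bits for a discrete memoryless source with symbol
    probabilities probs (assumed to sum to 1)."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Example: a 4-symbol source with probabilities 1/2, 1/4, 1/8, 1/8.
print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits per symbol
# A prefix code with code-word lengths 1, 2, 3, 3 has average length
# exactly 1.75 bits, meeting the bound L >= H(S) with equality.
```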
3. Channel Coding Theorem (Shannon's 2nd theorem)
The channel coding theorem for a discrete memoryless channel is stated in two parts as follows:
(a) Let a discrete memoryless source with an alphabet S have entropy H(S) and produce symbols once every $T_s$ seconds. Let a discrete memoryless channel have capacity C and be used once every $T_c$ seconds. Then if
$$\frac{H(S)}{T_s} \le \frac{C}{T_c}$$
there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error.
(b) Conversely, if
$$\frac{H(S)}{T_s} > \frac{C}{T_c}$$
it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.
The theorem specifies the channel capacity C as a fundamental limit on the rate at which the transmission of reliable, error-free messages can take place over a discrete memoryless channel.

Mutual Information
The channel input x is selected from the alphabet $X = \{x_0, x_1, \ldots, x_{J-1}\}$ with entropy H(X). The channel output y is selected from the alphabet $Y = \{y_0, y_1, \ldots, y_{K-1}\}$. How can we measure the uncertainty about x after observing y?
Define the conditional entropy of X, given that $Y = y_k$, as
$$H(X \mid Y = y_k) = \sum_{j=0}^{J-1} P(x_j \mid y_k) \log_2\!\left(\frac{1}{P(x_j \mid y_k)}\right).$$
The mean of $H(X \mid Y = y_k)$ over the output alphabet Y is given by
$$H(X \mid Y) = \sum_{k=0}^{K-1} H(X \mid Y = y_k)\, P(y_k) = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} P(x_j, y_k) \log_2\!\left(\frac{1}{P(x_j \mid y_k)}\right).$$
The conditional entropy represents the amount of uncertainty remaining about the channel input after the channel output has been observed. The difference $H(X) - H(X \mid Y)$ is called the mutual information, which represents the uncertainty about the channel input that is resolved by observing the channel output. Denoting the mutual information by $I(X; Y)$, we may thus write
$$I(X; Y) = H(X) - H(X \mid Y).$$
Note that $I(X; Y) = I(Y; X)$.
Channel Capacity
The mutual information can be expressed as
$$I(X; Y) = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} P(x_j, y_k) \log_2\!\left(\frac{P(x_j \mid y_k)}{P(x_j)}\right).$$
The input probability distribution $\{P(x_j)\}$ is obviously independent of the channel. We can then maximize $I(X; Y)$ with respect to $\{P(x_j)\}$. Hence we define the channel capacity C as
$$C = \max_{\{P(x_j)\}} I(X; Y).$$
The channel capacity C is measured in bits per channel use. The channel coding theorem is also known as the noisy coding theorem.
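The double sum and the maximization can be checked numerically. The sketch below (Python, with an illustrative helper name) evaluates I(X;Y) from an input distribution and a channel transition matrix, and uses the fact that a uniform input maximizes it for a binary symmetric channel:

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits, given the input distribution p_x[j] and the channel
    transition probabilities p_y_given_x[j][k] = P(y_k | x_j)."""
    J, K = len(p_x), len(p_y_given_x[0])
    p_y = [sum(p_x[j] * p_y_given_x[j][k] for j in range(J)) for k in range(K)]
    total = 0.0
    for j in range(J):
        for k in range(K):
            p_jk = p_x[j] * p_y_given_x[j][k]        # P(x_j, y_k)
            if p_jk > 0:
                # P(x_j | y_k) / P(x_j) = P(y_k | x_j) / P(y_k) by Bayes' rule
                total += p_jk * math.log2(p_y_given_x[j][k] / p_y[k])
    return total

# Binary symmetric channel with crossover probability 0.1: the maximizing
# input distribution is uniform, so this value is the capacity C.
eps = 0.1
print(mutual_information([0.5, 0.5], [[1 - eps, eps], [eps, 1 - eps]]))
# ~0.531 bits per channel use
```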
4. Information Capacity Theorem (also known as the Shannon-Hartley law or Shannon's 3rd theorem)
It can be stated as follows: The information capacity of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density $N_0/2$ and limited in bandwidth to B, is given by
$$C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right) \ \text{bits/second}$$
where P is the average transmitted power. This theorem implies that, for given average transmitted power P and channel bandwidth B, we can transmit information at the rate C bits per second, with arbitrarily small probability of error, by employing sufficiently complex encoding systems.
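A quick numerical sketch of the formula (the bandwidth and SNR values are illustrative only):

```python
import math

def awgn_capacity(bandwidth_hz, power, n0):
    """Shannon-Hartley capacity C = B log2(1 + P / (N0 B)) in bits/second."""
    return bandwidth_hz * math.log2(1.0 + power / (n0 * bandwidth_hz))

# A 3 kHz channel at 30 dB signal-to-noise ratio, i.e. P / (N0 B) = 1000:
B = 3000.0
print(awgn_capacity(B, power=1000.0, n0=1.0 / B))   # about 3.0e4 bits/second
```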
Shannon limit
For an ideal system that transmits data at a rate $R_b = C$, the average power is $P = E_b R_b = E_b C$, where $E_b$ is the energy per bit. Then
$$\frac{C}{B} = \log_2\!\left(1 + \frac{E_b}{N_0}\frac{C}{B}\right), \qquad \text{i.e.} \qquad \frac{E_b}{N_0} = \frac{2^{C/B} - 1}{C/B}.$$
For infinite bandwidth, $E_b/N_0$ approaches the limiting value
$$\left(\frac{E_b}{N_0}\right)_{\infty} = \lim_{B \to \infty} \frac{E_b}{N_0} = \ln 2 = 0.693 = -1.6 \ \text{dB}.$$
This value is called the Shannon limit.
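The sketch below tabulates the required $E_b/N_0$ from this relation for a few spectral efficiencies (the values of C/B chosen are illustrative):

```python
import math

def ebn0_required(eta):
    """Minimum Eb/N0 (linear) at spectral efficiency eta = C/B, from
    eta = log2(1 + (Eb/N0) * eta)  =>  Eb/N0 = (2**eta - 1) / eta."""
    return (2.0 ** eta - 1.0) / eta

for eta in (2.0, 1.0, 0.1, 0.001):
    print(f"C/B = {eta:6.3f}  ->  Eb/N0 = {10 * math.log10(ebn0_required(eta)):6.2f} dB")
# As C/B -> 0 the required Eb/N0 tends to ln 2 = 0.693, i.e. about -1.59 dB
# (the Shannon limit).
```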
Appendix A. Historical Notes
The seeds of Shannon's information theory appeared first in a Bell Labs classified memorandum dated 1 September 1945, "A Mathematical Theory of Cryptography" (a revised form was later published in BSTJ). Shannon's work on the cryptography problem at Bell Labs brought him into contact with two scholars, R.V.L. Hartley and H. Nyquist, who were major consultants to the cryptography research. Nyquist (1924) had shown that a certain bandwidth was necessary in order to send telegraph signals at a definite rate. Hartley (1928) had attempted to quantify information as the logarithm of the number of possible messages built from a pool of symbols.
Shannon was aware of Norbert Wiener's work on cybernetics (he cited Wiener's book in his two 1948 articles on information theory). Wiener had recognized that the communication of information was a problem in statistics. Shannon had taken a mathematics course from Norbert Wiener when he was studying at MIT, and Shannon had access to the "Yellow Peril" report (Wiener's book) before publication while he worked on cryptographic research.

Major Publications of Shannon
"A Mathematical Theory of Communication," Parts I and II, published in BSTJ, 1948.
"Communication Theory of Secrecy Systems," published in BSTJ, 1949.
A paperback edition of "The Mathematical Theory of Communication" was published by the University of Illinois Press; many thousands of copies have been sold. In a review of the accomplishments of Bell Labs in communication science, Millman, in his edited book (1984) A History of Engineering and Science in the Bell System: Communication Science, says: "Probably the most spectacular development in communication mathematics to take place at Bell Laboratories was the formulation in the 1940's of information theory by C. E. Shannon." In his 1948 articles on information theory, Shannon credits J. W. Tukey (a professor at Princeton Univ.) with suggesting the word "bit" as a measure of information.
In 1984, J. L. Massey wrote a paper, "Information Theory: the Copernican System of Communications," published in IEEE Communications Magazine, vol. 22, no. 12. Shannon's theory of information is the scientific basis of communications in the same sense that the Copernican heliocentric theory is the scientific basis of astronomy. See also the commemorative issue of the IEEE Transactions on Information Theory, Oct. 1998, vol. 44, no. 6.
5. Introduction to Error Control Coding
5.1 Purpose of Error Control Coding
In data communications, coding is used for controlling transmission errors induced by channel noise or other impairments, such as fading and interference, so that error-free communication can be achieved. In data storage systems, coding is used for controlling storage errors (arising during retrieval) caused by storage-medium defects, dust particles and radiation, so that error-free storage can be achieved.
5.2 Coding Principle
Coding is achieved by adding properly designed redundant digits (bits) to each message. These redundant digits (bits) are used for detecting and/or correcting transmission (or storage) errors.

5.3 Types of Coding
Block and convolutional coding.
Block coding: A message of k digits is mapped into a structured sequence of n digits, called a codeword. (k: message length, n: code length, k/n: code rate.) The mapping operation is called encoding. Each encoding operation is independent of past encodings, i.e. it is memoryless. The collection of all codewords is called a "block code". A toy example is sketched below.
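A toy illustration of adding redundant digits, using a single parity-check code (this particular code is an illustration, not one discussed in the notes): each k-bit message is mapped to a (k+1)-bit codeword of rate k/(k+1) that can detect, but not correct, a single error.

```python
def spc_encode(message_bits):
    """(k+1, k) single parity-check encoding: append one redundant bit so
    that every codeword has an even number of ones."""
    return message_bits + [sum(message_bits) % 2]

def spc_check(received_bits):
    """Error detection: odd overall parity reveals an odd number of errors."""
    return sum(received_bits) % 2 == 0

codeword = spc_encode([1, 0, 1, 1])          # [1, 0, 1, 1, 1], rate 4/5
print(codeword, spc_check(codeword))         # no error detected
corrupted = codeword[:]
corrupted[2] ^= 1                            # flip one bit
print(corrupted, spc_check(corrupted))       # single error detected
```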
Convolutional coding: An information sequence is divided into (short) blocks of k digits each. Each k-digit message block is encoded into an n-digit coded block. The n-digit coded block depends not only on the corresponding k-digit message block but also on the m (≥ 1) previous message blocks; that is, the encoder has memory of order m. The encoder has k inputs and n outputs. An information sequence is encoded into a code sequence. The collection of all possible code sequences is called an (n, k, m) convolutional code. Normally, k and n are small integers with k < n. (k/n: code rate.) A small encoder sketch follows.
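A minimal sketch of a rate-1/2 convolutional encoder with (n, k, m) = (2, 1, 2); the generator polynomials (7, 5) in octal are a common textbook choice, assumed here for illustration rather than taken from the notes.

```python
def conv_encode(info_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """(n, k, m) = (2, 1, 2) convolutional encoder.  Each input bit produces
    n = 2 output bits that depend on the current bit and on the m = 2
    previous bits held in the shift register (memory of order m)."""
    state = [0, 0]                       # the m previous information bits
    encoded = []
    for bit in info_bits:
        window = [bit] + state
        encoded.append(sum(g * b for g, b in zip(g1, window)) % 2)
        encoded.append(sum(g * b for g, b in zip(g2, window)) % 2)
        state = [bit] + state[:-1]       # shift the register
    return encoded

print(conv_encode([1, 0, 1, 1]))         # [1, 1, 1, 0, 0, 0, 0, 1]
```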
5.4 Types of Errors and Channels
Random errors and burst errors.
Types of channels:
Random-error channels: deep-space channels, satellite channels, line-of-sight transmission channels, etc.
Burst-error channels: radio links, terrestrial microwave links, wire and cable transmission channels, etc.
5.5 Decoding
Suppose a codeword corresponding to a message is transmitted over a noisy channel. Let r be the received sequence. Based on r, the encoding rule and the noise characteristics of the channel, the receiver (or decoder) makes a decision on which message was actually transmitted. This decision-making operation is called decoding. The device that performs the decoding operation is called a decoder.
Two types of decoding: hard-decision decoding and soft-decision decoding.
Hard-decision decoding: When binary coding is used, the modulator has only binary inputs. If binary demodulator output quantization
is used, the decoder has only binary inputs. In this case, the demodulator is said to make hard decisions. Decoding based on the hard decisions made by the demodulator is called hard-decision decoding.
Soft-decision decoding: If the output of the demodulator consists of more than two quantization levels or is left unquantized, the demodulator is said to make soft decisions. Decoding based on the soft decisions made by the demodulator is called soft-decision decoding.
Hard-decision decoding is much easier to implement than soft-decision decoding. However, soft-decision decoding offers much better performance.
5.6 Some Channel Models
Binary Symmetric Channel (BSC); p: transition probability.
When BPSK modulation is used over the AWGN channel with optimum coherent detection and binary output quantization, the bit-error probability for equally likely signals is given by
$$p = Q\!\left(\sqrt{\frac{2E}{N_0}}\right), \qquad \text{where} \quad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-y^2/2}\, dy,$$
E: bit energy, $N_0/2$: power spectral density of the AWGN.
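A small sketch of this hard-decision channel model (the Q-function is computed from the complementary error function; the E/N0 values printed are illustrative):

```python
import math

def q_function(x):
    """Q(x) = (1/sqrt(2*pi)) * integral from x to infinity of exp(-y^2/2) dy."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bsc_transition_probability(e_n0_db):
    """BSC transition probability p = Q(sqrt(2E/N0)) for coherently detected
    BPSK over AWGN with binary (hard) output quantization."""
    e_n0 = 10.0 ** (e_n0_db / 10.0)
    return q_function(math.sqrt(2.0 * e_n0))

for e_n0_db in (0.0, 4.0, 8.0):
    print(f"E/N0 = {e_n0_db:4.1f} dB  ->  p = {bsc_transition_probability(e_n0_db):.2e}")
```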
Binary-input, 8-ary-output discrete channel (figure omitted).

5.7 Optimum Decoding
Suppose the codeword c corresponding to a certain message m is transmitted. Let r be the corresponding output of the demodulator. The decoder produces an estimate $\hat{m}$ of the message based on r. An optimum decoding rule is one that minimizes the probability of a decoding error. That is, $P(\hat{c} \ne c \mid r)$ is minimized or, equivalently, $P(\hat{c} = c \mid r)$ is maximized.
The decoding error probability is minimized for a given r by choosing $\hat{c}$ to be the codeword c that maximizes
$$P(c \mid r) = \frac{P(r \mid c)\, P(c)}{P(r)}.$$
That is, $\hat{c}$ is chosen to be the most likely codeword, given that r is received.

5.8 Maximum A Posteriori (MAP) Decoding
In general, we have
$$P(c \mid r) = \frac{P(r \mid c)\, P(c)}{P(r)}.$$
If knowledge or an estimate of P(c) is used for decoding, the technique is called MAP decoding.
5.9 Maximum Likelihood Decoding (MLD)
Suppose all the messages are equally likely. An optimum decoding can then be done as follows: for every codeword $c_j$, compute the conditional probability $P(r \mid c_j)$. The codeword with the largest conditional probability is chosen as the estimate $\hat{c}$ of the transmitted codeword c. This decoding rule is called maximum likelihood decoding (MLD).
MLD for a BSC
Let $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ be two binary sequences of n components. The Hamming distance between a and b, denoted $d_H(a, b)$, is defined as the number of places where a and b differ. In coding for a BSC, every codeword and every received sequence are binary sequences. Suppose some codeword is transmitted and the received sequence is $r = (r_1, r_2, \ldots, r_n)$. For a codeword $c_i$, the conditional probability $P(r \mid c_i)$ is
$$P(r \mid c_i) = p^{\,d_H(r, c_i)} (1 - p)^{\,n - d_H(r, c_i)}.$$
For $p < 1/2$, $P(r \mid c_i)$ is a monotonically decreasing function of $d_H(r, c_i)$. Then $P(r \mid c_i) > P(r \mid c_j)$ if and only if $d_H(r, c_i) < d_H(r, c_j)$.
The MLD is completed by the following steps:
(i) Compute $d_H(r, c_i)$ for all codewords $c_i$.
(ii) $c_i$ is taken as the transmitted codeword if $d_H(r, c_i) < d_H(r, c_j)$ for $j \ne i$.
(iii) Decode $c_i$ into the message $m_i$.
That is, the received vector r is decoded into the closest codeword. This is also called minimum distance (nearest neighbor) decoding; a small sketch follows.
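A minimal Python sketch of minimum distance decoding over a BSC (the codebook used, the (3, 1) repetition code, is only an illustration):

```python
def hamming_distance(a, b):
    """Number of places where the binary sequences a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def mld_decode(received, codebook):
    """Minimum distance (nearest neighbor) decoding: for a BSC with p < 1/2
    the most likely codeword is the one closest to r in Hamming distance."""
    return min(codebook, key=lambda c: hamming_distance(received, c))

codebook = [(0, 0, 0), (1, 1, 1)]            # (3, 1) repetition code
print(mld_decode((1, 0, 1), codebook))        # -> (1, 1, 1)
```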
5.10 Bounded Distance Decoding
Given a received word r, a t-error-correcting bounded distance decoder selects the codeword c that minimizes $d_H(r, c)$ if and only if there exists a c such that $d_H(r, c) \le t$. If no such c exists, a decoder failure is declared. All practical bounded distance decoders use some form of syndrome decoding. Bounded distance decoding is usually an incomplete decoding, since it decodes only those received words lying in a radius-t sphere about a codeword. A sketch is given below.
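A brute-force sketch of the rule (practical decoders would use syndrome decoding, as noted above; the (5, 1) repetition codebook and t values are illustrative):

```python
def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def bounded_distance_decode(received, codebook, t):
    """t-error-correcting bounded distance decoding: return the codeword
    within Hamming distance t of r, or None to signal a decoder failure
    when no codeword lies in the radius-t sphere around r."""
    best = min(codebook, key=lambda c: hamming_distance(received, c))
    return best if hamming_distance(received, best) <= t else None

codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]        # (5, 1) repetition code
print(bounded_distance_decode((1, 1, 0, 0, 1), codebook, t=2))  # (1, 1, 1, 1, 1)
print(bounded_distance_decode((1, 1, 0, 0, 1), codebook, t=1))  # None (failure)
```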