Chapter 3: Digital Audio Processing and Data Compression

1 Chapter 3: Digital Audio Processing and Data Compression

Review of number systems: 2's complement and sign-and-magnitude binary.
The MSB of a data word is reserved as a sign bit: 0 is positive, 1 is negative. The rest of the bits of the data word represent the magnitude.
Range of fixed-point representation of integers:
8-bit 2's complement: from -128 to +127
16-bit 2's complement: from -32768 to +32767
Fixed-point representation of fractional numbers: 2's complement representation with integer and fraction parts.

2 Digital Audio Processing

Fixed-point representation of fractional numbers
For 16-bit arithmetic:
Q15: 1 sign bit, 0 integer bits, 15 fraction bits, representing a number between -1.0 and +1.0
Q14: 1 sign bit, 1 integer bit, 14 fraction bits, representing a number between -2.0 and +2.0
Q13: 1 sign bit, 2 integer bits, 13 fraction bits, representing a number between -4.0 and +4.0
For 32-bit arithmetic:
Q31: 1 sign bit, 0 integer bits, 31 fraction bits, representing a number between -1.0 and +1.0
Q28: 1 sign bit, 3 integer bits, 28 fraction bits, representing a number between -8.0 and +8.0
Q25: 1 sign bit, 6 integer bits, 25 fraction bits, representing a number between -64.0 and +64.0
Examples:
0x0A00 in Q15 of 16-bit arithmetic = 2560 / 2^15 = 0.078125
0x0A00 in Q14 of 16-bit arithmetic = 2560 / 2^14 = 0.15625 (note that the same word interpreted in Q14 is twice its Q15 value)
0xF7 in Q7 of 8-bit arithmetic = -9 / 2^7 = -0.0703125

3 Fixed-point representation of fractional numbers

Addition/Subtraction
The fraction points of both numbers must be aligned prior to the calculation, e.g., Q15 + Q15 is allowed but Q15 + Q14 is not; you have to convert the Q14 number to Q15 prior to the addition (remember, shifting a Q14 number 1 bit to the left makes it a Q15 number, but mind that it might overflow).
There is a one-bit increase in precision, e.g., 0.9 + 0.9 = 1.8 cannot be stored in a 16-bit Q15 format -> to keep a 16-bit result, sacrifice 1 bit of precision by shifting the result one bit to the right.
Multiplication
Multiplying two numbers A and B gives a result whose total number of bits = (bits of A) + (bits of B). There is no need to keep 2 sign bits in the result, so shift 1 bit left.
In 16-bit arithmetic, Q15 x Q15 gives a 32-bit result (Q30); shifting 1 bit left gives Q31 in 32-bit format.
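A minimal C sketch of these rules, assuming 16-bit int16_t samples (the helper names q15_add and q15_mul are illustrative, not from the slides): Q15 addition with saturation against overflow, and Q15 x Q15 multiplication where the Q30 product is reduced to Q15 by dropping the redundant sign bit.

#include <stdint.h>
#include <stdio.h>

/* Saturating Q15 addition: clamps the 17-bit sum back into 16 bits. */
static int16_t q15_add(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;       /* one extra bit of headroom    */
    if (sum >  32767) sum =  32767;              /* saturate instead of wrapping */
    if (sum < -32768) sum = -32768;
    return (int16_t)sum;
}

/* Q15 x Q15: the 32-bit product is Q30 with two sign bits; taking bits 30..15
   is equivalent to shifting left by 1 (to Q31) and keeping the high 16 bits. */
static int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t prod = (int32_t)a * (int32_t)b;      /* Q30 in 32 bits          */
    if (prod == 0x40000000) prod = 0x3FFFFFFF;   /* guard the -1.0 * -1.0 case */
    return (int16_t)(prod >> 15);
}

int main(void)
{
    int16_t x = 0x0A00;                          /* 0.078125 in Q15 */
    printf("x + x = %f\n", q15_add(x, x) / 32768.0);   /* 0.156250 */
    printf("x * x = %f\n", q15_mul(x, x) / 32768.0);   /* 0.006104 */
    return 0;
}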

4 Floating Point Representation and Arithmetic

Floating-point arithmetic offers the advantage of eliminating the scaling-factor problem and also expands the range of values over that of fixed-point arithmetic.
Format
A floating-point number consists of two parts, a fraction f and an exponent e, and represents the value f x r^e, where r is the radix (base). Both f and e are signed.
Normalized fraction
The magnitude of the normalized fraction has an absolute value within the range [1/r, 1); for a binary number (r = 2) this range becomes [0.5, 1). For example, 0.0011 x 2^0 becomes 0.11 x 2^-2 after normalization.
Addition/Subtraction
When adding or subtracting two floating-point numbers, the exponents must be compared and made equal by a shifting operation on one of the fractions.
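A small C illustration of this format and of exponent alignment before addition; the numbers 6.5 and 0.375 are arbitrary, and frexp/ldexp from the standard library are used only to expose the normalized fraction and exponent of a double.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 6.5, b = 0.375;
    int ea, eb;
    double fa = frexp(a, &ea);   /* a = fa * 2^ea, with 0.5 <= |fa| < 1 */
    double fb = frexp(b, &eb);   /* b = fb * 2^eb                        */

    printf("a = %g * 2^%d, b = %g * 2^%d\n", fa, ea, fb, eb);

    /* Align: shift the fraction with the smaller exponent to the right. */
    while (eb < ea) { fb /= 2.0; eb++; }

    double fsum = fa + fb;       /* may need re-normalization afterwards */
    printf("sum = %g * 2^%d = %g\n", fsum, ea, ldexp(fsum, ea));
    return 0;
}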

5 Digital Audio Processing

Common Audio File Formats in Computer Systems
The WAVE file format (.wav) is a file format for storing digital audio (waveform) data. It supports a variety of bit resolutions, sample rates, and channels of audio. This format is very popular on PC platforms and is widely used in professional programs that process digital audio waveforms. It uses Microsoft's version of the Electronic Arts Interchange File Format method of storing data in "chunks".
WAVE File Structure
A WAVE file is a collection of a number of different types of chunks. There is a required Format ("fmt ") chunk, which contains important parameters describing the waveform, such as its sample rate. The Data chunk, which contains the actual waveform data, is also required. All other chunks are optional. The Format chunk must precede the Data chunk. All applications that use WAVE must be able to read the two required chunks and can choose to selectively ignore the optional chunks.

6 WAVE File Structure

Sample Points and Sample Frames
A sample point is a value representing a sample of a sound at a given moment in time.
For waveforms with greater than 8-bit resolution, each sample point is stored as a linear, 2's-complement value which may be from 9 to 32 bits wide. For example, each sample point of a 16-bit waveform would be a 16-bit word where 32767 (0x7FFF) is the highest value and -32768 (0x8000) is the lowest value.
For 8-bit (or less) waveforms, each sample point is a linear, unsigned byte where 255 is the highest value and 0 is the lowest value.
A sample point should be rounded up to a size which is a multiple of 8 bits when stored in a WAVE. This makes the WAVE easier to read into computer memory:
from 1 to 8 bits wide -> stored as an 8-bit byte (i.e., unsigned char)
from 9 to 16 bits wide -> stored as a 16-bit word (i.e., signed short)
from 17 to 24 bits wide -> stored as a three-byte signed integer
from 25 to 32 bits wide -> stored as a 32-bit double word (i.e., signed long)
The data bits should be left-justified, with any remaining (i.e., pad) bits zeroed. For example, a 12-bit sample point is stored left-justified in a 16-bit word with its 4 low-order pad bits set to zero.

7 WAVE File Structure

Sample Points and Sample Frames
For multichannel sounds (for example, a stereo waveform), single sample points from each channel are interleaved. The sample points that are meant to be played simultaneously are collectively called a sample frame. In the example of a stereo waveform, every two sample points make up one sample frame. For a monophonic waveform, a sample frame is merely a single sample point.
For example, for stereo (2 channels) the sample points are packed into frames as left, right, left, right, and so on; a sketch of this packing follows.
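A minimal C sketch of packing two mono channel buffers into interleaved stereo sample frames (the buffer contents are arbitrary illustrative values):

#include <stdio.h>

#define FRAMES 4

int main(void)
{
    short left[FRAMES]  = { 100, 200, 300, 400 };
    short right[FRAMES] = { -10, -20, -30, -40 };
    short frames[2 * FRAMES];                 /* interleaved output, L0 R0 L1 R1 ... */

    for (int i = 0; i < FRAMES; i++) {
        frames[2 * i]     = left[i];          /* channel 0 first */
        frames[2 * i + 1] = right[i];         /* then channel 1  */
    }

    for (int i = 0; i < 2 * FRAMES; i++)
        printf("%d ", frames[i]);
    printf("\n");
    return 0;
}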

8 WAVE File Structure

Format chunk
The Format ("fmt ") chunk describes fundamental parameters of the waveform data, such as the sample rate, bit resolution, and how many channels of digital audio are stored in the WAVE.
The chunk ID is always "fmt ". The chunkSize field is the number of bytes in the chunk.
The wFormatTag field indicates whether compression is used when storing the data. If compression is used, wFormatTag is some value other than 1; if no compression is used, wFormatTag = 1.

9 WAVE File Structure

Format chunk
The wChannels field contains the number of audio channels for the sound. A value of 1 means monophonic sound, 2 means stereo, and 6 means 5+1 surround.
The dwSamplesPerSec field is the sample rate at which the sound is to be played back, in sample frames per second, e.g., 44100.
The dwAvgBytesPerSec field indicates how many bytes will be played every second. Its value should be equal to the following formula, rounded up to the next whole number: dwSamplesPerSec * wBlockAlign.
The wBlockAlign field should be equal to the following formula, rounded to the next whole number: wChannels * (wBitsPerSample / 8).
Essentially, wBlockAlign is the size of a sample frame in bytes, e.g., a sample frame for a 16-bit mono wave is 2 bytes and a sample frame for a 16-bit stereo wave is 4 bytes.
The wBitsPerSample field indicates the bit resolution of a sample point, i.e., a 16-bit waveform would have wBitsPerSample = 16.
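The field layout above can be written as a C structure. This is a minimal sketch of the "fmt " chunk as described; on disk the fields are little-endian and packed with no padding, which this plain declaration does not itself enforce.

#include <stdint.h>
#include <stdio.h>

struct wave_fmt_chunk {
    char     chunkID[4];        /* "fmt "                                    */
    uint32_t chunkSize;         /* number of bytes of fields that follow     */
    uint16_t wFormatTag;        /* 1 = PCM (no compression)                  */
    uint16_t wChannels;         /* 1 = mono, 2 = stereo, 6 = 5+1 surround    */
    uint32_t dwSamplesPerSec;   /* sample rate, e.g. 44100                   */
    uint32_t dwAvgBytesPerSec;  /* dwSamplesPerSec * wBlockAlign             */
    uint16_t wBlockAlign;       /* wChannels * (wBitsPerSample / 8)          */
    uint16_t wBitsPerSample;    /* bit resolution of a sample point, e.g. 16 */
};

int main(void)
{
    /* 16-bit stereo PCM at 44100 Hz: wBlockAlign = 2 * (16/8) = 4 bytes */
    struct wave_fmt_chunk fmt = { {'f','m','t',' '}, 16, 1, 2, 44100, 176400, 4, 16 };
    printf("bytes per second: %u\n", fmt.dwAvgBytesPerSec);
    return 0;
}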

10 WAVE File Structure

Data chunk
The Data ("data") chunk contains the actual sample frames, i.e., all channels of waveform data.
The chunkID is always "data". chunkSize is the number of bytes in the chunk, not counting the 8 bytes used by the ID and Size fields.
The waveformData array contains the actual waveform data. The data is arranged into sample frames. The number of sample frames in waveformData is determined by dividing chunkSize by the Format chunk's wBlockAlign.
The Data chunk is required. One and only one Data chunk may appear in a WAVE.

11 Digital Audio Processing

Review of Digital Filters
Applications of digital filters in audio:
Equalization, mixing
Over-sampling A/D conversion
Linear prediction for audio coding
Quadrature Mirror Filtering in MPEG audio codecs
Music and sound synthesis
Digital sound effect generation
Finite Impulse Response (FIR) filter
A non-recursive filter with feed-forward paths only:
y[n] = b0*x[n] + b1*x[n-1] + ... + b(N-1)*x[n-N+1]
where x[n] is the input signal, y[n] is the output signal, and the b(k) are the filter coefficients. In the z-transform domain,
H(z) = b0 + b1*z^-1 + ... + b(N-1)*z^-(N-1)
where z^-1 is called a unit delay element. An FIR filter can be designed to have linear phase.
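A minimal C sketch of a direct-form FIR filter; the 4-tap moving-average coefficients are an arbitrary illustration, not taken from the slides.

#include <stdio.h>

#define NTAPS 4

/* y[n] = sum over k of b[k] * x[n-k], feed-forward only */
static double fir(const double b[NTAPS], double state[NTAPS], double x)
{
    for (int k = NTAPS - 1; k > 0; k--)   /* shift the delay line (unit delays z^-1) */
        state[k] = state[k - 1];
    state[0] = x;

    double y = 0.0;
    for (int k = 0; k < NTAPS; k++)
        y += b[k] * state[k];             /* multiply-accumulate with coefficients */
    return y;
}

int main(void)
{
    const double b[NTAPS] = { 0.25, 0.25, 0.25, 0.25 };  /* moving average */
    double state[NTAPS] = { 0 };
    double x[8] = { 1, 0, 0, 0, 1, 1, 1, 1 };            /* test input     */

    for (int n = 0; n < 8; n++)
        printf("y[%d] = %.3f\n", n, fir(b, state, x[n]));
    return 0;
}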

12 Digital Audio Processing

Infinite Impulse Response (IIR) filter
A recursive filter with feedback paths:
y[n] = b0*x[n] + b1*x[n-1] + ... + bM*x[n-M] - a1*y[n-1] - ... - aN*y[n-N]
In the z-transform domain,
H(z) = (b0 + b1*z^-1 + ... + bM*z^-M) / (1 + a1*z^-1 + ... + aN*z^-N)
Digital filtering requires memory storage, multiplication, and addition operations. Assume M = N (equal numerator and denominator orders).
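A minimal C sketch of a second-order IIR section (direct form I) following the difference equation above; the coefficients are arbitrary illustrative values, not a designed filter.

#include <stdio.h>

/* y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2] */
typedef struct { double b0, b1, b2, a1, a2, x1, x2, y1, y2; } biquad;

static double biquad_step(biquad *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
             - f->a1 * f->y1 - f->a2 * f->y2;          /* feedback paths */
    f->x2 = f->x1; f->x1 = x;                          /* update delays  */
    f->y2 = f->y1; f->y1 = y;
    return y;
}

int main(void)
{
    biquad f = { 0.2, 0.4, 0.2, -0.3, 0.1, 0, 0, 0, 0 };   /* illustrative only */
    for (int n = 0; n < 8; n++)                            /* impulse response  */
        printf("y[%d] = %.4f\n", n, biquad_step(&f, n == 0 ? 1.0 : 0.0));
    return 0;
}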

13 Basics of compression and digital representation

Shannon's model of communication
Shannon's model is general and covers many aspects of communication, including error correction, data compression, and cryptography.
Data rate: a measure of information rate in terms of the number of bits per second, e.g., a telephone PCM channel rate is 64 kbps, and stereo CD audio is 2 x 44100 x 16 = 1,411,200 bps, i.e., about 1.4 Mbps.

14 Basics of compression and digital representation

Entropy: According to Shannon, the entropy of an information source is defined as
H = sum over i of p_i * log2(1/p_i)
where p_i is the probability that symbol s_i in the source will occur. The term log2(1/p_i) indicates the amount of information contained in s_i, i.e., the number of bits needed to code s_i.
For example, in an audio signal with a uniform distribution of 256 intensity levels, p_i = 1/256, so the number of bits needed to code each level is log2(256) = 8 bits. The entropy of this signal is 8.
Shannon showed that for a given source and channel, coding techniques exist that code the source with an average code length as close to the entropy of the source as desired. However, finding such a code is a separate problem.
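A small C sketch of this definition; the probabilities used are those of the ALIALIBABA example introduced later in the chapter (A: 0.4, B, L, I: 0.2 each).

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p[] = { 0.4, 0.2, 0.2, 0.2 };
    double H = 0.0;

    for (int i = 0; i < 4; i++)
        H += p[i] * log2(1.0 / p[i]);          /* bits contributed by symbol i */

    printf("entropy = %.3f bits/symbol\n", H); /* about 1.922 */
    return 0;
}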

15 Basics of compression and digital representation

Compression: an algorithm to remove redundancy from a source such that the compressed data can be transmitted or stored more efficiently. Different sources may require different compression algorithms, for example, the Lempel-Ziv algorithm for text (lossless), the linear predictive coding (LPC) algorithm for speech (lossy), and psychoacoustic coding for audio.
Lossless compression: exact reproduction of the input data source after decompression.
Lossy compression: the decompressed data is not the same as the input data, but generally the difference (the loss) may not be noticeable.
Compression ratio is defined as the ratio of the source data rate over the channel data rate and is an important measure of the effectiveness of the compression process; for example, an MP3 audio coder has an average compression ratio of about 12.
Bandwidth measures the rate at which data is transmitted through the network. It is often used as a gauge of speed in the network.
Channel capacity: the capacity of information a noisy channel can carry. It is defined as
C = B * log2(1 + SNR) bits per second
where B is the available bandwidth (in Hz) and SNR is the signal-to-noise ratio. Note that if noise is absent from the channel, the channel capacity is infinite.
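A tiny C sketch of this formula; the bandwidth and SNR figures below are illustrative assumptions for a telephone channel, not values from the slides.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double B = 3100.0;                 /* bandwidth in Hz (assumed)   */
    double snr_db = 30.0;              /* signal-to-noise ratio in dB */
    double snr = pow(10.0, snr_db / 10.0);

    double C = B * log2(1.0 + snr);    /* capacity in bits per second */
    printf("C = %.0f bps\n", C);       /* roughly 30.9 kbps           */
    return 0;
}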

16 Fundamentals of Lossless Compression Algorithms

Compression can be considered as the mapping of source strings to channel strings.
Blocking
Forming a mapping between input strings and output strings of various lengths, with the aim of matching their probabilities as closely as possible, is called blocking. There are four kinds of blocking: fixed-to-fixed, fixed-to-variable, variable-to-fixed, and variable-to-variable coding.
Variable-to-variable coding provides the most flexibility in matching the characteristics of the source with those of the channel.

17 Shannon-Fano Algorithm

The messages are sorted by probability and then subdivided recursively at as close to power-of-two boundaries as possible. The resultant binary tree, when labeled with 0s and 1s, describes the set of code strings.
Example:
Symbol   A    B    C    D    E
Count    15   7    6    6    5
Encoding (a top-down approach):
1. Sort symbols according to their frequencies/probabilities, e.g., ABCDE.
2. Recursively divide the symbols into two parts, each with approximately the same number of counts.

Symbol   Count   log2(1/p_i)   Code   Subtotal (# of bits)
A        15      1.38          00     30
B        7       2.48          01     14
C        6       2.70          10     12
D        6       2.70          110    18
E        5       2.96          111    15

Total code length (# of bits) = 89, average code length (# of bits per symbol) = 89/39 = 2.28.
This technique yields an average code length within [H, H+1], where H is the entropy of the set of source messages.

18 Huffman Coding Algorithm

The messages are sorted by probability. To form a Huffman code, the two least probable messages are combined into a single pseudo message whose probability is the sum of the probabilities of its component messages. The pseudo message replaces the two messages in the list, and the grouping process is repeated iteratively until there is only one pseudo message left. The resultant binary tree describes the set of code strings.
Encoding (a bottom-up approach):
1. Initialization: put all nodes in an OPEN list and keep it sorted at all times (e.g., ABCDE).
2. Repeat until the OPEN list has only one node left:
a) From OPEN, pick the two nodes having the lowest frequencies/probabilities and create a parent node for them.
b) Assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN.
c) Assign codes 0 and 1 to the two branches of the tree, and delete the children from OPEN.

Symbol   Count   log2(1/p_i)   Code   Subtotal (# of bits)
A        15      1.38          0      15
B        7       2.48          100    21
C        6       2.70          101    18
D        6       2.70          110    18
E        5       2.96          111    15

Total code length (# of bits) = 87, average code length = 87/39 = 2.23.
Entropy = (15 x 1.38 + 7 x 2.48 + 6 x 2.70 + 6 x 2.70 + 5 x 2.96) / 39 = 85.26 / 39 = 2.19
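A minimal C sketch of this bottom-up construction for the same 5-symbol example. It only computes code lengths (the actual 0/1 patterns depend on how branches are labeled), and simple array scans stand in for the sorted OPEN list.

#include <stdio.h>

#define NSYM 5
#define MAXNODES (2 * NSYM - 1)

int main(void)
{
    const char *name[NSYM] = { "A", "B", "C", "D", "E" };
    int weight[MAXNODES]   = { 15, 7, 6, 6, 5 };   /* leaves come first          */
    int parent[MAXNODES]   = { 0 };                /* 0 means "still a root"     */
    int codelen[NSYM]      = { 0 };
    int nodes = NSYM, total = 0;

    while (nodes < MAXNODES) {
        int lo1 = -1, lo2 = -1;                    /* two lightest roots         */
        for (int i = 0; i < nodes; i++) {
            if (parent[i]) continue;
            if (lo1 < 0 || weight[i] < weight[lo1]) { lo2 = lo1; lo1 = i; }
            else if (lo2 < 0 || weight[i] < weight[lo2]) { lo2 = i; }
        }
        weight[nodes] = weight[lo1] + weight[lo2]; /* new pseudo message         */
        parent[lo1] = parent[lo2] = nodes;
        nodes++;

        for (int s = 0; s < NSYM; s++) {           /* leaves under the new root  */
            int r = s;
            while (parent[r]) r = parent[r];
            if (r == nodes - 1) codelen[s]++;      /* gain one code bit          */
        }
    }

    for (int s = 0; s < NSYM; s++) {
        printf("%s: count %2d, code length %d\n", name[s], weight[s], codelen[s]);
        total += weight[s] * codelen[s];
    }
    printf("total code length = %d bits\n", total); /* 87 for this example       */
    return 0;
}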

19 Shannon-Fano and Huffman Coding Algorithms

Discussion
Decoding for these two algorithms is trivial as long as the coding table (the statistics) is sent before the data. (There is a small overhead for sending this, negligible if the data file is big.)
Unique prefix property: no code is a prefix of any other code (all symbols are at the leaf nodes) --> great for the decoder, unambiguous.
The algorithms described above use fixed statistics. They make two passes over the message: the first pass to gather statistics and the second pass to code the message (using the statistics). If prior statistics are available and accurate, then Huffman coding is very good.
For compression of, say, live audio and video, these algorithms require statistical knowledge which is often not available. Even when it is available, it can be a heavy overhead, especially when many tables have to be sent. In practice, the statistics of most data sources vary over time.
By taking into account the impact of the previous symbol on the probability of the current symbol (e.g., "q" and "u" often come together in English), an adaptive algorithm can more accurately reflect the statistics of the source and hence achieve better coding performance. The solution is to use adaptive algorithms.

20 The Modern Paradigm of Data Compression

The modern paradigm uses predictions to divide compression into separate modeling and coding units. Each step transmits one instance. At the start of each step, the model constructs a prediction p (of the next instance) and passes it to the coder. The coder uses the prediction to transmit the next instance a using as close to log(1/p(a)) nats as it can. Meanwhile, the receiver's model has generated an identical prediction, which the decoder uses to identify the instance that was transmitted. The transmitter and receiver both use the new instance to update their models. The cycle repeats until the entire message is transmitted.
This prediction+coding approach is, in effect, adaptive data compression.

21 Adaptive Huffman Coding Algorithm

The key is to have both the encoder and the decoder use exactly the same initialization and update_model routines.
update_model does two things: (a) increment the count, (b) update the Huffman tree.
During the updates, the Huffman tree maintains its sibling property, i.e., the nodes (internal and leaf) are arranged in order of increasing weight (see figures). When swapping is necessary, the farthest node with weight W is swapped with the node whose weight has just been increased to W+1. Note: if the node with weight W has a subtree beneath it, then the subtree goes with it.
The Huffman tree can look very different after node swapping; e.g., in the third tree, node A is again swapped and becomes the #5 node. It is now encoded using only 2 bits.

22 Adaptive Huffman Coding Algorithm

Note: the code for a particular symbol changes during the adaptive coding process.

23 Golomb-Rice Coding Algorithm

Golomb coding is a lossless data compression method invented by Solomon W. Golomb in the 1960s. If the source alphabet follows a geometric distribution, a Golomb code is an optimal prefix code, which makes it particularly suitable for situations in which the occurrence of small values in the input stream is significantly more likely than large values, for example, coding the audio signal residual after linear prediction.
Golomb codes can be considered a special case of Huffman codes for sources with geometrically distributed symbols:
P(n) = (1 - p)^n * p,  n = 0, 1, 2, ...
where p is the parameter of the geometric distribution.
Rice coding (invented by Robert F. Rice) uses a subset of the family of Golomb codes to produce a simpler, suboptimal prefix code. A Golomb code has a tunable parameter M that can be any positive integer value; Rice codes are those in which the tunable parameter is a power of two. This makes Rice codes convenient for use on a computer, since multiplication and division by powers of 2 can be implemented efficiently in binary arithmetic (as shifts).
Rice coding is used as the entropy encoding stage in a number of lossless image compression and audio data compression methods, for example, the MPEG-4 ALS audio coder.

24 Golomb-Rice Coding Algorithm

Golomb coding uses a tunable parameter M to divide an input value into two parts: q, the quotient resulting from division by M, and r, the remainder. The quotient is sent in unary coding, followed by the remainder in truncated binary encoding. When M = 1, Golomb coding is equivalent to unary coding.
The two parts are given by the following expressions, where x is the number being encoded:
q = floor(x / M)  and  r = x - q*M
where floor() denotes truncation to an integer value. The final code looks like
<Quotient Code><Remainder Code>
where the quotient code is q bits of 1s followed by a single 0 bit as a delimiter, and the remainder code is the binary code for the remainder r. Note that r can be encoded with a varying number of bits: it is exactly b = log2(M) bits for a Rice code (M a power of 2), and switches between b-1 and b bits, with b = ceil(log2(M)), for a Golomb code (i.e., when M is not a power of 2).

25 Golomb-Rice Coding Algorithm

The algorithm to perform Rice coding is shown below:
1. Fix the tunable parameter M to a power-of-2 integer value.
2. For x, the number to be encoded:
   Quotient q = int[x / M]
   Remainder r = x modulo M
3. Generate the codeword in the format <Quotient Code><Remainder Code>, where
   Quotient code (in unary coding): write a q-length string of 1 bits, then write a 0 bit.
   Remainder code (in binary coding): write b = log2(M) bits of binary code for the remainder.
Example: for a number x with quotient q and remainder r, the final codeword is q ones, the 0 delimiter, and then the b-bit binary value of r; a worked example is given in the sketch below.
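A minimal C sketch of these steps. The values x = 9 and M = 4 (so b = 2) are an illustrative example rather than the one elided from the slide; they give q = 2, r = 1, and codeword 110 01.

#include <stdio.h>

/* Rice coding with M = 2^b: quotient in unary (q ones, then a 0), remainder in b bits. */
static void rice_encode(unsigned x, unsigned b)
{
    unsigned M = 1u << b;
    unsigned q = x / M;                              /* quotient:  x >> b      */
    unsigned r = x % M;                              /* remainder: x & (M - 1) */

    for (unsigned i = 0; i < q; i++) putchar('1');   /* unary part             */
    putchar('0');                                    /* delimiter              */
    for (int i = (int)b - 1; i >= 0; i--)            /* binary part            */
        putchar((r >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    rice_encode(9, 2);    /* prints 11001 */
    rice_encode(0, 2);    /* prints 000   */
    return 0;
}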

26 Arithmetic Coding Algorithm

Arithmetic coding is an entropy coding. The compression achieved by arithmetic coding is generally better than Huffman coding. The idea behind arithmetic coding is to group source symbols and code them into a single number in the range 0 to 1, treated as a probability line [0, 1).
Each symbol is assigned a range on the probability line based on its probability: the higher the probability, the larger the range assigned to it. After the ranges on the probability line are defined, the encoding process can start. The algorithm to accomplish this for a message of any length is shown below:

set low to 0.0
set high to 1.0
while there are still input symbols do
    get an input symbol
    range = high - low
    high = low + range * high_range(symbol)
    low = low + range * low_range(symbol)
end while
output low

27 Arithmetic Coding

Example: if we are going to encode the message "ALIALIBABA", we first work out a probability distribution and assign the probabilities to ranges along a probability line, which is nominally 0 to 1, like this:

Symbol   Probability   Range
A        0.4           [0.0, 0.4)
B        0.2           [0.4, 0.6)
L        0.2           [0.6, 0.8)
I        0.2           [0.8, 1.0)

Each symbol is assigned the portion of the 0-1 range that corresponds to its probability of appearance.
The most significant portion of an arithmetic-coded message belongs to the first symbol to be encoded. In order for the first symbol, i.e., an A, to be decoded properly, the final coded message has to be a number greater than or equal to 0.00 and less than 0.40.
After the first symbol is encoded, the range for the output number is bounded by the low number (0.00) and the high number (0.40).
Each new symbol to be encoded further restricts the possible range of the output number. The next symbol to be encoded, 'L', owns the range 0.60 through 0.80, so the new encoded number will have to fall somewhere in the 60th to 80th percentile of the currently established range. Applying this logic further restricts the number to the range 0.24 to 0.32.
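A minimal floating-point C sketch of the encoding loop applied to this model. A real coder uses integer registers and incremental output, as discussed later in the chapter; doubles are only adequate for a short message like this one.

#include <stdio.h>
#include <string.h>

static void symbol_range(char c, double *lo, double *hi)
{
    switch (c) {
    case 'A': *lo = 0.0; *hi = 0.4; break;
    case 'B': *lo = 0.4; *hi = 0.6; break;
    case 'L': *lo = 0.6; *hi = 0.8; break;
    default : *lo = 0.8; *hi = 1.0; break;   /* 'I' */
    }
}

int main(void)
{
    const char *msg = "ALIALIBABA";
    double low = 0.0, high = 1.0;

    for (size_t i = 0; i < strlen(msg); i++) {
        double s_lo, s_hi, range = high - low;
        symbol_range(msg[i], &s_lo, &s_hi);
        high = low + range * s_hi;           /* narrow the interval */
        low  = low + range * s_lo;
        printf("%c: low = %.10f  high = %.10f\n", msg[i], low, high);
    }
    printf("output: %.10f\n", low);          /* 0.3089745920 */
    return 0;
}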

28 Arithmetic Coding

The encoding process through to its natural conclusion with the chosen message "ALIALIBABA" looks like this:

Symbol   Low Value      High Value     Range
A        0.0            0.4            0.4
L        0.24           0.32           0.08
I        0.304          0.32           0.016
A        0.304          0.3104         0.0064
L        0.30784        0.30912        0.00128
I        0.308864       0.30912        0.000256
B        0.3089664      0.3090176      0.0000512
A        0.3089664      0.30898688     0.00002048
B        0.308974592    0.308978688    0.000004096
A        0.308974592    0.3089762304   0.0000016384

The encoded codeword can be any value between the final low and high values, i.e., 0.308974592 and 0.3089762304, respectively.
The total number of bits B required to encode this codeword depends on the final range r, which is related to the codeword precision, by the equation B = ceil(log2(1/r)). In this example B = ceil(log2(1/0.0000016384)) = 20 bits. With 10 symbols in the source sequence, the average bit rate is 2 bits per symbol.
Check the entropy of the source. Is arithmetic coding efficient?

29 Arithmetic Coding

The decoding process, which recreates the exact stream of input symbols, operates as follows:
The first symbol in the message owns the code space that the encoded number falls in. Since the number (0.308974592) falls between 0.0 and 0.4, we know that the first character must be A.
Since the low and high range limits of A are known, their effect can be removed by reversing the process that put them in. First, the low value of A (0.0) is subtracted from the number, giving 0.308974592. Then this is divided by the range of A, which is 0.4. This gives a value of 0.77243648, which in turn determines where it lands in the range of the next letter, L.
The algorithm for decoding the incoming number looks like this:

get encoded number
do
    find the symbol whose range straddles the encoded number
    output the symbol
    range = symbol high value - symbol low value
    subtract the symbol low value from the encoded number
    divide the encoded number by range
until no more symbols

Note that a special EOF symbol, or a length code for the stream, can be used to identify the end of the decoding process.
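A minimal C sketch of this decoder for the same model. The message length is assumed known, and the code value is chosen slightly inside the final interval: the exact low endpoint 0.308974592 lands on symbol boundaries during decoding, where plain double arithmetic could round to the wrong side.

#include <stdio.h>

int main(void)
{
    const char   sym[4] = { 'A', 'B', 'L', 'I' };
    const double lo[4]  = { 0.0, 0.4, 0.6, 0.8 };
    const double hi[4]  = { 0.4, 0.6, 0.8, 1.0 };
    double code = 0.3089754;      /* any value in [0.308974592, 0.3089762304) */

    for (int n = 0; n < 10; n++) {            /* message length assumed known */
        for (int i = 0; i < 4; i++) {
            if (code >= lo[i] && code < hi[i]) {         /* range straddles   */
                putchar(sym[i]);
                code = (code - lo[i]) / (hi[i] - lo[i]); /* remove its effect */
                break;
            }
        }
    }
    putchar('\n');                            /* prints ALIALIBABA */
    return 0;
}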

30 Arithmetic Coding

The decoding algorithm for the "ALIALIBABA" message, taking the encoder output (the final low value 0.308974592) as the encoded number, proceeds like this:

Encoded Number   Output Symbol   Low Value   High Value   Range
0.308974592      A               0.0         0.4          0.4
0.77243648       L               0.6         0.8          0.2
0.8621824        I               0.8         1.0          0.2
0.310912         A               0.0         0.4          0.4
0.77728          L               0.6         0.8          0.2
0.8864           I               0.8         1.0          0.2
0.432            B               0.4         0.6          0.2
0.16             A               0.0         0.4          0.4
0.4              B               0.4         0.6          0.2
0.0              A               0.0         0.4          0.4

In summary, the encoding process is simply one of narrowing the range of possible numbers with every new symbol. The new range is proportional to the predefined probability attached to that symbol. Decoding is the inverse procedure, in which the range is expanded in proportion to the probability of each symbol as it is extracted.

31 Arithmetic Coding

Practical Matters
Do we need a floating-point processor? The bit length of the output number increases as the number of symbols increases. Do we need to start over again when it reaches the limit?
Arithmetic coding can be implemented using 16-bit or 32-bit integer math. Use an incremental transmission scheme in which fixed-size integer state variables receive new bits in at the low end and shift them out at the high end.
Practical implementation
Imagine that a 6-decimal-digit (fixed-length) register is used; the decimal equivalent of the setup would look like this:
HIGH: 999999 (with an implied decimal point and an infinite trail of 9s, this is 1.0)
LOW: 000000 (this is 0.0)
The range between the low value and the high value, i.e., the difference between the two registers, will be 1.0.
In encoding the first symbol, the new high value is computed using the formula from the previous section. In this case the high range was 0.4, which gives a new value for high of 399999. The calculation of the low value follows the same path, with a resulting new value of 000000. So now high and low look like this:
HIGH: 399999
LOW: 000000

32 Arithmetic Coding

Practical Matters
Shift out the most significant digits of the low and high values whenever they match:

                        LOW      HIGH     RANGE     CUMULATIVE OUTPUT
Initial state           000000   999999   1000000
Encode A (0.0-0.4)      000000   399999   400000
Encode L (0.6-0.8)      240000   319999   80000
Encode I (0.8-1.0)      304000   319999   16000
Shift out 3             040000   199999   160000    3
Encode A (0.0-0.4)      040000   103999   64000     3
Encode L (0.6-0.8)      078400   091199   12800     3
Shift out 0             784000   911999   128000    30
Encode I (0.8-1.0)      886400   911999   25600     30
Encode B (0.4-0.6)      896640   901759   5120      30
Encode A (0.0-0.4)      896640   898687   2048      30
Shift out 8 and 9       664000   868799   204800    3089
Encode B (0.4-0.6)      745920   786879   40960     3089
Shift out 7             459200   868799   409600    30897
Encode A (0.0-0.4)      459200   623039   163840    30897
Shift out 4                                         308974
Shift out 5                                         3089745
Shift out 9                                         30897459
Shift out 2                                         308974592

The cumulative output 308974592 is the codeword 0.308974592 obtained earlier.

33 Arithmetic Coding

Practical Matters: the underflow problem
This scheme works well for incrementally encoding a message. However, there is a potential loss of precision under certain circumstances. If the encoded word gets a string of 0s or 9s in it, the high and low values slowly converge on a value, but their most significant digits may not match immediately. For example, the high and low values just after encoding the first B in the previous table are:
High: 901759
Low: 896640
At this point, the most significant digits of low and high are not the same, which means they cannot be shifted out, but the calculated range has become small, only 4 digits long, which means the output word may not have enough precision to be accurately encoded in subsequent steps. In the worst case, after a few more iterations, high and low could look like this:
High: 900000
Low: 899999
Then the values are permanently stuck. The range between high and low has become so small that any calculation will always return the same values. But since the most significant digits of the two words are not equal, the algorithm can't output a digit and shift!

34 Arithmetic Coding

Practical Matters: the underflow problem
The way to defeat this underflow problem is to prevent things from ever getting this bad. If the two most significant digits don't match but are on adjacent numbers, a second test is applied: check whether the 2nd most significant digit in high is a 0 and the 2nd digit in low is a 9. If so, we are on the road to underflow and need to take action.
Instead of shifting the most significant digit out of the word, we just delete the 2nd digits from high and low and shift the rest of the digits left to fill up the space. The most significant digit stays in place. We then set an underflow counter to remember that we threw away a digit, and we aren't quite sure whether it was going to end up as a 0 or a 9. Using the registers from the first B above as an example, the operation looks like this:

            Before    After
High        901759    917599
Low         896640    866400
Underflow   0         1

After every recalculation, if the most significant digits don't match up, we can check for underflow digits again. If they are present, we shift them out and increment the counter. When the most significant digits do finally converge to a single value, we first output that value. Then we output all of the "underflow" digits that were previously discarded. The underflow digits will be all 9s or all 0s, depending on whether high and low converged to the higher or the lower value.

35 Arithmetic Coding

Practical Matters: underflow prevention
Re-examine the example using only 5-digit precision:

                                          LOW     HIGH    RANGE    CUMULATIVE OUTPUT
Initial state                             00000   99999   100000
Encode A (0.0-0.4)                        00000   39999   40000
Encode L (0.6-0.8)                        24000   31999   8000
Encode I (0.8-1.0)                        30400   31999   1600
Shift out 3                               04000   19999   16000    3
Encode A (0.0-0.4)                        04000   10399   6400     3
Encode L (0.6-0.8)                        07840   09119   1280     3
Shift out 0                               78400   91199   12800    30
Encode I (0.8-1.0)                        88640   91199   2560     30
Encode B (0.4-0.6)                        89664   90175   512      30
Discard second digit, count += 1          86640   91759   5120     30
Encode A (0.0-0.4)                        86640   88687   2048     30
Shift out 8                               66400   86879   20480    308
Check count and shift out 9, count -= 1   66400   86879   20480    3089
Encode B (0.4-0.6)                        74592   78687   4096     3089
Shift out 7                               45920   86879   40960    30897
Encode A (0.0-0.4)                        45920   62303   16384    30897
Shift out 4                                                        308974
Shift out 5                                                        3089745
Shift out 9                                                        30897459
Shift out 2                                                        308974592
