IMAGE COMPRESSION - I: image compression, data redundancy, lossy vs lossless compression (Chapter 8)



Reading: Week VIII (Feb 25), Chapter 8. Sections 8.1, 8.2, 8.3 (selected topics), 8.4 (Huffman, run-length, lossless predictive), 8.5 (lossy predictive, transform coding basics), 8.6 Image Compression Standards (time permitting).

Image compression

Objective: to reduce the amount of data required to represent an image. Important in data storage and transmission:
- Progressive transmission of images (internet, WWW)
- Video coding (HDTV, teleconferencing)
- Digital libraries and image databases
- Medical imaging
- Satellite images

Topics covered: data redundancy; self-information and entropy; error-free and lossy compression; Huffman coding; predictive coding; transform coding.

Lossy vs lossless compression

Information preserving (lossless): images can be compressed and restored without any loss of information. Applications: medical images, GIS.
Lossy: perfect recovery is not possible, but much larger data compression is obtained. Examples: TV signals, teleconferencing.

Data redundancy

CODING: use fewer bits to represent frequent symbols.
INTERPIXEL / INTERFRAME: neighboring pixels (or frames) have similar values.
PSYCHOVISUAL: the human visual system cannot simultaneously distinguish all colors.

Coding redundancy

Use a smaller number of bits to represent frequently occurring symbols. Let $p_r(r_k) = n_k / n$, $k = 0, 1, 2, \ldots, L-1$, where $L$ is the number of gray levels, and let $r_k$ be represented by $l(r_k)$ bits. The average number of bits required to represent each pixel is

$$L_{avg} = \sum_{k=0}^{L-1} l(r_k)\, p_r(r_k) \qquad (A)$$

Usually $l(r_k) = m$ bits (constant), in which case

$$L_{avg} = \sum_{k=0}^{L-1} m\, p_r(r_k) = m.$$

Consider equation (A): it makes sense to assign fewer bits to those $r_k$ for which $p_r(r_k)$ is large, in order to reduce the sum. This achieves data compression and results in a variable-length code: more probable gray levels get fewer bits.

Coding: example (from text)

  r_k          p_r(r_k)   Code      l(r_k)
  r_0 = 0        0.19     11         2
  r_1 = 1/7      0.25     01         2
  r_2 = 2/7      0.21     10         2
  r_3 = 3/7      0.16     001        3
  r_4 = 4/7      0.08     0001       4
  r_5 = 5/7      0.06     00001      5
  r_6 = 6/7      0.03     000001     6
  r_7 = 1        0.02     000000     6

$$L_{avg} = \sum_k p_r(r_k)\, l(r_k) = 2.7 \text{ bits},$$

i.e., 10% less code than the 3-bit fixed-length code. (A short sketch verifying this arithmetic appears at the end of this page.)

Interpixel / interframe redundancy

Spatial: $f(x, y)$ depends on $f(x', y')$ for $(x', y') \in N_{xy}$, where $N_{xy}$ is a neighborhood of pixels around $(x, y)$.
Interframe: the frames $f(x, y, t_i)$, $i = 0, 1, 2, 3, \ldots$ are related to each other. This can be exploited for video compression.

[Figures: run-length coding; pixel correlations.]
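To verify the arithmetic in the example above, here is a minimal Python sketch (not part of the original slides); the variable names are illustrative:

```python
# Probabilities and code lengths from the textbook example above.
probs   = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
lengths = [2, 2, 2, 3, 4, 5, 6, 6]   # l(r_k) for the variable-length code

# L_avg = sum_k l(r_k) * p_r(r_k), equation (A)
L_avg = sum(l * p for l, p in zip(lengths, probs))
print(round(L_avg, 2))        # 2.7 bits/pixel
print(round(1 - L_avg / 3, 2))  # 0.1 -> 10% savings vs. the 3-bit fixed code
```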

Psychovisual redundancy

The human visual system has limitations; a good example is quantization. A quantized image still conveys information but requires much less memory/space. (Example: Figure 8.4 in text; MATLAB.) See also the IGS code (Table 8.2).

General compression model

f(x,y) -> Source encoder -> Channel encoder -> Channel -> Channel decoder -> Source decoder -> g(x,y)

Source encoder

f(x,y) -> Mapper -> Quantizer -> Symbol encoder -> to channel

Mapper: designed to reduce interpixel redundancy. Examples: run-length encoding results in compression; a transform to another domain where the coefficients are less correlated than in the original, e.g., the Fourier transform.
Quantizer: reduces psychovisual redundancies in the image; it should be left out if error-free encoding is desired.
Symbol encoder: creates a fixed/variable-length code, reducing coding redundancies.

Source decoder

from channel -> Symbol decoder -> Inverse mapper -> $\hat{f}(x,y)$

Note: quantization is NOT reversible.

Question(s): Is there a minimum amount of data that is sufficient to completely describe an image without any loss of information? How do you measure information?
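To make the three source-encoder stages concrete, here is a minimal, hypothetical Python sketch (not from the text): a difference mapper standing in for interpixel-redundancy reduction, a uniform quantizer as the lossy stage, and a toy fixed-length symbol encoder. All function names are illustrative assumptions.

```python
def mapper(pixels):
    # Reduce interpixel redundancy: predictive (difference) mapping.
    # Keep the first pixel; each later value is the prediction error.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def quantizer(values, step=4):
    # Reduce psychovisual redundancy (lossy!). Skip this stage entirely
    # if error-free encoding is desired -- quantization is NOT reversible.
    return [round(v / step) * step for v in values]

def symbol_encoder(values):
    # Reduce coding redundancy. Here: a stand-in fixed-length hex code;
    # a real encoder would use a variable-length (e.g. Huffman) code.
    return "".join(format(v & 0xFF, "02x") for v in values)

row = [100, 101, 103, 103, 104, 110]
print(symbol_encoder(quantizer(mapper(row))))
```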

Self-information

Suppose an event E occurs with probability P(E); then it is said to contain

$$I(E) = -\log P(E)$$

units of information. If P(E) = 1 [the event always happens], then I(E) = 0 [it conveys no information]. If the base of the logarithm is 2, the unit of information is called a bit: if P(E) = 1/2, then $I(E) = -\log_2(1/2) = 1$ bit. Example: flipping a coin; the outcome of this experiment requires one bit to convey.

Self-information (contd.)

Assume an information source which generates the symbols $\{a_0, a_1, a_2, \ldots, a_{L-1}\}$ with probabilities $p(a_i)$, where $\sum_{i=0}^{L-1} p(a_i) = 1$. Then

$$I(a_i) = -\log_2 p(a_i) \text{ bits.}$$

Entropy

The average information per source output is

$$H = -\sum_{i=0}^{L-1} p(a_i) \log_2 p(a_i) \quad \text{bits/symbol.}$$

H is called the uncertainty or the entropy of the source. If all the source symbols are equally probable, the source has maximum entropy. H gives the lower bound on the number of bits required to code a signal.

Noiseless coding theorem (Shannon)

It is possible to code, without any loss of information, a source signal with entropy H bits/symbol using H + ε bits/symbol, where ε is an arbitrarily small quantity. ε can be made arbitrarily small by considering increasingly larger blocks of symbols to be coded.

Error-free coding

Coding redundancy — e.g., Huffman coding, which yields the smallest possible number of code symbols per source symbol when symbols are coded one at a time.
Interpixel redundancy — e.g., run-length coding.

Huffman code: example. Consider a 6-symbol source:

  symbol   a1    a2    a3    a4    a5    a6
  p(a_i)   0.1   0.4   0.06  0.1   0.04  0.3
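The entropy formula is easy to check numerically. A short Python sketch (not from the slides) applied to the 6-symbol source just introduced:

```python
from math import log2

def entropy(probs):
    # H = -sum_i p(a_i) * log2 p(a_i), in bits/symbol
    return -sum(p * log2(p) for p in probs if p > 0)

# The 6-symbol source from the Huffman example: a1..a6
p = [0.1, 0.4, 0.06, 0.1, 0.04, 0.3]
print(round(entropy(p), 2))       # 2.14 bits/symbol
print(round(log2(len(p)), 2))     # 2.58 -- maximum entropy (equiprobable case)
```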

Huffman coding: example (contd.)

  a2   0.4  (1)       0.4  (1)      0.4  (1)     0.4  (1)    0.6 (0)
  a6   0.3  (00)      0.3  (00)     0.3  (00)    0.3  (00)   0.4 (1)
  a1   0.1  (011)     0.1  (011)    0.2  (010)   0.3  (01)
  a4   0.1  (0100)    0.1  (0100)   0.1  (011)
  a3   0.06 (01010)   0.1  (0101)
  a5   0.04 (01011)

Average length: $0.4(1) + 0.3(2) + 0.1(3) + 0.1(4) + (0.06 + 0.04)(5) = 2.2$ bits/symbol.
Entropy: $-\sum_i p_i \log_2 p_i = 2.14$ bits/symbol.

Huffman code: steps
- Arrange the symbol probabilities p_i in decreasing order.
- While there is more than one node, merge the two nodes with the smallest probabilities to form a new node with probability equal to their sum.
- Arbitrarily assign 1 and 0 to each pair of branches merging into a node.
- Read each codeword sequentially from the root node to the leaf node where the symbol is located.

Huffman code (final slide)
- Lossless code
- Block code
- Uniquely decodable
- Instantaneous (no future referencing is needed)

Arithmetic coding (Fig. 8.13)

Coding rate: 3/5 decimal digits/symbol. Entropy: 0.58 decimal digits/symbol.

Lempel-Ziv-Welch (LZW) coding
- Uses a dictionary.
- The dictionary is adaptive to the data.
- The decoder constructs the matching dictionary based on the codewords received.
- Used in the GIF, TIFF and PDF file formats.
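The merging procedure from the "Huffman code: steps" slide can be written compactly with a heap. A hedged Python sketch (illustrative, not the slides' own code); note that Huffman codes are not unique, so the individual codewords may differ from the table above even though the average length is the same:

```python
import heapq
from itertools import count

def huffman(probs):
    # Repeatedly merge the two least probable nodes (the steps above).
    # 'count()' breaks probability ties so heapq never compares dicts.
    tie = count()
    heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)
        p1, _, code1 = heapq.heappop(heap)
        # Prepend a branch bit to every codeword in each merged subtree.
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

src = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
codes = huffman(src)
L_avg = sum(src[s] * len(c) for s, c in codes.items())
print(codes)             # e.g. {'a2': '0', 'a6': '10', ...}; bits depend on ties
print(round(L_avg, 2))   # 2.2 bits/symbol, matching the worked example
```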

LZW: an example

Consider a 4x4, 8-bits-per-pixel image:

  39  39 126 126
  39  39 126 126
  39  39 126 126
  39  39 126 126

LZW: dictionary construction. Assume a 512-word dictionary; the dictionary values 0-255 correspond to the pixel values 0-255. (A sketch of the encoder follows below.)

Run-length encoding (binary images)

  0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 1 0 0  ->  4 6 3 3 2

The lengths of the runs of 0's and 1's are encoded. Each of the bit planes in a gray-scale image can be run-length encoded. One can combine run-length encoding with variable-length coding of the run lengths to get better compression. (See the second sketch below.)
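Returning to the LZW example: here is a minimal Python sketch of the encoder (an illustration, not the slides' code), assuming the 512-word dictionary whose entries 0-255 are the single pixel values:

```python
def lzw_encode(pixels, dict_size=512):
    # Dictionary entries 0..255 are the single pixel values themselves.
    table = {(i,): i for i in range(256)}
    next_code = 256
    out, current = [], ()
    for p in pixels:
        candidate = current + (p,)
        if candidate in table:
            current = candidate          # keep extending the match
        else:
            out.append(table[current])   # emit code for the longest match
            if next_code < dict_size:    # grow the dictionary (512 words here)
                table[candidate] = next_code
                next_code += 1
            current = (p,)
    if current:
        out.append(table[current])
    return out

# First two rows of the 4x4 example image
print(lzw_encode([39, 39, 126, 126, 39, 39, 126, 126]))
# -> [39, 39, 126, 126, 256, 258]: repeated pairs already hit new entries
```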
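And a sketch of the binary run-length scheme above (again Python, illustrative names), reproducing the 4 6 3 3 2 encoding:

```python
from itertools import groupby

def run_lengths(bits):
    # Encode the lengths of alternating runs, assuming the row starts with 0
    # (emit a leading zero-length run if it actually starts with 1).
    runs = [(b, len(list(g))) for b, g in groupby(bits)]
    if runs and runs[0][0] == 1:
        runs.insert(0, (0, 0))
    return [n for _, n in runs]

row = [0,0,0,0, 1,1,1,1,1,1, 0,0,0, 1,1,1, 0,0]
print(run_lengths(row))   # [4, 6, 3, 3, 2] -- as in the slide
```

These run lengths could then be fed to a variable-length (e.g. Huffman) coder, as the slide suggests, for better compression.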