CHAPTER 2 LITERATURE REVIEW




2.1 INTRODUCTION

Image compression is used mainly to reduce storage space, transmission time and bandwidth requirements. In the subsequent sections of this chapter, general image compression schemes and the medical image compression techniques available in the literature are discussed.

2.2 BASIC COMPRESSION TECHNIQUES

Image compression can be accomplished using coding methods, spatial domain techniques and transform domain techniques (Vemuri et al. 2007). Coding methods are applied directly to images; general coding methods comprise entropy coding techniques, which include Huffman coding and arithmetic coding, run-length coding and dictionary-based coding. Spatial domain methods, which operate directly on the pixels of the image, combine spatial domain algorithms with coding methods. Transform domain methods transform the image from its spatial domain representation to a different representation using well-known transforms.

2.2.1 Lossless Compression Techniques

In lossless compression techniques, the reconstructed image after compression is identical to the original image.
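The defining property stated above, bit-for-bit reconstruction, can be checked directly in code. The minimal Python sketch below uses the standard-library zlib module purely as a convenient stand-in for the lossless coders surveyed in this section (internally it combines LZ77 dictionary coding with Huffman coding, both discussed below):

```python
import zlib

# A toy 8-bit "image" as raw bytes; any byte sequence behaves the same way.
original = bytes([10, 10, 10, 200, 200, 30] * 64)

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original   # lossless: bit-for-bit identical
print(len(original), len(compressed))
```

Because the toy data is highly repetitive, the compressed stream is much shorter than the input, while decompression recovers the original exactly.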

In general, lossless compression is implemented using coding methods. Entropy coding encodes a given set of symbols with the minimum number of bits required to represent them, using the probabilities of the symbols. Compression is achieved by assigning variable-size codes to symbols: shorter codewords are assigned to more probable symbols. Huffman coding (Huffman 1952) and arithmetic coding (Rissanen 1976, Witten et al. 1987) are the most popular entropy coding techniques, and lossless image compression can be implemented using either.

Huffman coding (Huffman 1952) is an optimal prefix code. It assigns prefix codewords to symbols based on their probabilities: symbols that occur more frequently have shorter codewords than symbols that occur less frequently, and the two least frequently occurring symbols have codewords of the same maximum length. Huffman coding is inefficient when the alphabet size is small and the symbol probabilities are highly skewed.

Arithmetic coding (Witten et al. 1987) is more efficient in precisely those cases, i.e., when the alphabet size is small or the symbol probabilities are highly skewed. Generating a codeword for a sequence of symbols is more efficient than generating a separate codeword for each symbol in the sequence. Unlike with Huffman codes, a unique arithmetic code can be generated for a particular sequence without generating codewords for all sequences of that length: a single, uniquely decodable tag value is assigned to a block of symbols. Arithmetic coding provides better compression ratios than Huffman coding (Shahbahrami et al. 2011).

Run-length encoding (Golomb 1966) is the simplest compression technique. It is efficient only when the data to be compressed consists of long runs of repeated characters or symbols. It encodes each run into two bytes, a count and a symbol; that is, it stores the run as a single character preceded by a number representing how many times that character is repeated.

Dictionary-based coding reads the input sequence and looks for groups of symbols that appear in a dictionary (Larsson and Moffat 2000); the symbols are coded using a pointer or index into the dictionary. This technique is most useful with sources that generate a relatively small number of patterns quite frequently (Sayood 2000). Images do not generate the same patterns of symbols frequently, so more bits are required for image coding. Lempel-Ziv-77 (LZ77) and Lempel-Ziv-Welch (LZW) are dictionary-based compression techniques.

2.2.2 Lossy Compression Techniques

In lossy image compression, the reconstructed image after compression is an approximation of the original image. A lossy compression method is termed visually lossless when the loss of information caused by the method is invisible to an observer. In general, lossy compression is implemented using spatial domain encoding and transform domain encoding methods.

Spatial domain techniques generally make use of a prediction function, in which the value of the current pixel is determined from knowledge of previously coded pixels. Delta modulation and differential pulse code modulation are examples of predictive coding.

In transform domain techniques, image transforms are used to decorrelate the pixels, so that the image information is packed into a small number of coefficients. The coefficients in the transform domain are then quantized to reduce the number of allocated bits; the error, or loss of information, is due to this quantization step. The resulting quantized coefficients have differing probabilities, so an entropy coding scheme can further reduce the number of required bits. Transform coding is the commonly adopted method for lossy image compression, as it provides greater data compression than predictive methods (Bhavani and Dhanushkodi 2010).

The transforms used to decorrelate image pixels include the Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), Walsh-Hadamard Transform (WHT), Karhunen-Loeve Transform (KLT) and Discrete Wavelet Transform (DWT). Among these, the DCT (Ahmed et al. 1984) and the DWT (Vetterli and Kovacevic 1995) are the most popular and well-established transform techniques (Sikora 2005).

In the DCT, most of the energy is compacted into the lower frequency coefficients. After quantization, most of the higher frequency coefficients become small or zero and tend to be grouped together. The well-known DCT-based Joint Photographic Experts Group (JPEG) compression standard (Wallace 1991, Pennebaker and Mitchell 1993) divides the image into sub-images; the DCT is performed on the sub-images, and the resulting coefficients are quantized and coded. The main disadvantage of the DCT is that at low bit rates it yields blocking artifacts.

The wavelet transform decomposes the image into different frequency subbands, namely lower frequency and higher frequency subbands, by which the smooth variations and the details of the image can be separated. Most of the energy is compacted into the lower frequency subbands. Most of the coefficients in the higher frequency subbands are small or zero, tend to be grouped together, and are located in the same relative spatial locations across subbands. Thus, image compression methods that use wavelet transforms have been successful in providing high compression ratios while maintaining good image quality, and are superior to DCT-based methods (Antonini et al. 1992, Manduca and Said 1996, Erickson et al. 1998, Rani et al. 2009).

After the introduction of the DWT, a wide variety of wavelet-based compression methods were reported. Among them, the most popular are the Embedded Zerotree Wavelet (EZW) coder (Shapiro 1993), which uses parent-child dependencies between subband coefficients at the same spatial location of the wavelet-decomposed image; Set Partitioning In Hierarchical Trees (SPIHT) (Said and Pearlman 1996), which uses the self-similarity between subbands of the wavelet-decomposed image; and the JPEG2000 algorithm (Taubman 2000, Boliek et al. 2000, Taubman and Marcellin 2002), which uses Embedded Block Coding with Optimized Truncation (EBCOT).

2.2.3 Overview of Previous Research Works in Medical Image Compression

Many researchers have demonstrated new advances in the field of medical image compression, in both the lossless and lossy categories. Lossless compression can achieve a maximum compression ratio of about 3:1 while restoring the image without any loss of information. As digital images occupy a large amount of storage space, most research has focused on lossy compression, which removes insignificant information while preserving all relevant and diagnostically important image information.

In teleradiology applications, several scientific studies have been performed to determine the degree of compression that maintains diagnostic image quality (Ishigaki et al. 1990). MacMahon et al. (1991) showed that medical images with a compression ratio of 10:1 are acceptable. Cosman et al. (1994) showed that there is no loss in diagnostic accuracy for compression ratios up to 9:1. Compression of medical images is vital for achieving a low bit rate in the representation of radiology images, in order to reduce the data volume without loss of diagnostic information (Wong et al. 1995).

Lee et al. (1993), Goldberg (1994) and Perlmutter et al. (1997) pointed out that lossy compression techniques can be applied to medical images without significantly affecting their diagnostic content. In the work of Ando et al. (1999), the decompressed medical images show no significant difference from the originals for compression ratios up to 10:1. Research studies (Slone et al. 2000, Skodras et al. 2001, Chen 2007, Choong et al. 2007) showed that, as digital medical images occupy a large amount of storage space, a compression ratio of at least 10:1 has to be achieved. Kalyanpur et al. (2000) examined the effect of JPEG and wavelet compression algorithms on medical images and concluded that there is no significant loss of diagnostic quality up to 10:1 compression. Persons et al. (2000) discussed diagnostic accuracy and reported that reconstructed medical images with a compression ratio of 9:1 show no visual degradation. Saffor et al. (2001) compared JPEG and wavelet coding and concluded that wavelets achieve higher compression efficiency than JPEG without compromising image quality. Li et al. (2001) investigated the effect of JPEG and wavelet compression algorithms on medical images and concluded that compression ratios up to 10:1 are acceptable. Hui and Besar (2002) studied the performance of JPEG2000 on medical images and showed that JPEG2000 is more acceptable than JPEG, as JPEG2000 images retain more detail.

Both lossless and lossy compression techniques are discussed in Smutek (2005) and Seeram (2006). The lossless techniques discussed achieve only modest results, with a maximum compression ratio of 3:1, while the lossy techniques with high compression ratios cause distortion in the decompressed image. The more advanced technique for compressing medical images is JPEG2000 (Krishnan et al. 2005), which combines an integer wavelet transform with EBCOT.

Asraf et al. (2006) proposed a compression technique that is a hybrid of lossless and lossy techniques, using neural network vector quantization and Huffman coding. This high-complexity technique was tested on medical images, achieving compression ratios of 5 to 10.

Chen (2007) proposed a new algorithm for medical image compression based on the SPIHT algorithm. An 8 x 8 DCT approach is adopted to perform subband decomposition, and SPIHT is then employed to organize the data. This is a block-based technique and involves considerable complexity; the author concluded that for application in the medical domain some adaptations are needed to remove blocking artifacts.

Singh et al. (2007) presented a technique for classifying DCT blocks into two categories, pure and complex, based on the variance within the block. A block with higher complexity carries more information about the image, so more of its coefficients must be retained to preserve the diagnostic information. As pure regions occupy most of the image and have minimum information content, better compression ratios are obtained; however, a blocking effect arises at high compression ratios.

All the wavelet-based techniques discussed involve multiple levels of wavelet decomposition, resulting in high computational complexity. A higher level of decomposition yields more detail coefficients that can be thresholded to obtain high compression ratios, but it leads to energy loss.
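The decomposition-level trade-off just described can be made concrete with a one-level 2D Haar decomposition, sketched below in plain Python. The filter normalization and the 4 x 4 test block are illustrative choices for this sketch, not taken from any of the works cited:

```python
def haar_2d_one_level(img):
    """One-level 2D Haar decomposition of an even-sized grayscale image.

    Returns the four subbands (LL, LH, HL, HH). For a smooth image most
    of the energy lands in LL, while the detail subbands stay near zero
    and can be thresholded for compression.
    """
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # smooth approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

# A smooth 4x4 block: the detail subbands come out (near) zero.
block = [[10, 11, 12, 13],
         [10, 11, 12, 13],
         [11, 12, 13, 14],
         [11, 12, 13, 14]]
LL, LH, HL, HH = haar_2d_one_level(block)
print(LL)   # energy concentrated here
print(HH)   # zero diagonal details for this block
```

Repeating the decomposition on LL gives the deeper multi-level pyramids used by EZW, SPIHT and JPEG2000; stopping after one level, as the thesis later does, keeps the computation cheap at the cost of fewer thresholdable detail coefficients.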

Energy retention is greater if the image is decomposed to fewer levels, but the compression attained is lower (Tahoces et al. 2008, Sadashivappa and Babu 2008). Dragan and Ivetic (2009) argued that the issue is not whether to compress medical images using lossless or lossy techniques, but rather which type of compression can be used without compromising image quality.

For medical images, only a small portion of the image contributes diagnostically useful information. Compression methods providing higher reconstruction quality for the diagnostically important regions are advisable in this situation. In region-based coding, the features of the region of diagnosis, termed the Region Of Interest (ROI), can be preserved using a lossless compression algorithm, while high compression is achieved by allowing degradation in the background region. An efficient segmentation technique (either manual or automatic) locates the ROI. Segmenting the image into regions (Osher and Sethian 1988), either automatically or manually, makes it possible for a compression algorithm to deliver different levels of reconstruction quality in different spatial regions of the image. Under this scheme, to provide higher compression, diagnostically significant regions must be preserved at high quality while degradation is allowed in the non-ROI regions; the non-ROI background still helps the viewer identify the position of the ROI within the original image (Ali et al. 2008). Hu et al. (2008) and Babu and Alamelu (2009) worked on ROI compression and explained the clinical importance of the ROI of a medical image in diagnosis.
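The region-based idea can be sketched as follows. The mask-driven quantization below is a generic illustration of differential quality (exact ROI, coarse background), not the specific coder of any of the works cited:

```python
def roi_quantize(img, mask, step=16):
    """Keep ROI pixels exact; coarsely quantize the background.

    img  : 2D list of grayscale values (0-255)
    mask : 2D list of bools, True inside the diagnostically important region
    step : quantization step applied to the non-ROI background
    """
    out = []
    for i, row in enumerate(img):
        out_row = []
        for j, v in enumerate(row):
            if mask[i][j]:
                out_row.append(v)                   # ROI: preserved losslessly
            else:
                out_row.append((v // step) * step)  # background: lossy
        out.append(out_row)
    return out

img = [[120, 121], [122, 200]]
mask = [[False, False], [False, True]]  # ROI is the bottom-right pixel
print(roi_quantize(img, mask))          # [[112, 112], [112, 200]]
```

The coarsened background compresses much better under a subsequent entropy coder (many repeated values), while every pixel inside the mask survives unchanged, which is the essence of ROI coding.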

2.3 CONCLUSION

In this chapter, an overview of the lossless and lossy image compression techniques developed by previous researchers is presented, covering the salient features of transform coding, Huffman coding, arithmetic coding, medical image coding, JPEG, JPEG2000, EZW, SPIHT and ROI coding algorithms. This literature review reveals that:

Transform methods provide greater data compression; DCT and DWT are the popular image transforms.

DCT-based compression suffers from blocking artifacts at low bit rates, and DWT-based compression is superior to DCT-based methods.

Wavelet-based compression methods such as JPEG2000, EZW and SPIHT use multiple levels of wavelet decomposition, leading to high complexity.

Arithmetic coding provides better compression ratios than Huffman coding.

Even though computational demand is a general issue, ROI-based compression is expected to outperform other techniques for medical images.

In this thesis, transform coding is adopted as it provides high compression. The proposed work uses one-level wavelet decomposition, thereby reducing the complexity, and arithmetic coding, a lossless coding technique, is adopted to increase the compression efficiency. The next chapter proposes the first compression algorithm for medical images.