Image Compression Using Wavelet Methods


Yasir S. AL-MOUSAWY*,1, Safaa S. MAHDI 1
*Corresponding author
*,1 Medical Eng. Dept., Al-Nahrain University, Baghdad, Iraq
Yasir_bio@yahoo.com, dr_safaaisoud@yahoo.com
DOI: 10.13111/2066-8201.2013.5.1.2

Abstract: This paper studies and compares the compression efficiency of two methods, wavelet-bitmap and wavelet-Huffman. Both use the wavelet transform to discard the portion of the image information that can be lost without significantly disturbing the image. Both threshold the wavelet coefficients to produce as many zero coefficients as possible for higher compression, then encode the non-zero coefficients, either with Huffman encoding or with a bitmap of the coefficient distribution produced by thresholding inside the analyzed image. The processing time is measured in MATLAB, and the results are used to compare the compression efficiency of the two methods. Different combinations of parameters and transforms have been compared. The results show that compression takes less time with the wavelet-bitmap method than with the wavelet-Huffman method, while the wavelet-Huffman method achieves the higher compression ratio. The quality of the two methods is roughly equal.

Key Words: Image compression, wavelet-bitmap, wavelet-huffman.

I. INTRODUCTION

Image compression is the application of data compression to digital images. The objective is to reduce the redundancy of the image data so that it can be stored or transmitted efficiently. Many compression algorithms have been developed; they fall into two classes, lossy and lossless. In lossless compression, the original image can be perfectly recovered from the compressed (encoded) image.
These techniques are also called noiseless, since they add no noise to the signal (image), and entropy coding, since they use statistical/decomposition techniques to eliminate or minimize redundancy. Lossless compression is used only for a few applications with stringent requirements, such as medical imaging. Lossy compression provides much higher compression ratios than lossless schemes and is widely used, since the quality of the reconstructed images is adequate for most applications: the decompressed image is not identical to the original, but reasonably close to it. This work exploits wavelet-based image compression, which is a lossy method. In this study the wavelet compression was carried out with Huffman encoding and with bitmap encoding; both reduce the massive storage capacity required, as well as the transmission time, without degrading the diagnostic content. Compression techniques can be classified by the difference between the original and the recovered image: lossy techniques such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) recover an image similar to the original and achieve high compression ratios, while lossless techniques recover the original image exactly, at the cost of lower compression efficiency.

pp. 13-18, ISSN 2066-8201
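The lossless/lossy distinction can be made concrete with a toy sketch, using Python's standard zlib as a stand-in for an entropy coder; the pixel values here are made up for the example:

```python
import zlib

# Hypothetical 8-bit "image" row used only for illustration.
pixels = bytes([12, 12, 12, 13, 200, 201, 199, 13, 12, 12])

# Lossless: the decompressed data is bit-for-bit identical to the original.
restored = zlib.decompress(zlib.compress(pixels))
assert restored == pixels

# Lossy (illustrative): coarse quantization keeps the data "reasonably close",
# but the original values can no longer be recovered exactly.
step = 8
quantized = bytes((p // step) * step for p in pixels)
max_error = max(abs(a - b) for a, b in zip(pixels, quantized))
assert quantized != pixels and max_error < step
```

The lossless round trip reproduces every byte; the quantized version only bounds the per-pixel error by the quantization step.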

A further classification of the lossy techniques can be made according to the compression algorithm involved [4, 5]. Medical image compression requires higher quality, and no lossy compression method has been adopted by the health authorities as an accepted standard. This fact, together with the grey-scale nature of radiographic images, motivates researchers to pursue new developments in this field. In our approach we have chosen a wavelet-based technique, mainly because of the acceptable results obtained in previous research [3-9]; a lossy compression method can be widely accepted by medical image specialists, mainly in MRI applications.

II. METHODS OF IMAGE COMPRESSION

In this work we compare the wavelet-Huffman method and the wavelet-bitmap method. We also compare the transmission time required for the original image and for the compressed image.

Lossy compression

1. The wavelet-Huffman method. This method has three stages:

a- A discrete two-dimensional wavelet transform [1, 2], using the extended 2D pyramidal algorithm described by Mallat [8] with the Daubechies (4) filter.

b- Coefficient thresholding and quantization. The coefficients resulting from the wavelet transform are truncated with a threshold that depends on the scale band. Thresholding the wavelet coefficients suppresses the least significant coefficients, those that do not affect the quality of the information, by setting them to zero; this opens the way for different methods of compressing the resulting redundant information (the zero coefficients). The threshold is calculated by a statistical formula in which α is a correction factor ranging from 0 to 1, N is the number of coefficients in the analyzed image, and Ce_w are the wavelet coefficients [9].
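The threshold formula itself was lost in transcription; the sketch below assumes one common statistical form, the α-scaled mean of the absolute coefficients, T = (α/N)·Σ|Ce_w|. This is an assumption for illustration, not necessarily the authors' exact formula:

```python
def statistical_threshold(coeffs, alpha):
    """Assumed form of the statistical threshold:
    T = (alpha / N) * sum(|Ce_w|), with alpha in [0, 1]."""
    n = len(coeffs)
    return alpha * sum(abs(c) for c in coeffs) / n

def hard_threshold(coeffs, t):
    """Set every coefficient with magnitude <= T to zero."""
    return [c if abs(c) > t else 0.0 for c in coeffs]

# Illustrative coefficient list, not taken from the paper.
coeffs = [9.0, -0.5, 0.25, 7.0, -0.1, 0.3]
t = statistical_threshold(coeffs, alpha=0.5)   # 0.5 * 17.15 / 6 ≈ 1.43
sparse = hard_threshold(coeffs, t)             # [9.0, 0.0, 0.0, 7.0, 0.0, 0.0]
```

Larger α suppresses more coefficients, trading reconstruction quality for a sparser (more compressible) coefficient set.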
In this paper the threshold is calculated for all sub-bands. After thresholding, the non-zero coefficients are quantized in order to increase repetition, which increases the efficiency of Huffman encoding. Removing the zero coefficients by bitmap encoding improves the compression ratio [3].

c- Application of Huffman encoding to the quantized coefficients.

2. The wavelet-bitmap method. This method shares the first two stages of the wavelet-Huffman method but differs in its use of bitmap encoding. After thresholding, zero and non-zero coefficients are obtained; the zero coefficients are those whose values are less than or equal to the calculated threshold. The idea behind the bitmap is to draw a map of the zero and non-zero coefficients; the positions of the zero coefficients are then located so that they can be eliminated [3]. The map indicates how many zero coefficients will be eliminated. The encoded map is equal in size to the original image and records the positions of the zero coefficients. The bitmap itself also needs to be reduced in size, so a run-length encoding method is used. Run-length encoding is a very simple method for sequential

data. It is very useful for repetitive data: it replaces sequences of identical symbols (pixels), called runs, with shorter symbols. The run-length code for a grey-scale image is represented by a sequence {Vi, Ri}, where Vi is the intensity of a pixel and Ri is the number of consecutive pixels with intensity Vi [7]. In our bitmap, run-length encoding is applied to both the zero and the non-zero coefficients: the zero coefficients are encoded as lengths of zeros, and the non-zero coefficients as lengths of ones. After the zero coefficients are eliminated, the bitmap holds only the lengths-of-ones code, which carries all the information needed to restore the image.

3. The DCT-Huffman method. This compression technique applies the discrete cosine transform to 8x8-pixel blocks of the original image, then quantizes the transformed coefficients with a quantization matrix built experimentally from the human visual system, which produces many zero coefficients [6]. Finally, entropy coding (Huffman encoding) is applied to the quantized coefficients.

Lossless compression

Huffman encoding. This is a general technique for coding symbols based on their statistical occurrence frequencies (probabilities). The pixels in the image are treated as symbols: symbols that occur more frequently are assigned fewer bits, while symbols that occur less frequently are assigned more bits. The Huffman code is a prefix code, meaning that no symbol's binary code is a prefix of any other symbol's binary code. Most image coding standards use lossy techniques in the earlier stages of compression and apply Huffman coding as the final step [7].
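The two encoding back ends described above can be sketched as follows. This is a minimal Python illustration, not the authors' MATLAB implementation, and the coefficient list is made up:

```python
import heapq
from collections import Counter

def bitmap_rle(coeffs):
    """Run-length encode the zero/non-zero bitmap as (bit, run) pairs:
    bit 0 marks a run of zero coefficients, bit 1 a run of non-zeros."""
    runs = []
    for c in coeffs:
        bit = 0 if c == 0 else 1
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1
        else:
            runs.append([bit, 1])
    return [tuple(r) for r in runs]

def huffman_codes(symbols):
    """Build a Huffman prefix code: frequent symbols get shorter bit strings."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in
            enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        lo[2] = {s: "0" + c for s, c in lo[2].items()}
        hi[2] = {s: "1" + c for s, c in hi[2].items()}
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
    return heap[0][2]

# Thresholded coefficients (illustrative values).
coeffs = [9, 0, 0, 0, 7, 7, 0, 9]
print(bitmap_rle(coeffs))   # [(1, 1), (0, 3), (1, 2), (0, 1), (1, 1)]
codes = huffman_codes([c for c in coeffs if c != 0])
```

The bitmap path stores only the run lengths plus the surviving non-zero values; the Huffman path instead spends time building the symbol tree, which is consistent with the timing difference the paper reports.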
III. RESULTS AND DISCUSSION

A set of 8 pictures of size 128x128 pixels is used in this research. The dataset is compressed by the two wavelet methods (wavelet-bitmap and wavelet-Huffman), measuring the PSNR, the compression ratio, and the compression time. The entropy of each input image is also calculated; the entropy is a measure of the information contained in the image. The results obtained with the wavelet-bitmap and wavelet-Huffman methods are shown in Table 1 and Table 2, respectively.

Table 1. The results of the wavelet-bitmap method

Image   Comp. time (min)   PSNR      Comp. ratio   Entropy
        0.0014599          25.6229   1.5887        6.5305
Mr12    0.0014651          29.9811   1.4907        6.228
Mr13    0.0014373          21.7292   1.413         6.6584
Mr14    0.0014656          27.731    1.6457        7.1785
Mr15    0.0014184          29.725    1.5637        6.2289
Mr16    0.0014409          27.3771   1.9535        4.5778
Mr17    0.0014272          31.3789   2.621         4.1237
Mr18    0.0014403          33.0054   2.3978        4.1551
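Metrics of the kind reported in the tables can be computed as in the following sketch; the pixel lists are illustrative, not the Mr* images:

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy in bits per pixel: H = -sum(p_i * log2(p_i))."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

orig = [10, 10, 20, 20, 30, 30, 40, 40]
recon = [10, 11, 20, 19, 30, 31, 40, 39]
print(entropy(orig))              # 2.0 (four equiprobable values)
print(round(psnr(orig, recon), 1))
```

A low-entropy image needs fewer bits per pixel on average, which is why the tables show the compression ratio rising as the entropy falls.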

As can be seen in Figure 1, which is obtained from the results of Table 1, the compression process takes a short time, and the variation in compression time across images is relatively small. This is because the compression time depends mainly on the image size, not on the entropy (the information and detail) of the image. Furthermore, the bitmap process, which eliminates the zero coefficients and encodes the zero and non-zero coefficients, takes approximately the same time for all images because the images are equal in size. The compression ratio varies inversely with the entropy, but it is not affected by the compression time, for the reason given above. We also expect the PSNR to vary inversely with the entropy; this would become clearer with more sample images.

Figure 1. The relation between PSNR, compression ratio and the entropy vs. time.

In the wavelet-Huffman method we obtained the results shown in Table 2:

Table 2. The results of the wavelet-Huffman method

Image   Comp. time (min)   PSNR      Comp. ratio   Entropy
        0.23773            25.63     2.219         7.003
Mr12    0.22379            29.717    2.193         6.5305
Mr13    0.25343            21.7536   2.1058        6.6584
Mr14    0.23149            27.7251   2.1956        7.1785
Mr15    0.2532             29.6493   2.6402        6.2289
Mr16    0.19488            27.3743   2.7384        4.5778
Mr17    0.16906            31.3816   3.1147        4.1237
Mr18    0.16722            32.9898   3.1713        4.1557

As can be seen in Figure 2, which is obtained from the results of Table 2, the compression time of the wavelet-Huffman method is longer than that measured for the wavelet-bitmap method, and the variation in compression time between any two compressed images is relatively high. This is because the wavelet-Huffman process is more complex than the wavelet-bitmap process.
The compression time of the wavelet-Huffman method depends not only on the size of the image but also on its entropy: the higher the entropy, the more complex the wavelet-Huffman process becomes. This is because wavelet-Huffman must build the symbol tree, extract the probability of each symbol, and then construct the binary code of each symbol. The symbols differ from image to image depending on the information each contains, and therefore this

process is considerably more complex than that of the wavelet-bitmap method, because the compression time of the wavelet-Huffman method is the sum of two times: the thresholding time and the time taken to build the symbol tree. Moreover, the compression ratio is affected by the compression time, for the same reason. The compression ratio varies inversely with the entropy of the image, and the PSNR can likewise be seen to vary inversely with the entropy of the image.

Figure 2. The relation of PSNR, compression ratio, and the entropy of the image vs. time.

An experiment was carried out on transmitting and receiving a digital image between two computers. The transmission time was calculated for the different compression methods described above and compared with the transmission time of the original image, as shown in Table 3:

Table 3. The difference in transmission time between the original image and the compressed image

Type of image                      Transmission time (min)
Original image                     0.25494
DCT-Huffman compressed image       0.04448
Wavelet-Huffman compressed image   0.083044
Wavelet-bitmap compressed image    0.095401
Huffman compressed image           0.088092

IV. CONCLUSIONS

It is clear that the compression ratio varies with the entropy in both methods (wavelet-Huffman and wavelet-bitmap): as the compression ratio increases, the entropy decreases. The PSNR also drops as the compression ratio increases. We can conclude that the PSNR of the two methods is roughly equal, and that the images compressed by the two methods have the same quality. The compression ratio of the wavelet-Huffman method is somewhat higher than that of the wavelet-bitmap method.
This indicates that the wavelet-bitmap method is not efficient enough, although its compression time is lower. The compression ratio of the bitmap method might be improved by choosing another wavelet filter or by increasing the level of the wavelet decomposition. The compression ratio of the wavelet-Huffman method could also be enhanced by removing the zero coefficients that result from the thresholding process.

REFERENCES

[1] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics, XLI, 909-996, 1988.
[2] O. Rioul, M. Vetterli, Wavelets and signal processing, IEEE SP Magazine, October, 14-38, 1991.
[3] J. J. Vaquero, R. J. Vilar, A. Santos, F. del Pozo, Cardiac MR imaging compression: Comparison between wavelet based and JPEG methods, Grupo de Bioingenieria y Telemedicina, Universidad Politecnica de Madrid, Spain, 2003.
[4] G. K. Wallace, The JPEG still picture compression standard, Communications of the ACM, 34, 4, 30-44, 1991.
[5] B. A. Furht, A survey of multimedia compression techniques and standards, part 1: JPEG standard, Real-Time Imaging, 1, 49-67, 1995.
[6] Ken Cabeen and Peter Gent, Image Compression and the Discrete Cosine Transform, College of the Redwoods, 2006.
[7] Sonal, Dinesh Kumar, A study of various image compression techniques, Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, 2006.
[8] S. G. Mallat, A theory for multiresolution signal decomposition: The wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 7, 674-693, 1989.
[9] Ali Al-Ataby and Fawzi Al-Naima, A modified high capacity image steganography technique based on wavelet transform, The International Arab Journal of Information Technology, vol. 7, no. 4, October 2010.