Design of Efficient Algorithms for Image Compression with Application to Medical Images


Ph.D. dissertation
Alexandre Krivoulets
IT University of Copenhagen
February 18, 2004


Abstract

This thesis covers different topics in the design of image compression algorithms. The main focus of this work is the development of efficient entropy coding algorithms, the development of optimization techniques for context modeling (with respect to the minimum code length), and the application of these methods in the design of an algorithm for compression of medical images. Specifically, we study entropy coding methods based on a technique of binary decomposition of source symbols. We show that binarization makes it possible to fit a parametric probability distribution model to a source coding algorithm, e.g., arithmetic coding, reducing the number of coding parameters to that of the distribution model. Context modeling is an essential part of an image coding algorithm and largely defines its compression performance. In the thesis, we describe a unified approach to this problem based on statistical learning methods and the minimum description length principle. In particular, we present a design of optimized models using context quantization and initialization techniques. The optimization finds a model that yields the minimum code length for a given set of training data samples. The entropy coding and context modeling methods are applied to the development of a compression algorithm intended for medical images. The algorithm allows for progressive near-lossless coding and is based on a lossy-plus-refinement layered approach. We show that this method yields better compression performance and image quality at large distortion values than the recently adopted standard JPEG-LS for lossless and near-lossless image compression. We also investigate the possibility of image reconstruction under the minimum mean squared error criterion within the proposed framework.


Contents

1 Introduction
   1.1 Motivation and main goals
   1.2 Previous work
   1.3 About this thesis

2 Source coding and image compression
   2.1 Information sources
   2.2 Source coding
   2.3 Arithmetic coding
   2.4 Universal source coding
   2.5 Compression of images

3 Source coding via binary decomposition
   3.1 Introduction
   3.2 The binary decomposition technique
   3.3 On redundancy of binary decomposition
   3.4 Binary decomposition of FSM sources
   3.5 Binary decomposition and universal coding

4 Applications of binary decomposition
   4.1 Introduction
   4.2 Generalized two-sided geometric distribution
   4.3 Efficient coding of sources with the GTSGD
   4.4 Experimental results
   4.5 On redundancy of Rice coding
   4.6 Summary

5 Context modeling for image compression
   5.1 Introduction
   5.2 Context formation
   5.3 Context model optimization
   5.4 Context initialization
   5.5 Context quantization
   5.6 Summary

6 Optimization in the JPEG2000 standard
   6.1 Introduction
   6.2 Context modeling in JPEG2000
   6.3 High-order context modeling
   6.4 Experimental results
   6.5 Summary

7 Hierarchical modeling
   7.1 Introduction
   7.2 Two-stage quantization
   7.3 Tree-structured quantization
   7.4 Optimal tree pruning
   7.5 Experimental results
   7.6 Summary

8 Compression of medical images
   8.1 Introduction
   8.2 Embedded near-lossless quantization
   8.3 Entropy coding of the refinement layers
   8.4 Experimental results
   8.5 Reconstruction with minimum MSE criterion
   8.6 Summary

Bibliography

A Bit rates for test sets of medical images
B PSNR for test sets of medical images

Chapter 1

Introduction

1.1 Motivation and main goals

Computer imaging plays a significant role in many areas, ranging from consumer digital photo albums to remote earth sensing. The growing production of images and the demands on their quality require high-performance compression methods for efficient transmission, storage, and archival.

Most image compression algorithms can be viewed as consisting of an image transformation, which decomposes an image into a sequence of descriptors, followed by entropy coding of the descriptors. Entropy coding, in essence, performs the compression. Prediction and the discrete cosine and wavelet transforms are examples of image decompositions, whereas prediction errors and transform coefficients are examples of the descriptors. They constitute an information source for the entropy coder, which encodes the sequence of source symbols according to some source model. This model is normally designed off-line using assumptions about the data to be coded. The better the model approximates the statistical properties of the data, the higher the achievable compression. The lower bound on the compression performance is defined by the entropy of the source. The design of efficient models is a task of primary interest for any compression algorithm.

Most image compression algorithms use the universal coding approach, where the model has a fixed structure and unknown parameters. The parameters are estimated on the fly during encoding. This approach adapts the model to the data statistics, which may vary for different data. On the other hand, the price of that adaptivity is an increase in the code length due to the need for implicit or explicit transmission of information about the model parameters (the so-called model cost). The higher the model order, the more parameters it involves, and the higher the model cost. Thus, there is a trade-off between the order of the model and the overhead data needed to specify the model parameters. The use of domain knowledge about the data allows this overhead information to be reduced. Finding optimal solutions to this problem is one of the goals of this thesis. In this connection, we investigate the technique of source coding via binary decomposition of symbols. The decomposition can be used as a means for model cost reduction and/or for efficient model optimization.

Another goal is the design of an efficient algorithm for compression of medical images. The design of such an algorithm differs from the design of general-purpose image compression methods due to some specific requirements. The main concern in medical imaging is the quality of the reconstructed image. There are three kinds of image compression methods: lossless, lossy, and near-lossless. Lossless compression methods reconstruct an image perfectly, but they achieve the lowest degree of compression. Lossy compression techniques are usually used for images intended for human observation in general-purpose computer systems. Such methods rely on the fact that the human visual system does not perceive the high-frequency spatial components of the image, so those components may be removed without any visible degradation. Thus, by introducing distortion into the reconstructed data, one can achieve higher compression while preserving the visual information content. The main advantage of lossy methods is that they achieve the highest degree of compression. However, distortion of the data is undesirable in some applications, e.g., in image analysis, object recognition, and others. Near-lossless compression is a lossy image coding approach in which the distortion is specified by the maximum allowable absolute difference (the tolerance value) between pixel values of the original and the reconstructed images. The method allows for rigid and robust control of errors while achieving reasonable compression performance. For that reason, near-lossless compression seems to be an attractive method for medical images. In our work, we develop an efficient algorithm that allows for progressive near-lossless compression up to the lossless mode. The extended functionality makes the algorithm more suitable for real applications. Compression of medical images also allows for efficient use of the similarity of the statistical properties of the data to design high-order source models and thus to achieve better compression performance.

1.2 Previous work

Image coding originates from the communication theory established in 1948 by Shannon in his famous paper [50]. Yet, it took years before the methods developed in this work could be used in practical algorithms for image compression. In recent decades, image compression has been extensively studied by many researchers.

The first lossy and lossless still image coding standard, JPEG [33], appeared by the end of the 1980s and had remarkable performance at that time. The standard uses a block discrete cosine transform (DCT). The DCT was shown to be very close to the optimal decomposition for the class of natural images, performing almost perfect decorrelation of image pixels, see, e.g., [35]. The standard uses an ad-hoc Huffman or arithmetic entropy coding of the transform coefficients. The version with arithmetic coding exploits binarization of the coefficients and a simple heuristic context modeling. The JPEG standard remains the most popular image compression algorithm.

The introduction of the (discrete) wavelet transform (DWT) in the middle of the 1980s [20, 30, 26] launched a new era in image compression.

Besides being good for decorrelation of image pixels, the new transform provides more functionality to the compression algorithm, such as embedded coding, where any initial part of the compressed bit stream can be used to reconstruct the image with quality in proportion to the length of this part. A benchmark method called embedded zerotree wavelet (EZW) coding was introduced in [51] and further developed in the SPIHT (Set Partitioning in Hierarchical Trees) algorithm [47]. The use of an integer-to-integer (reversible) wavelet transform [9] allowed for progressive coding up to lossless image reconstruction [8, 48, 3, 64]. The development of context modeling techniques and rate-distortion optimization led to a remarkable improvement in the compression efficiency of algorithms based on the DWT. The most sophisticated methods are the high-order embedded entropy coder of wavelet coefficients (ECECOW) [64], embedded block coding with optimized truncation (EBCOT) [55], and the compression with reversible embedded wavelets (CREW) algorithm [69], just to name a few. The EBCOT algorithm was adopted as the basis for the new standard for still image coding, JPEG2000 [56].

The best compression results for lossless image compression were obtained by algorithms based on predictive coding and context modeling of prediction residuals. For example, the algorithm CALIC (Context-based, Adaptive, Lossless Image Codec) [68] uses sophisticated context modeling and a special kind of prediction (a non-linear, gradient-adaptive predictor) to achieve the best compression performance among practical coders. The performance of CALIC turns out to be even better than that of the UCM (Universal Context Modeling) method [57], developed on the basis of a universal source coding algorithm with provable asymptotic optimality. The ideas from CALIC were employed in the LOCO-I (LOw COmplexity LOssless COmpression for Images) algorithm [59], which became the new standard for lossless image coding, JPEG-LS [60]. The standard also supports near-lossless compression.

Near-lossless compression first appeared in [10] as a method allowing much better compression performance compared with lossless coding at the price of a very small and controllable distortion of the image. It was further elaborated in [65] to achieve higher compression efficiency. The first method allowing progressive coding was proposed in [5]. The major improvements in compression performance are due to the development of new techniques for context modeling, such as context quantization [67], and the use of parametrized probability distribution models (LOCO-I).

1.3 About this thesis

The thesis covers two major problems in image compression: entropy coding and context modeling. For efficient entropy coding, we studied the use of a binary decomposition technique with application to coding of sources with a parametrized distribution. For the design of high-order context models, we developed optimization methods based on training data samples. The proposed solutions are tested in a series of experiments and applied in the design of an algorithm for near-lossless compression primarily intended for coding medical images.

The thesis is organized such that most chapters are self-contained in the sense that they cover different topics, yet within the same framework. The material of the thesis is based on the following papers:

1. A. Krivoulets, "On redundancy of coding using binary tree decomposition," in Proc. IEEE Int. Workshop on Inf. Theory, p. 200, Bangalore, India, Oct. (Chapter 3)

2. V.F. Babkin and A.G. Krivoulets, "On coding of sources with Laplacian distribution," in Proc. of Popov's Society Conference, p. 254, Moscow, Russia, May 2000. (In Russian.) (Chapter 4)

3. A. Krivoulets, "On coding of sources with two-sided geometric distribution using binary decomposition," in Proc. Data Compression Conf., p. 459, Snowbird, UT, Apr. (Chapter 4)

4. A. Krivoulets, "Efficient entropy coding for image compression," IT University of Copenhagen, Tech. Report TR, Feb. (Chapter 4)

5. A. Krivoulets, "Fast and efficient coding of low entropy sources with two-sided geometric distribution," in Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, Kingston, UK, Sep. (Chapter 4)

6. A. Krivoulets, "On redundancy of Rice coding," IT University of Copenhagen, Tech. Report TR, Sep. (Chapter 4)

7. A. Krivoulets, X. Wu, and S. Forchhammer, "On optimality of context modeling for bit-plane entropy coding in the JPEG2000 standard," in Proc. VLBV03 Workshop (LNCS 2849), Madrid, Spain, Sep. 2003. (Chapters 5, 6)

8. A. Krivoulets and X. Wu, "Hierarchical modeling via optimal context quantization," in Proc. 12th International Conference on Image Analysis and Processing, Mantova, Italy, Sep. (Chapter 7)

9. A. Krivoulets, "A method for progressive near-lossless image compression," in Proc. ICIP2003, vol. 2, Barcelona, Spain, Sep. 2003. (Chapter 8)

10. A. Krivoulets, "Progressive near-lossless coding of medical images," in Proc. 3rd International Symposium on Image and Signal Processing and Analysis, vol. 1, Rome, Italy, Sep. (Chapter 8)

A brief description of the chapters is as follows.

Chapter 2 describes the main concepts of image compression algorithms and introduces the framework of our research. In this chapter, we describe the basic principles and the place of entropy coding and context modeling in image compression algorithms.

In Chapter 3, we discuss a technique of binary decomposition of source symbols as an efficient means for entropy coding and present some properties of the technique.

Some applications of the technique are presented and discussed in Chapter 4. In this chapter, we introduce a probabilistic model of sources that often occurs in image compression and describe efficient methods for coding such sources using the binarization technique. Binarization reduces the number of coding parameters, which is usually equal to the size of the source alphabet, to the number of probability distribution parameters, which is normally much lower. Binarization also simplifies the context model optimization described in the next chapter.

Chapter 5 presents the basic principles of high-order context modeling for image compression. The chapter covers two main topics of context model design: context formation and context model optimization. The latter is based on context initialization and quantization using prior statistics of the data.

In Chapter 6, we describe an application of the high-order context modeling techniques developed in Chapter 5 to the optimization of the context models adopted in the JPEG2000 standard. The models are used in the bit-plane entropy coding of wavelet transform coefficients and essentially define the compression performance of the standard. In this chapter, we demonstrate the near-optimality of the models adopted in JPEG2000 for the given context template.

Chapter 7 extends the ideas of the context quantization technique to build a hierarchical set of models intended for a better fit to the actual data.

Finally, in Chapter 8, we present an algorithm for near-lossless compression intended for medical images, which allows for progressive coding and reconstruction. In the algorithm design, we exploited the methods developed in Chapters 3, 4, and 5. We show that the resulting algorithm allows for more efficient coding (in terms of both compression performance and functionality) than the recently adopted standard JPEG-LS for lossless and near-lossless image compression.


Chapter 2

Source coding and image compression

In this chapter we introduce the basic theory, concepts, and definitions of source coding and consider the general structure of image compression algorithms. This background will be used throughout the thesis.

2.1 Information sources

By an information source we mean a mechanism generating discrete random variables $u$ from a countable (often finite, but possibly infinite) set $A = \{a_0, a_1, \ldots, a_{m-1}\}$. The set $A$ is called the source alphabet, and $m = |A|$ defines the alphabet size (hereafter, $|\cdot|$ denotes the cardinality of a set or the length of a string). Let $u_t$ be the random variable generated at time instance $t$. A string of source symbols of length $n$ (a source message) is denoted $u_1^n = u_1 u_2 u_3 \ldots u_n$; we write $u_1^0$ for the empty string. Let $A^n$ be the set of all possible messages of length $n$ over the alphabet $A$: $A^n = \{u_1^n\}$.

The simplest source model generates independent and identically distributed (i.i.d.) symbols according to the probability distribution $P(A) = \{p(a), a \in A\}$, which defines the model parameters (the probabilities of the source symbols). Such a model is called a memoryless source. The distribution $P(A)$ does not depend on the past symbols.

A general source model is the finite state machine (FSM) model, which is defined by a (finite) set of states $S = \{s\}$, the source alphabet $A$, the set of conditional probability distributions $\{P_s(A), s \in S\}$ with $P_s(A) = \{p(a|s), a \in A\}$, and the initial state $s_1$. At each time instance $t$ the source generates a symbol $u_t \in A$ and changes its state from $s_t$ to $s_{t+1}$ according to the state transition rule

$$s_{t+1} = F(s_t, u_1^t). \qquad (2.1)$$

The function $F(\cdot)$ will be referred to as the model structure. The set of probability distributions $\{P_s(A), s \in S\}$ specifies the set of model parameters. A message $u_1^n$ generated by an FSM source can be decomposed into $|S|$ subsequences generated at the states $s \in S$, which are i.i.d. sequences drawn according to the distributions $P_s(A)$.
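To make the FSM model concrete, the following minimal sketch (our illustration, not part of the thesis; all names are hypothetical) simulates a two-state binary source whose transition rule, in the Markov form (2.2) introduced below, is simply the last emitted symbol:

```python
import random

def generate_fsm(n, P, F, s1, seed=0):
    """Simulate an FSM source: at state s, draw u ~ P[s], then move to the
    next state via the transition rule s' = F(s, u)."""
    rng = random.Random(seed)
    s, msg = s1, []
    for _ in range(n):
        symbols, probs = zip(*P[s].items())
        u = rng.choices(symbols, probs)[0]
        msg.append(u)
        s = F(s, u)
    return msg

# A two-state binary source: state 0 favors 0s, state 1 favors 1s;
# the next state is the last symbol (an order-1 Markov chain).
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
msg = generate_fsm(20, P, F=lambda s, u: u, s1=0)
print(''.join(map(str, msg)))  # runs of 0s and 1s, as the states persist
```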

If the state $s_{t+1}$ is uniquely defined by the previous state $s_t$ and the symbol $u_t$ generated at that state, i.e.,

$$s_{t+1} = F(s_t, u_t), \qquad (2.2)$$

then the model is called a Markov source. The structure function $F(\cdot)$ of a Markov source can be described by a directed graph of state transitions, where nodes define the states and edges correspond to the source symbols and define the state transitions. Let $P_{ij}$ be the probability of entering state $s = j$ from state $s = i$:

$$P_{ij} = \Pr(s_{t+1} = j \mid s_t = i). \qquad (2.3)$$

If the state transition rule $F(\cdot)$ is such that the probability (2.3) depends only on the previous state, i.e., $\Pr(s_{t+1} \mid s_t, s_{t-1}, \ldots) = \Pr(s_{t+1} \mid s_t)$, then the sequence of states forms a homogeneous Markov chain. If the state is uniquely defined by the $o$ last source symbols $u_{t-o+1}^t$, then the source is called an $o$-order Markov chain. For the $o$-order Markov chain model we have

$$p(a|s) = p(a \mid u_{t-o+1}^t)$$

and

$$s_{t+1} = F(u_{t-o+1}^t),$$

where

$$F(u_{t-o+1}^t) : u_{t-o+1}^t \mapsto s \in \{1, 2, \ldots, |A|^o\}.$$

The FSM model is a good approximation to most practical sources, and it is therefore widely used in compression algorithms.

The main property of an information source is its entropy, defined as

$$H = \lim_{n \to \infty} \frac{1}{n} H(A^n), \qquad (2.4)$$

where

$$H(A^n) = -\sum_{u_1^n \in A^n} p(u_1^n) \log_2 p(u_1^n) \qquad (2.5)$$

and $p(u_1^n)$ is the probability of the string $u_1^n$.
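As a quick numeric illustration of (2.4) and (2.5) (ours, not from the thesis): for a memoryless source $p(u_1^n)$ factorizes, so $H(A^n) = nH$ and the per-symbol limit in (2.4) is just the single-letter entropy.

```python
import math

def entropy(p):
    """Single-letter entropy -sum_a p(a) log2 p(a), cf. (2.5) for n = 1."""
    return -sum(pa * math.log2(pa) for pa in p if pa > 0)

# A 4-ary memoryless source; since the message probability factorizes,
# H(A^n) = n * H and the per-symbol limit (2.4) equals this value.
P = [0.5, 0.25, 0.125, 0.125]
print(entropy(P))  # 1.75 bits per symbol
```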

2.2 Source coding

Let $B = \{0, 1\}$ be the binary alphabet and $B^*$ the set of all words over $B$. Source coding is a mapping that assigns to each source message $u_1^n \in A^n$ a codeword $\varphi(u_1^n) \in B^*$ such that $u_1^n$ can be uniquely reconstructed. A set of codewords $\{\varphi(u_1^n), u_1^n \in A^n\}$ is called a prefix code if no codeword is a prefix of any other in the set. The use of a prefix code guarantees unique decipherability. The codeword lengths $|\varphi(\cdot)|$ of any prefix code must satisfy the Kraft inequality:

$$\sum_{u_1^n \in A^n} 2^{-|\varphi(u_1^n)|} \le 1. \qquad (2.6)$$

Conversely, if a set of codeword lengths satisfies (2.6), then there exists a prefix code with these codeword lengths. The set $Q(A^n) = \{q(u_1^n) = 2^{-|\varphi(u_1^n)|}, u_1^n \in A^n\}$ is called the coding probability distribution on $A^n$. Since the codeword lengths are of main concern, this distribution is often useful in source coding analysis.

Given the source model, the key challenges in source coding are the choice of the codeword lengths and the construction of the codewords. Normally the codewords are chosen to minimize the description length $|\varphi(u_1^n)|$, i.e., to compress the data. Thus, the terms source coding and data compression are often interchanged in the literature, even though source coding has a broader meaning. In what follows, we follow this tradition and use both terms in the same sense.

Let

$$\bar{L} = \sum_{u_1^n \in A^n} p(u_1^n)\,|\varphi(u_1^n)| \qquad (2.7)$$

be the average code length for the set $A^n$. The source coding theorem ([16], Theorem 3.3.1) establishes the lower bound on $\bar{L}$ for lossless coding:

$$\bar{L} \ge H(A^n), \qquad (2.8)$$

with equality iff $|\varphi(u_1^n)| = -\log_2 p(u_1^n)$. The theorem also states that there exists a prefix code such that $\bar{L} < H(A^n) + 1$. Thus, the minimum possible code length for the message $u_1^n$ is

$$|\varphi(u_1^n)| = -\log_2 p(u_1^n). \qquad (2.9)$$

This quantity is called the self-information of the message $u_1^n$. This is the basic idea of source coding: data compression is possible by assigning shorter codewords to more probable messages (symbols) and longer codewords to less probable ones. Maximum compression is achieved by choosing the codeword lengths equal to minus the logarithm of the probability of a message.

The main property of a code is its redundancy. Different redundancy measures are defined in source coding. The two measures used later in the thesis are the average redundancy and the individual redundancy,

$$R_a = \bar{L} - H(A^n), \qquad (2.10)$$

$$R_i = |\varphi(u_1^n)| + \log_2 p(u_1^n), \qquad (2.11)$$

respectively. The task of codeword construction can be solved by using arithmetic coding.
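The Kraft inequality (2.6) and the self-information (2.9) are easy to check numerically; a small sketch (our illustration, not part of the thesis):

```python
import math

def kraft_sum(lengths):
    """Left-hand side of the Kraft inequality (2.6)."""
    return sum(2.0 ** -l for l in lengths)

# Codeword lengths of the prefix code {0, 10, 110, 111}.
print(kraft_sum([1, 2, 3, 3]))  # 1.0 -> a complete prefix code

def self_information(msg, p):
    """-log2 p(u_1^n) for an i.i.d. message, cf. (2.9)."""
    return -sum(math.log2(p[u]) for u in msg)

p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(self_information("abac", p))  # 1 + 2 + 1 + 2 = 6 bits
```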

2.3 Arithmetic coding

Arithmetic coding is a method for sequential calculation of the codeword $\varphi(u_1^n)$ for the source message $u_1^n$. It is based on the unpublished Elias algorithm described by Abramson [2] and Jelinek [22]. The first practical implementations are due to Rissanen [43] and Pasco [32], who solved the finite-precision problem, and Witten et al. [61], who made it popular by publishing the C code of their implementation.

In arithmetic coding, the codeword is recursively calculated as a cumulative coding probability of the string $u_1^n$:

$$\varphi(u_1^n) = \sum_{a < u_1} q(a) + q(u_1)\sum_{a < u_2} q(a|u_1) + \cdots + q(u_1^{n-1})\sum_{a < u_n} q(a|u_1^{n-1}), \qquad (2.12)$$

where $\{q_t(a|u_1^t), a \in A, t = 0, 1, \ldots, n-1\}$ is a sequence of conditional coding probability distributions satisfying

$$q_t(a|u_1^t) = \frac{q(u_1^t a)}{q(u_1^t)}, \qquad (2.13)$$

such that

$$\sum_{a \in A} q_t(a|u_1^t) \le 1, \qquad (2.14)$$

$$q(u_1^n) = \prod_{t=0}^{n-1} q(u_{t+1}|u_1^t). \qquad (2.15)$$

The calculations are assumed to have infinite precision, resulting in ideal arithmetic coding. The main property of ideal arithmetic coding is given by the following theorem, which is a slightly modified version of Theorem 1 from [52].

Theorem 1. Given a sequence of coding distributions $\{q_t(a|u_1^t), a \in A, t = 1, 2, \ldots, n\}$ satisfying (2.13) and (2.14), an arithmetic coder achieves codeword lengths

$$|\varphi(u_1^n)| < -\log_2 q(u_1^n) + 2, \quad u_1^n \in A^n. \qquad (2.16)$$

The codewords form a prefix code.

In practice, calculations are performed with finite precision. In this case, upper bounds on the coding redundancy are given by the following theorem.

Theorem 2. Let $r$ bits be used in the binary representation of the coding probabilities $q(a|\cdot), a \in A$, and let registers of $g \ge r + 2$ bits be used for the calculations. Then

$$R_a \le mn(r + \log_2 e)\,2^{-(g-2)} + 2, \qquad (2.17)$$

$$R_i \le g + (n-1)\,2^{r-g}. \qquad (2.18)$$
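The recursion (2.12) amounts to narrowing a subinterval of $[0, 1)$ whose width equals $q(u_1^n)$; the sketch below (our illustration of ideal arithmetic coding, with floating point standing in for the finite-precision registers of Theorem 2) encodes an i.i.d. message into a codeword whose length matches the bound (2.16):

```python
import math

def arith_encode(msg, p, alphabet):
    """Ideal arithmetic coding of an i.i.d. message: compute the interval
    [low, low + width) of u_1^n, then emit a codeword of
    ceil(-log2 width) + 1 bits whose value falls inside the interval."""
    low, width = 0.0, 1.0
    for u in msg:
        cum = 0.0
        for a in alphabet:        # cumulative probability of symbols a < u
            if a == u:
                break
            cum += p[a]
        low += width * cum
        width *= p[u]
    n_bits = math.ceil(-math.log2(width)) + 1
    code = math.ceil(low * 2 ** n_bits)   # integer whose scaled binary
    return format(code, f'0{n_bits}b')    # expansion lies in the interval

p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(arith_encode("abac", p, "abc"))  # 7-bit codeword, 6 bits self-information
```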

[Figure 2.1: A block chart of source coding using an arithmetic coder. In both the encoder and the decoder, a source model supplies the coding probabilities $q(u_{t+1}|u_1^t)$ to an arithmetic coder, which maps the message $u$ to the codeword $\varphi(u)$ and back.]

This theorem is essentially a compilation of Theorem 4 from [46], which establishes the bound on $R_a$, and the estimate on $R_i$ given in [41]. It is clear from the theorem that, even with a finite-precision arithmetic implementation, the coding redundancy is usually negligible.

Using arithmetic coding, we can separate the problem of assigning the coding probabilities $q(u_{t+1}|u_1^t)$ according to some chosen criterion (different criteria are considered in the next section) from the codeword construction. Source coding using an arithmetic coder is schematically represented in Figure 2.1 [1], where the source model does the job of sequentially assigning the coding probabilities $q(u_{t+1}|u_1^t)$ and the arithmetic coder performs the sequential calculation of the corresponding codeword.

2.4 Universal source coding

If the source model is known, then the choice of the coding distribution $q(a|\cdot) = p(a|\cdot), a \in A$, allows a codeword for the message $u_1^n$ to be calculated using arithmetic coding, exceeding the ideal code length by at most 2 bits (disregarding the arithmetic precision problem). In practice, however, the source parameters (or even the underlying source model) are usually not known in advance or may vary for different data.

In universal coding, it is assumed that the source belongs to some predefined set of models $\Omega = \{\omega\}$. A code is designed to perform well for all, or most, of the models in the set. The set can be simply a parametric probabilistic set of sources having the same model structure (e.g., Markov sources with the same state transition rule), or it may be a double mixture of sources with different structures and sets of parameters [11].

Let $\varphi_\Omega(A^n) \in B^*$ be a prefix code on $A^n$ used for all sources in the set $\Omega$, and let $R\{\varphi_\Omega(A^n)\}$ be a measure of performance (redundancy) of the code such that

$$R\{\varphi_\Omega(A^n)\} \ge 0, \qquad (2.19)$$

with equality when the set $\Omega$ contains only a single element, i.e., when the source is known (the source structure and the parameters are given).

Thus, the redundancy of universal coding is due only to the lack of knowledge about the source. Let

$$\hat{\varphi}_\Omega = \arg\inf_{\varphi} R\{\varphi_\Omega(A^n)\} \qquad (2.20)$$

and

$$\hat{R} = R\{\hat{\varphi}_\Omega(A^n)\}. \qquad (2.21)$$

The code $\hat{\varphi}_\Omega$ is said to be universal w.r.t. the model set $\Omega$ if

$$\hat{R} \to 0 \qquad (2.22)$$

as $n \to \infty$ [11]. There are two main measures used in universal coding, based on the average and the individual redundancy. Let

$$R_a^n(\varphi_\Omega, \omega) = \sum_{u_1^n \in A^n} p(u_1^n|\omega)\,|\varphi_\Omega(u_1^n)| - H(A^n|\omega) \qquad (2.23)$$

and

$$R_i^n(\varphi_\Omega, \omega) = |\varphi_\Omega(u_1^n)| + \log_2 p(u_1^n|\omega) \qquad (2.24)$$

be the average and individual redundancy of the code $\varphi_\Omega$ for the source $\omega$, respectively. Then the measures of universal coding are defined by the maximum average and individual per-symbol redundancy,

$$R_a\{\varphi_\Omega(A^n)\} = \sup_{\omega \in \Omega} \frac{1}{n} R_a^n(\varphi_\Omega, \omega), \qquad (2.25)$$

$$R_i\{\varphi_\Omega(A^n)\} = \sup_{\omega \in \Omega} \frac{1}{n} R_i^n(\varphi_\Omega, \omega). \qquad (2.26)$$

In general, the use of different criteria results in different codes. The criterion (2.26) was first proposed in [53, 54]. Clearly, it is stronger than (2.25), and a code $\varphi_\Omega$ with good properties according to the criterion (2.26) also behaves well w.r.t. the criterion (2.25).

The lower bound on the convergence (2.22) w.r.t. both criteria for the parametric sets of memoryless and FSM sources is given by [44, 52]

$$\hat{R} \ge \frac{K}{2n}\log_2 n + O\!\left(\frac{1}{n}\right), \qquad (2.27)$$

where $K$ is the number of free parameters. For memoryless sources $K = m - 1$, and for FSM sources with $|S|$ states $K = |S|(m-1)$ ($m$ is the alphabet size).

Coding of an FSM source using the sequential coding probability distribution

$$q_{t+1}(a|s) = \frac{\vartheta(a|u_1^{t_s}(s)) + \alpha_a(s)}{t_s + \sum_{a' \in A} \alpha_{a'}(s)}, \qquad (2.28)$$

combined with arithmetic coding, where $\vartheta(a|u_1^{t_s}(s))$ is the number of occurrences of the symbol $a$ in the subsequence $u_1^{t_s}(s)$ corresponding to the state $s$ and $t_s$ is the length of that subsequence, achieves the optimal convergence rate (2.27). The values $\{\alpha_a(s) > 0, a \in A, s \in S\}$ define prior distributions on the source parameters $\{P_s(A), s \in S\}$. If nothing is known about the $P_s(A)$, then the best choice is $\alpha_a(s) = \frac{1}{2}, a \in A, s \in S$; see, e.g., [23, 54]. For memoryless sources, (2.28) reduces to

$$q_{t+1}(a) = \frac{\vartheta(a|u_1^t) + \alpha_a}{t + \sum_{a' \in A} \alpha_{a'}}. \qquad (2.29)$$
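The estimator (2.28) with $\alpha_a(s) = \frac{1}{2}$ is the Krichevsky-Trofimov rule; a minimal sketch (ours, not from the thesis) of the memoryless case (2.29) applied adaptively to a binary string:

```python
import math

def kt_probability(counts):
    """Sequential estimate (2.29) with alpha_a = 1/2 (Krichevsky-Trofimov):
    q(a) = (count(a) + 1/2) / (t + m/2)."""
    t, m = sum(counts.values()), len(counts)
    return {a: (c + 0.5) / (t + m / 2) for a, c in counts.items()}

# Adaptive ideal code length of a binary string: the probability is
# estimated on the fly and the counts are updated after each symbol.
counts = {0: 0, 1: 0}
bits = 0.0
for b in [0, 0, 1, 0, 0, 0, 1, 0]:
    bits += -math.log2(kt_probability(counts)[b])
    counts[b] += 1
print(round(bits, 2))  # total ideal code length in bits
```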

2.5 Compression of images

A gray-scale digital image[1] is a 2-dimensional array of bounded integer values (image pixels) $v[y, x]$, $1 \le x \le X < \infty$, $1 \le y \le Y < \infty$, where $y$ and $x$ define the row and column coordinates, respectively. The values $X$ and $Y$ define the size of the image. The range of pixel values is normally defined in terms of the number of bits $k$ required to represent all image pixels. Thus, for a $k$-bit image, the pixel values lie in the range[2] $v[y, x] \in [0, 2^k - 1]$. Most general-purpose images use an 8-bit representation; medical images often use higher bit depths. An example of an image is shown in Figure 2.2, where the small squares represent pixels and the gray scale reflects the pixel values. In its original representation, an image takes $X \cdot Y \cdot k$ bits. Image compression aims at reducing this number by using source coding techniques.

An image compression system can be described in terms of four functional blocks: image transformation, quantization, encoder model (not to be confused with the source model introduced in Section 2.2), and entropy coder, as shown in Figure 2.3, where the mandatory part is the entropy coder. (In general, the image pixels could be fed directly to the entropy coder; however, in most applications this approach would lead to inferior compression performance due to the significant spatial correlation of image pixels.)

The image transform converts an image into a sequence of descriptors, which form another (abstract) representation of the image. The most used transformation techniques are prediction and discrete orthogonal transforms. The transformations exploit the fact that image pixels typically have substantial correlation with their neighbors. Predictive coding uses an auto-regressive model, where a weighted combination of past image pixels is used to predict the value of the next pixel. The difference values (residuals) between the estimated (predicted) and original pixel values define the sequence of descriptors. The idea of an orthogonal transformation is that the image is represented as a linear combination of basis functions (waveforms); the transform coefficients are the weighting factors of these functions. If the transform coefficients are real values, then quantization of the coefficients is a necessary step of image coding.

[1] In the thesis, we deal only with 2-D images.
[2] Without loss of generality, we assume that pixels take only non-negative values.

[Figure 2.2: Example of a gray-scale 8-bit image; the small squares represent the pixels $v[y, x]$ and the gray scale reflects the pixel values.]

The zero-order entropy of prediction residuals and (quantized) transform coefficients is much lower than that of the image pixels. Furthermore, using an orthogonal transformation, an image can be represented by a few transform coefficients. All this significantly reduces the amount of data to be stored or transmitted. The most used orthogonal transformations for compression are the discrete cosine and wavelet transforms (DCT and DWT, respectively) [36].

The basis functions of the DCT are asymptotically optimal for decorrelation of a first-order 1-dimensional stationary Markov process. Nevertheless, the 2-D separable transformation still has good decorrelation properties and allows for high compression capability (retaining most information in a few transform coefficients). There exist algorithms for its fast calculation. The DCT is used in many image and video compression algorithms; examples are the still image coding standard JPEG (Joint Photographic Experts Group) [33] and the video coding standard MPEG (Moving Picture Experts Group) [25, 49].

The wavelet transform has superior energy compaction due to its comparatively short basis functions. It fits better to non-stationary signals like real images. Yet the main advantage of the DWT is its inherent ability for multiresolution representation of the signal, which adds useful functionality to a compression algorithm. One more benefit of the DWT is that there are reversible integer-to-integer transformation algorithms [9]. Using embedded quantization of the transform coefficients, such a transformation allows for efficient progressive-quality and spatial-resolution representation up to perfect reconstruction of the original image.
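The decorrelation effect described above is easy to demonstrate; the following sketch (our illustration, using a deliberately simple mean predictor rather than the adaptive predictors of coders like CALIC or JPEG-LS) compares the zero-order entropy of the pixels of a smooth synthetic image with that of its prediction residuals:

```python
import math
from collections import Counter

def entropy0(values):
    """Zero-order (empirical) entropy in bits per sample."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def residuals(img):
    """Predict v[y][x] by the mean of the left and upper neighbors and
    return the prediction errors (the descriptors of predictive coding)."""
    res = []
    for y in range(len(img)):
        for x in range(len(img[0])):
            left = img[y][x - 1] if x > 0 else 0
            up = img[y - 1][x] if y > 0 else 0
            res.append(img[y][x] - (left + up) // 2)
    return res

# A smooth synthetic "image": a diagonal ramp.
img = [[(x + y) % 256 for x in range(64)] for y in range(64)]
print(entropy0([v for row in img for v in row]))  # high: many pixel values
print(entropy0(residuals(img)))                   # low: residuals cluster near 0
```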

[Figure 2.3: An image compression system. The encoder consists of an image transform, quantization, and an encoder model that maps the descriptors to source symbols for the entropy coder, which produces the compressed data.]

The DWT is exploited in the new standard for compression of still images, JPEG2000 [56]. Objective and subjective tests show that the DWT achieves higher compression performance than the DCT. However, the DWT is normally more time and memory consuming.

The transformation produces almost uncorrelated data and makes the subsequent quantization step more efficient, ultimately resulting in better compression. Quantization of the descriptors (prediction errors, transform coefficients) introduces distortion into the reconstructed image and results in lossy compression. Lossy compression allows for higher compression rates than lossless coding. Using an appropriate quantization rule, the quality of the reconstructed image and the compression rate can be efficiently managed and controlled. The theoretical principles of this trade-off are given by rate-distortion theory, see, e.g., [7, 16]. Quantization of the DCT or DWT coefficients yields an approximation of the original image in the $L_2$ sense. Quantization of prediction errors results in so-called near-lossless compression, where the distortion is specified by the maximum absolute difference between the original and reconstructed image pixels; in the literature it is also called $L_\infty$-constrained lossy coding [65].

The encoder model maps the descriptors into source symbols for the entropy coder. This may be done explicitly or implicitly. An example of explicit mapping is the (old) JPEG standard, where the sequences of quantized high-frequency transform coefficients are converted into blocks, which are then coded using a Huffman code. Bit-plane coding of wavelet transform coefficients in the JPEG2000 standard is an example of implicit mapping, where the sequence of binary symbols is the information source for the entropy coder. In some coders the descriptors themselves (transform coefficients or prediction residuals) constitute the input symbols for the entropy coder.

The transformation alone does not compress the data. On the contrary, it often expands the data, since the transform coefficients, even quantized, may take more bits to represent than the pixels themselves. The encoder model merely translates the descriptors into a representation more convenient for the entropy coder; it does not change the amount of description information. The compression is performed solely by the entropy coder. That is why developing an efficient entropy coder is an important step in the design of an image compression algorithm.

Chapter 3

Source coding via binary decomposition

In this chapter, we study source coding using binary decomposition of source symbols and derive some of its properties.

3.1 Introduction

In Chapter 2 we introduced the general structure of image compression algorithms, consisting of an image transformation and entropy coding of the description symbols (descriptors). The sequence of descriptors constitutes an information source for the entropy coder. In general, the definition of the information source in compression algorithms (the encoder model) is left to the algorithm designer. It can be the sequence of descriptors itself. The source symbols can also be represented by blocks of descriptors, as implemented in the JPEG standard for coding the AC coefficients of the DCT [33]. The grouping of symbols is called alphabet extension [42]. It is commonly used in combination with Huffman coding. However, it was shown in [42] that any source can be coded without alphabet extension using an arithmetic coder.

An information source can also be reduced to a binary source via binary decomposition of the source symbols. Binary decomposition combined with binary arithmetic coding is a well-known method for coding $m$-ary sources, see, e.g., [34, 24, 21]. The use of binary decomposition of (non-binary) source symbols has a number of advantages over conventional $m$-ary coding. Binary arithmetic coding is much simpler for hardware and software implementation than $m$-ary arithmetic coding, especially if $m \gg 2$. On the other hand, one has to encode more than one binary event for each input symbol. However, using an appropriate decomposition tree and binary coding technique, the method may result in faster and/or more efficient entropy coding. Another advantage of binarization is the possibility of easy optimization of context models for conditional entropy coding, as will be described in Chapter 5. In this chapter, we formally introduce the technique and present some of its properties.

3.2 The binary decomposition technique

Consider a memoryless source with an alphabet $A = \{a_0, a_1, \ldots, a_{m-1}\}$ and a probability distribution of source symbols $P(A) = \{p(a), a \in A\}$. Let $B = \{0, 1\}$ denote the binary alphabet, and let $A^n$, $B^n$ be the sets of all words of length $n$ over the alphabets $A$ and $B$, respectively. As usual, a source message of length $n$ is denoted $u_1^n = u_1 u_2 \ldots u_n$, $u \in A$.

Let $T_m$ be the set of proper and complete binary trees with $m$ terminating nodes (leaves) $\chi_0, \chi_1, \ldots, \chi_{m-1}$, and let $\Lambda = \{\eta_j, j = 0, 1, \ldots, m-2\}$ be the set of $m - 1$ internal nodes of a tree $\tau \in T_m$. Let the tree $\tau$ be assigned to the source $A$ in such a way that to each symbol $a_k$, $k = 0, 1, \ldots, m-1$, of the source there corresponds a leaf $\chi_k$ of the tree $\tau$:

$$\tau : a_k \mapsto \chi_k. \qquad (3.1)$$

Then the source symbol $a_k$ is represented by a string of binary decisions, namely the path from the root to the leaf $\chi_k$. The $m$-ary (memoryless) source generating a message $u_1^n$ of length $n$ over the alphabet $A$ can now be considered as a binary Markov source, modeled by the tree $\tau$, generating a binary sequence $b_1 b_2 \ldots = b_1^{n'}$ of some length $n'$, where $b \in B$. Each node $\eta \in \Lambda$ of the tree corresponds to a state of the Markov source, and the tree uniquely defines the corresponding directed graph of state transitions. The initial state is defined by the root node. At each state (node), the source generates symbol 0 or 1 with probability distribution $p(0|\eta)$ and changes its state (possibly to the same state). The distributions $\{p(0|\eta), \eta \in \Lambda\}$ are uniquely defined by the probability distribution $P(A)$ of the alphabet.

Such a binary source can be encoded by conventional methods for coding Markov sources. Given the model and the initial state, the binary sequence $b_1^{n'}$ is decomposed into $m - 1$ subsequences $\{b_1^{n_\eta}(\eta), \eta \in \Lambda\}$, one generated at each state. The subsequences are, in essence, sequences of i.i.d. binary symbols drawn with probability $p(0|\eta)$. They can be effectively encoded using some binary coding technique, e.g., arithmetic coding [61] or Golomb run-length coding [19]. (Note that for an arithmetic coder there is no need for an explicit decomposition of the binary sequence into the subsequences corresponding to the states: the arithmetic coder just uses the conditional probabilities $p(0|\eta)$ to calculate a codeword for the whole binary sequence.)

The main parameter of a decomposition tree is the average number of binary coding operations per source symbol,

$$\bar{n} = \sum_{a \in A} p(a)\,n(a), \qquad (3.2)$$

where $n(a)$ is the number of binary decisions required to code $a$ (the length of the binary path to the symbol $a$). This parameter defines the coding efficiency (in terms of both redundancy and speed, as discussed in the following sections). The minimum $\bar{n}$ is achieved when the tree is a Huffman tree for the source. The use of a binary decomposition assumes that the decomposition tree is fixed[1] during encoding, and compression is performed by a binary coding technique.

[1] Otherwise, one would arrive at a (dynamic) Huffman coding technique and there would be no need for binary coding (at least for sources with entropy larger than 1 bit).
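A minimal sketch (ours, not part of the thesis) of the mapping (3.1): symbols are assigned the leaves of a Huffman tree, each symbol is emitted as its root-to-leaf decision string, and the average number of binary decisions per symbol $\bar{n}$ of (3.2) is computed. For the dyadic distribution below, $\bar{n} = H = 1.75$.

```python
import heapq

def huffman_paths(p):
    """Build a Huffman tree for distribution p (dict symbol -> prob) and
    return the root-to-leaf binary path of each symbol, cf. (3.1)."""
    heap = [(pa, [a]) for a, pa in p.items()]  # (prob, symbols in subtree)
    paths = {a: '' for a in p}
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, s0 = heapq.heappop(heap)           # merge the two least
        p1, s1 = heapq.heappop(heap)           # probable subtrees
        for a in s0: paths[a] = '0' + paths[a]
        for a in s1: paths[a] = '1' + paths[a]
        heapq.heappush(heap, (p0 + p1, s0 + s1))
    return paths

p = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
paths = huffman_paths(p)
n_bar = sum(p[a] * len(paths[a]) for a in p)  # average decisions (3.2)
print(paths, n_bar)  # m - 1 = 3 internal nodes; n_bar = 1.75
```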

3.3 On redundancy of binary decomposition

In this section, we derive an upper bound on the coding redundancy of the binarization technique. We define the redundancy $R$ as the difference between the average code length per source symbol and the entropy of the source:

$$R = \frac{1}{n}\sum_{u_1^n \in A^n} p(u_1^n)\,|\varphi(u_1^n)| - H, \qquad (3.3)$$

where $|\varphi(u_1^n)|$ denotes the codeword length for the message $u_1^n$ (see Chapter 2) and $H = -\sum_{a \in A} p(a)\log_2 p(a)$.

Let us allow the use of a different binary coding technique at each node. (For example, if some subsequence has a probability of a binary symbol close to 0.5, then, for simplicity of implementation, this subsequence may be put directly into the output bit stream.) If the probabilities of the source symbols are unknown, the subsequences can be coded using some universal or adaptive coding technique (e.g., binary arithmetic coding with the symbol probability estimate given by (2.29)).

Let $n_\eta$ be the length of the binary sequence generated at the state $\eta$. We define the average coding redundancy per binary symbol at the state $\eta$ as

$$\rho_\eta = \frac{1}{n_\eta}\sum_{b_1^{n_\eta} \in B^{n_\eta}} p(b_1^{n_\eta})\,|\varphi_b(b_1^{n_\eta})| - H_\eta, \qquad (3.4)$$

where $|\varphi_b(b_1^{n_\eta})|$ is the code length of a binary string $b_1^{n_\eta}$ generated with probability $p(b_1^{n_\eta})$, and $H_\eta$ is the entropy of the binary symbols generated at the state $\eta$:

$$H_\eta = -p(0|\eta)\log_2 p(0|\eta) - (1 - p(0|\eta))\log_2(1 - p(0|\eta)).$$

The following theorem establishes the relationship between the decomposition tree $\tau$, the (redundancy of the) coding techniques used at the nodes, and the resulting redundancy $R$. The theorem also helps to establish some interesting properties of the method, which are stated as Corollaries 1-4.

Theorem 3. Let the source generate a message of length $n$ of symbols from an alphabet $A = \{a_0, a_1, \ldots, a_{m-1}\}$, and let this message be sequentially coded using the binary tree decomposition technique described above. Then the redundancy is

$$R = \bar{n}\sum_{\eta \in \Lambda} \pi_\eta \rho_\eta, \qquad (3.5)$$

where $\bar{n}$ and $\rho_\eta$ are defined by (3.2) and (3.4), and $\pi_\eta = n_\eta/(\bar{n}n)$ can be viewed as the probability of the state $\eta$.

Remark: In this formula, $\bar{n}$ and $\pi_\eta$ are defined by the decomposition tree, whereas $\rho_\eta$ is determined by the binary coding techniques used at the nodes.

Proof. The proof is straightforward. We rewrite (3.5) as follows:

$$R = \bar{n}\sum_{\eta \in \Lambda} \pi_\eta \rho_\eta = \bar{n}\sum_{\eta \in \Lambda} \pi_\eta \left(\frac{1}{n_\eta}\sum_{b_1^{n_\eta} \in B^{n_\eta}} p(b_1^{n_\eta})\,|\varphi_b(b_1^{n_\eta})| - H_\eta\right) = \frac{1}{n}\sum_{\eta \in \Lambda}\sum_{b_1^{n_\eta} \in B^{n_\eta}} p(b_1^{n_\eta})\,|\varphi_b(b_1^{n_\eta})| - \bar{n}\sum_{\eta \in \Lambda} \pi_\eta H_\eta.$$

The first term in the last equality is the average code length, and the second defines the entropy of the input source.

Corollary 1. If $\rho_\eta = \rho$ for all $\eta \in \Lambda$, then $R = \bar{n}\rho$.

Corollary 2. If $r$-bit registers are used to represent the probabilities and an arithmetic coder with registers of size $g \ge r + 2$ is used to code the binary symbols at the nodes, then

$$R < 2\bar{n}(r + \log_2 e)\,2^{-(g-2)} + 2/n. \qquad (3.6)$$

Proof. It was proven in [46] that the per-symbol redundancy of an $m$-ary arithmetic coder is upper bounded by

$$\rho < m(r + \log_2 e)\,2^{-(g-2)}. \qquad (3.7)$$

For binary coding $m = 2$, and two additional bits are required to terminate the message. This yields (3.6).

Corollary 3. If the decomposition tree is a Huffman tree, $r$-bit registers are used to represent the probabilities, and a binary arithmetic coder with registers of size $g \ge r + 2$ is used to code at the nodes, then

$$R < 2(H + 1)(r + \log_2 e)\,2^{-(g-2)} + 2/n. \qquad (3.8)$$

Proof. The proof follows immediately from the fact that for the Huffman tree

$$\bar{n} < H + 1. \qquad (3.9)$$

This corollary is essential for coding low-entropy sources ($H < 1$) with large alphabets ($m \gg 2$). In this case, the use of a Huffman tree results in a redundancy of order $O(H + 1)$, independent of the alphabet size, whereas for $m$-ary arithmetic coding the redundancy is of order $O(m)$, see (3.7). Finally, it is always possible to choose a tree such that $\bar{n} \le \lceil\log_2 m\rceil$.

Corollary 4. If the decomposition tree is such that $\bar{n} \le \lceil\log_2 m\rceil$, $r$-bit registers are used to represent the probabilities, and a binary arithmetic coder with registers of size $g \ge r + 2$ is used to code at the nodes, then

$$R < 2(\log_2 m + 1)(r + \log_2 e)\,2^{-(g-2)} + 2/n.$$

Thus, for sources with an alphabet of size $m > 4$, the upper bound on the coding redundancy of binary decomposition combined with binary arithmetic coding is lower than that of $m$-ary arithmetic coding (see (3.7)).
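To see what Corollaries 3 and 4 buy for low-entropy, large-alphabet sources, here is a quick numeric comparison (ours; the $2/n$ termination term is ignored) of the $m$-ary per-symbol bound (3.7) against the binarized bound (3.8):

```python
import math

r, g = 16, 32              # probability precision and register size (Theorem 2)
m, H = 256, 0.5            # a low-entropy source with a large alphabet
c = (r + math.log2(math.e)) * 2 ** -(g - 2)

rho_mary = m * c           # per-symbol bound (3.7) for m-ary arithmetic coding
R_bin = 2 * (H + 1) * c    # bound (3.8) for a Huffman decomposition tree
print(rho_mary / R_bin)    # ratio m / (2(H+1)) ~ 85: binarization wins here
```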

3.4 Binary decomposition of FSM sources

Alphabet binarization can be used for coding FSM sources as well. Let an FSM source be defined by the alphabet $A$, a set of states $S = \{s\}$, the initial state $s_1$, and a state transition rule. In this case, each state is assigned a decomposition tree $\tau_s \in T_m$ with the corresponding set of states (decision nodes) $\Lambda_s = \{\eta_s\}$. In general, the decomposition trees $\tau_s$ can be different (and may also be optimized) for each state. At each state the source generates a binary string of decisions corresponding to a source symbol. The resulting binary sequence can be modeled as a double FSM source whose set of states is the product of the two sets: $S' = \{(\eta_s, s), \eta_s \in \Lambda_s, s \in S\}$.

3.5 Binary decomposition and universal coding

Consider a memoryless $m$-ary source. If the source parameters $P(A) = \{p(a), a \in A\}$ are not known, then the subsequences of binary decisions can be coded using the universal coding approach. In this case, the decision probabilities can be estimated according to (2.28), where $s$ corresponds to a node $\eta$. The use of the binary decomposition technique does not increase the stochastic complexity of the source model, since the number of parameters remains the same: an $m$-ary memoryless source and its binary counterpart both have $m - 1$ free parameters (in the latter case this is the number of internal nodes of the decomposition tree). This applies to FSM sources as well, where the number of parameters is $(m-1)|S|$.

Thus, binarization of source symbols at least does not make a compression algorithm worse in terms of the universal coding redundancy (2.25) or (2.26) incurred due to the lack of knowledge of the parameters. Asymptotically this redundancy is the same. Yet, for finite-length messages, the redundancy may be even smaller, resulting in better compression performance. We show this in the following example.

Let us assume a memoryless $m$-ary source, and let the output message $u_1^n$ of length $n > 0$ be coded by an ideal $m$-ary arithmetic coder using the adaptive coding probability (2.29) with the parameter vector $\alpha_a = \frac{1}{2}, a \in A$. Then the code length is

$$|\varphi(u_1^n)| = -\log_2\left(\frac{\Gamma\!\left(\frac{m}{2}\right)}{\pi^{m/2}\,\Gamma\!\left(n + \frac{m}{2}\right)}\prod_{a \in A}\Gamma\!\left(\vartheta(a|u_1^n) + \tfrac{1}{2}\right)\right) = n\hat{H} + \frac{m-1}{2}\log_2 n + O_1(1), \qquad (3.10)$$

where $\Gamma(\cdot)$ is the Gamma function, $\vartheta(a|u_1^n)$ denotes the symbol counts, and

$$\hat{H} = -\sum_{a \in A}\frac{\vartheta(a|u_1^n)}{n}\log_2\frac{\vartheta(a|u_1^n)}{n} \qquad (3.11)$$

is the empirical entropy of the message.

Let the same message be coded using the binary decomposition technique combined with an ideal binary arithmetic coder. Then the code length is the sum of the code lengths of the binary sub-strings corresponding to the decomposition tree nodes:

$$|\varphi(u_1^n)| = \sum_{\eta \in \Lambda}|\varphi(b_1^{n_\eta}(\eta))| = \sum_{\eta \in \Lambda}\left(n_\eta\hat{H}_\eta + \frac{1}{2}\log_2 n_\eta + O_\eta(1)\right). \qquad (3.12)$$

Thus

$$|\varphi(u_1^n)| = \sum_{\eta \in \Lambda} n_\eta\hat{H}_\eta + \frac{1}{2}\sum_{\eta \in \Lambda}\log_2 n_\eta + O_2(1), \qquad (3.13)$$

where $n_\eta$ denotes the lengths of the binary sub-strings,

$$\hat{H}_\eta = -\frac{\vartheta(0|b_1^{n_\eta})}{n_\eta}\log_2\frac{\vartheta(0|b_1^{n_\eta})}{n_\eta} - \frac{\vartheta(1|b_1^{n_\eta})}{n_\eta}\log_2\frac{\vartheta(1|b_1^{n_\eta})}{n_\eta} \qquad (3.14)$$

is the empirical entropy of the binary sub-strings, and

$$O_2(1) = \sum_{\eta \in \Lambda} O_\eta(1). \qquad (3.15)$$

It can be shown that

$$n\hat{H} = \sum_{\eta \in \Lambda} n_\eta\hat{H}_\eta. \qquad (3.16)$$
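The identity (3.16), i.e., that binarization preserves the empirical entropy, can be verified numerically; a sketch (ours, not from the thesis) using the fixed decomposition tree a -> 0, b -> 10, c -> 11:

```python
import math
from collections import Counter

def emp_entropy_bits(counts):
    """n times the empirical entropy: -sum_a count(a) log2(count(a)/n)."""
    n = sum(counts.values())
    return -sum(c * math.log2(c / n) for c in counts.values() if c > 0)

msg = "abacabcaab"

# Left side of (3.16): n * H-hat over the m-ary message, cf. (3.11).
lhs = emp_entropy_bits(Counter(msg))

# Right side: sum over internal nodes of n_eta * H-hat_eta, cf. (3.14).
# The root node decides a vs {b, c}; the inner node decides b vs c.
root = Counter('0' if u == 'a' else '1' for u in msg)
inner = Counter('0' if u == 'b' else '1' for u in msg if u in 'bc')
rhs = emp_entropy_bits(root) + emp_entropy_bits(inner)
print(lhs, rhs)  # equal up to floating-point rounding
```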


More information

Fast Arithmetic Coding (FastAC) Implementations

Fast Arithmetic Coding (FastAC) Implementations Fast Arithmetic Coding (FastAC) Implementations Amir Said 1 Introduction This document describes our fast implementations of arithmetic coding, which achieve optimal compression and higher throughput by

More information

Video Encryption Exploiting Non-Standard 3D Data Arrangements. Stefan A. Kramatsch, Herbert Stögner, and Andreas Uhl uhl@cosy.sbg.ac.

Video Encryption Exploiting Non-Standard 3D Data Arrangements. Stefan A. Kramatsch, Herbert Stögner, and Andreas Uhl uhl@cosy.sbg.ac. Video Encryption Exploiting Non-Standard 3D Data Arrangements Stefan A. Kramatsch, Herbert Stögner, and Andreas Uhl uhl@cosy.sbg.ac.at Andreas Uhl 1 Carinthia Tech Institute & Salzburg University Outline

More information

Data Storage. Chapter 3. Objectives. 3-1 Data Types. Data Inside the Computer. After studying this chapter, students should be able to:

Data Storage. Chapter 3. Objectives. 3-1 Data Types. Data Inside the Computer. After studying this chapter, students should be able to: Chapter 3 Data Storage Objectives After studying this chapter, students should be able to: List five different data types used in a computer. Describe how integers are stored in a computer. Describe how

More information

The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

More information

Study and Implementation of Video Compression standards (H.264/AVC, Dirac)

Study and Implementation of Video Compression standards (H.264/AVC, Dirac) Study and Implementation of Video Compression standards (H.264/AVC, Dirac) EE 5359-Multimedia Processing- Spring 2012 Dr. K.R Rao By: Sumedha Phatak(1000731131) Objective A study, implementation and comparison

More information

Less naive Bayes spam detection

Less naive Bayes spam detection Less naive Bayes spam detection Hongming Yang Eindhoven University of Technology Dept. EE, Rm PT 3.27, P.O.Box 53, 5600MB Eindhoven The Netherlands. E-mail:h.m.yang@tue.nl also CoSiNe Connectivity Systems

More information

Figure 1: Relation between codec, data containers and compression algorithms.

Figure 1: Relation between codec, data containers and compression algorithms. Video Compression Djordje Mitrovic University of Edinburgh This document deals with the issues of video compression. The algorithm, which is used by the MPEG standards, will be elucidated upon in order

More information

Video compression: Performance of available codec software

Video compression: Performance of available codec software Video compression: Performance of available codec software Introduction. Digital Video A digital video is a collection of images presented sequentially to produce the effect of continuous motion. It takes

More information

Quality Estimation for Scalable Video Codec. Presented by Ann Ukhanova (DTU Fotonik, Denmark) Kashaf Mazhar (KTH, Sweden)

Quality Estimation for Scalable Video Codec. Presented by Ann Ukhanova (DTU Fotonik, Denmark) Kashaf Mazhar (KTH, Sweden) Quality Estimation for Scalable Video Codec Presented by Ann Ukhanova (DTU Fotonik, Denmark) Kashaf Mazhar (KTH, Sweden) Purpose of scalable video coding Multiple video streams are needed for heterogeneous

More information

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

Performance Analysis and Comparison of JM 15.1 and Intel IPP H.264 Encoder and Decoder

Performance Analysis and Comparison of JM 15.1 and Intel IPP H.264 Encoder and Decoder Performance Analysis and Comparison of 15.1 and H.264 Encoder and Decoder K.V.Suchethan Swaroop and K.R.Rao, IEEE Fellow Department of Electrical Engineering, University of Texas at Arlington Arlington,

More information

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

More information

Hybrid Compression of Medical Images Based on Huffman and LPC For Telemedicine Application

Hybrid Compression of Medical Images Based on Huffman and LPC For Telemedicine Application IJIRST International Journal for Innovative Research in Science & Technology Volume 1 Issue 6 November 2014 ISSN (online): 2349-6010 Hybrid Compression of Medical Images Based on Huffman and LPC For Telemedicine

More information

Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska

Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska PROBLEM STATEMENT A ROBUST COMPRESSION SYSTEM FOR LOW BIT RATE TELEMETRY - TEST RESULTS WITH LUNAR DATA Khalid Sayood and Martin C. Rost Department of Electrical Engineering University of Nebraska The

More information

A Secure File Transfer based on Discrete Wavelet Transformation and Audio Watermarking Techniques

A Secure File Transfer based on Discrete Wavelet Transformation and Audio Watermarking Techniques A Secure File Transfer based on Discrete Wavelet Transformation and Audio Watermarking Techniques Vineela Behara,Y Ramesh Department of Computer Science and Engineering Aditya institute of Technology and

More information

Development and Evaluation of Point Cloud Compression for the Point Cloud Library

Development and Evaluation of Point Cloud Compression for the Point Cloud Library Development and Evaluation of Point Cloud Compression for the Institute for Media Technology, TUM, Germany May 12, 2011 Motivation Point Cloud Stream Compression Network Point Cloud Stream Decompression

More information

Part II Redundant Dictionaries and Pursuit Algorithms

Part II Redundant Dictionaries and Pursuit Algorithms Aisenstadt Chair Course CRM September 2009 Part II Redundant Dictionaries and Pursuit Algorithms Stéphane Mallat Centre de Mathématiques Appliquées Ecole Polytechnique Sparsity in Redundant Dictionaries

More information

A NEW LOSSLESS METHOD OF IMAGE COMPRESSION AND DECOMPRESSION USING HUFFMAN CODING TECHNIQUES

A NEW LOSSLESS METHOD OF IMAGE COMPRESSION AND DECOMPRESSION USING HUFFMAN CODING TECHNIQUES A NEW LOSSLESS METHOD OF IMAGE COMPRESSION AND DECOMPRESSION USING HUFFMAN CODING TECHNIQUES 1 JAGADISH H. PUJAR, 2 LOHIT M. KADLASKAR 1 Faculty, Department of EEE, B V B College of Engg. & Tech., Hubli,

More information

Image Compression and Decompression using Adaptive Interpolation

Image Compression and Decompression using Adaptive Interpolation Image Compression and Decompression using Adaptive Interpolation SUNILBHOOSHAN 1,SHIPRASHARMA 2 Jaypee University of Information Technology 1 Electronicsand Communication EngineeringDepartment 2 ComputerScience

More information

Colour Image Encryption and Decryption by using Scan Approach

Colour Image Encryption and Decryption by using Scan Approach Colour Image Encryption and Decryption by using Scan Approach, Rinkee Gupta,Master of Engineering Scholar, Email: guptarinki.14@gmail.com Jaipal Bisht, Asst. Professor Radharaman Institute Of Technology

More information

Performance Analysis of medical Image Using Fractal Image Compression

Performance Analysis of medical Image Using Fractal Image Compression Performance Analysis of medical Image Using Fractal Image Compression Akhil Singal 1, Rajni 2 1 M.Tech Scholar, ECE, D.C.R.U.S.T, Murthal, Sonepat, Haryana, India 2 Assistant Professor, ECE, D.C.R.U.S.T,

More information

Data Storage 3.1. Foundations of Computer Science Cengage Learning

Data Storage 3.1. Foundations of Computer Science Cengage Learning 3 Data Storage 3.1 Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: List five different data types used in a computer. Describe how

More information

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29. Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet

More information

Load Balancing and Switch Scheduling

Load Balancing and Switch Scheduling EE384Y Project Final Report Load Balancing and Switch Scheduling Xiangheng Liu Department of Electrical Engineering Stanford University, Stanford CA 94305 Email: liuxh@systems.stanford.edu Abstract Load

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation

A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation S.VENKATA RAMANA ¹, S. NARAYANA REDDY ² M.Tech student, Department of ECE, SVU college of Engineering, Tirupati, 517502,

More information

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV-5/W10

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV-5/W10 Accurate 3D information extraction from large-scale data compressed image and the study of the optimum stereo imaging method Riichi NAGURA *, * Kanagawa Institute of Technology nagura@ele.kanagawa-it.ac.jp

More information

Comparison of different image compression formats. ECE 533 Project Report Paula Aguilera

Comparison of different image compression formats. ECE 533 Project Report Paula Aguilera Comparison of different image compression formats ECE 533 Project Report Paula Aguilera Introduction: Images are very important documents nowadays; to work with them in some applications they need to be

More information

A Learning Based Method for Super-Resolution of Low Resolution Images

A Learning Based Method for Super-Resolution of Low Resolution Images A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 emre.ugur@ceng.metu.edu.tr Abstract The main objective of this project is the study of a learning based method

More information

2695 P a g e. IV Semester M.Tech (DCN) SJCIT Chickballapur Karnataka India

2695 P a g e. IV Semester M.Tech (DCN) SJCIT Chickballapur Karnataka India Integrity Preservation and Privacy Protection for Digital Medical Images M.Krishna Rani Dr.S.Bhargavi IV Semester M.Tech (DCN) SJCIT Chickballapur Karnataka India Abstract- In medical treatments, the integrity

More information

Bandwidth Adaptation for MPEG-4 Video Streaming over the Internet

Bandwidth Adaptation for MPEG-4 Video Streaming over the Internet DICTA2002: Digital Image Computing Techniques and Applications, 21--22 January 2002, Melbourne, Australia Bandwidth Adaptation for MPEG-4 Video Streaming over the Internet K. Ramkishor James. P. Mammen

More information

Polarization codes and the rate of polarization

Polarization codes and the rate of polarization Polarization codes and the rate of polarization Erdal Arıkan, Emre Telatar Bilkent U., EPFL Sept 10, 2008 Channel Polarization Given a binary input DMC W, i.i.d. uniformly distributed inputs (X 1,...,

More information

SPEECH SIGNAL CODING FOR VOIP APPLICATIONS USING WAVELET PACKET TRANSFORM A

SPEECH SIGNAL CODING FOR VOIP APPLICATIONS USING WAVELET PACKET TRANSFORM A International Journal of Science, Engineering and Technology Research (IJSETR), Volume, Issue, January SPEECH SIGNAL CODING FOR VOIP APPLICATIONS USING WAVELET PACKET TRANSFORM A N.Rama Tej Nehru, B P.Sunitha

More information

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM Rohan Ashok Mandhare 1, Pragati Upadhyay 2,Sudha Gupta 3 ME Student, K.J.SOMIYA College of Engineering, Vidyavihar, Mumbai, Maharashtra,

More information

FUNDAMENTALS of INFORMATION THEORY and CODING DESIGN

FUNDAMENTALS of INFORMATION THEORY and CODING DESIGN DISCRETE "ICS AND ITS APPLICATIONS Series Editor KENNETH H. ROSEN FUNDAMENTALS of INFORMATION THEORY and CODING DESIGN Roberto Togneri Christopher J.S. desilva CHAPMAN & HALL/CRC A CRC Press Company Boca

More information

An Efficient Architecture for Image Compression and Lightweight Encryption using Parameterized DWT

An Efficient Architecture for Image Compression and Lightweight Encryption using Parameterized DWT An Efficient Architecture for Image Compression and Lightweight Encryption using Parameterized DWT Babu M., Mukuntharaj C., Saranya S. Abstract Discrete Wavelet Transform (DWT) based architecture serves

More information

Lecture 10: Regression Trees

Lecture 10: Regression Trees Lecture 10: Regression Trees 36-350: Data Mining October 11, 2006 Reading: Textbook, sections 5.2 and 10.5. The next three lectures are going to be about a particular kind of nonlinear predictive model,

More information

ANALYSIS OF THE EFFECTIVENESS IN IMAGE COMPRESSION FOR CLOUD STORAGE FOR VARIOUS IMAGE FORMATS

ANALYSIS OF THE EFFECTIVENESS IN IMAGE COMPRESSION FOR CLOUD STORAGE FOR VARIOUS IMAGE FORMATS ANALYSIS OF THE EFFECTIVENESS IN IMAGE COMPRESSION FOR CLOUD STORAGE FOR VARIOUS IMAGE FORMATS Dasaradha Ramaiah K. 1 and T. Venugopal 2 1 IT Department, BVRIT, Hyderabad, India 2 CSE Department, JNTUH,

More information

How To Improve Performance Of The H264 Video Codec On A Video Card With A Motion Estimation Algorithm

How To Improve Performance Of The H264 Video Codec On A Video Card With A Motion Estimation Algorithm Implementation of H.264 Video Codec for Block Matching Algorithms Vivek Sinha 1, Dr. K. S. Geetha 2 1 Student of Master of Technology, Communication Systems, Department of ECE, R.V. College of Engineering,

More information

Analysis of an Artificial Hormone System (Extended abstract)

Analysis of an Artificial Hormone System (Extended abstract) c 2013. This is the author s version of the work. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purpose or for creating

More information

International Journal of Advanced Research in Computer Science and Software Engineering

International Journal of Advanced Research in Computer Science and Software Engineering Volume 3, Issue 7, July 23 ISSN: 2277 28X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Greedy Algorithm:

More information

Analysis of Algorithms I: Optimal Binary Search Trees

Analysis of Algorithms I: Optimal Binary Search Trees Analysis of Algorithms I: Optimal Binary Search Trees Xi Chen Columbia University Given a set of n keys K = {k 1,..., k n } in sorted order: k 1 < k 2 < < k n we wish to build an optimal binary search

More information

COMPRESSION OF 3D MEDICAL IMAGE USING EDGE PRESERVATION TECHNIQUE

COMPRESSION OF 3D MEDICAL IMAGE USING EDGE PRESERVATION TECHNIQUE International Journal of Electronics and Computer Science Engineering 802 Available Online at www.ijecse.org ISSN: 2277-1956 COMPRESSION OF 3D MEDICAL IMAGE USING EDGE PRESERVATION TECHNIQUE Alagendran.B

More information

Simultaneous Gamma Correction and Registration in the Frequency Domain

Simultaneous Gamma Correction and Registration in the Frequency Domain Simultaneous Gamma Correction and Registration in the Frequency Domain Alexander Wong a28wong@uwaterloo.ca William Bishop wdbishop@uwaterloo.ca Department of Electrical and Computer Engineering University

More information

Linear Codes. Chapter 3. 3.1 Basics

Linear Codes. Chapter 3. 3.1 Basics Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

More information

FFT Algorithms. Chapter 6. Contents 6.1

FFT Algorithms. Chapter 6. Contents 6.1 Chapter 6 FFT Algorithms Contents Efficient computation of the DFT............................................ 6.2 Applications of FFT................................................... 6.6 Computing DFT

More information

Computer Networks and Internets, 5e Chapter 6 Information Sources and Signals. Introduction

Computer Networks and Internets, 5e Chapter 6 Information Sources and Signals. Introduction Computer Networks and Internets, 5e Chapter 6 Information Sources and Signals Modified from the lecture slides of Lami Kaya (LKaya@ieee.org) for use CECS 474, Fall 2008. 2009 Pearson Education Inc., Upper

More information

Message-passing sequential detection of multiple change points in networks

Message-passing sequential detection of multiple change points in networks Message-passing sequential detection of multiple change points in networks Long Nguyen, Arash Amini Ram Rajagopal University of Michigan Stanford University ISIT, Boston, July 2012 Nguyen/Amini/Rajagopal

More information

Quality Optimal Policy for H.264 Scalable Video Scheduling in Broadband Multimedia Wireless Networks

Quality Optimal Policy for H.264 Scalable Video Scheduling in Broadband Multimedia Wireless Networks Quality Optimal Policy for H.264 Scalable Video Scheduling in Broadband Multimedia Wireless Networks Vamseedhar R. Reddyvari Electrical Engineering Indian Institute of Technology Kanpur Email: vamsee@iitk.ac.in

More information

Branch-and-Price Approach to the Vehicle Routing Problem with Time Windows

Branch-and-Price Approach to the Vehicle Routing Problem with Time Windows TECHNISCHE UNIVERSITEIT EINDHOVEN Branch-and-Price Approach to the Vehicle Routing Problem with Time Windows Lloyd A. Fasting May 2014 Supervisors: dr. M. Firat dr.ir. M.A.A. Boon J. van Twist MSc. Contents

More information

MPEG Unified Speech and Audio Coding Enabling Efficient Coding of both Speech and Music

MPEG Unified Speech and Audio Coding Enabling Efficient Coding of both Speech and Music ISO/IEC MPEG USAC Unified Speech and Audio Coding MPEG Unified Speech and Audio Coding Enabling Efficient Coding of both Speech and Music The standardization of MPEG USAC in ISO/IEC is now in its final

More information

Arithmetic Coding: Introduction

Arithmetic Coding: Introduction Data Compression Arithmetic coding Arithmetic Coding: Introduction Allows using fractional parts of bits!! Used in PPM, JPEG/MPEG (as option), Bzip More time costly than Huffman, but integer implementation

More information

Wavelet-based medical image compression

Wavelet-based medical image compression Future Generation Computer Systems 15 (1999) 223 243 Wavelet-based medical image compression Eleftherios Kofidis 1,, Nicholas Kolokotronis, Aliki Vassilarakou, Sergios Theodoridis, Dionisis Cavouras Department

More information

Analysis of Load Frequency Control Performance Assessment Criteria

Analysis of Load Frequency Control Performance Assessment Criteria 520 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 16, NO. 3, AUGUST 2001 Analysis of Load Frequency Control Performance Assessment Criteria George Gross, Fellow, IEEE and Jeong Woo Lee Abstract This paper presents

More information

Friendly Medical Image Sharing Scheme

Friendly Medical Image Sharing Scheme Journal of Information Hiding and Multimedia Signal Processing 2014 ISSN 2073-4212 Ubiquitous International Volume 5, Number 3, July 2014 Frily Medical Image Sharing Scheme Hao-Kuan Tso Department of Computer

More information

WATERMARKING FOR IMAGE AUTHENTICATION

WATERMARKING FOR IMAGE AUTHENTICATION WATERMARKING FOR IMAGE AUTHENTICATION Min Wu Bede Liu Department of Electrical Engineering Princeton University, Princeton, NJ 08544, USA Fax: +1-609-258-3745 {minwu, liu}@ee.princeton.edu ABSTRACT A data

More information

AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGORITHM

AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGORITHM International Journal of Computer Engineering & Technology (IJCET) Volume 7, Issue 1, Jan-Feb 2016, pp. 09-17, Article ID: IJCET_07_01_002 Available online at http://www.iaeme.com/ijcet/issues.asp?jtype=ijcet&vtype=7&itype=1

More information

Statistical Machine Learning

Statistical Machine Learning Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes

More information

Solution of Linear Systems

Solution of Linear Systems Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

More information

Information Theoretic Analysis of Proactive Routing Overhead in Mobile Ad Hoc Networks

Information Theoretic Analysis of Proactive Routing Overhead in Mobile Ad Hoc Networks Information Theoretic Analysis of Proactive Routing Overhead in obile Ad Hoc Networks Nianjun Zhou and Alhussein A. Abouzeid 1 Abstract This paper considers basic bounds on the overhead of link-state protocols

More information

RN-coding of Numbers: New Insights and Some Applications

RN-coding of Numbers: New Insights and Some Applications RN-coding of Numbers: New Insights and Some Applications Peter Kornerup Dept. of Mathematics and Computer Science SDU, Odense, Denmark & Jean-Michel Muller LIP/Arénaire (CRNS-ENS Lyon-INRIA-UCBL) Lyon,

More information

I. INTRODUCTION. of the biometric measurements is stored in the database

I. INTRODUCTION. of the biometric measurements is stored in the database 122 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL 6, NO 1, MARCH 2011 Privacy Security Trade-Offs in Biometric Security Systems Part I: Single Use Case Lifeng Lai, Member, IEEE, Siu-Wai

More information

2.3 Convex Constrained Optimization Problems

2.3 Convex Constrained Optimization Problems 42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

More information

Computing Cubic Fields in Quasi-Linear Time

Computing Cubic Fields in Quasi-Linear Time Computing Cubic Fields in Quasi-Linear Time K. Belabas Département de mathématiques (A2X) Université Bordeaux I 351, cours de la Libération, 33405 Talence (France) belabas@math.u-bordeaux.fr Cubic fields

More information

How To Code With Cbcc (Cbcc) In Video Coding

How To Code With Cbcc (Cbcc) In Video Coding 620 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 13, NO. 7, JULY 2003 Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard Detlev Marpe, Member,

More information

White paper. H.264 video compression standard. New possibilities within video surveillance.

White paper. H.264 video compression standard. New possibilities within video surveillance. White paper H.264 video compression standard. New possibilities within video surveillance. Table of contents 1. Introduction 3 2. Development of H.264 3 3. How video compression works 4 4. H.264 profiles

More information