Experiment 7
IMAGE COMPRESSION

I Introduction

A digital image obtained by sampling and quantizing a continuous-tone picture requires enormous storage. For instance, a 24-bit color image with 512x512 pixels occupies 768 Kbyte on disk (512 x 512 x 3 bytes = 786,432 bytes), and a picture twice this size will not fit on a single floppy disk. Transmitting such an image over a 28.8 Kbps modem would take almost 4 minutes (786,432 x 8 bits / 28,800 bits/s is about 218 s). The purpose of image compression is to reduce the amount of data required to represent sampled digital images, and thereby reduce the cost of storage and transmission. Image compression plays a key role in many important applications, including image databases, image communications, remote sensing (the use of satellite imagery for weather and other earth-resource applications), document and medical imaging, facsimile transmission (FAX), and the control of remotely piloted vehicles in military, space, and hazardous-waste-control applications. In short, an ever-expanding number of applications depend on the efficient manipulation, storage, and transmission of binary, gray-scale, and color images.

An important development in image compression is the establishment of the JPEG standard for compression of color pictures. Using the JPEG method, a 24 bit/pixel color image can be reduced to between 1 and 2 bits/pixel without obvious visual artifacts. Such a reduction makes it possible to store and transmit digital imagery at reasonable cost. It also makes it possible to download a color photograph almost in an instant, making electronic publishing/advertising on the Web a reality. Before JPEG, the G3 and G4 standards were developed for compression of facsimile documents, reducing the time for transmitting one page of text from about 6 minutes to 1 minute.

In this experiment, we introduce the basics of image compression, covering both binary images and continuous-tone images (gray-scale and color). Video compression will be covered in the next experiment.

II Theories and Techniques for Image Compression

In general, coding methods can be classified as lossless or lossy. With lossless coding, the original sample values are retained exactly, and compression is achieved by exploiting the statistical redundancies in the signal. With lossy coding, the original signal is altered to some extent to achieve a higher compression ratio.

II.1 Lossless Coding

II.1.1 Variable Length Coding [1, Chapter 6.4]

In variable length coding (VLC), the more probable symbols are represented with fewer bits (shorter codewords). Shannon's first theorem [3] states that the average codeword length per symbol, \bar{l}, is bounded by the entropy of the source, H:

    H = -\sum_n p_n \log_2 p_n \;\le\; \bar{l} = \sum_n p_n l_n \;\le\; \sum_n p_n (-\log_2 p_n + 1) = H + 1    (10.1)

where p_n is the probability of the n-th symbol, H is the entropy of the source, which represents the average information per symbol, l_n is the length of the codeword for symbol n, and \bar{l} is the average codeword length.

II.1.2 Huffman Coding

The Shannon theorem only gives the bound, not an actual way of constructing a code that achieves it. One way to accomplish the latter task is a method known as Huffman coding.

Example: Consider an image that is quantized to 4 levels: 0, 1, 2, and 3. Suppose the probabilities of these levels are 1/49, 4/49, 36/49, and 8/49, respectively. The design of a Huffman code for this source is illustrated in Figure 1.

    Symbol   Prob    Codeword   Length
    2        36/49   1          1
    3        8/49    01         2
    1        4/49    001        3
    0        1/49    000        3

    (merging order: 1/49 + 4/49 = 5/49; 5/49 + 8/49 = 13/49; 13/49 + 36/49 = 1)

Figure 1 An example of Huffman coding

In this example, we have

    Average length: \bar{l} = (36/49)(1) + (8/49)(2) + (4/49)(3) + (1/49)(3) = 67/49 \approx 1.37
    Entropy of the source: H = -\sum_k p_k \log_2 p_k \approx 1.16

so that H < \bar{l} < H + 1, as required by (10.1).
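These numbers are easy to check numerically. A minimal MATLAB sketch, using the probabilities and codeword lengths from Figure 1:

    p   = [1/49 4/49 36/49 8/49];   % probabilities of symbols 0, 1, 2, 3
    len = [3 3 1 2];                % lengths of codewords 000, 001, 1, 01
    H    = -sum(p .* log2(p));      % entropy: about 1.16 bits/symbol
    lbar = sum(p .* len);           % average length: 67/49, about 1.37 bits/symbol
    fprintf('H = %.2f, lbar = %.2f\n', H, lbar);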

II.1.3 Other Variable Length Coding Methods

LZW coding (Lempel-Ziv-Welch) [2] is the algorithm used in several public domain programs for lossless data compression, such as gzip (UNIX) and pkzip (DOS). One of the most popular graphics file formats, GIF, also incorporates the LZW coding scheme. Another method, known as arithmetic coding [2], is more powerful than both Huffman coding and LZW coding, but it also requires more computation.

II.1.4 Runlength Coding (RLC) of Bilevel Images [1, Chapter 6.6]

In one-dimensional runlength coding of bilevel images, one scans the pixels from left to right along each scan line. Assuming that a line always starts and ends with white pixels, one counts the number of consecutive white pixels and of consecutive black pixels alternately; each such count is called a runlength. The last run of white pixels is replaced with a special symbol, EOL (end of line). The runlengths of white and black pixels are coded using separate codebooks. The codebook, say for the white runlengths, is designed with the Huffman coding method by treating each possible runlength (including EOL) as a symbol. An example of runlength coding is illustrated in Fig. 2.

Fig. 2 An example of runlength coding of a bilevel image: each scan line is converted into its alternating white/black runlengths
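A minimal MATLAB sketch of the runlength extraction step for one scan line (0 = white, 1 = black). The Huffman coding of the runlengths is omitted, and the EOL symbol is represented by the marker value -1, following the convention used in the experiment of Sec. III:

    function runs = rle_line(line)
    % RLE_LINE  1-D runlengths of one bilevel scan line (0 = white, 1 = black).
    % Runs alternate white/black starting with a white run; the final run of
    % white pixels is replaced by the EOL marker -1.
    runs = [];
    cur = 0;                      % lines are assumed to start with white
    count = 0;
    for px = line(:)'
        if px == cur
            count = count + 1;
        else
            runs(end+1) = count;  %#ok<AGROW>  current run ends here
            cur = 1 - cur;
            count = 1;
        end
    end
    runs(end+1) = -1;             % final white run replaced by EOL

For example, rle_line([0 1 1 0]) returns [1 2 -1], and an all-white line returns just -1, matching the coded output given for the sample image in Sec. III.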

II.1.5 Two Dimensional Runlength Coding [1, Chapter 6.6]

One-dimensional runlength coding exploits only the correlation among pixels in the same line. In two-dimensional runlength coding, or relative address coding, the correlation among pixels in the current line as well as the previous line is exploited. With this method, when a transition in color occurs, the distances from this pixel to the nearest transition pixels (both before and after it) in the previous line, and to the last transition pixel in the same line, are calculated, and the shortest of these distances is coded, along with an index indicating which type of distance it is. See Fig. 6.17 in [1].

II.1.6 CCITT Group 3 and Group 4 Facsimile Coding Standards - The READ Code [1, Chapter 6.6]

In the Group 3 method, the first line in every K lines is coded using 1-D runlength coding, and the following (K-1) lines are coded using a 2-D runlength coding method known as Relative Element Address Designate (READ). For details of this method and the actual code tables, see [1], Sec. 6.6.1. The reason that 1-D RLC is used for one line in every K is to suppress the propagation of transmission errors: if the READ method were used continuously, a single bit error occurring during transmission would affect the entire page. The Group 4 method is designed for more reliable transmission media, such as leased data lines where the bit error rate is very low. The algorithm is essentially a streamlined version of the Group 3 method, with the 1-D RLC eliminated.

II.1.7 Lossless Predictive Coding

Motivation: the value of a pixel usually does not differ much from those of adjacent pixels, so it can be predicted quite accurately from the previous samples. The prediction error has a non-uniform distribution, concentrated near zero, and hence a lower entropy than the original samples, which usually have a near-uniform distribution. For details, see [2], Sec. 9.4. With entropy coding (e.g., Huffman coding), the error values can therefore be specified with fewer bits than would be required for the original sample values.

II.2 Transform Coding (Lossy Coding) [1, Chapter 6.5]

Lossless coding can achieve a compression ratio of about 2 to 3 for most images. To reduce the data amount further, lossy coding methods apply quantization, either to the original samples or to the parameters of some transformation of the original signal (e.g., a prediction residual or transform coefficients). The transformation exploits the statistical correlation among the original samples; popular choices are linear prediction and unitary transforms. We discussed linear predictive coding and its application in speech and audio coding in the previous experiment, where you also experimented with uniform and non-uniform quantization. In this section, we focus on transform coding, which is more effective for images.

One of the most popular lossy coding schemes for images is transform coding. In block-based transform coding, one divides an image into non-overlapping blocks. For each block, the original pixel values are first transformed into a set of transform coefficients using a unitary transform. The coefficients are then quantized and coded. In the decoder, the original block is reconstructed from the quantized coefficients through an inverse transform. The transform is designed to compact the energy of the original signal into only a few coefficients, and to reduce the correlation among the variables to be coded. Both contribute to the reduction of the bit rate.
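Before turning to the DCT, the entropy reduction promised by lossless predictive coding (Sec. II.1.7) is easy to observe. Below is a minimal MATLAB sketch using a first-order horizontal predictor and the empirical histogram entropy; it assumes a recent MATLAB (R2016b or later, for the local helper function) and the cameraman.tif image that ships with the Image Processing Toolbox:

    x = double(imread('cameraman.tif'));   % any 8-bit grayscale image
    e = x - [x(:,1), x(:,1:end-1)];        % predict each pixel by its left
                                           % neighbor (first column by itself)
    fprintf('H(original) = %.2f bits, H(prediction error) = %.2f bits\n', ...
            emp_entropy(x(:)), emp_entropy(e(:)));

    function H = emp_entropy(v)
    % EMP_ENTROPY  Empirical first-order entropy of the samples in v.
    [~, ~, idx] = unique(v);
    p = accumarray(idx, 1) / numel(v);
    H = -sum(p .* log2(p));
    end

The error image has most of its mass near zero, so its empirical entropy is typically several bits per pixel lower than that of the original samples.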

II.2.1 The Discrete Cosine Transform (DCT)

The DCT is popular for image signals because it matches the statistics of common images well. The basis vectors of the one-dimensional N-point DCT are defined by

    h_k(n) = \alpha(k) \cos\left( \frac{(2n+1)k\pi}{2N} \right), \quad \alpha(k) = \begin{cases} \sqrt{1/N}, & k = 0 \\ \sqrt{2/N}, & k = 1, 2, \ldots, N-1. \end{cases}

The forward and inverse transforms are described by

    t(k) = \sum_{n=0}^{N-1} h_k(n) f(n) = \alpha(k) \sum_{n=0}^{N-1} f(n) \cos\left( \frac{(2n+1)k\pi}{2N} \right),

    f(n) = \sum_{k=0}^{N-1} h_k(n) t(k) = \sum_{k=0}^{N-1} \alpha(k) t(k) \cos\left( \frac{(2n+1)k\pi}{2N} \right).

Note that the basis vectors vary in a sinusoidal pattern with increasing frequency, and that the N-point DCT is related to the 2N-point DFT but is not simply the real part of it. Each DCT coefficient specifies the contribution of a sinusoidal pattern at a particular frequency to the actual signal. The lowest coefficient, known as the DC coefficient, represents the average value of the signal. The other coefficients, known as AC coefficients, are associated with increasingly higher frequencies.

To obtain the 2-D DCT of an image block, one can first apply the above 1-D DCT to each row of the block, and then apply the 1-D DCT to each column of the row-transformed block. A 2-D transform is equivalent to representing an image block as a superposition of many basic block patterns. The basic block patterns corresponding to the 8x8 DCT are shown in Fig. 3. The figure was generated using the following Matlab script:

    T = dctmtx(8);                 % 8-point 1-D DCT matrix; rows are basis vectors
    figure; colormap(gray(256));
    n = 1;
    for i = 1:8
        for j = 1:8
            % Basis image (i,j) is the outer product of basis vectors i and j.
            subplot(8,8,n), imagesc(T(i,:)' * T(j,:));
            axis off; axis image;
            n = n + 1;
        end
    end

Figure 3 Two-dimensional DCT basis images (low-low frequency patterns at the top left, high-high at the bottom right)
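The basis definition can be checked numerically. A short sketch that builds the N-point DCT matrix directly from the formula above and verifies that it is unitary (its transpose also matches dctmtx(N) when the Image Processing Toolbox is available):

    N = 8;
    [n, k] = ndgrid(0:N-1, 0:N-1);       % sample index n, frequency index k
    alpha = [sqrt(1/N), sqrt(2/N) * ones(1, N-1)];
    H = repmat(alpha, N, 1) .* cos((2*n + 1) .* k * pi / (2*N));
    % Column k of H is the basis vector h_k(n); unitarity means H'*H = I.
    disp(norm(H' * H - eye(N)));         % prints a value near zero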

II.2.2 Representation of Image Blocks Using the DCT

The reason the DCT is well suited for image compression is that an image block can often be represented with a few low-frequency DCT coefficients. This is because the intensity values in an image usually vary smoothly, and very high frequency components exist only near edges. Fig. 4 shows the distribution of the DCT coefficient variances (i.e., the energy) as the frequency index increases. Fig. 5 shows the approximation of an image using different numbers of DCT coefficients. With only 16 out of 64 coefficients, we can already represent the original block quite well. You can experiment with the approximation accuracy for different numbers of retained DCT coefficients using the Matlab demo program dctdemo.

Figure 4 DCT coefficient variance versus coefficient index (zig-zag order): the variance decreases as the frequency index increases

Figure 5 Images reconstructed with different numbers of DCT coefficients: the original, and reconstructions using 16/64, 8/64, and 4/64 coefficients
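The reconstructions of Fig. 5 can be reproduced for a single block in a few lines. A sketch, assuming dct2 and idct2 from the Image Processing Toolbox and a hypothetical 8x8 image block B; the zig-zag order is generated by sorting the coefficient positions along anti-diagonals:

    N = 8; L = 16;                      % keep the first L coefficients
    [i, j] = ndgrid(0:N-1, 0:N-1);
    d = i + j;                          % anti-diagonal index of each position
    % Within a diagonal the zig-zag alternates direction: odd diagonals are
    % traversed with increasing i, even diagonals with increasing j.
    key = d*N + mod(d, 2).*i + mod(d+1, 2).*j;
    [~, order] = sort(key(:));          % linear indices in zig-zag sequence
    mask = zeros(N);
    mask(order(1:L)) = 1;               % 1 marks a retained coefficient
    Bhat = idct2(dct2(B) .* mask);      % approximation of B from L coefficients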

II.2.3 JPEG Standard for Still Image Compression

The JPEG standard refers to the international standard for still image compression recommended by the Joint Photographic Experts Group (JPEG) of the ISO (International Organization for Standardization) [1, Chap. 6]. It consists of three parts: i) a baseline DCT-based lossy compression method for standard resolution and precision (8 bits/color/pixel) images; ii) an extended coding method for higher resolution/precision images, which uses the baseline algorithm in a progressive manner; and iii) a lossless predictive coding method.

A given image is first divided into 8x8 non-overlapping blocks, and an 8x8 DCT is applied to each block. For the DC coefficient of each block, a predictive coding method is used: the current DC value is predicted from the DC value of the previous block, and the prediction error is quantized using a uniform quantizer. The AC coefficients are quantized directly, using quantizers with different step-sizes. The step-sizes for the DC prediction error and the AC coefficients are specified in a normalization matrix. The particular matrix used can be specified at the beginning of the compressed bit stream as side information; alternatively, one can use the matrix recommended by the JPEG standard as the default, shown in Fig. 6.

Fig. 6 The default JPEG normalization matrix (also used by the mask2 routine in Appendix A):

    16  11  10  16  24  40  51  61
    12  12  14  19  26  58  60  55
    14  13  16  24  40  57  69  56
    14  17  22  29  51  87  80  62
    18  22  37  56  68 109 103  77
    24  35  55  64  81 104 113  92
    49  64  78  87 103 121 120 101
    72  92  95  98 112 100 103  99

Usually one can trade off quality against compression ratio by scaling the normalization matrix with a scale factor. For example, a scale factor of two means the step-size for every coefficient is doubled, while a scale factor of 0.5 cuts all the step-sizes in half. A smaller scale factor leads to a more accurate representation of the original image, but also a lower compression gain (i.e., a higher bit rate).
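In code, this trade-off amounts to a single elementwise scaling of the step-sizes. A minimal sketch, assuming hypothetical variables Q (the normalization matrix of Fig. 6) and C (one 8x8 block of DCT coefficients):

    s = 2;                       % scale factor; s = 2 doubles every step-size
    Cq   = round(C ./ (s*Q));    % quantization: integer indices to be entropy coded
    Chat = Cq .* (s*Q);          % dequantization: coefficients seen by the decoder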

Fig. 7 A typical zig-zag mask: the scan starts at the DC coefficient in the top-left corner and traverses the anti-diagonals toward the highest frequency at the bottom right

For the binary encoding of the quantized DC prediction errors and AC coefficients, a combination of fixed and variable length coding is used. All possible DC prediction error values and all AC coefficient values (after quantization) are divided into separate categories according to their magnitude; see Table 6.14 in [1] for the categorization rule. The DC prediction error is encoded by a two-part code. The first part specifies which category it is in, and the second part specifies the actual relative magnitude within that category. The first part is Huffman coded based on the frequencies of the different categories, while the second part is coded using a fixed-length codeword.

For the AC coefficients, a runlength coding method is used. The DCT coefficients are arranged into a 1-D array following a zig-zag order, as illustrated in Fig. 7. The AC coefficients are converted into symbols, each symbol being a pair consisting of the runlength of zeros since the last non-zero value and the following non-zero value. The non-zero value is further specified in two parts, similar to the approach for the DC prediction error: the first part specifies which category it belongs to, and the second part specifies the relative magnitude. The symbol consisting of the zero runlength and the category of the non-zero value is Huffman coded, while the relative magnitude of the non-zero value is coded using a fixed-length code. The standard recommends default Huffman tables for the DC prediction errors and the AC symbols, but the user can also specify different tables, optimized for the statistics of the images in a particular application. For a color image, each color component is compressed separately using the same method.

II.3 Vector Quantization

Vector quantization (VQ) is another approach to image compression. The idea behind VQ is to determine the set of basic block patterns (each called a codeword) that can best represent all the image blocks present in an image. The set of all basic patterns is called the codebook. Usually, the best codebook for a class of images is pre-designed. In the encoding process, the best matching codeword in the codebook is found for each given image block. Although this search makes encoding more CPU-intensive than JPEG, decoding (decompression) is much simpler and faster: it only involves looking up the appropriate codeword in the codebook, based on the index created during the encoding process. The complexity of VQ grows exponentially with the size of the block (i.e., the number of pixels in a block); in practice, block sizes of 4x4 or smaller are often used. Figure 8 describes the vector quantization architecture. The simplest approach partitions the input image at the encoder into small, contiguous, non-overlapping rectangular blocks, or vectors of pixels, each of which is quantized individually. The vector dimension is given by the number of pixels in the block.

The vector of samples is a pattern that must be approximated by a finite set of test patterns. The patterns are stored in a dictionary, the codebook; the patterns in the codebook are called codewords. During compression, the encoder assigns to each input vector an address, or index, identifying the codeword in the codebook that best approximates the input vector. In the decoder, each index is mapped to an output vector taken from the codebook. The decoder reconstructs the image like a patchwork quilt, by placing the reconstructed vectors in the same sequence as the input vectors. The Intel Indeo technology is based on a proprietary vector quantization methodology.

Fig. 8 Vector quantization: the encoder maps each input vector to the index of the nearest code vector in the codebook (nearest-neighbour search); the decoder maps each index back to a code vector by table lookup
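A minimal sketch of the encoder/decoder pair in Fig. 8, assuming hypothetical variables X (one vectorized image block per row, e.g. M-by-16 for 4x4 blocks) and CB (a pre-designed K-by-16 codebook):

    % Encoder: nearest-neighbour search over the codebook for each input vector.
    M = size(X, 1);
    idx = zeros(M, 1);
    for m = 1:M
        d = sum((CB - repmat(X(m,:), size(CB,1), 1)).^2, 2);  % squared distances
        [~, idx(m)] = min(d);        % index of the best-matching codeword
    end
    % Decoder: a table lookup, one code vector per received index.
    Xhat = CB(idx, :);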

III Experiment

1) For bilevel runlength coding, if you have an image x=[1 1 1 1; 1 2 2 1; 1 1 1 1] (where 1 represents white and 2 black), the output should yield code=[-1 1 2 -1 -1]. Use h:\el593\exp10\e.gif as your input image, perform bilevel runlength coding manually, and count the probabilities of the different runlengths. Then use Huffman coding to decide the codeword for each symbol and calculate the average code length. You can use dispgif to read the input file and display the data of the image.

2) Demo10.m is a program that performs a DCT over each 8x8 image block and then quantizes the DCT coefficients. Finally, it performs an inverse DCT to obtain a reconstructed image. Play with the program to see the effect of quantization with different quantization scale factors.

3) Modify the demo10.m program so that instead of quantizing the DCT coefficients using the supplied normalization matrix, you retain only the first L coefficients in zig-zag order. Try out different values of L (e.g., 1 <= L <= 16). Hint: modify the mask in the program according to which coefficients are to be retained and which are to be zeroed out.

4) Modify the demo10.m program so that instead of quantizing the DCT coefficients using the supplied normalization matrix, you retain only those coefficients whose magnitude is greater than a threshold T. Try out different values of T (e.g., 1 <= T <= 256).

NOTE: You must show your work to the instructor and get a signature.

IV Report

1) Submit the programs you wrote. Include the source code and your output results.

2) For experiments 2, 3, and 4, comment on the effects of the quantization scale factor, the number of retained coefficients, and the threshold, respectively. What are the maximum scale factor, the minimum number of coefficients, and the maximum threshold that can be used while still maintaining an acceptable visual quality of the image?

3) For a video sequence, the image size of each frame is 360x240 and the frame rate is 30 frames/second with full color (3 bytes/pixel). What compression ratio is required for this sequence to fit in 1.5 Mbits/second?

V References

1) R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, 1992.
2) A. N. Netravali and B. G. Haskell, Digital Pictures: Representation, Compression, and Standards, 2nd ed., Plenum Press, 1995.
3) A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Prentice Hall, 1989.
4) Matlab User's Guide, The MathWorks Inc., 1993.
5) Matlab Reference Guide, The MathWorks Inc., 1992.

APPENDIX A

******************************************************************
*        MATLAB Script file for demonstration of DCT             *
******************************************************************

function demo10(filename, dx, dy)
% Demo program for EL593 Exp. 10.
% Usage: demo10('h:\el593\exp10\lena.img', 256, 256);

% Read the raw image data and display the original image.
fid = fopen(filename);
Img = fread(fid, [dx, dy]);
fclose(fid);
colormap(gray(256)); image(Img');
set(gca, 'xtick', [], 'ytick', []);
title('Original Image'); truesize; drawnow

% Forward 8x8 block DCT, quantization (see mask2 below), and inverse
% block DCT. (blkproc is called blockproc in newer MATLAB releases.)
y  = blkproc(Img, [8 8], 'dct2');
yy = blkproc(y,  [8 8], 'mask2');
yq = blkproc(yy, [8 8], 'idct2');

figure; colormap(gray(256)); image(yq');
set(gca, 'xtick', [], 'ytick', []);
title('Quantized Image'); truesize; drawnow

******************************************************************
*   MATLAB Script file for demonstration of DCT (subroutine 1)   *
******************************************************************

function y = mask2(x)
% MASK2  Quantize and dequantize one 8x8 block of DCT coefficients
% using the JPEG normalization matrix of Fig. 6, scaled by c.
mask = [16 11 10 16  24  40  51  61;
        12 12 14 19  26  58  60  55;
        14 13 16 24  40  57  69  56;
        14 17 22 29  51  87  80  62;
        18 22 37 56  68 109 103  77;
        24 35 55 64  81 104 113  92;
        49 64 78 87 103 121 120 101;
        72 92 95 98 112 100 103  99];
x = x / 8;              % scale the block-DCT coefficients to the
                        % range the normalization table assumes
c = 16;                 % quantizer scale factor (normally c = 1)
mask = c * mask;
z = round(x ./ mask);   % quantization: integer indices
y = mask .* z;          % dequantization: reconstructed coefficients
y = 8 * y;              % undo the initial scaling