- Authors : Basavaraj Patil, Avinash S, Amit K S
- Paper ID : IJERTCONV3IS27101
- Volume & Issue : NCRTS – 2015 (Volume 3 – Issue 27)
- Published (First Online): 30-07-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
JPEG Image Compression using Huffman Coding and Discrete Cosine Transform
Basavaraj Patil
Asst. Professor, Dept. of ISE
S.D.M. Institute of Technology Ujire, Karnataka, India
Avinash S
Asst. Professor, Dept. of ECE
S.D.M. Institute of Technology Ujire, Karnataka, India
Amit K S
Asst. Professor, Dept. of CSE
S.D.M. Institute of Technology Ujire, Karnataka, India
Abstract: The need for an efficient technique for image compression is ever increasing, because raw images require large amounts of disk space, which is a major disadvantage during transmission and storage. Even though many compression techniques already exist, a technique that is faster, more memory efficient, and simpler would better suit the requirements of the user. In this paper we propose a lossless method of image compression and decompression using a simple coding technique called Huffman coding. It is simple to implement and uses little memory. The algorithm has been developed and implemented for the given image using Huffman coding techniques on a MATLAB platform.
Keywords: Compression, JPEG, DCT, Entropy
INTRODUCTION
Basically, an image is a rectangular array of pixels, the pixel being the smallest unit of the image. The size of an image is its number of pixels, i.e., its width and height. In the real world an image is defined as a function of two real variables, for example a(x, y), with a as the amplitude (such as brightness) of the image at the real coordinate position (x, y). Image compression is the application of data compression to digital images. The objective of image compression is to reduce the redundancy of the image data so that the image can be stored or transmitted in an efficient manner.
Image compression can be categorized into two types:
- Lossy compression
- Lossless compression
Lossless compression methods are preferable for artificial images such as technical drawings and icons, because lossy compression methods, especially at low bit rates, introduce compression artifacts. Lossless compression methods are also chosen for high-value content such as medical images and image scans made for research purposes. As the name says, lossless compression methods lose no information about the image; in other words, the image reconstructed from the compressed data is identical to the original image. The purpose of image compression is to reduce the quantity of data required to represent sampled digital images and thereby decrease the cost of transmission and storage. Image compression plays a major role in many real-time applications such as image databases, satellite imagery for weather, remote-sensing image communications, and earth-resource applications. The images compressed here are grayscale, with pixel values between 0 and 255.
Fig 1: Block Diagram for JPEG Encoder and Decoder
Fig 2: Process View
DISCRETE COSINE TRANSFORM
Each 8×8 block of the component is converted to a frequency-domain representation using a normalized two-dimensional discrete cosine transform (DCT). For example, one such 8×8 8-bit sub-image can be represented as:
Before calculating the DCT of the 8×8 block, its values are shifted from a positive range to one centered on zero (by subtracting 128 from each entry). This step reduces the dynamic range requirements in the DCT processing stage that follows, with the result as follows:
The corresponding reconstruction step at the decoder is to take the two-dimensional inverse DCT, i.e., the 2D type-III DCT, given by:

f_{x,y} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} \alpha(u)\,\alpha(v)\, F_{u,v} \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right]

where
- x is the pixel row, for the integers 0 ≤ x < 8;
- y is the pixel column, for the integers 0 ≤ y < 8;
- α(u) is the normalizing scale factor, equal to 1/√2 for u = 0 and 1 otherwise (and likewise α(v));
- F_{u,v} is the reconstructed approximate coefficient at coordinates (u, v);
- f_{x,y} is the reconstructed pixel value at coordinates (x, y).
The DCT converts the image from the spatial domain to the frequency domain. It puts the image into a format in which compression can be carried out through quantization, but the DCT itself does not perform any compression.
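To make the level shift and transform concrete, here is a minimal MATLAB sketch (our illustration, not the paper's code) that builds the 8×8 DCT basis matrix directly from the defining formula and applies the forward and inverse transforms; the input variable imgBlock, an 8×8 block of the image, is assumed.

% Build the 8x8 DCT basis matrix T from the defining formula:
% T(u+1, x+1) = 0.5 * alpha(u) * cos((2x+1)*u*pi/16)
T = zeros(8, 8);
for u = 0:7
    for x = 0:7
        if u == 0
            a = 1/sqrt(2);          % normalizing scale factor alpha(u)
        else
            a = 1;
        end
        T(u+1, x+1) = 0.5 * a * cos((2*x + 1) * u * pi / 16);
    end
end
g      = double(imgBlock) - 128;    % level shift: centre values on zero
coeffs = T * g * T';                % forward 2D DCT (type-II)
recon  = T' * coeffs * T + 128;     % inverse 2D DCT (type-III) recovers the block

Because T is orthonormal, the inverse transform is simply the transpose, which is why the same matrix serves both the encoder and the decoder.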
QUANTIZATION
Quantization is achieved by compressing a range of values to a single quantum value, and it is the step where most of the compression takes place. It is performed by dividing the transformed image matrix element-wise by the quantization matrix and rounding off the values of the resulting matrix. In the resulting matrix, the coefficients situated near the upper-left corner correspond to the lower frequencies.
A typical quantization matrix, as specified in the original JPEG Standard, is as follows:
Using the above quantization matrix with the DCT coefficient matrix from above results in:
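As a sketch of this step, the following MATLAB fragment quantizes a coefficient block with the luminance quantization table given in Annex K of the original JPEG standard; coeffs is the DCT output from the sketch above.

% Luminance quantization table from Annex K of the original JPEG standard.
Q = [16 11 10 16  24  40  51  61;
     12 12 14 19  26  58  60  55;
     14 13 16 24  40  57  69  56;
     14 17 22 29  51  87  80  62;
     18 22 37 56  68 109 103  77;
     24 35 55 64  81 104 113  92;
     49 64 78 87 103 121 120 101;
     72 92 95 98 112 100 103  99];
quantized = round(coeffs ./ Q);     % element-wise divide and round: the lossy step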
DC & AC CODING WITH ZIGZAG SEQUENCE
This method accomplishes lossless compression by encoding the quantized DCT coefficients. After the quantization process, the DC coefficient is treated separately from the 63 AC coefficients. The DC coefficient is a measure of the average value of the 64 image samples. Because there is strong correlation between the DC coefficients of adjacent blocks, each quantized DC coefficient is encoded as the difference between it and the DC coefficient of the previous 8×8 block in encoding order. The DC coefficients carry a significant portion of the image energy, so coding the differences between them provides considerable compression. After this step, the full set of quantized coefficients is arranged into the zigzag sequence. The zigzag scan of the 2D quantized coefficients orders the frequencies into a 1D stream from lower to higher. This arrangement facilitates entropy coding by placing high-frequency coefficients after low-frequency coefficients; a sketch of the scan follows Fig 4 below.
Fig 3: Zig-Zag Sequence
The zigzag sequence of the quantized coefficients, following the scan of Fig 3, is shown in Fig 4.
Fig 4: Zig-Zag Sequence for quantized coefficients
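A minimal MATLAB sketch of the scan and of the DC differencing described above; quantized is the block from the quantization sketch, and prev_dc (the DC coefficient of the previous block) is an assumed variable.

% Build the zigzag scan order for an 8x8 block (0-based row r, column c).
idx = zeros(1, 64);
k = 1;
for s = 0:14                            % anti-diagonals, s = r + c
    if mod(s, 2) == 0
        rows = min(s, 7):-1:max(0, s-7);    % even diagonals run upward
    else
        rows = max(0, s-7):min(s, 7);       % odd diagonals run downward
    end
    for r = rows
        c = s - r;
        idx(k) = c*8 + r + 1;           % column-major linear index in MATLAB
        k = k + 1;
    end
end
stream  = quantized(idx);               % 1D stream, low to high frequency
dc_diff = stream(1) - prev_dc;          % DC coded as difference from previous block
ac      = stream(2:64);                 % the 63 AC coefficients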
ENTROPY CODING
Entropy coding achieves additional lossless compression by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. In JPEG, two entropy coding methods are available:
- Huffman Coding
- Arithmetic Coding
Entropy coding is processed in two phases:
- During the first phase, the zigzag sequence of quantized coefficients is converted into an intermediate set of symbols.
- In the second phase, the symbols are converted into a data stream in which they have no externally identifiable boundaries.
Huffman coding uses Huffman tables, defined by the application, to compress an image, and the same tables are then used for decompression. These Huffman tables are predefined or computed specifically for a given image during initialization, prior to compression.
Arithmetic coding does not require tables the way Huffman coding does, because it adapts to the image statistics as it encodes the image. Arithmetic coding is somewhat more complex than Huffman coding for certain implementations, for example, the highest-speed hardware implementations. Transcoding between the two coding methods is possible by simply entropy decoding with one method and entropy recoding with the other.
In this work we use lossless compression and decompression through a technique called Huffman coding (i.e., Huffman encoding and decoding).
The Huffman coding technique collects the unique symbols from the source image, calculates the probability of each symbol, and sorts the symbols by probability. Then, working from the lowest-probability symbols to the highest, two symbols are combined at a time to form a binary tree. Starting from the root of the tree, zero is allocated to each left branch and one to each right branch. The Huffman code for a particular symbol is obtained by collecting the zeros and ones on the path from the root to that symbol's node, in that order.
The basic concept in Huffman coding is to assign short code words to the input blocks with high probabilities and long code words to those with low probabilities. A Huffman code is designed by combining the two least probable characters and repeating this process until only one character remains. A code tree is thus generated, and the Huffman code is obtained from the labeling of the code tree.
The Huffman code procedure is based on two observations: symbols that occur more frequently will have shorter code words than symbols that occur less frequently, and the two symbols that occur least frequently will have code words of the same length.
The Huffman code is designed by merging the lowest-probability symbols, and this process is repeated until only the probabilities of two compound symbols are left; thus a code tree is generated and Huffman codes are obtained from the labeling of that tree. Huffman's procedure is then to code each reduced source, starting with the smallest source and working back to the original source. The minimal-length binary code for a two-symbol source is, of course, the symbols 0 and 1.
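As an illustration of this construction (not the paper's own code), the following MATLAB sketch builds a Huffman dictionary for a small example source and round-trips a symbol stream through it; it assumes the Communications Toolbox functions huffmandict, huffmanenco and huffmandeco are available.

symbols = [0 1 2 3];                    % example source symbols
prob    = [0.5 0.25 0.15 0.10];         % example probabilities (sum to 1)
dict    = huffmandict(symbols, prob);   % merge lowest probabilities, label the tree
sig     = [0 0 1 3 2 0 1];              % example symbol stream
code    = huffmanenco(sig, dict);       % shorter words for likelier symbols
back    = huffmandeco(code, dict);      % the same table decodes uniquely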
After the code has been created, coding and/or decoding is accomplished in a simple lookup-table manner. The code itself is an instantaneous, uniquely decodable block code. It is called a block code because each source symbol is mapped into a fixed sequence of code symbols. It is instantaneous because each code word in a string of code symbols can be decoded without referencing succeeding symbols. It is uniquely decodable because any string of code symbols can be decoded in only one way. Thus, any string of Huffman-encoded symbols can be decoded by examining the individual symbols of the string in a left-to-right manner.
At first the decompressor has, just as the compressor does, a probability distribution; the compressor builds a code table from it. The decompressor, however, does not use a table. It instead keeps the whole Huffman binary tree, together with a pointer to the root from which to start the descent. In our implementation we build the tree as usual and then store a pointer to the last node in the list, which is the root. Then decoding can start: we navigate the tree using the pointers to the children that each node has. This is done by a recursive function which accepts as a parameter a pointer to the current node and returns the symbol.
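A minimal MATLAB sketch of this tree-walk decoder, written iteratively rather than recursively for brevity; the node struct layout (fields left, right and symbol) and the function name are our own assumptions for illustration, not the paper's implementation.

% Walk the Huffman tree bit by bit; emit a symbol at each leaf.
function out = huffman_tree_decode(bits, root)
    out  = [];
    node = root;
    for k = 1:numel(bits)
        if bits(k) == 0
            node = node.left;           % 0 selects the left child
        else
            node = node.right;          % 1 selects the right child
        end
        if isempty(node.left) && isempty(node.right)
            out(end+1) = node.symbol;   % leaf reached: emit its symbol
            node = root;                % return to the root for the next word
        end
    end
end

Each 0 bit descends left and each 1 bit descends right; emitting the symbol at a leaf and restarting at the root is exactly the "return to the root" of steps 9 and 10 in the algorithm below.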
Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in)
And taking the entry-for-entry product with the quantization matrix from above results in
This closely resembles the original DCT coefficient matrix for the top-left portion. Applying the inverse DCT to these coefficients and rounding the output to integer values (since the original had integer values) results in an image with values (still shifted down by 128):
And adding 128 to each entry gives the reconstructed image block.
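Putting the decode path together, here is a minimal MATLAB sketch under the same assumptions as the earlier fragments (T, Q and quantized as defined there):

dequant = quantized .* Q;               % entry-for-entry product undoes quantization
recon   = round(T' * dequant * T);      % inverse 2D DCT, rounded to integers
pixels  = recon + 128;                  % undo the level shift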
The formula for the mean square error (MSE) between the original image f and the reconstructed image f' is:

\mathrm{MSE} = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ f(x,y) - f'(x,y) \right]^2

where M and N are the image dimensions.
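In MATLAB this is a one-liner (orig and recon are assumed to be same-size grayscale arrays):

mse = mean((double(orig(:)) - double(recon(:))).^2);   % average squared pixel difference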
HUFFMAN CODING AND DECODING ALGORITHM
Step1- Read the image into the workspace.
Step2- Convert the given colour image into a grey-level image.
Step3- Call a function which finds the symbols (i.e., the distinct, non-repeated pixel values).
Step4- Call a function which calculates the probability of each symbol (a sketch of steps 3 and 4 follows this list).
Step5- Arrange the symbol probabilities in decreasing order, merge the lowest probabilities, and continue until only two probabilities are left; assign codes according to the rule that the more probable a symbol, the shorter its code.
Step6- Perform Huffman encoding, i.e., map the code words to the corresponding symbols, which yields the compressed data.
Step7- Reconstruct the original image, i.e., decompress, using Huffman decoding.
Step8- Generate a tree equivalent to the encoding tree.
Step9- Read the input bit by bit and traverse the tree until a leaf is reached.
Step10- Output the character encoded in that leaf, return to the root, and continue step 9 until the codes of all the corresponding symbols are decoded.
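As a sketch of steps 3 and 4 (illustrative variable names; no toolbox functions assumed):

% img is assumed to be the grey-level image from Step 2.
pix     = double(img(:));                        % flatten to one vector
symbols = unique(pix);                           % Step 3: distinct pixel values
counts  = arrayfun(@(s) sum(pix == s), symbols); % occurrences of each symbol
prob    = counts / numel(pix);                   % Step 4: symbol probabilities

These symbols and probabilities feed directly into the merging of step 5, for example via the huffmandict sketch shown earlier.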
RESULTS
Compression Result:
Decompression Result:
CONCLUSION & FUTURE WORK
The experiments show that higher data redundancy helps to achieve more compression. We have presented a compression and decompression technique based on Huffman coding and decoding that reduces data volume and processing time. We conclude that Huffman coding is an efficient technique for image compression and decompression to some extent. Whether the compression ratio achieved by using the Discrete Cosine Transform with Huffman coding can be further increased by using the Discrete Fourier Transform is left for future work.