Comparative Analysis of Various Image Compression Techniques

DOI : 10.17577/IJERTCONV5IS05011


Gaganjot Singh
M-Tech Student, Department of CSE
AIET Faridkot, Punjab, India

Jasneet Singh Sandhu
Assistant Professor, Department of CSE
AIET Faridkot, Punjab, India

Abstract: Image compression is the process of removing redundant information from an image so that only the essential information is stored, reducing storage size, transmission bandwidth and transmission time. The essential information is extracted by various transform techniques such that the image can be reconstructed without losing its quality and information. In this paper, a comparative analysis of image compression is carried out with the hybrid (DWT-DCT) transform. MATLAB programs were written for each of the methods, and the results show that the hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT and DWT algorithms in terms of peak signal-to-noise ratio (PSNR) as well as visual perception at higher compression ratios. The wavelet transform, which is part of the newer JPEG 2000 standard, is claimed to reduce some of the visually distracting artifacts that can appear in JPEG images.

  1. INTRODUCTION

    The increasing demand for multimedia content such as digital images and video has led to great interest in research into compression techniques. The development of higher-quality and less expensive image acquisition devices has produced a steady increase in both image size and resolution, and a consequently greater need for the design of efficient compression systems. Although storage capacity and transfer bandwidth have grown accordingly in recent years, many applications still require compression.

    Factors related to the need for image compression include:

    • The large storage requirements for multimedia data

    • Low power devices such as handheld phones have small storage capacity

    • Network bandwidths currently available for transmission

    • The effect of computational complexity on practical implementation.

    A data compression system mainly consists of three major steps: removal or reduction of data redundancy, reduction in entropy, and entropy encoding. A typical data compression system can be illustrated by the block diagram shown in Figure 1.

    Fig. 1: A data compression model

    Compression is performed in steps such as image transformation, quantization and entropy coding. JPEG is one of the most widely used image compression standards; it uses the Discrete Cosine Transform (DCT) to transform the image from the spatial to the frequency domain. An image carries little visual information in its high frequencies, so these can be heavily quantized to reduce the size of the transformed representation. Entropy coding follows to further reduce the redundancy in the transformed and quantized image data.

    A. Image Compression Based on Entropy

    The principle of digital image compression is based on information theory. Image compression uses the concept of entropy to measure the amount of information that a source produces: the amount of information produced by a source is defined as its entropy. For each symbol there is a product of the symbol's probability and its logarithm, and the entropy is the negative sum of these products over all symbols in a given symbol set, H = -\sum_i p(s_i) \log_2 p(s_i). Compression algorithms are methods that reduce the number of symbols used to represent source information, thereby reducing the amount of space needed to store it or the amount of time necessary to transmit it over a channel of given capacity.
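    To make this concrete, the following MATLAB sketch (MATLAB being the environment used in this work) computes the entropy of a grayscale image from its gray-level histogram; the bundled cameraman.tif test image and the Image Processing Toolbox function imhist are assumptions of the sketch.

    % Shannon entropy of an 8-bit grayscale image from its histogram.
    I = imread('cameraman.tif');     % any 8-bit grayscale image works
    counts = imhist(I);              % 256-bin histogram of gray levels
    p = counts / numel(I);           % probability of each symbol
    p = p(p > 0);                    % drop empty bins (0*log2(0) -> 0)
    H = -sum(p .* log2(p));          % entropy in bits per pixel
    fprintf('Entropy: %.4f bits/pixel\n', H);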

    The mapping from source symbols into fewer target symbols is referred to as compression, and the transformation from the target symbols back into source symbols representing a close approximation of the original information is called decompression. A compression system consists of two steps: the sampling and the quantization of a signal. The choice of compression algorithm involves several conflicting considerations, including the degree of compression required and the speed of operation. Obviously, if one is attempting to run programs directly from their compressed state, decompression speed is paramount. The other consideration is the size of the compressed file versus the quality of the decompressed image. Compression is also known as the encoding process and decompression as the decoding process. Digital data compression algorithms can be classified into two categories:

    • Lossless compression: In a lossless image compression algorithm, the original data can be recovered exactly from the compressed data. It is generally used for discrete data such as text, computer-generated data, and certain kinds of image and video information. Lossless compression can achieve only a modest amount of compression, so it is not useful when very high compression ratios are required. GIF, the Zip file format, and the TIFF image format are popular examples of lossless compression [30, 3]. Huffman encoding and LZW are two examples of lossless compression algorithms. There are times when such methods of compression are unnecessarily exact.

      In other words, lossless compression works by reducing the redundancy in the data; the decompressed data is an exact copy of the original, with no loss of information.

    • Lossy compression: Lossy compression techniques involve a loss of information when the data is compressed. As a result of this distortion, much higher compression ratios are possible than with lossless compression when the image is reconstructed. A lossy technique sacrifices exact reproduction of the data for better compression: it removes redundancy and creates an approximation of the original.

    B. Redundancy

    If the same information can be represented using different amounts of data, the representations that require more data than the actual information contain data redundancy. In other words, the number of bits required to represent the information in an image can be minimized by removing the redundancy present in it. Data redundancy is a central issue in digital image compression. If n1 and n2 denote the number of information-carrying units in the original and compressed image respectively, then the compression ratio is

    CR = n1 / n2
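    As a toy illustration of this ratio (all byte counts below are hypothetical):

    % Compression ratio from the definition CR = n1/n2.
    n1 = 256 * 256;              % original: 256x256 image at 1 byte/pixel
    n2 = 8192;                   % assumed size of the compressed stream
    CR = n1 / n2;                % = 8, i.e. an 8:1 compression ratio
    fprintf('CR = %.0f:1\n', CR);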

  2. TECHNIQUES USED

    A. Discrete Cosine Transform

      The Discrete Cosine Transform (DCT) is an orthogonal transform that attempts to decorrelate the image data. After decorrelation, each transform coefficient can be encoded independently without losing compression efficiency.

      The DCT transforms a signal from a spatial representation into a frequency representation; it represents an image as a sum of sinusoids of varying magnitudes and frequencies. The DCT has the property that, for a typical image, most of the visually significant information is concentrated in just a few DCT coefficients. After the DCT coefficients are computed, they are normalized according to a quantization table; the JPEG standard provides tables at different scales, derived from psychovisual evidence. The choice of quantization table affects the entropy and the compression ratio. The DCT has many advantages:

      • It has the ability to pack the most information into the fewest coefficients.

      • It minimizes the block-like appearance, called blocking artifact, that results when the boundaries between sub-images become visible.

        An image is represented as a two-dimensional matrix, and the 2D DCT is used to compute the DCT coefficients of an image. The 2D DCT for an N x N input sequence is defined as follows:

        D(i,j) = \frac{1}{\sqrt{2N}}\, C(i)\, C(j) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} P(x,y) \cos\!\left[\frac{(2x+1)\, i\, \pi}{2N}\right] \cos\!\left[\frac{(2y+1)\, j\, \pi}{2N}\right]


    where P(x, y) is an element of the N x N input image matrix, (x, y) are the coordinates of the matrix elements, (i, j) are the coordinates of the coefficients, and C(k) = 1/\sqrt{2} for k = 0 and C(k) = 1 otherwise.

    Limitations of DCT: At lower compression ratios, the distortion goes unnoticed by human visual perception. To achieve higher compression, quantization followed by scaling must be applied to the transformed coefficients, and at such higher compression ratios the DCT has two limitations. The first is blocking artifacts: a distortion that appears under heavy compression as abnormally large pixel blocks; at higher compression ratios, the perceptible blocking artifacts across block boundaries cannot be neglected. The second is false contouring: it occurs when a smoothly graded area of an image is distorted by a deviation that looks like a contour map, in images having gradually shaded areas. The main cause of the false contouring effect is heavy quantization of the transform coefficients.
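    The sketch below reproduces this behaviour in miniature: a JPEG-style 8 x 8 block DCT with a single scalar quantization step q standing in for the JPEG quantization table (an assumption of the sketch, which uses the Image Processing Toolbox functions dct2, idct2, blockproc and psnr). Increasing q raises the compression and makes the blocking artifacts visible.

    % 8x8 block DCT with coarse scalar quantization (JPEG-style sketch).
    I = im2double(imread('cameraman.tif'));
    q = 0.1;                                   % larger q -> heavier compression
    fwd = @(b) round(dct2(b.data) / q);        % per-block DCT + quantization
    bwd = @(b) idct2(b.data * q);              % de-quantization + inverse DCT
    C = blockproc(I, [8 8], fwd);              % quantized coefficient blocks
    R = blockproc(C, [8 8], bwd);              % reconstructed image
    fprintf('PSNR = %.2f dB\n', psnr(R, I));   % drops as q grows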

    B. Discrete Wavelet Transform (DWT)

    Wavelets are a mathematical tool for changing the coordinate system in which signals are represented into a domain that is better suited to compression. Wavelet-based coding is more robust under transmission and decoding errors. Due to their inherent multi-resolution nature, DWTs are suitable for applications where scalability and tolerable degradation are important.

    Wavelets are a tool for decomposing signals, such as images, into a hierarchy of increasing resolutions: the more resolution layers, the more detailed the features of the image that are shown. Wavelets are localized waves that drop to zero, and the DWT arises from the iteration of filters together with rescaling. A wavelet transform produces a natural multi-resolution representation of every image, including the all-important edges, and the output from the low-pass channel is useful for compression. Wavelets form an unconditional basis, so the magnitudes of the wavelet coefficients drop off rapidly; each expansion coefficient represents a local component, which makes it easier to interpret. Wavelets are adjustable and can therefore be designed to suit individual applications. The generation and calculation of the DWT are well suited to the digital computer, since they require only multiplications and additions, operations that are basic to a digital computer.
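    A minimal two-level decomposition sketch follows, assuming the Haar wavelet and MATLAB's Wavelet Toolbox; as a crude form of compression it keeps only the level-2 approximation sub-band and discards all detail sub-bands.

    % Two-level 2D DWT and reconstruction from the approximation only.
    I = im2double(imread('cameraman.tif'));    % 256x256 test image
    [cA1, cH1, cV1, cD1] = dwt2(I, 'haar');    % level 1: 128x128 sub-bands
    [cA2, cH2, cV2, cD2] = dwt2(cA1, 'haar');  % level 2: 64x64 sub-bands
    z1 = zeros(size(cH1));                     % discarded detail sub-bands
    z2 = zeros(size(cH2));
    cA1r = idwt2(cA2, z2, z2, z2, 'haar');     % rebuild level-1 approximation
    R    = idwt2(cA1r, z1, z1, z1, 'haar');    % rebuild full-size image
    fprintf('PSNR = %.2f dB\n', psnr(R, I));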

  3. PROPOSED METHOD

    In this work, a comparative analysis of image compression is carried out with three transform methods: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and the hybrid (DWT+DCT) transform. MATLAB programs were written for each method, and the results show that the hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT and DWT algorithms in terms of peak signal-to-noise ratio (PSNR) as well as visual perception at higher compression ratios.

    A. Hybrid (DWT+DCT) Transform

    In Section 2, two different ways of achieving the goals of image compression, each with its own advantages and disadvantages, were discussed. This section proposes a hybrid transform technique that exploits the advantages of both DCT and DWT to produce the compressed image. The hybrid DWT-DCT transform gives a higher compression ratio than JPEG and JPEG2000 while preserving most of the image information and producing a reconstructed image of good quality. The hybrid (DWT+DCT) transform reduces blocking artifacts, false contouring and ringing effects.

    1. Compression procedure: The input image is first converted from colour to grayscale, after which the whole image is divided into blocks of 32 x 32 pixels. A 2D DWT is applied to each 32 x 32 block, producing four sub-bands. Of these, the approximation sub-band is transformed again by a 2D DWT, which gives four sub-bands of size 16 x 16. The same step is applied to the 16 x 16 approximation sub-band to obtain a new set of four sub-bands of size 8 x 8. The level of decomposition depends on the size of the initial processing block; since 32 x 32 blocks are used here, the level of decomposition is 2. After obtaining the four 8 x 8 sub-bands, the proposed method computes the discrete cosine transform coefficients of the approximation sub-band. These coefficients are then quantized and sent for coding. The complete coding scheme is shown in Figure 2, and a code sketch of both procedures follows Figure 3.

      Fig. 2: Compression technique using Hybrid transform

    2. Decompression procedure: At the receiver side, the algorithm decodes the quantized DCT coefficients, de-quantizes them, and computes the inverse two-dimensional DCT (IDCT) of each block. The inverse wavelet transform is then applied to the resulting block; since the level of decomposition during compression was two, the inverse wavelet transform is likewise applied twice to recover the original block size of 32 x 32. This procedure is followed for each received block; once all blocks have been converted back to 32 x 32, the algorithm arranges them to obtain the reconstructed image. The complete decoding procedure is shown in Figure 3. The hybrid DWT-DCT algorithm performs better than standalone DWT and DCT in terms of Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR). In standalone DCT, the entire image/frame is divided into 8 x 8 blocks in order to apply the 8-point DCT.

    Fig. 3: Decompression technique using Hybrid transform
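    The sketch below walks through both procedures on a single image. The paper does not fix the wavelet family, the quantizer, or how the discarded detail sub-bands are handled, so the Haar wavelet, a scalar step q and zeroed details are assumptions here; a real codec would entropy-code the quantized coefficients between the two halves of the loop.

    % Hybrid (DWT+DCT) codec sketch: 32x32 blocks, 2-level DWT, DCT on the
    % 8x8 approximation sub-band, scalar quantization, then the inverse path.
    I = im2double(imread('cameraman.tif'));    % 256x256 grayscale test image
    q = 0.02; R = zeros(size(I));
    for r = 1:32:size(I, 1)
      for c = 1:32:size(I, 2)
        B = I(r:r+31, c:c+31);
        % compression: two DWT levels, DCT of the approximation, quantize
        [a1, h1, v1, d1] = dwt2(B, 'haar');    % 32x32 -> 16x16 sub-bands
        [a2, h2, v2, d2] = dwt2(a1, 'haar');   % 16x16 ->  8x8 sub-bands
        coef = round(dct2(a2) / q);            % quantized DCT coefficients
        % decompression: de-quantize, IDCT, two inverse DWT levels
        a2r = idct2(coef * q);
        z2 = zeros(size(h2)); z1 = zeros(size(h1));
        a1r = idwt2(a2r, z2, z2, z2, 'haar');  % details zeroed in this sketch
        R(r:r+31, c:c+31) = idwt2(a1r, z1, z1, z1, 'haar');
      end
    end
    fprintf('PSNR = %.2f dB\n', psnr(R, I));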

  4. PERFORMANCE MEASUREMENT PARAMETERS

    In this work, emphasis was placed on the amount of compression achieved and on how closely the reconstructed image resembles the original. The analysis was based on the amount of distortion, calculated with standard distortion measures: mean square error (MSE), peak signal-to-noise ratio (PSNR) measured in decibels (dB), and compression ratio (CR) were used as performance indicators. A good compression algorithm reconstructs the image with low MSE and high PSNR.

    1. Mean Square Error (MSE): The MSE is the cumulative squared error between the compressed and the original image; a lower value of MSE means less error. In general, it is the average of the square of the difference between the desired response and the actual system output. As a loss function, MSE is also called squared error loss. For an unbiased estimator, the MSE equals the variance of the estimator, and its square root is known as the standard error. The MSE is defined as

      MSE = \frac{1}{mn} \sum_{x=1}^{m} \sum_{y=1}^{n} \left[ I(x,y) - I'(x,y) \right]^2

      where I(x, y) is the original image, I'(x, y) is the reconstructed image, and m, n are the dimensions of the image. The lower the MSE, the lower the error and the better the picture quality.

    2. Peak Signal to Noise Ratio (PSNR): PSNR is a measure of the peak error. Many signals have a very wide dynamic range, so PSNR is usually expressed on the logarithmic decibel (dB) scale. A higher value of PSNR is better, because it means the ratio of signal to noise is higher; here the signal is the original image and the noise is the reconstruction error. It is the ratio between the maximum possible power of a signal and the power of the corrupting noise, and it decreases as the compression ratio of an image increases. The PSNR is defined as

      PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{MSE} \right)

      where MAX_I is the maximum possible pixel value (255 for 8-bit images). PSNR is computed by measuring the pixel difference between the original and compressed images; its values range from infinity for identical images down to 0 for images that have nothing in common.

    3. Compression ratio (CR): Compression ratio is a measure of the reduction in the detail coefficients of the data. In the process of image compression, it is important to know how many detail (important) coefficients can be discarded from the input data while preserving the critical information of the original. The compression ratio can be expressed as

      CR = n1 / n2

    4. Normalized cross correlation (NC): For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can first be normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation; that is, the normalized cross-correlation of a template t(x, y) with a sub-image f(x, y) is

      NC = \frac{1}{n} \sum_{x,y} \frac{\left( f(x,y) - \bar{f} \right) \left( t(x,y) - \bar{t} \right)}{\sigma_f \, \sigma_t}

      where n is the number of pixels, \bar{f}, \bar{t} are the means and \sigma_f, \sigma_t the standard deviations of f and t.
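    A short MATLAB sketch of these measures follows, using a noisy copy of the image as a hypothetical reconstruction (std normalizes by n-1 rather than n, a negligible difference for a sketch):

    % MSE, PSNR and NC between an image and a degraded copy of itself.
    I = imread('cameraman.tif');
    R = imnoise(I, 'gaussian', 0, 0.001);      % stand-in reconstruction
    D = double(I) - double(R);
    MSE = mean(D(:).^2);                       % mean square error
    PSNR = 10 * log10(255^2 / MSE);            % PSNR in dB for 8-bit data
    f = double(I(:)); t = double(R(:));
    NC = mean((f - mean(f)) .* (t - mean(t))) / (std(f) * std(t));
    fprintf('MSE = %.4f  PSNR = %.2f dB  NC = %.4f\n', MSE, PSNR, NC);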

  5. PERFORMANCE EVALUATION AND SIMULATION RESULTS

    This section evaluates the performance of the various image compression algorithms. The studied algorithms are applied to several types of medical images; these benchmark images are standard images generally used in image-processing applications. The results of the detailed simulations are presented in this section.


    Comparisons of the MSE, CR and PSNR results are shown in Table I, Table II and Table III respectively.


      Fig. 4: Experimental snapshots of original and compressed images

      TABLE I
      Comparison results of MSE

      Images    DWT+DCT     Ref. [4]
      1.jpg     0.37679     1.8295
      2.jpg     0.309242    0.4567
      3.jpg     1.45876     0.1561
      4.jpg     0.705212    0.7350
      5.jpg     0.126074    2.0095
      Average   0.5952156   1.03736


    Fig. 5: Graphical representation of MSE

    TABLE II
    Comparison results of CR

    Images    DWT+DCT    Ref. [4]
    1.jpg     55.9256    49.4178
    2.jpg     56.7836    55.4447
    3.jpg     50.0468    60.1073
    4.jpg     53.2034    53.3783
    5.jpg     60.6804    49.0103
    Average   55.32796   53.47168

    Fig. 6: Graphical representation of CR

    TABLE III
    Comparison results of PSNR

    Images    DWT+DCT    Ref. [4]
    1.jpg     91.1978    86.4319
    2.jpg     91.094     91.9231
    3.jpg     90.0612    91.9231
    4.jpg     86.5355    90.8100
    5.jpg     89.5468    85.7119
    Average   89.68      89.36

    Fig. 7: Graphical representation of PSNR

    TABLE IV (a)

    Gray Scale Images
    Images    MSE         PSNR       CR
    1.jpg     0.37679     55.9256    91.1978
    2.jpg     0.309242    56.7836    91.094
    3.jpg     1.45876     50.0468    90.0612
    4.jpg     0.705212    53.2034    86.5355
    5.jpg     0.126074    60.6804    89.5468

    Colour Images
    6.jpg     0.39438     51.4526    91.5454
    7.jpg     0.614927    54.3654    91.901
    8.jpg     0.970452    52.3802    81.6894
    9.jpg     0.337986    53.3521    89.393
    10.jpg    0.94879     52.6515    89.3447

    TABLE IV (b)

    Gray Scale Images
    Images    Correlation    BER
    1.jpg     0.991978       0.0178809
    2.jpg     0.99094        0.0176107
    3.jpg     0.980612       0.0199813
    4.jpg     0.945355       0.0187958
    5.jpg     0.975468       0.0164798

    Colour Images
    6.jpg     0.995454       0.0187082
    7.jpg     0.99901        0.0183266
    8.jpg     0.916894       0.0190186
    9.jpg     0.97793        0.0186039
    10.jpg    0.977447       0.0189928

  6. CONCLUSION

The algorithms for compression and decompression with various image compression methods, namely DCT, DWT and the hybrid transform, have been discussed. DCT requires fewer computational resources and achieves the energy compaction property; however, at higher compression ratios it introduces blocking artifacts and false contouring effects in the reconstructed image. DWT is the only one of these techniques with the capacity for multi-resolution compression; however, it has a higher computational complexity than the other techniques. Hence, in order to benefit from both, the hybrid DWT-DCT algorithm has been discussed for image compression.

This hybrid approach speeds up the encoding time by reducing the number of range-domain comparisons by a remarkable amount. Each method can be well suited to different images, depending on the user's requirements.

In this work, an analysis of various image compression techniques for different images is presented based on the parameters compression ratio (CR), mean square error (MSE) and peak signal-to-noise ratio (PSNR). This work achieves a higher compression ratio. DWT gives a better compression ratio without losing much information from the image; its pitfall is that it requires more processing power. DCT overcomes this disadvantage, since it needs less processing power, but it gives a lower compression ratio. The DCT-based JPEG standard operates on blocks of the image, but correlation still exists across blocks: block boundaries are noticeable in some cases, and blocking artifacts can be seen at low bit rates. With wavelets, there is no need to divide the image.

The hybrid transform gives a higher compression ratio while retaining the clarity of the image. It is well suited to general applications, as it provides a good compression ratio while preserving most of the information.

REFERENCES

  1. A. K. Jain, Fundamentals of Digital Image Processing, Prentice- Hall Inc, Englewood Cliffs, 1989.

  2. A. M. Raid, W.M. Khedr , M. A. El-dosuky and W. Ahmed, Jpeg Image Compression Using Discrete Cosine Transform – A Survey, International Journal of Computer Science & Engineering Survey, Vol. 5, No. 2, 2014.

  3. C. S. Rawat and S. Mehar, A Hybrid Image Compression Scheme using DCT and Fractal Image Compression, The International Arab Journal of Information Technology, Vol. 10, No. 6, 2013.

  4. M. Kaur and V. Wasson, ROI based Medical Image Compression for Telemedicine Application, Proceedings of the 4th International Conference on Eco-friendly Computing and Communication Systems, Procedia Computer Science, Vol. 70, pp. 579-585, 2015.

  5. N. A. Dheringe and B.N. Bansode, Genetic Algorithm Using Discrete Cosine Transform for Fractal Image Encode, International Journal of Soft Computing and Engineering, Vol. 3, Issue-6, 2014.

  6. N. K. More and S. Dubey, JPEG Picture Compression Using Discrete cosine transform, International Journal of Science and Research, Vol. 2, Issue 1, 2013.

  7. P. K. Singh, N. Singh and K. N. Rai, Comparative Study between DCT and Wavelet Transform Based Image Compression Algorithm, Journal of Computer Engineering, Vol. 17, Issue 1, pp. 53-57, 2015

  8. P. Kaur and G. Lalit, Comparative Analysis of DCT, DWT & LWT for Image Compression, International Journal of Innovative Technology and Exploring Engineering, Vol. 1, Issue 3, 2012.

  9. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.

  10. S. J. Bagul, N. G. Shimpi and P. M. Patil, JPEG Image Compression Using Fast 2-D DCT Technique, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 3, Issue 11, 2011.

  11. S. Sharma and S. Kaur, Image Compression using hybrid of DWT, DCT and Huffman coding, International Journal for Science and Emerging Technologies, Vol. 5, No. 1, pp. 19-23, 2013.
