Image Compression Using Huffman Coding Based On Histogram Information And Image Segmentation

DOI : 10.17577/IJERTV1IS8540


  1. Dr. Monisha Sharma (Professor)

  2. Mr. Chandrashekhar K. (Associate Professor)

3. Lalak Chauhan (M.E. Student)

Department of ECE, Shri Shankaracharya College of Engg. and Technology Bhilai (C.G.) INDIA

Abstract – In this paper we propose a compression method based on Huffman coding guided by histogram information and image segmentation. It is used for both lossless and lossy compression: how much of the image is compressed in a lossy manner, and how much in a lossless manner, depends on the information obtained from the histogram of the image. The results show that the difference between the original and compressed images is visually negligible. The compression ratio (CR) and peak signal to noise ratio (PSNR) are obtained for different images. The relation between compression ratio and peak signal to noise ratio shows that, as we increase the compression ratio, the PSNR also increases, and we can also obtain a minimum mean square error. A higher PSNR indicates better image quality.

Keywords: Histogram, Lossy Compression, Lossless Compression, Compression ratio (CR) and Peak signal to noise ratio (PSNR).

The need for an efficient technique for compression of images is ever increasing because raw images need large amounts of disk space, which is a big disadvantage during transmission and storage. Even though many compression techniques already exist, a better technique that is faster, memory efficient, and simple would surely suit the requirements of the user. A synthetic measure of compression performance is given by the compression ratio and the peak signal to noise ratio, which are the important parameters used to analyze image compression.

The challenge of compression methods is to find the best compromise between a weak compression ratio and a good perceptual result. This has resulted in various compression techniques over the years. These techniques vary in the time they take, the type of images they can be applied to, and the loss of data the application can handle. The two main categories of compression are lossy and lossless. Huffman compression is one of the best lossless compression techniques available. But, due to the way the codes are generated, it becomes very slow as the size of the image increases with varying pixel intensities.

Two methods to overcome this disadvantage are as follows: one is to divide the image into small blocks, and the other is to decrease the number of pixels having different intensities. The method proposed in this paper lies in the latter category and is computationally less expensive. One of the reasons for this is that in the existing techniques Huffman coding is done repeatedly, although on smaller blocks, whereas in the proposed method it is done once for the whole image. As the maximum number of pixels is of zero intensity, it is faster. Also, existing methods do not consider different bit rates, whereas in the proposed algorithm the bit rate is the major factor in deciding how much of the image is to be compressed in a lossy manner and how much losslessly.
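Since the method rests on Huffman coding, a minimal sketch of codebook construction over pixel intensities may help. It uses Python's standard heapq and collections modules; it is a generic illustration of the classical algorithm, not the authors' implementation, and the function and variable names are ours.

import heapq
from collections import Counter

def build_huffman_codes(pixels):
    """Build a Huffman codebook mapping each intensity to a bit string.

    `pixels` is any iterable of intensity values (e.g. a flattened image).
    Frequent intensities get short codes and rare ones long codes, which
    is why reducing the number of distinct intensities speeds coding up.
    """
    freq = Counter(pixels)
    # Each heap entry: (frequency, unique tie-breaker, {intensity: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: the image has a single intensity
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix every code in the two merged subtrees with 0 / 1.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Example: the most frequent intensity (0) receives the shortest code.
codes = build_huffman_codes([0, 0, 0, 0, 0, 7, 7, 255])
print(codes)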

Segmentation is the process of partitioning a digital image into multiple segments, i.e., sets of pixels (also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ significantly with respect to the same characteristics. When applied to a stack of images, as is typical in medical imaging, the contours resulting from image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms.
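As a toy illustration of "assigning a label to every pixel", the snippet below segments a grayscale image by intensity alone, giving every pixel of the same gray level the same label. This is only an assumed, simplified stand-in for the segmentation used later, written with NumPy; the function name is ours.

import numpy as np

def label_by_intensity(img):
    """Assign each pixel a label shared by all pixels of equal intensity.

    Returns (labels, intensities): `labels` has the image's shape, and
    intensities[k] is the gray level of every pixel labeled k.
    """
    intensities, labels = np.unique(img, return_inverse=True)
    return labels.reshape(img.shape), intensities

img = np.array([[0, 0, 128],
                [0, 255, 128]], dtype=np.uint8)
labels, intensities = label_by_intensity(img)
print(labels)       # [[0 0 1], [0 2 1]]
print(intensities)  # [  0 128 255]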

Compressing an image is significantly different from compressing raw binary data. Of course, general purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space, which means that lossy compression techniques can be used in this area. Lossless compression involves compressing data which, when decompressed, will be an exact replica of the original data. This is the case when binary data such as executables and documents are compressed: they need to be exactly reproduced when decompressed. For images, an approximation of the original image is enough for most purposes, as long as the error between the original and the compressed image is tolerable. Other methods present in the literature are hybrids of lossless and lossy techniques. Our method is simple and different from these existing ones: it applies lossless compression to the top intensities and lossy compression to the remaining ones. This considerably increases the compression ratio provided by Huffman encoding while adapting it to different bit rates. The histogram can also be applied on a per-pixel basis, where the resulting information is used to determine the most frequent color for the pixel location.

The rest of the paper is organized in four major parts. Section 1 deals with the performance evaluation of the compression method, Section 2 gives a detailed description of the compression algorithm, Section 3 discusses the results and comparison, and the final section draws a few conclusions regarding the method.

Error Metrics

Two of the error metrics used to compare the various image compression techniques are the Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR). The MSE is the cumulative squared error between the compressed and the original image, whereas PSNR is a measure of the peak error. The mathematical formulae for the two are

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^2

and

PSNR = 20 \cdot \log_{10} \left( 255 / \sqrt{MSE} \right)

where I(i,j) is the original image, K(i,j) is the approximated version, which is actually the decompressed image, and m, n are the dimensions of the images. A lower value for MSE means less error, and, as seen from the inverse relation between the MSE and PSNR, this translates to a high value of PSNR. Logically, a higher value of PSNR is good because it means that the ratio of Signal to Noise is higher.
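A small NumPy transcription of the two formulas above may make them concrete. The function names, and the choice of 255 as the peak value for 8-bit images, are ours.

import numpy as np

def mse(original, decompressed):
    """Mean squared error between two equally sized grayscale images."""
    I = np.asarray(original, dtype=np.float64)
    K = np.asarray(decompressed, dtype=np.float64)
    return np.mean((I - K) ** 2)

def psnr(original, decompressed, peak=255.0):
    """Peak signal to noise ratio in dB; `peak` is 255 for 8-bit images."""
    err = mse(original, decompressed)
    if err == 0:
        return float("inf")  # identical images: perfect reconstruction
    return 20.0 * np.log10(peak / np.sqrt(err))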

1. PERFORMANCE PARAMETER

COMPRESSION RATIO:-

Compression Ratio (CR) is defined as the ratio of the number of bits required to represent the original image to the number of bits required to represent the compressed image. The compression ratio shows how many times the image has been compressed:

CR = n_1 / n_2

where n_1 and n_2 represent the number of bits required for the original and the compressed image, respectively.

PEAK SIGNAL TO NOISE RATIO (PSNR):-

The PSNR is most commonly used as a measure of the quality of reconstruction of an image. The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression methods, PSNR is used as an approximation to human perception of reconstruction quality; therefore, in some cases one reconstruction may appear to be closer to the original than another even though it has a lower PSNR, but a higher PSNR would normally indicate that the reconstruction is of higher quality:

PSNR = 10 \cdot \log_{10} \left( MAX_I^2 / MSE \right)

Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255.

MEAN SQUARED ERROR (MSE):-

The MSE is the cumulative squared error between the compressed and the original image:

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^2
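Since CR is just a ratio of bit counts, a one-line helper suffices. This is a sketch with assumed inputs (the bit counts of the two representations), not tied to any particular file format.

def compression_ratio(n1_bits, n2_bits):
    """CR = n1 / n2: bits of the original over bits of the compressed image."""
    return n1_bits / n2_bits

# Example: a 256x256 8-bit image compressed to 131072 bits gives CR = 4.0.
print(compression_ratio(256 * 256 * 8, 131072))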

2. COMPRESSION TECHNIQUE

Perform the following steps on the image:

1] Calculate the histogram of the original image. The histogram gives intensity versus number-of-pixels data. This helps us in determining how many pixels belong to a particular intensity.

2] Find the intensity with the maximum number of pixels and store it; similarly find the intensity with the second highest number of pixels. This is repeated n times, yielding the set of the n most frequent intensities.

3] The original image is divided into two images: one consisting only of the set of highest intensities, the other consisting of the remaining intensities. We name the former image1 and the latter image2.

4] Apply Huffman encoding with image segmentation, as sketched below. High frequency colors are represented with high accuracy, as they contain the maximum information in the image.
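The following is a minimal end-to-end sketch of steps 1–4 as we read them: build the histogram, keep the n most frequent intensities, split the image into image1 (the frequent pixels, coded losslessly using build_huffman_codes from the earlier sketch) and image2 (the remaining pixels, here coarsely quantized as a stand-in for the lossy part). The split-and-quantize details and all names are assumptions for illustration, not the authors' exact procedure.

import numpy as np
# Reuses build_huffman_codes from the sketch in the introduction.

def compress(img, n=2, step=32):
    """Split `img` by its n most frequent intensities and encode.

    Step 1: histogram; step 2: n most frequent intensities;
    step 3: image1 = frequent pixels, image2 = the rest;
    step 4: Huffman-code image1 losslessly, quantize image2 (lossy).
    `step` is an assumed quantization step for the lossy part.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)   # step 1
    top = np.argsort(hist)[::-1][:n]                 # step 2
    mask = np.isin(img, top)                         # step 3
    image1 = img[mask]                               # frequent pixels
    image2 = img[~mask]                              # remaining pixels
    codes = build_huffman_codes(image1.tolist())     # step 4 (lossless)
    bitstream = "".join(codes[p] for p in image1.tolist())
    lossy = (image2 // step) * step + step // 2      # step 4 (lossy)
    return mask, bitstream, codes, lossy

rng = np.random.default_rng(0)
img = rng.choice([0, 0, 0, 128, 255], size=(8, 8)).astype(np.uint8)
mask, bits, codes, lossy = compress(img)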

Figure 1. Original image. Figure 2. Histogram of the original image (x-axis: intensity, 0-255; y-axis: number of pixels, up to 8000).

  3. RESULTS

Figure 3. Compressed image.

Figure 4. Relation between PSNR and CR (PSNR performance with compression ratio; x-axis: compression ratio, 0-25; y-axis: PSNR, 22-36). Figure 5. Compression Ratio v/s MSE (mean absolute error performance with compression ratio; x-axis: compression ratio, 0-25; y-axis: mean absolute error, 2-11).

Table 1. Compression ratio with PSNR

Compression ratio   PSNR
CR = 20%            23.8251
CR = 30%            26.2095
CR = 40%            28.0295
CR = 50%            30.0878
CR = 60%            33.2995
CR = 70%            34.3515

Table 2. Compression ratio with MSE

CR   MSE
5    4.864
10   3.215
15   2.973
20   2.468
25   2.096

We have presented a simple and fast method to compress image by modifying the existing Huffman technique using image segmentation and histogram information of image. The method makes the decision of what to compress in lossless manner and what in lossy based on the highest existing intensities in the image. Such intensities are given more importance than those occurring with less frequency. The number of intensities selected depends on the bit rate of the image. Therefore, images with different bits per pixel can be compressed with the proposed algorithm. This takes care of

different bit rates. The results show that as we increase the compression ratio, we get a higher PSNR. From the results we can also see that with a small compression ratio we get the maximum error, and as the CR increases the error becomes minimal, which means the compressed image is almost equal to the original image. The compression obtained is comparable to JPEG and is considerably more than plain Huffman encoding, and the method works with different types of images. A lower value for MSE means less error, and, as seen from the inverse relation between the MSE and PSNR, this translates to a high value of PSNR. Logically, a higher value of PSNR is good because it means that the ratio of Signal to Noise is higher. Here, the 'signal' is the original image, and the 'noise' is the error in reconstruction. So, if we find a compression scheme having a lower MSE and a higher PSNR, we can recognize that it is a better one.

