Compression and Decompression of Multi-View Images
Vandana
Assistant Professor, Department of Electronics & Communication Engineering, Vishveshwarya Group of Institutions, Dadri, India
Abstract
Image compression reduces the amount of data required to represent a digital image by removing data redundancies, and so helps to improve the performance of the transmission process. A multi-view image codec compresses and decompresses multi-view images; its aim is to provide higher compression with a reduced level of degradation in the quality of the decompressed image. The multi-view image codec proposed herein uses JPEG-standard Huffman coding for the lossless coding stage. JPEG, the first standard introduced for image compression, uses an 8×8 block-based DCT decomposition that relocates the highest energies to the upper left corner of each block, while the lower-energy information is relocated to the other areas. The evaluation is done on the basis of the number of bits per pixel and quality-assessment metrics such as MSE and PSNR. This method provides greater data compression than predictive methods, although at the expense of greater computation.
Keywords: DCT, DWT, JPEG, Huffman, image compression.
Introduction
In the recent past, multi-view imaging has become an active research area, with significant applications in stereoscopic displays, gaming, medical imaging, etc. A multi-view setup captures the same scene from different viewpoints with a set of synchronized or unsynchronized cameras. The main challenge in this technique is processing the huge amount of acquired data while, at the same time, achieving artifact-free rendering. The solution proposed here is image compression using the DCT and Huffman coding.
Image Compression
Image compression is a method through which we reduce the storage space required by an image by minimising the irrelevance and redundancy of the image data. The objective is to achieve a reasonable compression ratio as well as good quality of reproduction of the image with low power consumption. It plays a major role in diverse applications such as remote sensing, satellite imagery, video conferencing, the security industry, and medical imaging.
Three basic data redundancies can be categorized as follows:
- Coding redundancy, due to the use of fixed-length codewords.
- Interpixel redundancy, due to correlations between the pixels of an image [3].
- Psycho-visual redundancy, due to properties of the human visual system (visually non-essential information).
Multi-view Image Representation
Multiple views can come from different positions or angles in space (e.g., images captured by multiple cameras at the same time), from different time instants (e.g., satellite images captured at different times), or from different imaging modalities (e.g., CT, MRI, and acoustic images). Multi-view capture provides more information about an object, giving a higher recognition probability. These image sets contain intra-image as well as inter-image redundancy which, when properly exploited, can reduce the total amount of image data that has to be stored or transmitted [5]. The large amount of data that results from capturing two or more views of a scene to create its stereoscopic representation has to be efficiently compressed prior to storage or transmission.
DCT Technique
The Discrete Cosine Transform (DCT) is an example of transform coding. The DCT is similar to the discrete Fourier transform in that it transforms a signal or image from the spatial domain to the frequency domain, but it is fast to compute, its coefficients are all real numbers (unlike those of the Fourier transform), and it has a strong energy-compaction property [2].
DCT operates as follows:
- The image is broken into N×N blocks of pixels; here N = 8.
- f(i, j) is the intensity of the pixel in row i and column j.
- F(u, v) is the DCT coefficient in row u and column v of the DCT matrix.
- The DCT relocates the highest energies to the low frequencies, which appear in the upper left corner of each block.
- The lower right values represent the higher frequencies; the lower-energy information is relocated into these areas.
- These lower right values are small enough to be neglected with very little distortion, which achieves the compression [1].
The Inverse Discrete Cosine Transform (IDCT) is used in the decoder to recover the image from its transform representation.
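To make the transform concrete, here is a minimal Python sketch of the 8×8 forward and inverse DCT (the paper's simulation used MATLAB; SciPy's 1-D DCT applied along each axis and the JPEG-style level shift by 128 are assumptions here):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D DCT-II with orthonormal scaling: 1-D DCT along columns, then rows
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    # Inverse transform, used by the decoder
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 pixel block
F = dct2(block - 128)            # level-shift, then transform
reconstructed = idct2(F) + 128   # exact inverse before any quantization
assert np.allclose(reconstructed, block)
```

The round trip is exact: all loss in the codec comes from the quantization step described next.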
Quantization
Quantization is a lossy compression technique used to reduce the number of bits needed to represent the transformed coefficients. Truncating some of the coefficients does not affect the others; this truncation is the lossy step of the compression. For higher compression, the DCT coefficients are normalized by different scales according to the quantization matrix, and the quantized coefficients are then rearranged in a zigzag order.
Fig 1. Zigzag order
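The paper does not state its quantization matrix, so the sketch below assumes the standard JPEG luminance table, scaled by the quantization factor reported in the results, followed by the zigzag scan of Fig 1:

```python
import numpy as np

# Standard JPEG luminance quantization table (an assumption; the paper's
# exact matrix is not given)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def quantize(F, qf=0.5):
    # Divide each coefficient by its scaled table entry and round;
    # the rounding is where information is lost
    return np.round(F / (Q * qf)).astype(int)

def zigzag(block):
    # Walk the anti-diagonals of the 8x8 block, alternating direction,
    # so that low-frequency coefficients come first in the sequence
    h, w = block.shape
    out = []
    for s in range(h + w - 1):
        diag = [(i, s - i) for i in range(max(0, s - w + 1), min(s, h - 1) + 1)]
        if s % 2 == 0:
            diag.reverse()
        out.extend(block[i, j] for i, j in diag)
    return np.array(out)
```

A larger quantization factor divides by larger steps, zeroing more of the high-frequency (lower right) coefficients and lowering the bit rate, which is exactly the trend visible in Table 2.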
Entropy Coding
Entropy coding achieves lossless compression by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. The JPEG standard proposes two entropy coding methods: Huffman coding and arithmetic coding. Entropy coding can be considered a two-step process: the first step converts the zigzag sequence of quantized DCT coefficients into a sequence of symbols, and the second step converts the symbols into a data stream in which they no longer have externally identifiable boundaries [4]. Huffman coding requires that one or more sets of Huffman tables be specified; these tables are used to compress the image and are needed again to decompress it. Huffman coding is a lossless and efficient algorithm for source symbols that are not equally probable, and its codes contain the smallest possible average number of code symbols (e.g., bits) per source symbol (e.g., grey-level value). When combined with the reduction of interpixel redundancies by the Discrete Cosine Transform (DCT), Huffman coding therefore compresses the image data to a very good extent.
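As an illustration, here is a minimal sketch of Huffman table construction over an arbitrary symbol stream (JPEG's specific run-length/category symbol alphabet is not reproduced here):

```python
import heapq
from collections import Counter

def huffman_table(symbols):
    # Map each symbol to a prefix-free bit string; frequent symbols
    # get shorter codes, minimising the average code length
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate one-symbol stream
        return {next(iter(freq)): '0'}
    # Heap entries: (weight, tie-breaker, {symbol: code-so-far})
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)     # two least probable subtrees
        w2, _, right = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in left.items()}
        merged.update({s: '1' + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

symbols = [0, 0, 0, 0, 0, 1, 1, -2]             # e.g. quantized coefficients
table = huffman_table(symbols)
bitstream = ''.join(table[s] for s in symbols)  # decodable with the table
```

The table itself must travel with the compressed data, which is why the same tables are needed for decompression.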
An image compression system needs to have an encoding system and a decoding system.
Encoding
- Load the multiple images.
- For each image i:
  - Apply the DCT to each 8×8 block.
  - Quantize each block of DCT coefficients.
  - Scramble and perform entropy coding.
  - Form Stream_i for the i-th image.
- Attach all the streams together to form the compressed bitstream (a sketch follows the list).
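Putting the steps together, a sketch of the encoder loop, reusing dct2, quantize, zigzag and huffman_table from the earlier sketches (header fields such as N, m and n are omitted for brevity):

```python
def encode_images(images, qf=0.5):
    # One bitstring per view: 8x8 block DCT -> quantize -> zigzag -> Huffman
    streams = []
    for img in images:
        h, w = img.shape
        symbols = []
        for r in range(0, h - h % 8, 8):
            for c in range(0, w - w % 8, 8):
                F = dct2(img[r:r+8, c:c+8].astype(float) - 128)
                symbols.extend(zigzag(quantize(F, qf)))
        table = huffman_table(symbols)
        streams.append(''.join(table[s] for s in symbols))
    return streams  # attach these streams to form the compressed bitstream
```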
Decoding
- Read the compressed bitstream.
- Find the number of images (N) and the size (m×n) of each image in the stream.
- Perform entropy decoding.
- For each i = 1:N:
  - Read the next (m×n) bits.
  - Descramble and perform the IDCT on the 8×8 DCT blocks.
  - Recover the image.
- Perform benchmarking on the recovered images.
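The corresponding per-block decoding inverts the zigzag scan and the quantization before applying the IDCT (same assumptions as the earlier sketches; the rounding loss introduced by quantization cannot be undone):

```python
def inverse_zigzag(seq):
    # Place 64 zigzag-ordered values back into an 8x8 block
    order = zigzag(np.arange(64).reshape(8, 8)).astype(int)
    flat = np.zeros(64)
    flat[order] = seq        # seq[k] goes to the k-th visited position
    return flat.reshape(8, 8)

def decode_block(seq, qf=0.5):
    # De-quantize (multiply the scaled table back in), then inverse DCT
    F = inverse_zigzag(seq) * (Q * qf)
    return idct2(F) + 128    # undo the level shift
```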
Fig 2. Encoder: DCT transform → quantization → entropy (Huffman) coding → compressed bitstream
Fig 3. Decoder: compressed bitstream → Huffman decoding → de-quantization → IDCT transform
In the decoding process, the value of N must be the same as the one used in the encoding process. The decompressed images are not exact copies of the original images, but they closely resemble them.
Performance Evaluation
The performance of the codec is evaluated on the basis of rate-distortion criteria. Distortion, or loss of information, is measured by the mean squared error (MSE) between the reconstructed and the original images:
MSE = (1/N) · Σ_{i=1}^{N} MSE_i

where MSE_i is the MSE between the original and decoded i-th image.
The peak signal-to-noise ratio (in dB) is computed as

PSNR = 10 · log10(255² / MSE)
Bit rate (bpp) = (total number of bits in the compressed file(s)) / (total number of pixels in the N images)
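These three measures are straightforward to compute; here is a sketch matching the formulas above (255 is the peak value of an 8-bit image):

```python
import numpy as np

def mse(original, decoded):
    return np.mean((original.astype(float) - decoded.astype(float)) ** 2)

def psnr(original, decoded):
    m = mse(original, decoded)
    return float('inf') if m == 0 else 10 * np.log10(255.0 ** 2 / m)

def bit_rate(streams, images):
    # Total compressed bits over total pixels across all N images
    total_bits = sum(len(s) for s in streams)    # bitstrings from the encoder
    total_pixels = sum(img.size for img in images)
    return total_bits / total_pixels
```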
Simulation Results
For the simulation, I applied the DCT technique to all images of a multi-view image set [6]. The set contains 7 images in PGM format, and the simulation was carried out in MATLAB using an 8×8 block size. The reconstructed images are not exactly the same as the original images, but they closely resemble them.
Fig 4. Simulation results for Image 1 – Image 7 at quantization factor 0.5
Table 1. Bpp, MSE and PSNR for Image 1 – Image 7 at quantization factor 0.5

| Image | Quantization factor | MSE | PSNR (dB) | Bpp (bits per pixel) |
|---|---|---|---|---|
| 1 | 0.5 | 2.10E-02 | 64.903 | 5.6826 |
| 2 | 0.5 | 2.09E-02 | 64.9254 | 5.6927 |
| 3 | 0.5 | 1.59E+03 | 16.1075 | 5.6752 |
| 4 | 0.5 | 2.27E+03 | 14.5735 | 5.6998 |
| 5 | 0.5 | 2.74E+03 | 13.7593 | 5.6834 |
| 6 | 0.5 | 3.14E+03 | 13.1602 | 5.6816 |
| 7 | 0.5 | 3.54E+03 | 12.6376 | 5.6412 |

Table 2. Bpp, MSE and PSNR for Image 1 at different quantization factors

| Quantization factor | MSE | PSNR (dB) | Bpp (bits per pixel) |
|---|---|---|---|
| 0.2 | 0.0033 | 72.8988 | 6.9836 |
| 0.4 | 0.0134 | 66.8744 | 6.0013 |
| 0.6 | 0.03 | 63.3607 | 5.4142 |
| 0.7 | 0.0408 | 62.0189 | 5.2005 |
| 0.8 | 0.0534 | 60.8584 | 5.0214 |
| 0.9 | 0.0675 | 59.8365 | 4.8427 |
| 1.4 | 0.161 | 56.0615 | 4.2213 |
| 1.8 | 0.2619 | 53.9498 | 3.9107 |
Graph: PSNR vs. Bpp for Image 1 (PSNR in dB on the vertical axis, bits per pixel on the horizontal axis; the plotted points correspond to the rows of Table 2).
Conclusion
In this paper, I have demonstrated DCT-based multi-view image compression using the JPEG standard: the DCT reduces the interpixel redundancies, quantization reduces the psycho-visual redundancies, and the coding redundancy is reduced by entropy (Huffman) coding with optimal codewords of minimum average length. As the graph above shows, compression improves as the number of bits per pixel decreases, but the PSNR falls with it, meaning that the noise or interference in the image increases and the image quality degrades. There is therefore a trade-off between the PSNR (peak signal-to-noise ratio) and the Bpp (bits per pixel). For the multi-view image set, the average PSNR is 28.58 dB at an average of 5.68 bits per pixel. Higher compression leads to degraded image quality.
References
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, pp. 411-514, 2004.
[2] Andrew B. Watson (NASA Ames Research Center), "Image Compression Using the Discrete Cosine Transform", Mathematica Journal, 4(1), 1994, pp. 81-88.
[3] Swastik Das and Rasmi Ranjan Sethy, "Digital Image Compression using Discrete Cosine Transform and Discrete Wavelet Transform", B.Tech. Dissertation, NIT Rourkela, 2009.
[4] Kiran Bindu, Anita Ganpati and Aman Kumar Sharma, "A Comparative Study of Image Compression Algorithms", International Journal of Research in Computer Science, eISSN 2249-8265, Volume 2, Issue 5, 2012, pp. 37-42.
[5] H. Aydinoglu and M. H. Hayes, III, "Compression of multi-view images", Proceedings of the IEEE International Conference on Image Processing (ICIP-94), 1994.
[6] MATLAB Central website: http://www.mathworks.com/