- Open Access
- Total Downloads : 90
- Authors : Yashwant Deshmukh , B H Pansambal
- Paper ID : IJERTV7IS010048
- Volume & Issue : Volume 07, Issue 01 (January 2018)
- DOI : http://dx.doi.org/10.17577/IJERTV7IS010048
- Published (First Online): 10-01-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Comparative Analysis of Different Deblurring Techniques
Yashwant Deshmukh
M.E. (Signal Processing) BSCOER, Narhe
Pune, India.
Prof. B.H. Pansambal
Professor & Head, Electronics and Telecommunication BSCOER, Narhe
Pune, India
Abstract: Camera shake during exposure results in unpleasant image blur and is the reason many photographs are rejected. Typical blind deconvolution methods assume frequency-domain constraints on images, or extremely basic parametric forms for the motion trail during camera shake. The actual camera motion can follow a convoluted path, and a spatial-domain prior can preserve prominent image characteristics. A blurred image can be modelled as the convolution of a sharp image with a blur kernel, or point spread function (PSF). So, in order to recover the sharp image, we need to separate the image into its blur kernel and sharp image; the difficulty lies in estimating the blur kernel. Some methods assume a uniform camera blur over the image and negligible in-plane camera rotation. Estimating this unknown blur kernel is called blind deconvolution, and most deblurring techniques build on these concepts. A few methods use sensors to measure the blur kernel. This paper reviews contemporary techniques for blind and non-blind image deblurring.
The rest of the paper is organized as follows: Section 2 gives a literature review of selected deblurring techniques, discussing their strengths and weaknesses. Section 3 compares the different schemes against common criteria.
1. INTRODUCTION
Images are normally classified into two types: constrained-domain images and unconstrained-domain images. Constrained images are captured in an environment where the lighting is predetermined; there is no disturbance of illumination or pose, and the image can be easily recognized. In unconstrained-domain images nothing is predetermined: there can be illumination, pose, or other problems. Examples are images taken with a remote camera, or images of a moving object taken with a static camera. Such images can pose recognition problems, and before recognition the image must be restored. A blurred image is the convolution of a sharp image with a blur kernel, or PSF. So, in order to recover the sharp image, we need to separate the image into its blur kernel and sharp image; the real problem is the estimation of the blur kernel.
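The degradation model described above can be written as blurred = sharp * kernel (plus noise). A minimal 1-D sketch of this model in plain Python, illustrative only and not taken from any of the reviewed papers:

```python
def convolve_full(x, k):
    """Full discrete convolution: output length is len(x) + len(k) - 1."""
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

# A sharp 1-D "image" and a normalized 3-tap motion-blur kernel (PSF).
sharp = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
kernel = [1/3, 1/3, 1/3]

# The observed image: edges are smeared, but total intensity is preserved
# because the kernel sums to one.
blurred = convolve_full(sharp, kernel)
```

Deblurring is the inverse problem: given only `blurred` (non-blind) or neither `sharp` nor `kernel` (blind), recover `sharp`.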
2. LITERATURE REVIEW
This section describes approaches that have been used for deblurring, such as subspace analysis [1], general blind deconvolution methods [2], deconvolution with image statistics, deconvolution with noisy image pairs [4], local phase quantization, local ternary patterns, and set-theoretic characterization.
Deblurring Using Subspace Analysis
In this method a set of blurred images is available, from which additional knowledge can be gained. A feature space is constructed such that blurred faces with the same point spread function lie close together. In the training phase, a model of each point spread function (blur kernel) is learned in this feature space. To estimate the blur kernel of a query image, the image is compared with each model and the one closest to it is selected.
The query image is then deblurred using the blur kernel associated with that model, and recognition is performed on the result. In short, this algorithm infers the PSF using learned models of facial appearance variation under different blurs; the inferred PSF is then used to sharpen the image. The method has also been used to recognize textual characters and hand and body postures under blur. The disadvantage is that it may not work well for other objects of uniform texture, such as a plastic glass, and it is not a proven approach for images blurred by multiple unknown factors or by severe blur such as camera shake.
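A rough 1-D sketch of the train-then-match idea, with a scalar sharpness statistic standing in for the learned feature space (the kernel names, signals, and feature are all hypothetical simplifications, not the paper's actual subspace construction):

```python
def convolve_full(x, k):
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

def sharpness(sig):
    """Mean absolute first difference: a crude stand-in for the feature space."""
    return sum(abs(b - a) for a, b in zip(sig, sig[1:])) / (len(sig) - 1)

kernels = {"identity": [1.0], "box3": [1/3, 1/3, 1/3]}
training = [[0, 1, 0, 1, 0, 1, 0, 1],
            [0, 1, 0.1, 0.9, 0, 1, 0.1, 0.9]]

# Training phase: one model (here just a mean feature value) per PSF.
models = {name: sum(sharpness(convolve_full(s, k)) for s in training) / len(training)
          for name, k in kernels.items()}

# Query phase: a new signal arrives blurred by an unknown PSF;
# pick the model whose feature value is closest.
query = convolve_full([0, 1, 0, 1, 0.2, 0.8, 0, 1], kernels["box3"])
guess = min(models, key=lambda name: abs(sharpness(query) - models[name]))
```

The real method replaces the scalar feature with a learned subspace of facial appearance, but the classify-by-nearest-model structure is the same.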
Blind Image Deconvolution Method
There are basically two types of deconvolution methods: projection-based blind deconvolution and maximum likelihood restoration. The first approach concurrently restores the true image and the point spread function. Initial estimates of the true image and PSF are made, and the technique is cyclic in nature: first the PSF estimate is updated, then the image estimate, and this process is iterated until a preset convergence criterion is met. The advantage of this method is that it is robust to inaccuracies in the assumed support size and is not sensitive to noise. However, the solution is not unique, and the method can suffer from errors associated with local minima.
In the second approach, maximum likelihood estimates of parameters such as the PSF and covariance matrices are computed. As the PSF estimate is not unique, additional assumptions such as the size and symmetry of the PSF can be taken into account. The main advantages are low computational complexity and the ability to obtain the blur, the noise, and the power spectrum of the true image. The disadvantage is that the algorithm converges to local minima of the estimated cost function.
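The cyclic image/PSF alternation of the projection-based approach can be sketched in 1-D with plain gradient steps and a simple projection (PSF non-negative, summing to one). This is a toy illustration under those stated assumptions, not any paper's exact algorithm; step sizes and iteration counts are arbitrary:

```python
def convolve_full(x, k):
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

def loss(x, k, y):
    return sum((a - b) ** 2 for a, b in zip(convolve_full(x, k), y))

true_x = [0.0, 1.0, 2.0, 1.0, 0.0]
true_k = [0.25, 0.5, 0.25]
y = convolve_full(true_x, true_k)          # the observed blurred signal

x = [0.5] * len(true_x)                    # initial image estimate
k = [1.0 / 3] * len(true_k)                # initial PSF estimate
loss0 = loss(x, k, y)
for _ in range(300):
    # Image step: the gradient of ||k*x - y||^2 w.r.t. x is (up to a constant)
    # the correlation of the residual with the kernel.
    r = [a - b for a, b in zip(convolve_full(x, k), y)]
    x = [x[i] - 0.2 * sum(r[i + j] * k[j] for j in range(len(k)))
         for i in range(len(x))]
    # PSF step: the gradient w.r.t. k is the correlation of the residual with x.
    r = [a - b for a, b in zip(convolve_full(x, k), y)]
    k = [k[j] - 0.02 * sum(r[i + j] * x[i] for i in range(len(x)))
         for j in range(len(k))]
    # Projection step: keep the PSF non-negative and normalized to sum one.
    k = [max(v, 0.0) for v in k]
    k = [v / sum(k) for v in k]
```

The projection enforces the physical constraints on the PSF; without it, the inherent image/kernel scale ambiguity lets the two estimates drift.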
Deblurring with a Blur Estimation Algorithm
In general, focal blur is modelled as Gaussian low-pass filtering, so the problem of blur estimation reduces to estimating the radius of the Gaussian blur kernel. In this method the blurred input image is first re-blurred with Gaussian kernels of different radii. The variation ratios between the different re-blurred images are then used to determine the unknown blur radius. Using an edge model, it can be shown that the blur radius can be computed directly from the difference ratio and is independent of edge amplitude and position; the maximum of the variation ratio occurs at edge positions. The advantages are robust estimation in areas with many neighbouring edges, and no need for edge detection with respect to position and angle.
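A simplified 1-D variant of the re-blurring idea: blur a step edge with two known radii and invert a closed-form ratio to recover the unknown radius. Here the ratio of peak gradient magnitudes is used rather than the paper's exact difference-ratio formulation, so this is an illustrative assumption, not the published estimator:

```python
import math

def gaussian_kernel(sigma):
    radius = max(1, int(4 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, sigma):
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(k):
            idx = min(max(n + j - r, 0), len(signal) - 1)  # replicate the borders
            acc += kv * signal[idx]
        out.append(acc)
    return out

def max_gradient(signal):
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

edge = [0.0] * 40 + [1.0] * 40       # an ideal step edge
sigma_true = 2.0
observed = blur(edge, sigma_true)    # the input, blurred by an unknown radius

# Re-blur with two known radii; Gaussian blurs compose by adding variances.
sa, sb = 1.0, 3.0
r = max_gradient(blur(observed, sa)) / max_gradient(blur(observed, sb))
# For a step edge:  r = sqrt(sigma^2 + sb^2) / sqrt(sigma^2 + sa^2),
# which solves to:
sigma_est = math.sqrt((sb * sb - r * r * sa * sa) / (r * r - 1.0))
```

As the edge model predicts, the estimate depends only on the ratio, not on the edge amplitude or position.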
Deblurring with Noisy Image Pairs
In this approach the image is deblurred with the help of a noisy image of the same scene. As a first step, the blurred and noisy images are used together to find an accurate blur kernel, which is often very hard to obtain from a single image. Next, a residual deconvolution is performed, which reduces the ringing artifacts that are common in image deconvolution. As the third and final step, the artifacts remaining in smooth regions are suppressed by a gain-controlled deconvolution process. The main advantage of this approach is that it combines the blurred and noisy images to produce a high-quality reconstruction: an iterative algorithm estimates the initial kernel and progressively reduces deconvolution artifacts, and no special hardware is required. The drawback is the assumption of a spatially invariant point spread function.
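The first step, kernel estimation from a sharp/blurred pair, becomes a linear least-squares problem once a (denoised) sharp estimate is available, because the blurred image is linear in the kernel. A noise-free 1-D sketch via the normal equations follows; the actual paper uses an iterative, regularized kernel estimate, so this direct solve is a simplifying assumption:

```python
def convolve_full(x, k):
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for cc in range(c, n + 1):
                M[r][cc] -= f * M[c][cc]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (M[r][n] - sum(M[r][c] * out[c] for c in range(r + 1, n))) / M[r][r]
    return out

sharp = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0]  # stands in for the denoised noisy image
true_k = [0.2, 0.5, 0.3]
blurred = convolve_full(sharp, true_k)   # stands in for the blurred image

# Normal equations (A^T A) k = A^T y, where column j of A is `sharp`
# shifted by j samples.
m, N = len(true_k), len(blurred)
col = lambda j: [sharp[n - j] if 0 <= n - j < len(sharp) else 0.0 for n in range(N)]
AtA = [[sum(a * b for a, b in zip(col(i), col(j))) for j in range(m)] for i in range(m)]
Aty = [sum(a * b for a, b in zip(col(i), blurred)) for i in range(m)]
k_est = solve_linear(AtA, Aty)
```

With no noise the kernel is recovered exactly; with real noisy/blurred pairs the same system is solved iteratively with regularization.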
Removing Blur with Image Statistics
In most cases a single blur kernel is used to deblur a blurred image. A serious problem arises when an image contains motion in different directions, because different kernels must then be considered. In this approach the image is segmented and each segment is treated separately. The key observation is that the statistics of image derivatives change considerably under different blur kernels. The algorithm searches for a mixture model that best explains the derivative distribution observed in the image; this yields two blur kernels, and the likelihood is then maximized under a smooth layer-assignment assumption. The output is a real-world image with rich texture. The approach also has limitations, however, such as the restriction to box filters, the unknown direction of the blur, and the inability to determine the blur size; blur patterns in real images can also be much more complex. Performance could be improved by using richer features instead of simple derivatives.
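The statistical cue this method relies on is easy to verify: blurring shrinks the distribution of image derivatives. A small sketch with a deterministic textured signal and a horizontal box blur (the signal and blur width are arbitrary choices for illustration):

```python
def derivatives(sig):
    return [b - a for a, b in zip(sig, sig[1:])]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def box_blur(sig, width):
    """Causal box average, as produced by horizontal motion blur."""
    out = []
    for n in range(len(sig)):
        window = sig[max(0, n - width + 1):n + 1]
        out.append(sum(window) / len(window))
    return out

# A textured 1-D signal (deterministic pseudo-random pattern in [0, 1]).
sharp = [((7 * i * i + 3 * i) % 11) / 10.0 for i in range(100)]
blurred = box_blur(sharp, 5)

# Blur suppresses high frequencies, so the derivative variance drops sharply;
# the mixture model exploits exactly this difference per kernel.
v_sharp = variance(derivatives(sharp))
v_blur = variance(derivatives(blurred))
```

Segments blurred by different kernels therefore exhibit measurably different derivative histograms, which is what the mixture fit discriminates.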
Deblurring with Local Ternary Patterns
Local ternary patterns (LTP) are an extension of LBP features and are likewise invariant to small misalignments of pixels. The method has three main stages. First, to eliminate the effects of illumination, a preprocessing chain is applied that preserves the essential features required for face recognition. Then the local ternary pattern is computed, which is less sensitive to blur effects. It can also be shown that a similarity-based local distance transform performs better than local histogramming. Compared with other approaches such as Multiscale Retinex (MSR) [10] and Logarithmic Total Variation (LTV) [8], this method proves much better. So far it has not been combined with subspace analysis, which could be incorporated to improve its performance further.
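A minimal local ternary pattern for one 3x3 neighbourhood, following the usual LTP construction in which the ternary code is split into an "upper" and a "lower" binary pattern (the threshold and sample patch are illustrative choices):

```python
def ltp_codes(patch, t=0.1):
    """Local ternary pattern of a 3x3 patch's centre pixel.
    Neighbours within +/- t of the centre code to 0; brighter codes to +1
    (upper pattern) and darker to -1 (lower pattern)."""
    c = patch[1][1]
    # The 8 neighbours in clockwise order starting at the top-left.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = lower = 0
    for bit, (row, col) in enumerate(order):
        d = patch[row][col] - c
        if d > t:
            upper |= 1 << bit
        elif d < -t:
            lower |= 1 << bit
    return upper, lower

patch = [[0.9, 0.5, 0.1],
         [0.5, 0.5, 0.5],
         [0.1, 0.48, 0.9]]
up, lo = ltp_codes(patch)
```

The dead zone of width 2t around the centre value is what makes LTP less sensitive than LBP to the small intensity perturbations that noise and mild blur introduce.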
Deblurring Using Local Phase Quantization
Phase is a property of images that is invariant to blur, and the local phase quantization (LPQ) method exploits this property. Like the local binary pattern used for recognition, a histogram of the local phase quantization codes can be used as a descriptor. It is simple to implement and fast to execute. The remaining challenge is varying lighting conditions, but this can be eliminated to a great extent by illumination normalization. Since only phase information is used, the descriptor is unaffected by the magnitude changes that blur introduces. The accuracy of this method is found to be much higher than that of LBP, even on images whose textures are not blurred.
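The invariance can be demonstrated directly: a centrally symmetric blur has a real-valued transfer function, so wherever that transfer function is positive, Fourier phase, and hence the quantized sign bits, is unchanged. A 1-D sketch with a single DFT bin standing in for LPQ's local short-term Fourier transform (signal, kernel, and frequency are illustrative assumptions):

```python
import cmath
import math

def dft_bin(sig, u):
    """One DFT coefficient of a length-N signal."""
    N = len(sig)
    return sum(s * cmath.exp(-2j * math.pi * u * n / N) for n, s in enumerate(sig))

def circular_blur(sig):
    """Circular convolution with the symmetric kernel [0.25, 0.5, 0.25]."""
    N = len(sig)
    return [0.25 * sig[n - 1] + 0.5 * sig[n] + 0.25 * sig[(n + 1) % N]
            for n in range(N)]

def lpq_bits(z):
    """LPQ-style 2-bit code from the signs of the real and imaginary parts."""
    return (1 if z.real >= 0 else 0) | ((1 if z.imag >= 0 else 0) << 1)

sig = [3.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.5, 0.0]
blurred = circular_blur(sig)

u = 1  # a low frequency, where the symmetric kernel's transfer function is positive
code_sharp = lpq_bits(dft_bin(sig, u))
code_blur = lpq_bits(dft_bin(blurred, u))
```

The magnitude of the bin shrinks under blur, but the sign bits, and therefore the LPQ code, stay the same; full LPQ repeats this over local windows and several low frequencies to build an 8-bit code per pixel.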
Face Recognition with a Set-Theoretic Method
In the set-theoretic approach, both blur and illumination are taken into account. Instead of performing blind deconvolution as such, the different characteristics of the blur are modelled directly, and the set of all blurred versions of an image is treated as a convex set. Using the Direct Recognition of Blurred Faces algorithm, recognition proceeds without explicit deblurring: each sharp gallery image is blurred with a set of blur kernels under different conditions, the distances between these artificially blurred images and the probe image are computed, and the gallery image with the minimum distance is taken as the match. Illumination is then handled by incorporating illumination coefficients, computed for the image on different planes, into the algorithm, so that blur and illumination problems are removed together. The method is easy to implement, is not complex, and returns much better results than the previous approaches. An L1-norm distance is used to make the algorithm robust to pixel misalignments.
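The blur-the-gallery-and-match idea can be sketched in 1-D; the gallery names, signals, and candidate kernel set below are hypothetical stand-ins for the gallery faces and the modelled blur conditions:

```python
def convolve_full(x, k):
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

def l1(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

gallery = {"alice": [0.0, 1.0, 2.0, 1.0, 0.0, 0.0],
           "bob":   [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]}
# The candidate blur kernels spanning the assumed blur conditions.
kernels = [[1.0], [0.5, 0.5], [1/3, 1/3, 1/3]]

def identify(probe):
    """Blur every gallery entry with every kernel; return the identity
    of the artificially blurred image closest to the probe in L1 distance."""
    best, best_d = None, float("inf")
    for name, sharp in gallery.items():
        for k in kernels:
            cand = convolve_full(sharp, k)
            cand = (cand + [0.0] * len(probe))[:len(probe)]  # pad/trim to probe size
            d = l1(cand, probe)
            if d < best_d:
                best, best_d = name, d
    return best

# Probe: "alice" photographed under an unknown 2-tap motion blur.
probe = convolve_full(gallery["alice"], [0.5, 0.5])
```

Because the probe is compared against blurred gallery images rather than deblurred, there is no ill-posed deconvolution step at all, which is the main source of the method's robustness.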
3. COMPARISON OF DIFFERENT DEBLURRING TECHNIQUES
Different methods have been discussed so far. For a clear picture, see Table 1.
TABLE 1. COMPARISON OF DEBLURRING TECHNIQUES

| Method | Accuracy | Robustness to Different Types of Blur |
| --- | --- | --- |
| Subspace Analysis | Medium | Low |
| Blind Image Deconvolution | Medium | Medium |
| Image Statistics | High | Medium |
| Local Phase Quantization | High | Medium |
| Set-Theoretic Approach | High | High |
Most deblurring approaches use the common technique of blind image deconvolution, in which the unknown blur kernel is roughly estimated and recognition is performed on that basis. Subspace analysis [1] can easily recognize images under different texture blurs, but it does not handle images with uniform textures. Blind image deconvolution is probabilistic in character, but if the blur kernel is found correctly it is one of the most reliable techniques.
Other techniques use descriptors such as local binary patterns, local ternary patterns, and local phase quantization; these methods have the advantage of being robust to misalignments in pixel values.
4. CONCLUSION
From the above analysis we can see that although subspace analysis [1] and blind image deconvolution [2] produce results to some extent, they are prone to errors and are more or less probabilistic methods. The local phase quantization technique is accurate but not robust to different types of blur, and lighting problems can make the deblurring difficult. The set-theoretic approach is more accurate: different blur conditions are modelled directly, which makes the method much less complex than the other approaches.
ACKNOWLEDGMENT
I extend my grateful acknowledgment to all the authors whose work helped in the preparation of this paper. I would like to thank all my friends and well-wishers whose valuable suggestions and encouragement supported this research. Above all, I am thankful to the Almighty for the successful completion of my work.
REFERENCES
[1] Nishiyama, M., Hadid, A., Takeshima, H., Shotton, J., Kozakaya, T. and Yamaguchi, O., 2011. "Facial deblur inference using subspace analysis for recognition of blurred faces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4.
[2] Kundur, D. and Hatzinakos, D. "Blind image deconvolution revisited."
[3] Hu, H. and de Haan, G., 2006. "Low cost robust blur estimator," Proc. IEEE Int'l Conf. Image Processing, pp. 617-620.
[4] Yuan, L., Sun, J., Quan, L. and Shum, H.-Y., 2007. "Image deblurring with blurred/noisy image pairs," ACM Trans. Graphics, vol. 26, no. 3.
[5] Levin, A., 2006. "Blind motion deblurring using image statistics," Proc. Adv. Neural Inform. Process. Syst. Conf., pp. 841-848.
[6] Tan, X. and Triggs, B., 2007. "Enhanced local texture feature sets for face recognition under difficult lighting conditions," AMFG 2007, LNCS 4778, pp. 168-182.
[7] Ojansivu, V. and Heikkilä, J., 2008. "Blur insensitive texture classification using local phase quantization," Proc. 3rd Int. Conf. Image Signal Process., pp. 236-243.
[8] Chen, T., Yin, W., Zhou, X., Comaniciu, D. and Huang, T., 2006. "Total variation models for variable lighting face recognition," IEEE TPAMI, 28(9), pp. 1519-1524.
[9] Vageeswaran, P., Mitra, K. and Chellappa, R., 2013. "Blur and illumination robust face recognition via set-theoretic characterization," IEEE Transactions on Image Processing, vol. 22, no. 4.
[10] Jobson, D., Rahman, Z. and Woodell, G., 1997. "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE TIP, 6(7), pp. 965-976.