Single Image Super Resolution

DOI : 10.17577/IJERTV1IS10412


Patel Shreyas#1, Baxi Aatha #2

#1 Master in Computer Science & Engineering, Parul Institute of Technology, Vadodara, Gujarat, India.

#2 Department of Computer Science & Engineering, Parul Institute of Engineering & Technology, Vadodara, Gujarat, India

Abstract: Super-resolution (SR) reconstruction methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. The computationally inexpensive method discussed is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods. SR approaches reconstruct a single higher-resolution image from a set of given lower-resolution images. For the reconstruction stage, an SR reconstruction model composed of an L1-norm data fidelity term and total variation (TV) regularization is defined, with its reconstruction objective function being efficiently solved by the steepest descent method. Other SR methods can easily be incorporated into the proposed framework as well. Specifically, the SR computations for multi-view images and computation in the temporal domain are discussed.

Keywords: Super-resolution imaging, resolution enhancement, regularization, robust estimation, super resolution, total variation (TV).

  1. INTRODUCTION

Super resolution is the process of combining a sequence of low-resolution (LR) noisy blurred images to produce a higher-resolution image or sequence. The multiframe super-resolution problem was first addressed in [1], where the authors proposed a frequency-domain approach, later extended by others, such as [2]. Although the frequency-domain methods are intuitively simple and computationally cheap, they are extremely sensitive to model errors [3], limiting their use. Also, by definition, only pure translational motion can be treated with such tools, and even small deviations from translational motion significantly degrade performance.

Another popular class of methods solves the problem of resolution enhancement in the spatial domain. Non-iterative spatial-domain data fusion approaches were proposed in [4]-[6]. The iterative back-projection method was developed in papers such as [7] and [8]. In [9], the authors suggested a method based on the multichannel sampling theorem. In [10], a hybrid method combining the simplicity of ML with proper prior information was suggested. The spatial-domain methods discussed so far are generally computationally expensive. The authors in [11] introduced a block-circulant preconditioner for solving the Tikhonov-regularized super-resolution problem formulated in [10], and addressed the calculation of the regularization factor for the under-determined case by generalized cross-validation in [12]. Later, a very fast super-resolution algorithm for pure translational motion and common space-invariant blur was developed in [5]. Another fast spatial-domain method was recently suggested in [13], where LR images are registered with respect to a reference frame, defining a nonuniformly spaced high-resolution (HR) grid. Then, an interpolation method based on Delaunay triangulation is used to create a noisy and blurred HR image, which is subsequently deblurred. All of the above methods assume an additive Gaussian noise model.

This paper is organized as follows. Section II explains the observation model underlying image reconstruction. Section III reviews the various reconstruction methods. Section IV relates these methods to one another. Section V presents the proposed scheme, and the final section concludes this paper.

  2. OBSERVATION MODEL FOR SUPER-RESOLUTION IMAGE

As depicted in Fig. 1, the image acquisition process is modeled by the following four operations: (i) geometric transformation, (ii) blurring, (iii) down-sampling by a factor of q1 × q2, and (iv) addition of white Gaussian noise. Note that the geometric transformation includes translation, rotation, and scaling. Various blurs (such as motion blur and out-of-focus blur) are usually modeled by convolving the image with a low-pass filter, described by a point spread function (PSF). The given image (say, with a size of M1 × M2) is considered the high-resolution ground truth, which is to be compared with the high-resolution image reconstructed from a set of low-resolution images (say, with a size of L1 × L2 each; that is, L1 = M1/q1 and L2 = M2/q2) for performance evaluation. To summarize mathematically,

    y(k) = D(k)P(k)W(k)X + V(k), (1)

    = H(k)X + V(k), (2)

where y(k) and X denote the k-th L1 × L2 low-resolution image and the original M1 × M2 high-resolution image, respectively, for k = 1, 2, …, N, where N is the number of observed low-resolution images. Both y(k) and X are represented in lexicographic-ordered vector form, with sizes of L1L2 × 1 and M1M2 × 1, respectively; each L1 × L2 image is transformed (i.e., lexicographically ordered) into an L1L2 × 1 column vector by ordering the image row by row. D(k) is the decimation matrix of size L1L2 × M1M2, P(k) is the blurring matrix of size M1M2 × M1M2, and W(k) is the warping matrix of size M1M2 × M1M2. Consequently, these three operations can be combined into one transform matrix H(k) = D(k)P(k)W(k) of size L1L2 × M1M2. Lastly, V(k) is an L1L2 × 1 vector representing the white Gaussian noise encountered during the image acquisition process; V(k) is assumed to be independent of X.

Fig. 1 The observation model, establishing the relationship between the original high-resolution image and the observed low-resolution images. The observed low-resolution images are warped, blurred, down-sampled and noisy versions of the original high-resolution image.

Over a period of time, one can capture a set of (say, N) observations. With such an establishment, the goal of SR image reconstruction is to produce one high-resolution image X based on these observations. It is important to note that there is another observation model commonly used in the literature (e.g., [34]-[37]). The only difference is that the order of the warping and blurring operations is reversed; that is, y(k) = D(k)W(k)P(k)X + V(k). When the imaging blur is spatio-temporally invariant and only global translational motion is involved among the multiple observed low-resolution images, the blur matrix P(k) and the motion matrix W(k) commute, and the two models coincide. However, when the imaging blur is spatio-temporally variant, it is more appropriate to use the second model. The choice of mathematical model for formulating the SR computation should coincide with the imaging physics (i.e., the physical process by which low-resolution images are captured from the original high-resolution scene).
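To make the observation model concrete, the following sketch simulates one low-resolution observation y(k) from a high-resolution image X. It assumes NumPy/SciPy, a simple global translational warp, and a Gaussian PSF; these operator choices and the function name are illustrative assumptions, not the only forms admitted by the model.

```python
# A minimal sketch of Eq. (1): y(k) = D(k) P(k) W(k) X + V(k).
# The warp, blur and decimation operators below are illustrative choices.
import numpy as np
from scipy.ndimage import shift, gaussian_filter

def simulate_lr_observation(x_hr, dx=0.5, dy=0.5, blur_sigma=1.0, q=2, noise_std=2.0):
    """Generate one LR observation from the HR image x_hr."""
    warped = shift(x_hr, (dy, dx), order=3, mode='reflect')     # W(k): global translational warp
    blurred = gaussian_filter(warped, sigma=blur_sigma)         # P(k): PSF modeled as a Gaussian blur
    decimated = blurred[::q, ::q]                               # D(k): down-sampling by factor q per dimension
    noise = np.random.normal(0.0, noise_std, decimated.shape)   # V(k): additive white Gaussian noise
    return decimated + noise
```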

  3. SUPER-RESOLUTION IMAGE RECONSTRUCTION

The generation of a low-resolution image can be modeled as a combination of smoothing and down-sampling of a natural scene by a low-quality sensor. Super resolution is the inverse problem of this generation process. One criterion for solving this inverse problem is minimizing the reconstruction error. Various methods have been proposed in the literature to deal with this inverse problem; the following subsections categorize the different SR methods available in existing work.

      1. Interpolation Methods

Image interpolation is the process of converting an image from one resolution to another. The process is typically performed on a one-dimensional basis, row by row and then column by column. Image interpolation estimates the intermediate pixels between the known pixels by using different interpolation kernels.

        • Nearest Neighbor Interpolation

Nearest neighbor interpolation is the simplest interpolation from the computational point of view. Each output interpolated pixel is assigned the value of the nearest sample point in the input image [2]. This process merely displaces intensities from the reference grid to the interpolated one, so it does not change the histogram. It preserves sharpness and does not produce a blurring effect, but it does produce aliasing.

        • Bi-linear Interpolation

In bilinear interpolation, the intensity at a point is determined from a weighted sum of the intensities of the four pixels closest to it. Because it changes intensities, the histogram also changes. It slightly smoothes the image but does not create an aliasing effect.

        • Bi-cubic Interpolation

In bicubic interpolation, the intensity at a point is estimated from the intensities of the 16 pixels closest to it. The bicubic basis function gives a smoother image but is computationally demanding.

        • B-spline Interpolation

Spline interpolation is the form of interpolation where the interpolant is a special piecewise polynomial called a spline. A whole family of basis functions can be used for this interpolation [2]. Higher-order interpolation is most useful when an image requires many rotations and distortions in separate steps; for a single-step enhancement it mainly increases processing time.

        • Hybrid Approach of Interpolation

In 2008, H. Aftab et al. [3] proposed a hybrid interpolation method in which interpolation at edges is carried out using a covariance-based method and interpolation in smooth areas is done using an iterative curvature-based method. After edges and smooth areas are identified from neighborhood pixel information, edges are interpolated with the covariance-based method, where the covariance coefficients of the HR image are obtained from the covariance parameters of the LR image. In smooth areas, curvature-based interpolation is carried out by first performing bilinear interpolation along the direction where the second derivative is lower; in the diagonal case, the difference between the diagonals is calculated and bilinear interpolation is applied along the direction where the intensity difference is smaller. This method has significant advantages in terms of processing time, peak signal-to-noise ratio and visual quality compared to bilinear, bicubic and nearest-neighbor interpolation.
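For a quick feel of how the basic kernels above behave, the sketch below upscales the same array with three spline orders in SciPy, corresponding to nearest-neighbor, bilinear and cubic-spline interpolation; the cubic spline stands in for bicubic here, and the random array is only a placeholder for a real LR image.

```python
# Comparing basic interpolation kernels via scipy.ndimage.zoom (illustrative).
import numpy as np
from scipy.ndimage import zoom

lr = np.random.rand(64, 64)          # placeholder for a low-resolution image
scale = 2                            # upscaling factor

nearest  = zoom(lr, scale, order=0)  # nearest neighbor: sharp but aliased, histogram preserved
bilinear = zoom(lr, scale, order=1)  # bilinear: slight smoothing, no aliasing
bicubic  = zoom(lr, scale, order=3)  # cubic spline (bicubic-like): smoother, more computation
```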

2. Iterative Back-Projection Algorithm

In this algorithm [1]-[3], the back-projection error is used to construct the super-resolution image. The HR image is estimated by back-projecting the error between the simulated LR image and the captured LR image. This process is repeated several times to minimize the cost function, with each step refining the HR estimate by back-projecting the error. The main advantages of this method are that it converges rapidly, has low complexity, and requires only a small number of iterations. Recently, a number of improvements incorporating different edge-preserving mechanisms have been combined with this approach.
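A minimal sketch of this iteration is given below, assuming a Gaussian blur plus decimation as the LR simulation operator and bicubic upscaling of the error as the back-projection step; these are illustrative choices rather than the exact kernels of [1]-[3].

```python
# Iterative back-projection sketch: refine the HR estimate with the LR simulation error.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ibp_super_resolve(lr, q=2, blur_sigma=1.0, iters=20, step=1.0):
    hr = zoom(lr, q, order=3)                                      # initial HR guess (bicubic upscale)
    for _ in range(iters):
        simulated_lr = gaussian_filter(hr, blur_sigma)[::q, ::q]   # simulate LR from the current HR estimate
        error = lr - simulated_lr                                   # error between captured and simulated LR
        hr = hr + step * zoom(gaussian_filter(error, blur_sigma), q, order=3)  # back-project the error
    return hr
```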

      3. Robust Learning-Based Super-Resolution

This algorithm [5] synthesizes a high-resolution image based on learning patch pairs of low- and high-resolution images. However, since a low-resolution patch is usually mapped to multiple high-resolution patches, unwanted artifacts or blurring can appear in super-resolved images. The authors of [5] therefore propose an approach to generate a high-quality, high-resolution image without introducing noticeable artifacts: by introducing robust statistics into learning-based super-resolution, outliers that cause artifacts are efficiently rejected. Global and local constraints are also applied to produce a more reliable high-resolution image. Learning-based super-resolution algorithms are generally known to provide HR images of high quality; however, their practical problem is the one-to-multiple mapping of an LR patch to HR patches, which results in image quality degradation.
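As a toy illustration of the robust-statistics idea (a stand-in for, not the exact machinery of, [5]), one can combine the multiple HR candidate patches mapped from a single LR patch with a pixel-wise median instead of a mean, so that outlier candidates do not introduce artifacts.

```python
# Robust combination of candidate HR patches (illustrative stand-in for [5]).
import numpy as np

def robust_combine(candidate_hr_patches):
    """candidate_hr_patches: array of shape (n_candidates, h, w)."""
    return np.median(candidate_hr_patches, axis=0)   # pixel-wise median rejects outlier candidates
```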

      4. An Efficient Example-Based Approach for Image Super-Resolution

This algorithm [6], [7] uses a learning method to construct the super-resolution image. Its main contributions are: (1) a class-specific predictor is designed for each class in the example-based super-resolution algorithm, which improves performance in terms of visual quality and computational cost; and (2) different types of training set are investigated so that a more effective training set can be obtained. The classification is performed using vector quantization (VQ), and then a simple and accurate predictor for each category, i.e. a class-specific predictor, can be trained easily using the example patch pairs of that particular category. These class-specific predictors are used to estimate, and then to reconstruct, the high-frequency components of an HR image. Hence, having classified an LR patch into one of the categories, the high-frequency content can be predicted without searching a large set of LR-HR patch pairs.
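A compact sketch of this pipeline is given below, with k-means standing in for the VQ codebook and a linear least-squares map standing in for the class-specific predictor; the cluster count and predictor form are illustrative assumptions, not the exact design of [6], [7].

```python
# Example-based SR with class-specific predictors (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def train_class_specific_predictors(lr_patches, hr_patches, n_classes=32):
    """Cluster LR patches (VQ stand-in) and fit one linear predictor per class."""
    km = KMeans(n_clusters=n_classes, n_init=10).fit(lr_patches)
    predictors = {}
    for c in range(n_classes):
        idx = km.labels_ == c
        A = np.hstack([lr_patches[idx], np.ones((idx.sum(), 1))])  # add a bias column
        W, *_ = np.linalg.lstsq(A, hr_patches[idx], rcond=None)    # class-specific linear map
        predictors[c] = W
    return km, predictors

def predict_hr_patch(lr_patch, km, predictors):
    """Classify an LR patch, then apply its class-specific predictor."""
    c = km.predict(lr_patch.reshape(1, -1))[0]
    return np.append(lr_patch, 1.0) @ predictors[c]
```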

      5. Learning Based Super Resolution using Directionlets

In this algorithm [9], an example-based method using directionlets (a skewed anisotropic wavelet transform) is used to generate the high-resolution image. Directionlets perform scaling and filtering along a selected pair of directions, not necessarily horizontal and vertical as in the standard wavelet transform. The training set is generated by subdividing HR and LR images into patches of size 8×8 and 4×4, respectively. The best pair of directions, chosen from five candidate sets [(0,90), (0,45), (0,-45), (90,-45), (90,45)], is then assigned to each patch pair, and the patches are grouped according to direction, which reduces the searching time. The input LR image is contrast-normalized and then subdivided into 4×4 patches. Each patch is decomposed into eight bands using directionlets. The directional coefficients of six bands (HL, HH, VL, VH, DL, DH) are learned from the training set, with the minimum absolute difference (MAD) criterion used to select the directionlet coefficients. For the AL and AH bands, the cubic-interpolated LR image is used. The learned coefficients are used to obtain the SR image by taking the inverse directionlet transform, and finally the contrast normalization is undone. A simple wavelet transform, which is isotropic and does not follow edges, produces artifacts that are avoided in this case.
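The grouping step can be illustrated with the sketch below, which buckets training patches by a direction label so that only the matching bucket is searched at synthesis time; here the dominant gradient orientation is used as a crude stand-in for the directionlet direction-pair assignment of [9].

```python
# Grouping patches by direction to shorten the search (gradient orientation as a stand-in).
import numpy as np

DIRECTION_PAIRS = [(0, 90), (0, 45), (0, -45), (90, -45), (90, 45)]

def angular_distance(a, b):
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def direction_label(patch):
    """Pick the direction pair closest to the patch's dominant gradient orientation."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.degrees(np.arctan2(gy.sum(), gx.sum()))
    dists = [min(angular_distance(angle, a), angular_distance(angle, b)) for a, b in DIRECTION_PAIRS]
    return int(np.argmin(dists))

def group_patches(patches):
    """Bucket patches by direction label so searches stay within one bucket."""
    buckets = {i: [] for i in range(len(DIRECTION_PAIRS))}
    for p in patches:
        buckets[direction_label(p)].append(p)
    return buckets
```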

  4. RELATION TO OTHER METHODS

Since this survey paper points toward a new approach to the super-resolution restoration problem, it is appropriate to relate this approach to the methods already known in the literature. In the sequel, we present a brief description of each of the existing methods in light of the new results. The main known methods for super-resolution restoration are the IBP method [31]-[33], the frequency-domain approach [24]-[26], the POCS approach [34], [35], and the MAP approach [37]. This section relates these methods to the proposed single-image super-resolution approach with edge preservation.

    1. The IBP Method

The IBP method [31]-[33] is an iterative algorithm that projects the temporary result onto the measurements, simulating them in this way. The resulting simulation error is used to update the temporary result. Applying this reasoning to the observation model in (2), and denoting the temporary result at the n-th step by Xn, the simulated measurements are H(k)Xn. The update equation proposed in the IBP method [31]-[33] is given in scalar form, but when put in matrix notation it becomes Xn+1 = Xn + Σk Q(k)[y(k) − H(k)Xn], where the Q(k) are error relaxation matrices to be chosen. This configuration is a simple error relaxation algorithm (such as the steepest descent or Gauss-Seidel algorithms), which minimizes a quadratic error. This analogy means that the IBP method is none other than the ML (or least-squares) method without regularization. In the IBP method presented in [31]-[33], the matrices Q(k) were chosen as a combination of a normalization factor, a reblurring operator C(k), and an interpolation operator. If the simple steepest descent (SD) algorithm is chosen for the least-squares problem, then choosing the transpose of the blur matrix as the reblurring operator and zero padding as the interpolation operator gives almost the same result as the IBP method. The only difference is the choice of the warp matrix in the two configurations: relative to the error relaxation matrices proposed by the SD algorithm, the IBP method applies additional positive-definite inverse matrices. These additional terms may compromise the convergence properties of the IBP algorithm, whereas the SD (and related) approaches, performed directly on the ML optimization problem, assure convergence.
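The steepest-descent iteration on the unregularized least-squares (ML) cost discussed above can be sketched as follows; the operator functions apply_H and apply_Ht, which apply H(k) and its transpose, are assumptions of this illustration rather than part of the original formulation.

```python
# Steepest descent on the ML cost  sum_k ||y(k) - H(k) X||^2  (illustrative sketch).
import numpy as np

def sd_ml_restore(lr_images, apply_H, apply_Ht, hr_shape, iters=50, mu=0.1):
    x = np.zeros(hr_shape)
    for _ in range(iters):
        grad = np.zeros(hr_shape)
        for k, y in enumerate(lr_images):
            residual = y - apply_H(x, k)        # y(k) - H(k) X
            grad += apply_Ht(residual, k)       # H(k)^T [y(k) - H(k) X]
        x = x + mu * grad                       # X <- X + mu * sum_k H(k)^T [y(k) - H(k) X]
    return x
```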

According to the above discussion, the new approach therefore has several benefits when compared to the IBP method, as follows.

1. There is freedom to choose faster iterative algorithms (such as the CG) for the quadratic optimization problem.

2. Convergence is assured for arbitrary motion characteristics, linear space-variant blur, different decimation factors for the measurements, and different additive noise statistics.

      3. Locally adaptive regularization can be added in a simple fashion, with improved overall performance.

    2. The POCS Method

The approach taken in [34]-[36] is the direct application of the POCS method to the restoration of the super-resolution image. The suggested approach did not use the smoothness constraint as proposed here, and chose a distance measure that yields simpler projection operators. In the sequel, the bounding-ellipsoid method has been presented as a tool to relate the POCS results to the stochastic estimation methods. Applying only ellipsoids as constraints gives a result very similar to the ML and MAP methods [33]. In [34]-[36], it is suggested to add only the amplitude constraint to the trivial ellipsoid constraints. Instead, a hybrid method can be suggested that has a unique solution and yet is very simple to implement.
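A rough sketch of a POCS-style iteration is given below: the HR estimate is successively corrected toward each observation's data-consistency set and then projected onto an amplitude constraint. The residual-clipping correction and the operator functions apply_H and apply_Ht are illustrative simplifications, not the exact projections of [34]-[36].

```python
# POCS-style iteration: alternate data-consistency corrections and an amplitude projection.
import numpy as np

def pocs_restore(lr_images, apply_H, apply_Ht, hr_shape, iters=20, delta=1.0):
    x = np.zeros(hr_shape)
    for _ in range(iters):
        for k, y in enumerate(lr_images):
            r = y - apply_H(x, k)                                     # residual for observation k
            excess = np.sign(r) * np.maximum(np.abs(r) - delta, 0.0)  # violation beyond the allowed bound
            x = x + apply_Ht(excess, k)                               # crude correction toward the consistency set
        x = np.clip(x, 0.0, 255.0)                                    # amplitude constraint projection
    return x
```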

    3. Nonsubsampled Contourlet Transform Based Learning

    Efficient representation of visual information lies at the heart of many image processing tasks, including compression, denoising, feature extraction, and inverse problems. Efficiency of a representation refers to the ability to capture significant information about an object of interest using a small description.

For image compression or content-based image retrieval, the use of an efficient representation implies the compactness of the compressed file or the index entry for each image in the database. For practical applications, such an efficient representation has to be obtained by structured transforms and fast algorithms. For one-dimensional piecewise smooth signals, like scan-lines of an image, wavelets have been established as the right tool, because they provide an optimal representation for these signals in a certain sense. In addition, the wavelet representation is amenable to efficient algorithms; in particular, it leads to fast transforms and convenient tree data structures. These are the key reasons for the success of wavelets in many signal processing and communication applications; for example, the wavelet transform was adopted as the transform for the new image-compression standard, JPEG-2000 [20].
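As a small illustration of the separable 2-D wavelet representation mentioned above, the snippet below (assuming the PyWavelets package) splits an image into one approximation band and three detail bands; the image here is only a random placeholder.

```python
# One level of the separable 2-D discrete wavelet transform (PyWavelets).
import numpy as np
import pywt

image = np.random.rand(128, 128)               # placeholder image
cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')     # approximation + horizontal/vertical/diagonal details
print(cA.shape, cH.shape, cV.shape, cD.shape)  # each band is roughly half-size in each dimension
```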

However, natural images are not simply stacks of 1-D piecewise smooth scan-lines; discontinuity points (i.e. edges) are typically located along smooth curves (i.e. contours) owing to the smooth boundaries of physical objects. Thus, natural images contain intrinsic geometrical structures that are key features of visual information. As a result of a separable extension from 1-D bases, wavelets in 2-D are good at isolating the discontinuities at edge points, but will not see the smoothness along the contours. In addition, separable wavelets can capture only limited directional information, an important and unique feature of multidimensional signals. These disappointing behaviors indicate that more powerful representations are needed in higher dimensions.

    To see how one can improve the 2-D separable wavelet transform for representing images with smooth contours, consider the following scenario. Imagine that there are two painters, one with a wavelet-style and the other with a new style, both wishing to paint a natural scene. Both painters apply a refinement technique to increase resolution from coarse to fine. Here, efficiency is measured by how quickly, that is with how few brush strokes, one can faithfully reproduce the scene.

  5. PROPOSED SCHEME

Super resolution is the problem of regenerating a high-resolution image from one or multiple low-resolution images of the same scene. Most of the methods reviewed are based on multiple low-resolution images and are mathematically complex. The objective here is therefore to generate a high-resolution image from a single low-resolution image, which is known as single-image super resolution. Such single-image super-resolution problems arise in a number of real-world applications. A common application is online image exchange: to save storage space and communication bandwidth, it is desirable that a low-resolution image be downloaded and then enlarged by the user with an appropriate super-resolution technique. In super resolution, the aim is always to restore the high-frequency components, which lie at the edges in the image. Taking all of this into account, the contribution of this work is to propose a novel approach to single-image super resolution with edge preservation.
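As a generic illustration of the kind of edge-preserving reconstruction the abstract alludes to (a data-fidelity term plus total-variation regularization minimized by steepest descent), the sketch below upscales a single LR image. It uses a quadratic data term, a Gaussian-blur-plus-decimation forward operator and a crude upsampling adjoint, and should be read as an assumption-laden sketch rather than the authors' exact formulation.

```python
# Gradient descent on a quadratic data term plus smoothed total variation (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def tv_gradient(x, eps=1e-3):
    """Gradient of the smoothed TV term  sum sqrt(|grad x|^2 + eps)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)

def tv_super_resolve(lr, q=2, blur_sigma=1.0, lam=0.05, iters=100, mu=0.2):
    x = zoom(lr, q, order=3)                                        # bicubic initialization
    for _ in range(iters):
        sim = gaussian_filter(x, blur_sigma)[::q, ::q]              # simulate LR from the current estimate
        data_grad = zoom(gaussian_filter(sim - lr, blur_sigma), q, order=1)  # crude adjoint back-projection
        x = x - mu * (data_grad + lam * tv_gradient(x))             # descent step on data + TV terms
    return x
```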

6. CONCLUSION

SR imaging has been one of the fundamental image processing research areas. It can overcome or compensate for the inherent hardware limitations of the imaging system to provide a clearer image with richer and more informative content. It can also serve as an appreciable front-end pre-processing stage that facilitates various image processing applications and improves their targeted terminal performance. In this survey paper, our goal is to offer new perspectives and outlooks on SR imaging research, besides giving an updated overview of existing SR algorithms. It is our hope that this work will inspire more image processing researchers to endeavor on this fascinating topic and to develop more novel SR techniques along the way.

REFERENCES

1. Baikun Wan and Lin Meng, Video Image Super-resolution Restoration Based on Iterative Back-Projection Algorithm, CIMSA, Hong Kong, China, 2009, pp. 46-49.

2. Chen-Chiung Hsieh and Yo-Ping Huang, Video Super-Resolution by Motion Compensated Iterative Back-Projection Approach, Journal of Information Science and Engineering, vol. 27, no. 3, 2011, pp. 1107-1122.

3. S. Dai, M. Han, Y. Wu, and Y. Gong, Bilateral Back-Projection for Single Image Super Resolution, IEEE Conference on Multimedia and Expo (ICME), 2007, pp. 1039-1042.

4. Vaishali B. Patel, Chintan K. Modi, Chirag N. Paunwala, and Suprava Patnaik, Hybrid Approach for Single Image Super Resolution Using ISEF and IBP: Specific Reference to License Plate, Proceedings of the IASTED, Canada, June 2011, pp. 152-157.

5. Changhyun Kim and Kyuha Choi, Robust Learning-Based Super-Resolution, Proceedings of the IEEE 17th International Conference on Image Processing, 2010, pp. 2017-2020.

6. Xiaoguang Li and Kin Man Lam, An Efficient Example-Based Approach for Image Super-Resolution, IEEE International Conference on Neural Networks & Signal Processing, Zhenjiang, China, June 2008, pp. 575-580.

  7. W.T. Freeman, T.R. Jones, and E.C. Pasztor, Example-Based Super- Resolution, IEEE Computer Graphics and Applications, vol. 22, no. 2, 2002, pp. 56-65.

8. R. Gonzalez and R. Woods, Digital Image Processing, 3rd Edition, Pearson Education, Inc., publishing as Prentice Hall, pp. 714-715.

9. M. Irani and S. Peleg, Motion Analysis for Image Enhancement: Resolution, Occlusion and Transparency, Journal of Visual Communication and Image Representation, vol. 4, no. 4, 1993, pp. 324-335.

10. C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms, Prentice Hall, New Jersey, 1998.

11. M. Beaulieu, S. Foucher, and L. Gagnon, Multi-spectral image resolution refinement using stationary wavelet transform, Geoscience and Remote Sensing Symposium, vol. 6, 1989, pp. 4032-4034.

12. Nunez J, Otazu X, Fors O, Prades A, Pala V, and Arbiol R, "Multiresolution-based image fusion with additive wavelet decomposition", IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, 1999, pp. 1204-1211.

13. M. Beaulieu, S. Foucher, and L. Gagnon, Multi-spectral image resolution refinement using stationary wavelet transform, Geoscience and Remote Sensing Symposium, vol. 6, 1989, pp. 4032-4034.

14. M. V. Joshi and S. Chaudhuri, A learning based method for image super-resolution from zoomed observations, Proc. of 5th Int. Conf. on Advances in Pattern Recognition (ICAPR 2003), pp. 179-182, Calcutta, India, Dec. 2003.

15. J. M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3445-3462, 1993.

16. C. V. Jiji, M. V. Joshi, and S. Chaudhuri, Single-frame image super-resolution using learned wavelet coefficients, International Journal of Imaging Systems and Technology, vol. 14, no. 3, pp. 105-112, 2004.

17. Video lectures on Advanced Digital Signal Processing - Wavelets and Multirate, by B. H. Gadre.

18. Nunez J, Otazu X, Fors O, Prades A, Pala V, and Arbiol R, "Multiresolution-based image fusion with additive wavelet decomposition", IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, 1999, pp. 1204-1211.

19. M. Beaulieu, S. Foucher, and L. Gagnon, Multi-spectral image resolution refinement using stationary wavelet transform, Geoscience and Remote Sensing Symposium, vol. 6, 1989, pp. 4032-4034.

20. Do M N and Vetterli M, "The contourlet transform: an efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol. 14, no. 12, 2005, pp. 2091-2106.

21. Bamberger R H and Smith M J T, "A filter bank for the directional decomposition of images: theory and design", IEEE Transactions on Signal Processing, vol. 40, no. 4, 1992, pp. 882-893.

22. J. P. Zhou, Arthur L. Cunha, and Minh N. Do, Nonsubsampled contourlet transform: construction and application in enhancement, IEEE ICIP, 2005, pp. 469-472.

23. Da Cunha A L, Zhou J P, and Do M N, "The nonsubsampled contourlet transform: theory, design and applications", IEEE Transactions on Image Processing, vol. 15, no. 10, 2006, pp. 3089-3101.

24. D. P. Capel, Image mosaicing and super-resolution, Ph.D. dissertation, Univ. of Oxford, Oxford, U.K., 2001.

25. M. Gevrekci and B. K. Gunturk, Super resolution under photometric diversity of images, EURASIP J. Adv. Signal Process., vol. 2007, 2007, Article ID 36076.

26. J. Ma, J. C.-W. Chan, and F. Canters, Fully automatic sub-pixel image registration of multi-angle CHRIS/Proba data, IEEE Trans. Geosci. Remote Sens., vol. 48, no. 7, pp. 2829-2839, July 2010.

27. W. Y. Zhao and S. Sawhney, Is super-resolution with optical flow feasible?, in Proc. ECCV, LNCS 2350, 2002, pp. 599-613.

28. G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, Registration of challenging image pairs: Initialization, estimation, and decision, IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 11, pp. 1973-1989, Nov. 2007.

29. H. Trussel and B. Hunt, Sectioned methods for image restoration, IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-26, no. 2, pp. 157-164, 1978.

30. Q. Tian and M. N. Huhns, Algorithms for subpixel registration, Comput. Vision, Graphics, Image Process., vol. 35, pp. 220-233, Aug. 1986.

31. S. Periaswamy and H. Farid, Medical image registration with partial data, Med. Image Anal., vol. 10, no. 3, pp. 452-464, Jun. 2006.

32. X. Feng, Analysis and approaches to image local orientation estimation, M.S. thesis, Dept. Comput. Eng., Univ. California, Santa Cruz, Mar. 2002.

33. M. A. Martín-Fernández, M. Martín-Fernández, and C. Alberola-Lopez, A log-euclidean polyaffine registration for articulated structures in medical images, in Proc. MICCAI, 2009, pp. 156-164.

34. A. Mohammad-Djafari, Super-resolution: A short review, a new method based on hidden Markov modeling of HR image and future challenges, Comput. J., vol. 52, no. 1, pp. 126-141, 2009.

35. F. Chen, J. Ma, J. C.-W. Chan, and D. Yan, Quantitative measurement of the homogeneity and contrast of step edges in the estimation of the point spread function of a satellite image, Int. J. Remote Sens., vol. 32, no. 22, pp. 7179-7201, 2011.

36. S. Baker and T. Kanade, Limits on super-resolution and how to break them, IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 9, pp. 1167-1183, Sep. 2002.

37. L. C. Pickup, Machine learning in multi-frame image super-resolution, Ph.D. dissertation, Univ. Oxford, Oxford, U.K., Feb. 2008.
