- Open Access
- Author: Prachii Kumar
- Paper ID: IJERTV11IS110080
- Volume & Issue: Volume 11, Issue 11 (November 2022)
- Published (First Online): 28-11-2022
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Exploring Image Reconstruction with Orthogonal Matching Pursuit and Least Angle Regression
Prachii Kumar
Dept. of Electronics and Communication Engineering
R.V. College of Engineering, Bangalore, India
Abstract: Image reconstruction is the process of recovering the components that constitute an image. Over the last decade and a half, a technique to reconstruct images was developed. This technique, known as Compressed Sensing (CS), requires only sparse measurements, whereas the Nyquist sampling theorem requires all necessary samples to recover a signal. This paper explores two algorithms that solve for a sparse solution, namely OMP and LARS. It surveys the two procedures by measuring different metrics side-by-side and highlighting the essence of the algorithms.
Index Terms: Image reconstruction, OMP, LARS, signal processing
INTRODUCTION
The reconstruction of signals is a common goal in the field of signal processing. The Nyquist-Shannon sampling theorem states that in order to reconstruct a signal perfectly, it has to be sampled at a rate that is at least twice its highest frequency component. For signals that are not naturally bandlimited, like images, the spatial resolution determines the sampling rate. However, in data acquisition systems, an anti-aliasing low-pass filter is traditionally used to bandlimit the signal and prevent aliasing, so the Nyquist-Shannon theorem still plays an essential role.
Compressed sensing is a signal processing technique that can reconstruct a signal with fewer measurements than traditionally required. It fundamentally relies on two principles, namely, sparsity and incoherence [1].
Sparsity

To demonstrate sparsity mathematically, consider an unknown vector x ∈ R^n, such as the n pixels of an image. Consider Ψ to be a standard basis, such as the Fourier basis, which can be represented as Ψ = [ψ_1 ψ_2 ... ψ_n]. The vector x can then be expanded as follows:

x = Σ_{i=1}^{n} s_i ψ_i    (1)

where s is understood as the sequence of coefficients of x. In order to compress x, the vector s has to be sparse; the majority of its entries must be zero. With such a sparse s, the compressed signal is

x_s = Σ_{i=1}^{n} s_i ψ_i    (2)

Given that Ψ is an n × n matrix, the compressible signal x can then be written as:

x_s = Ψ s    (3)

Incoherence

The second principle that compressed sensing relies on is incoherence. In the context of incoherent sampling, consider a sensing matrix Φ ∈ R^{m×n}, where m << n. Let the sparsifying basis Ψ be incoherent with respect to Φ, so that their product yields new samples which are not found in a standard basis [2]. Instead of recovering x directly, which would require n measurements, consider a measurement vector y ∈ R^m:

y = Φ x    (4)

From eq. (3), s is the sparsest possible sequence of coefficients that makes up the image x. Substituting the dictionary C = ΦΨ into the above equation gives:

y = C s    (5)

Thus, the above equation is of the form Ax = b: given y and C, one must solve for the sparse coefficient vector, which the algorithms below denote by x. For the two algorithms explored in this paper, a few assumptions are made. The input matrix C ∈ R^{m×n} has l2-normalized columns. The residual vector r ∈ R^m is the difference between the measurement vector y and the fit Cx of the current solution vector. The support set S ⊆ {1, ..., n} is the set of indices of the active columns of the matrix C.
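As a concrete illustration of eqs. (1)-(5), the following Python sketch builds a small measurement model. The dimensions, the Gaussian sensing matrix Φ, and the use of a DCT basis for Ψ are illustrative assumptions, not values taken from the paper.

import numpy as np
from scipy.fft import idct

# Illustrative sizes: n-dimensional signal, m measurements (m << n), sparsity k.
n, m, k = 256, 64, 5
rng = np.random.default_rng(0)

# Psi: inverse-DCT basis, so x = Psi @ s (eq. (1)); its columns are orthonormal.
Psi = idct(np.eye(n), axis=0, norm="ortho")

# s: a k-sparse coefficient vector (eq. (2)).
s = np.zeros(n)
s[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Phi: a random Gaussian sensing matrix, incoherent with Psi with high probability.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Dictionary C = Phi Psi and measurement vector y (eqs. (4)-(5)).
C = Phi @ Psi
y = C @ s
print(y.shape)   # (64,): far fewer measurements than n = 256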
ALGORITHMS FOR SPARSE APPROXIMATION
In this paper, we discuss two algorithms for the reconstruction of a noisy image. Their individual features are highlighted below.
Least Angle Regression

Over a recent period of time, attention to the domain of l1 minimization has increased drastically, as it offers techniques to solve for a sparse solution of underdetermined systems. Donoho and Tsaig [4] proposed that the Homotopy algorithm (a modified LARS algorithm), in addition to solving the l1 minimization problem, can find sparse solutions just as rapidly as OMP/LARS. The name "least angle" comes from the geometrical interpretation of the LARS process: it chooses the update direction that makes the smallest, equal angle with all active columns [3].

Algorithm 1 Least Angle Regression
Input: (i) the measurement vector y ∈ R^m, (ii) the matrix C ∈ R^{m×n}, (iii) the threshold for error ε
Output: the sparse vector x ∈ R^n
Initialise:
1) The residual r_0 = y
2) The counter value p = 0
3) x_0 = 0
4) Support set S = {}
Compute:
1) p = p + 1
2) Calculate the correlation vector v_p = C^T r_{p-1}
3) Calculate the absolute maximum value in the vector v_p: λ_p = ||v_p||_∞
4) If λ_p is extremely small or even 0, terminate and return x_p; else continue with the following steps
5) Build the support set using S = {j : |v_p(j)| = λ_p}
6) Solve the least squares problem for the active entries of the update direction: C_S^T C_S d_p(S) = sign(v_p(S)), where sign(v_p(S)) returns the signs of the entries of the correlation vector
7) Set the inactive entries of the update direction to zero: d_p(S^c) = 0
8) Calculate the minimum step size γ_p for which a column joins, or is dropped from, the support set
9) Update the solution vector: x_p = x_{p-1} + γ_p d_p
10) Find the new residual vector: r_p = y - C x_p
11) If ||r_p||_2 < ε, terminate and output x = x_p as the solution vector; if not, return to computing the correlation vector
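The step-size computation in Algorithm 1 is the fiddly part in practice, so one practical route is scikit-learn's lars_path solver. The sketch below is a usage example on random data (an assumption, not the paper's learnt dictionary), with method="lar" selecting the least-angle steps described above.

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
m, n, k = 64, 256, 5

# Random dictionary with l2-normalized columns, as the problem setup assumes.
C = rng.standard_normal((m, n))
C /= np.linalg.norm(C, axis=0)

# A k-sparse ground truth and its measurements y = C x.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = C @ x_true

# method="lar" takes equal-angle steps; `active` lists the support set order.
alphas, active, coefs = lars_path(C, y, method="lar", max_iter=k)
x_lars = coefs[:, -1]                      # solution after the last step
print(active)                              # columns that entered S
print(np.linalg.norm(y - C @ x_lars))      # residual ||y - C x||_2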
Orthogonal Matching Pursuit

In 1993, Mallat and Zhang [5] proposed a sparse approximation algorithm that they named Matching Pursuit (MP). This algorithm searches for a solution to an underdetermined linear system. The MP algorithm was later modified into OMP, which uses a least squares formulation instead and adds a step of orthogonalization in order to prevent the algorithm from choosing a column of the matrix C repeatedly [6].

Algorithm 2 Orthogonal Matching Pursuit
Input: (i) the measurement vector y ∈ R^m, (ii) the matrix C ∈ R^{m×n}, (iii) the threshold for error ε
Output: the sparse vector x ∈ R^n
Initialise:
1) The residual r_0 = y
2) The counter value p = 0
3) x_0 = 0
4) Support set S = {}
Compute:
1) p = p + 1
2) Calculate the correlation vector v_p = C^T r_{p-1}
3) Find the next column of the matrix C by using the index of the largest absolute entry of v_p: i = arg max_{j ∈ C_p} |v_p(j)|, where C_p excludes the indices already in the support set S
4) Add i to the support set: S_p = S_{p-1} ∪ {i}
5) Solve the least squares problem: C_S^T C_S x_p(S) = C_S^T y
6) Calculate the residual vector: r_p = y - C x_p
7) If ||r_p|| > ε, the algorithm returns to step 1; else it terminates and returns x = x_p
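Algorithm 2 translates almost line for line into NumPy. The function below is a minimal sketch rather than the paper's implementation; it assumes the columns of C are l2-normalized, and it solves the step-5 least squares problem directly over the active columns, which is equivalent to the stated normal equations but numerically stabler.

import numpy as np

def omp(C, y, eps=1e-6, max_iter=None):
    """Minimal sketch of Algorithm 2 (Orthogonal Matching Pursuit)."""
    m, n = C.shape
    max_iter = max_iter if max_iter is not None else m
    x = np.zeros(n)
    r = y.copy()                        # 1) r_0 = y
    S = []                              # 4) empty support set
    for _ in range(max_iter):
        v = C.T @ r                     # 2) correlation vector v_p = C^T r_{p-1}
        v[S] = 0.0                      # exclude columns already in S
        i = int(np.argmax(np.abs(v)))   # 3) largest absolute correlation
        S.append(i)                     # 4) S_p = S_{p-1} U {i}
        x_S, *_ = np.linalg.lstsq(C[:, S], y, rcond=None)   # 5) least squares
        x = np.zeros(n)
        x[S] = x_S
        r = y - C @ x                   # 6) new residual r_p = y - C x_p
        if np.linalg.norm(r) <= eps:    # 7) stop once ||r_p|| falls below eps
            break
    return x

On well-conditioned noiseless data such as the previous sketch, a few iterations of omp(C, y) typically suffice to recover the support exactly.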
EXPERIMENT AND SIMULATION
We conducted an experiment using a noisy image from a dataset and measured the performance of both algorithms on it.
Conditions
The Smartphone Image Denoising Dataset (SIDD) contains images representing 160 scenes. They are presented in pairs of noisy and ground-truth images, shot on the iPhone 7, LG G4, Google Pixel, and Samsung Galaxy S6. For this experiment, patches of size (7, 7) were extracted from a sample image. The dictionary was learnt in batches, and the two algorithms are compared through two parameters: the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM).
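A pipeline matching this description can be sketched with scikit-learn, which provides both the patch utilities and mini-batch dictionary learning. The stand-in image, n_components, and batch_size below are illustrative assumptions; the (7, 7) patch size and the OMP/LARS sparse-coding choice follow the paper.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

# Stand-in for a grayscale SIDD noisy image; swap in a real pair to reproduce.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Extract (7, 7) patches, flatten them, and remove each patch's mean.
patches = extract_patches_2d(noisy, (7, 7))
data = patches.reshape(len(patches), -1)
mean = data.mean(axis=1, keepdims=True)
data = data - mean

# Learn the dictionary in mini-batches; code the patches with OMP and s = 4
# non-zero coefficients (transform_algorithm="lars" gives the LARS variant).
dico = MiniBatchDictionaryLearning(n_components=100, batch_size=200,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4)
codes = dico.fit(data).transform(data)

# Reconstruct each patch from its sparse code and re-assemble the image.
recon = codes @ dico.components_ + mean
denoised = reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy.shape)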
Results
The measured PSNR values for both methods are shown in Table I. For the LARS algorithm, the values do not increase drastically and centre around 28 dB. The PSNR of the OMP algorithm grows with the number of non-zero coefficients, and OMP displayed higher PSNR values throughout. The SSIM is a perception-based metric that quantifies image quality degradation.

As shown in Table II, the SSIM values increase with the number of non-zero coefficients. It is worth noting that OMP reaches a value extremely close to 1. Fig. 1 shows upward trends for both algorithms; at no point do these values overlap, although a slight spike at 8 non-zero coefficients is noticeable.

TABLE I
PSNR VALUES FOR OMP AND LARS

Non-zero coefficients (s)    OMP PSNR (dB)    LARS PSNR (dB)
2                            28.44            25.70
4                            30.58            25.76
6                            33.72            27.51
8                            37.10            28.93
10                           38.11            28.54
12                           38.90            28.91
14                           41.18            28.28
TABLE II
SSIM VALUES FOR OMP AND LARS

Non-zero coefficients (s)    OMP SSIM    LARS SSIM
2                            0.882       0.749
4                            0.932       0.791
6                            0.964       0.839
8                            0.985       0.865
10                           0.986       0.869
12                           0.991       0.876
14                           0.994       0.877
Fig. 1. Graph of SSIM vs. non-zero coefficients for LARS and OMP
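For completeness, PSNR and SSIM values such as those in Tables I and II can be computed with scikit-image, assuming float images scaled to [0, 1]; the two arrays below are placeholders for a ground-truth/reconstruction pair, not SIDD data.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))     # placeholder ground-truth image
reconstructed = np.clip(                # placeholder reconstruction
    ground_truth + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

psnr = peak_signal_noise_ratio(ground_truth, reconstructed, data_range=1.0)
ssim = structural_similarity(ground_truth, reconstructed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")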
CONCLUSION
This paper explores two sparse approximation algorithms by testing them in an image reconstruction experiment. The results shown above were gathered for a given noisy image in order to obtain a general insight into the metrics of OMP and LARS. The time taken to compute the OMP solution is far less than for LARS, because LARS executes many additional steps that OMP does not. OMP updates the entries with the largest correlation, such that the active columns in the support set are orthogonal to the residual r. LARS, in contrast, updates the solution coefficients by the smallest step that causes a column to join the support set or be dropped from it. For further study, it would be worthwhile to analyse the performance of OMP and LARS in other circumstances, such as when the columns of C are highly correlated.

Fig. 2. Original noisy image from SIDD
Fig. 3. Reconstruction with OMP and 4 non-zero coefficients
Fig. 4. Reconstruction with LARS and 4 non-zero coefficients
REFERENCES
[1] E. J. Candès and M. B. Wakin, "An Introduction to Compressive Sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, March 2008, doi: 10.1109/MSP.2007.914731.
[2] J. M. Duarte-Carvajalino and G. Sapiro, "Learning to Sense Sparse Signals: Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization," IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1395-1408, July 2009, doi: 10.1109/TIP.2009.2022459.
[3] M. A. Hameed, "Comparative Analysis of Orthogonal Matching Pursuit and Least Angle Regression," M.S. thesis, Electrical Engineering, Michigan State University, 2012.
[4] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse Solution of Underdetermined Systems of Linear Equations by Stagewise Orthogonal Matching Pursuit," IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1094-1121, 2012, doi: 10.1109/TIT.2011.2173241.
[5] S. Mallat and Z. Zhang, "Matching Pursuits with Time-Frequency Dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, December 1993, doi: 10.1109/78.258082.
[6] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, 1st ed. Springer, 2010.