- Open Access
- Authors : Daniya T John , Hymavathy K.P.
- Paper ID : IJERTCONV3IS05011
- Volume & Issue : NCETET – 2015 (Volume 3 – Issue 05)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Video Denoising using Dual Tree Complex Wavelet Transform
Ms. Daniya T John, Electronics and Communication, KMCT College of Engineering, Kallanthode, Calicut, India
Ms. Hymavathy K.P., Electronics and Communication, KMCT College of Engineering, Kallanthode, Calicut, India
Abstract: Denoising using the dual-tree complex wavelet transform (DTCWT) is a novel way to reduce noise introduced or exacerbated by video enhancement methods, in particular, but not exclusively, algorithms based on the histogram equalization technique. The enhanced output video tends to exhibit noise with an unknown statistical distribution. To avoid inappropriate assumptions on the statistical characteristics of this noise, a different assumption is made: the non-enhanced video is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and enhanced video frames. Also, given the importance of directional content in human vision, the analysis is performed through the DTCWT. Unlike the discrete wavelet transform, the DTCWT allows for distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced video frame coefficients is computed across the six orientations of the DTCWT and then normalized. The result is a map of the directional structures present in the non-enhanced video frame. Said map is then used to shrink the coefficients of the enhanced video frame. The shrunk coefficients and the coefficients from the non-enhanced video frame are then mixed according to data directionality. These operations are repeated for each frame. Finally, a noise-reduced version of the enhanced video is computed via the inverse transforms.
Keywords: Dual Tree Complex Wavelet Transform (DTCWT), Image enhancement, Noise reduction, Shrinkage
INTRODUCTION
Although the field of image enhancement has been active since before digital imagery achieved consumer status, it has never stopped evolving. The present work introduces a novel multi-resolution denoising method, tailored to address a specific video quality problem that arises when using image enhancement algorithms based on histogram equalization. While inspired by the peculiar problems of such methods, the proposed approach also works for other enhancement methods that either introduce or exacerbate noise. This work builds and expands on a previous article by Fierro et al. [1].
Histogram equalization usually increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram, allowing areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.
The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, it can lead to better views of bone structure in X-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique and an invertible operator: in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal.
Among denoising algorithms, multi-resolution methods have a long history. A particular branch is that of transform-space coefficient shrinkage, i.e. the magnitude reduction of the transform coefficients according to certain criteria. Some of the most commonly used transforms for shrinkage-based noise reduction are the Wavelet Transform (WT) [2]-[4], the Steerable Pyramid Transform [11], [12], the Contourlet Transform [5]-[7] and the Shearlet Transform [8]-[10]. With the exception of the WT, all other transforms lead to over-complete data representations. Over-completeness is an important characteristic, as it is usually associated with the ability to distinguish data directionality in the transform space.
Independently of the specific transform used, the general assumption in multi-resolution shrinkage is that image data gives rise to sparse coefficients in the transform space. Thus, denoising can be achieved by compressing (shrinking) those coefficients that compromise data sparsity. Such a process is usually improved by an elaborate statistical analysis of the dependencies between coefficients at different scales. Yet, while effective, traditional multi-resolution methods are designed to remove only one particular type of noise (e.g. Gaussian noise). Furthermore, only the input image is assumed to be given. Due to the unknown statistical properties of the noise introduced by the use of enhancement methods, traditional approaches do not find the expected conditions, and thus their action becomes much less effective.
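For a concrete point of reference, the classical single-image shrinkage idea can be sketched in a few lines. The example below is a generic soft-thresholding scheme built on the PyWavelets package with a textbook universal threshold; it is only an illustration of transform-space coefficient shrinkage, not the method proposed in this paper.

    import numpy as np
    import pywt  # PyWavelets, assumed available for this generic DWT example

    def dwt_soft_shrink(image, wavelet="db4", levels=3, sigma=10.0):
        """Classical wavelet shrinkage: soft-threshold the detail coefficients."""
        coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
        thr = sigma * np.sqrt(2.0 * np.log(image.size))  # universal threshold (illustrative)
        shrunk = [coeffs[0]]  # keep the approximation band untouched
        for detail in coeffs[1:]:
            shrunk.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
        return pywt.waverec2(shrunk, wavelet)

Note how this rule assumes additive Gaussian noise of known strength (sigma); it is exactly this kind of assumption that the noise produced by enhancement methods violates.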
DUAL TREE COMPLEX WAVELET TRANSFORM
The Discrete Wavelet Transform (DWT) has been a cornerstone for all applications of digital image processing: from image denoising to pattern recognition, passing through image encoding and more. While being a complete and (quasi-)invertible transform of 2D data, the DWT gives rise to a phenomenon known as the checkerboard pattern, which makes data orientation analysis impossible. Furthermore, the DWT is not shift-invariant, making it less useful for methods based on the computation of invariant features.
In an attempt to solve these two problems affecting the DWT, Freeman and Adelson first introduced the concept of steerable filters [11], which can be used to decompose an image into a steerable pyramid by means of the Steerable Pyramid Transform (SPT) [12]. While the SPT is an over-complete representation of data, granting the ability to appropriately distinguish data orientations as well as shift-invariance, it is not devoid of problems: in particular, filter design can be messy, perfect reconstruction is not possible and computational efficiency can be a concern.

Thus, a further development of the SPT, involving the use of a Hilbert pair of filters to compute the energy response, has been accomplished with the Complex Wavelet Transform (CWT) [13]. Similarly to the SPT, in order to retain the whole Fourier spectrum, the transform needs to be over-complete by a factor of 4, i.e. there are 3 complex coefficients for each real one. While the CWT is also efficient, since it can be computed through separable filters, it still lacks the perfect reconstruction property.

Therefore, Kingsbury also introduced the Dual-Tree Complex Wavelet Transform (DTCWT), which has the added characteristic of perfect reconstruction at the cost of approximate shift-invariance [14].

Since the topic is extremely vast, only a brief introduction to the 2D DTCWT is given. The reader is referred to the work by Selesnick et al. [15] for a comprehensive coverage of the DTCWT and the relationship it shares with other transforms.

Fig. 1. Quasi-Hilbert pairs of wavelets used in the dual-tree complex wavelet transform. Each pair is shown in a column, with the even part on top and the odd one on the bottom.

The 2D Dual Tree Complex Wavelet Transform can be implemented using two distinct sets of separable 2D wavelet bases, as shown below, where φ denotes the scaling functions, ψ the wavelets, and the subscripts h and g the two filter trees:

ψ_{1,1}(x, y) = φ_h(x) ψ_h(y),    ψ_{2,1}(x, y) = φ_g(x) ψ_g(y),
ψ_{1,2}(x, y) = ψ_h(x) φ_h(y),    ψ_{2,2}(x, y) = ψ_g(x) φ_g(y),      (1)
ψ_{1,3}(x, y) = ψ_h(x) ψ_h(y),    ψ_{2,3}(x, y) = ψ_g(x) ψ_g(y),

ψ_{3,1}(x, y) = φ_g(x) ψ_h(y),    ψ_{4,1}(x, y) = φ_h(x) ψ_g(y),
ψ_{3,2}(x, y) = ψ_g(x) φ_h(y),    ψ_{4,2}(x, y) = ψ_h(x) φ_g(y),      (2)
ψ_{3,3}(x, y) = ψ_g(x) ψ_h(y),    ψ_{4,3}(x, y) = ψ_h(x) ψ_g(y).

The relationship between the wavelet filters h and g is

g_0(n) = h_0(n - 1)     for j = 1,      (3)
g_0(n) ≈ h_0(n - 0.5)   for j > 1,      (4)

where j is the decomposition level.

When combined, the bases give rise to two sets of real, two-dimensional, oriented wavelets, i.e.

ψ_i(x, y) = (1/√2) (ψ_{1,i}(x, y) - ψ_{2,i}(x, y)),        (5)
ψ_{i+3}(x, y) = (1/√2) (ψ_{1,i}(x, y) + ψ_{2,i}(x, y)),    (6)
ψ′_i(x, y) = (1/√2) (ψ_{3,i}(x, y) + ψ_{4,i}(x, y)),       (7)
ψ′_{i+3}(x, y) = (1/√2) (ψ_{3,i}(x, y) - ψ_{4,i}(x, y)),   (8)

for i ∈ {1, 2, 3}. The most interesting characteristic of such wavelets is that they are approximately Hilbert pairs. One can thus interpret the coefficients deriving from one tree as imaginary, and so obtain the desired 2D DTCWT.
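As a practical aside, the decomposition described above is available in the open-source Python dtcwt package. The sketch below (the package, its default filter choices and the toy input are assumptions made for illustration, not part of this paper) produces six oriented complex subbands per level and inverts them.

    import numpy as np
    import dtcwt  # assumed: the open-source Python `dtcwt` package

    def decompose(luma, levels=4):
        """Forward 2D DTCWT of a luma image."""
        transform = dtcwt.Transform2d()
        pyramid = transform.forward(luma.astype(np.float64), nlevels=levels)
        # pyramid.highpasses[j] has shape (H_j, W_j, 6): one complex coefficient per
        # location for each of the six orientations (about +/-15, +/-45, +/-75 degrees).
        return transform, pyramid

    if __name__ == "__main__":
        luma = np.random.rand(256, 256)
        transform, pyramid = decompose(luma)
        print([hp.shape for hp in pyramid.highpasses])
        print(np.abs(transform.inverse(pyramid) - luma).max())  # near-perfect reconstruction

The real and imaginary parts of each complex subband correspond to the even and odd (quasi-Hilbert) wavelets of Fig. 1.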
HISTOGRAM EQUALIZATION
As described in the introduction, histogram equalization increases the global contrast of an image by spreading out the most frequent intensity values, so that areas of lower local contrast gain a higher contrast.
Histogram equalization can be applied to color images by equalizing the Red, Green and Blue components of the RGB values separately. However, doing so may yield dramatic changes in the image's color balance, since the relative distributions of the color channels change as a result of the algorithm. If the image is instead first converted to another color space, Lab or HSL/HSV in particular, the algorithm can be applied to the luminance or value channel alone without altering the hue and saturation of the image [4].
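A minimal sketch of this luminance-only strategy, assuming OpenCV is available and using its YCrCb conversion, is given below; the pipeline is purely illustrative.

    import cv2

    def equalize_luma_only(bgr):
        """Equalize only the luminance channel to preserve hue and saturation."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        y, cr, cb = cv2.split(ycrcb)
        y_eq = cv2.equalizeHist(y)  # spread out the most frequent luma values
        return cv2.cvtColor(cv2.merge((y_eq, cr, cb)), cv2.COLOR_YCrCb2BGR)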
In spite of its fundamental advantages, histogram equalization has a significant drawback: it changes the brightness globally, which results in either under-saturation or over-saturation of important regions. For this reason, when implementing contrast enhancement in consumer electronic products, the loss of intensity values caused by histogram processing should be minimized in the output image.
PROPOSED METHOD
The main idea behind this work can be summarized as follows: directional content is what conveys information to the Human Visual System. For obvious geometrical reasons, intensity changes of a directional nature are more easily crossed (or sampled) than point-like structures such as noise.
Following this idea, the proposed method revolves around the shrinkage, according to data directionality, of the wavelet coefficients generated by the Dual Tree Complex Wavelet Transform. The DTCWT is chosen for its ability to distinguish data orientation in transform space, its relative simplicity and other useful properties.

The HVS has been proven to be more sensitive to changes in the achromatic plane (brightness) than to chromatic ones [16]. Hence, the proposed method first converts the image into a space where the chroma is separated from the luma (such as YCbCr), and operates on the wavelet space of the luma channel. The choice to use only the luma channel does not lead to any visible color artifact.

Finally, a fundamental assumption is made: the input video is considered to be either free of noise, or contaminated by non-perceivable noise. If such an assumption holds, the input video contains the information needed for successful noise reduction.

For video denoising, each frame is processed with the denoising procedure given in Algorithm 1: the algorithm is applied to every frame of the video, and the denoised frames are finally assembled into a new, denoised video.

The algorithm for the proposed method is given as Algorithm 1. For ease of reference, a visual description is also given in Fig. 3. The following subsections explain the details of the shrinkage process and the tests performed to optimize the algorithm parameters.

Algorithm 1: Proposed noise-reduction method

    E_RGB ← enhance(I_RGB)
    I_YCbCr ← rgb2ycbcr(I_RGB)
    E_YCbCr ← rgb2ycbcr(E_RGB)
    Y_I ← Y channel of I_YCbCr
    (b^I, c^I) ← dtcwt(Y_I)
    Y_E ← Y channel of E_YCbCr
    repeat
        (b^E, c^E) ← dtcwt(Y_E)            {Y_E is iteration dependent}
        for j = 1 to J do
            for k = 1 to 6 do
                e_{j,k} ← (b^I_{j,k})^2 + (c^I_{j,k})^2
            end for
            w_j ← mm(stddev(e_{j,k}), median(e_{j,k}), γ_j)
            for k = 1 to 6 do
                b̂^E_{j,k} ← w_j · b^E_{j,k} + (1 - w_j) · b^I_{j,k}
                ĉ^E_{j,k} ← w_j · c^E_{j,k} + (1 - w_j) · c^I_{j,k}
                i^I_{j,k} ← ord(b^I_{j,k})
                if i^I_{j,k} ∈ {1, 2} then
                    b^O_{j,k} ← b̂^E_{j,k}
                    c^O_{j,k} ← ĉ^E_{j,k}
                else
                    b^O_{j,k} ← b^I_{j,k}
                    c^O_{j,k} ← c^I_{j,k}
                end if
            end for
        end for
        Y_E ← idtcwt(b^O, c^O)
    until the stopping criterion is met
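A per-frame driver for the procedure above might be organized as in the following sketch. It assumes an OpenCV-readable video, uses histogram equalization as the example enhancement, and hides Algorithm 1 behind a hypothetical denoise_frame_luma placeholder (a sketch of the shrinkage step itself is given in subsection A).

    import cv2
    import numpy as np

    def denoise_frame_luma(y_input, y_enhanced):
        """Placeholder for Algorithm 1: DTCWT shrinkage of the enhanced luma guided
        by the non-enhanced luma. Here it simply returns the enhanced luma."""
        return y_enhanced

    def process_video(in_path, out_path):
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            y, cr, cb = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb))
            y_enh = cv2.equalizeHist(y)                     # example enhancement (noisy output)
            y_out = denoise_frame_luma(y, y_enh).astype(np.uint8)
            writer.write(cv2.cvtColor(cv2.merge((y_out, cr, cb)), cv2.COLOR_YCrCb2BGR))
        cap.release()
        writer.release()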
Fig. 2. Proposed method block diagram: the non-enhanced input video (noise free) is enhanced, producing a noisy enhanced video; both videos are fed to the denoising stage, which outputs the denoised video.
A. Wavelet Coefficients Shrinkage
Assuming level j of the wavelet pyramid, one can compute the energy for each direction k ∈ {1, 2, ..., 6} of the non-enhanced image as the sum of squares of the real coefficients b^I_{j,k} and the complex ones c^I_{j,k}:

e_{j,k} = (b^I_{j,k})^2 + (c^I_{j,k})^2.    (9)

Coefficients associated with non-directional data will have similar energy in all directions. On the other hand, directional data will give rise to high energy in one or two directions, according to its orientation.

The standard deviation of the energy across the six directions k = 1, 2, ..., 6 is hence computed as a measure of directionality:

e_j = stddev_k(e_{j,k}).    (10)
Since the input coefficients are not normalized, it naturally follows that the standard deviation is also non-normalized. The Michaelis-Menten function is thus applied to normalize the data range. Such a function is sigmoid-like and has been used to model the cone responses of many species. The equation is as follows:

mm(x, σ, γ) = x^γ / (x^γ + σ^γ)    (11)

where x is the quantity to be compressed, γ a real-valued exponent and σ the data expected value or its estimate.
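A direct transcription of Eq. (11), with the argument order used in Eq. (12) and a small guard against division by zero added for numerical safety, could read as follows.

    import numpy as np

    def michaelis_menten(x, sigma, gamma, eps=1e-12):
        """Sigmoid-like normalization of Eq. (11): x^gamma / (x^gamma + sigma^gamma)."""
        xg = np.power(np.maximum(x, 0.0), gamma)
        sg = np.power(np.maximum(sigma, 0.0), gamma)
        return xg / (xg + sg + eps)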
Hence, a normalized map of directionally sensitive weights for a given level j can be obtained as

w_j = mm(e_j, median_k(e_{j,k}), γ_j)    (12)

where the choice of γ_j depends on j. A shrunk version of the enhanced image's coefficients, according to data directionality, is then computed as

b̂^E_{j,k} = w_j · b^E_{j,k} + (1 - w_j) · b^I_{j,k}    (13)
ĉ^E_{j,k} = w_j · c^E_{j,k} + (1 - w_j) · c^I_{j,k}    (14)

Since the main interest is retaining directional information, a rank is obtained for each of the non-enhanced coefficients as

i^I_{j,k} = ord(b^I_{j,k}),   k ∈ {1, 2, ..., 6}    (15)

where ord is the function that returns the rank according to natural ordering.
The output coefficients are then computed as follows:

b^O_{j,k} = b̂^E_{j,k} if i^I_{j,k} ∈ {1, 2},   b^O_{j,k} = b^I_{j,k} if i^I_{j,k} ∈ {3, 4, 5, 6}    (16)
c^O_{j,k} = ĉ^E_{j,k} if i^I_{j,k} ∈ {1, 2},   c^O_{j,k} = c^I_{j,k} if i^I_{j,k} ∈ {3, 4, 5, 6}    (17)
The meaning of the whole sequence can be roughly expressed as follows: where the enhanced image shows directional content, shrink the two most significant coefficients and replace the four less significant ones with those from the non-enhanced image.

The reason why only the two most significant coefficients are taken from the shrunk ones of the enhanced image is to be found in the nature of directional content. For the content of an image to be directional, the responses across the six orientations of the DTCWT need to be highly skewed. In particular, any data orientation can be represented by a strong response on two adjacent orientations, while the remaining coefficients will be near zero. This ensures that the two significant coefficients are carried over almost un-shrunk.
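To make Eqs. (9)-(17) concrete, the per-level shrinkage can be sketched as below. The sketch assumes the six-orientation complex subband layout of the Python dtcwt package (arrays of shape H x W x 6), reads the per-pixel median across orientations as the estimate σ in Eq. (11), interprets the ranking of Eq. (15) as a magnitude ranking, and leaves γ as a free per-level parameter, as in the text.

    import numpy as np

    def michaelis_menten(x, sigma, gamma, eps=1e-12):
        """Eq. (11), repeated from the earlier sketch."""
        xg = np.power(np.maximum(x, 0.0), gamma)
        return xg / (xg + np.power(np.maximum(sigma, 0.0), gamma) + eps)

    def shrink_level(hp_ref, hp_enh, gamma):
        """One level of the proposed shrinkage (Eqs. 9-17).
        hp_ref, hp_enh: complex (H, W, 6) subbands of the non-enhanced and enhanced luma."""
        b_ref, c_ref = hp_ref.real, hp_ref.imag          # even / odd reference coefficients
        b_enh, c_enh = hp_enh.real, hp_enh.imag          # even / odd enhanced coefficients

        # Eq. (9): directional energy of the non-enhanced frame
        energy = b_ref ** 2 + c_ref ** 2                 # (H, W, 6)
        # Eq. (10): standard deviation across the six orientations
        e_std = energy.std(axis=-1, keepdims=True)       # (H, W, 1)
        # Eqs. (11)-(12): Michaelis-Menten normalization into the weight map w_j
        sigma = np.median(energy, axis=-1, keepdims=True)
        w = michaelis_menten(e_std, sigma, gamma)

        # Eqs. (13)-(14): shrink the enhanced coefficients toward the reference ones
        b_shrunk = w * b_enh + (1.0 - w) * b_ref
        c_shrunk = w * c_enh + (1.0 - w) * c_ref

        # Eq. (15): rank orientations by magnitude of the reference even coefficients
        # (rank 1 and 2 mark the two most significant orientations)
        order = np.argsort(-np.abs(b_ref), axis=-1)
        rank = np.argsort(order, axis=-1) + 1

        # Eqs. (16)-(17): keep shrunk coefficients on the two dominant orientations,
        # copy the reference coefficients everywhere else
        keep = rank <= 2
        b_out = np.where(keep, b_shrunk, b_ref)
        c_out = np.where(keep, c_shrunk, c_ref)
        return b_out + 1j * c_out

In a full pipeline this step would be applied to every level of both DTCWT pyramids before taking the inverse transform of the modified coefficients, as in Algorithm 1.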
Fig. 3. Proposed method flowchart. (a) The luma channels of both the non-enhanced and the enhanced image frames are transformed using the DTCWT, and the obtained coefficients are elaborated. The output coefficients are transformed into the output image's luma channel via the inverse DTCWT. (b) and (c) The computation indicated by the box in Fig. 3(a) is performed per level of the decomposition. (b) The directional energy map is first computed as the standard deviation of the sum-of-squares of the coefficients. A weight map is then obtained by using the Michaelis-Menten function for normalization. The real (even) coefficients of the enhanced image are also ranked according to their magnitude. (c) The weight map is used to scale the coefficients of the enhanced image. The resulting scaled coefficients and the coefficients from the non-enhanced image are mixed according to the ranking. The process in (c) is illustrated for even coefficients only, but it is repeated identically for odd coefficients.
CONCLUSION
This work presents a noise reduction method based on Dual Tree Complex Wavelet Transform coefficient shrinkage. The main points of novelty are its application as post-processing on the output of an image enhancement method (both the non-enhanced video and the enhanced one are required) and the lack of assumptions on the statistical distribution of the noise. On the other hand, the non-enhanced video is assumed to be noise-free or affected by non-perceivable noise.
Following well-known properties of the Human Visual System, the image frames are first converted to a color space with distinct chromatic and achromatic axes; then only the achromatic part becomes the object of the noise reduction process. To achieve pleasing denoising, the proposed method exploits the data orientation discriminating power of the Dual Tree Complex Wavelet Transform to shrink coefficients from the enhanced, noisy image frames. Still according to data directionality, the shrunk coefficients are mixed with those from the non-enhanced, noise-free image frames. The output video is
then computed by inverting the Dual Tree Complex Wavelet Transform and the color transform.
The method's main limitations are the necessity of two input images (one non-enhanced and one enhanced) and its iterative nature, which expands computation time considerably with respect to one-pass algorithms.
ACKNOWLEDGMENT
The authors express their sincere thanks to the HOD, the group tutor and guide, and the staff of the Electronics and Communication Department, KMCT College of Engineering, as well as to the authors of the works used to implement this paper, for many fruitful discussions and constructive suggestions during its implementation.
REFERENCES
[1] M. Fierro, W.-J. Kyung, and Y.-H. Ha, "Dual-tree complex wavelet transform based denoising for random spray image enhancement methods," in Proc. 6th Eur. Conf. Colour Graph., Imag. Vis., 2012, pp. 194-199.
[2] H. A. Chipman, E. D. Kolaczyk, and R. E. McCulloch, "Adaptive Bayesian wavelet shrinkage," J. Amer. Stat. Assoc., vol. 92, no. 440, pp. 1413-1421, 1997.
[3] A. Chambolle, R. De Vore, N.-Y. Lee, and B. Lucier, "Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage," IEEE Trans. Image Process., vol. 7, no. 3, pp. 319-335, Mar. 1998.
[4] D. Cho, T. D. Bui, and G. Chen, "Image denoising based on wavelet shrinkage using neighbor and level dependency," Int. J. Wavelets, Multiresolution Inf. Process., vol. 7, no. 3, pp. 299-311, May 2009.
[5] S. Foucher, G. Farage, and G. Benie, "SAR image filtering based on the stationary contourlet transform," in Proc. IEEE Int. Geosci. Remote Sens. Symp., Jul.-Aug. 2006, pp. 4021-4024.
[6] W. Ni, B. Guo, Y. Yan, and L. Yang, "Speckle suppression for SAR images based on adaptive shrinkage in contourlet domain," in Proc. 8th World Congr. Intell. Control Autom., vol. 2, 2006, pp. 10017-10021.
[7] K. Li, J. Gao, and W. Wang, "Adaptive shrinkage for image denoising based on contourlet transform," in Proc. 2nd Int. Symp. Intell. Inf. Technol. Appl., vol. 2, Dec. 2008, pp. 995-999.
[8] Q. Guo, S. Yu, X. Chen, C. Liu, and W. Wei, "Shearlet-based image denoising using bivariate shrinkage with intra-band and opposite orientation dependencies," in Proc. Int. Joint Conf. Comput. Sci. Optim., vol. 1, Apr. 2009, pp. 863-866.
[9] X. Chen, C. Deng, and S. Wang, "Shearlet-based adaptive shrinkage threshold for image denoising," in Proc. Int. Conf. E-Bus. E-Government, Nanchang, China, May 2010, pp. 1616-1619.
[10] J. Zhao, L. Lu, and H. Sun, "Multi-threshold image denoising based on shearlet transform," Appl. Mech. Mater., vols. 29-32, pp. 2251-2255, Aug. 2010.
[11] W. Freeman and E. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 9, pp. 891-906, Sep. 1991.
[12] E. P. Simoncelli and W. T. Freeman, "The steerable pyramid: A flexible architecture for multi-scale derivative computation," in Proc. 2nd Annu. Int. Conf. Image Process., Oct. 1995, pp. 444-447.
[13] N. G. Kingsbury, "Image processing with complex wavelets," Philos. Trans. Math. Phys. Eng. Sci., vol. 357, no. 1760, pp. 2543-2560, 1999.
[14] N. G. Kingsbury, "The dual-tree complex wavelet transform: A new technique for shift invariance and directional filters," in Proc. 8th IEEE Digit. Signal Process. Workshop, Aug. 1998, no. 86, pp. 1-4.
[15] I. W. Selesnick, R. G. Baraniuk, and N. G. Kingsbury, "The dual-tree complex wavelet transform: A coherent framework for multiscale signal and image processing," IEEE Signal Process. Mag., vol. 22, no. 6, pp. 123-151, Nov. 2005.
[16] M. Livingstone, Vision and Art: The Biology of Seeing. New York: Harry N. Abrams, 2002.