Removing the Rain Streaks from The Video Using MCA Algorithm

DOI : 10.17577/IJERTV2IS110881



Yugashini.K1, Nivodhini.M.K2

  1. PG Scholar, Department of Computer Science, KSR College of Engineering, Tiruchengode, Tamil Nadu.

  2. Assistant Professor, Department of Computer Science, KSR College of Engineering, Tiruchengode.

Removing rain streaks from video is a challenging task and has been researched broadly, but removing rain streaks from a single image is still largely an open problem. In this paper, the morphological component analysis (MCA) algorithm is used to remove rain streaks from a single image. Rather than directly applying a conventional image decomposition method, the image is first decomposed into low-frequency and high-frequency parts using a bilateral filter. The high-frequency part is then separated into a rain component and a non-rain component using the MCA algorithm, and the rain component is removed from the image. Hence a clear, rain-free image is produced as output.

Key words: rain streaks, structural, bilateral filter.

Various weather conditions such as rain and snow make photography of time-varying scenes difficult, whether in images or in videos. Dynamic weather particles such as rain and snow have been handled using a correlation model that captures the dynamics of rain together with a physics-based motion blur model that characterizes the photometry of rain. This paper is among the first to specifically address the problem of removing rain streaks from a single image. Rain streak removal in a single image falls into the category of image noise removal or image restoration. Adjusting the exposure time and depth of field can soften the effect of rain without altering the background scene. Denoising with the K-SVD dictionary training algorithm can remove noise from an image, but the resulting image quality is not clear enough. Therefore, morphological component analysis (MCA) is used to remove the rain streaks. Instead of applying a conventional image decomposition technique directly, the MCA-based method first smooths the image using a bilateral filter and splits it into low-frequency and high-frequency parts. The high-frequency part is then decomposed into rain and non-rain components based on patch extraction, dictionary learning, and sparse coding. Rain streaks are thus effectively removed from the image, producing a clear output image.

This paper is organized as follows: Section 2 reviews related work on removing rain streaks. Section 3 describes the existing approaches. The proposed constructions are presented in Sections 4, 5, and 6. Experimental results are reported in Section 7. Finally, Section 8 concludes this paper.

  1. The soft voting algorithms [3, 7, 6] have focused on the removal of rain streaks in video sequences captured by static cameras. These algorithms detect and remove rain streaks by exploiting the high temporal correlation between consecutive frames. They assume that rain streaks shift position between consecutive frames and detect rain-streak regions by observing the temporal brightness change. Then they restore rain-free pixels in each frame by taking the average pixel values of the previous and following frames; a minimal sketch of this idea is given below. In dynamic weather such as rain and snow, the different shapes and movements of the particles make the problem more complicated [8, 20]. Due to the random distribution and complex appearance of rain streaks, classical image denoising algorithms are not suitable for restoring rain-affected images. Barnum et al. [1] proposed an alternative approach based on frequency analysis of rain streaks. Assuming that rain streaks in an entire video sequence have similar shapes and orientations, they detected the rain streaks by selecting repeatedly occurring frequency components throughout the video sequence. All of these algorithms [1, 4, 6] can remove rain streaks effectively, but they require the temporal information in video sequences and are therefore not applicable to still images.
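As a rough illustration of the temporal-averaging idea used by these video-based methods, the sketch below detects pixels whose brightness spikes in a single frame and replaces them with the average of the previous and next frames. The function name, the threshold value, and the static-camera grayscale assumption are illustrative choices, not the cited authors' implementations.

```python
import numpy as np

def remove_rain_temporal(frames, thresh=10.0):
    """Toy temporal rain removal for a static-camera video.

    frames: sequence of grayscale frames of shape (H, W).
    A pixel whose brightness in the current frame exceeds both the previous
    and the next frame by `thresh` is treated as a rain streak and replaced
    by the average of its temporal neighbours.
    """
    frames = np.asarray(frames, dtype=np.float32)
    out = frames.copy()
    for t in range(1, len(frames) - 1):
        prev_f, cur, next_f = frames[t - 1], frames[t], frames[t + 1]
        # Rain streaks cause a short-lived positive brightness spike.
        rain_mask = (cur - prev_f > thresh) & (cur - next_f > thresh)
        out[t][rain_mask] = 0.5 * (prev_f[rain_mask] + next_f[rain_mask])
    return out
```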

  2. Removal of rain streaks has recently received much attention. Most existing approaches are based on detecting and removing rain streaks in a video. Rain removal methods can be grouped into the following categories: i) vision-based rain detection and removal, ii) image noise removal, and iii) single-image-based rain removal.

    1. VISION-BASED RAIN DETECTION AND REMOVAL

      The effect of rain can be softened by adjusting the exposure time and depth of field without modifying the background. The appearance characteristics of rain streaks have been exploited to find and delete rain streaks from videos. Later, a model of a single rain streak in image space was developed to detect rain streaks, which improves the accuracy of the restored image. A rainy input image is shown in Fig. 1(a) and the improved output image in Fig. 1(b).

      Fig. 1(a) input image, 1(b) output image

    2. IMAGE NOISE REMOVAL

      A denoising algorithm is used to remove unstructured or structured noise from an image. The image denoising approach relies on sparse and redundant representations over learned dictionaries in order to obtain an effective and promising result. The K-SVD dictionary training algorithm finds a sparse decomposition of the image signal over a redundant dictionary. Using the K-SVD dictionary training algorithm together with the denoising algorithm, an image corrupted by noise can be restored to a high-quality image, and the decomposition can be cast effectively in a Bayesian treatment. However, dictionary-based image denoising methods are not effective for removing rain streaks. A noisy image is shown in Fig. 2(a); the denoised result is shown in Fig. 2(b). A sketch of this patch-based dictionary denoising follows the figure.

      Fig. 2(a) noisy image, 2(b) denoised image
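K-SVD itself is not reproduced here; as a hedged stand-in, the sketch below uses scikit-learn's MiniBatchDictionaryLearning to learn a patch dictionary and OMP to sparse-code the patches, which follows the same dictionary-based denoising idea. The patch size, number of atoms, and sparsity level are illustrative parameters.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise_with_dictionary(noisy, patch_size=(8, 8), n_atoms=100, n_nonzero=2):
    """Denoise a grayscale image by sparse coding over a learned patch dictionary."""
    patches = extract_patches_2d(noisy, patch_size)
    X = patches.reshape(len(patches), -1).astype(np.float32)
    mean = X.mean(axis=1, keepdims=True)
    X -= mean                                   # remove the DC component of each patch
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    dico.fit(X[::4])                            # learn the dictionary on a subsample for speed
    codes = dico.transform(X)                   # OMP sparse codes for every patch
    recon = codes @ dico.components_ + mean     # approximate each patch from few atoms
    patches = recon.reshape(patches.shape)
    return reconstruct_from_patches_2d(patches, noisy.shape)
```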

    3. SINGLE IMAGE BASED RAIN REMOVAL

      Video-based approaches exploit the temporal correlation between consecutive frames. Because of the varying parameters of consumer cameras, the performance of video-based approaches is significantly degraded, and rain streaks in each frame further reduce the accuracy when the background is non-stationary, making it difficult to rely on neighborhood pixels. In a single image, rain streaks degrade gradient-based processing, since they produce gradients in a similar direction, refer to Fig. 3(b). SIFT/SURF matching suffers from unreliable interest points caused by rain streaks, as shown in Fig. 3(c). A HOG-based process is therefore used to detect the rain streaks accurately.

      Fig. 3(a) input image, 3(b) gradient image, 3(c) removed rain streaks

  3. To remove rain streaks from a single image, a framework based on the morphological component analysis (MCA) algorithm is used, in which rain streaks are removed through sparse coding and dictionary learning. The sparse coding technique identifies a small number of non-zero or significant coefficients corresponding to atoms in a dictionary. A dictionary is learned from training patches taken from the HF part of the image itself and is then divided into two sub-dictionaries by HOG feature-based dictionary atom grouping. Using a bilateral filter, the image is first decomposed into low-frequency and high-frequency parts; dictionary learning and sparse coding within the MCA framework then split the HF part into a rain component and a non-rain component. In this MCA-based image decomposition, the geometric (non-rain) component obtained from the HF part is combined with the LF part to yield the rain-removed version of the image; a sketch of the first decomposition step is given after this paragraph. Traditional MCA algorithms are performed directly on an image in the pixel domain. Since the method operates on a single frame, no temporal or motion information between successive frames is required. The MCA-based decomposition is automatic, which makes it easy to remove rain streaks, and no extra sample images are needed for dictionary learning because the method is fully automatic and self-contained. To enrich the dictionary, exemplar patches from a set of non-training images can additionally be used.
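A minimal sketch of the first step of this framework, the bilateral LF/HF split, assuming OpenCV is available; the filter parameters are illustrative and not the values used by the authors.

```python
import cv2
import numpy as np

def split_lf_hf(image, d=9, sigma_color=75, sigma_space=75):
    """Split a grayscale image into low-frequency (LF) and high-frequency (HF)
    parts with a bilateral filter, as the first step of the rain-removal pipeline."""
    img = image.astype(np.float32)
    lf = cv2.bilateralFilter(img, d, sigma_color, sigma_space)  # edge-preserving smoothing
    hf = img - lf      # residual: edges, texture and rain streaks
    return lf, hf
```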

    1. IMAGE DECOMPOSITION USING MCA

      Morphological component analysis represents each morphological component of an image with its own dictionary of atoms. An image G of M pixels is assumed to be a superposition of S layers, denoted by $G=\sum_{s=1}^{S}G_s$, where $G_s$ denotes the sth component, such as the geometric or textural component of G. To decompose the image G into $\{G_s\}_{s=1}^{S}$, the MCA algorithm iteratively minimizes the energy function:

      $$E\bigl(\{G_s\},\{\theta_s\}\bigr)=\frac{1}{2}\Bigl\|G-\sum_{s=1}^{S}G_s\Bigr\|_2^{2}+\lambda\sum_{s=1}^{S}E_s\bigl(G_s,\theta_s,D_s\bigr)$$

      where $\theta_s\in\mathbb{R}^{M_s}$ denotes the sparse coefficients corresponding to $G_s$ with respect to the dictionary $D_s$, $\lambda$ is a regularization parameter, and $E_s$ is the energy term defined according to the type of $D_s$. To decompose an image into geometric and textural components (Fig. 4), wavelets and curvelets have been used for the geometric component, with the global discrete cosine transform (DCT) as an alternative, while the local DCT serves as a dictionary for representing the textural component; the local dictionary represents the sparse coefficients of patches extracted from the image. The choice of dictionaries and the related parameter settings lead to different kinds of image decomposition, and the global-DCT and local-DCT components are each represented sparsely and independently. A sketch of such a two-dictionary decomposition is given after the figure.

      Fig. 4(i) structure image, 4(ii) texture image
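The following is a minimal, illustrative MCA-style decomposition assuming the geometric component is sparse in a global DCT and the textural component in a local block DCT (the text above mentions wavelets and curvelets as alternatives for the geometric part). The block-coordinate loop with a decreasing soft threshold is the standard MCA strategy; the function names, block size, and threshold schedule are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def _soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def _block_dct(img, b, inverse=False):
    """Orthonormal DCT (or its inverse) applied independently to b x b blocks.
    For simplicity, assumes the image dimensions are multiples of b."""
    out = np.zeros_like(img)
    for i in range(0, img.shape[0] - b + 1, b):
        for j in range(0, img.shape[1] - b + 1, b):
            blk = img[i:i + b, j:j + b]
            out[i:i + b, j:j + b] = (idctn(blk, norm='ortho') if inverse
                                     else dctn(blk, norm='ortho'))
    return out

def mca_decompose(G, n_iter=30, b=8):
    """Toy MCA split of G into a 'geometric' layer (sparse in a global DCT)
    and a 'textural' layer (sparse in a local block DCT)."""
    G = G.astype(np.float64)
    G_geo = np.zeros_like(G)
    G_tex = np.zeros_like(G)
    lam_max = np.abs(dctn(G, norm='ortho')).max()
    for k in range(n_iter):
        lam = lam_max * (1.0 - k / n_iter)   # linearly decreasing threshold
        # Update the geometric layer: threshold the residual in the global DCT domain.
        r = G - G_tex
        G_geo = idctn(_soft(dctn(r, norm='ortho'), lam), norm='ortho')
        # Update the textural layer: threshold the residual in the local block-DCT domain.
        r = G - G_geo
        G_tex = _block_dct(_soft(_block_dct(r, b), lam), b, inverse=True)
    return G_geo, G_tex
```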

    2. SPARSE CODING AND DICTIONARY LEARNING

      Sparse coding is based on the linear generative model: dictionary atoms are combined in a linear fashion to approximate the input signal. To construct a dictionary D containing the local structures of textures, so that each patch extracted from the textural component of the image can be sparsely represented, a set of available training exemplars $X_i$, $i=1,2,\ldots$ (similar patches extracted from that component) is used, and the dictionary D describing the $X_i$ is learned by solving the following optimization problem:

      $$\min_{D,\{\alpha_i\}}\ \sum_{i}\frac{1}{2}\bigl\|X_i-D\alpha_i\bigr\|_2^{2}+\lambda\bigl\|\alpha_i\bigr\|_1$$

      where $\alpha_i$ denotes the sparse coefficients of $X_i$ with respect to D and $\lambda$ is a regularization parameter. In an online dictionary learning algorithm, the sparse coding step is usually achieved via orthogonal matching pursuit (OMP). Finally, the image decomposition is obtained by the MCA algorithm. A hedged sketch of dictionary learning with OMP sparse coding is given below.
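A hedged sketch of the dictionary learning and OMP sparse coding step on HF patches, using scikit-learn's MiniBatchDictionaryLearning and sparse_encode as stand-ins for the online dictionary learning described above; the patch size, number of atoms, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

def learn_hf_dictionary(hf, patch_size=(16, 16), n_atoms=256, n_nonzero=10):
    """Learn a patch dictionary from the HF image and compute OMP sparse codes
    of every HF patch with respect to it."""
    patches = extract_patches_2d(hf, patch_size)
    X = patches.reshape(len(patches), -1).astype(np.float32)
    # Learn the dictionary on a subsample of patches for speed; each row of D is an atom.
    D = MiniBatchDictionaryLearning(n_components=n_atoms,
                                    random_state=0).fit(X[::10]).components_
    # Sparse coefficients of every patch, obtained with orthogonal matching pursuit.
    codes = sparse_encode(X, D, algorithm='omp', n_nonzero_coefs=n_nonzero)
    return D, codes, patches.shape
```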

      The sparse coding technique identifies a small number of non-zero or significant coefficients corresponding to atoms in a dictionary. The MCA-based rain streak removal framework learns two local sub-dictionaries from training patches extracted from the rainy image itself, so that, without relying on extra rain training samples, a rainy image can be decomposed into a rain component and a geometric component. The reasons are: i) in a rainy image, no portion of the rain or geometric component can be assumed in advance in a global dictionary; ii) in a rainy image the geometric component is mixed with rain streaks, so the image is segmented into local patches to extract rain patches that mainly contribute self-learned rain atoms; iii) different local regions of the image exhibit different characteristics, so rain atoms learned from local patch-based dictionaries compare favorably with those of a global dictionary. Fig. 5(a) illustrates the sparse coding and Fig. 5(b) the dictionary learning; a sketch of the HOG-based atom grouping follows the figures.

      Fig. 5(a) sparse coding, 5(b) dictionary learning
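A minimal sketch of HOG feature-based dictionary atom grouping: each atom is reshaped back to a patch, its HOG descriptor is computed, and K-means with two clusters splits the atoms into two groups. The rule used here to decide which cluster is the rain one (the group with the more tightly clustered descriptors, i.e. a shared dominant edge direction) is a heuristic assumption, not necessarily the authors' criterion.

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def hog_rain_atom_mask(D, patch_size=(16, 16)):
    """Cluster dictionary atoms into two groups by their HOG descriptors and
    return a boolean mask marking the presumed 'rain' atoms, so that D[mask]
    and D[~mask] form the rain / non-rain sub-dictionaries."""
    feats = np.array([hog(atom.reshape(patch_size),
                          orientations=9,
                          pixels_per_cell=(4, 4),
                          cells_per_block=(2, 2))
                      for atom in D])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # Heuristic: rain atoms share one dominant edge direction, so their HOG
    # descriptors form the more tightly clustered of the two groups.
    spreads = [feats[labels == c].std(axis=0).mean() for c in (0, 1)]
    rain_cluster = int(np.argmin(spreads))
    return labels == rain_cluster
```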

    3. RAIN STREAKS REMOVAL FRAMEWORK

      The rain streak removal framework is formulated to remove rain streaks from a single image through image decomposition. A bilateral filter is used to decompose the input image into LF and HF parts: the LF part contains the basic information of the image, whereas the HF part includes the edge/texture information together with the rain streaks. For dictionary training, exemplar patches are extracted from the HF part of the image and HOG feature-based dictionary atom clustering is performed. An end-to-end sketch of the framework is given below.
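Putting the pieces together, the following sketch outlines the whole framework. It reuses the split_lf_hf, learn_hf_dictionary, and hog_rain_atom_mask helpers sketched in the preceding sections, all of which are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.feature_extraction.image import reconstruct_from_patches_2d

def remove_rain_single_image(image):
    """End-to-end sketch: bilateral LF/HF split, self-learned HF dictionary,
    HOG-based atom grouping, and reconstruction of the non-rain HF part."""
    lf, hf = split_lf_hf(image)                        # 1. bilateral LF/HF split
    D, codes, patches_shape = learn_hf_dictionary(hf)  # 2. dictionary + OMP codes
    rain_mask = hog_rain_atom_mask(D)                  # 3. HOG-based atom grouping
    codes[:, rain_mask] = 0.0                          # 4. drop rain-atom contributions
    hf_patches = (codes @ D).reshape(patches_shape)    # re-synthesise non-rain HF patches
    hf_nonrain = reconstruct_from_patches_2d(hf_patches, hf.shape)
    return lf + hf_nonrain                             # 5. recombine with the LF part
```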

  4. A bilateral filter is a non-linear, edge-preserving, noise-reducing smoothing filter; the smoothed image is shown in Fig. 6(ii)(b). The intensity value of every pixel in the image is replaced by a weighted average of intensity values from nearby pixels, with weights based on a Gaussian distribution. Crucially, the weights depend not only on the Euclidean distance between pixels but also on their radiometric differences, so sharp edges are preserved while each pixel is still averaged with its neighbours.

    The bilateral filter is defined as:

    $$I^{\text{filtered}}(x)=\frac{1}{W_p}\sum_{x_i\in\Omega}I(x_i)\,f_r\bigl(\lVert I(x_i)-I(x)\rVert\bigr)\,g_s\bigl(\lVert x_i-x\rVert\bigr)$$

    with the normalization term

    $$W_p=\sum_{x_i\in\Omega}f_r\bigl(\lVert I(x_i)-I(x)\rVert\bigr)\,g_s\bigl(\lVert x_i-x\rVert\bigr)$$

    where $I^{\text{filtered}}$ is the filtered image, $I$ is the original input image to be filtered, $x$ are the coordinates of the current pixel to be filtered, and $\Omega$ is the window centered at $x$. Both weighting functions are Gaussian: $f_r$ is the range kernel for smoothing differences in intensities, and $g_s$ is the spatial kernel for smoothing differences in coordinates. A direct implementation of this formula is sketched after the figure.

    Fig. 6(i) working of the bilateral filter; 6(ii) (a) input image, (b) smoothed image
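For reference, a direct (unoptimised) NumPy implementation of the formula above; the intensities are assumed to be scaled to [0, 1], and the kernel widths are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius=4, sigma_s=3.0, sigma_r=0.1):
    """Direct implementation of the bilateral filter: each output pixel is a
    normalised Gaussian-weighted average of its neighbours, weighted both by
    spatial distance (g_s) and by intensity difference (f_r)."""
    img = img.astype(np.float64)            # intensities assumed in [0, 1]
    h, w = img.shape
    # Spatial (domain) kernel, identical for every pixel.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range (intensity) kernel, recomputed around the centre pixel.
            f_r = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = f_r * g_s
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```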

  5. The distribution of intensity gradients or edge directions is used to describe local object appearance and shape within an image. This is achieved by dividing the image into small connected regions, called cells, and collecting for each cell a histogram of gradient directions over the pixels within the cell. To improve accuracy, the local histograms are contrast-normalized over a larger region of the image, called a block, by computing a measure of the intensity across the block and normalizing all cells within it. This normalization gives better invariance to changes in illumination or shadowing. Since the HOG descriptor operates on local cells, the method is largely invariant to geometric and photometric changes, except for object orientation; such changes would appear only over larger spatial regions (Fig. 7). A minimal example of computing a HOG descriptor follows the figure.

    Fig. 7. Histogram of Oriented Gradients
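A minimal example of computing a HOG descriptor with scikit-image; the cell size, block size, and orientation-bin count shown are common defaults, not values prescribed by the paper, and the random image is a stand-in for a real frame.

```python
import numpy as np
from skimage.feature import hog

img = np.random.rand(64, 64)   # stand-in grayscale image; replace with a real frame
# 8x8-pixel cells, 9 orientation bins per cell, 2x2-cell blocks with L2-Hys
# contrast normalisation, as described in the paragraph above.
features, hog_image = hog(img,
                          orientations=9,
                          pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2),
                          block_norm='L2-Hys',
                          visualize=True)
# `features` is the contrast-normalised histogram vector; `hog_image`
# visualises the dominant gradient orientation of each cell.
```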

  6. The rain streak removal is based on sparse coding and dictionary learning using the MCA algorithm; the result is shown step by step in Fig. 8: (a) is the original image, (b) the rain streaks removed image, (c) and (d) are intermediate results of the proposed algorithm, and (e) is the rain-removed version.

    Fig. 8 (a) original image, (b) rain streaks removed image, (c) and (d) intermediate results of the proposed algorithm, (e) rain removed version

Using the MCA algorithm, rain streaks are removed from a single image. With the help of sparse coding and a dictionary learning algorithm, the MCA-based image decomposition is performed. The dictionary learning method is fully automatic and self-contained, so no extra training samples are required. To further enhance the performance of rain removal, an extended dictionary of non-rain atoms learned from non-rain training images is introduced. The experimental results show that the MCA-based method achieves comparable performance to state-of-the-art video-based rain removal algorithms without using temporal or motion information for rain streak detection and filtering among successive frames. As future work, the visual quality can be further enhanced by improving the sparse coding, dictionary learning, and dictionary partitioning processes.

We wish to thank our Head of Department, Mr. Rajivkannan.

[1] P. C. Barnum, S. Narasimhan, and T. Kanade, "Analysis of rain and snow in frequency space," Int. J. Comput. Vis., vol. 86, no. 2/3, pp. 256–274, Jan. 2010.

[2] J. Bobin, J. L. Starck, J. M. Fadili, Y. Moudden, and D. L. Donoho, "Morphological component analysis: An adaptive thresholding strategy," IEEE Trans. Image Process., vol. 16, no. 11, pp. 2675–2681, Nov. 2007.

[3] J. Bossu, N. Hautière, and J. P. Tarel, "Rain or snow detection in image sequences through use of a histogram of orientation of streaks," Int. J. Comput. Vis., vol. 93, no. 3, pp. 348–367, July 2011.

[4] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., San Diego, CA, Jun. 2005, vol. 1, pp. 886–893.

[5] J. M. Fadili, J. L. Starck, M. Elad, and D. L. Donoho, "MCALab: Reproducible research in signal and image decomposition and inpainting," IEEE Comput. Sci. Eng., vol. 12, no. 1, pp. 44–63, Jan./Feb. 2010.

[6] K. Garg and S. K. Nayar, "Detection and removal of rain from videos," in Proc. IEEE CVPR, June 2004, pp. 528–535.

[7] K. Garg and S. K. Nayar, "When does a camera see rain?," in Proc. IEEE ICCV, Oct. 2005, pp. 1067–1074.

[8] K. Garg and S. K. Nayar, "Vision and rain," Int. J. Comput. Vis., vol. 75, no. 1, pp. 3–27, Oct. 2007.

[9] J. C. Halimeh and M. Roser, "Raindrop detection on car windshields using geometric-photometric environment construction and intensity-based correlation," in Proc. IEEE Intell. Veh. Symp., Xi'an, China, Jun. 2009, pp. 610–615.

[10] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254–1259, Nov. 1998.

[11] Y. Jia, M. Salzmann, and T. Darrell, "Factorized latent spaces with structured sparsity," in Proc. Conf. Neural Inf. Process. Syst., Vancouver, BC, Canada, Dec. 2010, pp. 982–990.

[12] O. Ludwig, D. Delgado, V. Goncalves, and U. Nunes, "Trainable classifier-fusion schemes: An application to pedestrian detection," in Proc. IEEE Int. Conf. Intell. Transp. Syst., St. Louis, MO, Oct. 2009, pp. 1–6.

[13] J. Mairal, F. Bach, and J. Ponce, "Task-driven dictionary learning," IEEE Trans. Pattern Anal. Mach. Intell., to be published.

[14] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, Jun. 1996.

[15] S. Patterson, Photoshop Rain Effect - Adding Rain to a Photo [Online]. Available: http://www.photoshopessentials.com/photo-effects/rain/

[16] M. Roser and A. Geiger, "Video-based raindrop detection for improved image registration," in Proc. IEEE Int. Conf. Comput. Vis. Workshops, Kyoto, Japan, Sep. 2009, pp. 570–577.

[17] M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, "Video-based automatic incident detection for smart roads: The outdoor environmental challenges regarding false alarms," IEEE Trans. Intell. Transp. Syst., vol. 9, no. 2, pp. 349–360, Jun. 2008.

[18] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Trans. Image Process., vol. 15, no. 2, pp. 430–444, Feb. 2006.

[19] J. L. Starck, M. Elad, and D. L. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570–1582, Oct. 2005.

[20] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, "Rain removal in video by combining temporal and chromatic properties," in Proc. IEEE ICME, July 2006, pp. 461–464.
