Traditional Color Video Enhancement Based On Adaptive Filter

DOI : 10.17577/IJERTV1IS9274



Dwitej1, O. Sudhakar2, Dr. V. Sailaja3

1 M.Tech. Student, Godavari Institute of Engineering & Technology, Rajahmundry

2 Assistant Professor, Godavari Institute of Engineering & Technology, Rajahmundry

3 Head of Department (ECE), Godavari Institute of Engineering & Technology, Rajahmundry

Abstract

Video enhancement is a very important pre-processing stage in face detection and face recognition applications, especially when the environment is very dark. In this paper, a color video enhancement technique based on an adaptive filter is proposed. The technique works very effectively for video captured in extremely dark environments as well as under non-uniform lighting, where bright regions are kept unaffected and dark objects in bright backgrounds are enhanced. The algorithm exploits the importance of color information in color image enhancement and uses color space conversion to obtain much better visibility. Experimental results show that the method reduces halo artifacts and color distortion and produces better visibility.

Keywords-Adaptive filter, color video enhancement, HVS, color space conversion, image fusion

I. INTRODUCTION

Video enhancement is a very useful tool in many security and surveillance applications. Nighttime images and video are difficult to understand because they lack background context due to poor illumination. As a real-life example, when you look at an image or video from a traffic camera posted on the web or shown on TV, it is very difficult to tell from which part of town the image was taken, how many lanes the highway has, or what buildings are nearby [1]. INDANE is a robust technique that enhances images taken under non-uniform lighting conditions by enhancing the darker regions of the image while leaving the brighter regions unaffected and restoring natural colors [2]. Retinex [3-5] is an effective technique for color image enhancement that can produce very good enhanced results, but the enhanced image suffers from color distortion and the calculation is complex. Li Tao and Vijayan K. Asari proposed a robust color image enhancement algorithm [6]. Their algorithm can enhance a color image without distortion, but it does not handle the edges of the color image well: it uses a Gaussian filter to estimate the background image, and because the Gaussian kernel is isotropic, the background estimate is inaccurate near edges, resulting in the halo phenomenon.

Considering these two algorithms, a new bio-inspired color image enhancement algorithm was proposed in [7]. A novel algorithm based on the Illuminance-Reflectance Model for Enhancement (IRME) has also been developed and proven very effective for images captured under insufficient or non-uniform lighting conditions [8]. That algorithm is based on luminance perception and processing, achieving dynamic range compression while retaining or enhancing visually important features. Conventional image enhancement techniques, such as global brightness and contrast enhancement, gamma compression, and histogram equalization, are incapable of providing satisfactory results for underexposed or saturated images. Acquiring the background image is important in many color image enhancement technologies, and we also need to estimate the background image in this algorithm. Traditional algorithms consider only the distance and luminance information of pixels when estimating the background image; they all overlook an important source of information in a color image: the color information itself.

This paper is organized as follows. Section II gives background on image enhancement. Section III describes how the luminance and background images are obtained from the video. Section IV describes the adaptive adjustment of the color image. Section V explains the color restoration. The simulation results are presented in Section VI, and concluding remarks are made in Section VII.

II. BACKGROUND

Image enhancement is a very important pre-processing stage in face detection and face recognition applications, especially when the environment is very dark. It is also a very useful tool in many security and surveillance applications. Many conventional image enhancement techniques have been presented, such as automatic gain/offset, non-linear gamma correction, and non-linear transformation of pixel intensity. This section reviews an image enhancement technique based on a logarithmic transformation of the luminance of the pixels in the image. The algorithm consists of independent steps for luminance enhancement with dynamic range compression and for contrast enhancement. The luminance enhancement step takes the maximum color component of each pixel through a nonlinear transformation with dynamic range compression based on a logarithmic approach, while the ratios of the original color bands (R, G and B) are preserved.
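As a hedged sketch of this step (the exact transfer function is not reproduced in this text), luminance enhancement with ratio preservation might look like the following, assuming a simple logarithmic compression and the max(R, G, B) luminance definition described above:

```python
import numpy as np

def enhance_log_luminance(rgb):
    """Log-based luminance enhancement that preserves the R:G:B ratios.

    A minimal sketch of the idea in the text, not the paper's exact
    transfer function: luminance is taken as the maximum color
    component, compressed logarithmically, and the original color
    ratios of each pixel are reapplied.
    """
    rgb = rgb.astype(np.float64)
    lum = rgb.max(axis=2)                             # luminance = max(R, G, B)
    lum_n = lum / 255.0
    # Logarithmic dynamic-range compression of the normalized luminance.
    enhanced = np.log1p(lum_n * 9.0) / np.log(10.0)   # maps [0, 1] -> [0, 1]
    # Reapply the original color ratios so hue is preserved.
    scale = np.where(lum > 0, enhanced * 255.0 / np.maximum(lum, 1e-6), 0.0)
    return np.clip(rgb * scale[..., None], 0, 255).astype(np.uint8)
```

Because the per-pixel scale factor multiplies all three channels equally, the R:G:B ratios of each pixel are unchanged while dark pixels are brightened.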

III. OBTAINING THE LUMINANCE AND BACKGROUND IMAGES FROM VIDEO

First, we read the input video into the computer. Then an image is extracted from the video for background purposes, which will be used in processing the other frames of the video. As the camera is stationary, the background changes little during the capture time; conversely, the moving parts change all the time. We therefore add the frames together to strengthen the background and, at the same time, weaken the moving parts. The luminance and background images are then obtained for each frame as described below.
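The frame-accumulation idea can be sketched as follows; the exact accumulation rule is not given in the text, so simple averaging is assumed:

```python
import numpy as np

def estimate_background(frames):
    """Estimate a static background by averaging video frames.

    A minimal sketch of the idea in the text: with a stationary camera,
    summing many frames reinforces the static background, while moving
    objects, which occupy any given pixel only briefly, are averaged away.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f.astype(np.float64)   # accumulate each frame
    return (acc / len(frames)).astype(np.uint8)
```

With eight frames, for example, a bright moving object that visits a pixel once contributes only one eighth of that pixel's final value.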

The luminance image of each frame is IL(x,y). Subjective luminance is a logarithmic function of the light intensity entering the human eye [9]. We take the logarithm of the original luminance image and then normalize it to obtain the subjective luminance IL.

The color images we usually see are mostly in RGB color space, which synthesizes colors from the three primaries red, green, and blue. Synthesizing all colors from three primaries is not effective in some cases, so the proposed algorithm uses the YUV color space instead of RGB. The importance of the YUV color space is that its brightness image Y and chroma images U, V are separate: Y stands for the luminance, and U and V are the color components.
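The separation of luminance and chroma can be sketched as below; the paper does not state which YUV variant it uses, so the common BT.601 conversion coefficients are assumed here:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an RGB image to YUV.

    Hedged sketch: the paper does not specify its YUV variant, so the
    standard BT.601 analog conversion is assumed. Y carries luminance;
    U and V carry chrominance and are near zero for gray pixels.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luminance
    u = -0.14713 * r - 0.28886 * g + 0.436 * b   # blue-difference chroma
    v = 0.615 * r - 0.51499 * g - 0.10001 * b    # red-difference chroma
    return np.stack([y, u, v], axis=-1)
```

For a gray pixel (R = G = B), Y equals the gray level while U and V are approximately zero, which is exactly the luminance/chroma separation the algorithm relies on.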

In this paper, the background image is obtained by filtering the Y, U, V values in a neighborhood of each pixel (x,y), where:

GR is the distance parameter of the intensity image, computed from the spatial distance between (x,y) and its neighboring pixels (xi, yi);

GI is the distance parameter of the U, V images, computed from the chroma differences between (x,y) and its neighbors;

GC is the scale parameter of the pixel filtering;

σR, σI, σC are the scale parameters, whose values are 20, 30, and 60, respectively.

The subjective luminance image is computed as

IL(x,y) = log(Y(x,y)) / log(255)

where Y(x,y) is the brightness image in YUV color space, U(x,y) and V(x,y) are the chroma images, and I(x,y) is the intensity value at pixel (x,y).

Transforming the RGB color image into YUV color space, we directly obtain the luminance image. Passing the YUV image through the adaptive filter yields the background image, after which the adaptive adjustment is applied as explained in Section IV.
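The adaptive filter itself is not written out in this text, so the following is a hedged sketch assuming a bilateral-style kernel: each background pixel is a weighted average of its neighborhood, with weights that fall off with spatial distance (scale σR), luminance difference (σI), and chroma difference (σC), so that edges between bright and dark regions are not blurred across:

```python
import numpy as np

def adaptive_filter_background(y, u, v, radius=3,
                               sigma_r=20.0, sigma_i=30.0, sigma_c=60.0):
    """Estimate the background image with an edge-preserving adaptive filter.

    Hedged sketch (the paper's exact kernel is not reproduced here):
    a bilateral-style filter whose weights combine spatial distance,
    luminance difference, and chroma difference, using the scale
    parameters 20, 30, 60 quoted in the text.
    """
    h, w = y.shape
    yb = np.zeros_like(y, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            ny = y[i0:i1, j0:j1]
            nu = u[i0:i1, j0:j1]
            nv = v[i0:i1, j0:j1]
            ii, jj = np.mgrid[i0:i1, j0:j1]
            d_space = (ii - i) ** 2 + (jj - j) ** 2          # spatial term
            d_lum = (ny - y[i, j]) ** 2                      # luminance term
            d_chroma = (nu - u[i, j]) ** 2 + (nv - v[i, j]) ** 2  # chroma term
            wgt = np.exp(-d_space / (2 * sigma_r ** 2)
                         - d_lum / (2 * sigma_i ** 2)
                         - d_chroma / (2 * sigma_c ** 2))
            yb[i, j] = (wgt * ny).sum() / wgt.sum()
    return yb
```

Unlike an isotropic Gaussian, the luminance and chroma terms suppress contributions from across strong edges, which is the mechanism the paper credits with reducing halo artifacts.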

IV. ADAPTIVE ADJUSTMENT

The image the human eye perceives is related to the contrast between the image and its background [9]. Adaptive adjustment is used to obtain the local enhancement IE(x,y):

IE(x,y) = α(x,y) · IL(x,y)

where α(x,y) is the adaptive regulation function. IE(x,y) is the locally enhanced image, and the enhanced color image is obtained after color restoration of IE. The regulation function is

α(x,y) = (a·β + b) · w(x,y)

where β is an intensity coefficient determined from the cumulative distribution function (CDF) of the luminance image, w(x,y) is the ratio between the background image and the intensity image, and a and b are constants that can be adjusted to achieve good results.

Let g be the grayscale level at which the CDF of the intensity image reaches 0.1. If more than 90% of all pixels have intensity higher than 190, β is 1; if more than 10% of all pixels have intensity lower than 60, β is 0; otherwise β varies linearly between 0 and 1. The ratio is given by

w(x,y) = IB(x,y) / I(x,y)
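The adjustment above can be sketched as follows. The exact linear ramp for β and the values of the constants a and b are not fixed by the text, so those are assumptions here:

```python
import numpy as np

def adaptive_adjust(i_l, i_b, intensity, a=0.5, b=0.5):
    """Adaptive adjustment IE(x,y) = alpha(x,y) * IL(x,y).

    Hedged sketch of the formulas in the text. The linear ramp for
    beta between the g = 60 and g = 190 bounds, and the constants
    a and b, are assumptions where the source is ambiguous.
    """
    # beta: intensity coefficient from the CDF of the intensity image.
    # g is the grayscale level at which the CDF reaches 0.1.
    hist, _ = np.histogram(intensity, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / intensity.size
    g = int(np.searchsorted(cdf, 0.1))
    if g >= 190:
        beta = 1.0                       # mostly bright image
    elif g <= 60:
        beta = 0.0                       # mostly dark image
    else:
        beta = (g - 60) / 130.0          # assumed linear ramp in between
    # w: ratio between the background image and the intensity image.
    w = i_b / np.maximum(intensity, 1e-6)
    alpha = (a * beta + b) * w           # adaptive regulation function
    return alpha * i_l                   # locally enhanced luminance IE
```

For a dark scene β drops to 0 and the gain reduces to b·w, while for a bright scene β rises to 1 and the gain becomes (a + b)·w, which is how the adjustment adapts to overall scene intensity.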

V. COLOR RESTORATION

Applying the fast Fourier transform (FFT) to IE, we obtain the image I.

The quality of enhanced images is usually evaluated through subjective measurement. For objective assessment, the global mean and the contrast enhancement index are used. The contrast enhancement index compares CM, the average local variance, between the enhanced image and the original image, where C is the local variance of a block. Dividing the image into non-overlapping regular small blocks and calculating the variance of every block yields the local variances; averaging all the local variances gives the CM value.
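The CM computation described above can be sketched as follows. The ratio form of the contrast enhancement index is an assumption, since the defining equation is omitted from this text:

```python
import numpy as np

def mean_block_variance(img, block=8):
    """Average local variance CM over non-overlapping blocks.

    Sketch of the metric in the text: split the image into
    non-overlapping block x block tiles, compute each tile's variance,
    and return the mean of those variances.
    """
    h, w = img.shape
    variances = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = img[i:i + block, j:j + block].astype(np.float64)
            variances.append(tile.var())
    return float(np.mean(variances))

def contrast_enhancement_index(enhanced, original, block=8):
    """CEI as the ratio CM(enhanced) / CM(original) -- an assumed form,
    since the source omits the defining equation."""
    return mean_block_variance(enhanced, block) / mean_block_variance(original, block)
```

A CEI greater than 1 then indicates that the enhancement increased local contrast on average.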

VI. RESULTS

In this section, we present the results of the color video enhancement based on the adaptive filter. The algorithm exploits the importance of color information in color image enhancement and uses color space conversion to obtain much better visibility.

Fig. 1. The first column shows the original frames from the video, the second column the corresponding background images, and the third column the enhanced frames produced by the proposed algorithm.

VII. CONCLUSION

A new color video enhancement algorithm is proposed in this paper. The algorithm is based on human visual perception and a new adaptive filter. The enhanced video has better visibility, the details are clear, and the colors are vivid and natural.

REFERENCES

1. Ramesh Raskar, Adrian Ilie, Jingyi Yu, "Image Fusion for Context Enhancement."

  2. L. Tao and K. V. Asari, An Integrated Neighborhood Dependent Approach for Nonlinear Enhancement of Color Images, IEEE Computer Society International Conference on Information Technology: Coding and Computing ITCC 2004, Las Vegas, Nevada, April 5-7, 2004.

  3. Meylan L, Susstrunk S. High dynamic range image rendering with a retinex-based adaptive filter[J]. IEEE Transactions on Image Processing, 2006, 15(9): 2820-2830.

  4. Funt B, Ciurea F, McCann J. Retinex in MATLAB[J]. Journal of Electronic Imaging, 2004, 13(1): 48-57.

  5. Kimmel R, Elad M, Shaked D, et al. A variational framework for Retinex[J]. International Journal of Computer Vision, 2003, 52(1): 7-23.

6. Li Tao, Vijayan K. Asari, "A Robust Image Enhancement Technique for Improving Image Visual Quality in Shadowed Scenes," Proceedings of the 4th International Conference on Image and Video Retrieval, Springer, Berlin, 2005, vol. 3568, pp. 395-404.

  7. Wang Shou-jue, Ding Xing-hao, Liao Ying-hao, Guo dong-hui, A Novel Bio-inspired Algorithm for Color Image Enhancement, Acta Electronica Sinica, 2008.10, Vol.36, No.10: 1970-1973.(in Chinese)

  8. Tao, L. and Asari, K. V., "An efficient illuminance-reflectance nonlinear video stream enhancement model," IS&T/SPIE Symp. On Elect. Imaging: Real-Time Image Processing III, San Jose, CA, January 15-19, 2006.

  9. Webster M A. Human colour perception and its adaptation[J]. Network: Computation in Neural Systems, 1996, 7(4): 587-634.
