- Open Access
- Authors : Khitish Kumar Gadnayak, Pankajini Panda, Niranjan Panda
- Paper ID : IJERTV2IS100173
- Volume & Issue : Volume 02, Issue 10 (October 2013)
- Published (First Online): 07-10-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Survey on Image Dehazing Methods
Khitish Kumar Gadnayak, Asst. Prof., Comp. Sc & Engg., C.V. Raman College of Engg.
Pankajini Panda, Asst. Prof., Information Technology, C.V. Raman College of Engg.
Niranjan Panda, Asst. Prof., Comp. Sc & Engg., I.T.E.R., SOA University
Abstract
One of the important problems in image processing is the restoration of images corrupted by various degradations. Images of outdoor scenes captured in bad weather contain atmospheric degradations such as haze, fog and smoke, caused by particles in the atmospheric medium that absorb and scatter light as it travels from the scene point to the observer. The presence of these atmospheric particles results in a decay of colour and contrast in the captured image, which can make it difficult to detect objects in the hazy scene. With recent developments in computer vision, it has become possible to improve such outdoor hazy images and remove the haze from them. This paper describes different haze removal methods that remove the haze from captured hazy images to recover haze-free images of better quality.
Keywords:
airlight, attenuation, scene radiance, transmission map
1. Introduction
Images of outdoor scenes are often degraded by the presence of particles and water droplets in the atmosphere. Haze, fog and smoke are such atmospheric phenomena, caused by atmospheric absorption and scattering. When a scene is captured by a camera in bad weather, the irradiance received by the camera from a scene point is attenuated along the line of sight, and the incoming light flux is blended with light scattered from all other directions, called the airlight. Since the amount of scattering depends on the distance of the scene point from the camera, the degradation is spatially variant. As a result, there is a decay in the colour and contrast of the captured image.
Haze removal, or dehazing, is highly desirable in computer vision applications and in computational photography. Removing the haze layer from the input hazy image can significantly increase the visibility of the scene, and the haze-free image is usually much more visually pleasing, whereas many vision algorithms suffer from low-contrast scene radiance. At the same time, the haze or fog produced by atmospheric particles carries information about the scene depth. In image processing, haze removal is a challenging task because the haze depends on the unknown depth; for a single input hazy image the problem is under-constrained. Many researchers have therefore adopted methods that rely on multiple or additional images.
2. Theoretical Background
This section describes the atmospheric models that characterize the degradation in the captured image.
2.1 Atmospheric Scattering Model
Attenuation and airlight are the two atmospheric scattering models that describe how the atmosphere affects the light reaching the observer.
2.1.1 Attenuation Model
The attenuation model describes how light is attenuated as it travels from the object or scene point to the observer or camera. Due to atmospheric scattering, a fraction of the light is removed from the incident ray. The unscattered light, called the direct transmission, reaches the observer. The attenuated irradiance received at the observer is given by
E_dt(d, λ) = E_∞(λ) ρ(λ) e^{−β(λ)d} / d²

where
d = depth of the scene point from the observer
λ = wavelength
β(λ) = scattering coefficient of the atmosphere
E_∞(λ) = horizon brightness
ρ(λ) = function describing the reflectance properties of the scene point
Figure 1: Attenuation and airlight
In Figure 1, the solid arrow shows the attenuated irradiance, i.e., the direct transmission.
2.1.2 Airlight Model
This model describes how the atmosphere acts as a light source by scattering the environmental illumination towards the observer. In Figure 1, the dotted arrow represents the airlight. The light scattered into the line of sight travels the entire path length d, the distance from the scene point to the observer or camera.
The irradiance due to the airlight is given by
E_a(d, λ) = E_∞(λ) (1 − e^{−β(λ)d})
The total irradiance received is the sum of irradiance due to the direct attenuation and the airlight.
E(d, λ) = E_dt(d, λ) + E_a(d, λ)
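As a rough illustration of how these two terms behave, the sketch below evaluates the attenuation and airlight components for a single wavelength band, treating β, E_∞ and ρ as known scalars; the numeric values and the single-band simplification are assumptions made only for this sketch.

```python
# A minimal sketch of the attenuation and airlight terms for a single wavelength
# band; beta (scattering coefficient), E_inf (horizon brightness) and rho (scene
# reflectance) are treated as known scalars with purely illustrative values.
import numpy as np

def direct_transmission(d, beta, E_inf, rho):
    # E_dt(d) = E_inf * rho * exp(-beta * d) / d^2
    return E_inf * rho * np.exp(-beta * d) / d ** 2

def airlight(d, beta, E_inf):
    # E_a(d) = E_inf * (1 - exp(-beta * d))
    return E_inf * (1.0 - np.exp(-beta * d))

d = np.linspace(1.0, 500.0, 5)                    # scene depths in metres (illustrative)
E_total = direct_transmission(d, 0.01, 1.0, 0.5) + airlight(d, 0.01, 1.0)
print(E_total)  # the attenuation term decays with depth while the airlight term grows
```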
2.2 Haze Formation Model
In computer vision, the widely used model for the formation of a hazy image is given as:

I(x) = J(x) t(x) + A (1 − t(x))

where x indicates the position of a pixel, I is the observed hazy image, J is the scene radiance (the haze-free image to be restored), A is the global atmospheric light, and t is the medium transmission, describing the portion of the light that is not scattered and reaches the camera. The transmission is a scalar value between 0 and 1 for each pixel and directly encodes the depth information of the scene objects. For a uniform medium the transmission can be expressed as t(x) = e^{−β d(x)}, where β is the scattering coefficient of the medium and d(x) is the scene depth; this indicates that the scene radiance is attenuated exponentially with the scene depth.
Figure 2: Haze Model
Basically, the image received by the observer is a combination of an attenuated version of the scene radiance with an additive haze layer, where the atmospheric light represents the colour of the haze. The ultimate goal of haze removal is to recover the scene radiance J, the atmospheric light A and the transmission t from the observed hazy image, so image dehazing is an under-constrained problem. Haze removal, or image dehazing, is a highly desired computer vision capability that tries to remove the haze layer from the captured hazy image to obtain a better, haze-free image. For a colour or grayscale image, the transmission coefficient (or alpha map), the atmospheric light (or airlight) and the scene radiance (the haze-free image) are all unknown. Therefore, if the airlight and the transmission coefficient can be found, the scene radiance can easily be recovered.
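Once estimates of the transmission map t and the atmospheric light A are available (by whatever method), the scene radiance follows by inverting the haze model. The sketch below assumes an RGB image scaled to [0, 1] and uses a lower bound t0 on the transmission to avoid amplifying noise; both the scaling and the value of t0 are illustrative choices, not part of any particular method.

```python
# A minimal sketch of recovering the scene radiance J from the haze model
# I(x) = J(x) t(x) + A (1 - t(x)), assuming t and A have already been estimated.
import numpy as np

def recover_radiance(I, t, A, t0=0.1):
    """I: HxWx3 hazy image in [0, 1]; t: HxW transmission map; A: length-3 airlight."""
    t = np.clip(t, t0, 1.0)[..., None]   # lower bound avoids division by near-zero values
    J = (I - A) / t + A                  # invert the haze formation model
    return np.clip(J, 0.0, 1.0)
```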
3. Dehazing Methods
Under bad weather conditions the atmosphere contains fog and haze particles, so the colour and contrast of the captured images are drastically degraded. The degradation level increases with the distance from the camera to the object, so removing haze from captured hazy images requires estimating the depth of the haze. Early haze removal methods use multiple input images of the same scene taken under bad weather conditions, while recent methods require only a single input image to estimate the depth.
The work of Schechner et al. [3] is based on the fact that the airlight scattered by atmospheric particles is usually partially polarized, although polarization filtering alone cannot remove the haze effects. The paper models the image formation process taking the polarization effect of atmospheric scattering into account, and inverts this process to obtain a haze-free image. The observed image is composed of two unknown components: the scene radiance in the absence of haze, and the airlight (the ambient light scattered towards the viewer). To recover these two unknowns, two independent images are required; they can easily be obtained because the airlight is usually partially polarized. The approach does not require the weather conditions to change and can be applied instantly. Images taken through a polarizer exploit the polarization filtering commonly used in photography through haze, and the orientation of the polarization filter improves the contrast of the input image.
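A rough sketch of this inversion is given below. It assumes two registered images taken at the two extreme polarizer orientations, a known degree of polarization p of the airlight and a known airlight value at the horizon A_inf; in the actual method these quantities are estimated (for example from sky regions), so the fixed inputs and the simple clipping here are assumptions of the sketch only.

```python
# A rough sketch of polarization-based dehazing in the spirit of [3]; I_best and
# I_worst are registered images at the two extreme polarizer orientations, p is
# the (assumed known) degree of polarization of the airlight, and A_inf is the
# (assumed known) airlight at the horizon.
import numpy as np

def dehaze_polarization(I_best, I_worst, p, A_inf, t0=0.1):
    I_total = I_best + I_worst                         # total scene intensity
    A = np.clip((I_worst - I_best) / p, 0.0, None)     # estimated airlight component
    t = np.clip(1.0 - A / A_inf, t0, 1.0)              # transmission implied by the airlight
    return np.clip((I_total - A) / t, 0.0, 1.0)        # scene radiance estimate
```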
Tan [8] proposed a single image dehazing method based on the optical model

I(x) = L_∞ ρ(x) e^{−β d(x)} + L_∞ (1 − e^{−β d(x)})

In this model the first term is the direct attenuation and the second term is the airlight. I is the observed image, x is the 2D spatial location, L_∞ is the atmospheric light (airlight), which is assumed to be globally constant, ρ is the reflectance of the object in the image, β is the atmospheric attenuation coefficient, and d is the distance between the object and the observer. Tan then expressed the model in terms of light chromaticity and colour vector components. The approach is based on the assumption that clear-day images have higher contrast than images affected by bad weather; relying on this assumption, Tan removed the haze by maximizing the local contrast of the restored image.
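The contrast maximization idea can be illustrated per patch: for a fixed airlight, each candidate transmission yields a restored patch, and the candidate giving the highest local contrast is kept. The brute-force search, the simple sum-of-squared-gradients contrast measure and the omission of the chromaticity formulation and MRF smoothing used by Tan are all simplifications of this sketch.

```python
# A simplified, per-patch sketch of contrast maximization in the spirit of [8];
# the MRF regularization and chromaticity handling of the original are omitted.
import numpy as np

def local_contrast(patch):
    # Sum of squared differences between neighbouring pixels: a simple contrast measure.
    gx = np.diff(patch, axis=1)
    gy = np.diff(patch, axis=0)
    return (gx ** 2).sum() + (gy ** 2).sum()

def best_transmission_for_patch(patch, A, candidates=np.linspace(0.1, 1.0, 19)):
    """patch: hxwx3 hazy patch in [0, 1]; A: length-3 airlight; returns the t maximizing contrast."""
    best_t, best_c = 1.0, -np.inf
    for t in candidates:
        J = np.clip((patch - A * (1.0 - t)) / t, 0.0, 1.0)   # restored patch for this candidate
        c = local_contrast(J)
        if c > best_c:
            best_t, best_c = t, c
    return best_t
```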
Fattal [9] introduced a new approach for single image dehazing that produces a haze-free image from a single input hazy image. Fattal formulated a refined image formation model that relates the surface shading to the transmission function. The haze formation model can be described as

I(x) = t(x) J(x) + (1 − t(x)) A

where t(x) is the transmission coefficient, a scalar given by t(x) = e^{−β d(x)}. In the model the first term is the direct attenuation and the second term is the airlight; I(x) is the observed input hazy image, J(x) is the haze-free image (scene radiance), and A is the global atmospheric colour vector. Fattal grouped pixels belonging to the same surface, assuming the same reflectance and a constant surface albedo, and used Independent Component Analysis to separate the surface shading from the transmission. The key idea of this work is to resolve the airlight-albedo ambiguity by assuming that the surface shading and the scene transmission are statistically uncorrelated.
He et al. [10] proposed the dark channel prior, a statistical prior derived from outdoor haze-free images, for single image dehazing. It has been observed that in most local regions that do not cover the sky, some pixels (called dark pixels) have very low intensity in at least one colour (RGB) channel. In hazy images the intensity of these dark pixels in that channel is mainly contributed by the airlight, so the dark pixels can be used to estimate the haze transmission. After the transmission map is estimated for each pixel, it is combined with the haze imaging model and refined with the soft matting technique [18] to recover a high-quality haze-free image.
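A minimal sketch of this pipeline is given below, assuming an RGB image in [0, 1]; the patch size, the ω and t0 values and the simple atmospheric light estimate are common choices rather than fixed parts of the method, and the soft matting refinement [18] is omitted.

```python
# A minimal sketch of the dark channel prior of He et al. [10] for an RGB image
# in [0, 1]; the soft matting / guided filter refinement step is omitted.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over colour channels, followed by a local minimum filter.
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, patch=15, frac=0.001):
    # Pick the brightest values among the top 0.1% of dark-channel pixels.
    dark = dark_channel(img, patch)
    n = max(1, int(dark.size * frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].max(axis=0)      # one value per colour channel

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    A = estimate_atmospheric_light(img, patch)
    # Transmission estimate: t(x) = 1 - omega * dark_channel(I / A)
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]              # lower bound keeps J stable in dense haze
    # Invert the haze model: J = (I - A) / t + A
    return np.clip((img - A) / t + A, 0.0, 1.0)
```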
Ancuti et al. [13] observe that haze is an atmospheric phenomenon that degrades the visibility of outdoor images captured under bad weather conditions, and describe a fusion-based dehazing approach for a single input image. Two inputs are derived from the original hazy image by applying a white balance and a contrast enhancing procedure. The fusion technique then estimates perceptually motivated weight maps (luminance, chromaticity and saliency) for each pixel; these weight maps control the contribution of each input to the final result. To minimize the artifacts introduced by the weight maps, a multiscale approach combines Laplacian pyramid representations of the inputs with Gaussian pyramids of the normalized weights. Because the approach works per pixel, it yields a greater improvement than patch-based methods, which assume a constant airlight within each patch.
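A single-scale sketch of the fusion idea follows: two inputs are derived from the hazy image (a gray-world white-balanced version and a simple contrast-enhanced version), per-pixel weight maps are computed and normalized, and the inputs are blended. The particular derived inputs, the weight choices and the absence of the Laplacian/Gaussian pyramid blending are simplifications of this sketch, not the exact procedure of [13].

```python
# A single-scale sketch of fusion-based dehazing in the spirit of [13]; the
# derived inputs, weight choices and lack of multiscale pyramid blending are
# simplifications made for illustration.
import numpy as np
from scipy.ndimage import laplace

def derive_inputs(I):
    wb = np.clip(I * (I.mean() / (I.mean(axis=(0, 1)) + 1e-6)), 0, 1)  # gray-world white balance
    ce = np.clip(2.0 * (I - I.mean()) + I.mean(), 0, 1)                # simple contrast enhancement
    return [wb, ce]

def weight_map(img):
    lum = img.mean(axis=2)
    w_contrast = np.abs(laplace(lum))            # local contrast (Laplacian magnitude)
    w_saturation = img.std(axis=2)               # per-pixel colour saturation
    return w_contrast + w_saturation + 1e-6

def fuse(I):
    inputs = derive_inputs(I)
    weights = np.stack([weight_map(im) for im in inputs])
    weights /= weights.sum(axis=0, keepdims=True)            # normalize weights per pixel
    return np.clip(sum(w[..., None] * im for w, im in zip(weights, inputs)), 0, 1)
```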
Chu et al. [15] build on the observation that the degradation caused by atmospheric haze depends on the depth of the scene. Pixels within one region of the image tend to have similar depth; based on the assumption that the degradation level within each region is the same and every pixel in it has a similar transmission, the input image is first segmented into regions. The proposed method consists of five phases: image segmentation, atmospheric light estimation, a cost function for estimating the transmission map, refinement of the transmission using soft matting [18], and finally recovery of the scene radiance. The input image is segmented into regions with the mean shift segmentation algorithm. After segmentation, the atmospheric light is estimated using the dark channel prior of He et al. [10], and the cost function of [19] is used to estimate the transmission map for each region. The transmission map is then refined by soft matting [18], and the desired haze-free image is recovered by restoring the scene radiance as in the dark channel prior approach [10].
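A rough sketch of the region-wise transmission estimation is shown below. It assumes a segmentation label map is already available from some external segmentation step (standing in for the mean shift segmentation of [15]), uses a dark-channel-style per-pixel estimate averaged within each region instead of the cost function of [19], and omits the soft matting refinement [18].

```python
# A rough sketch of region-wise transmission estimation in the spirit of [15],
# assuming `labels` comes from an external segmentation step; the cost function
# of [19] and the soft matting refinement [18] are omitted.
import numpy as np
from scipy.ndimage import minimum_filter

def region_transmission(I, A, labels, omega=0.95, patch=15):
    """I: HxWx3 hazy image in [0, 1]; A: length-3 airlight; labels: HxW integer region map."""
    dark = minimum_filter((I / A).min(axis=2), size=patch)   # dark channel of the normalized image
    t_pixel = 1.0 - omega * dark
    t = np.empty_like(t_pixel)
    for r in np.unique(labels):
        mask = labels == r
        t[mask] = t_pixel[mask].mean()       # one transmission value per region
    return np.clip(t, 0.1, 1.0)
```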
Xie et al. [16] describe a dehazing process that combines the dark channel prior with multi-scale retinex, and focus on obtaining the transmission map of the scene automatically and quickly. The input hazy image is transformed from the RGB colour space to YCbCr, and the multi-scale retinex algorithm is applied, with some adjustment, to the luminance component to obtain a pseudo transmission map. The obtained pseudo transmission map is very similar to the transmission map obtained with the dark channel prior of He et al. [10]. Combining the haze imaging model with this retinex-based estimate, a high-quality haze-free image is recovered.
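A sketch of multi-scale retinex applied to the luminance channel is given below; the scales, the equal weights and the min-max normalization used to turn the output into a pseudo transmission map are common MSR choices assumed for this sketch rather than the exact adjustment used in [16].

```python
# A sketch of multi-scale retinex (MSR) on the luminance channel in the spirit
# of [16]; the scales, equal weights and min-max normalization are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(lum, sigmas=(15, 80, 250)):
    eps = 1e-6
    out = np.zeros_like(lum, dtype=float)
    for s in sigmas:
        # Single-scale retinex: log(image) - log(Gaussian-blurred image)
        out += np.log(lum + eps) - np.log(gaussian_filter(lum, s) + eps)
    return out / len(sigmas)                 # equal weights over the scales

def pseudo_transmission(ycbcr_y):
    msr = multi_scale_retinex(ycbcr_y.astype(float))
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-6)   # normalize to [0, 1]
    return np.clip(msr, 0.1, 1.0)            # used in place of the DCP transmission map
```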
Schaul et al. [17] focus on the fact that in outdoor photography distant objects appear blurred and lose colour and visibility because of atmospheric haze. Their key idea is to fuse a visible and a near-infrared image of the scene to obtain a dehazed result, using a multi-resolution approach with an edge-preserving filter to minimize the artifacts produced during dehazing. Both the visible and the near-infrared image are decomposed with an edge-preserving multi-resolution decomposition based on the weighted least squares (WLS) optimization framework of Farbman et al. [20]. A pixel-level fusion criterion that maximizes local contrast is then used to improve the regions that contain haze.
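A heavily simplified sketch of the visible/near-infrared luminance fusion is given below: a Gaussian blur stands in for the WLS edge-preserving decomposition of [20], each image is split into a base and a detail layer, and the detail layers are fused with a per-pixel maximum-magnitude rule as a crude maximum-contrast criterion. All of these substitutions are assumptions of the sketch, not the method of [17].

```python
# A simplified sketch of visible/NIR luminance fusion in the spirit of [17]:
# a Gaussian blur replaces the WLS decomposition of [20], and a per-pixel
# maximum-magnitude rule replaces the original fusion criterion.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_visible_nir(vis_lum, nir_lum, sigma=5.0):
    """vis_lum, nir_lum: HxW luminance images in [0, 1] of the same registered scene."""
    base_v, base_n = gaussian_filter(vis_lum, sigma), gaussian_filter(nir_lum, sigma)
    detail_v, detail_n = vis_lum - base_v, nir_lum - base_n
    # Keep the detail with the larger magnitude: hazy regions usually retain more NIR detail.
    detail = np.where(np.abs(detail_n) > np.abs(detail_v), detail_n, detail_v)
    return np.clip(base_v + detail, 0.0, 1.0)      # recombine with the visible base layer
```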
4. Conclusion
In this paper we have described how the haze layer present in a captured image depends on the scene depth and is therefore spatially variant. We have also reviewed different methods by which the haze can be estimated from captured hazy images; after estimating the depth (transmission) map and using the image formation model, a better, improved haze-free image can be recovered.
References

[1] S. G. Narasimhan and S. K. Nayar, "Vision in bad weather," Proc. IEEE International Conference on Computer Vision, pp. 820-827, Sep. 1999.
[2] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 598-605, 2000.
[3] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 325-332, 2001.
[4] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision, vol. 48, pp. 233-254, 2002.
[5] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713-724, June 2003.
[6] S. G. Narasimhan and S. K. Nayar, "Interactive deweathering of an image using physical models," Workshop on Color and Photometric Methods in Computer Vision, 2003.
[7] S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1984-1991, 2006.
[8] R. Tan, "Visibility in bad weather from a single image," Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2008.
[9] R. Fattal, "Single image dehazing," ACM Transactions on Graphics (SIGGRAPH), vol. 27, no. 3, p. 72, 2008.
[10] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1957-1963, 2009.
[11] X. Lv, W. Chen, and I. F. Shen, "Real-time dehazing for image and video," Pacific Conference on Computer Graphics and Applications, pp. 62-69, 2010.
[12] J.-P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," Proc. IEEE 12th International Conference on Computer Vision, pp. 2201-2208, 2009.
[13] C. O. Ancuti, C. Ancuti, and P. Bekaert, "Effective single image dehazing by fusion," Proc. 17th IEEE International Conference on Image Processing (ICIP), 2010.
[14] F. Guo et al., "Automatic image haze removal based on luminance component," Proc. 6th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM), 2010.
[15] C.-T. Chu and M.-S. Lee, "A content-adaptive method for single image dehazing," Advances in Multimedia Information Processing - PCM 2010, Springer Berlin Heidelberg, pp. 350-361, 2011.
[16] B. Xie, F. Guo, and Z. Cai, "Improved single image dehazing using dark channel prior and multi-scale retinex," Proc. International Conference on Intelligent System Design and Engineering Application (ISDEA), vol. 1, 2010.
[17] L. Schaul, C. Fredembach, and S. Susstrunk, "Color image dehazing using the near-infrared," Proc. 16th IEEE International Conference on Image Processing (ICIP), 2009.
[18] A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 61-68, 2006.
[19] J. P. Oakley and H. Bu, "Correction of simple contrast loss in color images," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 511-522, 2007.
[20] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 1-10, 2008.