Improved Weight Map Guided Single Image Dehazing

DOI : 10.17577/IJERTV5IS031014


  • Open Access
  • Total Downloads : 301
  • Authors : Manali Dalvi, Kamlesh Shah, Dhanusha Shetty, Amey Vanmali
  • Paper ID : IJERTV5IS031014
  • Volume & Issue : Volume 05, Issue 03 (March 2016)
  • DOI : http://dx.doi.org/10.17577/IJERTV5IS031014
  • Published (First Online): 26-03-2016
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License


Manali Dalvi

Department of Electronics and Telecommunication University of Mumbai

Mumbai, India

Kamlesh Shah

Department of Electronics and Telecommunication University of Mumbai

Mumbai, India

Dhanusha Shetty

Department of Electronics and Telecommunication University of Mumbai

Mumbai, India

Amey Vanmali

Department of Electronics and Telecommunication University of Mumbai

Mumbai, India

Abstract: Haze is an outdoor phenomenon caused by very fine, widely dispersed particles of dust, smoke and light vapor that give the air an opalescent appearance. Haze reduces the transparency of air because of scattering, which in turn reduces visibility. In bad weather, the size of these particles increases, increasing scattering. Pictures taken under such hazy weather conditions develop a non-uniform layer of haze which reduces the clarity and obscures the details of the image. This paper presents a technique for recovering the clarity of such hazy images when only a single picture is available as input. Since dealing with a single image is difficult, our method splits the degraded image into two separate inputs, one of which is white balanced and the other contrast enhanced. These two inputs are then operated on by three different weight maps. In order to fuse the overall information and preserve the small details along with the brightness, our method uses local entropy, visibility and saturation as weight maps. Further, our method uses a multi-scale fusion approach with Laplacian and Gaussian pyramids to produce the final dehazed output. Our method aims to improve the existing multi-scale fusion technique for single image dehazing. Our technique yields results with comparable brightness and contrast and better visibility in both the background and foreground regions. It also enhances the colorfulness of the picture, which reveals more details. The quantitative analysis also underlines the improved quality of our results.

Keywords: Laplacian-Gaussian Pyramids, Single Image Dehazing, Weight Maps.

  1. INTRODUCTION

    The pictures captured in the presence of haze are often degraded. Haze is caused by the scattering of sunlight by various particles present in the air. This results in poor visibility of the image, which is a common problem in many applications of computer vision. For example, many computer vision algorithms are based on the assumption that the input image to be processed is the exact scene radiance, i.e. that there is no interference from haze. However, this is not always the case; therefore, efficient haze removal algorithms have to be implemented.

    Several dehazing techniques have been proposed to date. Haze removal techniques require estimating the depth of the haze; this is because the level of degradation due to haze increases as the distance from the lens to the object increases. Earlier works required multiple input images of the same scene or additional equipment. The multiple-image dehazing method based on polarization takes multiple images of the same hazy scene through different polarization filters: if the polarization filter attached to the camera is rotated, the same scene yields images with different degrees of polarization. Fang et al. [1] introduced a method that considers the polarization effects caused by both the airlight and the object radiance. To dehaze an image using the polarization technique, Fang presented a new polarization hazy imaging model.

    Yu et al. [2] proposed a dehazing method using the atmospheric scattering model. First, the atmospheric veil is coarsely approximated, and the estimate is then smoothed using a fast bilateral filtering approach that preserves edges. The complexity of this approach is a linear function of the number of input image pixels.

    Fang et al. [3] proposed a dehazing algorithm for uniform bad weather conditions that takes multiple input images. This method uses the atmospheric scattering model, and its basic requirement is to form an overdetermined system from the hazy images. The hazy images are then compared with images taken on clear days to obtain the transmission and the global airlight. The estimated transmission and global airlight are then applied over the local hazy area.

    Fattal [4] developed a method for haze removal that uses a single hazy input image. By assuming a separation between object shading and medium transmission, the transmission is estimated from the albedo of the input image. The main aim of this method is to resolve the airlight-albedo ambiguity under the assumption that the surface shading and the scene transmission are uncorrelated. Fattal's method shows good results for scenes with varied scene albedos. However, the method fails for densely hazy images or in cases where the assumption does not hold true.

    He et al. [5] proposed a method to dehaze outdoor images using a dark channel prior. The prior is based on the observation that most local regions in an image, excluding the sky regions, contain pixels having very low intensity in at least one color channel. This prior is used with the haze imaging model to determine the transmission map, which is then refined using a soft matting algorithm. This technique works well for densely hazy images, and very few halo artifacts are observed in the results. The depth map obtained by the method facilitates a better understanding of the scene. However, the dark channel prior fails when the surface object is similar in appearance to the atmospheric light.
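To make the prior concrete, here is a minimal NumPy sketch of the dark channel computation; the patch size and function name are our own choices for illustration, not code from He et al.'s implementation:

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark channel: per-pixel minimum over the color channels,
    followed by a local minimum over a patch x patch neighborhood."""
    min_channel = rgb.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_channel, pad, mode='edge')
    out = np.zeros(min_channel.shape)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Toy example: if one channel is dark everywhere, the dark channel is zero.
img = np.random.default_rng(2).random((6, 6, 3))
img[..., 0] = 0.0
assert dark_channel(img).max() == 0.0
```

For haze-free outdoor images this map tends toward zero almost everywhere outside the sky, which is what makes it usable for estimating the transmission.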

    Tan [6] introduced a method for recovering the visibility of an image degraded by bad weather conditions. The method uses two observations: first, clear-day images have more contrast than degraded images, and second, the airlight variation in an image tends to be smooth. Based on these observations, the proposed technique maximizes the local contrast of the image, estimates the direct attenuation and airlight, and thus recovers a fog-free image.

  2. HAZE IMAGING MODEL

    While capturing an outdoor scene, the light received by the camera after reflection from the object is not exactly the same as the light that was transmitted. The reason behind this phenomenon is that when the light travels through the atmosphere, it is influenced by the aerosols present in the atmosphere [7], due to which part of the light is scattered in different directions while some part is absorbed by dust, smoke and other dry particles, resulting in poor visibility of the scene. Haze reduces the scene radiance, which results in reduced contrast and clarity of the scene. Also, the appearance of the scene depends on the depth of the haze present as well as on the distance between the scene and the observer.
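As an illustrative sketch (our own toy example, assuming the standard haze formation model discussed in this section), this degradation can be simulated in NumPy as:

```python
import numpy as np

def add_haze(scene, depth, airlight=0.9, beta=1.2):
    """Simulate haze on a clean image with values in [0, 1]:
    I_h = I * t + A * (1 - t), with transmission t = exp(-beta * d)."""
    t = np.exp(-beta * depth)              # transmission decays with distance
    if scene.ndim == 3:                    # broadcast over color channels
        t = t[..., np.newaxis]
    return scene * t + airlight * (1.0 - t)

# Toy example: a uniform gray scene whose right column lies twice as far away.
scene = np.full((2, 2), 0.2)
depth = np.array([[1.0, 2.0],
                  [1.0, 2.0]])
hazy = add_haze(scene, depth)
assert hazy[0, 1] > hazy[0, 0]   # distant pixels are pulled toward the airlight
```

The airlight and attenuation coefficient here are arbitrary illustrative values; real scenes have unknown depth, which is exactly why single image dehazing is hard.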

    The most widely used model for this work is the image degradation model, or haze formation model, proposed by McCartney [8]. According to this model, only a portion of the light is received by the observer, while the rest is attenuated along its atmospheric path. Under these circumstances, it is normally observed [7] that a hazy image of a scene is a linear combination of the direct attenuation (D) and the airlight (A):

        I_h(x) = I(x) * t(x) + A * (1 - t(x))        (1)

    where I_h is the image degraded by haze, I is the scene radiance or haze-free image, A is the airlight constant, also known as veiling light, and t(x) is the medium transmission, indicating the portion of the light that is not scattered and reaches the camera. The transmission t in a homogeneous atmosphere can be calculated as:

        t(x) = exp(-β * d(x))        (2)

    where β is the attenuation coefficient due to scattering and d represents the distance to the observer.

  3. RELATED WORK

    Haze is an atmospheric phenomenon which diminishes the perceptibility of outdoor images. Ancuti et al. [10] proposed a fusion-based method that uses a single hazy input image to restore the visibility of such images. The main idea behind this fusion-based strategy is that two input images are first derived from the original hazy image such that the true visibility of each region of the scene is recovered in at least one of the derived inputs. These derived inputs depict the haze-free regions, and their aim is to increase the visual details of the hazy regions. The first derived input is obtained by white balancing the original input image, with the aim of a natural rendition of the image, and the second input is derived using an enhanced-contrast technique in which the average luminance of the entire image is simply subtracted from the original hazy image. Additionally, this fusion-based technique involves the estimation of weight maps for each pixel, which capture the perceptual qualities of the image. These weight maps decide how each of the derived inputs contributes to the final result. Different weight maps such as luminance, chromaticity and saliency are computed and designed in a per-pixel manner so as to relate to the spatial details of the degraded regions. Finally, a multi-scale fusion technique is used wherein the derived inputs, represented using a Laplacian pyramid, and the normalized weights, represented using a Gaussian pyramid, are fused together.

    1. Derived Inputs

      This approach proceeds as follows. The first derived input is a white balanced version of the input hazy image. White balancing is an important processing step that eliminates unrealistic color casts in an image so that objects which appear white in person are rendered white in the photo. The white balancing algorithms that can be used are shades-of-gray, grey-edges or grey-world.

      The second derived input is the contrast enhanced version of the input image. This is done in order to increase the contrast of the hazy regions by stretching the intensity values. The following equation is applied to obtain the contrast enhanced output for each input pixel x:

        I_2(x) = γ * (I(x) - Ī)        (3)

      where I_2 is the second derived input, the value of γ is 2.5, I is the input image and Ī is the average luminance of the image I.

      Contrast enhancement significantly amplifies the visibility in hazy parts of an image, but sometimes to the extent that fine details are lost or destroyed.

    2. Weight Maps

      The derived inputs alone cannot restore a haze-free image, and this necessitates the need for weight maps.

      Luminance measures the visibility at each pixel by assigning low values to regions with low visibility and high values to regions with good visibility. This weight map is given by the following equation:

        W_L^k = sqrt( (1/3) * [ (R^k - L^k)^2 + (G^k - L^k)^2 + (B^k - L^k)^2 ] )        (4)

      Here, k indexes the derived inputs, R, G and B represent the color channels of the derived inputs and L represents the luminance.

      However, this map reduces the color information and the global contrast, which is why two more weight maps are assigned: chromaticity and saliency.

      Chromaticity is the weight map assigned to control the saturation gain in the output image and thus increase its colorfulness. It is given by the equation:

        W_C^k(x) = exp( -(S^k(x) - S_max^k)^2 / (2 * σ^2) )        (5)

      Here, the default value of σ (the standard deviation) is 0.3, S is the saturation value of the derived input and S_max is the maximum of the saturation range (equal to 1 for highly saturated pixels).

      Visual saliency is that quality of an image which highlights the presence of an object, or rather of a pixel with respect to its surroundings, thus grasping our attention. Here, the saliency algorithm of Achanta et al. [9] is used for saliency computation. The equation for the same is given as:

        W_S^k(x) = || I_μ^k - I_ωhc^k(x) ||        (6)

      where I_μ^k represents the arithmetic mean pixel value of the input and I_ωhc^k is a blurred version of the input image that excludes the high frequency components. Each resulting weight map is then normalized to get a consistent weight map.

    3. Multi-scale Fusion

      Image fusion, in the simplest terms, is the process wherein information from multiple images is combined to form a single image. Pixel-level image fusion refers to the processing of various combinations of detail information gathered through different sources for a better understanding of a scene. Multi-scale fusion provides an optimized solution for fusing different images, since it is fast and operates in a per-pixel computational manner. In multi-scale fusion, the inputs are first weighted by the corresponding three weight maps, i.e. luminance, chromaticity and saliency, to enhance the detected features in the image. The weights are normalized in order to have the same scale as the inputs. But direct application of these normalized weight maps leads to naive blending, which introduces halo artifacts, i.e. a light line around sharp edges of an image. To avoid this problem, the multi-scale technique is used, in which the Laplacian operator is applied over the two derived inputs, i.e. the white balanced and contrast enhanced versions. Band-pass filtering along with downsampling gives the Laplacian pyramid, which enhances the details, especially at the edges. Also, a Gaussian pyramid is used in the multi-scale technique, estimated for each normalized weight map. The Gaussian pyramid is constructed in the same way as the Laplacian pyramid, but a low-pass filter is used instead of the band-pass filter. Finally, the Laplacian inputs and Gaussian weights are fused at each level separately to form a fused pyramid, which is given by:

        F_l(x) = Σ_k G_l{ W^k(x) } * L_l{ I^k(x) }        (7)

      where l represents the number of levels of the pyramid, whose default value is l = 5, L{I} is the Laplacian version of the input I, and G{W} represents the Gaussian version of the normalized weight map W. This fused image is the dehazed version of the original image.

      Thus, Ancuti et al. [10] proposed a method which is less complex than the previous techniques, since it uses only a single degraded image. Compared to other single image dehazing methods, such as those proposed by Fattal [4] and Tarel and Hautiere [11], which have certain artifacts in their results, this method is less prone to artifacts. It maintains the actual color of the scene better than the previous techniques.

      But this method does not work well when the haze in the image is non-homogeneous. Also, the distant objects or regions are not completely devoid of haze.

  4. PROPOSED WORK

    A fusion strategy is controlled by the fundamental characteristics of the original image, which are nothing but the weight maps, and it depends on our selection of the inputs and the weights. Thus, weight maps play a very important role in deciding the extent to which an image can be dehazed successfully, which is why we introduce new weight maps to test the competence of the fusion technique.

    Initially, we start with the same approach as in the paper proposed by Ancuti et al. [10]. The flow chart for the same is as shown in Fig. 1.

Fig. 1: Flow of our method

    The first derived input is a white balanced input, obtained using a plain white balancing operation. However, an additional input is required to improve the contrast of the hazy regions in the image, since white balancing alone cannot solve the visibility problem. Thus, the second input is a contrast enhanced version of the original hazy image. As a replacement for the weight maps previously discussed in the technique of Ancuti et al. [10], we define new weights, namely the local entropy, visibility and saturation of the derived inputs, as shown in Fig. 1. These weight maps serve the purpose of conserving regions with good perceptibility.
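As a minimal sketch of the two derived inputs, assuming a grey-world white balance and the enhanced-contrast formula with γ = 2.5 (function names and the clipping to [0, 1] are our own choices, not the authors' code):

```python
import numpy as np

def grey_world_white_balance(img):
    """First derived input: grey-world white balance. Each channel is
    scaled so its mean matches the mean over all channels."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (channel_means.mean() / channel_means), 0.0, 1.0)

def contrast_enhanced(img, gamma=2.5):
    """Second derived input: I2(x) = gamma * (I(x) - mean luminance)."""
    return np.clip(gamma * (img - img.mean()), 0.0, 1.0)

img = np.random.default_rng(0).random((4, 4, 3))   # stand-in hazy image
wb = grey_world_white_balance(img)
ce = contrast_enhanced(img)
```

Note how the contrast-enhanced input sacrifices pixels darker than the mean luminance (they clip to zero); the fusion step is what recovers those regions from the other input.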

    The Local Entropy weight map: The local entropy of a color image is related to the complexity contained in a given neighborhood, typically defined by a structuring element. This weight map can detect subtle variations in the local gray level distribution. It splits the image into disjoint regions and then treats each region as a separate information entity. If within an image we consider a small neighborhood window ω_k of size M_k x N_k, then the entropy for this window can be given as:

        W_E = - Σ_{j=0}^{G-1} P_j * log2(P_j)        (8)

    where P_j = n_j / (M_k * N_k) represents the probability of gray level j in the neighborhood ω_k of an image having G gray levels, and n_j denotes the number of pixels with gray level j in window ω_k.

    The Visibility weight map: The human visual system forms the basis for this weight map. Visibility reflects the clarity of an image. The visibility [12] for an M x N image F is defined as:

        W_V^k = (1 / (M * N)) * Σ_{m=1}^{M} Σ_{n=1}^{N} |F(m, n) - μ| / μ^α        (9)

    where k indexes the derived input, F(m, n) denotes the gray value at pixel position (m, n), α is a visual constant and μ is the mean intensity value of the derived input image.

    The Saturation weight map: The saturation weight map measures the intensity of color in an image, such that an image with very low saturation approaches a black and white image. This map [13] is calculated as the standard deviation at each pixel across the R, G and B channels. It is given by the equation:

        W_S^k(x, y) = sqrt( (1/3) * [ (R^k - m)^2 + (G^k - m)^2 + (B^k - m)^2 ] )        (10)

    where m = (R^k + G^k + B^k) / 3 and k indexes the derived input.

    These three features extracted from each of the derived inputs are fused together using the same fusion technique proposed by Ancuti et al. [10]. To preserve the most important details, the derived inputs are weighted by the obtained maps in the fusion process. Further, the strategy is designed in a multi-scale manner. In order to mitigate the artifacts introduced by the weight maps, pyramidal representations are used, namely the Laplacian and Gaussian pyramids. These representations help recover the fine details of the image that were lost in the process above and also preserve the quality of the image.

V. RESULTS AND DISCUSSION

We estimate the efficiency of the fusion based dehazing method using the new weight maps by testing this method on several hazy images and comparing the results with those obtained by the fusion based technique proposed by Ancuti et al. [10]. The weight maps used by Ancuti et al. evaluate the desired perceptual quality for every pixel, which decides how the input contributes to the final result. Our weight maps further refine the visual quality of the image in comparison to the previously proposed weight maps.

The initial results that we attained were for fixed values of the local entropy neighborhood N_e, the visual constant α and the visibility neighborhood N. But for deriving optimum results, we go for image-specific parameter values, varying the parameters to attain the values that yield the best results. Fig. 3 shows the result obtained for N_e = 3, α = 0.65 and N = 3.
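A rough NumPy sketch of the three proposed weight maps (local entropy, visibility, saturation) follows; the gray-level quantization count and the exact per-image form of the visibility measure are our assumptions for illustration, not the authors' code:

```python
import numpy as np

def local_entropy_map(gray, ne=3, levels=16):
    """Entropy of the quantized gray-level histogram in an ne x ne
    window around each pixel (ne plays the role of N_e)."""
    q = np.minimum((gray * levels).astype(int), levels - 1)
    pad = ne // 2
    padded = np.pad(q, pad, mode='edge')
    out = np.zeros(gray.shape)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + ne, j:j + ne]
            p = np.bincount(window.ravel(), minlength=levels) / window.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def visibility_measure(gray, alpha=0.65):
    """Mean of |F(m, n) - mu| / mu**alpha over the image."""
    mu = gray.mean()
    return float(np.abs(gray - mu).mean() / (mu ** alpha))

def saturation_map(rgb):
    """Per-pixel standard deviation across the R, G, B channels."""
    m = rgb.mean(axis=2, keepdims=True)
    return np.sqrt(((rgb - m) ** 2).mean(axis=2))

flat = np.full((5, 5), 0.5)         # constant image: zero entropy, zero visibility
assert local_entropy_map(flat).max() == 0.0
assert visibility_measure(flat) == 0.0
```

As expected, a constant image carries no local information, so both the entropy and visibility responses vanish, while a gray image stacked into three identical channels has zero saturation.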

Fig. 2: Derived inputs (Input 1, Input 2) and their corresponding weight maps (Local Entropy, Visibility, Saturation). (Image courtesy: Raanan Fattal.) The rightmost column shows the comparison of the result of Ancuti et al. [10] and our result.

Fig. 3: Comparison of results for the Train image with Ancuti et al. [10]. Columns (L to R): Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method. The first row shows the full image; the second and third rows show zoomed portions of the Train image.

Fig. 4: Comparison of results for the Canon image with Ancuti et al. [10]. Columns (L to R): Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method. The first row shows the full image; the second and third rows show zoomed portions of the Canon image.

The zoomed portion showing the tree trunk reveals background details with better clarity and a greater amount of dehazing in our result than in the result of Ancuti et al., which has a blurred appearance. Also, the portion showing the tracks appears darkened at the track edges in the result of Ancuti et al., while clear details can be seen in our result. Fig. 4 shows the result obtained for N_e = 5, α = 0.65 and N = 5. Here we can see that the foreground and background are better enhanced and better dehazed in our result than in the output of Ancuti et al. The background shows greater haze removal in our result, and finer details can be easily detected in the foreground. Our result also retains a good amount of color information. This is mainly due to the saturation weight map, which helps considerably in enhancing the colors of the image.

Fig. 5-8 show our results obtained for different parameter values and the direct comparison of our results with the outputs of Ancuti et al. The method of Ancuti et al. is computationally efficient and produces results that are less prone to artifacts than other single image dehazing techniques. Our method has the additional advantage of producing better results for images with a non-homogeneous haze layer.
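The multi-scale blending of Eq. (7) used by both methods can be sketched as follows. This is a simplified stand-in, not either paper's implementation: 2x2 block averaging replaces the usual 5-tap Gaussian filter, nearest-neighbour upsampling replaces interpolation, and all helper names are our own:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (stand-in for low-pass + decimate)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsample back to a given shape."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    return [g[l] - upsample(g[l + 1], g[l].shape) for l in range(levels - 1)] + [g[-1]]

def fuse(inputs, weights, levels=3):
    """F_l = sum_k G_l{W_k} * L_l{I_k}, then collapse the fused pyramid."""
    fused = None
    for img, w in zip(inputs, weights):
        li = laplacian_pyramid(img, levels)
        gw = gaussian_pyramid(w, levels)
        terms = [gw_l * li_l for gw_l, li_l in zip(gw, li)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    out = fused[-1]                       # start from the coarsest level
    for l in range(levels - 2, -1, -1):   # add back the detail bands
        out = upsample(out, fused[l].shape) + fused[l]
    return out

img = np.random.default_rng(3).random((8, 8))
restored = fuse([img], [np.ones_like(img)])
assert np.allclose(restored, img)
```

With a single input and an all-ones weight map the pyramid collapse reconstructs the input exactly, which is a useful sanity check on the implementation; dehazing uses two derived inputs with three normalized weight maps each.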

Fig. 5 shows the results for the Mountain image, where our result is obtained for the values N_e = 5, α = 1 and N = 9. As can be seen from Fig. 5, the result of Ancuti et al. shows oversaturation in the leftmost portion of the image, resulting in a dark appearance and thereby not only losing the details that were supposed to be recovered but also losing those that actually existed in the original hazy image. Our result, on the contrary, is able to recover and restore the fine details to a greater extent with no loss of existing information, and thus accomplishes better dehazing.

Fig. 6 shows the results for the Sweden image, where our result is obtained for the values N_e = 3, α = 0.35 and N = 3. In Fig. 6, our result shows increased clarity, a greater degree of dehazing and enhanced saturation compared to the result obtained by Ancuti et al., without compromising the details of the image.

Fig. 7 shows the result for the image named Trees, where our result is obtained for the values N_e = 7, α = 1.8 and N = 5. In Fig. 7, our result shows that fine details such as the leaves and branches of the trees in the first row are recovered with more clarity, and the natural color of the tree trunk is retained.

Fig. 5: Results for the Mountain image. L to R: Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method.

Fig. 6: Results for the Sweden image. L to R: Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method.

Fig. 7: Results for the Trees image. L to R: Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method.

Fig. 8: Results for the Pumpkins image. L to R: Original image (Image courtesy: Raanan Fattal), result of Ancuti et al. [10], result of our method.

Also, our result for the image named Pumpkins in Fig. 8, obtained for the values N_e = 3, α = 1 and N = 3, shows that the background (sky region) and the grassy regions containing the pumpkins are significantly dehazed while keeping the true colors intact.

TABLE I: Quantitative analysis of results of Ancuti et al. [10] and results of our method

             |       Result of Ancuti et al. [10]      |          Result of our method
  Images     | Local Entropy | Variance  | Saturation  | Local Entropy | Variance  | Saturation
  -----------|---------------|-----------|-------------|---------------|-----------|-----------
  Canon      |    3.8386     |  0.0082   |   0.1035    |    3.9652     |  0.0100   |   0.1519
  Train      |    4.3962     |  0.0229   |   0.1148    |    4.4789     |  0.0217   |   0.1339
  Mountain   |    3.5552     |  0.0212   |   0.1977    |    3.9277     |  0.0220   |   0.3345
  Sweden     |    4.6835     |  0.0392   |   0.2647    |    4.6845     |  0.0423   |   0.3165
  Trees      |    4.5353     |  0.0169   |   0.3377    |    4.5176     |  0.0181   |   0.4358
  Pumpkins   |    4.8196     |  0.0562   |   0.4597    |    4.7653     |  0.0601   |   0.5010

Next, we go for the quantitative assessment of our results. The parameters on which we base our analysis are local entropy, variance and saturation. Local entropy tells us how well the significant details and information of an image are being retrieved, variance gives us the contrast information and saturation helps to interpret the perceived color information in an image. Table I shows the quantitative comparison of the results obtained by Ancuti et al. and our method.

We compute these parameters for the different results obtained, and the recorded values are shown in Table I. It can be seen that our results for Canon, Mountain and Sweden exceed those of Ancuti et al. in all three parameters. However, the results for Pumpkins and Trees lag behind in local entropy, and the result for Train lags behind in variance. This shows that our results are consistently better in terms of saturation (color information) and better in terms of variance (contrast information) for all but one image. Our method is simpler, since no separate depth maps have to be estimated, and it has the added advantage of performing exceptionally well for images characterized by non-uniform haze, where the method proposed by Ancuti et al. [10] fails. Our method, however, may not work equally well for all images. Under certain circumstances, a compromise has to be made between the picture information and the true color information of the image: retaining the exact details means losing some of the saturation of the image, and vice versa. We have obtained the above results by implementing our method in MATLAB.
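The paper does not spell out the exact formulas behind Table I, so the following Python sketch (the actual experiments were run in MATLAB) shows one plausible, simplified way to compute the three measures: histogram entropy of the gray image, gray-level variance, and mean per-pixel RGB standard deviation:

```python
import numpy as np

def image_metrics(rgb):
    """Three global quality measures in the spirit of Table I
    (simplified, assumed forms, not the authors' exact code)."""
    gray = rgb.mean(axis=2)
    # Entropy of a 32-bin gray-level histogram.
    hist, _ = np.histogram(gray, bins=32, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    variance = float(gray.var())
    m = rgb.mean(axis=2, keepdims=True)
    saturation = float(np.sqrt(((rgb - m) ** 2).mean(axis=2)).mean())
    return entropy, variance, saturation

e, v, s = image_metrics(np.random.default_rng(0).random((8, 8, 3)))
assert e > 0.0 and v >= 0.0 and s >= 0.0
```

Higher values of all three measures on a dehazed output are then read as more recovered detail, more contrast and more color, matching the interpretation used in the discussion above.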

VI. CONCLUSION

In this paper, we have proposed a new, improved fusion based strategy for recovering haze-free images from images plagued by atmospheric haze. As weight maps play a vital role in deciding which characteristics influence the final appearance of the output image, we modify the existing weight maps and introduce new weights, namely local entropy, visibility and saturation, with the sole purpose of overcoming the drawbacks seen in the results of Ancuti et al. [10] and restoring an effective color balance along with haze removal. The visual and quantitative inspection indicates that our results have a better visual appearance, with improved dehazing and vibrant colors.

ACKNOWLEDGMENT

We wish to express our heartfelt thanks to Prof. Ashish Vanmali from the Department of Electronics and Telecommunication Engineering at Vidyavardhini's College of Engineering and Technology, University of Mumbai, for his insightful suggestions, valuable guidance and constructive criticism.

REFERENCES

  1. Shuai Fang, XiuShan Xia, Xing Huo and ChangWen Chen, "Image dehazing using polarization effects of objects and airlight", Optics Express, vol. 22, pp. 19523-19537, 2014.

  2. Jing Yu, Chuangbai Xiao and Dapeng Li, "Physics-based fast single image fog removal", in Proc. IEEE 10th International Conference on Signal Processing, 2010.

  3. Faming Fang, Fang Li, Xiaomei Yang, Chaomin Shen and Guixu Zhang, "Single image dehazing and denoising with variational method", in Proc. IEEE International Conference on Image Analysis and Signal Processing (IASP), pp. 219-222, 2010.

  4. R. Fattal, "Single image dehazing", ACM Trans. Graph. (SIGGRAPH), vol. 27, no. 3, p. 72, 2008.

  5. K. He, J. Sun and X. Tang, "Single image haze removal using dark channel prior", in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1956-1963.

  6. R. T. Tan, "Visibility in bad weather from a single image", in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1-8.

  7. Nitish Gundawar and V. B. Baru, "Improved single image dehazing by fusion", vol. 3, issue 5, May 2014.

  8. E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles. New York, NY, USA: Wiley, 1976.

  9. R. Achanta, S. Hemami, F. Estrada and S. Süsstrunk, "Frequency-tuned salient region detection", in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1597-1604.

  10. C. O. Ancuti and C. Ancuti, "Single image dehazing by multi-scale fusion", IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271-3282, 2013.

  11. N. Hautiere, J.-P. Tarel and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration", in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1-8.

  12. Shutao Li, James T. Kwok and Yaonan Wang, "Multifocus image fusion using artificial neural networks", Pattern Recognition Letters, vol. 23, no. 8, pp. 985-997, June 2002.

  13. T. Mertens, J. Kautz and F. Van Reeth, "Exposure fusion", in Proc. Pacific Conference on Computer Graphics and Applications, pp. 382-390, 2007.
