Implementation of Pixel Level Color Image Fusion for Concealed Weapon Detection

DOI : 10.17577/IJERTV3IS070573


Vishnupriya G. L

Signal Processing and VLSI, SBMJCE

Kanakapura, India

K. S. Srinivas

Asst. Prof., Dept. of ECE, SBMJCE

Kanakapura, India

Abstract: Image fusion has various applications in military and law enforcement. Image fusion for Concealed Weapon Detection (CWD) has attracted considerable interest in the military field. In this paper, linear pixel-level image fusion is implemented using the vector valued total variation algorithm (VTVA). It includes decomposition of the covariance matrix of the multispectral bands using Cholesky decomposition. The decomposed data is linearly transformed and mapped. The statistical properties of the resultant fused image can be controlled by the user.

Keywords: Concealed Weapon Detection (CWD), pixel level image fusion, VTVA

  1. INTRODUCTION

    Image fusion [4] is the process of merging two or more images taken from different sensors to form a fused image that is more informative and efficient than any of the source images. It is difficult for a human observer to combine visual information simply by viewing multiple images separately, which is why image fusion is used. Pixel-level image fusion is performed on a pixel-by-pixel basis: it generates a fused image in which the information associated with each pixel is determined from the corresponding pixels of the source images, improving on the individual source images. In general, pixel-level fusion methods can be classified as linear or nonlinear.

    The algorithm proposed in this paper is a linear pixel-level image fusion method that improves the visual quality of the fused image. It is applied to Concealed Weapon Detection (CWD), which aims to detect a weapon or metal object hidden underneath a person's clothing. Research activities are ongoing to improve the quality of the fused image.

  2. VECTOR VALUED TOTAL VARIATION ALGORITHM (VTVA)

    Statistical properties of a multispectral data set with X × Y pixels per channel and K different channels can be explored if each pixel is described by a vector whose components are the individual spectral responses to each multispectral channel,

    \mathbf{f} = [f_1, f_2, \ldots, f_K]^T \quad (1)

    with mean vector given by

    \mathbf{m} = E\{\mathbf{f}\} = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} \mathbf{f}(x, y) \quad (2)

    The mean vector defines the average or expected position of the pixels in the vector space.
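    As a concrete illustration of (1) and (2), the following minimal NumPy sketch (the array shapes and variable names are illustrative assumptions, not from the paper) arranges a K-channel data set as pixel vectors and computes the mean vector:

```python
import numpy as np

# Hypothetical multispectral data set: K channels of X x Y pixels each.
X, Y, K = 64, 64, 3
rng = np.random.default_rng(0)
data = rng.random((K, X, Y))    # data[k] is the k-th spectral band

# Each pixel is a K-dimensional vector f = [f_1 ... f_K]^T; collect all
# X*Y of them as the columns of F.
F = data.reshape(K, X * Y)

# Mean vector m of Eq. (2): the average spectral response over all pixels.
m = F.mean(axis=1)
print("mean vector m =", m)
```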

    1. Covariance matrix

      The correlation between the multispectral bands is described by the covariance matrix. If the off-diagonal elements of the covariance matrix are large, the multispectral bands are strongly correlated with each other, while the diagonal elements are the variances of the individual bands. The covariance matrix is real, symmetric, and positive definite. The covariance matrix C_M is obtained as

      C_M = E\{(\mathbf{f} - \mathbf{m})(\mathbf{f} - \mathbf{m})^T\} \quad (3)

      The correlation coefficient r_{ij} is obtained by dividing each covariance element by the standard deviations of the corresponding multispectral components,

      r_{ij} = \frac{c_{ij}}{\sigma_i \sigma_j} \quad (4)

      The correlation coefficient matrix R_M has as its elements the correlation coefficients between the i-th and j-th multispectral components,

      R_M = \begin{bmatrix} 1 & r_{12} & \cdots & r_{1K} \\ r_{21} & 1 & \cdots & r_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ r_{K1} & r_{K2} & \cdots & 1 \end{bmatrix} \quad (5)

      The covariance matrices C_M and C_N are real and symmetric.
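      A minimal sketch of (3)-(5) in NumPy (the array layout and function name are our own assumptions):

```python
import numpy as np

def covariance_and_correlation(F):
    """Covariance matrix C_M of Eq. (3) and correlation matrix R_M of Eqs. (4)-(5).

    F is a (K, N) array whose columns are the N pixel vectors.
    """
    m = F.mean(axis=1, keepdims=True)
    D = F - m                           # zero-mean pixel vectors
    C_M = (D @ D.T) / F.shape[1]        # C_M = E{(f - m)(f - m)^T}
    sigma = np.sqrt(np.diag(C_M))       # per-band standard deviations
    R_M = C_M / np.outer(sigma, sigma)  # r_ij = c_ij / (sigma_i * sigma_j)
    return C_M, R_M
```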

      Fig. 1. Block diagram of VTVA

      The desired covariance matrix C_N can be obtained as the product of a diagonal matrix and the correlation coefficient matrix,

      C_N = E \, R_M \, E \quad (6)

      where E is a diagonal matrix whose elements are the desired standard deviations of the fused components.
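      For instance, a sketch of (6) under assumed target statistics (the standard deviations and correlation values below are illustrative placeholders, not values from the paper):

```python
import numpy as np

# Target standard deviations of the three fused RGB components; these
# particular numbers are illustrative assumptions.
E = np.diag([40.0, 40.0, 40.0])

# User-chosen correlation structure for the fused components, here nearly
# uncorrelated so the output channels carry largely independent information.
R = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])

C_N = E @ R @ E    # Eq. (6): C_N = E R_M E
```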

    2. Cholesky decomposition


      The matrices C_M and C_N can be transformed into upper triangular matrices Q_M and Q_N, respectively, by means of the Cholesky decomposition. The Cholesky decomposition is applicable only to real, symmetric, positive definite matrices. A real symmetric positive definite matrix P can be decomposed by means of an upper triangular matrix Q so that

      P = Q^T Q \quad (7)

      The factorized C_M and C_N can be written as

      C_M = Q_M^T Q_M \quad (8)

      C_N = Q_N^T Q_N \quad (9)
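      A small NumPy sketch of (7)-(9); note that np.linalg.cholesky returns a lower triangular factor L with P = L L^T, so the upper triangular Q of (7) is its transpose:

```python
import numpy as np

def upper_cholesky(P):
    """Upper triangular Q with P = Q^T Q, per Eq. (7).

    np.linalg.cholesky gives lower triangular L with P = L @ L.T,
    so Q = L.T satisfies Q.T @ Q = L @ L.T = P.
    """
    L = np.linalg.cholesky(P)
    return L.T

# Q_M = upper_cholesky(C_M)   # Eq. (8)
# Q_N = upper_cholesky(C_N)   # Eq. (9)
```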

    3. Transformation matrix

      The transformation matrix transforms the fused multispectral components into an RGB color image. The transformation matrix A depends on the statistical properties of the original data set and can be obtained using the Cholesky decomposition. If N = A M denotes the linear transformation of the original data, the relation between the covariance matrices is

      C_N = A \, C_M \, A^T \quad (10)

      Substituting (8) and (9) in (10),

      Q_N^T Q_N = A \, Q_M^T Q_M \, A^T \quad (11)

      Thus, the transformation matrix can be written as

      A = Q_N^T (Q_M^T)^{-1} \quad (12)

    4. Scaling

      The linearly transformed data must be scaled in order to produce an RGB representation. Scaling maps each transformed component into the range [0, 255],

      N_k' = 255 \, \frac{N_k - \min(N_k)}{\max(N_k) - \min(N_k)} \quad (13)

      where min(N_k) and max(N_k) are the minimum and maximum values of the transformed vector N_k, respectively.
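      Putting (10)-(13) together, a minimal sketch (the helper name vtva_transform is our own; a production implementation would also guard against constant bands where max(N_k) = min(N_k)):

```python
import numpy as np

def vtva_transform(F, C_N):
    """Linearly transform pixel vectors so their covariance becomes C_N,
    then scale each component into [0, 255] as in Eq. (13).

    F   : (K, N) array whose columns are the pixel vectors.
    C_N : (K, K) desired covariance matrix of the fused components.
    """
    m = F.mean(axis=1, keepdims=True)
    D = F - m                             # zero-mean data
    C_M = (D @ D.T) / F.shape[1]          # Eq. (3)
    Q_M = np.linalg.cholesky(C_M).T       # Eq. (8): C_M = Q_M^T Q_M
    Q_N = np.linalg.cholesky(C_N).T       # Eq. (9): C_N = Q_N^T Q_N
    A = Q_N.T @ np.linalg.inv(Q_M.T)      # Eq. (12): A = Q_N^T (Q_M^T)^{-1}
    N = A @ D                             # fused components with covariance C_N
    lo = N.min(axis=1, keepdims=True)     # Eq. (13): map each component
    hi = N.max(axis=1, keepdims=True)     # into the displayable range
    return 255.0 * (N - lo) / (hi - lo)
```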

  3. SIGNAL TO NOISE RATIO

    The signal-to-noise ratio (SNR) is calculated as the ratio of the RMS value of the reference input to the RMS value of the difference between the reference input and the mapped output image.

    The SNR of the fused image at the output can be calculated as

    \mathrm{SNR} = 20 \log_{10} \frac{\sqrt{\sum_x \sum_y I_{\mathrm{ref}}(x, y)^2}}{\sqrt{\sum_x \sum_y \left( I_{\mathrm{ref}}(x, y) - I_{\mathrm{out}}(x, y) \right)^2}} \ \mathrm{dB} \quad (14)
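    A direct NumPy transcription of (14) (the function name is our own):

```python
import numpy as np

def snr_db(reference, fused):
    """SNR of Eq. (14): ratio of the RMS of the reference input to the RMS
    of the difference between reference and fused image, in decibels."""
    reference = reference.astype(np.float64)
    fused = fused.astype(np.float64)
    signal_rms = np.sqrt(np.mean(reference ** 2))
    error_rms = np.sqrt(np.mean((reference - fused) ** 2))
    return 20.0 * np.log10(signal_rms / error_rms)
```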

    TABLE I. SNR VALUE IN DECIBELS VERSUS BIT LENGTH OF THE FUSED IMAGE

    Bit length | SNR of previous work | SNR of proposed system
    12-bit     | 77.20 dB             | 72.8407 dB
    16-bit     | 98.15 dB             | 73.1117 dB

    The SNR values of the previous work and of the proposed method for 12-bit and 16-bit fused images are shown in Table I.

  4. EXPERIMENTAL RESULTS

  1. Linear transformation

    The linear transformation of (12) maps the input multispectral components onto the output RGB components.
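    To make the experimental flow concrete, a hedged sketch of the full pipeline (the inputs are synthetic stand-ins for the co-registered visual and infrared bands; it assumes the vtva_transform helper sketched in Section 2 is in scope):

```python
import numpy as np

# Synthetic stand-ins for registered visual (R, G) and infrared bands;
# real CWD inputs would be co-registered images of identical size.
X, Y = 128, 128
rng = np.random.default_rng(1)
bands = rng.random((3, X, Y))            # [visual R, visual G, infrared]

F = bands.reshape(3, X * Y)              # pixel vectors as columns
# User-chosen target covariance (illustrative values, per Eq. (6)).
C_N = np.diag([40.0 ** 2, 40.0 ** 2, 40.0 ** 2])
rgb = vtva_transform(F, C_N).reshape(3, X, Y)   # fused RGB planes in [0, 255]
```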

    Fig. 2. Natural color image

    Fig. 3. Infrared image

    Fig. 4. Fused image 1

    Fig. 5. Fused image 2

    First a natural color image is taken and its RGB components are separated. Next a grayscale infrared image is taken, in which the weapon hidden inside the clothing can be seen. These are shown in Figs. 2 and 3. The last two figures show the fused images, in which the hidden weapon is clearly visible. The color of the resultant fused image is configurable by the user.

    ACKNOWLEDGMENT

    This research work has been supported by my project guide Mr. K. S. Srinivas, Asst. Professor, Dept. of ECE. I would like to thank my supervisors, friends, and parents for their support.

    REFERENCES

    1. D. Besiris et al., "An FPGA-based hardware implementation of configurable pixel-level color image fusion," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, Feb. 2012.

    2. V. Tsagaris and V. Anastassopoulos, "Multispectral image fusion for improved RGB representation based on perceptual attributes," Int. J. Remote Sens., vol. 26, no. 15, pp. 3241-3254, Aug. 2005.

    3. M. Kumar and S. Dass, "A total variation-based algorithm for pixel-level image fusion," IEEE Trans. Image Process., vol. 18, no. 9, Sep. 2009.

    4. A. Goshtasby and S. Nikolov, "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114-118, Apr. 2007.

    5. R. S. Blum and Z. Liu, Eds., Multi-Sensor Image Fusion and Its Applications (Special Series on Signal Processing and Communications). New York: Taylor & Francis, 2006.

    6. T. Stathaki, Image Fusion: Algorithms and Applications. New York: Academic Press, 2008.

