Texture Analysis of Cloud for Weather Information by Neural Network

DOI : 10.17577/IJERTV1IS5183


1Sanju Kuril, 2Indu Saini, 3B. S. Saini

1, 2, 3 Department of Electronics and Communication Engineering

Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, 144011, India

Abstract

Classification of different types of cloud images is the primary task in forecasting precipitation and other weather constituents. A system is presented for cloud classification from satellite images. It involves two main stages: feature extraction and classification. The goal of feature extraction is to determine features from the available channels that make the detection of changes in cloud characteristics easier. The classifier uses these features to categorize the image pixels into different cloud types. In this paper, two algorithms have been used for cloud classification: (i) ANN with Haar wavelet features and (ii) PNN with Haar wavelet features. The use of wavelet coefficient values makes the system more efficient in detecting minor changes in cloud statistical properties, leading to better classification. The wavelet transform is used for feature extraction, and an artificial neural network and a probabilistic neural network are used as classifiers. It has been found that the DWT & PNN combination performs better, giving a classification accuracy of 98.33% compared with 94.37% for DWT & ANN. Hence, the DWT & PNN combination is well suited for continuous cloud classification and monitoring.

  1. Introduction

Classification is one of the major research areas of neural networks. Classification problems play a major role in the fields of business, science, industry and medicine. Neural networks have emerged as an important tool for classification. Recent research activities that use neural networks for classification have established that neural networks are a promising alternative to various conventional classification methods. The advantage of neural networks is that they make use of self-adaptive methods to adjust to the data without any explicit specification. A Probabilistic Neural Network (PNN) has been used to classify clouds based on statistical features [1].

Accurate and automatic detection of clouds in satellite images is a key issue for a wide range of remote sensing applications. Undetected clouds are one of the most significant sources of error in both sea and land biophysical parameter retrieval. Cloud-screening approaches, also referred to as cloud detection, are generally based on the assumption that clouds present some features that can be used for their identification: clouds are usually brighter and colder than the underlying surface, their spectral response differs from that of the surface covers, and cloud height produces a shorter optical path, thus lowering atmospheric absorption. The detection and classification of clouds in meteorological satellite data with known pixel-based approaches is principally based on spectral analyses, and occasionally simple spatial analyses are used additionally. When classification is performed on structures of hundreds of pixels and the relationships between them, these approaches are called conceptual methods. Object-oriented classification is a conceptual method that operates on groups of pixels (image objects) and defines the relationships between them. In this study we performed the classification of clouds in low-resolution satellite images.

  2. Discrete wavelet transform

Wavelets are mathematical tools for signal and image analysis. Wavelet analysis is particularly efficient where the signal being analyzed has transients or discontinuities [2] [3]. The basic concept of wavelet analysis is to select an appropriate wavelet function, called the mother wavelet, and then perform the analysis using shifted and dilated versions of this wavelet.

Discrete wavelet transform (DWT) is used in this paper. The wavelet family is obtained by dilating and translating a mother wavelet ψ:

ψ_{l,m}(k) = a0^(-l/2) ψ(a0^(-l) k - m b0)    (1)

where a0 and b0 are the dilation and translation parameters and l, m are integer variables. The most frequently used selections are a0 = 2 and b0 = 1, which give the dyadic wavelet family

ψ_{l,m}(k) = 2^(-l/2) ψ(2^(-l) k - m).    (2)

Images generally consist of connected regions of similar texture and intensity levels that combine to form objects. If the objects are small in size or low in contrast, they are normally examined at high resolutions. If both small and large objects, and low- and high-contrast objects, are present simultaneously, the image is treated as a 2-D signal, so the 2-D DWT can be applied to it. The implementation of the DWT is done by multi-resolution analysis (MRA) [4]. By MRA, a signal can be analyzed at different frequency bands with different resolutions. The DWT is computed by successive low-pass and high-pass filtering of the discrete time-domain signal.

The details are the low-scale, high-frequency components. cD1 denotes the detail coefficients of level 1, obtained by passing the signal through a high-pass filter, and cA1 denotes the approximation coefficients of level 1, obtained by passing the signal through a low-pass filter, and so on for higher levels. The MRA details at the various levels contain the features (wavelet coefficients) used for detection and classification of clouds. Owing to the unique property of the wavelet transform of providing multiple resolutions both in time and in frequency, it provides useful clues for the classification of clouds in an elegant way.
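As an illustration, the successive filtering described above can be written as a minimal MATLAB sketch (the input signal x below is only a hypothetical example):

    % Level-1 Haar DWT: the low-pass branch gives cA1, the high-pass branch gives cD1
    x = rand(1, 256);               % hypothetical discrete time-domain signal
    [cA1, cD1] = dwt(x, 'haar');    % approximation and detail coefficients of level 1
    [cA2, cD2] = dwt(cA1, 'haar');  % level 2: filter the level-1 approximation again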

  3. Artificial neural network (ANN)

    A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin. If a function h satisfies the property h (x) =h (||x||), then it is a radial function. Their characteristic feature is that their response decreases (or increases) monotonically with distance from a central point. The center, the distance scale, and the precise shape of the radial function are parameters of the model, all fixed if it is linear [5]. A typical radial function is the Gaussian which, in the case of a scalar input, is

h(x) = exp(-(x - c)^2 / r^2)    (3)

A Gaussian RBF monotonically decreases with distance from the centre. In contrast, a multiquadric RBF monotonically increases with distance from the centre. Gaussian-like RBFs are local (they give a significant response only in a neighbourhood near the centre) and are more commonly used than multiquadric-type RBFs, which have a global response. Radial functions are simply a class of functions. In principle, they could be employed in any sort of model (linear or nonlinear) and any sort of network (single-layer or multi-layer), as shown in Figure 1. RBF networks have traditionally been associated with radial functions in a single-layer network.

    Figure 1: Radial Basis Function Network
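As a simple illustration of the two response types, a minimal MATLAB sketch is given below; the centre c and width r are arbitrary example values, not parameters taken from the experiments:

    % Gaussian vs. multiquadric RBF response as a function of distance from the centre
    c = 0; r = 1;                                % example centre and width
    x = linspace(-5, 5, 201);
    gaussianRbf  = exp(-(x - c).^2 / r^2);       % Eq. (3): decreases away from c
    multiquadric = sqrt((x - c).^2 + r^2) / r;   % increases away from c (global response)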

  4. Probabilistic neural network (PNN)

    The probabilistic neural network (PNN) was developed by Specht [6]. This network provides a general solution to pattern classification problems by following the approach of Bayesian classifiers, which takes into account the relative likelihood of events and uses a priori information to improve prediction. In the context of classification problems, the PNN outputs are interpreted as estimates of probability of class membership and the network learning as estimation of a probability density function (pdf) of the classes. The probabilistic neural network uses a supervised training set to develop distribution functions within a pattern layer [7]. The PNN architecture is composed of many interconnected neurons organized in successive layers. It includes four layers: input layer, pattern layer, summation layer and decision layer.
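The decision rule behind this architecture can be sketched in MATLAB as follows; this is only an assumed illustration of the Parzen-window estimate used by a PNN, where trainX (one feature vector per row), trainLabels, the test vector x and the smoothing parameter sigma are hypothetical names and values:

    % Pattern layer: kernel responses; summation layer: per-class averages; decision layer: argmax
    sigma = 0.1;                                   % assumed smoothing (spread) parameter
    f = zeros(1, 3);                               % one density estimate per cloud class
    for k = 1:3
        Xk = trainX(trainLabels == k, :);          % training patterns of class k
        d2 = sum(bsxfun(@minus, Xk, x).^2, 2);     % squared distances to the input x
        f(k) = mean(exp(-d2 / (2*sigma^2)));       % average Gaussian kernel response
    end
    [~, predictedClass] = max(f);                  % class with the maximum estimated probability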


  5. Design methodology

A classification task begins with a data set in which the class assignments are known. The classification model is then tested by applying it to test data with known target values and comparing the predicted values with the known values. The test data must be compatible with the data used to build the model and must be prepared in the same way as the build data. In the proposed work, two algorithms have been used for the classification purpose.

1. Classification by ANN using DWT (Haar)

2. Classification by PNN using DWT (Haar)

MATLAB 2009 is used to implement the whole methodology. Figure 2 shows the block diagram of the classification methodology.

Figure 2: Block diagram of classification methodology (Image Acquisition → Pre-processing → Feature Extraction → Classification)

5.1. Data acquisition

The datasets are chosen from the database on the Indian Meteorological Department website [8]. Different datasets are selected, each representing a different type of cloud:

Low Level Cloud

Middle Level Cloud

High Level Cloud

In order to create a database for the classifier implemented in this research, a total of 990 samples are analyzed. The samples are divided into two groups (learning and testing).

Group I (learning set) contains 630 samples. These samples are divided into three categories, each representing one type of cloud: 210 samples taken from satellites with Low Level Clouds, 210 with Middle Level Clouds and 210 with High Level Clouds. Each cloud type is assigned an annotation indicating its class. This group of data is used for building the classifier.

Group II (test set) contains 360 samples: 120 samples of Low Level Clouds, 120 samples of Middle Level Clouds and 120 samples of High Level Clouds. This dataset is used for testing the classification process.

5.2. Pre-processing for wavelet transforms

Image pre-processing is the term for operations on images at the lowest level of abstraction. The pre-processing task involves converting RGB images to grayscale and rescaling them to 200×180 gray images. Smoothing is then done with a median filter on the image for more defined features, as shown in Figure 3.

Figure 3: Pre-processing of image (input image and 200×180 gray image)

The median filter performs median filtering of the image matrix in two dimensions. Each output pixel contains the median value of the M-by-N neighbourhood around the corresponding pixel in the input image. Median filtering is applied with the default 3-by-3 neighbourhood, as shown in Figure 4.

Figure 4: Converting the 200×180 gray image to a smoothed image.
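A minimal MATLAB sketch of this pre-processing stage is given below; the input file name is only a placeholder:

    % Pre-processing: RGB -> grayscale, resize to 200x180, 3-by-3 median smoothing
    I = imread('cloud_sample.jpg');     % hypothetical satellite cloud image
    G = rgb2gray(I);                    % convert RGB to grayscale
    G = imresize(G, [200 180]);         % rescale to 200x180
    S = medfilt2(G, [3 3]);             % median filter with the default 3-by-3 neighbourhood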

5.3. Feature extraction

The goal of feature extraction is to determine features from the available channels that make the detection of changes in cloud characteristics easier.

5.3.1. Feature Extraction by Haar Wavelet

Wavelets have been used for different applications such as the extraction of cloud textures [9]. Before applying the wavelets, each of the images should be normalized to the same size to give discriminating features. In this case the image is reduced to a size of 200×180, so the algorithm does not take much time to calculate the wavelet coefficients.

The texture of the clouds is extracted by applying the Haar wavelet, and the statistical parameters are analyzed to give the best classification. All statistical parameters are measured initially and then analyzed.

The Haar wavelet is applied to each image to obtain statistical parameters at different levels of decomposition [10]. Each level has both horizontal (H) and vertical (V) detail coefficients. As mentioned earlier, there are statistical variables associated with each level and with both the horizontal and vertical coefficients. So for each image, parameters are obtained for the horizontal and vertical coefficients at each level. From the sub-bands of a Haar DWT image, we can obtain various features of the original image as follows:

1. Average components are detected by the LL sub-band.

2. Vertical components are detected by the HL sub-band.

3. Horizontal components are detected by the LH sub-band.

4. Diagonal components are detected by the HH sub-band.
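A minimal MATLAB sketch of this step is given below, reusing the smoothed image S from the pre-processing sketch; since the exact statistical parameters are not listed in the text, the mean, standard deviation and energy of each sub-band are used here only as assumed examples:

    % Level-1 Haar decomposition: cA ~ LL, cH ~ LH (horizontal), cV ~ HL (vertical), cD ~ HH (diagonal)
    [cA, cH, cV, cD] = dwt2(double(S), 'haar');
    subbands = {cA, cH, cV, cD};
    features = [];
    for k = 1:numel(subbands)
        B = subbands{k};
        features = [features, mean(B(:)), std(B(:)), sum(B(:).^2)];   % assumed statistics
    end
    % 'features' is the feature vector of one image; it is computed for every sample.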

5.4. Classification

5.4.1 Using ANN with wavelet features

RBF networks have traditionally been associated with radial functions in a single-layer network. The input layer carries the extracted feature values. The distances between these values and the centre values are found and summed to form a linear combination before the neurons of the hidden layer.

Implementation of classification

1. Decide the number of input images.

2. For each image, the number of bands (classes) considered is K = 3.

3. Calculate the feature pixel vectors x1, x2,

4. If M is the number of representatives, K*M centres are generated randomly.

5. Each pattern is assigned to its closest cluster centre by calculating the Euclidean distance between the input vector and each cluster centre in K-dimensional space.

6. New centres are generated by recalculating the mean of each cluster. The above process continues until the cluster centres converge within a required degree of accuracy, which is 1 in this case.

7. The radial basis function outputs are calculated from the input training images to form the transformed matrix G. The weight matrix is then computed, and the classified image matrix is finally obtained by multiplying the test image matrix with the weight matrix.
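A minimal MATLAB sketch of these steps is given below; trainX and testX (one feature vector per row), the one-of-K target matrix trainT, the number of representatives M and the RBF width r are all assumed names and example values rather than settings reported in the paper:

    % K-means clustering of the training features into K*M centres (Statistics Toolbox)
    K = 3; M = 10; r = 1;                          % classes, centres per class, RBF width
    [~, C] = kmeans(trainX, K*M);                  % cluster centres found by Euclidean distance
    G = exp(-pdist2(trainX, C).^2 / r^2);          % Gaussian RBF outputs, Eq. (3), for training data
    W = pinv(G) * trainT;                          % weight matrix by least squares
    Gtest = exp(-pdist2(testX, C).^2 / r^2);       % RBF outputs for the test images
    [~, predictedClass] = max(Gtest * W, [], 2);   % one class index per test sample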

5.4.2 Using PNN with wavelet features

We have used a PNN network with input, pattern, summation and decision layers. In cloud classification applications, the PNN is regarded as a mapping from the features to the classes. Therefore, the number of inputs of the PNN is determined by the dimension of the input vectors. In the proposed system, the vectors obtained after applying the DWT are fed to the input layer of the PNN. The number of neurons in the input layer was chosen according to the features used in classification. The number of output layer neurons was fixed as the number of classes to be discriminated, i.e. the three types of clouds.

The sub-band frequencies were used as input to the expert model network. For making input patterns, the whole feature vector set was divided into two groups, namely training and testing, with 210 and 120 samples per cloud class, respectively. These patterns were formed by mixing samples from the cloud database.

Figure 5 shows the architecture of the PNN and gives a brief introduction to the PNN layers [11].

Figure 5: Architecture of PNN

Step 1. The first-layer input weights, IW1,1 (net.IW{1,1}), are set to the transpose of the matrix formed from the Q training pairs, P'.

Step 2. When an input is presented, the || dist || box produces a vector whose elements indicate how close the input is to the vectors of the training set.

Step 3. These elements are multiplied, element by element, by the bias and sent to the radbas transfer function. An input vector close to a training vector is represented by a number close to 1 in the output vector a1. If an input is close to several training vectors of a single class, it is represented by several elements of a1 that are close to 1.

Step 4. The second-layer weights, LW2,1 (net.LW{2,1}), are set to the matrix T of target vectors. Each vector has a 1 only in the row associated with that particular class of input, and 0s elsewhere (the function ind2vec creates the proper vectors). The multiplication T·a1 sums the elements of a1 due to each of the K input classes [12].

Step 5. Finally, the second-layer transfer function, compete, produces a 1 corresponding to the largest element of its net input, and 0s elsewhere. Thus, the network classifies the input vector into the specific class that has the maximum probability of being correct.

Syntax

net = newpnn(P, T, spread)

Description:

Probabilistic neural networks (PNN) are a kind of radial basis network suitable for classification problems.

net = newpnn(P, T, spread) takes two or three arguments:

P: R×Q matrix of Q input vectors
T: S×Q matrix of Q target class vectors
spread: spread of the radial basis functions (default = 0.1)

and returns a new probabilistic neural network. If spread is near zero, the network acts as a nearest neighbour classifier. As spread becomes larger, the designed network takes into account several nearby design vectors.

Classes are defined for the target output by assigning values, and the data obtained are classified according to the prior values assigned. Hence, the classification is obtained.
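A minimal MATLAB sketch of this training and classification step is given below; trainFeat, trainLabels and testFeat are assumed variable names, with features stored column-wise (one sample per column) as newpnn expects and labels given as class indices 1 to 3:

    % Train the PNN on the wavelet features and classify the test set
    T    = ind2vec(trainLabels);         % class indices -> target class vectors
    net  = newpnn(trainFeat, T, 0.1);    % spread = 0.1 (the toolbox default)
    Y    = sim(net, testFeat);           % competitive outputs for the test features
    pred = vec2ind(Y);                   % predicted class index for each test sample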

6. Results

6.1. ANN with wavelet features

Experiments were performed on the cloud database. In the database, 120 testing samples are taken for each type of cloud. In the experiment, the DWT algorithm is applied to the low-frequency images and the meaningful features are extracted. Finally, the extracted features are used as the input for the classifier designed with the RBF neural network. The performance of the designed classifier was evaluated with the help of the confusion matrix.

Figure 6: (a) Classified images generated by ANN from different feature-extracted images; (b) classified image showing the colours of the different types of clouds

Figure 6(b) shows different patches of colour corresponding to the different types of clouds: purple shows High Level Clouds, green shows Middle Level Clouds and yellow shows Low Level Clouds.

Table I: Confusion Matrix by ANN with the features of wavelet

Cloud Classes    Low    Medium    High    Total No. of Samples    Accuracy (%)
Low              110       7        3            120                 91.667
Medium             2     118        0            120                 98.33
High               2       6      112            120                 93.33

Overall Testing Accuracy = 94.37%

Accuracy calculated for Low Level Clouds: 110/120 = 91.667%

Accuracy calculated for Middle Level Clouds: 118/120 = 98.33%

Accuracy calculated for High Level Clouds: 112/120 = 93.33%

The confusion matrix shown in Table I summarizes the cloud type classification results for the three classes of the testing set. The diagonal elements give the numbers of correctly classified samples, which correspond to an overall accuracy of 94.37%, with 91.667% for Low Level Clouds, 98.333% for Middle Level Clouds and 93.33% for High Level Clouds.

6.2. PNN with wavelet features

In order to create a database for the classifier implemented in this research, a total of 45 images are analyzed. The samples are divided into two groups.

Figure 7: (a) The feature-extracted image and the classified image generated by the classifier; (b) classified image showing the colours of the different types of clouds.

Figure 7(b) shows different patches of colour corresponding to the different types of clouds: dark pink shows High Level Clouds, purple shows Middle Level Clouds and yellow shows Low Level Clouds.

Here, 120 samples of each cloud type were taken for testing. For the Low Level Cloud class, 119 of the 120 samples were correctly classified; for the Middle Level Cloud class, 117 of 120 were correctly classified; and for the High Level Cloud class, 118 of 120 were correctly classified.

Table II: Confusion Matrix by PNN with the features of wavelet

Cloud Classes    Low    Medium    High    Total No. of Samples    Accuracy (%)
Low              119       1        0            120                 99.16
Medium             1     117        2            120                 97.50
High               0       2      118            120                 98.33

Overall Testing Accuracy = 98.33%

Accuracy calculated for Low Level Clouds: 119/120 = 99.166%

Accuracy calculated for Middle Level Clouds: 117/120 = 97.500%

Accuracy calculated for High Level Clouds: 118/120 = 98.333%

The diagonal values represent the samples that have been classified correctly; large diagonal values mean better accuracy. The values in each row represent samples that come from the class of that row but are classified incorrectly into other classes; these are called omission errors. The values in each column represent samples that are falsely classified as the class of that column but come from other classes; these are called commission errors.
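A minimal MATLAB sketch of deriving these quantities from the confusion matrix of Table II:

    % Rows: true class (Low, Medium, High); columns: predicted class
    CM = [119 1 0; 1 117 2; 0 2 118];
    perClassAccuracy = diag(CM) ./ sum(CM, 2) * 100;     % 119/120, 117/120, 118/120 in percent
    overallAccuracy  = sum(diag(CM)) / sum(CM(:)) * 100; % 354/360 = 98.33 percent
    omissionError    = 1 - diag(CM) ./ sum(CM, 2);       % off-diagonal share of each row
    commissionError  = 1 - diag(CM)' ./ sum(CM, 1);      % off-diagonal share of each column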

PNN gives better classification accuracy, and since the Haar wavelet transform is fast in processing, the combination of the two gives high classification accuracy as well as fast computation. This is helpful for weather information and forecasting.

6.3. Comparison of proposed algorithms

As shown in Table III, the PNN system provides better accuracy for the Low and High level clouds. On the other hand, the ANN system provides better accuracy for the Medium Level Clouds.

Table III: Comparison of Proposed Algorithms

Cloud Type          Accuracy (%), DWT+ANN    Accuracy (%), DWT+PNN
Low Clouds                  91.66                    99.16
Medium Clouds               98.33                    97.50
High Clouds                 93.33                    98.33
Overall Accuracy            94.37                    98.33

6.4. Comparison with other methods reported in the references

In order to test the proposed method, PCA [4], SVD [5], BPNN (Back Propagation NN) [6], NMF (Nonnegative Matrix Factorization) [7], FLNN (Fuzzy Logic Neural Networks) [8] and K-SOM (K-Self Organizing Maps) NN [9] are used for comparison with PNN with the features of wavelet. Table IV gives the classification accuracies.

Table IV: Comparison with other methods reported in the literature

References         Technique Used for Classification          Accuracy (%)
Proposed Method    DWT+PNN                                     98.3333
[4]                PCA                                         88.85
[5]                SVD                                         70–90
[6]                BPNN (Back Propagation NN)                  71.80
[7]                NMF (Nonnegative Matrix Factorization)      69.94
[8]                FLNN (Fuzzy Logic Neural Networks)          64–81
[9]                K-SOM (K-Self Organizing Maps) NN           86.80

7. References

[1] T. Santhanam and S. Radhika, Probabilistic Neural Network: A Better Solution for Noise Classification, Journal of Theoretical and Applied Information Technology, vol. 27, no. 1.

[2] Level 2 Combined Radar and Lidar Cloud Scenario Classification Product Process Description and Interface Control Document, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California.

[3] L. Gomez-Chova, J. Munoz-Mari, E. Izquierdo-Verdiguier, G. Camps-Valls, J. Calpe, and J. Moreno, Cloud screening with combined MERIS and AATSR images, in Proc. IEEE IGARSS, Cape Town, South Africa, Jul. 12–17, 2009, pp. IV-761–IV-764.

[4] A. Subasi, EEG signal classification using wavelet feature extraction and a mixture of expert model, Expert Systems with Applications, vol. 32, pp. 1084–1093, 2007.

[5] A. Graps, An Introduction to Wavelets, IEEE Computational Science and Engineering, vol. 2, no. 2, pp. 50–61, 1995. http://www.cis.udel.edu/~amer/CISC651/IEEEwavelet.pdf

[6] D. F. Specht, Probabilistic neural networks, Neural Networks, vol. 3, pp. 109–118, 1990.

[7] V. Cheung and K. Cannons, An Introduction to Probabilistic Neural Networks, Signal & Data Compression Laboratory, Electrical & Computer Engineering, University of Manitoba, Winnipeg, Manitoba, Canada.

[8] Satellite Images and Products, http://www.imd.gov.in/section/satmet/dynamic/insat.html.

[9] H. Demirel and G. Anbarjafari, Discrete Wavelet Transform-Based Satellite Image Resolution Enhancement, IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, June 2011.

[10] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics, vol. XLI, pp. 909–996, 1988.

[11] M. J. Reddy and D. K. Mohanta, A wavelet-neuro-fuzzy combined approach for digital relaying of transmission line faults, Electric Power Components & Systems, vol. 35, no. 12, pp. 1385–1407, 2007.

[12] J.-J. Quan, X.-B. Wen, and X.-Q. Xu, Multiscale probabilistic neural network method for SAR image segmentation, Applied Mathematics and Computation, vol. 205, pp. 578–583, 2008.

[13] I. S. Bajwa, M. S. Naweed, M. N. Asif, and S. I. Hyder, Feature Based Image Classification by using Principal Component Analysis, ICGST-GVIP Journal, ISSN 1687-398X, vol. 9, no. II, 2009.

[14] R. Kaur and A. Ganju, Cloud Classification in NOAA AVHRR Imageries using Spectral and Textural Features, Journal of the Indian Society of Remote Sensing, vol. 36, pp. 167–174, 2008.

[15] K. Saitwal, M. R. Azimi-Sadjadi, and D. Reinke, A Multichannel Temporally Adaptive System for Continuous Cloud Classification From Satellite Imagery, IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 5, 2003.

[16] D. Guillamet, B. Schiele, and J. Vitria, Analyzing Nonnegative Matrix Factorization for Image Classification, in Proc. International Conference on Pattern Recognition (ICPR 2002), Quebec, Canada, August 2002.

[17] B. A. Baum, V. Tovinkere, J. Titlow, and R. M. Welch, Automated Cloud Classification of Global AVHRR Data Using a Fuzzy Logic Approach, Journal of Applied Meteorology, pp. 1519–1539, 1997.

[18] B. Tian, M. R. Azimi-Sadjadi, and D. Reinke, Neural Network-Based Cloud Classification on Satellite Imagery Using Textural Features, in Proc. IEEE International Conference on Image Processing, vol. 3, pp. 209–212, 1997.
