Remote Sensing System Classification using Hyperspectral Image

DOI: 10.17577/IJERTV3IS061017


G. Nagalakshmi
Professor & HoD, Department of CSE
Siddartha Institute of Science and Technology, Puttur, Tirupati, A.P.

S. Jyothi
Professor & Director, Department of CS
SPMVV, Tirupati, A.P.

    Abstract: The main goal of this paper is to present the fundamental concepts of hyperspectral remote sensing and image classification. It covers spectral concepts, the distinction between hyperspectral and multispectral imaging, the classification of image processing in remote sensing, current hyperspectral sensors and data analysis, and a summary of image-processing-based hyperspectral techniques.

    Keywords: Hyperspectral, multispectral, image processing, remote sensing, image classification, hyperspectral sensors, data analysis, image-processing techniques

    1. INTRODUCTION

      The success of any GIS [1,2] application depends on the quality of the geographical data used. Collecting high-quality geographical data for input to GIS is therefore an important activity. Traditionally, environmental data have been collected directly in the field using in situ (ground survey) methods. This type of data collection normally makes use of an instrument that measures a phenomenon in direct contact with the ground, such as the pH value of soil, the temperature of the water in a lake, the angle of a slope, or the height of a building.

      A. About Remote Sensing:

      The term remote sensing [3,4] was coined by geographers in the Office of Naval Research of the United States in the 1960s to refer to the acquisition of information about an object without physical contact. The term usually refers to the gathering and processing of information about the Earth's environment, particularly its natural and cultural resources, through the use of photographs and related data acquired from an aircraft or a satellite. Today, remote sensing is the preferred method when environmental data covering a large area are required for a GIS application.

      Remote sensing data can be analog or digital in form, and small or large in scale, according to the type of sensor and platform used to acquire the data. In some usage, remote sensing refers only to imagery acquired by sensors using electronic scanning, which detect radiation outside the normal visible range (0.4-0.7 µm) of the electromagnetic spectrum, such as microwave, radar, and thermal infrared. The term photograph refers to a picture acquired by a conventional camera in the visible region of the electromagnetic spectrum and is analog in form, while the word image or imagery refers to non-photographic pictures acquired by electronic detectors operating in the invisible portion of the electromagnetic spectrum and is digital in form. One should, however, note that near-infrared radiation between 0.8 and 1.2 µm is photographically actinic, which means that it can be recorded on near-infrared film using an ordinary camera.

      One notable characteristic of remote sensing is that it is not just a data-collection process. Remote sensing also includes data analysis: the methods and processes of extracting meaningful spatial information from the remote sensing data for direct input to the GIS. In digital form, remote sensing data are compatible with the raster-based GIS data model and can be readily integrated with other types of raster GIS data. The advantage of remote sensing is the bird's-eye, or synoptic, view it provides, so that environmental data covering a large area of the Earth can be captured instantaneously and then processed to generate map-like products. Another advantage of remote sensing is that it can provide multispectral and multi-scale data for the GIS database.

    2. PRINCIPLES OF ELECTROMAGNETIC REMOTE SENSING:

      Both photographic and non-photographic remote sensing systems record the reflection and/or emission of electromagnetic energy from the Earth's surface (Fig. 1).

      The major source of electromagnetic energy is the sun, although the Earth itself also emits geothermal and man-made energy. Electromagnetic radiation is a form of energy derived from oscillating magnetic and electric fields (Fig. 2) and is capable of transmission through empty space as a plane harmonic wave at the velocity of light, c (3×10^8 m s^-1). The frequency of oscillation f is related to the wavelength λ by the standard wave equation

      c = fλ
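      For example (an illustrative calculation, not from the paper), green light with a wavelength of λ = 0.5 µm has a frequency f = c/λ = (3×10^8 m s^-1) / (0.5×10^-6 m) = 6×10^14 Hz.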

      Fig. 1: The sun as energy source, reflected radiation, and radar.

      Electromagnetic radiation occurs as a continuum of wavelengths and frequencies, from short-wavelength, high-frequency cosmic waves to long-wavelength, low-frequency radio waves. This is known as the electromagnetic spectrum. Electromagnetic energy generated by the sun is strongly attenuated by its passage through the atmosphere to the Earth. The atmosphere contains aerosol particles and gas molecules that scatter or absorb the electromagnetic energy according to its wavelength.

      Electromagnetic radiation with wavelengths shorter than 0.3 µm is completely absorbed by ozone (O3) in the upper atmosphere, whereas water particles in clouds absorb and scatter electromagnetic radiation at wavelengths shorter than about 0.3 cm. There are certain transmission windows in the atmosphere through which electromagnetic energy of particular wavelengths can be fully transmitted, such as the 3-5 µm and 8-14 µm windows for thermal infrared energy.

      Once the electromagnetic energy reaches the Earth, it is further modified through interaction with features on the Earth's surface. The energy may be reflected, refracted, transmitted, or absorbed. Energy absorbed by an object is later given out again by the object in the form of emitted energy.

      A remote sensing system detects the reflected and emitted energy from the Earth's surface. The reflection of radiant energy depends on the surface roughness and the nature of the material. A very smooth surface, such as a lake, gives rise to total reflection away from the remote sensor (known as specular, or mirror-like, reflection).

    3. HYPERSPECTRAL REMOTE SENSING:

      The most powerful tools used in the field of remote sensing [7,8] are hyperspectral imaging (HSI) and multispectral imaging (MSI).

      Since the mid-1950s, airborne sensors have recorded spectral information on the Earth's surface in the wavelength region extending from 400 to 2500 nm. Starting from the early 1970s, a large number of spaceborne multispectral sensors have been launched, on board the LANDSAT, SPOT and Indian Remote Sensing (IRS) series of satellites, to name a few.

      1. Difference between hyperspectral and multispectral imaging:

        As shown in Fig. 2, HSI systems collect at least 100 spectral bands of 10-20 nm width, whereas MSI sensors collect fewer than 20, generally non-contiguous, spectral bands [3].

        HSI systems have a very wide capability of spectral discrimination, while MSI systems are designed to support specific applications by providing bands that detect information in desirable regions of the spectrum (Fig. 2). The number and position of bands in each system provide a unique combination of spectral information and are tailored to the requirements the sensor was designed to support.

        Fig. 2: Comparison of multispectral and hyperspectral imaging.

        Remote sensing image processing is a mature research area that enables real-life applications with clear benefits for society. The main goals of remote sensing are as follows:

        1. Monitoring and modeling the processes on the Earth's surface and their interactions.

        2. Measuring and estimating geographical, biological and physical variables.

        3. Identifying materials on the land cover and analyzing the spectral signatures acquired by satellite or airborne sensors.

    4. HYPERSPECTRAL IMAGE CLASSIFICATION TECHNIQUES

      Classification assigns class labels to pixels or objects in an image so that objects with different characteristics can be distinguished from one another. It is carried out on the basis of spectral or spectrally defined features, such as density and texture, in the feature space, and it divides the feature space into several classes according to a decision rule. In practice, this is carried out on a computer using mathematical classification techniques.

      1. Hyperspectral Data

        There are two main categories of information-extraction methods for hyperspectral remote sensing images: those based on feature space and those based on spectral space. Many statistics-based classification methods operating in feature space have been successfully applied to multispectral remote sensing data in past years [4].

        Although most hyperspectral sensors measure hundreds of wavelengths, it is not the number of measured wavelengths that defines a sensor as hyperspectral; rather, it is the narrowness and contiguous nature of the measurements. For example, a sensor that measured only 20 bands could be considered hyperspectral if those bands were contiguous and, say, 10 nm wide. If a sensor measured 20 wavelength bands that were, say, 100 nm wide, or that were separated by non-measured wavelength ranges, it would no longer be considered hyperspectral.

        Fig. 3: Standard hyperspectral analysis methodologies.

        Standard multispectral image classification techniques were generally developed to classify multispectral images into broad categories. Hyperspectral imagery provides an opportunity for more detailed image analysis. For example, using hyperspectral data, spectrally similar materials can be distinguished and sub-pixel-scale information can be derived. Fig. 3 shows how hyperspectral data can be analyzed at the sub-pixel level to extract this additional information.

      2. Hyperspectral Remote Sensing Image Classifications

        Remote sensing systems can be classified into two types: passive and active.

        Passive remote sensing systems sample emitted and reflected radiation from ground surfaces; the energy source is independent of the recording instrument. Good examples are the camera and thermal infrared (TIR) detectors.

        Active remote sensing systems send out their own electromagnetic radiation at a specified wavelength to the ground and then sample the portion reflected back to the detecting device. A good example is imaging radar.

        Hyperspectral remote sensing image classifications [5] are of two types: supervised and unsupervised.

        1. SUPERVISED CLASSIFICATION:

          Supervised classification [20, 21] is used to classify land use in hyperspectral remote sensing. In this approach, spectral signatures are developed from specified locations in the image. These locations are called training sites and are defined by the user. Training develops spectral signatures for the outlined areas, and once the training sites are defined, this information can be used together with images at different bandwidths. The resulting signatures are then used to classify all pixels in the image. The main supervised classification methods are as follows:

          1. Parallelepiped.

          2. Minimum Distance.

          3. Mahalanobis Distance.

          4. Maximum likelihood.

          5. Spectral Angle Mapper.

          6. Support Vector Machine.

            1. PARALLELEPIPED:

              Parallelepiped classification [18] uses a simple decision rule to classify hyperspectral images. The decision boundaries form an n-dimensional parallelepiped in the image data space. The dimensions of the parallelepiped are defined by a standard-deviation threshold about the mean of each selected class. If a pixel value lies above the low threshold and below the high threshold for all n bands being classified, it is assigned to that class; a pixel may fall within the parallelepipeds of more than one class (see the sketch below).
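              A minimal sketch of this decision rule, assuming the image is held as a NumPy array of shape (rows, cols, bands) and that per-class means and standard deviations have already been estimated from training sites; the array names and the default ±2σ threshold are illustrative, not from the paper.

```python
import numpy as np

def parallelepiped_classify(image, class_means, class_stds, k=2.0):
    """Assign each pixel to the first class whose +/- k*sigma box contains it.

    image       : (rows, cols, bands) array of pixel spectra
    class_means : (n_classes, bands) per-class band means from training sites
    class_stds  : (n_classes, bands) per-class band standard deviations
    Returns a (rows, cols) label array; -1 marks unclassified pixels.
    """
    rows, cols, bands = image.shape
    labels = np.full((rows, cols), -1, dtype=int)
    pixels = image.reshape(-1, bands)
    flat = labels.reshape(-1)                      # view onto labels
    for c, (mu, sigma) in enumerate(zip(class_means, class_stds)):
        low, high = mu - k * sigma, mu + k * sigma
        inside = np.all((pixels >= low) & (pixels <= high), axis=1)
        # only label pixels not already claimed by an earlier class
        flat[(flat == -1) & inside] = c
    return labels
```

              Pixels that fall inside more than one parallelepiped are simply given to the first matching class here; practical implementations resolve or flag such overlaps.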

            2. MINIMUM DISTANCE:

              The Minimum Distance method [9,10] is a supervised classification method that first analyzes the training data you provide and calculates a mean for each prototype class; the mean defines the class center coordinates in feature space.

              The Minimum Distance algorithm determines the Euclidean distance from each unclassified cell to the mean of each prototype class and assigns the cell to the closest class. This method has no user-defined parameters. The Minimum Distance algorithm is mathematically simple and efficient, but it does not recognize differences in the variance of classes, which determines their relative "size" in feature space. For training sets in which prototype classes with different variance lie close to each other in feature space, data points near the edge of a "larger" class may be closer to the center of a nearby "smaller" class than to their own class center, resulting in misclassification of some unknown cells. For this reason, the Minimum Distance to Mean method works best in applications where the spectral classes are dispersed in feature space and have similar variance.
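              A sketch of the Euclidean minimum-distance rule under the same illustrative layout as above (pixel spectra as rows, class means estimated from training sites):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel spectrum to the class with the nearest mean.

    pixels      : (n_pixels, bands) array of spectra
    class_means : (n_classes, bands) array of class mean spectra
    Returns an (n_pixels,) array of class indices.
    """
    # Euclidean distance from every pixel to every class mean
    diffs = pixels[:, None, :] - class_means[None, :, :]   # (n_pixels, n_classes, bands)
    dists = np.linalg.norm(diffs, axis=2)
    return np.argmin(dists, axis=1)
```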

            3. MAHALANOBIS:

              The Mahalanobis distance method [16] is a supervised, distance-based classification method. Unlike Minimum Distance, it takes the variance and correlation of the training data into account: the distance from a cell to each class mean is weighted by the inverse of the covariance matrix, so that classes that are elongated or spread out in feature space are handled correctly. Each cell is assigned to the class with the smallest Mahalanobis distance. In this respect it is similar to Maximum Likelihood, but it is less computation-intensive.
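              A sketch of the Mahalanobis-distance rule, assuming per-class means and covariance matrices estimated from the training data (some implementations instead use a single pooled covariance matrix); all names are illustrative.

```python
import numpy as np

def mahalanobis_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the smallest Mahalanobis distance.

    pixels      : (n_pixels, bands) array of spectra
    class_means : (n_classes, bands) class mean spectra
    class_covs  : (n_classes, bands, bands) per-class covariance matrices
    """
    n_pixels = pixels.shape[0]
    n_classes = class_means.shape[0]
    d2 = np.empty((n_pixels, n_classes))
    for c in range(n_classes):
        diff = pixels - class_means[c]                 # (n_pixels, bands)
        inv_cov = np.linalg.inv(class_covs[c])
        # squared Mahalanobis distance diff^T * inv_cov * diff for each pixel
        d2[:, c] = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return np.argmin(d2, axis=1)
```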

            4. MAXIMUM LIKELIHOOD:

              The Maximum Likelihood method [11,12] is a supervised classification method. It determines a class assignment for each cell on the basis of both the variance and the correlation of the cell values in the training set classes that you provide. The Maximum Likelihood algorithm assumes that the cell values in each training set class follow a Gaussian (normal) distribution, which can therefore be described by its mean vector and covariance matrix. The likelihood of a given cell value belonging to a particular training set class is then computed from these statistics. This method has no user-defined parameters.

              The Maximum Likelihood method is a refinement of the Minimum Distance method that incorporates the variability of cell values within each training class. In a two-dimensional case, the values from a training set class define an ellipsoidal cluster, and the probability of points belonging to the class can be represented by ellipsoidal "equiprobability" contours that decrease in value away from the class mean. The Maximum Likelihood method is computation-intensive, and its processing time grows with the number of bands and training classes.
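              A minimal sketch of the Gaussian maximum-likelihood rule, assuming per-class means and covariance matrices from the training sites and equal prior probabilities; the names and the use of log-likelihoods are illustrative choices, not from the paper.

```python
import numpy as np

def maximum_likelihood_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels      : (n_pixels, bands) array of spectra
    class_means : (n_classes, bands) class mean spectra
    class_covs  : (n_classes, bands, bands) per-class covariance matrices
    Equal prior probabilities are assumed for all classes.
    """
    n_pixels = pixels.shape[0]
    n_classes = class_means.shape[0]
    log_lik = np.empty((n_pixels, n_classes))
    for c in range(n_classes):
        diff = pixels - class_means[c]
        inv_cov = np.linalg.inv(class_covs[c])
        _, log_det = np.linalg.slogdet(class_covs[c])
        maha2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        # Gaussian log-likelihood up to an additive constant
        log_lik[:, c] = -0.5 * (log_det + maha2)
    return np.argmax(log_lik, axis=1)
```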

            5. SPECTRAL ANGLE MAPPER:

              The Spectral Angle Mapper [19] has been used successfully for matching and filtering in hyperspectral remote sensing images. It computes the spectral angle between the pixel spectrum and the endmember (reference) spectrum. The technique is applied to calibrated data and is comparatively insensitive to illumination and albedo effects. Smaller angles represent closer matches to the reference spectrum: the pixel spectrum and reference spectrum are similar to each other, and the pixel is assigned to that reference spectral class.
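              A sketch of the spectral-angle computation between pixel spectra and a set of reference (endmember) spectra; each pixel is assigned to the reference with the smallest angle. The names are illustrative.

```python
import numpy as np

def spectral_angle_mapper(pixels, references):
    """Return the spectral angle (radians) between each pixel and each reference.

    pixels     : (n_pixels, bands) pixel spectra
    references : (n_refs, bands) reference / endmember spectra
    Smaller angles mean closer matches.
    """
    p_norm = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r_norm = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos_angle = np.clip(p_norm @ r_norm.T, -1.0, 1.0)   # (n_pixels, n_refs)
    angles = np.arccos(cos_angle)
    return angles, np.argmin(angles, axis=1)
```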

            6. SUPPORT VECTOR MACHINE:

            A Support Vector Machine (SVM) [17] is a supervised classification method derived from statistical learning theory that often yields good classification results from complex and noisy data. It separates the classes with a decision surface that maximizes the margin between them. This surface is often called the optimal hyperplane, and the data points closest to the hyperplane are called support vectors; the support vectors are the critical elements of the training set. Although SVM in its simplest form is a binary classifier, it can function as a multiclass classifier by combining several binary SVM classifiers. SVM also includes a parameter that allows a certain degree of misclassification, which is important for non-separable training sites; this parameter controls the trade-off between training errors and the margin (see the sketch below).
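            A sketch using scikit-learn's SVC (an assumed external dependency, not something the paper specifies), in which the multiclass problem is handled internally by combining binary SVMs and the parameter C controls the trade-off between training error and margin described above; the data here are synthetic placeholders standing in for training-site spectra.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative training data: pixel spectra from user-defined training sites.
# X_train: (n_samples, bands) spectra, y_train: (n_samples,) class labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))
y_train = rng.integers(0, 3, size=200)

# RBF-kernel SVM; C controls the error/margin trade-off described above.
clf = SVC(kernel='rbf', C=10.0, gamma='scale')
clf.fit(X_train, y_train)

# Classify new pixel spectra (here also synthetic placeholders).
X_pixels = rng.normal(size=(1000, 50))
labels = clf.predict(X_pixels)
```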

        2. UNSUPERVISED CLASSIFICATION:

          Unsupervised classification [22] algorithms can analyze and classify a large number of cells and are used to map various land uses in hyperspectral remote sensing. These techniques do not require user-defined training sites or any prior information about the features contained in the data. Their success rests on the premise that the input dataset contains natural statistical groupings of spectral patterns that represent particular types of features. The main unsupervised classification methods are of two types:

          1. K-means.

          2. ISODATA.

      1. K MEANS:

        The K-Means algorithm [14,15] is an unsupervised classification method that calculates initial class means evenly distributed in the data space and then iteratively clusters the pixels into the nearest class using a minimum-distance technique. Each iteration recalculates the class means and reclassifies the pixels with respect to the new means. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may remain unclassified if they do not meet the selected criteria. This process continues until the number of pixels in each class changes by less than the selected pixel-change threshold or the maximum number of iterations is reached.
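        A minimal NumPy sketch of this iterative procedure; the even-spread initialization and the pixel-change stopping rule follow the description above, while the function name and parameter values are illustrative.

```python
import numpy as np

def kmeans_classify(pixels, n_classes, max_iter=20, change_threshold=0.01):
    """Cluster pixel spectra by iterative minimum-distance reassignment.

    pixels : (n_pixels, bands) array of spectra
    Stops when the fraction of pixels changing class falls below
    change_threshold, or after max_iter iterations.
    """
    n_pixels, bands = pixels.shape
    # Initial class means spread evenly between the data minimum and maximum.
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    fractions = np.linspace(0.0, 1.0, n_classes)[:, None]
    means = lo + fractions * (hi - lo)                  # (n_classes, bands)

    labels = np.full(n_pixels, -1, dtype=int)
    for _ in range(max_iter):
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = np.argmin(dists, axis=1)
        changed = np.mean(new_labels != labels)
        labels = new_labels
        # Recompute class means (keep the old mean for empty classes).
        for c in range(n_classes):
            if np.any(labels == c):
                means[c] = pixels[labels == c].mean(axis=0)
        if changed < change_threshold:
            break
    return labels, means
```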

      2. ISODATA:

      ISODATA [13] is an unsupervised classification method that calculates class means evenly distributed in the data space and then iteratively clusters the remaining pixels using minimum-distance techniques. Each iteration recalculates the means and reclassifies the pixels with respect to the new means. This process continues until the number of pixels in each class changes by less than the selected pixel-change threshold or the maximum number of iterations is reached.

    5. CONCLUSION

      This paper has attempted to provide a brief overview of the different image classification approaches and a brief discussion of hyperspectral and multispectral imaging. The most common approaches to image classification can be categorized as supervised or unsupervised, and as parametric or nonparametric. This survey has presented the different supervised and unsupervised classification techniques and working knowledge of each classification method.

    6. REFERENCES

  1. Dr. Piotr Jankowski. Introduction to GIS.

  2. Lecture on applications of geographic information systems (GIS), introductory.

  3. Dr. Punyatoya Patra. Remote Sensing and Geographical Information System (GIS).

  4. A tutorial on introduction to remote sensing and image processing.

  5. Prof. L. Bruzzone and M. Coradini. Advanced Remote Sensing System to Environment.

  6. Michael T. Eismann. Hyperspectral Remote Sensing (textbook).

  7. Eyal Ben-Dor, Tim Malthus, Antonio Plaza and Daniel Schlapfer (2012). A report on hyperspectral remote sensing.

  8. Dr. Nidaa F. Hassan. Introduction to Image Processing.

  9. Jensen, John R. (1996). Introductory Digital Image Processing (2nd ed.). Chapter 8, Thematic Information Extraction: Image Classification. Upper Saddle River, NJ: Prentice-Hall.

  10. Lillesand, Thomas M. and Kiefer, Ralph W. (1994). Remote Sensing and Image Interpretation (3rd ed.). "Minimum Distance to Means Classifier" in Chapter 7, Digital Image Processing. New York: John Wiley and Sons.

  11. Jensen, John R. (1996). Introductory Digital Image Processing (2nd ed.). Chapter 8, Thematic Information Extraction: Image Classification.

  12. Lillesand, Thomas M. and Kiefer, Ralph W. (1994). Remote Sensing and Image Interpretation (3rd ed.). "Gaussian Maximum Likelihood Classifier" in Chapter 7, Digital Image Processing.

  13. Tou, Julius T. and Gonzales, Raphael C. (1974). Pattern Recognition Principles. "K-Means Algorithm" in Chapter 3, Pattern Classification By Distance Functions.

  14. Tou, Julius T. and Gonzales, Raphael C. (1974). Pattern Recognition Principles. "Isodata Algorithm" in Chapter 3, Pattern Classification by Distance Functions.

  15. Jensen, John R. (1996). Introductory Digital Image Processing (2nd ed.). Chapter 8, Thematic Information Extraction: Image Classification.

  16. Gustavo Camps-Valls, Antonio Rodrigo-González, Jordi Muñoz-Marí, Luis Gómez-Chova, and Javier Calpe-Maravilla (2007). Hyperspectral Image Classification with Mahalanobis Relevance Vector Machines.

  17. Chen-Guang Dai, Xiao-Bo Huang and Guang-Jun Dong (2007). Support vector machine for classification of hyperspectral remote sensing imagery.

  18. M. Govender, K. Chetty, V. Naiken and H. Bulcock (2008). A comparison of satellite hyperspectral and multispectral remote sensing imagery for improved classification and mapping of vegetation.

  19. Petropoulos, George; Vadrevu, Krishna Prasad; Chariton (2012). Spectral Angle Mapper and object-based classification combined with hyperspectral remote sensing imagery for obtaining land use/cover mapping in a Mediterranean region.

  20. Pooja Kamavisdar, Sonam Saluja and Sonu Agrawal (2013). A survey on image classification approaches and techniques.

  21. K. Perumal and R. Bhaskaran (2010). Supervised classification performance of multispectral images.

  22. Bin Luo and Jocelyn Chanussot (2009). Unsupervised classification of hyperspectral images by using linear unmixing algorithm.
