- Open Access
- Authors : Yash S Asawa , Vignesh Balaji , Tejas Helwatkar
- Paper ID : IJERTV10IS050499
- Volume & Issue : Volume 10, Issue 05 (May 2021)
- Published (First Online): 07-06-2021
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Deep Ensemble Learning for Agricultural Land Mapping and Classification from Satellite Images
Yash S Asawa1*
1School of Computer Science and Engineering, VIT University,
607, Aravali Heights, RK Circle, Pulla Bhuwana, Udaipur, Rajasthan, 313001, India
Vignesh Balaji2
2 School of Computer Science and Engineering, VIT University,
11, Nainiappan Street, Mandaveli, Chennai, Tamil Nadu, 600028, India
Tejas Helwatkar3
3 School of Computer Science and Engineering,
VIT University, 34, Chakradhar Nagar, Nagpur, Maharashtra, 440024, India
Abstract: Agricultural Land Mapping and Classification are among the most challenging tasks in the agricultural domain. Accurate prediction of agricultural land type in developing countries ahead of sowing is central to preventing famine, improving food security, and the sustainable development of agriculture. Currently, leading agricultural land use prediction techniques mostly rely on locally sensed data, such as rainfall measurements and farmer surveys from field visits. Locally sensed data provide detailed information but are expensive to collect, often noisy, and extremely difficult to scale. Remote sensing and satellite imagery data, a cheap and globally-accessible resource, coupled with modern machine learning approaches offer a potential solution. In this paper, we present a framework that works with remote sensing and satellite imagery data to categorize land regions in terms of their agricultural capabilities in order to maximize efficiency and productivity. Improving on existing methods, we incorporate a deep ensemble learning approach that combines multiple deep learning methodologies to navigate the potentially huge parameter search spaces, looking for optimal parametric combinations and extracting the best out of the underlying CNN models. The model is actuated on satellite images acquired from the IKONOS Dataset, which is available in the public domain for research purposes.
Index Terms: Agricultural Land Mapping, Agricultural Land Classification, Agricultural Monitoring, Remote Sensing, Deep Learning, CNN, Satellite Imaging, ResNet, VGG-16, VGG-19.
INTRODUCTION
Agriculture has been at the very root of society since the beginning of human civilization. The farming revolution nearly 12,000 years ago paved the way for permanent settlement, allowing people to move towards a more stable, reliable and civilized lifestyle. Optimal survival conditions led to a surge in the human population and gave people the freedom to explore and develop innovations.
Agricultural land mapping describes how much of a global or regional area is covered by agricultural resources and human activity, such as forests, cropland, or other land types. With the recent advancement of satellites and remote sensing data providing constant access to such inputs, researchers have started focusing on land cover to better understand the features of Earth's surface.
Land cover information is presently utilized for various applications, for example weather forecasting, renewable energy planning, water control and supply, environmental analysis, and agricultural monitoring. Furthermore, it can be beneficial for disease control and disaster management. Likewise, land management authorities mainly use land cover data to monitor and inspect the use of land, which is why this research can be useful to society. Moreover, the human population continues to expand at unprecedented rates, which has brought the issues of food security and the impact of climate change to the forefront.
Given these real-world applications of agricultural land mapping, it is no wonder that a huge amount of research has been performed to produce accurate land cover datasets in many geographical locations and varying scales. The recent advancements in satellite imaging techniques have made it easier for data scientists to create more efficient and accurate prediction models. There has been a corresponding surge in the application of newer, more advanced and complex machine learning models to the domain.
Geospatial technologies like remote sensing acquire samples of electromagnetic radiation emitted and reflected by the earth's terrestrial, atmospheric and marine ecosystems. This sampling allows them to survey and identify physical attributes of a region without the need for any physical contact. These techniques are actuated through the use of satellite-based sensors, which can be active or passive depending on their operational requirements.
Remote sensing has made it possible for researchers to predict land type, crop yield and other meteorological and geological activities. These were previously unpredictable because of inaccessibility, the risk to human life, and feasibility-related complications in the manual collection of data from such locations. Several machine learning models have been implemented for land use classification research, but very few of them have used remote sensing data. Remote sensing is the future of global climate analytics, and that idea forms the crux of our paper.
This paper presents an ensemble framework consisting of multiple deep learning techniques to classify land utility and cover type. The model is realised on satellite image data acquired from the IKONOS Dataset, and we calculate Precision, Recall and F-Score metrics and Support values to quantify the performance of the models.
LITERATURE REVIEW
Land cover classification techniques for satellite imagery have been created and validated in many remote sensing studies. One such study [1] proposes a unique, fully automatic and cost-effective land cover classification (ALCC) method. This approach does not need prior knowledge of the land or the assignment of training classes beforehand. The ALCC technique is founded on unsupervised clustering algorithms carried out over the six Landsat-8 30 m spatial resolution bands and spectral index rasters. The main limitation of this model is the predetermined number of samples. Another paper [2] presents research on the improved use of polarization signatures for optimal land classification in mixed-sample situations. A decision tree is built on the optimal class boundaries to provide land cover classification. Although this method works relatively well for mixed-class scenarios, the accuracy is only around 75%, which is not impressive. A novel method [3] called the multimodal bilinear fusion network (MBFNet) merges SAR and optical features for land use classification. In MBFNet, the extracted fusion features are strongly discriminative, advancing land-cover classification. However, this research does not group land types according to the type of crops that can be planted. A study [4] used artificial intelligence along with a CNN to propose a new approach for land cover mapping. First, a CNN model is trained with a broad range of images to obtain the land cover model. The model is then directly fed satellite images, split into pictures identical in size to the training ones. The results of the model are not satisfactory because it mixes up forested areas and water. Another framework [5] was based on Spatial-Spectral Schroedinger Eigenmaps (SSSE) for automatic land cover classification, optimized using the Cuckoo Search (CS) method. A Support Vector Machine (SVM) was used for the final map generation after clustering and dimensionality reduction. The greatest drawback of this way of classification is that increased classification accuracy is obtained at the cost of computational efficiency. A supervised technique [6] for land cover classification needs prior information about the terrain and training classes to classify the satellite imagery. Several supervised approaches have been studied for this problem, such as the Maximum Likelihood Classifier (MLC). The limitation of this approach is the requirement for operator intervention, which slows down the processing chain. In a remote sensing framework [7] for quick and precise monitoring and classification of various land cover types, several spectral indices have been used. Spectral indices were also applied to ascertain areas where certain crop lands are prevalent. The drawback is that this method saturates at high class content, making it hard to differentiate relatively large plant cover from very large plant cover.
The data from the PALSAR-2 dataset was used to classify land [8] in a project based on the use of probability distribution functions (PDFs). The best PDF is chosen using separability index criteria and the Chi-Squared test. This classification approach has good accuracy; however, the three PDFs (log-normal, Weibull and normal) do not provide a well-separated result for splitting tall plant growth and modern samples. Most present land cover classification techniques use only single-modal remote sensing images; for instance, one approach using optical images has faced the spectral confusion issue, which lowers classification accuracy [9]. The authors of [10] discovered that a CNN performs classification with higher accuracy than Random Forest on a Landsat image dataset. In this research, the CNN was implemented using the Keras open-source library. Since there are many ways to build a CNN architecture, it is necessary to find an optimal framework that works well with any given data. However, classification using CNNs for land cover faces many limitations, such as the requirement for huge training image datasets. Multi-spectral Light Detection and Ranging (LiDAR), used by Suoyan Pan et al. in [11], can yield point clouds derived from several channels with variable wavelengths. Airborne LiDAR data produces relatively thorough and consistent spatial, spectral and geometric data, which contributes to the classification of land cover and land use. The data is classified into six different cover types: building, tree, water, grass, soil and road. In the CNN model used, the arguments are split into non-hyper-parameters and hyper-parameters; the training process aims to find the most suitable combination of model parameters. The performance of this CNN was better than traditional CNN models, e.g., AlexNet and the deep Boltzmann machine. [12] proposes a deep CNN architecture with an input layer, 5 convolutional layers and 5 successive fully connected layers, which automatically classifies outdoor mobile laser survey data. The architecture uses the spatial pyramid (SP) concept during voxelization of MLS samples to overcome the problem of several points being assigned to a single voxel in regions of high point density. The model was tested on 5 varying combinations of classes consisting of tree, non-tree and electric pole classes from the dataset. The paper [13] tests and evaluates a new approach to the classification of multispectral airborne LiDAR points. A two-step method is employed to classify over 5 million points; the return points are then classified using their three-channel intensities and height data. The SVM classifier performs exceptionally well, but the rule-based classification of multi-return points was not as successful because the recorded intensities are not reliable.
[14] puts forth a method to train transferable deep models, which allows land-cover classification using unlabeled multi-source remote sensing data and creates a combined land-use sorting scheme that concurrently extracts accurate class and edge data. A scale sequence joint deep learning method is proposed in [15]. This joint deep learning (JDL) method combines an object-based CNN and a Multilayer Perceptron (MLP), replacing the previous paradigm of scale selection by predicting land use with the object-based CNN and land cover with the MLP. This explicitly models the relationship between the predicted LU and LC variables as a joint distribution.
DATASET DESCRIPTION
In accordance with the idea of relying on remote sensing data, our dataset comprises raw satellite images acquired through the IKONOS Dataset. Based on insights from previous work, we chose to use this DigitalGlobe-operated high-resolution satellite. The sensors on the satellite are capable of capturing 4 m multispectral (including near-infrared) and 1 m panchromatic imagery, which can be combined in a wide range of ways to actuate various high-resolution imagery projects. The Optical Sensor Assembly for IKONOS was built by Kodak and has a 70 cm primary mirror aperture, a 10 m focal length achieved with 5 mirrors, and a honeycomb design for the main mirror to reduce mass. The dataset contains about 16,000 satellite images of land area, categorized into the 6 classes shown in Fig. 1 and Fig. 2.
Fig 1. IKONOS Dataset Images
Fig 2. IKONOS Class Distribution
PROPOSED METHOD
PROBLEM SETTING
The agricultural problem we aim to address is the classification and mapping of agricultural land for a specific region of interest, based on a series of satellite and remotely sensed pictures taken before the crop harvest. In particular, we attempt to predict the nature of agricultural land per unit area in a given geographical location. Accurate classification of crop land facilitates essential tasks like determining the optimal profile of crops to plant, allocating government resources, effective planning and preparation of aid distribution, and decision-making about imports and exports in more commercialized systems, making the existence of relevant systems imperative. The problem we are specifically targeting is handling the scarcity of training data.
We do that by employing a dimensionality reduction technique. Specifically, we use deep learning architectures, namely an Inception-based CNN, ResNet and VGG, to achieve significant results.
The goal is to create a framework that maps a series of such inputs to the type of agricultural land. This is possible because factors associated with the growth of the crop are naturally captured in the input pictures.
The combination of the deep learning models will allow appropriate optimization and enhance accuracy in scenarios where scarce labelled training data is available.
PREPROCESSING
Owing to the inadequacy of appropriate data for training, the unmediated application of deep learning models is unviable. The use of multi-spectral images also rules out employing conventional computer vision techniques for pre-training. We employ the Pillow imaging library to prepare raw images before they are fed into the main model. Pillow is a fork of PIL, short for Python Imaging Library, and offers many standard techniques for manipulating pictures.
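A minimal sketch of how Pillow might be used for this preparation step is shown below; the file paths, the RGB conversion and the 64x64 target size are illustrative assumptions rather than the exact pipeline used in the paper.

```python
# Minimal Pillow preprocessing sketch (illustrative; paths and sizes are assumptions).
from pathlib import Path

import numpy as np
from PIL import Image

TARGET_SIZE = (64, 64)  # assumed input resolution expected by the downstream CNNs

def load_tile(path: str) -> np.ndarray:
    """Open a raw satellite tile, force RGB, resize, and scale pixels to [0, 1]."""
    with Image.open(path) as img:
        img = img.convert("RGB")                       # drop alpha / palette modes
        img = img.resize(TARGET_SIZE, Image.BILINEAR)  # uniform spatial resolution
        return np.asarray(img, dtype=np.float32) / 255.0

if __name__ == "__main__":
    tiles = [load_tile(str(p)) for p in Path("ikonos/raw").glob("*.png")]
    batch = np.stack(tiles) if tiles else np.empty((0, *TARGET_SIZE, 3))
    print("batch shape:", batch.shape)  # (N, 64, 64, 3)
```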
2.1 PROCESSING RAW IMAGES
A vegetation mask is applied to the raw satellite images and informative pixel values are clipped in line with the framework requirements. The permutation-invariance assumption allows the inference that only distinct pixel types from an image provide illuminating and essential insights. This in turn eliminates the possibility of information loss when high-dimensional images are mapped to pixel-count data matrices. Individual data matrices are reshaped, padded and altered within the boundaries of logical reasoning to ensure mathematical consistency and an appropriate data supply to the various neural networks in our ensemble system.
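The sketch below illustrates one way this masking, clipping and padding step could look; the greenness-ratio mask and the quantile clipping bounds are assumptions, since the paper does not specify the exact thresholds.

```python
# Sketch of the mask/clip/reshape step (the greenness-ratio mask and the
# clipping bounds are assumptions; the paper does not specify exact thresholds).
import numpy as np

def vegetation_mask(tile: np.ndarray, ratio: float = 1.1) -> np.ndarray:
    """Boolean mask of pixels whose green band dominates red and blue."""
    r, g, b = tile[..., 0], tile[..., 1], tile[..., 2]
    return (g > ratio * r) & (g > ratio * b)

def to_feature_matrix(tile: np.ndarray, clip=(0.02, 0.98)) -> np.ndarray:
    """Clip extreme reflectance values and flatten masked pixels into a 2-D matrix."""
    lo, hi = np.quantile(tile, clip[0]), np.quantile(tile, clip[1])
    clipped = np.clip(tile, lo, hi)
    pixels = clipped[vegetation_mask(tile)]          # shape: (n_masked_pixels, 3)
    # Pad to a fixed row count so every image yields a matrix of the same shape.
    target_rows = tile.shape[0] * tile.shape[1]
    pad = np.zeros((target_rows - pixels.shape[0], 3), dtype=pixels.dtype)
    return np.vstack([pixels, pad])
```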
ENSEMBLE FRAMEWORK
We start by dividing the dataset into training and testing subsets in an 80:20 ratio, which gives us 12,800 images for training from the original dataset. We define an image generator and produce additional images through simple operations on the original image files to optimize training. The training and testing image generator modules also ensure that the images being fed to the model do not follow a preset pattern, so that overfitting is avoided. Thereafter, the generated image sets are reshaped and broken down into smaller subsets for computational reasons.
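A sketch of this split-and-augmentation step using Keras generators follows; the directory layout, 64x64 target size, batch size and the specific augmentation operations are assumptions.

```python
# Sketch of the split and augmentation pipeline, assuming the IKONOS tiles are
# stored in class-named sub-folders (directory names are illustrative).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,        # simple operations that generate extra images
    horizontal_flip=True,
    vertical_flip=True,
    validation_split=0.2,     # roughly 12,800 training / 3,200 test images
)

train_flow = datagen.flow_from_directory(
    "ikonos/images", target_size=(64, 64), batch_size=32,
    class_mode="categorical", subset="training", shuffle=True,
)
test_flow = datagen.flow_from_directory(
    "ikonos/images", target_size=(64, 64), batch_size=32,
    class_mode="categorical", subset="validation", shuffle=False,
)
```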
First, we train the foundational CNN model. The 3 main layer types (Convolutional, Pooling and Fully-Connected) are created, and we integrate the Inception module as a dimensionality reduction technique and to support backpropagation through the CNN model. The results generated and the weight values are stored in a separate file. Then we define the ResNet 50, ResNet 50V2 and ResNet 152V2 neural networks to combine multiple perspectives across levels. The three models and the results generated by them are also stored in the same file. These deep learning models are trained multiple times to enhance their accuracy, and we store the results from each iteration so that we can visualize the progressive improvement in accuracy. We plot the accuracy history for the ResNet models. We also calculate Precision, Recall and F-Score metrics and Support values to quantify the performance of the models.
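A minimal sketch of such a foundational CNN with an Inception-style block is given below; the filter counts, the 64x64x3 input shape, the 6-way softmax head and the reuse of the train_flow/test_flow generators from the previous sketch are all assumptions.

```python
# Minimal sketch of the base CNN with an inception-style block; filter counts,
# the 64x64x3 input shape and the 6-class softmax head are assumptions.
from tensorflow.keras import Input, Model, layers

def inception_block(x, f1=32, f3=32, f5=16, fp=16):
    """Parallel 1x1 / 3x3 / 5x5 / pooled branches; the 1x1 convs reduce dimensionality."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(
        layers.Conv2D(f3 // 2, 1, padding="same", activation="relu")(x))
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(
        layers.Conv2D(f5 // 2, 1, padding="same", activation="relu")(x))
    bp = layers.Conv2D(fp, 1, padding="same", activation="relu")(
        layers.MaxPooling2D(3, strides=1, padding="same")(x))
    return layers.Concatenate()([b1, b3, b5, bp])

inputs = Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)   # convolutional layer
x = layers.MaxPooling2D(2)(x)                         # pooling layer
x = inception_block(x)                                # inception module
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(6, activation="softmax")(x)    # fully-connected layer, 6 classes

base_cnn = Model(inputs, outputs)
base_cnn.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
history = base_cnn.fit(train_flow, validation_data=test_flow, epochs=10)  # generators from the sketch above
```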
We repeat the same process with the VGG-16 and VGG-19 models, implementing them, evaluating them, visualizing their progressive performance improvement, and storing the models as well as their results and trained weights in the same file. Storing the generated weights and the models in the same file is essential to reduce computational cost when the ensemble framework is deployed to the web interface for end-user convenience.
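The sketch below shows one way the pretrained backbones could be wrapped with a common classification head and persisted for later use by the ensemble; the frozen ImageNet weights, the file names and the epoch count are assumptions, and the generators come from the earlier sketch.

```python
# Sketch of wrapping the pretrained backbones with a shared 6-class head and
# saving each trained model; frozen-backbone choice and file names are assumptions.
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import ResNet50, ResNet50V2, ResNet152V2, VGG16, VGG19

BACKBONES = {
    "resnet50": ResNet50, "resnet50v2": ResNet50V2, "resnet152v2": ResNet152V2,
    "vgg16": VGG16, "vgg19": VGG19,
}

def build_classifier(backbone_fn, input_shape=(64, 64, 3), n_classes=6):
    base = backbone_fn(include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False                      # keep ImageNet features fixed
    inputs = Input(shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base(inputs))
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

for name, fn in BACKBONES.items():
    model = build_classifier(fn)
    model.fit(train_flow, validation_data=test_flow, epochs=10)  # generators from above
    model.save(f"{name}.h5")                    # weights and results kept for the ensemble
```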
Each model contributes to the ensemble system. The predictions made by individual models are stored and an array holds the number of occurrences belonging to each predicted class. The framework then displays the classification label corresponding to the highest value in the array, thereby following a majority-based system. The entire framework has been represented in the form of a diagram in Fig 3.
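A short sketch of this majority-voting step is given below; the class names match the six labels used in the results tables, while the model file names follow the hypothetical naming of the previous sketch.

```python
# Sketch of the majority-vote step: each stored model predicts a class and the
# label with the most votes wins (model file names follow the assumption above).
import numpy as np
from tensorflow.keras.models import load_model

CLASS_NAMES = ["Forest Land", "Herbaceous Vegetation Land", "High Yield Crops Land",
               "Permanent Crop", "Unsuitable Farm Land", "Vegetation Crop"]

def ensemble_predict(image_batch, model_paths):
    votes = np.zeros((image_batch.shape[0], len(CLASS_NAMES)), dtype=int)
    for path in model_paths:
        model = load_model(path)
        predicted = np.argmax(model.predict(image_batch), axis=1)
        for row, cls in enumerate(predicted):
            votes[row, cls] += 1                 # array of per-class occurrence counts
    return [CLASS_NAMES[i] for i in np.argmax(votes, axis=1)]  # majority label per image
```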
Fig 3. Ensemble Framework for Agricultural Land Mapping and Classification
VI. RESULTS AND DISCUSSION
RESULTS
RESNET 50
| Class / Metric | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| Forest Land | 0.00000 | 0.00 | 0.0000 | 600.0 |
| Herbaceous Vegetation Land | 0.19084 | 1.00 | 0.3205 | 600.0 |
| High Yield Crops Land | 0.00000 | 0.00 | 0.0000 | 400.0 |
| Permanent Crop | 0.00000 | 0.00 | 0.0000 | 500.0 |
| Unsuitable Farm Land | 0.00000 | 0.00 | 0.0000 | 500.0 |
| Vegetation Crop | 0.75000 | 0.06 | 0.1111 | 600.0 |
Fig 4. Accuracy vs Epoch and Loss vs Epoch Plots for ResNet 50 Model
Table 1. Evaluation Metrics for ResNet 50 Model
Fig 4 and Table 1 represent the accuracy and loss progression corresponding to the number of epochs and evaluation metric values for the ResNet 50 Model actuated on the IKONOS Dataset.
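The per-class Precision, Recall, F-Score and Support values reported in Tables 1-5 can be computed as sketched below with scikit-learn; this assumes the non-shuffled test_flow generator and a trained model from the earlier sketches.

```python
# Sketch of computing the per-class metrics reported in Tables 1-5 with
# scikit-learn (assumes the non-shuffled test_flow generator defined earlier).
import numpy as np
from sklearn.metrics import classification_report

y_true = test_flow.classes                        # ground-truth class indices
y_pred = np.argmax(model.predict(test_flow), axis=1)
print(classification_report(y_true, y_pred, target_names=list(test_flow.class_indices)))
```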
RESNET 50V2
Fig 5. Accuracy vs Epoch and Loss vs Epoch Plots for ResNet 50V2 Model
| Class / Metric | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| Forest Land | 0.90744 | 0.997 | 0.950 | 600.0 |
| Herbaceous Vegetation Land | 0.94746 | 0.902 | 0.924 | 600.0 |
| High Yield Crops Land | 0.95153 | 0.932 | 0.942 | 400.0 |
| Permanent Crop | 0.89391 | 0.910 | 0.902 | 500.0 |
| Unsuitable Farm Land | 0.95951 | 0.948 | 0.954 | 500.0 |
| Vegetation Crop | 0.97044 | 0.930 | 0.950 | 600.0 |
Table 2. Evaluation Metrics for ResNet 50V2 Model
Fig 5 and Table 2 represent the accuracy and loss progression corresponding to the number of epochs and evaluation metric values for the ResNet 50V2 Model actuated on the IKONOS Dataset.
RESNET 152V2
Fig 6. Accuracy vs Epoch and Loss vs Epoch Plots for ResNet 152V2 Model
| Class / Metric | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| Forest Land | 0.8237 | 0.997 | 0.902 | 600.0 |
| Herbaceous Vegetation Land | 0.9496 | 0.878 | 0.913 | 600.0 |
| High Yield Crops Land | 0.9753 | 0.890 | 0.931 | 400.0 |
| Permanent Crop | 0.8867 | 0.892 | 0.889 | 500.0 |
| Unsuitable Farm Land | 0.9155 | 0.932 | 0.924 | 500.0 |
| Vegetation Crop | 0.9705 | 0.877 | 0.921 | 600.0 |
Table 3. Evaluation Metrics for ResNet 152V2 Model
Fig 6 and Table 3 represent the accuracy and loss progression corresponding to the number of epochs and evaluation metric values for the ResNet 152V2 Model actuated on the IKONOS Dataset.
VGG 16
Fig 7. Accuracy vs Epoch and Loss vs Epoch Plots for VGG 16 Model
| Class / Metric | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| Forest Land | 0.9583 | 0.995 | 0.976 | 600.0 |
| Herbaceous Vegetation Land | 0.9681 | 0.962 | 0.965 | 600.0 |
| High Yield Crops Land | 0.9311 | 0.980 | 0.955 | 400.0 |
| Permanent Crop | 0.9319 | 0.958 | 0.945 | 500.0 |
| Unsuitable Farm Land | 0.9898 | 0.972 | 0.981 | 500.0 |
| Vegetation Crop | 0.9946 | 0.920 | 0.956 | 600.0 |
Table 4. Evaluation Metrics for VGG 16 Model
Fig 7 and Table 4 represent the accuracy and loss progression corresponding to the number of epochs and evaluation metric values for the VGG16 Model actuated on the IKONOS Dataset.
VGG 19
Fig 8. Accuracy vs Epoch and Loss vs Epoch Plots for VGG 19 Model
| Class / Metric | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| Forest Land | 0.8961 | 0.992 | 0.942 | 600.0 |
| Herbaceous Vegetation Land | 0.9537 | 0.927 | 0.941 | 600.0 |
| High Yield Crops Land | 0.9781 | 0.893 | 0.933 | 400.0 |
| Permanent Crop | 0.9347 | 0.916 | 0.925 | 500.0 |
| Unsuitable Farm Land | 0.9763 | 0.988 | 0.982 | 500.0 |
| Vegetation Crop | 0.9713 | 0.958 | 0.965 | 600.0 |
Table 5. Evaluation Metrics for VGG 19 Model
Fig 8 and Table 5 represent the accuracy and loss progression corresponding to the number of epochs and evaluation metric values for the VGG19 Model actuated on the IKONOS Dataset.
COMPARISON WITH EXISTING SYSTEMS
| MODEL | ACCURACY |
|---|---|
| RANDOM FOREST | 0.66 |
| RESNET 50 | 0.19875 |
| RESNET 50V2 | 0.9371875 |
| RESNET 152V2 | 0.9121875 |
| VGG 16 | 0.9634375 |
| VGG 19 | 0.9484375 |
| Maximum Likelihood | 0.762 |
| Neuro-Fuzzy | 0.856 |
| K-Means Clustering | 0.78 |
| Hölder exponent (HE)-Variance (VAR) and panchromatic (PAN) images | 0.7275 |
| Multi-circular local binary pattern (MCLBP) and Variance (VAR) | 0.7884 |
| PROPOSED ENSEMBLE FRAMEWORK | 0.9778375 |
Table 6. Comparison of Accuracy for Models
Individually, VGG 16 achieves the highest accuracy under constrained circumstances, but these individual models fail to retain their high accuracy levels beyond a certain epoch configuration and plateau after attaining their peak. Realistic prediction workloads are likely to exceed that configuration, which is why we use our proposed ensemble system, which maintains accuracy levels over 95% irrespective of the parameters presented.
WEB INTERFACE
We design a web interface for deploying our model, where the user can provide an input satellite image and the ensemble framework runs in the backend, classifying it into one of the six land types. This result is then used to display the type of crops best suited to the corresponding land type. The interface design and model deployment are done using C#, HTML and CSS through Visual Studio. A SQL database is used to store user credentials and previously obtained results to enhance usability. We have also added additional security features to ensure the safety of user data. The system allows users to manipulate and download their generated results, with unlimited attempts, for future researchers trying to understand the working of the model in different use-cases. The interface comprises the Homepage, Registration, Login, Results and Account Management pages. Fig. 12, 13 and 14 present different screens designed for the end-user as part of the web interface.
Fig 12. Homepage of the Web Interface
Fig 13. Response Database
Fig 14. Interface for Image Addition
CONCLUSION
In this research, with the aim of improving present classification methods, we present an ensemble framework using deep learning techniques, namely CNN, ResNet and VGG, to classify land utility and cover type. The proposed system employs the Inception module as a dimensionality reduction technique within the ensemble architecture and is actuated on 16,000 satellite images of land area obtained from the IKONOS dataset, categorized into six classes of croplands. In terms of accuracy evaluation, the proposed model proved to be superior to the previous approaches. The proposed method is easy to implement and has great application capability and practical usefulness.
FUTURE WORK
Remote Sensing has enormous potential as the primary tool for data procurement to resolve some of the most acute and pressing environmental concerns of our time. Pairing the acquired data with deep learning models that can be applied seamlessly to new, untouched territories will allow researchers to derive significant insights, save millions of lives, and prevent further harm to nature. Drawing inferences from this paper, data engineers and researchers can employ other deep learning algorithms within the ensemble system to achieve breakthrough performance in their specific domain. Going forward, the Inception module can be replaced with a newer, more efficient dimensionality reduction technique. Researchers can also devise a different approach to determine the final result of the ensemble system, instead of the majority-based approach, according to the domain and the problem they've chosen. Remote Sensing is evolving and providing more capable instruments with time. Researchers can consider incorporating additional features and developing corresponding extraction techniques to allow the models to make more informed decisions with multiple relevant features being considered.
REFERENCES
[1] Gašparović, M., Zrinjski, M., & Gudelj, M. (2019). Automatic cost-effective method for land cover classification (ALCC). Computers, Environment and Urban Systems, 76, 1-10.
[2] Phartiyal, G. S., Kumar, K., & Singh, D. (2020). An improved land cover classification using polarization signatures for PALSAR 2 data. Advances in Space Research, 65(11), 2622-2635.
[3] Li, X., Lei, L., Sun, Y., Li, M., & Kuang, G. (2020). Multimodal bilinear fusion network with second-order attention-based channel selection for land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, 1011-1026.
[4] Emparanza, P. R., Hongkarnjanakul, N., Rouquette, D., Schwob, C., & Mezeix, L. (2020). Land cover classification in Thailand's Eastern Economic Corridor (EEC) using convolutional neural networks on satellite images. Remote Sensing Applications: Society and Environment, 20, 100394.
[5] Suresh, S., & Lal, S. (2020). A metaheuristic framework based automated Spatial-Spectral graph for land cover classification from multispectral and hyperspectral satellite images. Infrared Physics & Technology, 105, 103172.
[6] Gašparović, M., & Jogun, T. (2018). The effect of fusing Sentinel-2 bands on land-cover classification. International Journal of Remote Sensing, 39(3), 822-841.
[7] Estoque, R. C., & Murayama, Y. (2015). Classification and change detection of built-up lands from Landsat-7 ETM+ and Landsat-8 OLI/TIRS imageries: A comparative assessment of various spectral indices. Ecological Indicators, 56, 205-217.
[8] Jain, A., & Singh, D. (2019). An optimal selection of probability distribution functions for unsupervised land cover classification of PALSAR-2 data. Advances in Space Research, 63(2), 813-825.
[9] Xu, L., Zhang, H., Wang, C., & Liu, M. (2019). Crop classification based on temporal information using Sentinel-1 SAR time-series data. Remote Sensing, 11(1), 723-731.
[10] Yoo, C., Han, D., Im, J., & Bechtel, B. (2019). Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using Landsat images. ISPRS Journal of Photogrammetry and Remote Sensing, 157, 155-170.
[11] Pan, S., Guan, H., Chen, Y., Yu, Y., Gonçalves, W. N., Marcato Junior, J., & Li, J. (2020). Land-cover classification of multispectral LiDAR data using CNN with optimized hyper-parameters. ISPRS Journal of Photogrammetry and Remote Sensing.
[12] Kumar, B., Lohani, B., & Pandey, G. (2018). Development of deep learning architecture for automatic classification of outdoor mobile LiDAR data. International Journal of Remote Sensing. https://doi.org/10.1080/01431161.2018.1547929
[13] Ekhtari, N., Glennie, C., & Fernandez-Diaz, J. C. (2018). Classification of airborne multispectral lidar point clouds for land cover mapping. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
[14] Tong, X.-Y., Xia, G.-S., Lu, Q., Shen, H., Li, S., You, S., & Zhang, L. (2019). Land-cover classification with high-resolution remote sensing images using transferable deep models.
[15] Zhang, C., Harrison, P. A., Pan, X., Li, H., Sargent, I., & Atkinson, P. M. (2019). Scale Sequence Joint Deep Learning (SS-JDL) for land use and land cover classification.
AUTHOR BIOGRAPHIES
Yash S Asawa is currently pursuing his Bachelor's degree in Computer Science and Engineering from Vellore Institute of Technology, India. He has previously published papers on Multi-Document Text Summarization Techniques and a User-Specific Safe Route Recommendation System. His research interests include Machine Learning, Artificial Intelligence, Natural Language Processing and Automated Financial Systems.
Vignesh Balaji is currently pursuing his Bachelor's degree in Computer Science and Engineering from Vellore Institute of Technology, Vellore, India. His previous works include a paper on Multi-Document Text Summarization Techniques. His research interests include Image Processing, Artificial Intelligence and Natural Language Processing.
Tejas V Helwatkar is currently pursuing his Bachelor's degree in Computer Science and Engineering from Vellore Institute of Technology, India. He has previously worked on various projects in the Software Engineering and Machine Learning domains. His research interests include Machine Learning, Distributed Systems, Software Engineering and Web Security.