Crop (Cotton) Health Monitoring Using Computer Vision and AI/ML

DOI: 10.17577/IJERTV13IS010040




Sagar Gokhale
Department of Information Technology, Pune Institute of Computer Technology, Pune, India

Riya Pendse
Department of Information Technology, Pune Institute of Computer Technology, Pune, India

Harsh Chaudhari
Department of Information Technology, Pune Institute of Computer Technology, Pune, India

Rohit Kulkarni
Department of Information Technology, Pune Institute of Computer Technology, Pune, India

Dr. Jayashree Jagdale
Department of Information Technology, Pune Institute of Computer Technology, Pune, India

Abstract: With cotton being an essential cash crop in the worldwide market, crop health monitoring is crucial for guaranteeing the productivity and sustainability of agricultural practices. This abstract presents a novel method of monitoring the health of cotton crops through the use of deep learning techniques, particularly image classification. Convolutional Neural Networks (CNNs), one type of deep learning model, are utilized by the suggested system to analyze high-resolution images of cotton fields. These models are trained on large datasets of both damaged and healthy cotton plants, which helps the system become more adept at distinguishing between various health conditions. This makes it possible to identify diseases, pests, nutrient shortages, and environmental stresses affecting cotton crops early on. Farmers are empowered to make well-informed decisions regarding focused interventions by the models' ability to identify specific diseases and provide severity assessments. There are numerous benefits to using deep learning for cotton crop health monitoring: precision agriculture practices are facilitated, assessments are made quickly and accurately, and the need for manual visual examination is reduced. Consequently, producers are able to reduce crop loss, optimize resource allocation, and support sustainable cotton agriculture. Incorporating deep learning approaches into crop health monitoring improves cotton yield while also supporting the larger objectives of data-driven, ecologically sustainable agriculture. This method creates a path towards effective, contemporary farming practices and gives farmers the power to make data-driven, well-informed decisions.

Index Terms: Crop health monitoring, cotton, Artificial Intelligence (AI), Machine Learning (ML), diseases

  1. INTRODUCTION

    Effective cotton crop health monitoring is imperative for ensuring optimal yield and sustainable agriculture practices. The amalgamation of Computer Vision and Artificial Intelligence/Machine Learning (AI/ML) has emerged as a transformative force in revolutionizing this critical process. Through the integration of sophisticated image processing techniques, these technologies provide a platform for real-time assessment of crop conditions. This survey delves into the synergy of Computer Vision and AI/ML in cotton crop health monitoring, shedding light on cutting-edge methodologies that propel agricultural innovation. It addresses the field's inherent challenges while envisioning future prospects. Key algorithms, including Convolutional Neural Networks (CNNs), ResNet, and ensemble methods such as Random Forest and Gradient Boosting, take center stage. This fusion of technological capability and agricultural knowledge not only amplifies productivity but also charts a course towards precision farming, fostering a more resilient and efficient agricultural landscape poised for sustainable growth. In an era where the intersection of advanced technology and agricultural science is pivotal, this interdisciplinary approach holds the promise of addressing global food security challenges and enhancing the livelihoods of farmers worldwide.

  2. LITERATURE SURVEY

    1. Existing Methodologies

      1. CNNs: Convolutional Neural Networks (CNNs) play a pivotal role, as these deep learning models excel at recognizing intricate patterns within images and offer superior accuracy compared to traditional methods. CNNs enhance the efficiency and reliability of disease detection, contributing to the project's overarching goal of optimizing cotton crop health. A minimal illustrative sketch of such a classifier is given after this list.

        Fig. 1. CNN Architecture

        The figure illustrates feature extraction in a convolutional neural network: convolutional filters extract local features from the image, pooling layers downsample the resulting feature maps, and the combined features are passed on for classification.

      2. Neuro-fuzzy: Neuro-fuzzy approaches combine the adaptability of neural networks with the interpretability of fuzzy logic. In cotton disease identification, this method integrates data-driven learning with linguistic rules, providing a nuanced understanding of complex image patterns. The neuro-fuzzy approach enhances precision, contributing to accurate and insightful diagnoses in agricultural applications.

        Fig. 2. Neuro-Fuzzy Architecture

        The figure shows the basic architecture of a neuro-fuzzy system, which combines the strengths of neural networks and fuzzy logic to create intelligent systems that can learn and adapt to new data. The system fuzzifies the input data, generates a fuzzy output, and defuzzifies that output to produce a crisp value.

      3. YOLOv3: YOLOv3, a real-time object detection system, is a game-changer in identifying cotton diseases. Processing images in a single pass, it swiftly and accurately detects lesions on cotton leaves. Using a Convolutional Neural Network (CNN) backbone, YOLOv3 efficiently analyzes image patterns, predicting bounding boxes and class probabilities simultaneously. This enhances the project's monitoring and management of cotton crop health.

        Fig. 3. YOLOv3 Architecture

        The figure shows the pipeline of the YOLO object detection algorithm. YOLO is a single-shot detector, meaning it detects objects in a single pass through the image: it divides the image into a grid, predicts bounding boxes and confidence scores per cell, keeps the boxes with the highest scores, and removes overlapping boxes (non-maximum suppression).

      4. SVM: Support Vector Machines (SVMs) contribute to cotton crop health by classifying intricate patterns in leaf images. In the context of disease identification, an SVM analyzes visual features, aiding in the precise detection of abnormalities. Its application enhances overall crop health monitoring, providing a valuable tool for swift and accurate assessment in agricultural settings.

      Fig. 4. SVM Architecture

      The figure shows the basic structure of a support vector machine classifier: training samples are mapped into a feature space, and the algorithm finds the separating hyperplane with the maximum margin between classes, with the samples closest to the boundary (the support vectors) defining the decision surface.
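As referenced in the CNN entry above, the following is a minimal sketch of such an image classifier. The framework (TensorFlow/Keras), the 224x224 input resolution, the layer sizes, and the binary healthy/diseased output are assumptions made for illustration and are not taken from the surveyed papers.

```python
# Minimal illustrative CNN for binary leaf classification (healthy vs. diseased).
# Layer sizes and input resolution are assumptions, not values from the surveyed work.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_leaf_cnn(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional filters extract local features from the image.
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),            # downsample the feature maps
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Fully connected layers combine the extracted features for classification.
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of the positive class (e.g. "diseased")
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_leaf_cnn()
model.summary()
```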

    2. Research Gap Analysis

    1. Image Acquisition and Processing: The system should be able to capture high-resolution images of cotton plants in the field. It must process images to extract relevant features for disease identification.

    2. Disease Detection and Classification: The system should employ computer vision and AI to detect and classify diseases in cotton leaves. It must differentiate between healthy and diseased plants.

    3. Real-Time Monitoring: The application should provide real-time monitoring of the cotton field, allowing continuous disease assessment.

    4. Disease Severity Assessment: The system should assess the severity of disease infestations, providing insights into the extent of damage.

    5. Limited Focus on Cotton: Most existing studies primarily concentrate on other crops, such as fruits and vegetables. There is a need for more research that specifically addresses cotton disease identification.

    6. Integration of Multiple Technologies: While computer vision and AI are used, the integration of other emerging technologies like robotics and drones for real-time data collection and treatment application remains an under-explored area.

    7. Efficiency and Real-Time Solutions: Many studies have established the effectiveness of AI in disease identification, but there is a gap in the development of real-time, on-field solutions that can provide immediate insights to farmers for timely disease management.

    8. Scalability: Scalability of these AI-based solutions in real-world, large-scale cotton farming environments is another critical gap that needs to be addressed. The transition from lab-based experiments to practical field applications poses unique challenges.

    9. Data Privacy and Security: As these systems involve the collection and analysis of sensitive agricultural data, there is a research gap in addressing data privacy and security concerns in this context.

  3. METHODOLOGY

    1. Image Acquisition

      High-quality image acquisition is crucial for the accuracy of subsequent image processing steps in assessing the quality of the cotton crop. We prioritize the selection of a suitable camera or imaging device based on criteria such as resolution, focal length, and color capabilities, tailored to the nuanced requirements of capturing detailed images of cotton crops. The strategic positioning of the camera ensures an unobstructed view of the cotton plants, while controlled lighting conditions are implemented to eliminate harsh shadows, reflections, or overexposed areas that could compromise accuracy. Calibration of the camera corrects for lens distortions and other optical imperfections. Image capture is configured to be triggered either manually or synchronized with the motion of the cotton plants, depending on the setup. Rigorous quality control checks are integral, detecting issues like blurriness or distortion. Metadata and timestamps are meticulously attached to each image, offering traceability and facilitating data analysis. Continuous monitoring and maintenance uphold consistent image quality, forming the reliable foundation for subsequent image processing and analysis in our research.
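As a rough illustration of this acquisition step, the snippet below captures frames with OpenCV, applies a simple sharpness check, and stores each accepted frame with a timestamp in its filename. The camera index, requested resolution, and blur threshold are assumed values, not the settings used in our deployment.

```python
# Illustrative image acquisition loop; camera index, resolution and the
# sharpness threshold are assumed values, not those of the actual deployment.
import cv2
import time

cap = cv2.VideoCapture(0)                         # assumed camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)           # request a high-resolution stream
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

def is_sharp(frame, threshold=100.0):
    """Variance of the Laplacian as a simple blur/quality check."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

for _ in range(10):                               # capture a short burst of frames
    ok, frame = cap.read()
    if not ok:
        break
    if is_sharp(frame):
        # A timestamp in the filename gives basic traceability for later analysis.
        filename = f"cotton_{int(time.time() * 1000)}.jpg"
        cv2.imwrite(filename, frame)

cap.release()
```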

    2. Image Preprocessing

      1. Artifact Removal and Noise Reduction: OpenCV functions, such as morphological operations and Gaussian blur, are employed to remove artifacts and reduce noise in the acquired images. This ensures that the data is clean and free from unwanted distortions.

      2. Color Normalization: Color manipulation functions are utilized for normalizing color variations within the images, ensuring consistent representation of cotton crops across the dataset.

      3. Data Augmentation: Techniques such as rotation, flipping, and slight variations in lighting conditions are applied to augment the dataset, enhancing the model's robustness. A brief sketch covering these preprocessing steps is given below.
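The sketch below strings the three steps together, assuming OpenCV and TensorFlow/Keras; the kernel sizes and augmentation ranges are illustrative choices rather than the exact settings of our pipeline.

```python
# Illustrative preprocessing: noise/artifact removal, colour normalisation,
# and augmentation. Kernel sizes and augmentation ranges are assumed values.
import cv2
import numpy as np
import tensorflow as tf

def preprocess(image_bgr):
    # 1. Artifact removal and noise reduction.
    blurred = cv2.GaussianBlur(image_bgr, (5, 5), 0)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, kernel)

    # 2. Colour normalisation: scale pixel intensities to [0, 1].
    normalised = cleaned.astype(np.float32) / 255.0
    return normalised

# 3. Data augmentation: rotation, flipping and slight brightness changes.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20,
    horizontal_flip=True,
    vertical_flip=True,
    brightness_range=(0.8, 1.2),
)
```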

    3. Feature Extraction

      Based on the research objectives and the characteristics of cotton, a set of relevant features is selected for extraction. These features may include size, shape, color, texture, and potential defects. Size and Shape: Geometric characteristics, such as the size and shape of the cotton, are quantified; features like area, perimeter, circularity, and aspect ratio may be calculated. Color Features: Color attributes are assessed to determine color uniformity and deviations from the reference image; metrics like color histograms, mean color values, or color distribution patterns may be used. Texture Analysis: Texture features are extracted to evaluate the surface characteristics of the cotton, including texture patterns, coarseness, or smoothness. Defect Detection: Features related to potential defects are identified and quantified; these may include the number, size, and location of defects such as bruises, spots, or discolorations. The extracted features are organized into a feature vector, which serves as a comprehensive representation of the attributes of the cotton. This feature vector is used for further analysis and quality assessment.
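One way such a feature vector might be assembled is sketched below; the Otsu-based segmentation and the Laplacian-variance texture measure are assumptions made for the sketch, not the definitive feature set used in this work.

```python
# Illustrative hand-crafted feature extraction (size/shape, colour, texture).
# Segmentation strategy and texture measure are assumptions for this sketch.
import cv2
import numpy as np

def extract_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Size and shape: segment the largest object and measure its geometry.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:                      # no object found in the frame
        return None
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    perimeter = cv2.arcLength(largest, True)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-6)

    # Colour features: mean colour and a coarse hue histogram.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mean_bgr = cv2.mean(image_bgr)[:3]
    hue_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).flatten()

    # Texture: variance of the Laplacian as a rough coarseness/smoothness measure.
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Assemble everything into a single feature vector.
    return np.concatenate([[area, perimeter, circularity, texture], mean_bgr, hue_hist])
```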

    4. Residual Networks (ResNets)

      Residual Networks (ResNets) are deep neural networks renowned for their effectiveness in training very deep models. These networks introduce skip connections, or residual connections, to address the vanishing gradient problem encountered in training deep networks. The key concept involves learning residual functions, allowing information to flow through the network more effectively. ResNets, available in various depths such as ResNet50, ResNet101, and ResNet152, excel at capturing intricate patterns in images. In our project, they are proposed for identifying complex patterns in cotton leaves to facilitate disease identification. A simplified residual block is sketched after the two variants described below.

      Fig. 5. ResNet Architecture

      The figure shows the architecture of a residual block in ResNet-50-vd. A residual block is the building block of ResNet networks. It is designed to help the network learn more complex features by adding the input of the block to the output of the block. These residual connections act as shortcuts that bypass some of the layers in the network.

      1. ResNet50

        Depth: ResNet50 is a shallower variant of the ResNet architecture and consists of 50 layers, composed of convolutional layers, pooling layers, fully connected layers, and skip connections (residual blocks). Architecture: It uses residual blocks with shortcuts that facilitate the flow of information, allowing gradients to propagate more effectively in deeper layers. Applications: ResNet50 is widely employed in various image recognition tasks due to its balance between depth and computational efficiency. It is suitable for tasks where a moderately deep network is required.

      2. ResNet152

      Depth: ResNet152 is a deeper variant of ResNet and comprises 152 layers. The increased depth allows for a more intricate learning of features from data. Architecture: Similar to ResNet50, it employs residual blocks, but with a greater number of layers, enabling it to capture more intricate patterns in images. Applications: ResNet152, being deeper, tends to capture more complex features and nuances in images, making it suitable for more demanding tasks where a more comprehensive understanding of image details is necessary.
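The residual block underlying both variants can be written compactly, as in the Keras sketch below; this is a simplified identity block illustrating the skip-connection idea, not the exact ResNet-50-vd block of Fig. 5.

```python
# Generic residual (identity) block: the input is added to the block's output.
# A simplified illustration, not the exact ResNet-50-vd variant shown in Fig. 5.
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, filters, kernel_size=3):
    shortcut = x                                   # save the input for the skip path
    y = layers.Conv2D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                # residual (skip) connection
    return layers.Activation("relu")(y)

# Example: apply one residual block to a 56x56x64 feature map.
inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = identity_block(inputs, filters=64)
block = tf.keras.Model(inputs, outputs)
```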

    5. Transfer Learning

      Transfer learning is a technique that involves leveraging knowledge acquired from solving one problem to address a related problem. For our project, transfer learning with pre-trained ResNet models from datasets like ImageNet offers a shortcut to expedite model training. By using a pre-trained ResNet model and fine-tuning it with a smaller dataset containing cotton leaf images, we aim to harness its learned features for specific disease identification in cotton crops. This approach minimizes the need to train the model from scratch, saving computational resources and time while enhancing the model's accuracy and performance.
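A possible realization of this strategy is sketched below: a ResNet50 base with ImageNet weights is frozen and a small classification head is trained on the cotton-leaf images. The framework, head size, and learning rate are assumptions made for illustration.

```python
# Transfer learning sketch: pre-trained ResNet50 base + small trainable head.
# Head size and learning rate are assumed values for illustration.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False                             # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs. diseased
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# After the head converges, a few top layers of `base` can be unfrozen
# and fine-tuned with a lower learning rate.
```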

    6. Quality Assessment

      Quality assessment is a critical facet of our methodology, ensuring the reliability of acquired images for cotton crop health analysis. Rigorous checks are implemented to detect and rectify issues such as blurriness, distortion, or overexposure. Metrics like image sharpness and clarity are computed to quantify visual quality. OpenCV's edge detection algorithms aid in identifying and addressing blurriness, while histogram analysis is employed to ensure optimal exposure levels. Additionally, contrast and brightness metrics are assessed to maintain consistent visual characteristics. Regular quality control checks involve both visual inspection and quantitative assessments, ensuring that the dataset remains free from anomalies. This meticulous quality assessment not only upholds the integrity of the dataset but also contributes to the overall robustness of subsequent deep learning models in accurately gauging the health of cotton crops.
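These checks could be approximated with a few OpenCV/NumPy metrics, as in the sketch below; the thresholds shown are placeholders and were not validated in this study.

```python
# Illustrative image-quality checks: blur, exposure, contrast, brightness.
# The thresholds below are placeholders, not values validated in this study.
import cv2
import numpy as np

def assess_quality(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()        # low value -> blurry
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).flatten()
    hist = hist / hist.sum()
    clipped = hist[:5].sum() + hist[-5:].sum()               # mass at intensity extremes
    brightness = gray.mean()
    contrast = gray.std()

    return {
        "sharp": sharpness > 100.0,
        "well_exposed": clipped < 0.05,          # little under/over-exposure
        "brightness_ok": 60 < brightness < 200,
        "contrast_ok": contrast > 30,
    }
```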

    7. Feedback, Reporting, and Continuous Improvement

    To maintain the accuracy and effectiveness of the quality assessment and grading process, a feedback mechanism is established. Operators or users of the system have the opportunity to provide feedback on the results and any discrepancies or issues they may encounter. Feedback can be sourced from human operators involved in the process, automated systems, or expert evaluators. Users are encouraged to report any inconsistencies or errors identified during quality assessment or grading. The results of quality assessment and grading, along with any feedback received, are reported comprehensively. Reports can include statistics on the distribution of quality grades, trends in quality deviations, and details on issues or defects observed. Visual representations of data, such as histograms, charts, and images, are often included in reports to provide a clear overview of the quality assessment outcomes. Process parameters, such as lighting conditions, camera settings, or feature extraction methods, can be adjusted and optimized to enhance the overall quality assessment. In industrial settings, feedback can trigger quality control actions, such as the adjustment of sorting parameters, recalibration of equipment, or corrective actions to address specific issues.

  4. DATASETS

    The dataset employed in this study is structured into three main folders: train, val, and test, reflecting the conventional division for training, validation, and testing purposes. Within each of these folders, two subfolders are organized, representing the distinct categories: fresh cotton and diseased cotton. Each category encompasses a curated collection of 50 to 100 images, capturing a diverse range of visual features pertinent to the health assessment of cotton crops. This balanced distribution ensures a comprehensive representation of both healthy and diseased instances, fostering robust training and evaluation of the deep learning models. The organization of the dataset aligns with best practices in machine learning, facilitating effective model training, validation, and testing, and ultimately contributing to the reliability and generalization of our study's findings.

    Link to Dataset: https://drive.google.com/drive/folders/1vdr9CC9ChYVW2iXp6PlfyMOGD-4Um1ue
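Given this train/val/test layout with one subfolder per category, the data can be loaded along the following lines; the directory paths, image size, and batch size are assumptions made for the sketch.

```python
# Loading the train/val/test folders described above with Keras generators.
# Directory paths, image size and batch size are assumptions for this sketch.
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)

train_gen = datagen.flow_from_directory(
    "dataset/train", target_size=(224, 224), batch_size=16, class_mode="binary"
)
val_gen = datagen.flow_from_directory(
    "dataset/val", target_size=(224, 224), batch_size=16, class_mode="binary"
)
test_gen = datagen.flow_from_directory(
    "dataset/test", target_size=(224, 224), batch_size=16, class_mode="binary",
    shuffle=False,
)
# Class indices (e.g. {'diseased cotton': 0, 'fresh cotton': 1}) are inferred
# from the subfolder names.
print(train_gen.class_indices)
```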

    Furthermore, to enrich the dataset, images captured directly by the monitoring camera have been seamlessly integrated. These additional images provide real-world authenticity, reflecting the variations and nuances encountered during actual crop health monitoring. The inclusion of these camera-captured images enhances the dataset's representativeness, ensuring that the deep learning models are exposed to a broader range of scenarios encountered in the field. This thoughtful expansion of the dataset contributes to the overall robustness and realism of our study's approach to cotton crop health monitoring.

  5. RESULTS

    In the implementation phase of our study, an integrated camera system was deployed to run the trained deep learning model. The primary objective of this phase was to perform real-time classification of cotton crops in the field, effectively distinguishing between fresh cotton and diseased cotton. The success of this classification task constitutes a crucial component of our results.

    The integrated camera system, working in tandem with the deep learning model, autonomously assessed the health status of cotton plants as they were encountered during monitoring. The model's capability to identify and discard diseased cotton in real time was a significant achievement. This practical application validated the model's accuracy and efficiency, ultimately contributing to the realization of a functional tool for cotton crop health monitoring in a real-world context.
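A simplified version of this real-time classification loop is sketched below; the model path, camera index, preprocessing, and decision threshold are assumed for illustration, and the deployed system may differ.

```python
# Simplified real-time classification loop on the integrated camera feed.
# Model path, camera index, preprocessing and threshold are assumed values.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("cotton_health_model.h5")  # hypothetical path
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the preprocessing used at training time (resize + rescale).
    resized = cv2.resize(frame, (224, 224))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    prob = float(model.predict(rgb[np.newaxis, ...], verbose=0)[0][0])

    # Interpretation of the sigmoid output depends on the class mapping used
    # during training; here "diseased" is assumed to be the positive class.
    label = "diseased" if prob > 0.5 else "fresh"
    cv2.putText(frame, f"{label} ({prob:.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    cv2.imshow("cotton health", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press 'q' to stop monitoring
        break

cap.release()
cv2.destroyAllWindows()
```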

  6. CONCLUSION

In conclusion, this survey paper has embarked on a comprehensive exploration of the convergence of deep learning techniques and their applications in the realm of cotton crop health monitoring. We have delved into the forefront of agricultural technology, uncovering state-of-the-art methodologies while acknowledging the inherent challenges. Throughout this survey, the pivotal role of Convolutional Neural Networks (CNNs) has been underscored, emphasizing their unparalleled capacity to discern intricate patterns within images. Their deployment promises remarkable accuracy in the detection of diseases and anomalies in cotton crops, advancing the agricultural industry. We have also meticulously outlined image acquisition methodologies, placing a strong emphasis on high-quality imaging to ensure the dataset's integrity. The subsequent data preprocessing pipeline, empowered by OpenCV, has been fine-tuned to enhance the robustness of our analyses. Quality assessment measures have been rigorously implemented, further reinforcing the trustworthiness of our dataset. As we navigate the evolving landscape of precision agriculture, the synthesis of diverse methodologies presented in this survey sets the stage for the advancement of sustainable and data-driven practices. By embracing innovative technologies, we collectively pave the way for a resilient agricultural future, where the synergy of deep learning and cotton crop health monitoring optimizes productivity and cultivates a more sustainable and efficient agricultural landscape.
