Leaf Recognition Using Feature Point Extraction and Artificial Neural Network

DOI : 10.17577/IJERTV2IS1377


S. R. Deokar
Rajarshi Shahu College of Engg., Buldana

P. H. Zope
S. S. B. T. College of Engg. & Tech., Jalgaon

S. R. Suralkar
S. S. B. T. College of Engg. & Tech., Jalgaon

Abstract

The proposed system performs leaf recognition using feature point extraction and an artificial neural network (ANN). The feature points are extracted with reference to the geometric centre of the leaf image, and the input leaf image is compared with previously trained leaf images. If the feature points of the input image match those of a trained leaf, the name of the plant is displayed; otherwise the system reports a detection failure. The objective of this work is to identify the input leaf accurately. Two feature extraction schemes are used, with 28 and 60 feature points respectively. As the number of feature points increases, the recognition rate decreases, because the added complexity increases the time required for training and testing. A comparative analysis has been made along three lines: the first compares the 28- and 60-point schemes with respect to recognition rate, the second compares the time required for feature extraction and training, and the third compares different hidden-layer sizes. The results obtained by this algorithm are quite impressive, and unknown leaf samples are also rejected to a great extent.

Keywords: Image Preprocessing, Geometric Centre, Feature Points Extraction, Artificial Neural Network.

  1. Introduction

    The proposed leaf recognition system uses feature point extraction and an artificial neural network. Feature point extraction is one of the most important techniques used in research on personal identification; the proposed system applies it to leaf recognition. Plants are an integral part of all natural life, and many different types of plant exist in nature, carrying significant information for the development of human society. Plants can be recognized by their leaves, fruits and flowers. Plant leaf classification finds application in botany and in the tea, coffee, cotton, tobacco, turmeric, health and other industries. However, recognizing the plant species on earth is an important and difficult task [2]. Ayurveda is one of the great gifts of ancient India to mankind and one of the oldest scientific medical systems in the world, with a long record of clinical experience. The plants whose leaves, fruits and flowers are used as medicine mostly grow in forests and are very difficult to recognize; if the wrong plant is chosen for medical treatment, it can cause serious problems. Computer vision techniques can solve this problem in a way similar to human experts, by analyzing the leaves.

  2. Literature Survey

    Object shape matching functions, color-based classifiers, reflectance-based classifiers and texture-based classifiers are some of the common methods that have been tried in the past. Many researchers have attempted to identify plant leaves using techniques that are briefly reviewed below. Tian et al. developed a machine vision system to detect and locate tomato seedlings and weed plants in a commercial agricultural environment [20]. Guyer et al. implemented an algorithm to extract plant/leaf shape features using information gathered from critical points along object borders, such as the location of angles along the border and/or local maxima and minima from the plant leaf centroid [24]. Woebbecke et al. developed a vision system using shape features for identifying young weeds [22]. Franz et al. identified plants based on individual leaf shape, described by the curvature of the leaf boundary at two growth stages [25]. Thompson et al. suggested that plant shape features might be necessary to distinguish between monocots and dicots for intermittent or spot spraying [27]. All of the above methods are based on object shape matching. Other researchers used colour-based techniques. Kataoka et al. developed an automatic system for detecting apples ready for harvest, for application in robotic fruit harvesting [19]. Woebbecke et al. developed a vision system using color indices for weed identification under various soil, residue and lighting conditions [30]. Ninoyama and Shigemori analyzed binary images of whole soybean plants viewed from the side [26]. A weed detection system for Kansas wheat was developed using color filters by Zhang and Chaisattapagon [31]. Still other researchers used texture-based techniques. Haralick et al. used gray-level co-occurrence features to analyze remotely sensed images [33]. Tang et al. developed a texture-based weed classification method using Gabor wavelets and neural networks for real-time selective herbicide application [38].

  3. System Architecture:

    The methodology of the proposed system is shown pictorially in figure 3.1. First, images are captured using a digital camera. These images then go through several pre-processing steps: conversion of the color image to a gray image, conversion to black and white, and binarization, which separates the leaf area from the background.

    Figure 3.1: Block diagram of the proposed leaf recognition system.

    For feature extraction, the feature point extraction method is used, with two schemes: 28 and 60 feature points. These two schemes are compared with each other. After image preprocessing, feature points are extracted from the leaf image by splitting it vertically and horizontally; the method is based on the geometric centre. The 28-point scheme extracts 28 points from the leaf image, and the 60-point scheme extracts 60. These feature points are then taken relative to the geometric centre. The feature points are the input to an artificial neural network, which is used as the classifier; artificial neural networks are widely used to obtain better results. The feature points of the input leaf image are compared with the feature points in the database. If a match is found, the proposed system displays the leaf image of that species together with the name of the input leaf and the recognition time; otherwise it reports that detection has failed.

  4. Database:

    Leaf images from the database are used for training the artificial neural network as well as for testing input leaf images. The formation of the leaf image database clearly depends on the application. The leaf images are captured in a specific manner, against a uniformly colored background; the proposed work uses a white background. The database in the proposed work consists of 250 leaf images. Twenty different leaf species are used, each with ten leaf samples for training and five leaf samples for testing. Table 4.1 below gives the details of the leaf image database.

    Table 4.1: Details about leaf image database

    Sample use for   | No of species | No of samples | Total no of samples
    Training         | 20            | 10            | 200
    Testing (known)  | 5             | 10            | 50
    Total            |               |               | 250

    Figure 3.2: Input leaf image samples from database.

  5. Image Pre-Processing:

    The input color leaf image is converted to grayscale, and the grayscale image then goes through thresholding. After thresholding, a segmented and binarized leaf image is obtained, as shown in figure 5.1.

    Figure 5.1: Segmented and Binarized samples of leaf images.
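    The preprocessing chain above can be sketched in a few lines. The code below is an illustrative stand-in, not the paper's implementation: it assumes the image arrives as nested lists of (R, G, B) tuples, uses the conventional luminance weights for grayscale conversion, and an arbitrary fixed threshold of 128 for a dark leaf on a white background.

```python
# Illustrative sketch of the preprocessing steps, assuming the input image
# is given as nested lists of (R, G, B) tuples. The luminance weights are
# the conventional ones; the threshold value 128 is an arbitrary choice
# for a dark leaf on a white background, not taken from the paper.

def to_grayscale(rgb_image):
    """Convert an RGB image to grayscale using luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Segment the leaf: pixels darker than the threshold become 1 (leaf),
    the bright white background becomes 0."""
    return [[1 if p < threshold else 0 for p in row] for row in gray_image]
```

    In a real system an adaptive threshold (e.g. Otsu's method) would replace the fixed value, since lighting varies between captures.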

  6. Feature Extraction method:

    This section discusses the feature extraction method used in the proposed system, namely feature point extraction. In this method, feature points are extracted from the leaf image based on its geometric centre. The system is tested with two variants: 28-point and 60-point extraction. In the first, the system obtains 28 feature points, 14 from vertical splitting and 14 from horizontal splitting. In the second, it obtains 60 feature points, 30 from vertical splitting and 30 from horizontal splitting. Both variants use two types of splitting (vertical and horizontal) based on the geometric centre. The classifier used is an artificial neural network model suited to the proposed features.

    1. Geometric Center:

      The geometric centre is the point about which the image is perfectly balanced in terms of pixel counts; it can be regarded as the centre point of the image and is a vital parameter in the feature extraction. If a vertical line is drawn through this point, the number of leaf pixels to its right equals the number to its left; similarly, for a horizontal line through it, the number of pixels above equals the number below. Figure 6.1 shows the geometric centre of a leaf image.

      Figure 6.1: Geometric centre of a leaf image.
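      The equal-pixel-count rule above can be sketched as follows. This is a minimal illustration, assuming the binarized leaf arrives as nested lists of 0/1 values; the split index is taken where the cumulative white-pixel count first reaches half the total, which realises the left/right and top/bottom balance described in the text.

```python
# Minimal sketch of the geometric-centre computation described above,
# assuming the binarized leaf is given as nested lists of 0/1 values.

def geometric_center(binary_image):
    """Return (row, col) of the equal-white-pixel-count centre."""
    row_counts = [sum(row) for row in binary_image]
    col_counts = [sum(col) for col in zip(*binary_image)]
    total = sum(row_counts)

    def balance_index(counts):
        # first index where the cumulative count reaches half the total
        running, half = 0, total / 2.0
        for i, c in enumerate(counts):
            running += c
            if running >= half:
                return i
        return len(counts) - 1

    return balance_index(row_counts), balance_index(col_counts)
```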

    2. 28 Feature Point Extraction Method:

      The geometric features are based on two sets of points in the 2-dimensional plane, obtained by vertical and horizontal splitting. Vertical splitting of the leaf image yields fourteen feature points (v1, v2, v3, …, v14) and horizontal splitting yields fourteen feature points (h1, h2, h3, …, h14). These feature points are obtained relative to the central geometric point of the leaf image. The centred leaf image is scanned from left to right, counting the white pixels, and likewise from top to bottom. The image is then divided into two halves with respect to the number of white pixels by a vertical line and a horizontal line, which intersect at a point called the geometric centre. With reference to this point, 28 feature points are extracted: 14 vertical and 14 horizontal feature points per leaf image.

      1. Feature points based on Vertical Splitting:

        Fourteen feature points are obtained by vertical splitting with respect to the central feature point. The leaf image is split into two planes, left and right, by a vertical line. The procedure for finding the vertical feature points is given below:

        1. Algorithm 1

          Input: Segmented and Binarized leaf image.

          Output: Vertical feature points: v1, v2, v3, v4, …, v13, v14.

          The steps are:

          1. Split the leaf image with a vertical line passing through the geometric centre (v0) which divides the leaf image into two halves that is left part and right part.

          2. Find geometric centers v1 and v2 for left and right parts correspondingly.

          3. Split the left and right parts with horizontal lines through v1 and v2 to divide them into four parts: top-left, bottom-left, top-right and bottom-right. From these parts we obtain v3, v4 and v5, v6.

          4. We again split each part of the image through their geometric centers to obtain feature points v7, v8, v9, …, v13, v14.

          5. All fourteen vertical feature points are now obtained.
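          The five steps above amount to three rounds of recursive splitting (2 + 4 + 8 = 14 points). A compact sketch follows, with hypothetical helper names and a plain centroid standing in for the equal-count geometric centre so the example stays self-contained:

```python
# Sketch of Algorithm 1: three rounds of alternating splits collect
# 2 + 4 + 8 = 14 vertical feature points. The leaf is represented as a
# list of (row, col) coordinates of white pixels; `centroid` is a
# hypothetical stand-in for the equal-count geometric centre.

def centroid(pixels):
    """Mean position of a set of white-pixel coordinates."""
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)

def vertical_feature_points(pixels):
    """Split left/right about the centre, then alternate horizontal and
    vertical splits twice more, recording each sub-region's centre."""
    points = []

    def split(pix, axis, depth):
        if depth == 0 or not pix:
            return
        r0, c0 = centroid(pix)
        in_first = (lambda p: p[1] <= c0) if axis == "v" else (lambda p: p[0] <= r0)
        first = [p for p in pix if in_first(p)]
        second = [p for p in pix if not in_first(p)]
        for half in (first, second):
            if half:
                points.append(centroid(half))  # v1, v2, then v3..v6, v7..v14
        next_axis = "h" if axis == "v" else "v"
        split(first, next_axis, depth - 1)
        split(second, next_axis, depth - 1)

    split(pixels, "v", 3)  # three rounds: 2 + 4 + 8 = 14 points
    return points
```

          Starting the same routine with a horizontal first split, or with depth 4, would give the horizontal points and the 60-point variant respectively.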

      2. Feature points based on Horizontal Splitting:

        After the fourteen vertical feature points, fourteen more feature points are obtained by horizontal splitting with respect to the central feature point: the leaf image is split by a horizontal line, and the extracted feature points are again based on the geometric centre. The procedure for finding the horizontal feature points is given below:

        1. Algorithm 2

          Input: Segmented and Binarized leaf image.

          Output: Horizontal feature points: h1, h2, h3, h4, …, h13, h14.

          The steps are:

          1. Split the leaf image with a horizontal line passing through the geometric centre (h0) which divides the leaf image into two halves that is top part and bottom part.

          2. Find geometric centers h1 and h2 for the top and bottom parts correspondingly.

          3. Split the top and bottom parts with vertical lines through h1 and h2 to divide them into four parts: left-top, right-top, left-bottom and right-bottom. From these we obtain h3, h4 and h5, h6.

          4. We again split each part of the leaf image through their geometric centers to obtain feature points h7, h8, h9, …, h13, h14.

          5. All the fourteen horizontal feature points are obtained.

        The following figure 6.2 shows all 28 feature points.

        Figure 6.2: All 28 horizontal and vertical feature points.

    3. 60 Feature Point Extraction Method

The geometric features are based on two sets of points in the 2-dimensional plane. Vertical splitting of the image yields thirty feature points (v1, v2, v3, …, v30) and horizontal splitting yields thirty feature points (h1, h2, h3, …, h30). These feature points are obtained relative to the central geometric point of the image. The leaf image is scanned from left to right, counting the white pixels, and again from top to bottom. The image is then divided into two halves with respect to the number of white pixels by a vertical line and a horizontal line, which intersect at a point called the geometric centre. With reference to this point, 60 feature points are extracted: 30 vertical and 30 horizontal feature points per leaf image.

      1. Feature points based on Vertical Splitting

        Thirty feature points are obtained based on vertical splitting with respect to the central feature point. The procedure for finding vertical feature points is given below:

        a) Algorithm 1

        Input: Segmented and Binarized leaf image.

        Output: Vertical feature points: v1, v2, v3, v4, …, v29, v30.

        The steps are:

        1. Split the leaf image with a vertical line passing through the geometric centre (v0) which divides the leaf image into two halves that is Left part and Right part.

        2. Find geometric centers v1 and v2 for left and right parts correspondingly.

        3. Split the left and right parts with horizontal lines through v1 and v2 to divide them into four parts: top-left, bottom-left, top-right and bottom-right, from which the system obtains v3, v4 and v5, v6.

        4. We again split each part of the image through their geometric centers to obtain feature points v7, v8, v9, …, v13, v14.

        5. Then we split each part once again to obtain all the thirty vertical feature points.

      2. Feature points based on Horizontal Splitting

Thirty feature points are obtained based on horizontal splitting with respect to the central feature point. The procedure for finding horizontal feature points is given below.

Algorithm 2

Input: Segmented and Binarized leaf image.

Output: Horizontal feature points: h1, h2, h3, h4, …, h29, h30.

The steps are:

  1. Split the image with a horizontal line passing through the geometric centre (h0) which divides the image into two halves that is Top part and Bottom part.

  2. Find geometric centers h1 and h2 for the top and bottom parts correspondingly.

  3. Split the top and bottom parts with vertical lines through h1 and h2 to divide them into four parts: left-top, right-top, left-bottom and right-bottom, from which we obtain h3, h4 and h5, h6.

  4. We again split each part of the image through their geometric centers to obtain feature points h7, h8, h9, …, h13, h14.

  5. Then we split each part once again to obtain all thirty horizontal feature points.

The following figure 6.3 shows all 60 feature points.

Figure 6.3: All 60 vertical as well as horizontal feature points extraction.

  7. Classification

    Classification is the final stage of the leaf recognition system. This is the stage where the automated system declares that the input leaf image belongs to a particular category. The classifier used here is a feed-forward back-propagation neural network. To accomplish the classification task, a multi-layer feed-forward artificial neural network was used, with the nonlinear differentiable sigmoid function in all processing units of the output and hidden layers. The neurons in the input layer have a linear activation function, and the number of output units corresponds to the number of distinct classes in the pattern classification. The network is trained to capture the mapping implicit in the set of input-output pattern pairs collected during an experiment, and is simultaneously expected to model the unknown system so that predictions can be made for new or untrained inputs. The resulting output pattern class is approximately an interpolated version of the output pattern classes corresponding to the learning patterns closest to the given test input. Training uses the back-propagation learning rule, based on the principle of gradient descent along the error surface in the negative gradient direction. The following figure shows the feed-forward neural network.

    Figure 7.1: Feed forward neural network

    The network has 28 input neurons for 28-point extraction and 60 input neurons for 60-point extraction. The output layer has one neuron, since a single leaf image is to be recognized at a time. The number of hidden neurons determines the system resources required: the larger the number, the more resources are needed. The number of neurons in the hidden layer was kept at 56 for optimal results.
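    A minimal sketch of the feed-forward back-propagation classifier described above follows, shrunk from the paper's 28-input / 56-hidden configuration purely for illustration. The sigmoid activations in the hidden and output units and the gradient-descent weight updates follow the description in the text, while the class layout, seed and learning rate are assumptions:

```python
import math
import random

# Minimal feed-forward network with one sigmoid hidden layer trained by
# gradient-descent back-propagation. Layer sizes, seed and learning rate
# are illustrative assumptions, not the paper's settings.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class FeedForwardNet:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # hidden and output units both use the sigmoid activation
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.y

    def train_step(self, x, target, lr=0.5):
        """One back-propagation update for a single (input, target) pair."""
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)            # output-layer delta
        for j, h in enumerate(self.h):
            d_hid = d_out * self.w2[j] * h * (1.0 - h)  # hidden-layer delta
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_hid * xi
            self.b1[j] -= lr * d_hid
        self.b2 -= lr * d_out
```

    For the system in the text, the network would be instantiated with 28 (or 60) inputs and 56 hidden neurons, with one training pair per leaf feature vector.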

  8. Experimental Results and Performance Analysis:

    1. Comparison of 28 and 60 feature point extraction with respect to recognition rate

      The following table 8.1 compares 28-point and 60-point feature extraction. The recognition rate obtained with 28-point extraction is better than with 60-point extraction on most datasets, and the overall performance of the 28-point scheme is very good compared with the 60-point scheme.

      Table 8.1: Comparison of 28 and 60 feature point extraction with respect to recognition rate

      Sr. No | Leaf database set | 28-point accuracy (%) | 60-point accuracy (%)
      1      | 50                | 90                    | 86
      2      | 100               | 86                    | 82
      3      | 150               | 72.66                 | 78.66
      4      | 200               | 72.00                 | 65.5

      The following chart shows the graphical representation for comparison of 28 and 60 feature point extraction methods.

      Chart 8.1: Graphical representation of comparison of 28 and 60 feature point extraction with respect to recognition rate

        1. Comparison on the basis of Time

          The comparison is made on the basis of the time required for feature extraction and for training the neural network under both schemes. The time required for 28-point extraction is relatively small, while 60-point extraction takes longer: since the system must find 60 feature points instead of 28, extracting them from the leaf image takes more time. The following table 8.2 shows the time required for feature extraction and for training the neural network.

          Table 8.2: Comparison of 28 and 60 feature point extraction with respect to time

          Sr. No | No of samples | 28-point extraction and training | 60-point extraction and training
          1      | 5 x 10        | 0.65 min                         | 6.27 min
          2      | 10 x 10       | 4.32 min                         | 7.66 min
          3      | 15 x 10       | 6.03 min                         | 11.70 min
          4      | 20 x 10       | 8.36 min                         | 14.77 min

          The following chart 8.2 shows graphically the time required for feature extraction and training under the 28- and 60-point schemes.

          Chart 8.2: Graphical representation of time for feature point extraction and training.

        2. Comparison on the basis of different hidden layer sizes

      Results were also taken for different hidden-layer sizes. The results for three sizes, 28, 56 and 84 hidden neurons, are compared with each other. The recognition rate obtained with 56 hidden neurons is better than that obtained with 28 or 84. The following table shows the results for test leaf images with the different hidden-layer sizes.

      Table 8.3: Result of test leaf image with different Hidden layer

      Chart 8.3: Graphical representation of recognition rate for different hidden layer.

  9. Conclusion

    S N. | Leaf data set | Recognition rate (%), 28 hidden neurons | 56 hidden neurons | 84 hidden neurons
    1    | 20            | 80                                      | 80                | 80
    2    | 30            | 70                                      | 83                | 63
    3    | 40            | 65                                      | 77.50             | 57.50
    4    | 50            | 56                                      | 68                | 56
    5    | 60            | 61.66                                   | 58.33             | 58.33


    The proposed leaf recognition system is implemented for the recognition of leaf images and can be useful to the many people who find it difficult to identify the correct leaf. The system is built from feature point extraction and an artificial neural network: the feature point extraction method is used for feature extraction, and a feed-forward neural network for classification. Two feature point extraction schemes, with 28 and 60 points, were implemented, and the 28-point method provides better results than the 60-point method. The performance of the system was evaluated through three comparisons: the recognition rates obtained with 28-point extraction on different data sets are better than with 60-point extraction; the time required for feature extraction and training is considerably less for 28 points, because extracting 60 points takes more time; and 56 hidden neurons give a better recognition rate than 28 or 84. Hence the 28-point feature extraction method is a very efficient technique for the proposed leaf recognition system.

  10. Future Scope

    In further work, the proposed leaf recognition system can be modified to increase the recognition rate of the 28-point scheme. To increase the recognition rate, morphological features could be added to the existing feature points, along with the use of principal component analysis (PCA). To reduce the dimensionality at the input of the neural network, PCA can be used to orthogonalize the 28 feature points; the purpose of PCA is to represent the information in the original data as linear combinations of certain linearly uncorrelated variables.
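    The PCA step suggested above can be sketched with power iteration on the covariance matrix of the feature vectors. This is a hypothetical pure-Python illustration (a real system would use a linear-algebra library); it returns the leading principal component onto which the 28 feature points could be projected:

```python
# Hypothetical PCA sketch: leading principal component of a set of feature
# vectors via power iteration on the sample covariance matrix.

def pca_top_component(data, iters=200):
    """Return the unit-length leading principal component of the rows of
    `data` (each row one feature vector, e.g. 28 feature points)."""
    n, d = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(d)]
    centred = [[row[i] - means[i] for i in range(d)] for row in data]
    # sample covariance matrix (d x d)
    cov = [[sum(centred[k][i] * centred[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

    Repeating the iteration on a deflated covariance matrix would yield further components, so the 28-dimensional feature vector could be reduced before it reaches the neural network.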

  11. REFERENCES

  [1] Anand H. Kulkarni, Ashwin Patil R. K., Applying Image Processing Technique to Detect Plant Diseases, International Journal of Modern Engineering Research (IJMER), Sep-Oct. 2012, Vol. 2, Issue 5, pp. 3661-3664.
  [2] Meeta Kumar, Mrunali Kamble, Shubhada Pawar, Prajakta Patil, Neha Bonde, Survey on Techniques for Plant Leaf Classification, International Journal of Modern Engineering Research (IJMER), 2012, Vol. 1, Issue 2, pp. 538-544.
  [3] Chomtip Pornpanomchai, Supolgaj Rimdusit, Piyawan Tanasap and Chutpong Chaiyod, Thai Herb Leaf Image Recognition System (THLIRS), Kasetsart J. (Nat. Sci.), 2011, Vol. 45, pp. 551-562.
  [4] Suhail M. Odeh and Manal Khalil, Off-Line Signature Verification and Recognition: Neural Network Approach, IEEE, 2011.
  [5] Rahul Sharma and Manish Shrivastav, An Offline Signature Verification System Using Neural Network Based on Angle Feature and Energy Density, International Journal on Emerging Technologies, 2011, pp. 84-89.
  [6] M. Z. Rashad, B. S. El-Desouky and Manal S. Khawasik, Plants Images Classification Based on Textural Features Using Combined Classifier, International Journal of Computer Science & Information Technology (IJCSIT), August 2011, Vol. 3, No. 4.
  [7] SM Ushaa, M. Madhavilatha, G. Madhusudhan Rao, Modified Neural Network Architecture Based Expert System for Automated Disease Classification and Detection Using PCA Algorithm, International Journal of Engineering Science and Technology (IJEST), September 2011, ISSN 0975-5462, Vol. 3, No. 9.
  [8] Stephen Gang Wu, Forrest Sheng Bao, Eric You Xu, Yu-Xuan Wang, Yi-Fan Chang and Qiao-Liang Xiang, A Leaf Recognition Algorithm for Plant Classification Using Probabilistic Neural Network, 2007 IEEE International Symposium on Signal Processing and Information Technology.
  [9] Du, J. X., Wang, X. F. and Zhang, G. J., Leaf Shape Based Plant Species Recognition, Applied Mathematics and Computation, 2007, Vol. 185, pp. 883-893.
  [10] Ji-Xiang Du, De-Shuang Huang, Xiao-Feng Wang and Xiao Gu, Computer-Aided Plant Species Identification (CAPSI) Based on Leaf Shape Matching Technique, Transactions of the Institute of Measurement and Control, Vol. 28, 2006, pp. 275-284.
  [11] Qingfeng Wu, Changle Zhou and Chaonan Wang, Feature Extraction and Automatic Recognition of Plant Leaf Using Artificial Neural Network, Avances en Ciencias de la Computación, 2006, pp. 5-12.
  [12] P. Pattanasethanon and B. Attachoo, Thai Botanical Herbs and Its Characteristics: Using Artificial Neural Network, African Journal of Agricultural Research, Vol. 7(2), 12 January 2012, pp. 344-351.
  [13] Pillati, M., Viroli, C., Supervised Locally Linear Embedding for Classification: An Application to Gene Expression Data Analysis, Proceedings of the 29th Annual Conference of the German Classification Society, 2005, pp. 15-18.
  [14] M. Ferrer, J. Alonso and C. Travieso, Offline Geometric Parameters for Automatic Signature Verification Using Fixed-Point Arithmetic, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, pp. 993-997.
  [15] X.-F. Wang, J.-X. Du and G.-J. Zhang, Recognition of Leaf Images Based on Shape Features Using a Hypersphere Classifier, in Proceedings of the International Conference on Intelligent Computing 2005, Springer, Ser. LNCS 3644.
  [16] Yan Li, Zheru Chi, David D. Feng, Leaf Vein Extraction Using Independent Component Analysis, 2006 IEEE Conference on Systems, Man, and Cybernetics, Taipei, Taiwan, October 8-11, 2006.
  [17] Kshitij Sisodia and S. Mahesh Anand, Off-Line Handwritten Signature Verification Using Artificial Neural Network Classifier, International Journal of Recent Trends in Engineering, November 2009, Vol. 2, No. 2.
  [18] Burks, T. F., Early Detection of Citrus Diseases Using Machine Vision, presentation at ASAE Conference, Chicago, USA, 2002.
  [19] Kataoka, T., O. Hiroshi and H. Shun-Ichi, Automatic Detecting System of Apple Harvest Season for Robotic Apple Harvesting, presented at the 2001 ASAE Annual International Meeting, Sacramento, California, 2001, Paper 01-3132.
  [20] Tian, L., D. C. Slaughter and R. F. Norris, Machine Vision Identification of Tomato Seedlings for Automated Weed Control, Transactions of ASAE, 2000, Vol. 40(6), pp. 1761-1768.
  [21] Im, C., Nishida, H., Kunii, T. L. and D. Warren, Automated Leaf Shape Description for Variety Testing in Chrysanthemums, in Proceedings of the 6th IEEE International Conference on Image Processing and Its Applications, 1997.
  [22] Woebbecke, D. M., G. E. Meyer, K. Von Bargen and D. A. Mortensen, Shape Features for Identifying Young Weeds Using Image Analysis, Transactions of ASAE, 1995a, Vol. 38(1), pp. 271-281.
  [23] Abdul Kadir, Lukito Edi Nugroho, Adhi Susanto, Paulus Insap Santosa, Leaf Classification Using Shape, Color, and Texture Features, International Journal of Computer Trends and Technology, July-Aug 2011, ISSN 2231-2803, p. 225.
  [24] Guyer, D. E., G. E. Miles, D. L. Gaultney and M. M. Schreiber, Application of Machine Vision to Shape Analysis in Leaf and Plant Identification, Transactions of ASAE, 1993, Vol. 36(1), pp. 163-171.
  [25] Franz, E., M. R. Gebhardt and K. B. Unklesbay, The Use of Local Spectral Properties of Leaves as an Aid for Identifying Weed Seedlings in Digital Images, Transactions of ASAE, 1991, Vol. 34(2), pp. 682-687.
  [26] Ninoyama, S. and I. Shigemori, Quantitative Evaluation of Soybean Plant Shape by Image Analysis, Japan. J. Breed., 1991, Vol. 41, pp. 485-497.
  [27] Thompson, J. F., J. V. Stafford and P. C. H. Miller, Potential for Automatic Weed Detection and Selective Herbicide Application, Crop Protection, 1991, Vol. 10, pp. 254-259.
  [28] Guyer, D. E., G. E. Miles, M. M. Schreiber, O. R. Mitchell and V. C. Vanderbilt, Machine Vision and Image Processing for Plant Identification, Transactions of ASAE, 1986, Vol. 29(6), pp. 1500-1507.
  [29] Slaughter, D. C., Color Vision for Robotic Orange Harvesting, PhD dissertation, University of Florida, Gainesville, 1987.
  [30] Woebbecke, D. M., G. E. Meyer, K. Von Bargen and D. A. Mortensen, Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions, Transactions of ASAE, 1995b, Vol. 38(1), pp. 259-269.
  [31] Zhang, N. and C. Chaisattapagon, Effective Criteria for Weed Identification in Wheat Fields Using Machine Vision, Transactions of ASAE, 1995, Vol. 38(3), pp. 965-974.
  [32] Coggins, J. M., A Framework for Texture Analysis Based on Spatial Filtering, PhD dissertation, Computer Science Department, Michigan State University, East Lansing, Michigan, 1982.
  [33] Haralick, R., K. Shanmugam, I. Dinstein, Textural Features for Image Classification, IEEE Transactions on Systems, Man, and Cybernetics, 1973, Vol. 3, pp. 610-621.
  [34] Ampazis, N., Introduction to Neural Networks, Artificial Neural Networks Laboratory, Greece, 1999, http://www.iit.demokritos.gr/neural/intro (accessed July 14, 2004).
  [35] William K. Pratt, Digital Image Processing, New York: John Wiley & Sons, 1991.
  [36] D. Guillevic and C. Y. Suen, Cursive Script Recognition: A Sentence Level Recognition Scheme, Proceedings of the 4th International Workshop on the Frontiers of Handwriting Recognition, 1994, pp. 216-223.
  [37] S. N. Sivanandam, S. N. Deepa, Principles of Soft Computing, Wiley-India, New Delhi, India, 2008, pp. 71-83.
  [38] L. Tang, L. F. Tian, B. L. Steward and J. F. Reid, Texture-Based Weed Classification Using Gabor Wavelets and Neural Network for Real-Time Selective Herbicide Applications, American Society of Agricultural Engineers, 2003, ISSN 0001-2351, Vol. 46(4), pp. 1247-1254.
