Matches in SemOpenAlex for { <https://semopenalex.org/work/W3136376090> ?p ?o ?g. }
Showing items 1 to 96 of 96, with 100 items per page.
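The listing below can also be retrieved programmatically. Here is a minimal Python sketch, assuming SemOpenAlex exposes a public SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption, not stated on this page); it runs a simplified form of the { ... ?p ?o ?g } pattern above:

```python
# Minimal sketch: fetch all predicate/object pairs for this work from the
# SemOpenAlex SPARQL endpoint. The endpoint URL is an assumption.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3136376090> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print each predicate/object pair, mirroring the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```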
- W3136376090 endingPage "106081" @default.
- W3136376090 startingPage "106081" @default.
- W3136376090 abstract "Knowing the precise location of weeds and having accurate information about their species is a prerequisite for developing an effective site-specific weed management (SSWM) system. Given the effectiveness of deep learning techniques for vision-based tasks such as image classification and object detection, their use for discriminating between weeds and crops is gaining acceptance in the agricultural research community. However, few studies have used deep learning to identify multiple weeds in a single image, and most have not compared the effectiveness of deep learning based image classification and object detection on a common, annotated imagery dataset of early-season weeds under field conditions. This study addresses that gap by evaluating the comparative performance of three pre-trained image classification models for classifying weed species and by assessing the performance of an object detection model for locating and identifying weed species. The image classification models were trained on two commonly used deep learning frameworks, Keras and PyTorch, to assess any performance differential due to the choice of framework. An annotated dataset comprising RGB images of four early-season weeds found in corn and soybean production systems in the Midwestern US, namely cocklebur (Xanthium strumarium), foxtail (Setaria viridis), redroot pigweed (Amaranthus retroflexus), and giant ragweed (Ambrosia trifida), was used in this study. VGG16, ResNet50, and InceptionV3 pre-trained models were used for image classification. The object detection model, based on the You Only Look Once (YOLOv3) architecture, was trained to locate and identify different weed species within an image. The performance of the image classification models was assessed using testing accuracy and F1-score; average precision (AP) and mean average precision (mAP) were used to assess the performance of the object detection model. The best-performing image classification model was VGG16, with an accuracy of 98.90% and an F1-score of 99%. Faster training times and higher accuracies were observed with PyTorch. The detection model located and identified multiple weeds within an image, with AP scores of 43.28%, 26.30%, 89.89%, and 57.80% for cocklebur, foxtail, redroot pigweed, and giant ragweed, respectively, and an overall mAP score of 54.3%. The results suggest that, under field conditions, pre-trained models for image classification and YOLOv3 for object detection are promising for identifying single and multiple weeds, respectively, given that sufficient data are available. Additionally, unlike image classification, the localization capability of object detection is desirable for developing an SSWM system." @default.
- W3136376090 created "2021-03-29" @default.
- W3136376090 creator A5013751231 @default.
- W3136376090 creator A5014166484 @default.
- W3136376090 creator A5025951776 @default.
- W3136376090 creator A5046464635 @default.
- W3136376090 creator A5055555925 @default.
- W3136376090 date "2021-05-01" @default.
- W3136376090 modified "2023-10-18" @default.
- W3136376090 title "Performance of deep learning models for classifying and detecting common weeds in corn and soybean production systems" @default.
- W3136376090 cites W1536680647 @default.
- W3136376090 cites W1973788747 @default.
- W3136376090 cites W2031489346 @default.
- W3136376090 cites W2108598243 @default.
- W3136376090 cites W2143672596 @default.
- W3136376090 cites W2183341477 @default.
- W3136376090 cites W2405097228 @default.
- W3136376090 cites W2768584694 @default.
- W3136376090 cites W2794828875 @default.
- W3136376090 cites W2899951262 @default.
- W3136376090 cites W2908783980 @default.
- W3136376090 cites W2909494862 @default.
- W3136376090 cites W2915011392 @default.
- W3136376090 cites W2915017432 @default.
- W3136376090 cites W2937598226 @default.
- W3136376090 cites W2944014569 @default.
- W3136376090 cites W2945215471 @default.
- W3136376090 cites W2962953743 @default.
- W3136376090 cites W2963144738 @default.
- W3136376090 cites W2963150697 @default.
- W3136376090 cites W2971706909 @default.
- W3136376090 cites W3010345596 @default.
- W3136376090 cites W3010677011 @default.
- W3136376090 cites W3024443975 @default.
- W3136376090 cites W3034580072 @default.
- W3136376090 cites W3039926407 @default.
- W3136376090 cites W3082499117 @default.
- W3136376090 doi "https://doi.org/10.1016/j.compag.2021.106081" @default.
- W3136376090 hasPublicationYear "2021" @default.
- W3136376090 type Work @default.
- W3136376090 sameAs 3136376090 @default.
- W3136376090 citedByCount "53" @default.
- W3136376090 countsByYear W31363760902021 @default.
- W3136376090 countsByYear W31363760902022 @default.
- W3136376090 countsByYear W31363760902023 @default.
- W3136376090 crossrefType "journal-article" @default.
- W3136376090 hasAuthorship W3136376090A5013751231 @default.
- W3136376090 hasAuthorship W3136376090A5014166484 @default.
- W3136376090 hasAuthorship W3136376090A5025951776 @default.
- W3136376090 hasAuthorship W3136376090A5046464635 @default.
- W3136376090 hasAuthorship W3136376090A5055555925 @default.
- W3136376090 hasBestOaLocation W31363760901 @default.
- W3136376090 hasConcept C108583219 @default.
- W3136376090 hasConcept C115961682 @default.
- W3136376090 hasConcept C119857082 @default.
- W3136376090 hasConcept C153180895 @default.
- W3136376090 hasConcept C154945302 @default.
- W3136376090 hasConcept C166085705 @default.
- W3136376090 hasConcept C2775891814 @default.
- W3136376090 hasConcept C2776151529 @default.
- W3136376090 hasConcept C41008148 @default.
- W3136376090 hasConcept C6557445 @default.
- W3136376090 hasConcept C75294576 @default.
- W3136376090 hasConcept C86803240 @default.
- W3136376090 hasConceptScore W3136376090C108583219 @default.
- W3136376090 hasConceptScore W3136376090C115961682 @default.
- W3136376090 hasConceptScore W3136376090C119857082 @default.
- W3136376090 hasConceptScore W3136376090C153180895 @default.
- W3136376090 hasConceptScore W3136376090C154945302 @default.
- W3136376090 hasConceptScore W3136376090C166085705 @default.
- W3136376090 hasConceptScore W3136376090C2775891814 @default.
- W3136376090 hasConceptScore W3136376090C2776151529 @default.
- W3136376090 hasConceptScore W3136376090C41008148 @default.
- W3136376090 hasConceptScore W3136376090C6557445 @default.
- W3136376090 hasConceptScore W3136376090C75294576 @default.
- W3136376090 hasConceptScore W3136376090C86803240 @default.
- W3136376090 hasLocation W31363760901 @default.
- W3136376090 hasOpenAccess W3136376090 @default.
- W3136376090 hasPrimaryLocation W31363760901 @default.
- W3136376090 hasRelatedWork W1964497565 @default.
- W3136376090 hasRelatedWork W2366419750 @default.
- W3136376090 hasRelatedWork W2390687941 @default.
- W3136376090 hasRelatedWork W2394364083 @default.
- W3136376090 hasRelatedWork W2800068602 @default.
- W3136376090 hasRelatedWork W2970686063 @default.
- W3136376090 hasRelatedWork W3034745255 @default.
- W3136376090 hasRelatedWork W3210378990 @default.
- W3136376090 hasRelatedWork W4200328050 @default.
- W3136376090 hasRelatedWork W4254103348 @default.
- W3136376090 hasVolume "184" @default.
- W3136376090 isParatext "false" @default.
- W3136376090 isRetracted "false" @default.
- W3136376090 magId "3136376090" @default.
- W3136376090 workType "article" @default.
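The abstract above describes a transfer-learning workflow: fine-tuning ImageNet pre-trained backbones (VGG16, ResNet50, InceptionV3) for four-class weed classification. Below is a minimal PyTorch/torchvision sketch of that idea, assuming an ImageFolder-style dataset; the data path, hyperparameters, and training loop are illustrative and not the authors' actual pipeline:

```python
# Minimal transfer-learning sketch: fine-tune an ImageNet pre-trained VGG16
# for four weed classes. Paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # cocklebur, foxtail, redroot pigweed, giant ragweed

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG16 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# "data/train" is a hypothetical path with one subfolder per species.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load the pre-trained backbone and replace only the final classifier layer,
# so ImageNet features are reused and just the head is adapted to 4 classes.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same pattern applies to ResNet50 or InceptionV3 by swapping the backbone and its final fully connected layer; the YOLOv3 detection side of the paper requires bounding-box annotations and a separate detection framework, so it is not sketched here.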