Matches in SemOpenAlex for { <https://semopenalex.org/work/W3169387543> ?p ?o ?g. }
- W3169387543 endingPage "106226" @default.
- W3169387543 startingPage "106226" @default.
- W3169387543 abstract "Close-range spectral imaging of agricultural plants is widely performed to support digital plant phenotyping, a task in which physicochemical changes in plants are monitored in a non-destructive way. A major step before analyzing spectral images of plants is to distinguish the plant from the background. Usually, this is an easy task and can be performed using mathematical operations on combinations of selected spectral bands, such as estimating the normalized difference vegetation index (NDVI). However, when the background contains objects with spectral properties similar to those of the plant, segmentation based on thresholding NDVI images can suffer. Another common approach is to train pixel classifiers on spectra extracted from selected locations in the spectral image, but such an approach does not take spatial information about the plant structure into account. From a technical perspective, plant spectral imaging for digital phenotyping applications usually involves imaging several plants together for comparative purposes; hence, the imaged scene is relatively large in terms of memory. To address both the plant segmentation challenge and the memory challenge, this study proposes a novel approach that combines chemometrics with advanced deep learning (DL) based semantic segmentation. The approach has four key steps. In the first step, the spectral image is pre-processed to reduce the illumination effects present in close-range spectral images of plants, which result from the interaction of light with complex plant geometry. Different chemometric pre-processing methods were explored to find possible improvements in the segmentation performance of the DL model. The second step was to perform a principal components analysis (PCA) to reduce the dimensionality of the images, drastically reducing their size so that they can be handled within the available computer memory during DL model training. In the third step, small random patches (128 × 128) were subsampled from the tall and wide image matrices to generate the training and validation sets for the DL models. In the last step, a U-net based deep semantic segmentation model was trained and validated on the subsampled spectral images. The results showed that the proposed approach allowed efficient handling and training of the DL segmentation model. The intersection over union (IoU) score for segmentation was 0.96 on the independent test set image. Segmentation based on data pre-processed with variable sorting for normalization (VSN) and standard normal variate (SNV) achieved the highest IoU scores. The combination of chemometrics and DL enabled efficient segmentation of tall and wide spectral images that would otherwise have caused out-of-memory errors. The developed method can facilitate digital phenotyping tasks where close-range spectral imaging is used to estimate the physicochemical properties of plants." @default.
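The NDVI-threshold baseline the abstract contrasts against can be illustrated with a minimal sketch. This is not the paper's code: it assumes the hyperspectral cube is a NumPy array of shape (rows, cols, bands), and the red/NIR band indices and the 0.4 threshold are hypothetical placeholders that depend on the sensor.

```python
import numpy as np

def ndvi_segment(cube: np.ndarray, red_band: int, nir_band: int,
                 threshold: float = 0.4) -> np.ndarray:
    """Boolean plant mask from an NDVI threshold (illustrative baseline)."""
    red = cube[:, :, red_band].astype(np.float64)
    nir = cube[:, :, nir_band].astype(np.float64)
    ndvi = (nir - red) / (nir + red + 1e-12)  # small epsilon avoids 0/0
    return ndvi > threshold

# Hypothetical usage; band indices depend on the camera's band layout:
# mask = ndvi_segment(cube, red_band=90, nir_band=150)
```

As the abstract notes, this per-pixel index fails when background objects reflect similarly to plant tissue, which motivates the learned, spatially aware approach.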
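Steps one and two (chemometric pre-processing, then PCA compression) could look roughly like the sketch below. SNV is one of the pre-processing methods the abstract names; the unfold/refold strategy and the choice of 5 components are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: centre and scale each spectrum (one row each)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / (std + 1e-12)

def compress_cube(cube: np.ndarray, n_components: int = 5) -> np.ndarray:
    """Unfold cube to (pixels, bands), apply SNV then PCA, refold to an image.

    Replacing hundreds of bands with a few PCA scores is what shrinks the
    tall-and-wide image enough to fit in memory during DL training.
    """
    rows, cols, bands = cube.shape
    flat = snv(cube.reshape(-1, bands))
    scores = PCA(n_components=n_components).fit_transform(flat)
    return scores.reshape(rows, cols, n_components)
```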
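Step three, subsampling random 128 × 128 patches from the large score image and its label mask, might be implemented as follows. The function name and the paired image/mask interface are assumptions; only the patch size comes from the abstract.

```python
import numpy as np

def sample_patches(image: np.ndarray, mask: np.ndarray, n_patches: int,
                   size: int = 128, seed: int = 0):
    """Stack n_patches random, co-located crops from an image and its mask."""
    rng = np.random.default_rng(seed)
    rows, cols = image.shape[:2]
    xs, ys = [], []
    for _ in range(n_patches):
        r = int(rng.integers(0, rows - size + 1))  # top-left row
        c = int(rng.integers(0, cols - size + 1))  # top-left column
        xs.append(image[r:r + size, c:c + size])
        ys.append(mask[r:r + size, c:c + size])
    return np.stack(xs), np.stack(ys)
```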
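For step four, a much-reduced U-net sketch in Keras, with a NumPy helper for the IoU evaluation. The depth, filter counts, and 5-channel input (matching the assumed PCA scores) are illustrative guesses, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size: int = 128, channels: int = 5) -> tf.keras.Model:
    """Two-level encoder-decoder with skip connections, in the U-net spirit."""
    inputs = tf.keras.Input((size, size, channels))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)                               # 128 -> 64
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)                               # 64 -> 32
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])    # skip link
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])   # skip link
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)          # plant prob.
    model = tf.keras.Model(inputs, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union for boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0
```

The reported result (IoU of 0.96 on the independent test image) would correspond to evaluating `iou` on the full reassembled prediction, not on individual training patches.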
- W3169387543 created "2021-06-22" @default.
- W3169387543 creator A5012411281 @default.
- W3169387543 creator A5036880268 @default.
- W3169387543 creator A5051416895 @default.
- W3169387543 creator A5058459201 @default.
- W3169387543 creator A5074289176 @default.
- W3169387543 creator A5088154532 @default.
- W3169387543 creator A5090561471 @default.
- W3169387543 date "2021-07-01" @default.
- W3169387543 modified "2023-09-26" @default.
- W3169387543 title "Complementary chemometrics and deep learning for semantic segmentation of tall and wide visible and near-infrared spectral images of plants" @default.
- W3169387543 cites W157577423 @default.
- W3169387543 cites W1901129140 @default.
- W3169387543 cites W2016090370 @default.
- W3169387543 cites W2048976066 @default.
- W3169387543 cites W2051006834 @default.
- W3169387543 cites W2066612219 @default.
- W3169387543 cites W2088487511 @default.
- W3169387543 cites W2109606373 @default.
- W3169387543 cites W2288553003 @default.
- W3169387543 cites W2412782625 @default.
- W3169387543 cites W2611517298 @default.
- W3169387543 cites W2763148304 @default.
- W3169387543 cites W2766043371 @default.
- W3169387543 cites W2766610839 @default.
- W3169387543 cites W2793665374 @default.
- W3169387543 cites W2804860796 @default.
- W3169387543 cites W2884506556 @default.
- W3169387543 cites W2903707524 @default.
- W3169387543 cites W2910987047 @default.
- W3169387543 cites W2920018594 @default.
- W3169387543 cites W2925510773 @default.
- W3169387543 cites W2946706963 @default.
- W3169387543 cites W2954236833 @default.
- W3169387543 cites W2954326153 @default.
- W3169387543 cites W2962770389 @default.
- W3169387543 cites W2965346923 @default.
- W3169387543 cites W2969154550 @default.
- W3169387543 cites W2969944843 @default.
- W3169387543 cites W2982589334 @default.
- W3169387543 cites W2991616716 @default.
- W3169387543 cites W2992632189 @default.
- W3169387543 cites W2993860801 @default.
- W3169387543 cites W2996230344 @default.
- W3169387543 cites W3002058427 @default.
- W3169387543 cites W3013448249 @default.
- W3169387543 cites W3013927267 @default.
- W3169387543 cites W3015087400 @default.
- W3169387543 cites W3047447152 @default.
- W3169387543 cites W3085020035 @default.
- W3169387543 cites W3095491581 @default.
- W3169387543 doi "https://doi.org/10.1016/j.compag.2021.106226" @default.
- W3169387543 hasPublicationYear "2021" @default.
- W3169387543 type Work @default.
- W3169387543 sameAs 3169387543 @default.
- W3169387543 citedByCount "13" @default.
- W3169387543 countsByYear W31693875432021 @default.
- W3169387543 countsByYear W31693875432022 @default.
- W3169387543 countsByYear W31693875432023 @default.
- W3169387543 crossrefType "journal-article" @default.
- W3169387543 hasAuthorship W3169387543A5012411281 @default.
- W3169387543 hasAuthorship W3169387543A5036880268 @default.
- W3169387543 hasAuthorship W3169387543A5051416895 @default.
- W3169387543 hasAuthorship W3169387543A5058459201 @default.
- W3169387543 hasAuthorship W3169387543A5074289176 @default.
- W3169387543 hasAuthorship W3169387543A5088154532 @default.
- W3169387543 hasAuthorship W3169387543A5090561471 @default.
- W3169387543 hasBestOaLocation W31693875431 @default.
- W3169387543 hasConcept C114700698 @default.
- W3169387543 hasConcept C119857082 @default.
- W3169387543 hasConcept C12713177 @default.
- W3169387543 hasConcept C127413603 @default.
- W3169387543 hasConcept C146978453 @default.
- W3169387543 hasConcept C151304367 @default.
- W3169387543 hasConcept C153180895 @default.
- W3169387543 hasConcept C1549246 @default.
- W3169387543 hasConcept C154945302 @default.
- W3169387543 hasConcept C160633673 @default.
- W3169387543 hasConcept C173163844 @default.
- W3169387543 hasConcept C18903297 @default.
- W3169387543 hasConcept C204323151 @default.
- W3169387543 hasConcept C205649164 @default.
- W3169387543 hasConcept C25989453 @default.
- W3169387543 hasConcept C31972630 @default.
- W3169387543 hasConcept C41008148 @default.
- W3169387543 hasConcept C62649853 @default.
- W3169387543 hasConcept C86803240 @default.
- W3169387543 hasConcept C89600930 @default.
- W3169387543 hasConceptScore W3169387543C114700698 @default.
- W3169387543 hasConceptScore W3169387543C119857082 @default.
- W3169387543 hasConceptScore W3169387543C12713177 @default.
- W3169387543 hasConceptScore W3169387543C127413603 @default.
- W3169387543 hasConceptScore W3169387543C146978453 @default.
- W3169387543 hasConceptScore W3169387543C151304367 @default.
- W3169387543 hasConceptScore W3169387543C153180895 @default.
- W3169387543 hasConceptScore W3169387543C1549246 @default.
- W3169387543 hasConceptScore W3169387543C154945302 @default.