Matches in SemOpenAlex for { <https://semopenalex.org/work/W4308158131> ?p ?o ?g. }
- W4308158131 endingPage "108739" @default.
- W4308158131 startingPage "108739" @default.
- W4308158131 abstract "Visible and near-infrared imaging spectroscopy is an efficient method for non-destructive estimation of crop leaf area index (LAI), in which effective feature extraction is a key factor affecting model accuracy. Vegetation indices such as the normalized difference vegetation index (NDVI) are typical parameters extracted from spectral images for LAI estimation. However, they often face challenges of value saturation and decreasing sensitivity due to changes in canopy coverage and complex environmental influences. Therefore, this study proposes a method to explore deep features based on a deep learning model (ResNet50) and vegetation index (VI) maps, which contributes to improving the accuracy of maize LAI estimation. In two years of experiments (2020 and 2021), a multi-spectral imaging sensor carried by an unmanned aerial vehicle (UAV) was used to collect remote sensing images of the maize canopy across multiple growth stages under different nitrogen fertilizer treatments, and a total of 792 LAI values were collected. For feature extraction, 10 VIs were derived from the spectral images of the maize canopy, and comprehensive features (texture and deep features) were extracted from the VI maps. For model construction, partial least squares regression (PLS) models were built on the three feature types. The results showed that the normalized difference red edge (NDRE) VI was the best indicator for monitoring maize LAI among all the VIs investigated. Meanwhile, the multivariate linear LAI model constructed from texture features extracted from NDRE images had better accuracy. Deep features of NDRE images were further extracted by the ResNet50 model, and PLS-based maize LAI estimation models were then constructed with VI, texture features, and deep features from the two-year data, respectively. The results showed that, compared with VI and texture features, the LAI estimation model based on deep features achieved the best accuracy on the two-year data (R2 = 0.827, greater than 0.781 and 0.780; RMSEP = 0.405, less than 0.455 and 0.455; MAE = 0.320, less than 0.366 and 0.363). In addition, the LAI estimation model based on deep features was successfully evaluated with 2020 and 2021 validation data (2020: R2 = 0.872, RMSEP = 0.379, MAE = 0.294; 2021: R2 = 0.733, RMSEP = 0.433, MAE = 0.348). The experimental results demonstrate the effectiveness of the proposed deep feature extraction method based on a deep learning model and VI images for LAI estimation, which provides a feasible approach for monitoring crop growth information from a UAV platform." @default.
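As context for the abstract above: the normalized difference red edge (NDRE) index it identifies as the best LAI indicator is a per-pixel normalized ratio of the NIR and red-edge bands. A minimal sketch (band arrays and reflectance values are illustrative, not from the paper's dataset):

```python
import numpy as np

def ndre(nir: np.ndarray, red_edge: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized difference red edge: (NIR - RE) / (NIR + RE).

    `eps` guards against division by zero on dark pixels.
    """
    nir = nir.astype(np.float64)
    red_edge = red_edge.astype(np.float64)
    return (nir - red_edge) / (nir + red_edge + eps)

# Example: a 2x2 patch of hypothetical canopy reflectance values
nir_patch = np.array([[0.60, 0.55], [0.58, 0.62]])
re_patch = np.array([[0.30, 0.28], [0.31, 0.29]])
print(ndre(nir_patch, re_patch))
```

The resulting NDRE map is the single-channel image from which the paper's texture and ResNet50 deep features are then extracted, before fitting the PLS regression models.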
- W4308158131 created "2022-11-08" @default.
- W4308158131 creator A5005009205 @default.
- W4308158131 creator A5006333473 @default.
- W4308158131 creator A5010465278 @default.
- W4308158131 creator A5020769112 @default.
- W4308158131 creator A5023363049 @default.
- W4308158131 creator A5031131275 @default.
- W4308158131 creator A5034576990 @default.
- W4308158131 creator A5067512214 @default.
- W4308158131 creator A5087266467 @default.
- W4308158131 date "2022-12-01" @default.
- W4308158131 modified "2023-10-04" @default.
- W4308158131 title "Estimating maize LAI by exploring deep features of vegetation index map from UAV multispectral images" @default.
- W4308158131 cites W1998943389 @default.
- W4308158131 cites W2023720029 @default.
- W4308158131 cites W2079842406 @default.
- W4308158131 cites W2086347597 @default.
- W4308158131 cites W2090790364 @default.
- W4308158131 cites W2116635928 @default.
- W4308158131 cites W2145197939 @default.
- W4308158131 cites W2194775991 @default.
- W4308158131 cites W2587709058 @default.
- W4308158131 cites W2646675373 @default.
- W4308158131 cites W2891337622 @default.
- W4308158131 cites W2903772126 @default.
- W4308158131 cites W2920653747 @default.
- W4308158131 cites W2943316090 @default.
- W4308158131 cites W2952266823 @default.
- W4308158131 cites W2967868553 @default.
- W4308158131 cites W2988241847 @default.
- W4308158131 cites W3007765580 @default.
- W4308158131 cites W3015562698 @default.
- W4308158131 cites W3035997048 @default.
- W4308158131 cites W3036085849 @default.
- W4308158131 cites W3040221110 @default.
- W4308158131 cites W3109140836 @default.
- W4308158131 cites W3185180736 @default.
- W4308158131 cites W3205772352 @default.
- W4308158131 cites W4200273939 @default.
- W4308158131 cites W4200406763 @default.
- W4308158131 cites W4214611056 @default.
- W4308158131 cites W4220922157 @default.
- W4308158131 cites W4221038349 @default.
- W4308158131 doi "https://doi.org/10.1016/j.fcr.2022.108739" @default.
- W4308158131 hasPublicationYear "2022" @default.
- W4308158131 type Work @default.
- W4308158131 citedByCount "7" @default.
- W4308158131 countsByYear W43081581312022 @default.
- W4308158131 countsByYear W43081581312023 @default.
- W4308158131 crossrefType "journal-article" @default.
- W4308158131 hasAuthorship W4308158131A5005009205 @default.
- W4308158131 hasAuthorship W4308158131A5006333473 @default.
- W4308158131 hasAuthorship W4308158131A5010465278 @default.
- W4308158131 hasAuthorship W4308158131A5020769112 @default.
- W4308158131 hasAuthorship W4308158131A5023363049 @default.
- W4308158131 hasAuthorship W4308158131A5031131275 @default.
- W4308158131 hasAuthorship W4308158131A5034576990 @default.
- W4308158131 hasAuthorship W4308158131A5067512214 @default.
- W4308158131 hasAuthorship W4308158131A5087266467 @default.
- W4308158131 hasConcept C101000010 @default.
- W4308158131 hasConcept C105795698 @default.
- W4308158131 hasConcept C121332964 @default.
- W4308158131 hasConcept C142724271 @default.
- W4308158131 hasConcept C1549246 @default.
- W4308158131 hasConcept C154945302 @default.
- W4308158131 hasConcept C166957645 @default.
- W4308158131 hasConcept C173163844 @default.
- W4308158131 hasConcept C183852935 @default.
- W4308158131 hasConcept C205649164 @default.
- W4308158131 hasConcept C22354355 @default.
- W4308158131 hasConcept C25989453 @default.
- W4308158131 hasConcept C2776133958 @default.
- W4308158131 hasConcept C33390570 @default.
- W4308158131 hasConcept C33923547 @default.
- W4308158131 hasConcept C39432304 @default.
- W4308158131 hasConcept C41008148 @default.
- W4308158131 hasConcept C62520636 @default.
- W4308158131 hasConcept C62649853 @default.
- W4308158131 hasConcept C6557445 @default.
- W4308158131 hasConcept C71924100 @default.
- W4308158131 hasConcept C86803240 @default.
- W4308158131 hasConceptScore W4308158131C101000010 @default.
- W4308158131 hasConceptScore W4308158131C105795698 @default.
- W4308158131 hasConceptScore W4308158131C121332964 @default.
- W4308158131 hasConceptScore W4308158131C142724271 @default.
- W4308158131 hasConceptScore W4308158131C1549246 @default.
- W4308158131 hasConceptScore W4308158131C154945302 @default.
- W4308158131 hasConceptScore W4308158131C166957645 @default.
- W4308158131 hasConceptScore W4308158131C173163844 @default.
- W4308158131 hasConceptScore W4308158131C183852935 @default.
- W4308158131 hasConceptScore W4308158131C205649164 @default.
- W4308158131 hasConceptScore W4308158131C22354355 @default.
- W4308158131 hasConceptScore W4308158131C25989453 @default.
- W4308158131 hasConceptScore W4308158131C2776133958 @default.
- W4308158131 hasConceptScore W4308158131C33390570 @default.
- W4308158131 hasConceptScore W4308158131C33923547 @default.
- W4308158131 hasConceptScore W4308158131C39432304 @default.