Matches in SemOpenAlex for { <https://semopenalex.org/work/W4311800904> ?p ?o ?g. }
Showing items 1 to 63 of 63, with 100 items per page.
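The listing below can be reproduced programmatically. The following is a minimal sketch, assuming SemOpenAlex exposes a public SPARQL endpoint at `https://semopenalex.org/sparql` (the endpoint URL and the exact quad-pattern syntax accepted by the service are assumptions; the query here rewrites the `?p ?o ?g` pattern from the header as a standard SPARQL 1.1 `GRAPH` query):

```python
import json
import urllib.parse
import urllib.request

# Assumed SemOpenAlex SPARQL endpoint (not stated in this listing).
ENDPOINT = "https://semopenalex.org/sparql"
WORK_IRI = "https://semopenalex.org/work/W4311800904"


def build_query(work_iri: str) -> str:
    """Build a SPARQL query equivalent to the { <work> ?p ?o ?g. } header."""
    return "SELECT ?p ?o ?g WHERE { GRAPH ?g { <%s> ?p ?o . } }" % work_iri


def fetch_triples(work_iri: str) -> list:
    """POST the query and return the SPARQL-JSON result bindings."""
    data = urllib.parse.urlencode({"query": build_query(work_iri)}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]


if __name__ == "__main__":
    # Each binding corresponds to one row of the listing below.
    for b in fetch_triples(WORK_IRI):
        print(b["p"]["value"], b["o"]["value"])
```

The network call is kept under the `__main__` guard so the query builder can be used independently of the endpoint.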
- W4311800904 endingPage "3681" @default.
- W4311800904 startingPage "3681" @default.
- W4311800904 abstract "Introduction. Delineation of retinotopic map boundaries in human visual cortex is a time-consuming task. Automated methods based on anatomy (cortical folding pattern; Benson et al., 2014; DOI:10.1016/j.cub.2012.09.014) or a combination of anatomy and retinotopic mapping measurements (Benson & Winawer, 2018; DOI:10.7554/eLife.40224) exist, but human experts are more accurate than these methods (Benson et al., 2021; DOI:10.1101/2020.12.30.424856). Convolutional Neural Networks (CNNs) are powerful tools for image processing, and recent work has shown they can predict polar angle and eccentricity maps in individual subjects based on anatomy (Ribeiro et al., 2021; DOI:10.1016/j.neuroimage.2021.118624). We hypothesize that a CNN could predict V1, V2, and V3 boundaries in individual subjects with greater accuracy than existing methods. Methods. We used the expert-drawn V1-V3 boundaries from Benson et al. (2021) of the subjects in the Human Connectome Project 7 Tesla Retinotopy Dataset (Benson et al., 2018; DOI:10.1167/18.13.23) as training (N=135) and test data (N=32). We constructed a U-Net CNN with a ResNet-18 backbone and trained it with either anatomical (curvature, thickness, surface area, and sulcal depth) or functional (retinotopic) maps as input. Results. CNN predictions outperformed other methods. The median Dice coefficients between predicted and expert-drawn labels from the test dataset for the CNNs trained using anatomical and functional data were 0.77 and 0.90, respectively. In comparison, coefficients for existing methods based on anatomical or anatomical plus functional data were 0.70 and 0.72, respectively. These results demonstrate that even with a small training dataset, CNNs excel at accurately labeling visual areas on human brains in an automated fashion. This method can facilitate vision science neuroimaging experiments by making an otherwise difficult and subjective process fast, precise, and reliable." @default.
- W4311800904 created "2022-12-28" @default.
- W4311800904 creator A5003436423 @default.
- W4311800904 creator A5007748507 @default.
- W4311800904 creator A5050219758 @default.
- W4311800904 creator A5068104181 @default.
- W4311800904 date "2022-12-05" @default.
- W4311800904 modified "2023-10-14" @default.
- W4311800904 title "Accurate and automated delineation of V1-V3 boundaries by a CNN" @default.
- W4311800904 doi "https://doi.org/10.1167/jov.22.14.3681" @default.
- W4311800904 hasPublicationYear "2022" @default.
- W4311800904 type Work @default.
- W4311800904 citedByCount "0" @default.
- W4311800904 crossrefType "journal-article" @default.
- W4311800904 hasAuthorship W4311800904A5003436423 @default.
- W4311800904 hasAuthorship W4311800904A5007748507 @default.
- W4311800904 hasAuthorship W4311800904A5050219758 @default.
- W4311800904 hasAuthorship W4311800904A5068104181 @default.
- W4311800904 hasBestOaLocation W43118009041 @default.
- W4311800904 hasConcept C153180895 @default.
- W4311800904 hasConcept C154945302 @default.
- W4311800904 hasConcept C169760540 @default.
- W4311800904 hasConcept C205649164 @default.
- W4311800904 hasConcept C2779345533 @default.
- W4311800904 hasConcept C2779528209 @default.
- W4311800904 hasConcept C3018011982 @default.
- W4311800904 hasConcept C41008148 @default.
- W4311800904 hasConcept C58640448 @default.
- W4311800904 hasConcept C81363708 @default.
- W4311800904 hasConcept C86803240 @default.
- W4311800904 hasConcept C97820695 @default.
- W4311800904 hasConceptScore W4311800904C153180895 @default.
- W4311800904 hasConceptScore W4311800904C154945302 @default.
- W4311800904 hasConceptScore W4311800904C169760540 @default.
- W4311800904 hasConceptScore W4311800904C205649164 @default.
- W4311800904 hasConceptScore W4311800904C2779345533 @default.
- W4311800904 hasConceptScore W4311800904C2779528209 @default.
- W4311800904 hasConceptScore W4311800904C3018011982 @default.
- W4311800904 hasConceptScore W4311800904C41008148 @default.
- W4311800904 hasConceptScore W4311800904C58640448 @default.
- W4311800904 hasConceptScore W4311800904C81363708 @default.
- W4311800904 hasConceptScore W4311800904C86803240 @default.
- W4311800904 hasConceptScore W4311800904C97820695 @default.
- W4311800904 hasIssue "14" @default.
- W4311800904 hasLocation W43118009041 @default.
- W4311800904 hasOpenAccess W4311800904 @default.
- W4311800904 hasPrimaryLocation W43118009041 @default.
- W4311800904 hasRelatedWork W2767651786 @default.
- W4311800904 hasRelatedWork W2799049669 @default.
- W4311800904 hasRelatedWork W2807558808 @default.
- W4311800904 hasRelatedWork W2907880177 @default.
- W4311800904 hasRelatedWork W2912288872 @default.
- W4311800904 hasRelatedWork W3103167480 @default.
- W4311800904 hasRelatedWork W3119127442 @default.
- W4311800904 hasRelatedWork W4298127894 @default.
- W4311800904 hasRelatedWork W4306719982 @default.
- W4311800904 hasRelatedWork W4385827921 @default.
- W4311800904 hasVolume "22" @default.
- W4311800904 isParatext "false" @default.
- W4311800904 isRetracted "false" @default.
- W4311800904 workType "article" @default.
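The abstract above reports median Dice coefficients between predicted and expert-drawn V1-V3 labels. As an illustrative sketch (not the authors' code), the metric can be computed per visual area from two label maps; here the labels are assumed to be integer values over cortical vertices, with 0 = unlabeled and 1..3 = V1..V3:

```python
def dice_coefficient(pred, truth, label):
    """Dice = 2|A ∩ B| / (|A| + |B|) for one visual-area label.

    pred, truth: sequences of integer labels, one per cortical vertex.
    label: the visual area to score (e.g. 1 for V1).
    """
    a = {i for i, v in enumerate(pred) if v == label}
    b = {i for i, v in enumerate(truth) if v == label}
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))


# Toy example: 6 vertices, areas V1 (label 1) and V2 (label 2).
pred = [1, 1, 2, 2, 0, 0]
truth = [1, 1, 1, 2, 2, 0]
print(dice_coefficient(pred, truth, 1))  # 2*2/(2+3) = 0.8
```

A Dice of 1.0 means the predicted and expert labels cover exactly the same vertices, which is why the functional-input CNN's 0.90 versus 0.70-0.72 for prior methods is a substantial improvement.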