Matches in SemOpenAlex for { <https://semopenalex.org/work/W2926670599> ?p ?o ?g. }
Showing items 1 to 59 of 59, with 100 items per page.
- W2926670599 abstract "Current automated fetal ultrasound (US) analysis methods employ local descriptors and machine learning frameworks to identify salient image regions. This 'bottom-up' approach has limitations, as structures identified by local descriptors are not necessarily anatomically salient. In contrast, the human visual system employs a 'top-down' approach to image analysis guided primarily by image context and prior knowledge. This thesis attempts to bridge the gap between top-down and bottom-up approaches to US image analysis. We conduct eye tracking experiments to determine which local descriptors and global constraints guide the visual attention of human observers interpreting fetal US images. We then implement machine learning frameworks which mimic observers' visual search strategies for anatomical landmark localisation, standardised image plane selection, and video classification. We first developed a framework for landmark localisation in 2-D fetal abdominal US images. Informed by the eye movements of observers searching for anatomical landmarks in images, we derived a pictorial structures model which achieved mean detection accuracies of 87.2% and 83.2% for the stomach bubble and umbilical vein. We extended this framework to automate standardised imaging plane detection in 3-D fetal abdominal US volumes, achieving a mean standardised plane detection accuracy of 92.5%. We then implemented a bag-of-visual-words model for 2-D+t fetal US video clip classification. We recorded the eye movements of observers tasked with classifying videos, and trained a feed-forward neural network directly on eye tracking data to predict visually salient regions in unseen videos. This perception-inspired spatiotemporal interest point operator was used within a framework for the classification of fetal US video clips, achieving 80.0% mean accuracy. This work constitutes the first demonstration that high-level constraints and visual saliency models obtained through eye tracking experiments can improve the accuracy of machine learning frameworks for US image analysis." @default.
- W2926670599 created "2019-04-11" @default.
- W2926670599 creator A5065592025 @default.
- W2926670599 date "2017-01-01" @default.
- W2926670599 modified "2023-09-28" @default.
- W2926670599 title "Eye tracking to aid fetal ultrasound image analysis" @default.
- W2926670599 hasPublicationYear "2017" @default.
- W2926670599 type Work @default.
- W2926670599 sameAs 2926670599 @default.
- W2926670599 citedByCount "0" @default.
- W2926670599 crossrefType "dissertation" @default.
- W2926670599 hasAuthorship W2926670599A5065592025 @default.
- W2926670599 hasConcept C153180895 @default.
- W2926670599 hasConcept C154945302 @default.
- W2926670599 hasConcept C166957645 @default.
- W2926670599 hasConcept C205649164 @default.
- W2926670599 hasConcept C2779343474 @default.
- W2926670599 hasConcept C2780297707 @default.
- W2926670599 hasConcept C2780719617 @default.
- W2926670599 hasConcept C31972630 @default.
- W2926670599 hasConcept C41008148 @default.
- W2926670599 hasConcept C56461940 @default.
- W2926670599 hasConceptScore W2926670599C153180895 @default.
- W2926670599 hasConceptScore W2926670599C154945302 @default.
- W2926670599 hasConceptScore W2926670599C166957645 @default.
- W2926670599 hasConceptScore W2926670599C205649164 @default.
- W2926670599 hasConceptScore W2926670599C2779343474 @default.
- W2926670599 hasConceptScore W2926670599C2780297707 @default.
- W2926670599 hasConceptScore W2926670599C2780719617 @default.
- W2926670599 hasConceptScore W2926670599C31972630 @default.
- W2926670599 hasConceptScore W2926670599C41008148 @default.
- W2926670599 hasConceptScore W2926670599C56461940 @default.
- W2926670599 hasLocation W29266705991 @default.
- W2926670599 hasOpenAccess W2926670599 @default.
- W2926670599 hasPrimaryLocation W29266705991 @default.
- W2926670599 hasRelatedWork W2011870623 @default.
- W2926670599 hasRelatedWork W2015888549 @default.
- W2926670599 hasRelatedWork W2037835380 @default.
- W2926670599 hasRelatedWork W2116284372 @default.
- W2926670599 hasRelatedWork W213983614 @default.
- W2926670599 hasRelatedWork W2294172275 @default.
- W2926670599 hasRelatedWork W2295490424 @default.
- W2926670599 hasRelatedWork W2296549548 @default.
- W2926670599 hasRelatedWork W2374346174 @default.
- W2926670599 hasRelatedWork W2521139121 @default.
- W2926670599 hasRelatedWork W2587786196 @default.
- W2926670599 hasRelatedWork W2803798328 @default.
- W2926670599 hasRelatedWork W2891288863 @default.
- W2926670599 hasRelatedWork W2986056979 @default.
- W2926670599 hasRelatedWork W2992319611 @default.
- W2926670599 hasRelatedWork W3004177837 @default.
- W2926670599 hasRelatedWork W3080349177 @default.
- W2926670599 hasRelatedWork W3164802727 @default.
- W2926670599 hasRelatedWork W3165615883 @default.
- W2926670599 hasRelatedWork W2122996330 @default.
- W2926670599 isParatext "false" @default.
- W2926670599 isRetracted "false" @default.
- W2926670599 magId "2926670599" @default.
- W2926670599 workType "dissertation" @default.
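The listing above corresponds to the quad pattern shown in the header. As a sketch, a SPARQL query along the following lines would reproduce it, assuming the public SemOpenAlex endpoint at https://semopenalex.org/sparql and that the `?g` variable in the header pattern binds the named graph of each triple:

```sparql
# Retrieve every predicate/object pair for the work, together with the
# named graph each statement belongs to, mirroring the match pattern
# { <https://semopenalex.org/work/W2926670599> ?p ?o ?g. } above.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2926670599> ?p ?o .
  }
}
LIMIT 100
```

The `LIMIT 100` matches the listing's page size of 100 items; with 59 matching statements, all results fit on one page.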