Matches in SemOpenAlex for { <https://semopenalex.org/work/W2896650426> ?p ?o ?g. }
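The matches listed below correspond to the triple pattern in the header. As a minimal sketch, the same bindings could be requested from a SPARQL endpoint over HTTP; note that the endpoint URL here is an assumption (SemOpenAlex publishes a public SPARQL endpoint, commonly given as `https://semopenalex.org/sparql`), and the helper names are illustrative, not part of any official client.

```python
# Sketch: fetching every (predicate, object, graph) match for a SemOpenAlex work.
# ENDPOINT is an assumed URL; fetch_matches() is a hypothetical helper, not an
# official SemOpenAlex API.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint
WORK_IRI = "https://semopenalex.org/work/W2896650426"


def build_query(iri: str) -> str:
    """Build a SPARQL query equivalent to { <iri> ?p ?o ?g. } in the header,
    i.e. all predicate/object pairs for the work, with the named graph bound."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{iri}> ?p ?o . }} "
        "}"
    )


def fetch_matches(iri: str):
    """Execute the query via SPARQL-over-HTTP GET and return the JSON bindings.
    Requires network access to the (assumed) endpoint above."""
    params = urllib.parse.urlencode({"query": build_query(iri), "format": "json"})
    with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
        return json.load(resp)["results"]["bindings"]


# Build (but do not send) the query, so the sketch runs without network access.
query = build_query(WORK_IRI)
print(query)
```

Each binding returned by such a query would carry the `?p` (e.g. `cites`, `hasConcept`) and `?o` values shown in the list below.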
- W2896650426 abstract "Visual knowledge plays an important role in many highly skilled applications, such as medical diagnosis, geospatial image analysis and pathology diagnosis. Medical practitioners are able to interpret and reason about diagnostic images based on not only primitive-level image features such as color, texture, and spatial distribution but also their experience and tacit knowledge which are seldom articulated explicitly. This reasoning process is dynamic and closely related to real-time human cognition. Due to a lack of visual knowledge management and sharing tools, it is difficult to capture and transfer such tacit and hard-won expertise to novices. Moreover, many mission-critical applications require the ability to process such tacit visual knowledge in real time. Precisely how to index this visual knowledge computationally and systematically still poses a challenge to the computing community. My dissertation research results in novel computational approaches for high-throughput visual knowledge analysis and retrieval from large-scale databases using latest technologies in big data ecosystems. To provide a better understanding of visual reasoning, human gaze patterns are qualitatively measured spatially and temporally to model observers' cognitive process. These gaze patterns are then indexed in a NoSQL distributed database as a visual knowledge repository, which is accessed using various unique retrieval methods developed through this dissertation work. To provide meaningful retrievals in real time, deep-learning methods for automatic annotation of visual activities and streaming similarity comparisons are developed under a gaze-streaming framework using Apache Spark. This research has several potential applications that offer a broader impact among the scientific community and in the practical world. First, the proposed framework can be adapted for different domains, such as fine arts, life sciences, etc. with minimal effort to capture human reasoning processes. Second, with its real-time visual knowledge search function, this framework can be used for training novices in the interpretation of domain images, by helping them learn experts' reasoning processes. Third, by helping researchers to understand human visual reasoning, it may shed light on human semantics modeling. Finally, integrating reasoning process with multimedia data, future retrieval of media could embed human perceptual reasoning for database search beyond traditional content-based media retrievals." @default.
- W2896650426 created "2018-10-26" @default.
- W2896650426 creator A5088559100 @default.
- W2896650426 date "2021-04-14" @default.
- W2896650426 modified "2023-09-26" @default.
- W2896650426 title "High-throughput visual knowledge analysis and retrieval in big data ecosystems" @default.
- W2896650426 cites W1283708 @default.
- W2896650426 cites W1480736306 @default.
- W2896650426 cites W1497599070 @default.
- W2896650426 cites W1503061071 @default.
- W2896650426 cites W1509235435 @default.
- W2896650426 cites W1525136198 @default.
- W2896650426 cites W1570719805 @default.
- W2896650426 cites W1571907546 @default.
- W2896650426 cites W1580745864 @default.
- W2896650426 cites W1602376808 @default.
- W2896650426 cites W1934019294 @default.
- W2896650426 cites W1972978214 @default.
- W2896650426 cites W1975711037 @default.
- W2896650426 cites W1976821017 @default.
- W2896650426 cites W1977783431 @default.
- W2896650426 cites W1981420413 @default.
- W2896650426 cites W1982003698 @default.
- W2896650426 cites W1985535423 @default.
- W2896650426 cites W1995689975 @default.
- W2896650426 cites W2004604272 @default.
- W2896650426 cites W2024342127 @default.
- W2896650426 cites W2024787595 @default.
- W2896650426 cites W2025033911 @default.
- W2896650426 cites W2053874005 @default.
- W2896650426 cites W2063178829 @default.
- W2896650426 cites W2065290179 @default.
- W2896650426 cites W2066771327 @default.
- W2896650426 cites W2067857362 @default.
- W2896650426 cites W2078799042 @default.
- W2896650426 cites W2082490348 @default.
- W2896650426 cites W2089442574 @default.
- W2896650426 cites W2089964152 @default.
- W2896650426 cites W2096544401 @default.
- W2896650426 cites W2098550303 @default.
- W2896650426 cites W2099893215 @default.
- W2896650426 cites W2100726628 @default.
- W2896650426 cites W2102804490 @default.
- W2896650426 cites W2102862543 @default.
- W2896650426 cites W2105947650 @default.
- W2896650426 cites W2107002436 @default.
- W2896650426 cites W2109227373 @default.
- W2896650426 cites W2110086534 @default.
- W2896650426 cites W2112200469 @default.
- W2896650426 cites W2114615169 @default.
- W2896650426 cites W2119567691 @default.
- W2896650426 cites W2119738171 @default.
- W2896650426 cites W2130009882 @default.
- W2896650426 cites W2131166445 @default.
- W2896650426 cites W2131975293 @default.
- W2896650426 cites W2132353975 @default.
- W2896650426 cites W2144546400 @default.
- W2896650426 cites W2147880316 @default.
- W2896650426 cites W2154894831 @default.
- W2896650426 cites W2155701725 @default.
- W2896650426 cites W2160067530 @default.
- W2896650426 cites W2173213060 @default.
- W2896650426 cites W2250892821 @default.
- W2896650426 cites W2543258082 @default.
- W2896650426 cites W3022697243 @default.
- W2896650426 cites W85152245 @default.
- W2896650426 doi "https://doi.org/10.32469/10355/63919" @default.
- W2896650426 hasPublicationYear "2021" @default.
- W2896650426 type Work @default.
- W2896650426 sameAs 2896650426 @default.
- W2896650426 citedByCount "0" @default.
- W2896650426 crossrefType "dissertation" @default.
- W2896650426 hasAuthorship W2896650426A5088559100 @default.
- W2896650426 hasBestOaLocation W28966504262 @default.
- W2896650426 hasConcept C111919701 @default.
- W2896650426 hasConcept C124101348 @default.
- W2896650426 hasConcept C154945302 @default.
- W2896650426 hasConcept C17305859 @default.
- W2896650426 hasConcept C205649164 @default.
- W2896650426 hasConcept C23123220 @default.
- W2896650426 hasConcept C2522767166 @default.
- W2896650426 hasConcept C2779561248 @default.
- W2896650426 hasConcept C36464697 @default.
- W2896650426 hasConcept C41008148 @default.
- W2896650426 hasConcept C56739046 @default.
- W2896650426 hasConcept C58640448 @default.
- W2896650426 hasConcept C59732488 @default.
- W2896650426 hasConcept C75684735 @default.
- W2896650426 hasConcept C9770341 @default.
- W2896650426 hasConcept C98045186 @default.
- W2896650426 hasConceptScore W2896650426C111919701 @default.
- W2896650426 hasConceptScore W2896650426C124101348 @default.
- W2896650426 hasConceptScore W2896650426C154945302 @default.
- W2896650426 hasConceptScore W2896650426C17305859 @default.
- W2896650426 hasConceptScore W2896650426C205649164 @default.
- W2896650426 hasConceptScore W2896650426C23123220 @default.
- W2896650426 hasConceptScore W2896650426C2522767166 @default.
- W2896650426 hasConceptScore W2896650426C2779561248 @default.
- W2896650426 hasConceptScore W2896650426C36464697 @default.
- W2896650426 hasConceptScore W2896650426C41008148 @default.