Matches in SemOpenAlex for { <https://semopenalex.org/work/W2235444918> ?p ?o ?g. }
Showing items 1 to 75 of 75, with 100 items per page.
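The listing below answers a basic graph pattern over the work's URI. A minimal sketch of the corresponding query is shown here; the endpoint URL and the `GRAPH` wrapping of the fourth position (`?g`) are assumptions about how the quad pattern in the header is usually expressed in standard SPARQL:

```sparql
# Assumed endpoint: https://semopenalex.org/sparql
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2235444918> ?p ?o .
  }
}
LIMIT 100
```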
- W2235444918 abstract "The rationale and motivation of this PhD thesis is in the diagnosis, assessment, maintenance and promotion of self-independence of people with dementia in their Instrumental Activities of Daily Living (IADLs). In this context a strong focus is held towards the task of automatically recognizing IADLs. Egocentric video analysis (cameras worn by a person) has recently gained much interest regarding this goal. Indeed recent studies have demonstrated how crucial the recognition of active objects (manipulated or observed by the person wearing the camera) is for the activity recognition task, and egocentric videos present the advantage of holding a strong differentiation between active and passive objects (associated to background). One recent approach towards finding active elements in a scene is the incorporation of visual saliency in the object recognition paradigms. Modeling the selective process of human perception of visual scenes represents an efficient way to drive the scene analysis towards particular areas considered of interest or salient, which, in egocentric videos, strongly corresponds to the locus of objects of interest. The objective of this thesis is to design an object recognition system that relies on visual saliency maps to provide more precise object representations, that are robust against background clutter and, therefore, improve the recognition of active objects for the IADLs recognition task. This PhD thesis is conducted in the framework of the Dem@care European project. Regarding the vast field of visual saliency modeling, we investigate and propose a contribution in both Bottom-up (gaze driven by stimuli) and Top-down (gaze driven by semantics) areas that aim at enhancing the particular task of active object recognition in egocentric video content. Our first contribution on Bottom-up models originates from the fact that observers are attracted by a central stimulus (the center of an image). This biological phenomenon is known as central bias. In egocentric videos, however, this hypothesis does not always hold. We study saliency models with non-central-bias geometrical cues. The proposed visual saliency models are trained based on eye fixations of observers and incorporated into spatio-temporal saliency models. When compared to state-of-the-art visual saliency models, the ones we present show promising results, as they highlight the necessity of a non-centered geometric saliency cue. For our Top-down model contribution we present a probabilistic visual attention model for manipulated object recognition in egocentric video content. Although arms often occlude objects and are usually considered as a burden for many vision systems, they become an asset in our approach, as we extract both global and local features describing their geometric layout and pose, as well as the objects being manipulated. We integrate this information in a probabilistic generative model, provide update equations that automatically compute the model parameters optimizing the likelihood of the data, and design a method to generate maps of visual attention that are later used in an object-recognition framework. This task-driven assessment reveals that the proposed method outperforms the state-of-the-art in object recognition for egocentric video content. [...]" @default.
- W2235444918 created "2016-06-24" @default.
- W2235444918 creator A5020451475 @default.
- W2235444918 date "2015-11-30" @default.
- W2235444918 modified "2023-09-23" @default.
- W2235444918 title "Perceptual object of interest recognition : application to the interpretation of instrumental activities of daily living for dementia studies" @default.
- W2235444918 hasPublicationYear "2015" @default.
- W2235444918 type Work @default.
- W2235444918 sameAs 2235444918 @default.
- W2235444918 citedByCount "0" @default.
- W2235444918 crossrefType "dissertation" @default.
- W2235444918 hasAuthorship W2235444918A5020451475 @default.
- W2235444918 hasConcept C107457646 @default.
- W2235444918 hasConcept C121687571 @default.
- W2235444918 hasConcept C127413603 @default.
- W2235444918 hasConcept C154945302 @default.
- W2235444918 hasConcept C15744967 @default.
- W2235444918 hasConcept C166957645 @default.
- W2235444918 hasConcept C169760540 @default.
- W2235444918 hasConcept C180747234 @default.
- W2235444918 hasConcept C201995342 @default.
- W2235444918 hasConcept C205649164 @default.
- W2235444918 hasConcept C26760741 @default.
- W2235444918 hasConcept C2779343474 @default.
- W2235444918 hasConcept C2779916870 @default.
- W2235444918 hasConcept C2780451532 @default.
- W2235444918 hasConcept C2781238097 @default.
- W2235444918 hasConcept C31972630 @default.
- W2235444918 hasConcept C41008148 @default.
- W2235444918 hasConcept C64876066 @default.
- W2235444918 hasConceptScore W2235444918C107457646 @default.
- W2235444918 hasConceptScore W2235444918C121687571 @default.
- W2235444918 hasConceptScore W2235444918C127413603 @default.
- W2235444918 hasConceptScore W2235444918C154945302 @default.
- W2235444918 hasConceptScore W2235444918C15744967 @default.
- W2235444918 hasConceptScore W2235444918C166957645 @default.
- W2235444918 hasConceptScore W2235444918C169760540 @default.
- W2235444918 hasConceptScore W2235444918C180747234 @default.
- W2235444918 hasConceptScore W2235444918C201995342 @default.
- W2235444918 hasConceptScore W2235444918C205649164 @default.
- W2235444918 hasConceptScore W2235444918C26760741 @default.
- W2235444918 hasConceptScore W2235444918C2779343474 @default.
- W2235444918 hasConceptScore W2235444918C2779916870 @default.
- W2235444918 hasConceptScore W2235444918C2780451532 @default.
- W2235444918 hasConceptScore W2235444918C2781238097 @default.
- W2235444918 hasConceptScore W2235444918C31972630 @default.
- W2235444918 hasConceptScore W2235444918C41008148 @default.
- W2235444918 hasConceptScore W2235444918C64876066 @default.
- W2235444918 hasLocation W22354449181 @default.
- W2235444918 hasOpenAccess W2235444918 @default.
- W2235444918 hasPrimaryLocation W22354449181 @default.
- W2235444918 hasRelatedWork W1006670917 @default.
- W2235444918 hasRelatedWork W1548544675 @default.
- W2235444918 hasRelatedWork W1955738399 @default.
- W2235444918 hasRelatedWork W2012703318 @default.
- W2235444918 hasRelatedWork W2074205307 @default.
- W2235444918 hasRelatedWork W2084098959 @default.
- W2235444918 hasRelatedWork W2104532524 @default.
- W2235444918 hasRelatedWork W2114558148 @default.
- W2235444918 hasRelatedWork W2149276562 @default.
- W2235444918 hasRelatedWork W2217598536 @default.
- W2235444918 hasRelatedWork W2295786901 @default.
- W2235444918 hasRelatedWork W2298287476 @default.
- W2235444918 hasRelatedWork W2492134165 @default.
- W2235444918 hasRelatedWork W2504374417 @default.
- W2235444918 hasRelatedWork W2610537166 @default.
- W2235444918 hasRelatedWork W2612979588 @default.
- W2235444918 hasRelatedWork W2916883341 @default.
- W2235444918 hasRelatedWork W2971824625 @default.
- W2235444918 hasRelatedWork W3125189040 @default.
- W2235444918 hasRelatedWork W621239334 @default.
- W2235444918 isParatext "false" @default.
- W2235444918 isRetracted "false" @default.
- W2235444918 magId "2235444918" @default.
- W2235444918 workType "dissertation" @default.