Matches in SemOpenAlex for { <https://semopenalex.org/work/W2611183257> ?p ?o ?g. }
Showing items 1 to 75 of 75, with 100 items per page.
- W2611183257 abstract "With the increasing need for automated video analysis, visual object tracking became an important task in computer vision. Object tracking is used in a wide range of applications such as surveillance, human-computer interaction, medical imaging, or vehicle navigation. A tracking algorithm in unconstrained environments faces multiple challenges: potential changes in object shape and background, lighting, camera motion, and other adverse acquisition conditions. In this setting, classic methods of background subtraction are inadequate, and more discriminative methods of object detection are needed. Moreover, in generic tracking algorithms, the nature of the object is not known a priori. Thus, off-line learned appearance models for specific types of objects such as faces or pedestrians cannot be used. Further, the recent evolution of powerful machine learning techniques enabled the development of new tracking methods that learn the object appearance in an online manner and adapt to the varying constraints in real time, leading to very robust tracking algorithms that can operate in non-stationary environments to some extent. In this thesis, we start from the observation that different tracking algorithms have different strengths and weaknesses depending on the context. To overcome the varying challenges, we show that combining multiple modalities and tracking algorithms can considerably improve the overall tracking performance in unconstrained environments. More concretely, we first introduced a new tracker selection framework using a spatial and temporal coherence criterion. In this algorithm, multiple independent trackers are combined in a parallel manner, each of them using low-level features based on different complementary visual aspects like colour, texture and shape.
By recurrently selecting the most suitable tracker, the overall system can switch rapidly between different tracking algorithms with specific appearance models depending on the changes in the video. In the second contribution, the scene context is introduced into the tracker selection. We designed effective visual features, extracted from the scene context, to characterise the different image conditions and variations. At each point in time, a classifier is trained on these features to predict the tracker that will perform best under the given scene conditions. We further improved this context-based framework and proposed an extended version, where the individual trackers are changed and the classifier training is optimised. Finally, we started exploring one interesting perspective: the use of a Convolutional Neural Network to automatically learn to extract these scene features directly from the input image and predict the most suitable tracker." @default.
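The abstract's first contribution — several independent trackers running in parallel, with a spatial-temporal coherence criterion recurrently picking the one to trust — can be sketched as follows. This is a minimal illustrative toy, not the thesis's actual method: the tracker names, the bounding-box representation, and the coherence score (penalising large frame-to-frame jumps) are all assumptions made for the example.

```python
def coherence_score(prev_box, box):
    """Toy spatial-temporal coherence: penalise large jumps between
    consecutive frames; a box close to the previous one scores higher."""
    if prev_box is None:
        return 1.0
    dx = abs(box[0] - prev_box[0])
    dy = abs(box[1] - prev_box[1])
    return 1.0 / (1.0 + dx + dy)

def select_tracker(trackers, frame, prev_box):
    """Run every tracker on the frame in parallel (conceptually) and
    keep the result with the highest coherence score."""
    results = {name: track(frame, prev_box) for name, track in trackers.items()}
    best = max(results, key=lambda name: coherence_score(prev_box, results[name]))
    return best, results[best]

# Three stand-in trackers, each using a different complementary visual
# cue (colour, texture, shape), as in the abstract. Here they just
# return fixed (x, y) positions so the selection logic is visible.
trackers = {
    "colour":  lambda frame, prev: (10, 12),   # small drift
    "texture": lambda frame, prev: (50, 60),   # large jump -> low coherence
    "shape":   lambda frame, prev: (10, 11),   # closest to previous box
}

name, box = select_tracker(trackers, frame=None, prev_box=(10, 10))
print(name, box)  # the "shape" tracker wins: its result moved least
```

Repeating this selection every frame lets the system switch between appearance models as conditions change; the second contribution in the abstract replaces the hand-crafted coherence criterion with a classifier trained on scene-context features.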
- W2611183257 created "2017-05-12" @default.
- W2611183257 creator A5031369530 @default.
- W2611183257 date "2016-11-03" @default.
- W2611183257 modified "2023-09-24" @default.
- W2611183257 title "Exploiting scene context for on-line object tracking in unconstrained environments" @default.
- W2611183257 hasPublicationYear "2016" @default.
- W2611183257 type Work @default.
- W2611183257 sameAs 2611183257 @default.
- W2611183257 citedByCount "0" @default.
- W2611183257 crossrefType "dissertation" @default.
- W2611183257 hasAuthorship W2611183257A5031369530 @default.
- W2611183257 hasConcept C151730666 @default.
- W2611183257 hasConcept C153180895 @default.
- W2611183257 hasConcept C154945302 @default.
- W2611183257 hasConcept C15744967 @default.
- W2611183257 hasConcept C160633673 @default.
- W2611183257 hasConcept C19417346 @default.
- W2611183257 hasConcept C202474056 @default.
- W2611183257 hasConcept C2775936607 @default.
- W2611183257 hasConcept C2776151529 @default.
- W2611183257 hasConcept C2779343474 @default.
- W2611183257 hasConcept C2781238097 @default.
- W2611183257 hasConcept C31972630 @default.
- W2611183257 hasConcept C32653426 @default.
- W2611183257 hasConcept C41008148 @default.
- W2611183257 hasConcept C56461940 @default.
- W2611183257 hasConcept C57501372 @default.
- W2611183257 hasConcept C86803240 @default.
- W2611183257 hasConcept C97931131 @default.
- W2611183257 hasConceptScore W2611183257C151730666 @default.
- W2611183257 hasConceptScore W2611183257C153180895 @default.
- W2611183257 hasConceptScore W2611183257C154945302 @default.
- W2611183257 hasConceptScore W2611183257C15744967 @default.
- W2611183257 hasConceptScore W2611183257C160633673 @default.
- W2611183257 hasConceptScore W2611183257C19417346 @default.
- W2611183257 hasConceptScore W2611183257C202474056 @default.
- W2611183257 hasConceptScore W2611183257C2775936607 @default.
- W2611183257 hasConceptScore W2611183257C2776151529 @default.
- W2611183257 hasConceptScore W2611183257C2779343474 @default.
- W2611183257 hasConceptScore W2611183257C2781238097 @default.
- W2611183257 hasConceptScore W2611183257C31972630 @default.
- W2611183257 hasConceptScore W2611183257C32653426 @default.
- W2611183257 hasConceptScore W2611183257C41008148 @default.
- W2611183257 hasConceptScore W2611183257C56461940 @default.
- W2611183257 hasConceptScore W2611183257C57501372 @default.
- W2611183257 hasConceptScore W2611183257C86803240 @default.
- W2611183257 hasConceptScore W2611183257C97931131 @default.
- W2611183257 hasLocation W26111832571 @default.
- W2611183257 hasOpenAccess W2611183257 @default.
- W2611183257 hasPrimaryLocation W26111832571 @default.
- W2611183257 hasRelatedWork W1551429860 @default.
- W2611183257 hasRelatedWork W1979686512 @default.
- W2611183257 hasRelatedWork W1991534903 @default.
- W2611183257 hasRelatedWork W1997310483 @default.
- W2611183257 hasRelatedWork W2038430018 @default.
- W2611183257 hasRelatedWork W2127838520 @default.
- W2611183257 hasRelatedWork W2285041431 @default.
- W2611183257 hasRelatedWork W2320541657 @default.
- W2611183257 hasRelatedWork W2387051238 @default.
- W2611183257 hasRelatedWork W2405129404 @default.
- W2611183257 hasRelatedWork W2603251214 @default.
- W2611183257 hasRelatedWork W2890224976 @default.
- W2611183257 hasRelatedWork W2915025431 @default.
- W2611183257 hasRelatedWork W2962989418 @default.
- W2611183257 hasRelatedWork W2963520445 @default.
- W2611183257 hasRelatedWork W3021138684 @default.
- W2611183257 hasRelatedWork W3132576305 @default.
- W2611183257 hasRelatedWork W3163467117 @default.
- W2611183257 hasRelatedWork W3173878319 @default.
- W2611183257 hasRelatedWork W2333645783 @default.
- W2611183257 isParatext "false" @default.
- W2611183257 isRetracted "false" @default.
- W2611183257 magId "2611183257" @default.
- W2611183257 workType "dissertation" @default.