Matches in SemOpenAlex for { <https://semopenalex.org/work/W2525226171> ?p ?o ?g. }
Showing items 1 to 78 of 78, with 100 items per page.
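The listing below can be reproduced programmatically. A minimal sketch, assuming SemOpenAlex exposes a public SPARQL endpoint at `https://semopenalex.org/sparql` returning standard SPARQL 1.1 JSON results (the endpoint URL and result shape are assumptions, not stated in this listing):

```python
# Sketch: fetch all (predicate, object) pairs for a SemOpenAlex work.
# ENDPOINT is an assumed URL; adjust if the actual SPARQL endpoint differs.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumption
WORK = "https://semopenalex.org/work/W2525226171"

def build_query(work_iri: str) -> str:
    """Return a SPARQL query listing every (predicate, object) pair of the work."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"

def fetch_triples(work_iri: str):
    """POST the query and yield (predicate, object) value pairs from JSON results."""
    data = urllib.parse.urlencode({"query": build_query(work_iri)}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
    for row in results["results"]["bindings"]:
        yield row["p"]["value"], row["o"]["value"]

if __name__ == "__main__":
    for p, o in fetch_triples(WORK):
        print(p, o)
```

The network call is kept out of import time so the query string can be inspected or reused against any endpoint that speaks the SPARQL 1.1 protocol.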
- W2525226171 abstract "The growing usage of mobile camera phones has led to the proliferation of many mobile applications, such as mobile city guides, mobile shopping, personalized mobile services, and personal album management. Mobile visual systems have been developed that analyze images taken by mobile devices to enable these applications. Among these applications, two are especially important: 1) mobile image recognition, which provides relevant information for scene/landmark images, and 2) mobile image annotation, which uses camera phones to capture images and annotate them. Mobile image recognition and annotation are closely related, and both are based on mobile visual analysis. To enhance the performance of a mobile visual system, it is natural to incorporate mobile domain-specific context information into conventional visual content analysis. The context information in this work includes location and direction information from mobile devices, mobile user interaction, etc. However, context information is underutilized in most existing mobile visual systems. Existing systems mainly use location information provided by GPS (Global Positioning System) to obtain candidate images located near the current location of the query image, and then carry out content analysis within the shortlisted candidates to obtain the final recognition/annotation results. This is insufficient because (i) GPS is not reliable due to its large errors in dense built-up areas, and (ii) other context information, such as direction (recorded by the digital compass on a mobile device), is not utilized to further improve recognition. For mobile image recognition, we propose several approaches based on content analysis with possible incorporation of context information: 1) A new approach for scene image recognition that combines generative and discriminative models. A new image signature is proposed based on the Gaussian Mixture Model (GMM), and its soft relevance value is incorporated into the training of a Fuzzy Support Vector Machine (FSVM). Using the proposed GMM-FSVM approach, recognition performance is shown to be superior to state-of-the-art Bag-of-Words (BoW) methods. 2) A new landmark image recognition method that incorporates saliency information of images into the state-of-the-art Scalable Vocabulary Tree (SVT) approach. Since the saliency information emphasizes the foreground landmark object and ignores the cluttered background, recognition performance of the proposed Saliency-Aware Vocabulary Tree (SAVT) algorithm is improved relative to the baseline SVT approach. 3) A real-valued multi-class AdaBoost algorithm using an exponential loss function (RMAE), which can integrate visual content and two types of mobile context: location and direction. RMAE generates SVTs based on content and context analysis, respectively, then constructs weak classifiers from them, and finally builds a strong classifier from the weak classifiers that contains both content and context…" @default.
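The RMAE method in the abstract combines content-based and context-based weak classifiers into one strong classifier. As a rough illustration of that AdaBoost-style weighted vote (a generic sketch only, not the thesis's actual algorithm; the classifiers, weights, and sample fields here are all toy values):

```python
# Illustrative AdaBoost-style combination of weak classifiers.
# NOT the RMAE algorithm from the dissertation: alphas, classifiers,
# and the sample format are hypothetical toy values.
from typing import Callable, List, Tuple

Weak = Callable[[dict], List[float]]  # maps a sample to per-class scores

def strong_classify(sample: dict, weak: List[Tuple[float, Weak]]) -> int:
    """Sum alpha-weighted per-class scores across weak classifiers; return argmax class."""
    n_classes = len(weak[0][1](sample))
    totals = [0.0] * n_classes
    for alpha, h in weak:
        for c, score in enumerate(h(sample)):
            totals[c] += alpha * score
    return max(range(n_classes), key=totals.__getitem__)

# Toy weak classifiers: one "content" cue and one "context" (location) cue.
content_h = lambda x: [1.0, 0.0] if x["visual"] == "tower" else [0.0, 1.0]
context_h = lambda x: [0.8, 0.2] if x["near_landmark_0"] else [0.2, 0.8]

pred = strong_classify(
    {"visual": "tower", "near_landmark_0": True},
    [(0.6, content_h), (0.4, context_h)],
)
# pred == 0: both cues agree on class 0, so the weighted vote picks it.
```

The point of the combination is that a strong context cue can outvote an ambiguous visual cue (and vice versa), which is how content and context analysis are fused in boosting-style schemes.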
- W2525226171 created "2016-10-07" @default.
- W2525226171 creator A5010444377 @default.
- W2525226171 date "2019-10-03" @default.
- W2525226171 modified "2023-09-22" @default.
- W2525226171 title "Context-aware mobile image recognition and annotation" @default.
- W2525226171 doi "https://doi.org/10.32657/10356/55100" @default.
- W2525226171 hasPublicationYear "2019" @default.
- W2525226171 type Work @default.
- W2525226171 sameAs 2525226171 @default.
- W2525226171 citedByCount "0" @default.
- W2525226171 crossrefType "dissertation" @default.
- W2525226171 hasAuthorship W2525226171A5010444377 @default.
- W2525226171 hasConcept C107457646 @default.
- W2525226171 hasConcept C127353759 @default.
- W2525226171 hasConcept C136764020 @default.
- W2525226171 hasConcept C144543869 @default.
- W2525226171 hasConcept C154945302 @default.
- W2525226171 hasConcept C166957645 @default.
- W2525226171 hasConcept C186967261 @default.
- W2525226171 hasConcept C205649164 @default.
- W2525226171 hasConcept C207029474 @default.
- W2525226171 hasConcept C2776321320 @default.
- W2525226171 hasConcept C2779343474 @default.
- W2525226171 hasConcept C31972630 @default.
- W2525226171 hasConcept C41008148 @default.
- W2525226171 hasConcept C49774154 @default.
- W2525226171 hasConcept C516764902 @default.
- W2525226171 hasConcept C60952562 @default.
- W2525226171 hasConcept C64754055 @default.
- W2525226171 hasConcept C68649174 @default.
- W2525226171 hasConcept C76155785 @default.
- W2525226171 hasConceptScore W2525226171C107457646 @default.
- W2525226171 hasConceptScore W2525226171C127353759 @default.
- W2525226171 hasConceptScore W2525226171C136764020 @default.
- W2525226171 hasConceptScore W2525226171C144543869 @default.
- W2525226171 hasConceptScore W2525226171C154945302 @default.
- W2525226171 hasConceptScore W2525226171C166957645 @default.
- W2525226171 hasConceptScore W2525226171C186967261 @default.
- W2525226171 hasConceptScore W2525226171C205649164 @default.
- W2525226171 hasConceptScore W2525226171C207029474 @default.
- W2525226171 hasConceptScore W2525226171C2776321320 @default.
- W2525226171 hasConceptScore W2525226171C2779343474 @default.
- W2525226171 hasConceptScore W2525226171C31972630 @default.
- W2525226171 hasConceptScore W2525226171C41008148 @default.
- W2525226171 hasConceptScore W2525226171C49774154 @default.
- W2525226171 hasConceptScore W2525226171C516764902 @default.
- W2525226171 hasConceptScore W2525226171C60952562 @default.
- W2525226171 hasConceptScore W2525226171C64754055 @default.
- W2525226171 hasConceptScore W2525226171C68649174 @default.
- W2525226171 hasConceptScore W2525226171C76155785 @default.
- W2525226171 hasLocation W25252261711 @default.
- W2525226171 hasOpenAccess W2525226171 @default.
- W2525226171 hasPrimaryLocation W25252261711 @default.
- W2525226171 hasRelatedWork W106564149 @default.
- W2525226171 hasRelatedWork W1539866415 @default.
- W2525226171 hasRelatedWork W1585779673 @default.
- W2525226171 hasRelatedWork W1971566873 @default.
- W2525226171 hasRelatedWork W1985975773 @default.
- W2525226171 hasRelatedWork W202126672 @default.
- W2525226171 hasRelatedWork W2052753938 @default.
- W2525226171 hasRelatedWork W2073061769 @default.
- W2525226171 hasRelatedWork W2075862596 @default.
- W2525226171 hasRelatedWork W2076585681 @default.
- W2525226171 hasRelatedWork W2081872368 @default.
- W2525226171 hasRelatedWork W2087754359 @default.
- W2525226171 hasRelatedWork W2591455741 @default.
- W2525226171 hasRelatedWork W2766976632 @default.
- W2525226171 hasRelatedWork W2891743675 @default.
- W2525226171 hasRelatedWork W2965433310 @default.
- W2525226171 hasRelatedWork W3132415251 @default.
- W2525226171 hasRelatedWork W3204082522 @default.
- W2525226171 hasRelatedWork W3205173052 @default.
- W2525226171 hasRelatedWork W2182393350 @default.
- W2525226171 isParatext "false" @default.
- W2525226171 isRetracted "false" @default.
- W2525226171 magId "2525226171" @default.
- W2525226171 workType "dissertation" @default.