Matches in SemOpenAlex for { <https://semopenalex.org/work/W1557127310> ?p ?o ?g. }
- W1557127310 abstract "In this work we study how a novel model of spatial saliency (visual attention), combined with image features, can significantly accelerate a scene recognition application while preserving recognition performance. To do so, we use a mobile robot-like application where scene recognition is carried out through the use of image features to characterize the different scenarios, and the Nearest Neighbor rule to carry out the classification. SIFT and SURF are two recent and competitive alternatives for local image feature extraction that we compare through extensive experimental work. Results from the experiments show that SIFT features perform significantly better than SURF features, achieving important reductions in the size of the database of prototypes without significant losses in recognition performance, and thus accelerating scene recognition. The experiments also show that SURF features are less distinctive when using very large databases of interest points, as occurs in the present case. Visual attention is the process by which the Human Visual System (HVS) is able to select from a given scene the regions of interest that contain salient information, and thus reduce the amount of information to be processed (Treisman, 1980; Koch, 1985). In the last decade, several biologically motivated computational models have been released to implement visual attention in image and video processing (Itti, 2000; Garcia-Diaz, 2008). Visual attention has also been used to improve object recognition and scene analysis (Bonaiuto, 2005; Walther, 2005). In this chapter, we study the utility of using a novel model of spatial saliency to improve a scene recognition application by reducing the number of prototypes needed to carry out the classification task. The application is based on mobile robot-like video sequences taken in indoor facilities formed by several rooms and halls.
The aim is to recognize the different scenarios in order to provide the mobile robot system with general location data. The visual attention approach is a novel model of bottom-up saliency that uses local phase information of the input data, from which second-order statistical information is removed to obtain a retinotopic saliency map. The proposed approach joins computational mechanisms of the two hypotheses largely accepted in early vision: first, the efficient coding" @default.
- W1557127310 created "2016-06-24" @default.
- W1557127310 creator A5018082691 @default.
- W1557127310 creator A5056902042 @default.
- W1557127310 creator A5067026608 @default.
- W1557127310 creator A5072735859 @default.
- W1557127310 date "2011-04-01" @default.
- W1557127310 modified "2023-10-01" @default.
- W1557127310 title "Scene Recognition through Visual Attention and Image Features: A Comparison between SIFT and SURF Approaches" @default.
- W1557127310 cites W104847522 @default.
- W1557127310 cites W1497599070 @default.
- W1557127310 cites W1533072162 @default.
- W1557127310 cites W1560922264 @default.
- W1557127310 cites W1600886747 @default.
- W1557127310 cites W1962010357 @default.
- W1557127310 cites W1983024748 @default.
- W1557127310 cites W2002260191 @default.
- W1557127310 cites W2048180049 @default.
- W1557127310 cites W2054802006 @default.
- W1557127310 cites W2119605622 @default.
- W1557127310 cites W2128427846 @default.
- W1557127310 cites W2135269154 @default.
- W1557127310 cites W2142748282 @default.
- W1557127310 cites W2149095485 @default.
- W1557127310 cites W2151035455 @default.
- W1557127310 cites W2151103935 @default.
- W1557127310 cites W2170869852 @default.
- W1557127310 cites W2177274842 @default.
- W1557127310 cites W81409850 @default.
- W1557127310 doi "https://doi.org/10.5772/14343" @default.
- W1557127310 hasPublicationYear "2011" @default.
- W1557127310 type Work @default.
- W1557127310 sameAs 1557127310 @default.
- W1557127310 citedByCount "11" @default.
- W1557127310 countsByYear W15571273102015 @default.
- W1557127310 countsByYear W15571273102016 @default.
- W1557127310 countsByYear W15571273102018 @default.
- W1557127310 countsByYear W15571273102019 @default.
- W1557127310 countsByYear W15571273102020 @default.
- W1557127310 crossrefType "book-chapter" @default.
- W1557127310 hasAuthorship W1557127310A5018082691 @default.
- W1557127310 hasAuthorship W1557127310A5056902042 @default.
- W1557127310 hasAuthorship W1557127310A5067026608 @default.
- W1557127310 hasAuthorship W1557127310A5072735859 @default.
- W1557127310 hasBestOaLocation W15571273101 @default.
- W1557127310 hasConcept C115961682 @default.
- W1557127310 hasConcept C138885662 @default.
- W1557127310 hasConcept C153180895 @default.
- W1557127310 hasConcept C154945302 @default.
- W1557127310 hasConcept C158495155 @default.
- W1557127310 hasConcept C160086991 @default.
- W1557127310 hasConcept C169760540 @default.
- W1557127310 hasConcept C26760741 @default.
- W1557127310 hasConcept C2776401178 @default.
- W1557127310 hasConcept C2780719617 @default.
- W1557127310 hasConcept C2781238097 @default.
- W1557127310 hasConcept C2986089797 @default.
- W1557127310 hasConcept C31972630 @default.
- W1557127310 hasConcept C36464697 @default.
- W1557127310 hasConcept C41008148 @default.
- W1557127310 hasConcept C41895202 @default.
- W1557127310 hasConcept C61265191 @default.
- W1557127310 hasConcept C64876066 @default.
- W1557127310 hasConcept C86803240 @default.
- W1557127310 hasConcept C9417928 @default.
- W1557127310 hasConceptScore W1557127310C115961682 @default.
- W1557127310 hasConceptScore W1557127310C138885662 @default.
- W1557127310 hasConceptScore W1557127310C153180895 @default.
- W1557127310 hasConceptScore W1557127310C154945302 @default.
- W1557127310 hasConceptScore W1557127310C158495155 @default.
- W1557127310 hasConceptScore W1557127310C160086991 @default.
- W1557127310 hasConceptScore W1557127310C169760540 @default.
- W1557127310 hasConceptScore W1557127310C26760741 @default.
- W1557127310 hasConceptScore W1557127310C2776401178 @default.
- W1557127310 hasConceptScore W1557127310C2780719617 @default.
- W1557127310 hasConceptScore W1557127310C2781238097 @default.
- W1557127310 hasConceptScore W1557127310C2986089797 @default.
- W1557127310 hasConceptScore W1557127310C31972630 @default.
- W1557127310 hasConceptScore W1557127310C36464697 @default.
- W1557127310 hasConceptScore W1557127310C41008148 @default.
- W1557127310 hasConceptScore W1557127310C41895202 @default.
- W1557127310 hasConceptScore W1557127310C61265191 @default.
- W1557127310 hasConceptScore W1557127310C64876066 @default.
- W1557127310 hasConceptScore W1557127310C86803240 @default.
- W1557127310 hasConceptScore W1557127310C9417928 @default.
- W1557127310 hasLocation W15571273101 @default.
- W1557127310 hasLocation W15571273102 @default.
- W1557127310 hasOpenAccess W1557127310 @default.
- W1557127310 hasPrimaryLocation W15571273101 @default.
- W1557127310 hasRelatedWork W1497599070 @default.
- W1557127310 hasRelatedWork W1510835000 @default.
- W1557127310 hasRelatedWork W1934890906 @default.
- W1557127310 hasRelatedWork W1982851385 @default.
- W1557127310 hasRelatedWork W1991857366 @default.
- W1557127310 hasRelatedWork W2032007016 @default.
- W1557127310 hasRelatedWork W2041719651 @default.
- W1557127310 hasRelatedWork W2057205720 @default.
- W1557127310 hasRelatedWork W2100470808 @default.
- W1557127310 hasRelatedWork W2128272608 @default.
- W1557127310 hasRelatedWork W2133589685 @default.
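The matches listed above come from the SemOpenAlex knowledge graph and can be retrieved programmatically with the same triple pattern shown in the header. A minimal Python sketch, assuming the public SPARQL endpoint lives at `https://semopenalex.org/sparql` (check the SemOpenAlex documentation for the current address); the helper names below are illustrative, not part of any SemOpenAlex API:

```python
import urllib.parse

# Assumed endpoint URL -- verify against the SemOpenAlex documentation.
ENDPOINT = "https://semopenalex.org/sparql"


def build_query(work_id: str) -> str:
    """Return a SPARQL query selecting every predicate/object pair
    attached to the given SemOpenAlex work IRI (the { <work> ?p ?o }
    pattern from the listing above)."""
    return (
        "SELECT ?p ?o WHERE { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "}"
    )


def build_request_url(work_id: str) -> str:
    """URL-encode the query as a GET request against the endpoint,
    following the SPARQL 1.1 Protocol's query-via-GET convention."""
    params = urllib.parse.urlencode({"query": build_query(work_id)})
    return f"{ENDPOINT}?{params}"


if __name__ == "__main__":
    # Fetching the URL (e.g. with urllib.request and an
    # "Accept: application/sparql-results+json" header) would return
    # the predicate/object pairs listed in this dump.
    print(build_request_url("W1557127310"))
```

Sending the generated URL with an `Accept: application/sparql-results+json` header should return the same predicates shown above (`abstract`, `creator`, `cites`, `hasConcept`, and so on) as JSON bindings.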