Matches in SemOpenAlex for { <https://semopenalex.org/work/W2625813776> ?p ?o ?g. }
Showing items 1 to 67 of 67, with 100 items per page.
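The header above is a quad pattern: every predicate ?p, object ?o, and graph ?g attached to the work. A minimal SPARQL sketch of the same lookup follows; the endpoint URL (https://semopenalex.org/sparql) and the GRAPH rendering of ?g are assumptions, not part of the listing:

```sparql
# Minimal sketch of the lookup behind this listing.
# Assumptions: the public SemOpenAlex endpoint and the GRAPH form of ?g.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2625813776> ?p ?o .
  }
}
LIMIT 100   # matches the page size of the listing
```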
- W2625813776 abstract "Visual Similarity Effects in Categorical Search Robert G. Alexander 1 (rgalexander@notes.cc.sunysb.edu), Wei Zhang (weiz@microsoft.com) 2,3 Gregory J. Zelinsky 1,2 (Gregory.Zelinsky@stonybrook.edu) Department of Psychology, Stony Brook University Department of Computer Science, Stony Brook University Microsoft Corporation Abstract The factors affecting search guidance to categorical targets are largely unknown. We asked how visual similarity relationships between random-category distractors and two target classes, teddy bears and butterflies, affects search guidance. Experiment 1 used a web-based task to collect visual similarity rankings between these target classes and random objects, from which we created search displays having either high-similarity distractors, low-similarity distractors, or “mixed” displays with high, medium, and low- similarity distractors. Subjects made faster manual responses and fixated fewer distractors on low-similarity displays compared to high. On mixed trials, first fixations were more frequent on high-similarity distractors (bear=49%; butterfly=58%) than low-similarity distractors (bear=9%; butterfly=12%). Experiment 2 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer-vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search is indeed guided by visual similarity. Keywords: Visual search; eye movements; categorical guidance; visual similarity; object class detection Introduction You have probably had the experience of searching for your car in a parking lot and finding several other vehicles of the same color or model before finally finding your car. This is an example of visual similarity affecting search; the presence of these target-similar distractors made it harder to find the actual target of your search. Such visual similarity effects have been extensively studied in the context of search, with the main finding from this effort being that search is slower when distractors are similar to the target (e.g., Duncan & Humphreys, 1989; Treisman, 1991). Models of search have also relied extensively on these visual similarity relationships (e.g., Pomplun, 2006; Treisman & Sato, 1990; Wolfe, 1994; Zelinsky, 2008). Despite their many differences, all of these models posit a very similar process for how similarity relationships are computed and used; the target and scene are represented by visual features (color, orientation, etc.), which are compared to generate a signal used to guide search to the target and to target-like distractors in a display. In general, the more similar an object is to the target, the more likely that object will be fixated. All of these models, however, assume knowledge of the target’s specific appearance in the creation of this guidance signal. This assumption is problematic, as it is often violated in the real world. Descriptions of search targets are often incomplete and lacking in visual detail; exact knowledge of a target’s appearance is an artificial situation that typically exists only in the laboratory. Particularly interesting are cases in which a target is defined categorically, as from a text label or an instruction (i.e., no picture preview of the target). 
Given the high degree of variability inherent in most categories of common objects, search under these conditions would have few visual features of the target that could be confidently compared to a scene to generate a guidance signal. Indeed, a debate exists over whether categorical search is guided at all, with some labs finding that it is (Schmidt & Zelinsky, 2009; Yang & Zelinsky, 2009) and others suggesting that it is not (e.g., Castelhano et al., 2008; Wolfe et al., 2004). The present study enters this debate on the existence of categorical guidance, focusing it on the relationship between target-distractor visual similarity and guidance to categorically-defined realistic targets. Guidance from a pictorial preview is known to decrease with increasing visual similarity between a target and distractors; does this same relationship hold for categorically-defined targets? Given that the representation of categorical targets is largely unknown, it may be the case that target descriptions are dominated by non-visual features, such as semantic or functional properties of the target category. If this is the case, guidance to the target may be weak or even nonexistent, potentially explaining the discrepant findings. To the extent that categorical search does use non-visual features, effects of target-distractor visual similarity would therefore not be expected. However, if target categories are represented visually, one might expect the same target- distractor similarity relationships demonstrated for target- specific search to extend to categorical search. It is unclear how best to manipulate visual similarity in the context of categorical search. Traditional methods of manipulating target-distractor similarity by varying only a single target feature are clearly suboptimal, as realistic objects are composed of many features and it is impossible to know a priori which are the most important. This problem is compounded by the categorical nature of the task; the relevance of a particular target feature would almost certainly depend on the specific category of distractor to which it is compared. It is not even known how best to derive specific target features for such a comparison; should an average be obtained from many target exemplars or should features be extracted from a particular exemplar that is representative of the target class? In light of the difficulties associated with directly manipulating the specific features underlying visual" @default.
- W2625813776 created "2017-06-23" @default.
- W2625813776 creator A5076699095 @default.
- W2625813776 creator A5084197706 @default.
- W2625813776 creator A5086225225 @default.
- W2625813776 date "2010-01-01" @default.
- W2625813776 modified "2023-09-28" @default.
- W2625813776 title "Visual Similarity Effects in Categorical Search" @default.
- W2625813776 hasPublicationYear "2010" @default.
- W2625813776 type Work @default.
- W2625813776 sameAs 2625813776 @default.
- W2625813776 citedByCount "1" @default.
- W2625813776 crossrefType "journal-article" @default.
- W2625813776 hasAuthorship W2625813776A5076699095 @default.
- W2625813776 hasAuthorship W2625813776A5084197706 @default.
- W2625813776 hasAuthorship W2625813776A5086225225 @default.
- W2625813776 hasConcept C103278499 @default.
- W2625813776 hasConcept C115961682 @default.
- W2625813776 hasConcept C119857082 @default.
- W2625813776 hasConcept C153180895 @default.
- W2625813776 hasConcept C154945302 @default.
- W2625813776 hasConcept C15744967 @default.
- W2625813776 hasConcept C158495155 @default.
- W2625813776 hasConcept C23123220 @default.
- W2625813776 hasConcept C2781238097 @default.
- W2625813776 hasConcept C41008148 @default.
- W2625813776 hasConcept C5274069 @default.
- W2625813776 hasConceptScore W2625813776C103278499 @default.
- W2625813776 hasConceptScore W2625813776C115961682 @default.
- W2625813776 hasConceptScore W2625813776C119857082 @default.
- W2625813776 hasConceptScore W2625813776C153180895 @default.
- W2625813776 hasConceptScore W2625813776C154945302 @default.
- W2625813776 hasConceptScore W2625813776C15744967 @default.
- W2625813776 hasConceptScore W2625813776C158495155 @default.
- W2625813776 hasConceptScore W2625813776C23123220 @default.
- W2625813776 hasConceptScore W2625813776C2781238097 @default.
- W2625813776 hasConceptScore W2625813776C41008148 @default.
- W2625813776 hasConceptScore W2625813776C5274069 @default.
- W2625813776 hasIssue "32" @default.
- W2625813776 hasLocation W26258137761 @default.
- W2625813776 hasOpenAccess W2625813776 @default.
- W2625813776 hasPrimaryLocation W26258137761 @default.
- W2625813776 hasRelatedWork W1971065734 @default.
- W2625813776 hasRelatedWork W2008647399 @default.
- W2625813776 hasRelatedWork W2011631075 @default.
- W2625813776 hasRelatedWork W2027626284 @default.
- W2625813776 hasRelatedWork W2053498743 @default.
- W2625813776 hasRelatedWork W2056551713 @default.
- W2625813776 hasRelatedWork W2073124229 @default.
- W2625813776 hasRelatedWork W2108808672 @default.
- W2625813776 hasRelatedWork W2118540298 @default.
- W2625813776 hasRelatedWork W2130738254 @default.
- W2625813776 hasRelatedWork W2331338586 @default.
- W2625813776 hasRelatedWork W241051229 @default.
- W2625813776 hasRelatedWork W2511782054 @default.
- W2625813776 hasRelatedWork W2598835163 @default.
- W2625813776 hasRelatedWork W2624888890 @default.
- W2625813776 hasRelatedWork W2892537778 @default.
- W2625813776 hasRelatedWork W2997060121 @default.
- W2625813776 hasRelatedWork W3011114493 @default.
- W2625813776 hasRelatedWork W3198283801 @default.
- W2625813776 hasRelatedWork W2771793476 @default.
- W2625813776 hasVolume "32" @default.
- W2625813776 isParatext "false" @default.
- W2625813776 isRetracted "false" @default.
- W2625813776 magId "2625813776" @default.
- W2625813776 workType "article" @default.
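Most objects in the listing (creator, hasConcept, hasRelatedWork) are opaque identifiers. A follow-up query along the lines sketched below could resolve them to human-readable labels; note that the soa: namespace and the label properties foaf:name and skos:prefLabel are assumptions about the SemOpenAlex schema, not facts shown in this listing:

```sparql
# Hedged sketch: resolve creator and concept IDs from the listing to labels.
# The soa: namespace and both label properties are assumptions; verify them
# against the live SemOpenAlex ontology before relying on this query.
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
PREFIX skos:    <http://www.w3.org/2004/02/skos/core#>
PREFIX soa:     <https://semopenalex.org/ontology/>

SELECT ?entity ?label
WHERE {
  {
    # Authors: e.g., A5076699095 from the creator lines above
    <https://semopenalex.org/work/W2625813776> dcterms:creator ?entity .
    ?entity foaf:name ?label .
  }
  UNION
  {
    # Concepts: e.g., C41008148 from the hasConcept lines above
    <https://semopenalex.org/work/W2625813776> soa:hasConcept ?entity .
    ?entity skos:prefLabel ?label .
  }
}
```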