Matches in SemOpenAlex for { <https://semopenalex.org/work/W1145659577> ?p ?o ?g. }
- W1145659577 endingPage "151" @default.
- W1145659577 startingPage "129" @default.
- W1145659577 abstract "Systems for still-to-video face recognition (FR) seek to detect the presence of target individuals based on reference facial still images or mug-shots. These systems encounter several challenges in video surveillance applications due to variations in capture conditions (e.g., pose, scale, illumination, blur and expression) and to camera inter-operability. Beyond these issues, few reference stills are available during enrollment to design representative facial models of target individuals. Systems for still-to-video FR must therefore rely on adaptation, multiple face representation, or synthetic generation of reference stills to enhance the intra-class variability of face models. Moreover, many FR systems only match high quality faces captured in video, which further reduces the probability of detecting target individuals. Instead of matching faces captured through segmentation to reference stills, this paper exploits Adaptive Appearance Model Tracking (AAMT) to gradually learn a track-face-model for each individual appearing in the scene. The Sequential Karhunen–Loève technique is used for online learning of these track-face-models within a particle filter-based face tracker. Meanwhile, these models are matched over successive frames against the reference still images of each target individual enrolled to the system, and then matching scores are accumulated over several frames for robust spatiotemporal recognition. A target individual is recognized if scores accumulated for a track-face-model over a fixed time surpass some decision threshold. The main advantage of AAMT over traditional still-to-video FR systems is the greater diversity of facial representation that may be captured during operations, and this can lead to better discrimination for spatiotemporal recognition. Compared to state-of-the-art adaptive biometric systems, the proposed method selects facial captures to update an individual's face model more reliably because it relies on information from tracking. Simulation results obtained with the Chokepoint video dataset indicate that the proposed method provides a significantly higher level of performance compared to state-of-the-art systems when a single reference still per individual is available for matching. This higher level of performance is achieved when the diverse facial appearances that are captured in video through AAMT correspond to those of the reference stills." @default.
- W1145659577 created "2016-06-24" @default.
- W1145659577 creator A5002366419 @default.
- W1145659577 creator A5006937759 @default.
- W1145659577 creator A5026340083 @default.
- W1145659577 creator A5065359946 @default.
- W1145659577 creator A5089476507 @default.
- W1145659577 date "2016-01-01" @default.
- W1145659577 modified "2023-10-02" @default.
- W1145659577 title "Adaptive appearance model tracking for still-to-video face recognition" @default.
- W1145659577 cites W1481420047 @default.
- W1145659577 cites W1490414702 @default.
- W1145659577 cites W1545334487 @default.
- W1145659577 cites W1551955830 @default.
- W1145659577 cites W1567019178 @default.
- W1145659577 cites W1964435748 @default.
- W1145659577 cites W1970604134 @default.
- W1145659577 cites W1972888141 @default.
- W1145659577 cites W1978163866 @default.
- W1145659577 cites W1984285570 @default.
- W1145659577 cites W1985560977 @default.
- W1145659577 cites W2001947174 @default.
- W1145659577 cites W2020779941 @default.
- W1145659577 cites W2057640246 @default.
- W1145659577 cites W2058211319 @default.
- W1145659577 cites W2077795969 @default.
- W1145659577 cites W2079844951 @default.
- W1145659577 cites W2092131162 @default.
- W1145659577 cites W2100240926 @default.
- W1145659577 cites W2108767394 @default.
- W1145659577 cites W2110744759 @default.
- W1145659577 cites W2112695787 @default.
- W1145659577 cites W2113341759 @default.
- W1145659577 cites W2124211486 @default.
- W1145659577 cites W2125202975 @default.
- W1145659577 cites W2128835939 @default.
- W1145659577 cites W2130302792 @default.
- W1145659577 cites W2133140216 @default.
- W1145659577 cites W2133665775 @default.
- W1145659577 cites W2134658556 @default.
- W1145659577 cites W2136576757 @default.
- W1145659577 cites W213693017 @default.
- W1145659577 cites W2137724449 @default.
- W1145659577 cites W2139047213 @default.
- W1145659577 cites W2141585124 @default.
- W1145659577 cites W2149544470 @default.
- W1145659577 cites W2155511848 @default.
- W1145659577 cites W2157920973 @default.
- W1145659577 cites W2159686933 @default.
- W1145659577 cites W2168054893 @default.
- W1145659577 cites W2170793091 @default.
- W1145659577 cites W2171485467 @default.
- W1145659577 cites W2295907397 @default.
- W1145659577 cites W2613779721 @default.
- W1145659577 cites W2753461371 @default.
- W1145659577 cites W3097096317 @default.
- W1145659577 cites W4231340930 @default.
- W1145659577 cites W4253970541 @default.
- W1145659577 doi "https://doi.org/10.1016/j.patcog.2015.08.002" @default.
- W1145659577 hasPublicationYear "2016" @default.
- W1145659577 type Work @default.
- W1145659577 sameAs 1145659577 @default.
- W1145659577 citedByCount "52" @default.
- W1145659577 countsByYear W11456595772015 @default.
- W1145659577 countsByYear W11456595772016 @default.
- W1145659577 countsByYear W11456595772017 @default.
- W1145659577 countsByYear W11456595772018 @default.
- W1145659577 countsByYear W11456595772019 @default.
- W1145659577 countsByYear W11456595772020 @default.
- W1145659577 countsByYear W11456595772021 @default.
- W1145659577 countsByYear W11456595772022 @default.
- W1145659577 countsByYear W11456595772023 @default.
- W1145659577 crossrefType "journal-article" @default.
- W1145659577 hasAuthorship W1145659577A5002366419 @default.
- W1145659577 hasAuthorship W1145659577A5006937759 @default.
- W1145659577 hasAuthorship W1145659577A5026340083 @default.
- W1145659577 hasAuthorship W1145659577A5065359946 @default.
- W1145659577 hasAuthorship W1145659577A5089476507 @default.
- W1145659577 hasBestOaLocation W11456595772 @default.
- W1145659577 hasConcept C105795698 @default.
- W1145659577 hasConcept C106131492 @default.
- W1145659577 hasConcept C115961682 @default.
- W1145659577 hasConcept C144024400 @default.
- W1145659577 hasConcept C153180895 @default.
- W1145659577 hasConcept C154945302 @default.
- W1145659577 hasConcept C165064840 @default.
- W1145659577 hasConcept C2779304628 @default.
- W1145659577 hasConcept C31510193 @default.
- W1145659577 hasConcept C31972630 @default.
- W1145659577 hasConcept C33923547 @default.
- W1145659577 hasConcept C36289849 @default.
- W1145659577 hasConcept C41008148 @default.
- W1145659577 hasConcept C4641261 @default.
- W1145659577 hasConcept C52421305 @default.
- W1145659577 hasConcept C83248878 @default.
- W1145659577 hasConcept C88799230 @default.
- W1145659577 hasConcept C89600930 @default.
- W1145659577 hasConceptScore W1145659577C105795698 @default.