Matches in SemOpenAlex for { <https://semopenalex.org/work/W2898097011> ?p ?o ?g. }
- W2898097011 endingPage "2895" @default.
- W2898097011 startingPage "2884" @default.
- W2898097011 abstract "In computer vision, tracking humans across camera views remains challenging, especially in complex scenarios with frequent occlusions, significant lighting changes, and other difficulties. Under such conditions, most existing appearance and geometric cues are not reliable enough to distinguish humans across camera views. To address these challenges, this paper presents a stochastic attribute grammar model that leverages complementary and discriminative human attributes to enhance cross-view tracking. The key idea of our method is to introduce a hierarchical representation, a parse graph, to describe a subject and its movement trajectory in both the space and time domains. This results in a hierarchical compositional representation comprising trajectory entities of varying levels, including human boxes, 3D human boxes, tracklets, and trajectories. We use a set of grammar rules to decompose a graph node (e.g., a tracklet) into a set of children nodes (e.g., 3D human boxes), and augment each node with a set of attributes, including geometry (e.g., moving speed and direction), accessories (e.g., bags), and/or activities (e.g., walking and running). These attributes serve as valuable cues, in addition to appearance features (e.g., colors), in determining the associations of human detection boxes across cameras. In particular, the attributes of a parent node are inherited by its children nodes, resulting in consistency constraints over the feasible parse graph. Thus, we cast cross-view human tracking as finding the most discriminative parse graph for each subject in videos. We develop a learning method to train this attribute grammar model from weakly supervised training data. To infer the optimal parse graph and its attributes, we develop an alternative parsing method that employs both top-down and bottom-up computations to search for the optimal solution.
We also explicitly reason about the occlusion status of each entity in order to deal with significant changes of camera viewpoints. We evaluate the proposed method on public video benchmarks, and demonstrate with extensive experiments that our method clearly outperforms state-of-the-art tracking methods." @default.
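The abstract's hierarchical parse graph (trajectory → tracklet → 3D human box → human box) with parent-to-child attribute inheritance can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: the `Node` class, its attribute-merging rule, and the consistency check are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a parse-graph node. Each node carries attributes
# (geometry, accessories, activities); children inherit the parent's
# attributes, which yields the consistency constraints the abstract mentions.

@dataclass
class Node:
    level: str                                   # e.g. "trajectory", "tracklet", "3d_box"
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def add_child(self, child: "Node") -> "Node":
        # Child inherits parent attributes; the child's own entries win,
        # so a contradictory child value survives and is caught below.
        child.attributes = {**self.attributes, **child.attributes}
        self.children.append(child)
        return child

    def is_consistent(self) -> bool:
        # Every child must agree with its parent on all shared attribute keys,
        # recursively down the hierarchy.
        for c in self.children:
            if any(c.attributes.get(k) != v for k, v in self.attributes.items()):
                return False
            if not c.is_consistent():
                return False
        return True

traj = Node("trajectory", {"activity": "walking", "accessory": "bag"})
tracklet = traj.add_child(Node("tracklet", {"speed": 1.3}))
tracklet.add_child(Node("3d_box"))
print(traj.is_consistent())  # True: children only refine, never contradict
```

In the paper's formulation, tracking then amounts to searching over feasible parse graphs (those passing such consistency constraints) for the most discriminative one per subject.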
- W2898097011 created "2018-11-02" @default.
- W2898097011 creator A5005665145 @default.
- W2898097011 creator A5028877572 @default.
- W2898097011 creator A5051630734 @default.
- W2898097011 creator A5076498350 @default.
- W2898097011 date "2018-10-01" @default.
- W2898097011 modified "2023-10-02" @default.
- W2898097011 title "A Stochastic Attribute Grammar for Robust Cross-View Human Tracking" @default.
- W2898097011 cites W1506491340 @default.
- W2898097011 cites W1539154803 @default.
- W2898097011 cites W1857884451 @default.
- W2898097011 cites W1912967058 @default.
- W2898097011 cites W1987389530 @default.
- W2898097011 cites W1995903777 @default.
- W2898097011 cites W1997310483 @default.
- W2898097011 cites W2013366982 @default.
- W2898097011 cites W2016135469 @default.
- W2898097011 cites W2020762599 @default.
- W2898097011 cites W2041875597 @default.
- W2898097011 cites W2051588547 @default.
- W2898097011 cites W2053744956 @default.
- W2898097011 cites W2089961441 @default.
- W2898097011 cites W2098669323 @default.
- W2898097011 cites W2102734144 @default.
- W2898097011 cites W2102772283 @default.
- W2898097011 cites W2106110775 @default.
- W2898097011 cites W2108215708 @default.
- W2898097011 cites W2108788743 @default.
- W2898097011 cites W2111644456 @default.
- W2898097011 cites W2120272360 @default.
- W2898097011 cites W2121332494 @default.
- W2898097011 cites W2124541566 @default.
- W2898097011 cites W2124688298 @default.
- W2898097011 cites W2130258433 @default.
- W2898097011 cites W2134529534 @default.
- W2898097011 cites W2134809020 @default.
- W2898097011 cites W2151646056 @default.
- W2898097011 cites W2155893237 @default.
- W2898097011 cites W2156137575 @default.
- W2898097011 cites W2158634074 @default.
- W2898097011 cites W2161481633 @default.
- W2898097011 cites W2171243491 @default.
- W2898097011 cites W2201681750 @default.
- W2898097011 cites W2416798379 @default.
- W2898097011 cites W2473532709 @default.
- W2898097011 cites W2919115771 @default.
- W2898097011 cites W3101705353 @default.
- W2898097011 doi "https://doi.org/10.1109/tcsvt.2017.2781738" @default.
- W2898097011 hasPublicationYear "2018" @default.
- W2898097011 type Work @default.
- W2898097011 sameAs 2898097011 @default.
- W2898097011 citedByCount "12" @default.
- W2898097011 countsByYear W28980970112018 @default.
- W2898097011 countsByYear W28980970112019 @default.
- W2898097011 countsByYear W28980970112020 @default.
- W2898097011 countsByYear W28980970112021 @default.
- W2898097011 countsByYear W28980970112022 @default.
- W2898097011 countsByYear W28980970112023 @default.
- W2898097011 crossrefType "journal-article" @default.
- W2898097011 hasAuthorship W2898097011A5005665145 @default.
- W2898097011 hasAuthorship W2898097011A5028877572 @default.
- W2898097011 hasAuthorship W2898097011A5051630734 @default.
- W2898097011 hasAuthorship W2898097011A5076498350 @default.
- W2898097011 hasBestOaLocation W28980970111 @default.
- W2898097011 hasConcept C119857082 @default.
- W2898097011 hasConcept C127413603 @default.
- W2898097011 hasConcept C132525143 @default.
- W2898097011 hasConcept C138885662 @default.
- W2898097011 hasConcept C153180895 @default.
- W2898097011 hasConcept C154945302 @default.
- W2898097011 hasConcept C177264268 @default.
- W2898097011 hasConcept C186644900 @default.
- W2898097011 hasConcept C199360897 @default.
- W2898097011 hasConcept C204321447 @default.
- W2898097011 hasConcept C26022165 @default.
- W2898097011 hasConcept C31972630 @default.
- W2898097011 hasConcept C41008148 @default.
- W2898097011 hasConcept C41895202 @default.
- W2898097011 hasConcept C62611344 @default.
- W2898097011 hasConcept C66938386 @default.
- W2898097011 hasConcept C80444323 @default.
- W2898097011 hasConcept C97931131 @default.
- W2898097011 hasConceptScore W2898097011C119857082 @default.
- W2898097011 hasConceptScore W2898097011C127413603 @default.
- W2898097011 hasConceptScore W2898097011C132525143 @default.
- W2898097011 hasConceptScore W2898097011C138885662 @default.
- W2898097011 hasConceptScore W2898097011C153180895 @default.
- W2898097011 hasConceptScore W2898097011C154945302 @default.
- W2898097011 hasConceptScore W2898097011C177264268 @default.
- W2898097011 hasConceptScore W2898097011C186644900 @default.
- W2898097011 hasConceptScore W2898097011C199360897 @default.
- W2898097011 hasConceptScore W2898097011C204321447 @default.
- W2898097011 hasConceptScore W2898097011C26022165 @default.
- W2898097011 hasConceptScore W2898097011C31972630 @default.
- W2898097011 hasConceptScore W2898097011C41008148 @default.
- W2898097011 hasConceptScore W2898097011C41895202 @default.
- W2898097011 hasConceptScore W2898097011C62611344 @default.