Matches in SemOpenAlex for { <https://semopenalex.org/work/W2156303437> ?p ?o ?g. }
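The pattern above can be run against SemOpenAlex's public SPARQL endpoint to reproduce the triples listed below. Here is a minimal sketch using Python with SPARQLWrapper; the endpoint URL `https://semopenalex.org/sparql` is an assumption about the service layout, so verify it before relying on it. The `?g` position in the pattern above is the named graph, which would be matched with a `GRAPH ?g { ... }` clause if the store exposes it.

```python
# Minimal sketch: fetch all (predicate, object) pairs for this work from
# the SemOpenAlex SPARQL endpoint. The endpoint URL is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint location

query = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2156303437> ?p ?o .
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Each binding corresponds to one bullet line in the listing below.
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```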
- W2156303437 endingPage "576" @default.
- W2156303437 startingPage "568" @default.
- W2156303437 abstract "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification." @default.
- W2156303437 created "2016-06-24" @default.
- W2156303437 creator A5004625775 @default.
- W2156303437 creator A5057678172 @default.
- W2156303437 date "2014-12-08" @default.
- W2156303437 modified "2023-10-18" @default.
- W2156303437 title "Two-Stream Convolutional Networks for Action Recognition in Videos" @default.
- W2156303437 cites W136504859 @default.
- W2156303437 cites W1586730761 @default.
- W2156303437 cites W1595717062 @default.
- W2156303437 cites W1606858007 @default.
- W2156303437 cites W1867429401 @default.
- W2156303437 cites W1983364832 @default.
- W2156303437 cites W1993229407 @default.
- W2156303437 cites W1996904744 @default.
- W2156303437 cites W1999192586 @default.
- W2156303437 cites W2016053056 @default.
- W2156303437 cites W2082627290 @default.
- W2156303437 cites W2096691069 @default.
- W2156303437 cites W2105101328 @default.
- W2156303437 cites W2113221323 @default.
- W2156303437 cites W2117130368 @default.
- W2156303437 cites W2126574503 @default.
- W2156303437 cites W2126579184 @default.
- W2156303437 cites W2142194269 @default.
- W2156303437 cites W2147800946 @default.
- W2156303437 cites W2155893237 @default.
- W2156303437 cites W2157791002 @default.
- W2156303437 cites W2161969291 @default.
- W2156303437 cites W2163605009 @default.
- W2156303437 cites W2165146474 @default.
- W2156303437 cites W2187294194 @default.
- W2156303437 cites W2308045930 @default.
- W2156303437 cites W24089286 @default.
- W2156303437 cites W2951552696 @default.
- W2156303437 cites W2952186574 @default.
- W2156303437 cites W2963173190 @default.
- W2156303437 hasPublicationYear "2014" @default.
- W2156303437 type Work @default.
- W2156303437 sameAs 2156303437 @default.
- W2156303437 citedByCount "1333" @default.
- W2156303437 countsByYear W21563034372014 @default.
- W2156303437 countsByYear W21563034372015 @default.
- W2156303437 countsByYear W21563034372016 @default.
- W2156303437 countsByYear W21563034372017 @default.
- W2156303437 countsByYear W21563034372018 @default.
- W2156303437 countsByYear W21563034372019 @default.
- W2156303437 countsByYear W21563034372020 @default.
- W2156303437 countsByYear W21563034372021 @default.
- W2156303437 countsByYear W21563034372022 @default.
- W2156303437 countsByYear W21563034372023 @default.
- W2156303437 crossrefType "proceedings-article" @default.
- W2156303437 hasAuthorship W2156303437A5004625775 @default.
- W2156303437 hasAuthorship W2156303437A5057678172 @default.
- W2156303437 hasConcept C104114177 @default.
- W2156303437 hasConcept C108583219 @default.
- W2156303437 hasConcept C115961682 @default.
- W2156303437 hasConcept C119857082 @default.
- W2156303437 hasConcept C121332964 @default.
- W2156303437 hasConcept C126042441 @default.
- W2156303437 hasConcept C153180895 @default.
- W2156303437 hasConcept C154945302 @default.
- W2156303437 hasConcept C155542232 @default.
- W2156303437 hasConcept C2777212361 @default.
- W2156303437 hasConcept C2780791683 @default.
- W2156303437 hasConcept C2987834672 @default.
- W2156303437 hasConcept C41008148 @default.
- W2156303437 hasConcept C52622490 @default.
- W2156303437 hasConcept C62520636 @default.
- W2156303437 hasConcept C76155785 @default.
- W2156303437 hasConcept C774472 @default.
- W2156303437 hasConcept C81363708 @default.
- W2156303437 hasConceptScore W2156303437C104114177 @default.
- W2156303437 hasConceptScore W2156303437C108583219 @default.
- W2156303437 hasConceptScore W2156303437C115961682 @default.
- W2156303437 hasConceptScore W2156303437C119857082 @default.
- W2156303437 hasConceptScore W2156303437C121332964 @default.
- W2156303437 hasConceptScore W2156303437C126042441 @default.
- W2156303437 hasConceptScore W2156303437C153180895 @default.
- W2156303437 hasConceptScore W2156303437C154945302 @default.
- W2156303437 hasConceptScore W2156303437C155542232 @default.
- W2156303437 hasConceptScore W2156303437C2777212361 @default.
- W2156303437 hasConceptScore W2156303437C2780791683 @default.
- W2156303437 hasConceptScore W2156303437C2987834672 @default.
- W2156303437 hasConceptScore W2156303437C41008148 @default.
- W2156303437 hasConceptScore W2156303437C52622490 @default.
- W2156303437 hasConceptScore W2156303437C62520636 @default.
- W2156303437 hasConceptScore W2156303437C76155785 @default.
- W2156303437 hasConceptScore W2156303437C774472 @default.
- W2156303437 hasConceptScore W2156303437C81363708 @default.
- W2156303437 hasLocation W21563034371 @default.
- W2156303437 hasOpenAccess W2156303437 @default.
- W2156303437 hasPrimaryLocation W21563034371 @default.
- W2156303437 hasRelatedWork W1522734439 @default.
- W2156303437 hasRelatedWork W1686810756 @default.
- W2156303437 hasRelatedWork W1923404803 @default.
- W2156303437 hasRelatedWork W1947481528 @default.
- W2156303437 hasRelatedWork W1983364832 @default.
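The `abstract` triple above describes the paper's core idea: a spatial stream operating on still RGB frames and a temporal stream operating on stacked multi-frame dense optical flow, with their class scores combined by late fusion. The following is an illustrative PyTorch sketch of that structure only; the backbone, layer sizes, and score averaging as the fusion rule are simplifying assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch of a late-fusion two-stream network in the spirit of
# the abstract (Simonyan & Zisserman, 2014). All layer choices here are
# placeholder assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class StreamConvNet(nn.Module):
    """A small stand-in ConvNet; the paper uses a much deeper network."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class TwoStreamNet(nn.Module):
    """Spatial stream sees one RGB frame; temporal stream sees stacked
    optical flow (2 channels per flow frame, over `flow_frames` frames).
    Class scores are fused by averaging the two softmax outputs."""
    def __init__(self, num_classes: int = 101, flow_frames: int = 10):
        super().__init__()
        self.spatial = StreamConvNet(3, num_classes)
        self.temporal = StreamConvNet(2 * flow_frames, num_classes)

    def forward(self, rgb, flow):
        p_spatial = self.spatial(rgb).softmax(dim=1)
        p_temporal = self.temporal(flow).softmax(dim=1)
        return (p_spatial + p_temporal) / 2  # assumed late fusion by averaging

model = TwoStreamNet()
rgb = torch.randn(1, 3, 224, 224)    # one RGB frame
flow = torch.randn(1, 20, 224, 224)  # 10 flow frames x (x, y) channels
print(model(rgb, flow).shape)        # torch.Size([1, 101]) for UCF-101
```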