Matches in SemOpenAlex for { <https://semopenalex.org/work/W2767165505> ?p ?o ?g. }
- W2767165505 endingPage "399" @default.
- W2767165505 startingPage "387" @default.
- W2767165505 abstract "Estimating the focus of attention of a person looking at an image or a video is a crucial step which can enhance many vision-based inference mechanisms: image segmentation and annotation, video captioning, and autonomous driving are some examples. The early stages of attentive behavior are typically bottom-up; reproducing the same mechanism means finding the saliency embodied in the images, i.e., which parts of an image pop out of a visual scene. This process has been studied for decades in neuroscience and in terms of computational models for reproducing the human cortical process. In the last few years, early models have been replaced by deep learning architectures that outperform any early approach when compared on public datasets. In this paper, we propose a discussion on why convolutional neural networks (CNNs) are so accurate in saliency prediction. We present our DL architectures, which combine both bottom-up cues and higher-level semantics, and incorporate the concept of time in the attentional process through LSTM recurrent architectures. Eventually, we present a video-specific architecture based on the C3D network, which can extract spatio-temporal features by means of 3D convolutions to model task-driven attentive behaviors. The merit of this work is to show how these deep networks are not mere brute-force methods tuned on massive amounts of data, but represent well-defined architectures which recall very closely the early saliency models, although improved with the semantics learned from human ground-truth." @default.
- W2767165505 created "2017-11-10" @default.
- W2767165505 creator A5004151075 @default.
- W2767165505 creator A5026148757 @default.
- W2767165505 creator A5030948871 @default.
- W2767165505 creator A5048928616 @default.
- W2767165505 creator A5066519737 @default.
- W2767165505 creator A5075481810 @default.
- W2767165505 date "2017-01-01" @default.
- W2767165505 modified "2023-10-17" @default.
- W2767165505 title "Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era" @default.
- W2767165505 cites W1497599070 @default.
- W2767165505 cites W1510835000 @default.
- W2767165505 cites W1522734439 @default.
- W2767165505 cites W1849277567 @default.
- W2767165505 cites W1934890906 @default.
- W2767165505 cites W1954128991 @default.
- W2767165505 cites W1965301399 @default.
- W2767165505 cites W1978479866 @default.
- W2767165505 cites W2033859430 @default.
- W2767165505 cites W2071555787 @default.
- W2767165505 cites W2078903912 @default.
- W2767165505 cites W2110019070 @default.
- W2767165505 cites W2128272608 @default.
- W2767165505 cites W2135957164 @default.
- W2767165505 cites W2138046011 @default.
- W2767165505 cites W2144764737 @default.
- W2767165505 cites W2149095485 @default.
- W2767165505 cites W2152752164 @default.
- W2767165505 cites W2194775991 @default.
- W2767165505 cites W2212216676 @default.
- W2767165505 cites W2288514685 @default.
- W2767165505 cites W2295598507 @default.
- W2767165505 cites W2442293398 @default.
- W2767165505 cites W2474210745 @default.
- W2767165505 cites W2547191090 @default.
- W2767165505 cites W2614855966 @default.
- W2767165505 cites W2751076261 @default.
- W2767165505 cites W2963828885 @default.
- W2767165505 cites W3098682680 @default.
- W2767165505 cites W3101840568 @default.
- W2767165505 doi "https://doi.org/10.1007/978-3-319-70169-1_29" @default.
- W2767165505 hasPublicationYear "2017" @default.
- W2767165505 type Work @default.
- W2767165505 sameAs 2767165505 @default.
- W2767165505 citedByCount "1" @default.
- W2767165505 countsByYear W27671655052019 @default.
- W2767165505 crossrefType "book-chapter" @default.
- W2767165505 hasAuthorship W2767165505A5004151075 @default.
- W2767165505 hasAuthorship W2767165505A5026148757 @default.
- W2767165505 hasAuthorship W2767165505A5030948871 @default.
- W2767165505 hasAuthorship W2767165505A5048928616 @default.
- W2767165505 hasAuthorship W2767165505A5066519737 @default.
- W2767165505 hasAuthorship W2767165505A5075481810 @default.
- W2767165505 hasBestOaLocation W27671655052 @default.
- W2767165505 hasConcept C108583219 @default.
- W2767165505 hasConcept C111919701 @default.
- W2767165505 hasConcept C115961682 @default.
- W2767165505 hasConcept C119857082 @default.
- W2767165505 hasConcept C120665830 @default.
- W2767165505 hasConcept C121332964 @default.
- W2767165505 hasConcept C146849305 @default.
- W2767165505 hasConcept C154945302 @default.
- W2767165505 hasConcept C157657479 @default.
- W2767165505 hasConcept C162324750 @default.
- W2767165505 hasConcept C184337299 @default.
- W2767165505 hasConcept C187736073 @default.
- W2767165505 hasConcept C192209626 @default.
- W2767165505 hasConcept C199360897 @default.
- W2767165505 hasConcept C2776214188 @default.
- W2767165505 hasConcept C2780451532 @default.
- W2767165505 hasConcept C41008148 @default.
- W2767165505 hasConcept C81363708 @default.
- W2767165505 hasConcept C89600930 @default.
- W2767165505 hasConcept C98045186 @default.
- W2767165505 hasConceptScore W2767165505C108583219 @default.
- W2767165505 hasConceptScore W2767165505C111919701 @default.
- W2767165505 hasConceptScore W2767165505C115961682 @default.
- W2767165505 hasConceptScore W2767165505C119857082 @default.
- W2767165505 hasConceptScore W2767165505C120665830 @default.
- W2767165505 hasConceptScore W2767165505C121332964 @default.
- W2767165505 hasConceptScore W2767165505C146849305 @default.
- W2767165505 hasConceptScore W2767165505C154945302 @default.
- W2767165505 hasConceptScore W2767165505C157657479 @default.
- W2767165505 hasConceptScore W2767165505C162324750 @default.
- W2767165505 hasConceptScore W2767165505C184337299 @default.
- W2767165505 hasConceptScore W2767165505C187736073 @default.
- W2767165505 hasConceptScore W2767165505C192209626 @default.
- W2767165505 hasConceptScore W2767165505C199360897 @default.
- W2767165505 hasConceptScore W2767165505C2776214188 @default.
- W2767165505 hasConceptScore W2767165505C2780451532 @default.
- W2767165505 hasConceptScore W2767165505C41008148 @default.
- W2767165505 hasConceptScore W2767165505C81363708 @default.
- W2767165505 hasConceptScore W2767165505C89600930 @default.
- W2767165505 hasConceptScore W2767165505C98045186 @default.
- W2767165505 hasLocation W27671655051 @default.
- W2767165505 hasLocation W27671655052 @default.
- W2767165505 hasOpenAccess W2767165505 @default.
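The triples above match the pattern `{ <https://semopenalex.org/work/W2767165505> ?p ?o ?g. }`. A listing like this can be reproduced programmatically by querying a SPARQL endpoint. The sketch below uses only the Python standard library; the endpoint URL `https://semopenalex.org/sparql` is an assumption (verify it against the SemOpenAlex documentation before use), and the graph variable `?g` is dropped for simplicity:

```python
import urllib.parse
import urllib.request

# Assumed public SPARQL endpoint for SemOpenAlex (verify before use).
ENDPOINT = "https://semopenalex.org/sparql"


def build_query(work_iri: str) -> str:
    """Build a SPARQL query listing every predicate/object pair
    for one work, mirroring the { <work> ?p ?o . } pattern above."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"


def fetch_triples(work_iri: str) -> bytes:
    """Send the query to the endpoint and return the raw JSON response."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": build_query(work_iri)})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Print the query for the work shown in this listing.
    print(build_query("https://semopenalex.org/work/W2767165505"))
```

Each binding in the JSON result corresponds to one line of the listing above (e.g. `?p` = `cites`, `?o` = `W1497599070`).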