Matches in SemOpenAlex for { <https://semopenalex.org/work/W2951393419> ?p ?o ?g. }
- W2951393419 abstract "As an agent moves through the world, the apparent motion of scene elements is (usually) inversely proportional to their depth. It is natural for a learning agent to associate image patterns with the magnitude of their displacement over time: as the agent moves, faraway mountains don't move much; nearby trees move a lot. This natural relationship between the appearance of objects and their motion is a rich source of information about the world. In this work, we start by training a deep network, using fully automatic supervision, to predict relative scene depth from single images. The relative depth training images are automatically derived from simple videos of cars moving through a scene, using recent motion segmentation techniques, and no human-provided labels. This proxy task of predicting relative depth from a single image induces features in the network that result in large improvements in a set of downstream tasks including semantic segmentation, joint road segmentation and car detection, and monocular (absolute) depth estimation, over a network trained from scratch. The improvement on the semantic segmentation task is greater than those produced by any other automatically supervised methods. Moreover, for monocular depth estimation, our unsupervised pre-training method even outperforms supervised pre-training with ImageNet. In addition, we demonstrate benefits from learning to predict (unsupervised) relative depth in the specific videos associated with various downstream tasks. We adapt to the specific scenes in those tasks in an unsupervised manner to improve performance. In summary, for semantic segmentation, we present state-of-the-art results among methods that do not use supervised pre-training, and we even exceed the performance of supervised ImageNet pre-trained models for monocular depth estimation, achieving results that are comparable with state-of-the-art methods." @default.
- W2951393419 created "2019-06-27" @default.
- W2951393419 creator A5001371942 @default.
- W2951393419 creator A5005929580 @default.
- W2951393419 creator A5009207422 @default.
- W2951393419 creator A5045674062 @default.
- W2951393419 creator A5086441051 @default.
- W2951393419 date "2017-12-13" @default.
- W2951393419 modified "2023-10-17" @default.
- W2951393419 title "Self-Supervised Relative Depth Learning for Urban Scene Understanding" @default.
- W2951393419 cites W1568514080 @default.
- W2951393419 cites W1686810756 @default.
- W2951393419 cites W1901129140 @default.
- W2951393419 cites W1903029394 @default.
- W2951393419 cites W1913356549 @default.
- W2951393419 cites W1976047850 @default.
- W2951393419 cites W2067107771 @default.
- W2951393419 cites W2108598243 @default.
- W2951393419 cites W2115579991 @default.
- W2951393419 cites W2116435618 @default.
- W2951393419 cites W2145038566 @default.
- W2951393419 cites W2146444479 @default.
- W2951393419 cites W2150066425 @default.
- W2951393419 cites W2171740948 @default.
- W2951393419 cites W2173520492 @default.
- W2951393419 cites W2175760607 @default.
- W2951393419 cites W2178768799 @default.
- W2951393419 cites W2198618282 @default.
- W2951393419 cites W2248556341 @default.
- W2951393419 cites W2285336231 @default.
- W2951393419 cites W2300779272 @default.
- W2951393419 cites W2321533354 @default.
- W2951393419 cites W2340897893 @default.
- W2951393419 cites W2401640538 @default.
- W2951393419 cites W2412320034 @default.
- W2951393419 cites W2461677039 @default.
- W2951393419 cites W2470475590 @default.
- W2951393419 cites W2511428026 @default.
- W2951393419 cites W2520707372 @default.
- W2951393419 cites W2609883120 @default.
- W2951393419 cites W2738028674 @default.
- W2951393419 cites W2743157634 @default.
- W2951393419 cites W2750912449 @default.
- W2951393419 cites W2911709767 @default.
- W2951393419 cites W2949099979 @default.
- W2951393419 cites W2949650786 @default.
- W2951393419 cites W2949891561 @default.
- W2951393419 cites W2950064337 @default.
- W2951393419 cites W2950714698 @default.
- W2951393419 cites W2951333975 @default.
- W2951393419 cites W2951590555 @default.
- W2951393419 cites W2951916398 @default.
- W2951393419 cites W2953259386 @default.
- W2951393419 cites W2962958090 @default.
- W2951393419 cites W2964037671 @default.
- W2951393419 cites W2964121744 @default.
- W2951393419 cites W343636949 @default.
- W2951393419 doi "https://doi.org/10.48550/arxiv.1712.04850" @default.
- W2951393419 hasPublicationYear "2017" @default.
- W2951393419 type Work @default.
- W2951393419 sameAs 2951393419 @default.
- W2951393419 citedByCount "6" @default.
- W2951393419 countsByYear W29513934192018 @default.
- W2951393419 countsByYear W29513934192019 @default.
- W2951393419 countsByYear W29513934192020 @default.
- W2951393419 countsByYear W29513934192021 @default.
- W2951393419 crossrefType "posted-content" @default.
- W2951393419 hasAuthorship W2951393419A5001371942 @default.
- W2951393419 hasAuthorship W2951393419A5005929580 @default.
- W2951393419 hasAuthorship W2951393419A5009207422 @default.
- W2951393419 hasAuthorship W2951393419A5045674062 @default.
- W2951393419 hasAuthorship W2951393419A5086441051 @default.
- W2951393419 hasBestOaLocation W29513934191 @default.
- W2951393419 hasConcept C104114177 @default.
- W2951393419 hasConcept C119857082 @default.
- W2951393419 hasConcept C124504099 @default.
- W2951393419 hasConcept C136389625 @default.
- W2951393419 hasConcept C153180895 @default.
- W2951393419 hasConcept C154945302 @default.
- W2951393419 hasConcept C162324750 @default.
- W2951393419 hasConcept C187736073 @default.
- W2951393419 hasConcept C2780451532 @default.
- W2951393419 hasConcept C31972630 @default.
- W2951393419 hasConcept C41008148 @default.
- W2951393419 hasConcept C50644808 @default.
- W2951393419 hasConcept C65909025 @default.
- W2951393419 hasConcept C89600930 @default.
- W2951393419 hasConceptScore W2951393419C104114177 @default.
- W2951393419 hasConceptScore W2951393419C119857082 @default.
- W2951393419 hasConceptScore W2951393419C124504099 @default.
- W2951393419 hasConceptScore W2951393419C136389625 @default.
- W2951393419 hasConceptScore W2951393419C153180895 @default.
- W2951393419 hasConceptScore W2951393419C154945302 @default.
- W2951393419 hasConceptScore W2951393419C162324750 @default.
- W2951393419 hasConceptScore W2951393419C187736073 @default.
- W2951393419 hasConceptScore W2951393419C2780451532 @default.
- W2951393419 hasConceptScore W2951393419C31972630 @default.
- W2951393419 hasConceptScore W2951393419C41008148 @default.
- W2951393419 hasConceptScore W2951393419C50644808 @default.
- W2951393419 hasConceptScore W2951393419C65909025 @default.
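
The triple pattern in the header can be issued as a complete SPARQL query to reproduce this listing. A minimal sketch — the endpoint URL `https://semopenalex.org/sparql` and the use of `GRAPH` to bind the fourth term `?g` are assumptions, not confirmed by the dump itself:

```sparql
# Fetch every predicate/object pair for work W2951393419,
# along with the named graph each triple belongs to.
# Endpoint (assumed): https://semopenalex.org/sparql
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2951393419> ?p ?o .
  }
}
```

Each result row corresponds to one bullet line above (e.g. `?p = title`, `?o = "Self-Supervised Relative Depth Learning for Urban Scene Understanding"`, `?g = @default`).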