Matches in SemOpenAlex for { <https://semopenalex.org/work/W2892322385> ?p ?o ?g. }
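The quad pattern above can be issued directly against a SPARQL endpoint. Below is a minimal sketch in Python, assuming the public SemOpenAlex endpoint at https://semopenalex.org/sparql and the standard SPARQL protocol with JSON results; the endpoint URL and its availability are assumptions, not part of the listing.

```python
# Minimal sketch: fetch the ?p ?o ?g matches for W2892322385 from the
# SemOpenAlex SPARQL endpoint (endpoint URL assumed, not taken from the listing).
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W2892322385> ?p ?o . }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print predicate/object pairs, mirroring the listing below.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```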
- W2892322385 endingPage "1435" @default.
- W2892322385 startingPage "1435" @default.
- W2892322385 abstract "Urban environments are regions in which spectral variability and spatial variability are extremely high, with a huge range of shapes and sizes, and they also demand high resolution images for applications involving their study. Due to the fact that these environments can grow even more over time, applications related to their monitoring tend to turn to autonomous intelligent systems, which together with remote sensing data could help or even predict daily life situations. The task of mapping cities by autonomous operators was usually carried out by aerial optical images due to its scale and resolution; however new scientific questions have arisen, and this has led research into a new era of highly-detailed data extraction. For many years, using artificial neural models to solve complex problems such as automatic image classification was commonplace, owing much of their popularity to their ability to adapt to complex situations without needing human intervention. In spite of that, their popularity declined in the mid-2000s, mostly due to the complex and time-consuming nature of their methods and workflows. However, newer neural network architectures have brought back the interest in their application for autonomous classifiers, especially for image classification purposes. Convolutional Neural Networks (CNN) have been a trend for pixel-wise image segmentation, showing flexibility when detecting and classifying any kind of object, even in situations where humans failed to perceive differences, such as in city scenarios. In this paper, we aim to explore and experiment with state-of-the-art technologies to semantically label 3D urban models over complex scenarios. To achieve these goals, we split the problem into two main processing lines: first, how to correctly label the façade features in the 2D domain, where a supervised CNN is used to segment ground-based façade images into six feature classes, roof, window, wall, door, balcony and shop; second, a Structure-from-Motion (SfM) and Multi-View-Stereo (MVS) workflow is used to extract the geometry of the façade, wherein the segmented images in the previous stage are then used to label the generated mesh by a “reverse” ray-tracing technique. This paper demonstrates that the proposed methodology is robust in complex scenarios. The façade feature inferences have reached up to 93% accuracy over most of the datasets used. Although it still presents some deficiencies in unknown architectural styles and needs some improvements to be made regarding 3D-labeling, we present a consistent and simple methodology to handle the problem." @default.
- W2892322385 created "2018-09-27" @default.
- W2892322385 creator A5004335574 @default.
- W2892322385 creator A5028124822 @default.
- W2892322385 creator A5049647675 @default.
- W2892322385 creator A5051153987 @default.
- W2892322385 creator A5077729833 @default.
- W2892322385 date "2018-09-08" @default.
- W2892322385 modified "2023-10-16" @default.
- W2892322385 title "3D Façade Labeling over Complex Scenarios: A Case Study Using Convolutional Neural Network and Structure-From-Motion" @default.
- W2892322385 cites W1132527629 @default.
- W2892322385 cites W142267735 @default.
- W2892322385 cites W1578166233 @default.
- W2892322385 cites W1972248659 @default.
- W2892322385 cites W1988753175 @default.
- W2892322385 cites W1989276535 @default.
- W2892322385 cites W1995341919 @default.
- W2892322385 cites W2003548021 @default.
- W2892322385 cites W2019709272 @default.
- W2892322385 cites W2022605644 @default.
- W2892322385 cites W2029316659 @default.
- W2892322385 cites W2037227137 @default.
- W2892322385 cites W2048615644 @default.
- W2892322385 cites W2050195189 @default.
- W2892322385 cites W2053380473 @default.
- W2892322385 cites W2058152242 @default.
- W2892322385 cites W2061490448 @default.
- W2892322385 cites W2066483145 @default.
- W2892322385 cites W2067191022 @default.
- W2892322385 cites W2073581653 @default.
- W2892322385 cites W2076159989 @default.
- W2892322385 cites W2077029830 @default.
- W2892322385 cites W2077264955 @default.
- W2892322385 cites W2082661999 @default.
- W2892322385 cites W2092638400 @default.
- W2892322385 cites W2101309634 @default.
- W2892322385 cites W2104095591 @default.
- W2892322385 cites W2112825566 @default.
- W2892322385 cites W2117731089 @default.
- W2892322385 cites W2118246710 @default.
- W2892322385 cites W2121947440 @default.
- W2892322385 cites W2124475118 @default.
- W2892322385 cites W2127310338 @default.
- W2892322385 cites W2129038857 @default.
- W2892322385 cites W2129259959 @default.
- W2892322385 cites W2147800946 @default.
- W2892322385 cites W2156598602 @default.
- W2892322385 cites W2181926627 @default.
- W2892322385 cites W2215173806 @default.
- W2892322385 cites W2248723555 @default.
- W2892322385 cites W2408830289 @default.
- W2892322385 cites W2410591237 @default.
- W2892322385 cites W2605212484 @default.
- W2892322385 cites W2919115771 @default.
- W2892322385 cites W2963881378 @default.
- W2892322385 cites W2964017310 @default.
- W2892322385 cites W3015358563 @default.
- W2892322385 cites W418214960 @default.
- W2892322385 cites W4231582187 @default.
- W2892322385 cites W809478816 @default.
- W2892322385 doi "https://doi.org/10.3390/rs10091435" @default.
- W2892322385 hasPublicationYear "2018" @default.
- W2892322385 type Work @default.
- W2892322385 sameAs 2892322385 @default.
- W2892322385 citedByCount "18" @default.
- W2892322385 countsByYear W28923223852019 @default.
- W2892322385 countsByYear W28923223852020 @default.
- W2892322385 countsByYear W28923223852021 @default.
- W2892322385 countsByYear W28923223852022 @default.
- W2892322385 countsByYear W28923223852023 @default.
- W2892322385 crossrefType "journal-article" @default.
- W2892322385 hasAuthorship W2892322385A5004335574 @default.
- W2892322385 hasAuthorship W2892322385A5028124822 @default.
- W2892322385 hasAuthorship W2892322385A5049647675 @default.
- W2892322385 hasAuthorship W2892322385A5051153987 @default.
- W2892322385 hasAuthorship W2892322385A5077729833 @default.
- W2892322385 hasBestOaLocation W28923223851 @default.
- W2892322385 hasConcept C105795698 @default.
- W2892322385 hasConcept C119857082 @default.
- W2892322385 hasConcept C153180895 @default.
- W2892322385 hasConcept C154945302 @default.
- W2892322385 hasConcept C15744967 @default.
- W2892322385 hasConcept C162324750 @default.
- W2892322385 hasConcept C187736073 @default.
- W2892322385 hasConcept C2780451532 @default.
- W2892322385 hasConcept C2780586970 @default.
- W2892322385 hasConcept C2780598303 @default.
- W2892322385 hasConcept C33923547 @default.
- W2892322385 hasConcept C41008148 @default.
- W2892322385 hasConcept C77805123 @default.
- W2892322385 hasConcept C81363708 @default.
- W2892322385 hasConcept C89600930 @default.
- W2892322385 hasConceptScore W2892322385C105795698 @default.
- W2892322385 hasConceptScore W2892322385C119857082 @default.
- W2892322385 hasConceptScore W2892322385C153180895 @default.
- W2892322385 hasConceptScore W2892322385C154945302 @default.
- W2892322385 hasConceptScore W2892322385C15744967 @default.
- W2892322385 hasConceptScore W2892322385C162324750 @default.
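The abstract above outlines a two-stage workflow: 2D CNN segmentation of façade images, followed by transferring those labels onto an SfM/MVS mesh via a “reverse” ray-tracing step. The sketch below is not the authors' implementation; it only illustrates the general idea under simplifying assumptions (a pinhole camera model, no occlusion test, and hypothetical names such as K, R, t, label_map, and centroids), assigning to each mesh-face centroid the class of the pixel it projects to.

```python
# Hedged sketch (not the paper's code): project mesh-face centroids into a
# calibrated, segmented façade image and copy the per-pixel class label,
# in the spirit of the "reverse" ray-tracing labeling described in the abstract.
# K, R, t, label_map, and centroids are illustrative assumptions.
import numpy as np

def project_points(points_w, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model."""
    points_c = (R @ points_w.T + t.reshape(3, 1)).T      # world -> camera frame
    in_front = points_c[:, 2] > 0                        # keep points ahead of the camera
    uv_h = (K @ points_c.T).T                            # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, in_front

def label_faces(centroids, label_map, K, R, t, unknown=-1):
    """Return one class id per face centroid, or `unknown` if it falls outside the image."""
    h, w = label_map.shape
    uv, in_front = project_points(centroids, K, R, t)
    labels = np.full(len(centroids), unknown, dtype=int)
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    visible = in_front & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    labels[visible] = label_map[rows[visible], cols[visible]]
    return labels

# Toy usage with synthetic data: four face centroids in front of a camera at the origin.
if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    centroids = np.array([[0.0, 0.0, 5.0], [0.2, 0.1, 5.0], [5.0, 5.0, 5.0], [0.0, 0.0, -1.0]])
    label_map = np.full((480, 640), 2, dtype=int)        # e.g. class 2 = "wall"
    print(label_faces(centroids, label_map, K, R, t))    # -> [2 2 -1 -1]
```

A full pipeline would additionally need occlusion handling (true ray casting against the mesh) and fusion of labels from multiple views, for example by majority vote per face; the sketch omits both.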