Matches in SemOpenAlex for { <https://semopenalex.org/work/W2611339630> ?p ?o ?g. }
Showing items 1 to 77 of 77, with 100 items per page.
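The listing below can be reproduced programmatically by issuing the same basic graph pattern as a SPARQL query. The following is a minimal sketch, assuming SemOpenAlex exposes a public SPARQL endpoint at https://semopenalex.org/sparql that returns standard SPARQL JSON results; the endpoint URL and the simplified triple pattern (dropping the named-graph variable ?g) are assumptions, not details taken from this page.

```python
# Minimal sketch: list all (?p, ?o) pairs for the work.
# The endpoint URL below is an assumption about where SemOpenAlex
# serves SPARQL; adjust if the service lives elsewhere.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2611339630> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```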
- W2611339630 abstract "Unmanned Aerial Vehicles are becoming increasingly popular for a broad variety of tasks ranging from aerial imagery to object delivery. As the areas where drones can be used efficiently expand, the risk of collision with other flying objects increases. Avoiding such collisions would be relatively easy if all aircraft in the neighboring airspace could communicate with each other and share their location information. However, it is often the case that either location information is unavailable (e.g. flying in GPS-denied environments) or communication is not possible (e.g. different communication channels or a non-cooperative flight scenario). To ensure flight safety in such situations, drones need a way to autonomously detect other objects intruding into the neighboring airspace. Vision-based collision avoidance is of particular interest, as cameras generally consume less power and are more lightweight than active-sensor alternatives such as radars and lasers. We have therefore developed a set of increasingly sophisticated algorithms to provide drones with a visual collision-avoidance capability. First, we present a novel method for detecting flying objects, such as drones and planes, that occupy a small part of the camera field of view, possibly move in front of complex backgrounds, and are filmed by a moving camera. Solving this problem requires combining motion and appearance information, as neither alone yields sufficiently reliable detections. We therefore propose a machine-learning technique that operates on spatio-temporal cubes of image intensities, in which individual patches are aligned using an object-centric, regression-based motion-stabilization algorithm. Second, to reduce the need to collect a large training dataset and annotate it manually, we introduce a way to generate realistic synthetic images. Given only a small set of real examples and a coarse 3D model of the object, synthetic data can be generated in arbitrary quantities and used to supplement the real examples when training a detector. The key ingredient of our method is that the synthetically generated images must be as close as possible to the real ones, not in terms of image quality but in terms of the features used by the machine-learning algorithm. Third, although the aforementioned approach yields a substantial increase in performance when using AdaBoost and DPM detectors, it does not generalize well to Convolutional Neural Networks, which have become the state of the art. This happens because, as more and more synthetic data is added, the CNNs begin to overfit to the synthetic images at the expense of the real ones. We therefore propose a novel deep domain-adaptation technique that efficiently combines real and synthetic images without overfitting to either. Whereas most adaptation techniques aim at learning features that are invariant to the differences between images coming from different sources (real and synthetic), we instead suggest modeling this difference with a special two-stream architecture. We evaluate our approach on three different datasets and show its effectiveness for various classification and regression tasks." @default.
- W2611339630 created "2017-05-12" @default.
- W2611339630 creator A5001320062 @default.
- W2611339630 date "2017-01-01" @default.
- W2611339630 modified "2023-09-23" @default.
- W2611339630 title "Vision-based detection of aircrafts and UAVs" @default.
- W2611339630 doi "https://doi.org/10.5075/epfl-thesis-7589" @default.
- W2611339630 hasPublicationYear "2017" @default.
- W2611339630 type Work @default.
- W2611339630 sameAs 2611339630 @default.
- W2611339630 citedByCount "1" @default.
- W2611339630 countsByYear W26113396302018 @default.
- W2611339630 crossrefType "journal-article" @default.
- W2611339630 hasAuthorship W2611339630A5001320062 @default.
- W2611339630 hasConcept C121704057 @default.
- W2611339630 hasConcept C127413603 @default.
- W2611339630 hasConcept C150627866 @default.
- W2611339630 hasConcept C154945302 @default.
- W2611339630 hasConcept C177264268 @default.
- W2611339630 hasConcept C199360897 @default.
- W2611339630 hasConcept C201995342 @default.
- W2611339630 hasConcept C2780451532 @default.
- W2611339630 hasConcept C2780864053 @default.
- W2611339630 hasConcept C31972630 @default.
- W2611339630 hasConcept C38652104 @default.
- W2611339630 hasConcept C41008148 @default.
- W2611339630 hasConcept C54355233 @default.
- W2611339630 hasConcept C59519942 @default.
- W2611339630 hasConcept C60229501 @default.
- W2611339630 hasConcept C76155785 @default.
- W2611339630 hasConcept C79403827 @default.
- W2611339630 hasConcept C86803240 @default.
- W2611339630 hasConceptScore W2611339630C121704057 @default.
- W2611339630 hasConceptScore W2611339630C127413603 @default.
- W2611339630 hasConceptScore W2611339630C150627866 @default.
- W2611339630 hasConceptScore W2611339630C154945302 @default.
- W2611339630 hasConceptScore W2611339630C177264268 @default.
- W2611339630 hasConceptScore W2611339630C199360897 @default.
- W2611339630 hasConceptScore W2611339630C201995342 @default.
- W2611339630 hasConceptScore W2611339630C2780451532 @default.
- W2611339630 hasConceptScore W2611339630C2780864053 @default.
- W2611339630 hasConceptScore W2611339630C31972630 @default.
- W2611339630 hasConceptScore W2611339630C38652104 @default.
- W2611339630 hasConceptScore W2611339630C41008148 @default.
- W2611339630 hasConceptScore W2611339630C54355233 @default.
- W2611339630 hasConceptScore W2611339630C59519942 @default.
- W2611339630 hasConceptScore W2611339630C60229501 @default.
- W2611339630 hasConceptScore W2611339630C76155785 @default.
- W2611339630 hasConceptScore W2611339630C79403827 @default.
- W2611339630 hasConceptScore W2611339630C86803240 @default.
- W2611339630 hasLocation W26113396301 @default.
- W2611339630 hasOpenAccess W2611339630 @default.
- W2611339630 hasPrimaryLocation W26113396301 @default.
- W2611339630 hasRelatedWork W1533201397 @default.
- W2611339630 hasRelatedWork W1568130538 @default.
- W2611339630 hasRelatedWork W194779095 @default.
- W2611339630 hasRelatedWork W2026765180 @default.
- W2611339630 hasRelatedWork W2156010463 @default.
- W2611339630 hasRelatedWork W2181901922 @default.
- W2611339630 hasRelatedWork W2294767871 @default.
- W2611339630 hasRelatedWork W2321473750 @default.
- W2611339630 hasRelatedWork W2518617602 @default.
- W2611339630 hasRelatedWork W2737011785 @default.
- W2611339630 hasRelatedWork W2741736538 @default.
- W2611339630 hasRelatedWork W2887835003 @default.
- W2611339630 hasRelatedWork W2896694677 @default.
- W2611339630 hasRelatedWork W2910198867 @default.
- W2611339630 hasRelatedWork W2994967340 @default.
- W2611339630 hasRelatedWork W3003772167 @default.
- W2611339630 hasRelatedWork W3023911160 @default.
- W2611339630 hasRelatedWork W3024046922 @default.
- W2611339630 hasRelatedWork W62142990 @default.
- W2611339630 hasRelatedWork W85318599 @default.
- W2611339630 isParatext "false" @default.
- W2611339630 isRetracted "false" @default.
- W2611339630 magId "2611339630" @default.
- W2611339630 workType "article" @default.
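The abstract above mentions a "special two-stream architecture" for combining real and synthetic training images without overfitting to either. The code below is only a rough illustrative sketch of that general idea in PyTorch, not the author's actual model: two parallel feature extractors, one fed real images and one fed synthetic ones, whose weights are kept close by an explicit penalty rather than being strictly shared. All layer sizes, the loss weighting, and the dummy data are made-up placeholders.

```python
# Illustrative two-stream sketch (PyTorch). Not the thesis architecture;
# shapes, layer counts, and the regularization weight are placeholders.
import torch
import torch.nn as nn


def make_stream() -> nn.Sequential:
    """A small convolutional feature extractor for one stream."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class TwoStreamClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.real_stream = make_stream()    # processes real images
        self.synth_stream = make_stream()   # processes synthetic images
        self.classifier = nn.Linear(32, num_classes)  # shared classifier head

    def forward(self, x: torch.Tensor, synthetic: bool) -> torch.Tensor:
        stream = self.synth_stream if synthetic else self.real_stream
        return self.classifier(stream(x))

    def weight_discrepancy(self) -> torch.Tensor:
        """Squared L2 distance between corresponding weights of the streams."""
        return sum(
            (p_r - p_s).pow(2).sum()
            for p_r, p_s in zip(self.real_stream.parameters(),
                                self.synth_stream.parameters())
        )


model = TwoStreamClassifier()
real_batch = torch.randn(4, 3, 64, 64)    # dummy real images
synth_batch = torch.randn(4, 3, 64, 64)   # dummy synthetic images
labels = torch.randint(0, 2, (4,))

ce = nn.CrossEntropyLoss()
loss = (
    ce(model(real_batch, synthetic=False), labels)
    + ce(model(synth_batch, synthetic=True), labels)
    + 1e-3 * model.weight_discrepancy()   # placeholder penalty weight
)
loss.backward()
```

The design choice the abstract hints at is that, unlike invariance-based adaptation methods, the two streams are allowed to differ, so the discrepancy between real and synthetic imagery is modeled explicitly rather than suppressed.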