Matches in SemOpenAlex for { <https://semopenalex.org/work/W3183817008> ?p ?o ?g. }
Showing items 1 to 75 of 75, with 100 items per page.
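The listing below is the result of a triple pattern query against the work's IRI. A minimal sketch of how such a query could be built and issued programmatically (the SPARQL endpoint URL `https://semopenalex.org/sparql` is an assumption, not stated in this listing; the sketch stays offline and only constructs the request):

```python
# Build a SPARQL query matching all triples about a given SemOpenAlex work,
# i.e. the pattern { <work_iri> ?p ?o } shown in the header above.
# NOTE: the endpoint URL below is an assumption, not taken from this listing.
from urllib.parse import urlencode

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint

def triples_query(work_iri: str) -> str:
    """Return a SPARQL SELECT over all predicate/object pairs of the work."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"

query = triples_query("https://semopenalex.org/work/W3183817008")
request_url = ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
# An HTTP GET on request_url (e.g. via urllib.request.urlopen) would return
# the matches listed below; the network call is omitted to keep the sketch offline.
```

Each row of the result corresponds to one `- W3183817008 <predicate> <object>` line in the listing.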
- W3183817008 abstract "Author(s): Wang, Zining | Advisor(s): Tomizuka, Masayoshi | Abstract: Modern autonomous robotic systems are equipped with perception subsystems to handle unexpected failure cases and to navigate more intelligently in unstructured environments. Robots navigate in cluttered environments full of noise and disturbances. Robust perception extracts target objects from visual observations while rejecting all the noise and disturbances. It also focuses on increasing redundancy by fusing information from multiple sensors. However, the vision sensor is mounted on the robot system and is affected by the model uncertainty of the robot. Therefore, auto-teaching is proposed to handle the modeling error of the robot by calibrating the model parameters while estimating the states of the target object. On the other hand, robustly detecting target objects is a prerequisite for auto-teaching without human intervention, which requires studying perception and auto-teaching simultaneously. In addition, perception is essential for understanding complex environments, where deep learning methods have recently become mainstream. However, the robustness of learning-based perception algorithms has not been well explored. In this dissertation, robust perception is discussed for robots carrying vision sensors, and auto-teaching is developed for robots to recover from failures. Robustness of the perception subsystem is addressed by developing global methods to reject disturbances and sensor fusion to improve redundancy. Several methods are proposed in both the classic computer vision and deep learning areas, with applications to two kinds of autonomous robotic systems, namely the industrial manipulator and the autonomous vehicle. For the industrial manipulator discussed in Part I of this dissertation, the term hand-eye system conventionally refers to a robot arm holding vision sensors. Chapter 2 models the system and builds the motion block.
The kinematic model is used for visual-inertial sensor fusion and for generating the calibration parameters for auto-teaching. Planning and tracking control of the system are necessary for auto-teaching and for ensuring the quality of visual data captured by the hand-eye system. Industrial manipulators and their target objects have rich geometric information and accurately known shapes, which makes them more suitable for classic computer vision (CV) methods. Chapters 3 and 4 construct the robust perception block of the hand-eye system. Chapter 3 proposes several global shape matching methods for two kinds of visual input, namely images and point clouds. We globally search all potential matches of deformed target objects to avoid local optima caused by disturbances. Chapter 4 introduces probabilistic inference to increase robustness against noise when matching detected objects temporally. The proposed probabilistic hierarchical registration algorithm outperforms the deterministic feature-descriptor-based algorithms used in state-of-the-art SLAM methods. Visual detection is not robust against model uncertainty of the system and only gives the 2D location of the object. Auto-teaching simultaneously calibrates the parameters of the system while estimating the states of detected objects. Chapter 5 introduces the auto-teaching framework, directly using the perception results from Chapters 3 and 4. Visual-inertial sensor fusion is used to increase calibration accuracy by taking the robot motion measurements into account. Chapter 6 proposes an active auto-teaching framework that closes the calibration loop of the hand-eye system by planning optimal measurement poses using the updated parameters. Autonomous vehicles operate in more versatile scenarios where target objects are complex and unstructured. Deep learning-based methods have become the paradigm in this area in recent years, but robustness is the major concern for scaling up their application in the real world.
In Part II of the dissertation, the robustness of learning-based detectors is discussed. Chapter 7 proposes two camera-LiDAR sensor fusion detection networks to increase the performance and redundancy of the detector. The proposed fusion layer is very efficient and back-propagatable, which makes it well suited to the learning framework. In Chapter 8, we further examine the training and evaluation procedures of learning-based detectors. A probabilistic representation is proposed for labels in the dataset to handle the uncertainty of the training data. A new evaluation metric is introduced for the proposed probabilistic representation to better measure the robustness of learning-based detectors." @default.
- W3183817008 created "2021-08-02" @default.
- W3183817008 creator A5047002691 @default.
- W3183817008 date "2020-01-01" @default.
- W3183817008 modified "2023-10-08" @default.
- W3183817008 title "Robust Perception and Auto-teaching for Autonomous Robotic Systems" @default.
- W3183817008 hasPublicationYear "2020" @default.
- W3183817008 type Work @default.
- W3183817008 sameAs 3183817008 @default.
- W3183817008 citedByCount "0" @default.
- W3183817008 crossrefType "journal-article" @default.
- W3183817008 hasAuthorship W3183817008A5047002691 @default.
- W3183817008 hasConcept C104317684 @default.
- W3183817008 hasConcept C111919701 @default.
- W3183817008 hasConcept C127413603 @default.
- W3183817008 hasConcept C133731056 @default.
- W3183817008 hasConcept C152124472 @default.
- W3183817008 hasConcept C154945302 @default.
- W3183817008 hasConcept C169760540 @default.
- W3183817008 hasConcept C185592680 @default.
- W3183817008 hasConcept C19966478 @default.
- W3183817008 hasConcept C26760741 @default.
- W3183817008 hasConcept C2778835581 @default.
- W3183817008 hasConcept C28063669 @default.
- W3183817008 hasConcept C31972630 @default.
- W3183817008 hasConcept C41008148 @default.
- W3183817008 hasConcept C55493867 @default.
- W3183817008 hasConcept C63479239 @default.
- W3183817008 hasConcept C86803240 @default.
- W3183817008 hasConcept C90509273 @default.
- W3183817008 hasConceptScore W3183817008C104317684 @default.
- W3183817008 hasConceptScore W3183817008C111919701 @default.
- W3183817008 hasConceptScore W3183817008C127413603 @default.
- W3183817008 hasConceptScore W3183817008C133731056 @default.
- W3183817008 hasConceptScore W3183817008C152124472 @default.
- W3183817008 hasConceptScore W3183817008C154945302 @default.
- W3183817008 hasConceptScore W3183817008C169760540 @default.
- W3183817008 hasConceptScore W3183817008C185592680 @default.
- W3183817008 hasConceptScore W3183817008C19966478 @default.
- W3183817008 hasConceptScore W3183817008C26760741 @default.
- W3183817008 hasConceptScore W3183817008C2778835581 @default.
- W3183817008 hasConceptScore W3183817008C28063669 @default.
- W3183817008 hasConceptScore W3183817008C31972630 @default.
- W3183817008 hasConceptScore W3183817008C41008148 @default.
- W3183817008 hasConceptScore W3183817008C55493867 @default.
- W3183817008 hasConceptScore W3183817008C63479239 @default.
- W3183817008 hasConceptScore W3183817008C86803240 @default.
- W3183817008 hasConceptScore W3183817008C90509273 @default.
- W3183817008 hasLocation W31838170081 @default.
- W3183817008 hasOpenAccess W3183817008 @default.
- W3183817008 hasPrimaryLocation W31838170081 @default.
- W3183817008 hasRelatedWork W1565829227 @default.
- W3183817008 hasRelatedWork W1998358614 @default.
- W3183817008 hasRelatedWork W2040865712 @default.
- W3183817008 hasRelatedWork W2064796299 @default.
- W3183817008 hasRelatedWork W2070904513 @default.
- W3183817008 hasRelatedWork W2090143429 @default.
- W3183817008 hasRelatedWork W2110012192 @default.
- W3183817008 hasRelatedWork W2142754914 @default.
- W3183817008 hasRelatedWork W2295228767 @default.
- W3183817008 hasRelatedWork W2337101928 @default.
- W3183817008 hasRelatedWork W2587366730 @default.
- W3183817008 hasRelatedWork W2733920523 @default.
- W3183817008 hasRelatedWork W2783639445 @default.
- W3183817008 hasRelatedWork W2840842958 @default.
- W3183817008 hasRelatedWork W2981481304 @default.
- W3183817008 hasRelatedWork W3101749708 @default.
- W3183817008 hasRelatedWork W3112247184 @default.
- W3183817008 hasRelatedWork W3180079054 @default.
- W3183817008 hasRelatedWork W3181820552 @default.
- W3183817008 hasRelatedWork W2563463113 @default.
- W3183817008 isParatext "false" @default.
- W3183817008 isRetracted "false" @default.
- W3183817008 magId "3183817008" @default.
- W3183817008 workType "article" @default.