Matches in SemOpenAlex for { <https://semopenalex.org/work/W2624774844> ?p ?o ?g. }
Showing items 1 to 77 of 77, with 100 items per page.
- W2624774844 abstract "Over the last couple of decades the demand for high precision and enhanced performance of physical systems has been steadily increasing. This demand often results in miniaturization and complex design, thus increasing the need for complex nonlinear control methods. Some of the state-of-the-art nonlinear methods are stymied by the requirement of full state information, model and parameter uncertainties, mathematical complexity, etc. For many scenarios it is nearly impossible to consider all the uncertainties during the design of a feedback controller. Additionally, while designing a model-based nonlinear controller there is no standard mechanism to incorporate performance measures. Some of the mentioned issues can be addressed by using online learning. Animals and humans have the ability to share, explore, act or respond, memorize the outcome and repeat the task to achieve a better outcome when they encounter the same or a similar scenario. This is called learning from interaction. One instance of this approach is reinforcement learning (RL). However, RL methods are hindered by the curse of dimensionality, non-interpretability and non-monotonic convergence of the learning algorithms. This can be attributed to the intrinsic characteristics of RL: as it is a model-free approach, no standard mechanism exists to incorporate a priori model information. In this thesis, learning methods are proposed which explicitly use the available system knowledge. This can be seen as a new class of approaches that bridge model-based and model-free methods. These methods can address some of the hurdles mentioned earlier. For example, i) a priori system information can speed up the learning, ii) new control objectives can be achieved which otherwise would be extremely difficult to attain using only model-based methods, iii) physical meaning can be attributed to the learned controller. 
The developed approach is as follows: the model of the given physical system is represented in the port-Hamiltonian (PH) form. For the system dynamics in PH form a passivity-based control (PBC) law is formulated, which often requires the solution to a set of partial differential equations (PDEs). Instead of finding an analytical solution, the PBC control law is parameterized using an unknown parameter vector. Then, by using a variation of the standard actor-critic learning algorithm, the unknown parameters can be learned online. Using the principles of stochastic approximation theory, a proof of convergence for the developed method is shown. The proposed methods are evaluated for the stabilization and regulation of mechanical and electro-mechanical systems. The simulation and experimental results show comparable learning curves. In the final part of the thesis a novel integral reinforcement learning approach is developed to solve the optimal output tracking control problem for a set of linear heterogeneous multi-agent systems. Unlike existing methods, this approach needs neither to solve the output regulator equation nor to include a p-copy of the leader’s dynamics in the agent’s control law. A detailed numerical evaluation has been conducted to show the feasibility of the developed method." @default.
- W2624774844 created "2017-06-23" @default.
- W2624774844 creator A5061604841 @default.
- W2624774844 date "2016-04-18" @default.
- W2624774844 modified "2023-09-25" @default.
- W2624774844 title "Online learning algorithms : For passivity-based and distributed control" @default.
- W2624774844 doi "https://doi.org/10.4233/uuid:9f3a2496-7851-40f6-a947-102080bdd5fd" @default.
- W2624774844 hasPublicationYear "2016" @default.
- W2624774844 type Work @default.
- W2624774844 sameAs 2624774844 @default.
- W2624774844 citedByCount "0" @default.
- W2624774844 crossrefType "journal-article" @default.
- W2624774844 hasAuthorship W2624774844A5061604841 @default.
- W2624774844 hasConcept C111030470 @default.
- W2624774844 hasConcept C111472728 @default.
- W2624774844 hasConcept C119857082 @default.
- W2624774844 hasConcept C121332964 @default.
- W2624774844 hasConcept C126255220 @default.
- W2624774844 hasConcept C138885662 @default.
- W2624774844 hasConcept C144237770 @default.
- W2624774844 hasConcept C148220186 @default.
- W2624774844 hasConcept C154945302 @default.
- W2624774844 hasConcept C158622935 @default.
- W2624774844 hasConcept C162324750 @default.
- W2624774844 hasConcept C2777303404 @default.
- W2624774844 hasConcept C2781067378 @default.
- W2624774844 hasConcept C33923547 @default.
- W2624774844 hasConcept C41008148 @default.
- W2624774844 hasConcept C50522688 @default.
- W2624774844 hasConcept C62520636 @default.
- W2624774844 hasConcept C75553542 @default.
- W2624774844 hasConcept C97541855 @default.
- W2624774844 hasConceptScore W2624774844C111030470 @default.
- W2624774844 hasConceptScore W2624774844C111472728 @default.
- W2624774844 hasConceptScore W2624774844C119857082 @default.
- W2624774844 hasConceptScore W2624774844C121332964 @default.
- W2624774844 hasConceptScore W2624774844C126255220 @default.
- W2624774844 hasConceptScore W2624774844C138885662 @default.
- W2624774844 hasConceptScore W2624774844C144237770 @default.
- W2624774844 hasConceptScore W2624774844C148220186 @default.
- W2624774844 hasConceptScore W2624774844C154945302 @default.
- W2624774844 hasConceptScore W2624774844C158622935 @default.
- W2624774844 hasConceptScore W2624774844C162324750 @default.
- W2624774844 hasConceptScore W2624774844C2777303404 @default.
- W2624774844 hasConceptScore W2624774844C2781067378 @default.
- W2624774844 hasConceptScore W2624774844C33923547 @default.
- W2624774844 hasConceptScore W2624774844C41008148 @default.
- W2624774844 hasConceptScore W2624774844C50522688 @default.
- W2624774844 hasConceptScore W2624774844C62520636 @default.
- W2624774844 hasConceptScore W2624774844C75553542 @default.
- W2624774844 hasConceptScore W2624774844C97541855 @default.
- W2624774844 hasLocation W26247748441 @default.
- W2624774844 hasOpenAccess W2624774844 @default.
- W2624774844 hasPrimaryLocation W26247748441 @default.
- W2624774844 hasRelatedWork W1584101032 @default.
- W2624774844 hasRelatedWork W163289901 @default.
- W2624774844 hasRelatedWork W2098451267 @default.
- W2624774844 hasRelatedWork W2217144225 @default.
- W2624774844 hasRelatedWork W2511289655 @default.
- W2624774844 hasRelatedWork W2793955907 @default.
- W2624774844 hasRelatedWork W2796922012 @default.
- W2624774844 hasRelatedWork W2798274031 @default.
- W2624774844 hasRelatedWork W2912762777 @default.
- W2624774844 hasRelatedWork W2947311320 @default.
- W2624774844 hasRelatedWork W2970112030 @default.
- W2624774844 hasRelatedWork W2996827829 @default.
- W2624774844 hasRelatedWork W3006506827 @default.
- W2624774844 hasRelatedWork W3033207833 @default.
- W2624774844 hasRelatedWork W3094058178 @default.
- W2624774844 hasRelatedWork W3101622906 @default.
- W2624774844 hasRelatedWork W3197723314 @default.
- W2624774844 hasRelatedWork W3197927210 @default.
- W2624774844 hasRelatedWork W3208028819 @default.
- W2624774844 isParatext "false" @default.
- W2624774844 isRetracted "false" @default.
- W2624774844 magId "2624774844" @default.
- W2624774844 workType "article" @default.
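The listing above is the result of the graph-pattern match shown in the header. As a minimal sketch of how the same predicate/object pairs could be retrieved programmatically, the following Python snippet builds and posts the corresponding SPARQL query. The endpoint URL `https://semopenalex.org/sparql` is an assumption based on the public SemOpenAlex service and may need adjusting:

```python
# Hedged sketch: fetch all predicate/object pairs for a SemOpenAlex work
# via its public SPARQL endpoint (endpoint URL is an assumption).
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint location


def build_query(work_iri: str) -> str:
    """Return a SPARQL query matching every triple with the work as subject."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"


def fetch_triples(work_iri: str) -> list:
    """POST the query and parse the SPARQL JSON results (network required)."""
    payload = urllib.parse.urlencode({"query": build_query(work_iri)}).encode()
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["results"]["bindings"]


# Example (requires network access):
# for row in fetch_triples("https://semopenalex.org/work/W2624774844"):
#     print(row["p"]["value"], row["o"]["value"])
```

The query omits the graph variable `?g` from the header's pattern for brevity; adding `GRAPH ?g { ... }` around the triple pattern would recover the named-graph information as well.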