Matches in SemOpenAlex for { <https://semopenalex.org/work/W3049172642> ?p ?o ?g. }
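The graph pattern above can be reproduced as a standard SPARQL query. A minimal sketch follows; the endpoint URL and the `GRAPH ?g` rewriting of the quad pattern are assumptions, not part of this record, so verify them before use:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Assumed SemOpenAlex SPARQL endpoint (verify before use).
ENDPOINT = "https://semopenalex.org/sparql"

# The page's quad pattern { <work> ?p ?o ?g. } expressed as SPARQL 1.1,
# with the graph variable made explicit via GRAPH.
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W3049172642> ?p ?o . }
}
"""

def fetch_triples(endpoint: str = ENDPOINT, query: str = QUERY) -> bytes:
    """POST the query and return the raw SPARQL JSON results body."""
    req = Request(
        endpoint,
        data=urlencode({"query": query}).encode(),
        headers={"Accept": "application/sparql-results+json"},
    )
    with urlopen(req) as resp:  # network call; not executed in this sketch
        return resp.read()

print(QUERY.strip())
```

The results listed below correspond to the `?p ?o` bindings this query would return for the work's graph.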
Showing items 1 to 65 of 65, with 100 items per page.
- W3049172642 abstract "INTRODUCTION Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion, and in the neural network literature this is the least addressed component, yet it has a decisive influence on generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; treating classification as a regression problem toward estimating class posterior probabilities, least squares has been employed to train neural networks and other classifier topologies to approximate correct labels. The main motivation to use least squares in regression comes from the intellectual comfort this criterion provides, owing to its success in traditional linear least squares regression applications, which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error, combined with the maximum likelihood principle, can be invoked to justify this criterion. In nonparametric regression, the least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, when one insists on second-order statistical criteria, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distributions.
This argument extends to other linear, second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios using kernel methods or other heuristics. This raises the question: what alternative cost functions could be used to train adaptive systems, and how could we establish rigorous techniques for extending useful concepts from linear, second-order statistical techniques to nonlinear, higher-order statistical learning methodologies?" @default.
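Two of the abstract's claims can be sketched numerically: linear least squares reduces to solving the normal equations, and a second-order (MSE) criterion is sensitive to non-Gaussian, impulsive error. The data, seed, and variable names below are illustrative assumptions, not part of this record:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])   # design matrix with a bias column
w_true = np.array([2.0, -3.0])

# Gaussian measurement error: least squares recovers w_true accurately by
# solving the normal equations X^T X w = X^T y, a system of linear equations.
y_gauss = X @ w_true + rng.normal(0.0, 0.1, n)
w_gauss = np.linalg.solve(X.T @ X, X.T @ y_gauss)

# Impulsive (heavy-tailed, non-Gaussian) error: about 10% of samples receive
# large outliers, which pull the MSE-optimal fit away from w_true.
noise = rng.normal(0.0, 0.1, n)
outliers = rng.random(n) < 0.1
noise[outliers] += rng.normal(0.0, 5.0, outliers.sum())
y_imp = X @ w_true + noise
w_imp = np.linalg.solve(X.T @ X, X.T @ y_imp)

print("error under Gaussian noise:  ", np.abs(w_gauss - w_true).max())
print("error under impulsive noise: ", np.abs(w_imp - w_true).max())
```

The gap between the two errors is the kind of unexploited information in the error signal that information-theoretic criteria aim to capture.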
- W3049172642 created "2020-08-21" @default.
- W3049172642 creator A5019504861 @default.
- W3049172642 date "2005-01-01" @default.
- W3049172642 modified "2023-09-27" @default.
- W3049172642 title "Information Theoretic Learning." @default.
- W3049172642 hasPublicationYear "2005" @default.
- W3049172642 type Work @default.
- W3049172642 sameAs 3049172642 @default.
- W3049172642 citedByCount "0" @default.
- W3049172642 crossrefType "journal-article" @default.
- W3049172642 hasAuthorship W3049172642A5019504861 @default.
- W3049172642 hasConcept C105795698 @default.
- W3049172642 hasConcept C111919701 @default.
- W3049172642 hasConcept C121332964 @default.
- W3049172642 hasConcept C126255220 @default.
- W3049172642 hasConcept C154945302 @default.
- W3049172642 hasConcept C158622935 @default.
- W3049172642 hasConcept C163716315 @default.
- W3049172642 hasConcept C199845137 @default.
- W3049172642 hasConcept C33923547 @default.
- W3049172642 hasConcept C41008148 @default.
- W3049172642 hasConcept C50644808 @default.
- W3049172642 hasConcept C62520636 @default.
- W3049172642 hasConcept C99656134 @default.
- W3049172642 hasConceptScore W3049172642C105795698 @default.
- W3049172642 hasConceptScore W3049172642C111919701 @default.
- W3049172642 hasConceptScore W3049172642C121332964 @default.
- W3049172642 hasConceptScore W3049172642C126255220 @default.
- W3049172642 hasConceptScore W3049172642C154945302 @default.
- W3049172642 hasConceptScore W3049172642C158622935 @default.
- W3049172642 hasConceptScore W3049172642C163716315 @default.
- W3049172642 hasConceptScore W3049172642C199845137 @default.
- W3049172642 hasConceptScore W3049172642C33923547 @default.
- W3049172642 hasConceptScore W3049172642C41008148 @default.
- W3049172642 hasConceptScore W3049172642C50644808 @default.
- W3049172642 hasConceptScore W3049172642C62520636 @default.
- W3049172642 hasConceptScore W3049172642C99656134 @default.
- W3049172642 hasLocation W30491726421 @default.
- W3049172642 hasOpenAccess W3049172642 @default.
- W3049172642 hasPrimaryLocation W30491726421 @default.
- W3049172642 hasRelatedWork W1494747600 @default.
- W3049172642 hasRelatedWork W1524483304 @default.
- W3049172642 hasRelatedWork W1624804034 @default.
- W3049172642 hasRelatedWork W2049418957 @default.
- W3049172642 hasRelatedWork W2103334569 @default.
- W3049172642 hasRelatedWork W2109033052 @default.
- W3049172642 hasRelatedWork W2148440006 @default.
- W3049172642 hasRelatedWork W2148474239 @default.
- W3049172642 hasRelatedWork W2241666630 @default.
- W3049172642 hasRelatedWork W2314140600 @default.
- W3049172642 hasRelatedWork W2397935916 @default.
- W3049172642 hasRelatedWork W2889432972 @default.
- W3049172642 hasRelatedWork W2929009136 @default.
- W3049172642 hasRelatedWork W2949455471 @default.
- W3049172642 hasRelatedWork W2953340873 @default.
- W3049172642 hasRelatedWork W2964933009 @default.
- W3049172642 hasRelatedWork W3013593644 @default.
- W3049172642 hasRelatedWork W3100353959 @default.
- W3049172642 hasRelatedWork W329869691 @default.
- W3049172642 hasRelatedWork W866740092 @default.
- W3049172642 isParatext "false" @default.
- W3049172642 isRetracted "false" @default.
- W3049172642 magId "3049172642" @default.
- W3049172642 workType "article" @default.