Matches in SemOpenAlex for { <https://semopenalex.org/work/W2600783590> ?p ?o ?g. }
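As a side note, this listing can be reproduced programmatically. The following is a minimal Python sketch that sends the same triple pattern as a SPARQL query; the endpoint URL https://semopenalex.org/sparql is an assumption based on SemOpenAlex's public service (verify before relying on it), and the graph variable ?g from the pattern above is omitted for simplicity.

```python
import requests

# Assumed public SemOpenAlex SPARQL endpoint -- verify before use.
ENDPOINT = "https://semopenalex.org/sparql"

# Same subject as the listing below; select all predicate/object pairs.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2600783590> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print one predicate/object pair per line, mirroring the dump format.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```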
- W2600783590 abstract "Prototype-based classifiers such as learning vector quantization (LVQ) often display intuitive and flexible classification and learning rules. However, classical techniques are restricted to vectorial data only and are hence not suited to more complex data structures. Therefore, a few extensions of diverse LVQ variants to more general data, characterized by pairwise similarities or dissimilarities only, have recently been proposed in the literature. In this contribution, we propose a novel extension of LVQ to similarity data which is based on the kernelization of an underlying probabilistic model: kernel robust soft LVQ (KRSLVQ). Relying on the notion of a pseudo-Euclidean embedding of proximity data, we put this specific approach as well as existing alternatives into a general framework which characterizes the fundamental ways in which LVQ can be extended towards proximity data: the main characteristics are given by the choice of the cost function, the interface to the data in terms of similarities or dissimilarities, and the way in which optimization takes place. In particular, the latter choice highlights the difference between popular kernel approaches and so-called relational approaches. While KRSLVQ and its alternatives lead to state-of-the-art results, these extensions have two drawbacks compared to their vectorial counterparts: (i) a quadratic training complexity is encountered, since the methods depend on the full proximity matrix; (ii) prototypes are no longer given by vectors but are represented as implicit linear combinations of data points, i.e. the interpretability of the prototypes is lost. We investigate different techniques to deal with these challenges: we speed up training by means of a low-rank Nyström approximation of the Gram matrix. In benchmarks, this strategy is successful if the considered data are intrinsically low-dimensional, and we propose a quick check to efficiently test this property prior to training. We extend KRSLVQ by sparse approximations of the prototypes: instead of full coefficient vectors, a few exemplars represent each prototype and can be inspected by practitioners directly, in the same way as data. We compare different paradigms for inferring a sparse approximation: sparsity priors during training, geometric approaches including orthogonal matching pursuit and core techniques, and heuristic approximations based on the coefficients or proximities. We demonstrate the performance of these LVQ techniques on benchmark data, reaching state-of-the-art results. We discuss how the methods enhance performance and interpretability with respect to quality, sparsity, and representativity, and we propose measures to quantitatively evaluate the performance of the approaches. Our findings have been presented in international publication venues, including three journal articles [6, 9, 2], four conference papers [8, 5, 7, 1], and two workshop contributions [4, 3].
  References:
  [1] A. Gisbrecht, D. Hofmann, and B. Hammer. Discriminative dimensionality reduction mappings. Advances in Intelligent Data Analysis, 7619:126–138, 2012.
  [2] B. Hammer, D. Hofmann, F.-M. Schleif, and X. Zhu. Learning vector quantization for (dis-)similarities. Neurocomputing, 131:43–51, 2014.
  [3] D. Hofmann. Sparse approximations for kernel robust soft LVQ. Mittweida Workshop on Computational Intelligence, 2013.
  [4] D. Hofmann, A. Gisbrecht, and B. Hammer. Discriminative probabilistic prototype based models in kernel space. New Challenges in Neural Computation, TR Machine Learning Reports, 2012.
  [5] D. Hofmann, A. Gisbrecht, and B. Hammer. Efficient approximations of kernel robust soft LVQ. Workshop on Self-Organizing Maps, 198:183–192, 2012.
  [6] D. Hofmann, A. Gisbrecht, and B. Hammer. Efficient approximations of robust soft learning vector quantization for non-vectorial data. Neurocomputing, 147:96–106, 2015.
  [7] D. Hofmann and B. Hammer. Kernel robust soft learning vector quantization. Artificial Neural Networks in Pattern Recognition, 7477:14–23, 2012.
  [8] D. Hofmann and B. Hammer. Sparse approximations for kernel learning vector quantization. European Symposium on Artificial Neural Networks, 549–554, 2013.
  [9] D. Hofmann, F.-M. Schleif, B. Paaßen, and B. Hammer. Learning interpretable kernelized prototype-based models. Neurocomputing, 141:84–96, 2014." @default.
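The abstract above attributes the training speed-up to a low-rank Nyström approximation of the Gram matrix. The following is a minimal NumPy sketch of that generic technique, not the thesis implementation; the RBF kernel, landmark count m, and toy data are illustrative assumptions.

```python
import numpy as np

def nystroem_approximation(K, m, rng=None):
    """Low-rank Nystroem approximation of an n x n Gram matrix K.

    Samples m landmark columns and reconstructs K as C @ pinv(W) @ C.T,
    which underlies reducing the quadratic dependence on the full
    proximity matrix. Generic sketch only, not the KRSLVQ code.
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # landmark indices
    C = K[:, idx]                                # n x m cross-similarities
    W = K[np.ix_(idx, idx)]                      # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T           # rank-at-most-m estimate

# Toy usage: an RBF Gram matrix of intrinsically low-dimensional points,
# the setting in which the abstract reports the strategy to be successful.
X = np.random.default_rng(0).normal(size=(500, 2))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)
K_hat = nystroem_approximation(K, m=50, rng=0)
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))  # relative error
```

A small relative error here is exactly the "quick check" intuition: when the Gram matrix's spectrum decays quickly, few landmarks suffice and the quadratic training cost can be avoided.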
- W2600783590 created "2017-04-07" @default.
- W2600783590 creator A5054054606 @default.
- W2600783590 date "2016-01-01" @default.
- W2600783590 modified "2023-09-24" @default.
- W2600783590 title "Learning vector quantization for proximity data" @default.
- W2600783590 cites W108285666 @default.
- W2600783590 cites W143588678 @default.
- W2600783590 cites W1480708938 @default.
- W2600783590 cites W1510073064 @default.
- W2600783590 cites W1530899232 @default.
- W2600783590 cites W15804473 @default.
- W2600783590 cites W1589103544 @default.
- W2600783590 cites W1663973292 @default.
- W2600783590 cites W166718936 @default.
- W2600783590 cites W1808967201 @default.
- W2600783590 cites W1861597095 @default.
- W2600783590 cites W1900549406 @default.
- W2600783590 cites W194799689 @default.
- W2600783590 cites W1964910722 @default.
- W2600783590 cites W1967934524 @default.
- W2600783590 cites W1971853424 @default.
- W2600783590 cites W1976709621 @default.
- W2600783590 cites W1977164605 @default.
- W2600783590 cites W1987492370 @default.
- W2600783590 cites W1987527649 @default.
- W2600783590 cites W1991016861 @default.
- W2600783590 cites W1995598019 @default.
- W2600783590 cites W2008918578 @default.
- W2600783590 cites W2018363437 @default.
- W2600783590 cites W2019377636 @default.
- W2600783590 cites W2022130318 @default.
- W2600783590 cites W2025823698 @default.
- W2600783590 cites W2028781966 @default.
- W2600783590 cites W2030188093 @default.
- W2600783590 cites W2036256527 @default.
- W2600783590 cites W2037080954 @default.
- W2600783590 cites W2037288520 @default.
- W2600783590 cites W2039434802 @default.
- W2600783590 cites W2042902006 @default.
- W2600783590 cites W2045621032 @default.
- W2600783590 cites W2051126492 @default.
- W2600783590 cites W2066625485 @default.
- W2600783590 cites W2088032561 @default.
- W2600783590 cites W2093901628 @default.
- W2600783590 cites W2094150678 @default.
- W2600783590 cites W2101460669 @default.
- W2600783590 cites W2103484968 @default.
- W2600783590 cites W2103595817 @default.
- W2600783590 cites W2107025582 @default.
- W2600783590 cites W2109834471 @default.
- W2600783590 cites W2109993655 @default.
- W2600783590 cites W2112545207 @default.
- W2600783590 cites W2120164251 @default.
- W2600783590 cites W2123749980 @default.
- W2600783590 cites W2123963736 @default.
- W2600783590 cites W2124067153 @default.
- W2600783590 cites W2127827747 @default.
- W2600783590 cites W2128750506 @default.
- W2600783590 cites W2128859735 @default.
- W2600783590 cites W2134146049 @default.
- W2600783590 cites W2136427590 @default.
- W2600783590 cites W2145889472 @default.
- W2600783590 cites W2146837144 @default.
- W2600783590 cites W2150120952 @default.
- W2600783590 cites W2151827881 @default.
- W2600783590 cites W2155319834 @default.
- W2600783590 cites W2165232124 @default.
- W2600783590 cites W2166322089 @default.
- W2600783590 cites W2170958680 @default.
- W2600783590 cites W2187089797 @default.
- W2600783590 cites W2579923771 @default.
- W2600783590 cites W2768149277 @default.
- W2600783590 cites W58158116 @default.
- W2600783590 cites W79449126 @default.
- W2600783590 cites W814282759 @default.
- W2600783590 cites W826990995 @default.
- W2600783590 cites W93538563 @default.
- W2600783590 hasPublicationYear "2016" @default.
- W2600783590 type Work @default.
- W2600783590 sameAs 2600783590 @default.
- W2600783590 citedByCount "0" @default.
- W2600783590 crossrefType "journal-article" @default.
- W2600783590 hasAuthorship W2600783590A5054054606 @default.
- W2600783590 hasConcept C119857082 @default.
- W2600783590 hasConcept C132525143 @default.
- W2600783590 hasConcept C153180895 @default.
- W2600783590 hasConcept C154945302 @default.
- W2600783590 hasConcept C199833920 @default.
- W2600783590 hasConcept C207225210 @default.
- W2600783590 hasConcept C33923547 @default.
- W2600783590 hasConcept C40567965 @default.
- W2600783590 hasConcept C41008148 @default.
- W2600783590 hasConcept C41608201 @default.
- W2600783590 hasConcept C80444323 @default.
- W2600783590 hasConceptScore W2600783590C119857082 @default.
- W2600783590 hasConceptScore W2600783590C132525143 @default.
- W2600783590 hasConceptScore W2600783590C153180895 @default.
- W2600783590 hasConceptScore W2600783590C154945302 @default.
- W2600783590 hasConceptScore W2600783590C199833920 @default.