Matches in SemOpenAlex for { <https://semopenalex.org/work/W2220049097> ?p ?o ?g. }
- W2220049097 abstract "Aalto University, P.O. Box 11000, FI-00076 Aalto www.aalto.fi Author Hongyu Su Name of the doctoral dissertation Multilabel Classification through Structured Output Learning Methods and Applications Publisher School of Science Unit Department of Computer Science Series Aalto University publication series DOCTORAL DISSERTATIONS 28/2015 Field of research Information and Computer Science Manuscript submitted 7 November 2014 Date of the defence 27 March 2015 Permission to publish granted (date) 14 January 2015 Language English Monograph Article dissertation (summary + original articles) Abstract Multilabel classification is an important topic in machine learning that arises naturally from many real-world applications. For example, in document classification, a research article can be categorized as “science”, “drug discovery” and “genomics” at the same time. The goal of multilabel classification is to reliably predict multiple outputs for a given input. As multiple interdependent labels can be “on” and “off” simultaneously, the central problem in multilabel classification is how to best exploit the correlations between labels to make accurate predictions. Compared with previous flat multilabel classification approaches, which treat multiple labels as a flat vector, structured output learning relies on an output graph connecting the labels to model their correlations in a comprehensive manner. The main question studied in this thesis is how to tackle multilabel classification through structured output learning. The thesis starts with an extensive review of classification learning, covering both single-label and multilabel classification. The first problem we address is how to solve the multilabel classification problem when the output graph is observed a priori. We discuss several well-established structured output learning algorithms and study the network response prediction problem within the context of social network analysis. As current structured output learning algorithms rely on the output graph to exploit the dependency between labels, the second problem we address is how to use structured output learning when the output graph is not known. Specifically, we examine the potential of learning on a set of random output graphs when the “real” one is hidden. This problem is relevant because in most multilabel classification problems no output graph that reveals the dependency between labels is available. The third problem we address is how to analyze the proposed learning algorithms theoretically. Specifically, we want to explain the intuition behind the proposed models and to study their generalization error. The main contributions of this thesis are several new learning algorithms that widen the applicability of structured output learning. For the problem with an observed output graph, the proposed algorithm “SPIN” predicts, from an observed underlying network, the directed acyclic graph that best responds to an input. For general multilabel classification problems without any known output graph, we propose several learning algorithms that combine a set of structured output learners built on random output graphs. In addition, we develop a joint learning and inference framework based on max-margin learning over a random sample of spanning trees. The theoretical analysis also bounds the generalization error of the proposed methods." @default.
- W2220049097 created "2016-06-24" @default.
- W2220049097 creator A5012380614 @default.
- W2220049097 date "2015-01-01" @default.
- W2220049097 modified "2023-09-27" @default.
- W2220049097 title "Multilabel Classification through Structured Output Learning - Methods and Applications" @default.
- W2220049097 cites W118766000 @default.
- W2220049097 cites W146018178 @default.
- W2220049097 cites W146125889 @default.
- W2220049097 cites W1506806321 @default.
- W2220049097 cites W1507255258 @default.
- W2220049097 cites W1510073064 @default.
- W2220049097 cites W1511986666 @default.
- W2220049097 cites W1530699444 @default.
- W2220049097 cites W1536816303 @default.
- W2220049097 cites W1538493107 @default.
- W2220049097 cites W1551209770 @default.
- W2220049097 cites W1553313034 @default.
- W2220049097 cites W1560724230 @default.
- W2220049097 cites W1563375353 @default.
- W2220049097 cites W1569436870 @default.
- W2220049097 cites W1576213419 @default.
- W2220049097 cites W1576520375 @default.
- W2220049097 cites W1576828772 @default.
- W2220049097 cites W1585529040 @default.
- W2220049097 cites W1592796124 @default.
- W2220049097 cites W1606697907 @default.
- W2220049097 cites W1608733719 @default.
- W2220049097 cites W1620204465 @default.
- W2220049097 cites W171462922 @default.
- W2220049097 cites W1761010383 @default.
- W2220049097 cites W176125184 @default.
- W2220049097 cites W1783333813 @default.
- W2220049097 cites W1792316426 @default.
- W2220049097 cites W1835509607 @default.
- W2220049097 cites W1856548066 @default.
- W2220049097 cites W1880262756 @default.
- W2220049097 cites W1895481600 @default.
- W2220049097 cites W1904457459 @default.
- W2220049097 cites W1908728294 @default.
- W2220049097 cites W1939941161 @default.
- W2220049097 cites W1953606363 @default.
- W2220049097 cites W1967934524 @default.
- W2220049097 cites W1974166884 @default.
- W2220049097 cites W1978919502 @default.
- W2220049097 cites W1979711143 @default.
- W2220049097 cites W1982649163 @default.
- W2220049097 cites W1985123706 @default.
- W2220049097 cites W1988790447 @default.
- W2220049097 cites W1990513740 @default.
- W2220049097 cites W1996816151 @default.
- W2220049097 cites W1998839399 @default.
- W2220049097 cites W1999954155 @default.
- W2220049097 cites W2001792610 @default.
- W2220049097 cites W2007436490 @default.
- W2220049097 cites W2008031590 @default.
- W2220049097 cites W2008652694 @default.
- W2220049097 cites W2009985472 @default.
- W2220049097 cites W2011039300 @default.
- W2220049097 cites W2014566476 @default.
- W2220049097 cites W2017091502 @default.
- W2220049097 cites W2018590585 @default.
- W2220049097 cites W2021314079 @default.
- W2220049097 cites W2023492421 @default.
- W2220049097 cites W2025047573 @default.
- W2220049097 cites W2031248101 @default.
- W2220049097 cites W2032210760 @default.
- W2220049097 cites W2036043322 @default.
- W2220049097 cites W203785280 @default.
- W2220049097 cites W2040870580 @default.
- W2220049097 cites W2042038728 @default.
- W2220049097 cites W2042123098 @default.
- W2220049097 cites W2047028564 @default.
- W2220049097 cites W2052684427 @default.
- W2220049097 cites W2054039029 @default.
- W2220049097 cites W2054205017 @default.
- W2220049097 cites W2061212083 @default.
- W2220049097 cites W2061351061 @default.
- W2220049097 cites W2061820396 @default.
- W2220049097 cites W2063862666 @default.
- W2220049097 cites W2065180801 @default.
- W2220049097 cites W2065903747 @default.
- W2220049097 cites W2068405010 @default.
- W2220049097 cites W2070272652 @default.
- W2220049097 cites W2072128103 @default.
- W2220049097 cites W2073926352 @default.
- W2220049097 cites W2076080604 @default.
- W2220049097 cites W2081301924 @default.
- W2220049097 cites W2084802027 @default.
- W2220049097 cites W2087312216 @default.
- W2220049097 cites W2093717447 @default.
- W2220049097 cites W2096431153 @default.
- W2220049097 cites W2105644991 @default.
- W2220049097 cites W2105842272 @default.
- W2220049097 cites W2107666336 @default.
- W2220049097 cites W2107753812 @default.
- W2220049097 cites W2108619558 @default.
- W2220049097 cites W2108712612 @default.
- W2220049097 cites W2109844857 @default.
- W2220049097 cites W2110185237 @default.
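Each line of the listing above follows the same subject–predicate–object shape, ending in `@default.`. A minimal sketch of turning such lines back into triples (plain Python; the regex and function names are illustrative, not part of SemOpenAlex):

```python
import re

# Each dump line has the shape:
#   - <subject> <predicate> <object> @default.
# where the object is either a bare identifier (e.g. W118766000)
# or a double-quoted literal (e.g. "2016-06-24").
TRIPLE_RE = re.compile(r'^- (\S+) (\S+) ("[^"]*"|\S+) @default\.$')

def parse_triples(dump: str):
    """Yield (subject, predicate, object) tuples from the listing,
    stripping the surrounding quotes from literal objects."""
    for line in dump.splitlines():
        m = TRIPLE_RE.match(line.strip())
        if m:
            yield m.group(1), m.group(2), m.group(3).strip('"')

sample = '''- W2220049097 created "2016-06-24" @default.
- W2220049097 cites W118766000 @default.'''

triples = list(parse_triples(sample))
# triples[0] == ("W2220049097", "created", "2016-06-24")
# triples[1] == ("W2220049097", "cites", "W118766000")
```

Multi-line literals such as the abstract would need to be joined into one line before matching; the sketch only handles single-line entries.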