Matches in SemOpenAlex for { <https://semopenalex.org/work/W3100756144> ?p ?o ?g. }
- W3100756144 abstract "Opinions are omnipresent in written and spoken text ranging from editorials, reviews, blogs, guides, and informal conversations to written and broadcast news. However, past research in NLP has mainly addressed explicit opinion expressions, ignoring implicit opinions. As a result, research in opinion analysis has plateaued at a somewhat superficial level, providing methods that only recognize what is explicitly said and do not understand what is implied. In this dissertation, we develop machine learning models for two tasks that presumably support propagation of sentiment in discourse, beyond one sentence. The first task we address is opinion role labeling, i.e. the task of detecting who expressed a given attitude toward what or whom. The second task is abstract anaphora resolution, i.e. the task of finding a (typically) non-nominal antecedent of pronouns and noun phrases that refer to abstract objects like facts, events, actions, or situations in the preceding discourse. We propose a neural model for labeling opinion holders and targets and circumvent the problems that arise from the limited labeled data. In particular, we extend the baseline model with different multi-task learning frameworks. We obtain clear performance improvements using semantic role labeling as the auxiliary task. We conduct a thorough analysis to demonstrate how multi-task learning helps, what has been solved for the task, and what is next. We show that future developments should improve the ability of the models to capture long-range dependencies and consider other auxiliary tasks such as dependency parsing or recognizing textual entailment.
We emphasize that future improvements can be measured more reliably if opinion expressions with missing roles are curated and if the evaluation considers all mentions in opinion role coreference chains as well as discontinuous roles. To the best of our knowledge, we propose the first abstract anaphora resolution model that handles the unrestricted phenomenon in a realistic setting. We cast abstract anaphora resolution as the task of learning attributes of the relation that holds between the sentence with the abstract anaphor and its antecedent. We propose a Mention-Ranking siamese-LSTM model (MR-LSTM) for learning what characterizes the mentioned relation in a data-driven fashion. The current resources for abstract anaphora resolution are quite limited. However, we can train our models without conventional data for abstract anaphora resolution. In particular, we can train our models on many instances of antecedent-anaphoric sentence pairs. Such pairs can be automatically extracted from parsed corpora by searching for a common construction which consists of a verb with an embedded sentence (complement or adverbial), applying a simple transformation that replaces the embedded sentence with an abstract anaphor, and using the cut-off embedded sentence as the antecedent. We refer to the extracted data as silver data. We evaluate our MR-LSTM models in a realistic task setup in which models need to rank embedded sentences and verb phrases from the sentence with the anaphor as well as a few preceding sentences. We report the first benchmark results on an abstract anaphora subset of the ARRAU corpus (Uryupina et al., 2016), which presents a greater challenge due to a mixture of nominal and pronominal anaphors as well as a greater range of confounders. We also use two additional evaluation datasets: a subset of the CoNLL-12 shared task dataset (Pradhan et al., 2012) and a subset of the ASN corpus (Kolhatkar et al., 2013).
We show that our MR-LSTM models outperform the baselines on all evaluation datasets, except for events in the CoNLL-12 dataset. We conclude that training on the small-scale gold data works well if we encounter the same type of anaphors at evaluation time. However, the gold training data contains only six shell nouns and events, and thus resolution of anaphors in the ARRAU corpus, which covers a variety of anaphor types, benefits from the silver data. Our MR-LSTM models for resolution of abstract anaphors outperform the prior work for shell noun resolution (Kolhatkar et al., 2013) in their restricted task setup. Finally, we try to get the best out of the gold and silver training data by mixing them. Moreover, we speculate that we could improve the training on a mixture if we: (i) handle artifacts in the silver data with adversarial training and (ii) use multi-task learning to enable our models to make ranking decisions dependent on the type of anaphor. These proposals give us mixed results, and hence a robust mixed training strategy remains a challenge." @default.
- W3100756144 created "2020-11-23" @default.
- W3100756144 creator A5087098432 @default.
- W3100756144 date "2020-01-01" @default.
- W3100756144 modified "2023-09-25" @default.
- W3100756144 title "Deep Learning With Sentiment Inference For Discourse-Oriented Opinion Analysis" @default.
- W3100756144 cites W111184761 @default.
- W3100756144 cites W1560781570 @default.
- W3100756144 cites W1564649749 @default.
- W3100756144 cites W1566346388 @default.
- W3100756144 cites W1632114991 @default.
- W3100756144 cites W1675450783 @default.
- W3100756144 cites W1815076433 @default.
- W3100756144 cites W1896424170 @default.
- W3100756144 cites W1982229380 @default.
- W3100756144 cites W2022204871 @default.
- W3100756144 cites W205145189 @default.
- W3100756144 cites W2064594469 @default.
- W3100756144 cites W2067533161 @default.
- W3100756144 cites W2081375810 @default.
- W3100756144 cites W2082291422 @default.
- W3100756144 cites W2093835839 @default.
- W3100756144 cites W2097726431 @default.
- W3100756144 cites W2097752345 @default.
- W3100756144 cites W2098040709 @default.
- W3100756144 cites W2100529970 @default.
- W3100756144 cites W2115834228 @default.
- W3100756144 cites W2122282387 @default.
- W3100756144 cites W2123442489 @default.
- W3100756144 cites W2129294185 @default.
- W3100756144 cites W2136408680 @default.
- W3100756144 cites W2141599568 @default.
- W3100756144 cites W2145071407 @default.
- W3100756144 cites W2147218300 @default.
- W3100756144 cites W2155069789 @default.
- W3100756144 cites W2156094048 @default.
- W3100756144 cites W2157526690 @default.
- W3100756144 cites W2158847908 @default.
- W3100756144 cites W2159457224 @default.
- W3100756144 cites W2163552363 @default.
- W3100756144 cites W2163794943 @default.
- W3100756144 cites W2165921749 @default.
- W3100756144 cites W2166481425 @default.
- W3100756144 cites W2169415915 @default.
- W3100756144 cites W2180284103 @default.
- W3100756144 cites W2250539671 @default.
- W3100756144 cites W2250835245 @default.
- W3100756144 cites W2250966211 @default.
- W3100756144 cites W2250981850 @default.
- W3100756144 cites W2251064706 @default.
- W3100756144 cites W2251143283 @default.
- W3100756144 cites W2251175199 @default.
- W3100756144 cites W2251599843 @default.
- W3100756144 cites W2251939518 @default.
- W3100756144 cites W2252024663 @default.
- W3100756144 cites W2271112965 @default.
- W3100756144 cites W2293778248 @default.
- W3100756144 cites W2432541215 @default.
- W3100756144 cites W2460100836 @default.
- W3100756144 cites W2463809169 @default.
- W3100756144 cites W2508865106 @default.
- W3100756144 cites W2510940142 @default.
- W3100756144 cites W2516255829 @default.
- W3100756144 cites W2566563465 @default.
- W3100756144 cites W2575002427 @default.
- W3100756144 cites W2581637843 @default.
- W3100756144 cites W2586756136 @default.
- W3100756144 cites W2587783059 @default.
- W3100756144 cites W2600702321 @default.
- W3100756144 cites W2610536450 @default.
- W3100756144 cites W2618925779 @default.
- W3100756144 cites W2624871570 @default.
- W3100756144 cites W2740132093 @default.
- W3100756144 cites W2741719406 @default.
- W3100756144 cites W2760600531 @default.
- W3100756144 cites W2761988601 @default.
- W3100756144 cites W2803909062 @default.
- W3100756144 cites W2805589998 @default.
- W3100756144 cites W2886987881 @default.
- W3100756144 cites W2888329843 @default.
- W3100756144 cites W2894706950 @default.
- W3100756144 cites W2903559363 @default.
- W3100756144 cites W2914430436 @default.
- W3100756144 cites W2950577311 @default.
- W3100756144 cites W2962739339 @default.
- W3100756144 cites W2962808042 @default.
- W3100756144 cites W2962897020 @default.
- W3100756144 cites W2963069209 @default.
- W3100756144 cites W2963083845 @default.
- W3100756144 cites W2963355447 @default.
- W3100756144 cites W2963442673 @default.
- W3100756144 cites W2963565598 @default.
- W3100756144 cites W2963567867 @default.
- W3100756144 cites W2963721761 @default.
- W3100756144 cites W2963851958 @default.
- W3100756144 cites W2963888891 @default.
- W3100756144 cites W2964101860 @default.
- W3100756144 cites W2964102772 @default.
- W3100756144 cites W2964185534 @default.
- W3100756144 cites W2964222246 @default.