Matches in SemOpenAlex for { <https://semopenalex.org/work/W3111500217> ?p ?o ?g. }
Showing items 1 to 68 of 68, with 100 items per page.
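The listing below can be reproduced programmatically. A minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and using Python's requests library; the query is simplified to a triple pattern (the graph variable ?g is dropped):

```python
# Fetch all properties of work W3111500217 from the (assumed) SemOpenAlex SPARQL endpoint.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3111500217> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print predicate/object pairs, mirroring the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```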
- W3111500217 abstract "If a machine translation is wrong, how can we tell the underlying model to fix it? Answering this question requires (1) a machine learning algorithm to define update rules, (2) an interface for feedback to be submitted, and (3) expertise on the side of the human who gives the feedback. This thesis investigates solutions for machine learning updates, the suitability of feedback interfaces, and the dependency on reliability and expertise for different types of feedback. We start with an interactive online learning scenario where a machine translation (MT) system receives bandit feedback (i.e. only a single judgment per source) instead of references for learning. Policy gradient algorithms for statistical and neural MT are developed to learn from absolute and pairwise judgments. Our experiments on domain adaptation with simulated online feedback show that the models improve substantially under weak feedback, with variance reduction techniques being very effective. In production environments, offline learning is often preferred over online learning. We evaluate algorithms for counterfactual learning from human feedback in a study on eBay product title translations. Feedback is either collected via explicit star ratings from users, or implicitly from user interaction with cross-lingual product search. Leveraging implicit feedback turns out to be more successful due to lower levels of noise. We compare the reliability and learnability of absolute Likert-scale ratings with pairwise preferences in a smaller user study, and find that absolute ratings are overall more effective for improvements in downstream tasks. Furthermore, we discover that error markings provide a cheap and practical alternative to error corrections. In a generalized interactive learning framework, we propose a self-regulation approach, where the learner, guided by a regulator module, decides which type of feedback to choose for each input. The regulator is reinforced to find a good trade-off between supervision effect and cost. In our experiments, it discovers strategies that are more efficient than active learning and standard fully supervised learning." @default.
- W3111500217 created "2020-12-21" @default.
- W3111500217 creator A5048307591 @default.
- W3111500217 date "2020-01-01" @default.
- W3111500217 modified "2023-09-27" @default.
- W3111500217 title "Reinforcement Learning for Machine Translation: from Simulations to Real-World Applications" @default.
- W3111500217 doi "https://doi.org/10.11588/heidok.00028862" @default.
- W3111500217 hasPublicationYear "2020" @default.
- W3111500217 type Work @default.
- W3111500217 sameAs 3111500217 @default.
- W3111500217 citedByCount "0" @default.
- W3111500217 crossrefType "dissertation" @default.
- W3111500217 hasAuthorship W3111500217A5048307591 @default.
- W3111500217 hasConcept C107457646 @default.
- W3111500217 hasConcept C115903097 @default.
- W3111500217 hasConcept C119857082 @default.
- W3111500217 hasConcept C121332964 @default.
- W3111500217 hasConcept C154945302 @default.
- W3111500217 hasConcept C163258240 @default.
- W3111500217 hasConcept C184898388 @default.
- W3111500217 hasConcept C203005215 @default.
- W3111500217 hasConcept C2777723229 @default.
- W3111500217 hasConcept C41008148 @default.
- W3111500217 hasConcept C43214815 @default.
- W3111500217 hasConcept C62520636 @default.
- W3111500217 hasConcept C77967617 @default.
- W3111500217 hasConcept C97541855 @default.
- W3111500217 hasConceptScore W3111500217C107457646 @default.
- W3111500217 hasConceptScore W3111500217C115903097 @default.
- W3111500217 hasConceptScore W3111500217C119857082 @default.
- W3111500217 hasConceptScore W3111500217C121332964 @default.
- W3111500217 hasConceptScore W3111500217C154945302 @default.
- W3111500217 hasConceptScore W3111500217C163258240 @default.
- W3111500217 hasConceptScore W3111500217C184898388 @default.
- W3111500217 hasConceptScore W3111500217C203005215 @default.
- W3111500217 hasConceptScore W3111500217C2777723229 @default.
- W3111500217 hasConceptScore W3111500217C41008148 @default.
- W3111500217 hasConceptScore W3111500217C43214815 @default.
- W3111500217 hasConceptScore W3111500217C62520636 @default.
- W3111500217 hasConceptScore W3111500217C77967617 @default.
- W3111500217 hasConceptScore W3111500217C97541855 @default.
- W3111500217 hasLocation W31115002171 @default.
- W3111500217 hasOpenAccess W3111500217 @default.
- W3111500217 hasPrimaryLocation W31115002171 @default.
- W3111500217 hasRelatedWork W1534150364 @default.
- W3111500217 hasRelatedWork W1988431231 @default.
- W3111500217 hasRelatedWork W2625490831 @default.
- W3111500217 hasRelatedWork W2800367501 @default.
- W3111500217 hasRelatedWork W2907109142 @default.
- W3111500217 hasRelatedWork W2915070458 @default.
- W3111500217 hasRelatedWork W2926246612 @default.
- W3111500217 hasRelatedWork W2942485988 @default.
- W3111500217 hasRelatedWork W2963393617 @default.
- W3111500217 hasRelatedWork W2963542691 @default.
- W3111500217 hasRelatedWork W2963633674 @default.
- W3111500217 hasRelatedWork W3048367007 @default.
- W3111500217 hasRelatedWork W3092995464 @default.
- W3111500217 hasRelatedWork W3098502857 @default.
- W3111500217 hasRelatedWork W3101314357 @default.
- W3111500217 hasRelatedWork W3104240813 @default.
- W3111500217 hasRelatedWork W3151183549 @default.
- W3111500217 hasRelatedWork W3201923760 @default.
- W3111500217 hasRelatedWork W3202454639 @default.
- W3111500217 hasRelatedWork W2114503508 @default.
- W3111500217 isParatext "false" @default.
- W3111500217 isRetracted "false" @default.
- W3111500217 magId "3111500217" @default.
- W3111500217 workType "dissertation" @default.
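The abstract above describes policy-gradient learning from bandit feedback with variance reduction. A minimal toy sketch of that general idea (a REINFORCE-style update with a running-average baseline over a small discrete action set); the model, data, and reward function here are illustrative assumptions, not the thesis's actual code or experiments:

```python
# Bandit policy gradient: sample one output, receive a single scalar reward
# (no reference), update the policy; the baseline is a simple variance-reduction device.
import numpy as np

rng = np.random.default_rng(0)
num_actions = 5                  # stand-in for candidate translations of one source
theta = np.zeros(num_actions)    # log-linear policy parameters
baseline, lr = 0.0, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reward(action):
    # Pretend action 3 is the "good translation"; feedback is noisy.
    return float(action == 3) + rng.normal(scale=0.1)

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(num_actions, p=probs)      # one sampled output, one judgment
    r = reward(a)
    baseline += 0.01 * (r - baseline)         # running-average baseline
    grad_logp = -probs
    grad_logp[a] += 1.0                       # gradient of log pi(a | theta)
    theta += lr * (r - baseline) * grad_logp  # REINFORCE update

print("learned policy:", np.round(softmax(theta), 3))
```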