Matches in SemOpenAlex for { <https://semopenalex.org/work/W3021188327> ?p ?o ?g. }
Showing items 1 to 83 of 83, with 100 items per page.
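The listing below can be reproduced against SemOpenAlex's public SPARQL interface. The following is a minimal sketch, assuming the endpoint is exposed at https://semopenalex.org/sparql; the graph variable ?g from the pattern above is dropped to keep the result a plain predicate/object listing for the work IRI.

```sparql
# Sketch: list all predicate/object pairs for work W3021188327.
# Assumes the public endpoint at https://semopenalex.org/sparql.
SELECT ?p ?o
WHERE {
  <https://semopenalex.org/work/W3021188327> ?p ?o .
}
LIMIT 100
```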
- W3021188327 endingPage "101103" @default.
- W3021188327 startingPage "101103" @default.
- W3021188327 abstract "In order to properly train an automatic speech recognition system, speech with annotated transcriptions is most often required. The amount of real annotated data recorded in noisy and reverberant conditions is extremely limited, especially compared to the amount of data that can be simulated by adding noise to clean annotated speech. Thus, using both real and simulated data is important for improving robust speech recognition, as this increases the amount and diversity of training data (thanks to the simulated data) while also reducing the mismatch between training and operating conditions (thanks to the real data). Another promising method for speech recognition in noisy and reverberant conditions is multi-task learning. The idea is to train one acoustic model to simultaneously solve at least two different but related tasks, with speech recognition being the main task. A successful auxiliary task consists of generating clean speech features using a regression loss (as a denoising auto-encoder). This auxiliary task, however, uses clean speech as targets, which implies that real data cannot be used for it. To tackle this problem, a Hybrid-Task Learning system is proposed. This system switches frequently between multi-task and single-task learning depending on whether the input is simulated or real data, respectively. Having a hybrid architecture allows us to benefit from both real and simulated data while using a denoising auto-encoder as the auxiliary task of a multi-task setup. We show that the relative improvement brought by the proposed hybrid-task learning architecture can reach up to 4.4% compared to the traditional single-task learning approach on the CHiME4 database. We also demonstrate the benefits of the hybrid approach compared to multi-task learning or adaptation." @default.
- W3021188327 created "2020-05-13" @default.
- W3021188327 creator A5016964600 @default.
- W3021188327 creator A5040916551 @default.
- W3021188327 creator A5083563222 @default.
- W3021188327 date "2020-11-01" @default.
- W3021188327 modified "2023-09-26" @default.
- W3021188327 title "Hybrid-task learning for robust automatic speech recognition" @default.
- W3021188327 cites W1513862252 @default.
- W3021188327 cites W1992475611 @default.
- W3021188327 cites W2124136621 @default.
- W3021188327 cites W2142193238 @default.
- W3021188327 cites W2150769028 @default.
- W3021188327 cites W2160815625 @default.
- W3021188327 cites W2242685705 @default.
- W3021188327 cites W2506203739 @default.
- W3021188327 cites W2587088898 @default.
- W3021188327 cites W2759071281 @default.
- W3021188327 cites W2913340405 @default.
- W3021188327 cites W2919115771 @default.
- W3021188327 cites W2962816167 @default.
- W3021188327 cites W2964243145 @default.
- W3021188327 doi "https://doi.org/10.1016/j.csl.2020.101103" @default.
- W3021188327 hasPublicationYear "2020" @default.
- W3021188327 type Work @default.
- W3021188327 sameAs 3021188327 @default.
- W3021188327 citedByCount "9" @default.
- W3021188327 countsByYear W30211883272021 @default.
- W3021188327 countsByYear W30211883272022 @default.
- W3021188327 countsByYear W30211883272023 @default.
- W3021188327 crossrefType "journal-article" @default.
- W3021188327 hasAuthorship W3021188327A5016964600 @default.
- W3021188327 hasAuthorship W3021188327A5040916551 @default.
- W3021188327 hasAuthorship W3021188327A5083563222 @default.
- W3021188327 hasConcept C111919701 @default.
- W3021188327 hasConcept C115961682 @default.
- W3021188327 hasConcept C118505674 @default.
- W3021188327 hasConcept C153180895 @default.
- W3021188327 hasConcept C154945302 @default.
- W3021188327 hasConcept C162324750 @default.
- W3021188327 hasConcept C163294075 @default.
- W3021188327 hasConcept C187736073 @default.
- W3021188327 hasConcept C2776182073 @default.
- W3021188327 hasConcept C2780451532 @default.
- W3021188327 hasConcept C28006648 @default.
- W3021188327 hasConcept C28490314 @default.
- W3021188327 hasConcept C41008148 @default.
- W3021188327 hasConcept C99498987 @default.
- W3021188327 hasConceptScore W3021188327C111919701 @default.
- W3021188327 hasConceptScore W3021188327C115961682 @default.
- W3021188327 hasConceptScore W3021188327C118505674 @default.
- W3021188327 hasConceptScore W3021188327C153180895 @default.
- W3021188327 hasConceptScore W3021188327C154945302 @default.
- W3021188327 hasConceptScore W3021188327C162324750 @default.
- W3021188327 hasConceptScore W3021188327C163294075 @default.
- W3021188327 hasConceptScore W3021188327C187736073 @default.
- W3021188327 hasConceptScore W3021188327C2776182073 @default.
- W3021188327 hasConceptScore W3021188327C2780451532 @default.
- W3021188327 hasConceptScore W3021188327C28006648 @default.
- W3021188327 hasConceptScore W3021188327C28490314 @default.
- W3021188327 hasConceptScore W3021188327C41008148 @default.
- W3021188327 hasConceptScore W3021188327C99498987 @default.
- W3021188327 hasFunder F4320325905 @default.
- W3021188327 hasLocation W30211883271 @default.
- W3021188327 hasOpenAccess W3021188327 @default.
- W3021188327 hasPrimaryLocation W30211883271 @default.
- W3021188327 hasRelatedWork W1573346329 @default.
- W3021188327 hasRelatedWork W1963950237 @default.
- W3021188327 hasRelatedWork W2062994807 @default.
- W3021188327 hasRelatedWork W2151333624 @default.
- W3021188327 hasRelatedWork W2290548146 @default.
- W3021188327 hasRelatedWork W2405774341 @default.
- W3021188327 hasRelatedWork W2810291168 @default.
- W3021188327 hasRelatedWork W3129072390 @default.
- W3021188327 hasRelatedWork W4221152531 @default.
- W3021188327 hasRelatedWork W2092619848 @default.
- W3021188327 hasVolume "64" @default.
- W3021188327 isParatext "false" @default.
- W3021188327 isRetracted "false" @default.
- W3021188327 magId "3021188327" @default.
- W3021188327 workType "article" @default.
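To pull out just the citation links shown in the listing (the `cites` triples), one could filter on the predicate IRI string rather than hard-coding the ontology namespace. This is a sketch, not the canonical SemOpenAlex query; the exact property IRI may differ from the local name displayed above.

```sparql
# Sketch: retrieve the works cited by W3021188327.
# The string filter avoids assuming the exact ontology namespace;
# the local name "cites" matches the listing above.
SELECT ?cited
WHERE {
  <https://semopenalex.org/work/W3021188327> ?p ?cited .
  FILTER(STRENDS(STR(?p), "cites"))
}
```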