Matches in SemOpenAlex for { <https://semopenalex.org/work/W4377130487> ?p ?o ?g. }
Showing items 1 to 57 of 57, with 100 items per page.
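The triples below match the basic graph pattern in the header. As a minimal sketch of how to reproduce this listing programmatically, the Python snippet below queries the same pattern for this work; it assumes the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the SPARQLWrapper package, neither of which is stated in the listing itself.

```python
# Minimal sketch: fetch all predicate/object pairs for work W4377130487.
# Assumptions (not part of the listing above): the endpoint URL below and
# the availability of the SPARQLWrapper package.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

QUERY = """
SELECT ?p ?o
WHERE {
  <https://semopenalex.org/work/W4377130487> ?p ?o .
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```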
- W4377130487 abstract "Recently, fine-tuning large pre-trained Transformer models using downstream datasets has received rising interest. Despite their success, it is still challenging to disentangle the benefits of large-scale datasets and Transformer structures from the limitations of the pre-training. In this paper, we introduce a hierarchical training approach, named self-pretraining, in which Transformer models are pre-trained and fine-tuned on the same dataset. Three pre-trained models, including HuBERT, Conformer, and WavLM, are evaluated on four speaker verification datasets of varying sizes. Our experiments show that these self-pretrained models achieve competitive performance on downstream speaker verification tasks with only one-third of the data, such as VoxCeleb1 and CNCeleb1, compared to Librispeech pretraining. Furthermore, when pre-training only on VoxCeleb2-dev, the Conformer model outperforms the one pre-trained on 94k hours of data using the same fine-tuning settings." @default.
- W4377130487 created "2023-05-21" @default.
- W4377130487 creator A5037061214 @default.
- W4377130487 creator A5042273299 @default.
- W4377130487 creator A5045539248 @default.
- W4377130487 creator A5055201620 @default.
- W4377130487 creator A5061939508 @default.
- W4377130487 creator A5064238041 @default.
- W4377130487 date "2023-05-17" @default.
- W4377130487 modified "2023-09-26" @default.
- W4377130487 title "Improving Speaker Verification with Self-Pretrained Transformer Models" @default.
- W4377130487 doi "https://doi.org/10.48550/arxiv.2305.10517" @default.
- W4377130487 hasPublicationYear "2023" @default.
- W4377130487 type Work @default.
- W4377130487 citedByCount "0" @default.
- W4377130487 crossrefType "posted-content" @default.
- W4377130487 hasAuthorship W4377130487A5037061214 @default.
- W4377130487 hasAuthorship W4377130487A5042273299 @default.
- W4377130487 hasAuthorship W4377130487A5045539248 @default.
- W4377130487 hasAuthorship W4377130487A5055201620 @default.
- W4377130487 hasAuthorship W4377130487A5061939508 @default.
- W4377130487 hasAuthorship W4377130487A5064238041 @default.
- W4377130487 hasBestOaLocation W43771304871 @default.
- W4377130487 hasConcept C119857082 @default.
- W4377130487 hasConcept C121332964 @default.
- W4377130487 hasConcept C154945302 @default.
- W4377130487 hasConcept C165801399 @default.
- W4377130487 hasConcept C28490314 @default.
- W4377130487 hasConcept C41008148 @default.
- W4377130487 hasConcept C51632099 @default.
- W4377130487 hasConcept C62520636 @default.
- W4377130487 hasConcept C66322947 @default.
- W4377130487 hasConceptScore W4377130487C119857082 @default.
- W4377130487 hasConceptScore W4377130487C121332964 @default.
- W4377130487 hasConceptScore W4377130487C154945302 @default.
- W4377130487 hasConceptScore W4377130487C165801399 @default.
- W4377130487 hasConceptScore W4377130487C28490314 @default.
- W4377130487 hasConceptScore W4377130487C41008148 @default.
- W4377130487 hasConceptScore W4377130487C51632099 @default.
- W4377130487 hasConceptScore W4377130487C62520636 @default.
- W4377130487 hasConceptScore W4377130487C66322947 @default.
- W4377130487 hasLocation W43771304871 @default.
- W4377130487 hasOpenAccess W4377130487 @default.
- W4377130487 hasPrimaryLocation W43771304871 @default.
- W4377130487 hasRelatedWork W2961085424 @default.
- W4377130487 hasRelatedWork W3046775127 @default.
- W4377130487 hasRelatedWork W3107474891 @default.
- W4377130487 hasRelatedWork W3170094116 @default.
- W4377130487 hasRelatedWork W4205958290 @default.
- W4377130487 hasRelatedWork W4285260836 @default.
- W4377130487 hasRelatedWork W4286629047 @default.
- W4377130487 hasRelatedWork W4306321456 @default.
- W4377130487 hasRelatedWork W4306674287 @default.
- W4377130487 hasRelatedWork W4224009465 @default.
- W4377130487 isParatext "false" @default.
- W4377130487 isRetracted "false" @default.
- W4377130487 workType "article" @default.
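The object values shown as bare identifiers (A…, W…, C…) are SemOpenAlex entity IDs. As a small illustrative sketch, they can be expanded into dereferenceable URIs; the author and concept prefixes below are an assumption, mirroring the work URI pattern shown in the query header rather than anything stated in this listing.

```python
# Sketch: expand bare entity IDs from the listing into full SemOpenAlex URIs.
# Assumption: author and concept IDs follow the same URI scheme as the work URI
# in the query header (https://semopenalex.org/work/W4377130487), with only the
# path segment differing. Verify against the live data before relying on this.
PREFIXES = {
    "A": "https://semopenalex.org/author/",
    "W": "https://semopenalex.org/work/",
    "C": "https://semopenalex.org/concept/",
}

def to_uri(entity_id: str) -> str:
    """Map an ID such as 'A5037061214' or 'W2961085424' to a dereferenceable URI."""
    prefix = PREFIXES.get(entity_id[0])
    return prefix + entity_id if prefix else entity_id

print(to_uri("A5037061214"))  # first creator listed above
print(to_uri("W2961085424"))  # first related work listed above
```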