Matches in SemOpenAlex for { <https://semopenalex.org/work/W3086963535> ?p ?o ?g. }
Showing items 1 to 81 of 81, with 100 items per page.
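The quad pattern in the header above can be reproduced programmatically against a SPARQL endpoint. A minimal sketch, assuming the public SemOpenAlex endpoint at `https://semopenalex.org/sparql` (the endpoint URL and the JSON results shape are assumptions, not taken from this page; `build_query` and `fetch_triples` are illustrative names):

```python
import json
import urllib.parse
import urllib.request

# Assumed public SPARQL endpoint for SemOpenAlex -- verify against its docs.
ENDPOINT = "https://semopenalex.org/sparql"


def build_query(work_id: str) -> str:
    """Return a SPARQL query listing every (?p, ?o, ?g) for the given work,
    mirroring the { <work> ?p ?o ?g. } pattern in the header above."""
    return (
        "SELECT ?p ?o ?g WHERE { GRAPH ?g { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "} }"
    )


def fetch_triples(work_id: str):
    """POST the query and return the JSON result bindings (needs network)."""
    data = urllib.parse.urlencode({"query": build_query(work_id)}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```

Each binding in the returned list would carry the predicate, object, and named graph for one row of the listing below.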
- W3086963535 endingPage "181292" @default.
- W3086963535 startingPage "181258" @default.
- W3086963535 abstract "Massive textual content has enabled rapid advances in natural language modeling. The use of pre-trained deep neural language models has significantly improved natural language understanding tasks. However, the extent to which these systems can be applied to content generation is unclear. While a few informal studies have claimed that these models can generate ‘high quality’ readable content, there is no prior study analyzing the content generated by these models across sampling and fine-tuning hyperparameters. We conduct an in-depth comparison of several language models for open-ended story generation from given prompts. Using a diverse set of automated metrics, we compare the performance of transformer-based generative models – OpenAI’s GPT2 (pre-trained and fine-tuned) and Google’s pre-trained Transformer-XL and XLNet – to human-written textual references. Studying inter-metric correlation along with metric ranking reveals interesting insights – the high correlation between the readability scores and word usage in the text. A study of the statistical significance and empirical evaluations between the scores (human- and machine-generated) at higher sampling hyperparameter combinations ($t=\{0.75, 1.0\}$, $k=\{100, 150, 250\}$) reveals that samples from the top pre-trained and fine-tuned models condition well on the prompt, with an increased occurrence of unique and difficult words. The GPT2-medium model fine-tuned on the 1024 Byte-Pair Encoding (BPE) tokenized version of the dataset, along with the pre-trained Transformer-XL models, generated samples close to human-written content on three metrics: prompt-based overlap, coherence, and variation in sentence length. A study of overall model stability and performance shows that fine-tuned GPT2 language models have the least deviation in metric scores from human performance." @default.
- W3086963535 created "2020-09-21" @default.
- W3086963535 creator A5050592031 @default.
- W3086963535 creator A5065790431 @default.
- W3086963535 date "2020-01-01" @default.
- W3086963535 modified "2023-10-13" @default.
- W3086963535 title "Can Machines Tell Stories? A Comparative Study of Deep Neural Language Models and Metrics" @default.
- W3086963535 cites W1507711477 @default.
- W3086963535 cites W1982897610 @default.
- W3086963535 cites W2068390867 @default.
- W3086963535 cites W2072471709 @default.
- W3086963535 cites W2135046866 @default.
- W3086963535 cites W2143017621 @default.
- W3086963535 cites W2250539671 @default.
- W3086963535 cites W2561658355 @default.
- W3086963535 cites W2563845258 @default.
- W3086963535 cites W2598692538 @default.
- W3086963535 cites W2605035112 @default.
- W3086963535 cites W2752337926 @default.
- W3086963535 cites W2807738734 @default.
- W3086963535 cites W2807791032 @default.
- W3086963535 cites W2807925339 @default.
- W3086963535 cites W2808064329 @default.
- W3086963535 cites W2810732773 @default.
- W3086963535 cites W2889009749 @default.
- W3086963535 cites W2914949666 @default.
- W3086963535 cites W2962739339 @default.
- W3086963535 cites W2962784628 @default.
- W3086963535 cites W2962821399 @default.
- W3086963535 cites W2963096510 @default.
- W3086963535 cites W2963167310 @default.
- W3086963535 cites W2963544700 @default.
- W3086963535 cites W2963672599 @default.
- W3086963535 cites W2964110616 @default.
- W3086963535 cites W2964213788 @default.
- W3086963535 cites W2983962589 @default.
- W3086963535 cites W2992347006 @default.
- W3086963535 doi "https://doi.org/10.1109/access.2020.3023421" @default.
- W3086963535 hasPublicationYear "2020" @default.
- W3086963535 type Work @default.
- W3086963535 sameAs 3086963535 @default.
- W3086963535 citedByCount "8" @default.
- W3086963535 countsByYear W30869635352021 @default.
- W3086963535 countsByYear W30869635352022 @default.
- W3086963535 countsByYear W30869635352023 @default.
- W3086963535 crossrefType "journal-article" @default.
- W3086963535 hasAuthorship W3086963535A5050592031 @default.
- W3086963535 hasAuthorship W3086963535A5065790431 @default.
- W3086963535 hasBestOaLocation W30869635351 @default.
- W3086963535 hasConcept C119857082 @default.
- W3086963535 hasConcept C154945302 @default.
- W3086963535 hasConcept C204321447 @default.
- W3086963535 hasConcept C41008148 @default.
- W3086963535 hasConcept C50644808 @default.
- W3086963535 hasConceptScore W3086963535C119857082 @default.
- W3086963535 hasConceptScore W3086963535C154945302 @default.
- W3086963535 hasConceptScore W3086963535C204321447 @default.
- W3086963535 hasConceptScore W3086963535C41008148 @default.
- W3086963535 hasConceptScore W3086963535C50644808 @default.
- W3086963535 hasFunder F4320306076 @default.
- W3086963535 hasFunder F4320338281 @default.
- W3086963535 hasLocation W30869635351 @default.
- W3086963535 hasOpenAccess W3086963535 @default.
- W3086963535 hasPrimaryLocation W30869635351 @default.
- W3086963535 hasRelatedWork W2611614995 @default.
- W3086963535 hasRelatedWork W2961085424 @default.
- W3086963535 hasRelatedWork W3046775127 @default.
- W3086963535 hasRelatedWork W3107474891 @default.
- W3086963535 hasRelatedWork W4205958290 @default.
- W3086963535 hasRelatedWork W4286629047 @default.
- W3086963535 hasRelatedWork W4306321456 @default.
- W3086963535 hasRelatedWork W4306674287 @default.
- W3086963535 hasRelatedWork W1629725936 @default.
- W3086963535 hasRelatedWork W4224009465 @default.
- W3086963535 hasVolume "8" @default.
- W3086963535 isParatext "false" @default.
- W3086963535 isRetracted "false" @default.
- W3086963535 magId "3086963535" @default.
- W3086963535 workType "article" @default.
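The $(t, k)$ sweeps mentioned in the abstract refer to temperature and top-$k$ sampling during generation. A minimal, self-contained sketch of that sampling step (the function name and pure-Python implementation are illustrative, not taken from the paper):

```python
import math
import random


def top_k_temperature_sample(logits, k=100, t=0.75, rng=random):
    """Sample a token index from `logits` after top-k filtering and
    temperature scaling, as in the abstract's (t, k) combinations."""
    # Keep only the k highest-scoring token indices.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature-scale the surviving logits (t < 1 sharpens, t > 1 flattens).
    scaled = [logits[i] / t for i in top]
    # Numerically stable softmax over the filtered logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index proportionally to the filtered distribution.
    return rng.choices(top, weights=probs, k=1)[0]
```

With $k=1$ this reduces to greedy decoding (always the argmax token); larger $k$ and higher $t$ admit rarer tokens, which is the regime the abstract associates with more unique and difficult words.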