Matches in SemOpenAlex for { <https://semopenalex.org/work/W4312915485> ?p ?o ?g. }
Showing items 1 to 85 of 85, with 100 items per page.
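The quad pattern in the header ({ <work> ?p ?o ?g. }) is how the browser renders a lookup of every predicate-object pair stored for this work. The listing below can be reproduced programmatically; the following is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the third-party SPARQLWrapper package (both assumptions of this sketch, not stated in the listing). The graph variable ?g is dropped, since the rows below only show predicate and object.

```python
# Minimal sketch: re-run the browser's lookup as a plain SPARQL SELECT.
# Assumptions: the public endpoint at https://semopenalex.org/sparql is
# reachable, and SPARQLWrapper is installed (pip install sparqlwrapper).
from SPARQLWrapper import SPARQLWrapper, JSON

WORK = "https://semopenalex.org/work/W4312915485"

sparql = SPARQLWrapper("https://semopenalex.org/sparql")
sparql.setQuery(f"""
    SELECT ?p ?o WHERE {{
        <{WORK}> ?p ?o .
    }}
""")
sparql.setReturnFormat(JSON)

# Each result binding corresponds to one bullet row in the listing below.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```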
- W4312915485 endingPage "544" @default.
- W4312915485 startingPage "532" @default.
- W4312915485 abstract "We live in a digital era - an era of technology, artificial intelligence, big data, and information. The data and information on which we depend for daily tasks and decision-making can become overwhelming and require effective processing. This can be achieved by designing improved and robust automatic text summarization systems, which reduce the size of a text document while retaining its salient information. The resurgence of deep learning, and its progress from Recurrent Neural Networks to deep transformer-based Pretrained Language Models (PLMs) with huge parameter counts and ample world and common-sense knowledge, has driven major improvements across Natural Language Processing tasks, including Abstractive Text Summarization (ATS). This work surveys the scientific literature to explore and analyze recent research on pretrained language models and on abstractive text summarization systems built on them. PLMs on abstractive summarization tasks are analyzed quantitatively, based on ROUGE scores on four standard datasets, while state-of-the-art ATS models are analyzed qualitatively to identify issues and challenges encountered when fine-tuning large PLMs on downstream datasets for abstractive summarization. The survey further highlights techniques that can help boost the performance of these systems. The findings reveal that the better-performing models use one or a combination of four strategies: (1) Domain Adaptation, (2) Model Augmentation, (3) Stable Fine-tuning, and (4) Data Augmentation." @default.
- W4312915485 created "2023-01-05" @default.
- W4312915485 creator A5000614683 @default.
- W4312915485 creator A5002028237 @default.
- W4312915485 creator A5027581753 @default.
- W4312915485 creator A5034535767 @default.
- W4312915485 creator A5080606292 @default.
- W4312915485 date "2022-01-01" @default.
- W4312915485 modified "2023-10-16" @default.
- W4312915485 title "A Survey of Abstractive Text Summarization Utilising Pretrained Language Models" @default.
- W4312915485 cites W2970024416 @default.
- W4312915485 cites W2970419734 @default.
- W4312915485 cites W2998172865 @default.
- W4312915485 cites W2998518989 @default.
- W4312915485 cites W3027246109 @default.
- W4312915485 cites W3034238904 @default.
- W4312915485 cites W3034999214 @default.
- W4312915485 cites W3098136301 @default.
- W4312915485 cites W3099142828 @default.
- W4312915485 cites W3103385281 @default.
- W4312915485 cites W3103417625 @default.
- W4312915485 cites W3116567773 @default.
- W4312915485 cites W3121250149 @default.
- W4312915485 cites W3169565655 @default.
- W4312915485 cites W3170305303 @default.
- W4312915485 cites W3173360659 @default.
- W4312915485 cites W3186252291 @default.
- W4312915485 cites W3198659451 @default.
- W4312915485 doi "https://doi.org/10.1007/978-3-031-21743-2_42" @default.
- W4312915485 hasPublicationYear "2022" @default.
- W4312915485 type Work @default.
- W4312915485 citedByCount "1" @default.
- W4312915485 countsByYear W43129154852023 @default.
- W4312915485 crossrefType "book-chapter" @default.
- W4312915485 hasAuthorship W4312915485A5000614683 @default.
- W4312915485 hasAuthorship W4312915485A5002028237 @default.
- W4312915485 hasAuthorship W4312915485A5027581753 @default.
- W4312915485 hasAuthorship W4312915485A5034535767 @default.
- W4312915485 hasAuthorship W4312915485A5080606292 @default.
- W4312915485 hasConcept C108583219 @default.
- W4312915485 hasConcept C119857082 @default.
- W4312915485 hasConcept C121332964 @default.
- W4312915485 hasConcept C137293760 @default.
- W4312915485 hasConcept C154945302 @default.
- W4312915485 hasConcept C165801399 @default.
- W4312915485 hasConcept C170858558 @default.
- W4312915485 hasConcept C195324797 @default.
- W4312915485 hasConcept C204321447 @default.
- W4312915485 hasConcept C2779439875 @default.
- W4312915485 hasConcept C2780719617 @default.
- W4312915485 hasConcept C41008148 @default.
- W4312915485 hasConcept C62520636 @default.
- W4312915485 hasConcept C66322947 @default.
- W4312915485 hasConceptScore W4312915485C108583219 @default.
- W4312915485 hasConceptScore W4312915485C119857082 @default.
- W4312915485 hasConceptScore W4312915485C121332964 @default.
- W4312915485 hasConceptScore W4312915485C137293760 @default.
- W4312915485 hasConceptScore W4312915485C154945302 @default.
- W4312915485 hasConceptScore W4312915485C165801399 @default.
- W4312915485 hasConceptScore W4312915485C170858558 @default.
- W4312915485 hasConceptScore W4312915485C195324797 @default.
- W4312915485 hasConceptScore W4312915485C204321447 @default.
- W4312915485 hasConceptScore W4312915485C2779439875 @default.
- W4312915485 hasConceptScore W4312915485C2780719617 @default.
- W4312915485 hasConceptScore W4312915485C41008148 @default.
- W4312915485 hasConceptScore W4312915485C62520636 @default.
- W4312915485 hasConceptScore W4312915485C66322947 @default.
- W4312915485 hasLocation W43129154851 @default.
- W4312915485 hasOpenAccess W4312915485 @default.
- W4312915485 hasPrimaryLocation W43129154851 @default.
- W4312915485 hasRelatedWork W2058609994 @default.
- W4312915485 hasRelatedWork W2710833826 @default.
- W4312915485 hasRelatedWork W2907846330 @default.
- W4312915485 hasRelatedWork W2977842567 @default.
- W4312915485 hasRelatedWork W3115006989 @default.
- W4312915485 hasRelatedWork W3207693618 @default.
- W4312915485 hasRelatedWork W4289107476 @default.
- W4312915485 hasRelatedWork W4316012698 @default.
- W4312915485 hasRelatedWork W4362598702 @default.
- W4312915485 hasRelatedWork W4385570271 @default.
- W4312915485 isParatext "false" @default.
- W4312915485 isRetracted "false" @default.
- W4312915485 workType "book-chapter" @default.
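Each cites, hasConcept, or hasRelatedWork row above is a separate triple, so a single slice of the record can be pulled by binding the predicate explicitly instead of using the open ?p. A sketch under the same assumptions as above, plus one more: that SemOpenAlex models the cites rows with the CiTO property <http://purl.org/spar/cito/cites> (an assumption to verify against the SemOpenAlex ontology).

```python
# Sketch: fetch only the cited works, rather than all 85 triples.
# The cito:cites IRI is an assumption about how SemOpenAlex encodes the
# "cites" rows above; verify it against the SemOpenAlex ontology.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://semopenalex.org/sparql")
sparql.setQuery("""
    PREFIX cito: <http://purl.org/spar/cito/>
    SELECT ?cited WHERE {
        <https://semopenalex.org/work/W4312915485> cito:cites ?cited .
    }
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["cited"]["value"])  # should match the 18 "cites" rows above
```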