Matches in SemOpenAlex for { <https://semopenalex.org/work/W3127274523> ?p ?o ?g. }
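A listing like the one below can be retrieved programmatically from SemOpenAlex. The following is a minimal sketch, assuming the public SPARQL endpoint at https://semopenalex.org/sparql (an assumption; verify the current endpoint) and the SPARQLWrapper library; it simply runs the property/object pattern shown above for this work.

```python
# Minimal sketch: fetch all (predicate, object) pairs for work W3127274523
# from the SemOpenAlex SPARQL endpoint. The endpoint URL is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"           # assumed public endpoint
WORK = "https://semopenalex.org/work/W3127274523"

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(f"""
    SELECT ?p ?o WHERE {{
        <{WORK}> ?p ?o .
    }}
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```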
- W3127274523 endingPage "114652" @default.
- W3127274523 startingPage "114652" @default.
- W3127274523 abstract "Humans must be able to handle the vast amounts of data generated by the information technology revolution. Automatic text summarization has therefore been applied in many domains to find the most relevant information and support quick, critical decisions. In the context of Arabic, text summarization techniques suffer from several problems. First, most existing methods do not consider the context or domain to which a document belongs. Second, the majority of existing approaches rely on the traditional bag-of-words representation, which produces high-dimensional, sparse data and makes it difficult to capture relevant information. Third, research on Arabic text summarization is fairly limited and relatively recent compared with work on Anglo-Saxon and other languages, owing to the shortage of Arabic corpora, resources, and automatic processing tools. In this paper, we address these limitations by proposing a new approach that combines document clustering, topic modeling, and unsupervised neural networks to build an efficient document representation model. First, a new document clustering technique based on the Extreme Learning Machine is applied to a large text collection. Second, topic modeling is applied to the document collection to identify the topics present in each cluster. Third, each document is represented in a topic space by a matrix whose rows correspond to the document sentences and whose columns correspond to the cluster topics. The generated matrix is then fed to several unsupervised neural networks and ensemble learning algorithms to build an abstract representation of the document in the concept space. Important sentences are ranked and extracted using a graph model with a redundancy elimination component. The proposed approach is evaluated on the Essex Arabic Summaries Corpus and compared against other Arabic text summarization approaches using the ROUGE measure. Experimental results show that models trained on the topic representation learn better representations and significantly improve summarization performance. In particular, ensemble learning models demonstrated an important improvement in ROUGE recall and promising results on F-measure." @default. (a code sketch of the topic-space ranking step described here is given after the listing below)
- W3127274523 created "2021-02-15" @default.
- W3127274523 creator A5002879528 @default.
- W3127274523 creator A5004210985 @default.
- W3127274523 creator A5041658383 @default.
- W3127274523 creator A5060823204 @default.
- W3127274523 creator A5071643748 @default.
- W3127274523 date "2021-06-01" @default.
- W3127274523 modified "2023-10-07" @default.
- W3127274523 title "Unsupervised neural networks for automatic Arabic text summarization using document clustering and topic modeling" @default.
- W3127274523 cites W1040822411 @default.
- W3127274523 cites W1790954942 @default.
- W3127274523 cites W1974339500 @default.
- W3127274523 cites W1994472257 @default.
- W3127274523 cites W1997036897 @default.
- W3127274523 cites W2013668369 @default.
- W3127274523 cites W2024381473 @default.
- W3127274523 cites W2037414258 @default.
- W3127274523 cites W2044403794 @default.
- W3127274523 cites W2046729820 @default.
- W3127274523 cites W2066636486 @default.
- W3127274523 cites W2074765505 @default.
- W3127274523 cites W2100495367 @default.
- W3127274523 cites W2118364625 @default.
- W3127274523 cites W2141211247 @default.
- W3127274523 cites W2154053567 @default.
- W3127274523 cites W2166347079 @default.
- W3127274523 cites W2281052917 @default.
- W3127274523 cites W2413317570 @default.
- W3127274523 cites W2508429489 @default.
- W3127274523 cites W2530979053 @default.
- W3127274523 cites W2536583325 @default.
- W3127274523 cites W2552124255 @default.
- W3127274523 cites W2572043993 @default.
- W3127274523 cites W2597328883 @default.
- W3127274523 cites W2727551782 @default.
- W3127274523 cites W2734779477 @default.
- W3127274523 cites W2768163028 @default.
- W3127274523 cites W2776249353 @default.
- W3127274523 cites W2790702239 @default.
- W3127274523 cites W2792089754 @default.
- W3127274523 cites W2795243228 @default.
- W3127274523 cites W2898069881 @default.
- W3127274523 cites W2909602489 @default.
- W3127274523 cites W2946787236 @default.
- W3127274523 cites W2948582464 @default.
- W3127274523 cites W2993529567 @default.
- W3127274523 cites W3004795901 @default.
- W3127274523 cites W3040359118 @default.
- W3127274523 cites W30790492 @default.
- W3127274523 doi "https://doi.org/10.1016/j.eswa.2021.114652" @default.
- W3127274523 hasPublicationYear "2021" @default.
- W3127274523 type Work @default.
- W3127274523 sameAs 3127274523 @default.
- W3127274523 citedByCount "22" @default.
- W3127274523 countsByYear W31272745232021 @default.
- W3127274523 countsByYear W31272745232022 @default.
- W3127274523 countsByYear W31272745232023 @default.
- W3127274523 crossrefType "journal-article" @default.
- W3127274523 hasAuthorship W3127274523A5002879528 @default.
- W3127274523 hasAuthorship W3127274523A5004210985 @default.
- W3127274523 hasAuthorship W3127274523A5041658383 @default.
- W3127274523 hasAuthorship W3127274523A5060823204 @default.
- W3127274523 hasAuthorship W3127274523A5071643748 @default.
- W3127274523 hasConcept C134714966 @default.
- W3127274523 hasConcept C151730666 @default.
- W3127274523 hasConcept C154945302 @default.
- W3127274523 hasConcept C170858558 @default.
- W3127274523 hasConcept C17744445 @default.
- W3127274523 hasConcept C177937566 @default.
- W3127274523 hasConcept C199539241 @default.
- W3127274523 hasConcept C204321447 @default.
- W3127274523 hasConcept C23123220 @default.
- W3127274523 hasConcept C2776359362 @default.
- W3127274523 hasConcept C2779343474 @default.
- W3127274523 hasConcept C2779500292 @default.
- W3127274523 hasConcept C41008148 @default.
- W3127274523 hasConcept C50644808 @default.
- W3127274523 hasConcept C73555534 @default.
- W3127274523 hasConcept C86803240 @default.
- W3127274523 hasConcept C94625758 @default.
- W3127274523 hasConceptScore W3127274523C134714966 @default.
- W3127274523 hasConceptScore W3127274523C151730666 @default.
- W3127274523 hasConceptScore W3127274523C154945302 @default.
- W3127274523 hasConceptScore W3127274523C170858558 @default.
- W3127274523 hasConceptScore W3127274523C17744445 @default.
- W3127274523 hasConceptScore W3127274523C177937566 @default.
- W3127274523 hasConceptScore W3127274523C199539241 @default.
- W3127274523 hasConceptScore W3127274523C204321447 @default.
- W3127274523 hasConceptScore W3127274523C23123220 @default.
- W3127274523 hasConceptScore W3127274523C2776359362 @default.
- W3127274523 hasConceptScore W3127274523C2779343474 @default.
- W3127274523 hasConceptScore W3127274523C2779500292 @default.
- W3127274523 hasConceptScore W3127274523C41008148 @default.
- W3127274523 hasConceptScore W3127274523C50644808 @default.
- W3127274523 hasConceptScore W3127274523C73555534 @default.
- W3127274523 hasConceptScore W3127274523C86803240 @default.
- W3127274523 hasConceptScore W3127274523C94625758 @default.
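The abstract above describes a pipeline that represents each sentence in a topic space and then ranks sentences with a graph model plus redundancy elimination. The sketch below is not the paper's method (it omits the Extreme Learning Machine clustering, the unsupervised neural networks, and the ensemble step); it only illustrates the topic-space representation and graph-based ranking idea, assuming scikit-learn's LDA for topics, networkx PageRank for ranking, and a simple greedy redundancy filter. All names and thresholds are illustrative.

```python
# Minimal sketch of the topic-space + graph-ranking idea from the abstract.
# NOT the paper's exact method: LDA and PageRank stand in for the topic
# and ranking components; there is no ELM clustering or neural encoding.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity


def summarize(sentences, n_topics=10, n_select=3, redundancy_threshold=0.7):
    """Rank sentences in a topic space and pick a low-redundancy subset."""
    # Sentence-term matrix (bag of words is only a stand-in here).
    counts = CountVectorizer().fit_transform(sentences)

    # Topic space: rows = sentences, columns = topics (cf. the abstract's
    # sentence-by-topic matrix, which the paper builds per cluster).
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_matrix = lda.fit_transform(counts)

    # Graph model: nodes = sentences, edges weighted by topic-space similarity.
    sim = cosine_similarity(topic_matrix)
    np.fill_diagonal(sim, 0.0)
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph, weight="weight")

    # Redundancy elimination: greedily keep top-ranked sentences that are not
    # too similar to anything already selected.
    selected = []
    for idx in sorted(scores, key=scores.get, reverse=True):
        if all(sim[idx, j] < redundancy_threshold for j in selected):
            selected.append(idx)
        if len(selected) == n_select:
            break
    return [sentences[i] for i in sorted(selected)]
```

In the paper itself, the sentence-by-topic matrix is further encoded by unsupervised neural networks and ensemble learning before ranking, and summaries are evaluated with ROUGE on the Essex Arabic Summaries Corpus; this sketch covers only the representation-and-ranking skeleton.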