Matches in SemOpenAlex for { <https://semopenalex.org/work/W4285262309> ?p ?o ?g. }
- W4285262309 endingPage "536" @default.
- W4285262309 startingPage "523" @default.
- W4285262309 abstract "Given a degraded low-resolution input image, super-resolution (SR) aims at restoring the lost textures and structures and generating high-resolution image content. Significant advances in image super-resolution have been made lately, dominated by convolutional neural networks (CNNs). The top-performing CNN-based SR networks typically employ very deep models to embrace the benefits of generating spatially precise results, but at the cost of losing long-term contextual information. Additionally, state-of-the-art (SOTA) methods generally fail to maintain the balance between spatial details and contextual information, which is a basic requirement for superior performance in the SR task. For restoration applications such as SR, the overall network generally demands efficient preservation of low-frequency information and reconstruction of high-frequency details. Thus, our work presents a novel architecture with the holistic objective of maintaining spatially precise representations by collecting contextual content and restoring multi-frequency information throughout the network. Our proposed model learns an enriched set of features that, besides combining contextual information from multiple scales, simultaneously preserves the high-resolution spatial details. The core of our approach is a novel non-local and local attention (NLLA) block which focuses on (1) learning enriched features by collecting information from multiple scales, (2) simultaneously handling the different frequency information, and (3) effectively fusing the relevant low-frequency and high-frequency information by ignoring the redundant features. Additionally, for effectively mapping the low-resolution features to high resolution, we propose a novel aggregated attentive up-sampler (AAU) block that attentively learns the weights to up-sample the refined low-resolution feature maps to the high-resolution output. Extensive experiments on the benchmark SR datasets demonstrate that the proposed method achieves appealing performance, both qualitatively and quantitatively." @default.
- W4285262309 created "2022-07-14" @default.
- W4285262309 creator A5032409915 @default.
- W4285262309 creator A5051482926 @default.
- W4285262309 date "2023-04-01" @default.
- W4285262309 modified "2023-10-17" @default.
- W4285262309 title "(MLE$^{2}$A$^{2}$U)-Net: Image Super-Resolution via Multi-Level Edge Embedding and Aggregated Attentive Upsampler Network" @default.
- W4285262309 cites W1791560514 @default.
- W4285262309 cites W1885185971 @default.
- W4285262309 cites W1906770428 @default.
- W4285262309 cites W1930824406 @default.
- W4285262309 cites W1970307692 @default.
- W4285262309 cites W2029684123 @default.
- W4285262309 cites W2047920195 @default.
- W4285262309 cites W2054515210 @default.
- W4285262309 cites W2110158442 @default.
- W4285262309 cites W2133665775 @default.
- W4285262309 cites W2157494358 @default.
- W4285262309 cites W2192954843 @default.
- W4285262309 cites W2194775991 @default.
- W4285262309 cites W2214802144 @default.
- W4285262309 cites W2476548250 @default.
- W4285262309 cites W2535388113 @default.
- W4285262309 cites W2607041014 @default.
- W4285262309 cites W2739757502 @default.
- W4285262309 cites W2747898905 @default.
- W4285262309 cites W2752782242 @default.
- W4285262309 cites W2790538633 @default.
- W4285262309 cites W2795024892 @default.
- W4285262309 cites W2866634454 @default.
- W4285262309 cites W2884585870 @default.
- W4285262309 cites W2887183215 @default.
- W4285262309 cites W2895598217 @default.
- W4285262309 cites W2922509574 @default.
- W4285262309 cites W2947156405 @default.
- W4285262309 cites W2954930822 @default.
- W4285262309 cites W2963031226 @default.
- W4285262309 cites W2963091558 @default.
- W4285262309 cites W2963372104 @default.
- W4285262309 cites W2963610452 @default.
- W4285262309 cites W2963645458 @default.
- W4285262309 cites W2963729050 @default.
- W4285262309 cites W2963986095 @default.
- W4285262309 cites W2964101377 @default.
- W4285262309 cites W2964125708 @default.
- W4285262309 cites W2964277374 @default.
- W4285262309 cites W2986556279 @default.
- W4285262309 cites W3010250471 @default.
- W4285262309 cites W3011914958 @default.
- W4285262309 cites W3034247386 @default.
- W4285262309 cites W3035280441 @default.
- W4285262309 cites W3035302306 @default.
- W4285262309 cites W3047753861 @default.
- W4285262309 cites W3049418625 @default.
- W4285262309 cites W3080742155 @default.
- W4285262309 cites W3083579885 @default.
- W4285262309 cites W3084368948 @default.
- W4285262309 cites W3088103684 @default.
- W4285262309 cites W3089400682 @default.
- W4285262309 cites W3094642822 @default.
- W4285262309 cites W3098546057 @default.
- W4285262309 cites W3104028135 @default.
- W4285262309 cites W3107113572 @default.
- W4285262309 cites W3118490733 @default.
- W4285262309 cites W3119363261 @default.
- W4285262309 cites W3149210910 @default.
- W4285262309 cites W3158383274 @default.
- W4285262309 cites W3170026688 @default.
- W4285262309 cites W3171125843 @default.
- W4285262309 cites W3176287161 @default.
- W4285262309 cites W3178925107 @default.
- W4285262309 cites W3181096156 @default.
- W4285262309 cites W3183995431 @default.
- W4285262309 cites W3193659426 @default.
- W4285262309 cites W3198096356 @default.
- W4285262309 cites W3201414140 @default.
- W4285262309 cites W3204321367 @default.
- W4285262309 cites W3207918547 @default.
- W4285262309 cites W4206054012 @default.
- W4285262309 cites W4206728077 @default.
- W4285262309 cites W4213374277 @default.
- W4285262309 doi "https://doi.org/10.1109/tetci.2022.3182654" @default.
- W4285262309 hasPublicationYear "2023" @default.
- W4285262309 type Work @default.
- W4285262309 citedByCount "1" @default.
- W4285262309 countsByYear W42852623092022 @default.
- W4285262309 crossrefType "journal-article" @default.
- W4285262309 hasAuthorship W4285262309A5032409915 @default.
- W4285262309 hasAuthorship W4285262309A5051482926 @default.
- W4285262309 hasConcept C115961682 @default.
- W4285262309 hasConcept C124101348 @default.
- W4285262309 hasConcept C153180895 @default.
- W4285262309 hasConcept C154945302 @default.
- W4285262309 hasConcept C162307627 @default.
- W4285262309 hasConcept C177264268 @default.
- W4285262309 hasConcept C17744445 @default.
- W4285262309 hasConcept C199360897 @default.
- W4285262309 hasConcept C199539241 @default.