Matches in SemOpenAlex for { <https://semopenalex.org/work/W4321483886> ?p ?o ?g. }
- W4321483886 endingPage "146" @default.
- W4321483886 startingPage "133" @default.
- W4321483886 abstract "Convolutional neural network (CNN)-based reconstruction methods have dominated compressive sensing (CS) in recent years. However, existing CNN-based approaches are limited in capturing the non-local similarity of images because of the intrinsic characteristics of convolutional layers, i.e., locality and weight sharing. In parallel, the emerging Transformer architecture shows a strong capacity for modeling long-distance correlations over embedded tokens for both language and images. Yet the vanilla Transformer does not considerably exceed CNN-based networks and achieves only roughly comparable performance; the likely culprit is the missing inductive bias regarding local image structures. In this article, to overcome the restrictions of both paradigms, we propose a Transformer-based hierarchical framework, dubbed TCS-Net, for compressive image sensing (image CS) in a patch-to-pixel manner. Concretely, the proposed TCS-Net consists of an image acquisition module and a reconstruction module, where the latter comprises two key decoding phases: a patch-wise decoding phase and a pixel-wise decoding phase. The acquisition module implements data-driven image sampling by being jointly learned with the decoding phases. By adapting the Transformer architecture to the patch-to-pixel multi-stage pattern, the reconstruction module gradually decodes the CS measurements from patch-wise outlines to pixel-wise textures, thereby building a high-precision mapping for image reconstruction. Extensive experiments on several datasets verify that the proposed TCS-Net outperforms existing state-of-the-art image CS methods by considerable margins." @default.
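The abstract describes a pipeline of data-driven sampling followed by patch-wise and pixel-wise decoding. Below is a minimal, hedged PyTorch-style sketch of that general idea, not the authors' TCS-Net implementation: the class name, patch size, sampling ratio, and all layer widths are illustrative assumptions, and the pixel-wise phase is reduced to a single linear projection for brevity.

```python
# Hedged sketch (not the authors' code): a patch-to-pixel style CS pipeline.
# All module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyPatchToPixelSketch(nn.Module):
    def __init__(self, patch=32, ratio=0.1, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        n_pix = patch * patch
        n_meas = max(1, int(n_pix * ratio))
        # Acquisition: data-driven sampling as a learned linear projection per flattened patch.
        self.sample = nn.Linear(n_pix, n_meas, bias=False)
        # Patch-wise decoding: embed measurements as tokens and refine them with self-attention.
        self.embed = nn.Linear(n_meas, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.patch_decoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Pixel-wise decoding (simplified): map each refined token back to its patch of pixels.
        self.to_pixels = nn.Linear(d_model, n_pix)

    def forward(self, patches):
        # patches: (batch, n_patches, patch*patch), values in [0, 1]
        y = self.sample(patches)             # CS measurements per patch
        tokens = self.embed(y)               # patch-wise tokens
        tokens = self.patch_decoder(tokens)  # long-range refinement across patches
        return self.to_pixels(tokens)        # coarse pixel-wise reconstruction

if __name__ == "__main__":
    net = TinyPatchToPixelSketch()
    x = torch.rand(2, 64, 32 * 32)           # 2 images, 64 patches each
    print(net(x).shape)                       # torch.Size([2, 64, 1024])
```

Because the sampling projection and both decoding stages sit in one module, training end to end (e.g., with an MSE loss against the original patches) jointly learns the measurement matrix and the reconstruction mapping, which is the "joint learning" property the abstract attributes to the acquisition module.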
- W4321483886 created "2023-02-23" @default.
- W4321483886 creator A5007532242 @default.
- W4321483886 creator A5009136223 @default.
- W4321483886 creator A5038058634 @default.
- W4321483886 creator A5045082735 @default.
- W4321483886 creator A5071986802 @default.
- W4321483886 date "2023-01-01" @default.
- W4321483886 modified "2023-10-17" @default.
- W4321483886 title "From Patch to Pixel: A Transformer-Based Hierarchical Framework for Compressive Image Sensing" @default.
- W4321483886 cites W1912194039 @default.
- W4321483886 cites W2045737896 @default.
- W4321483886 cites W2100556411 @default.
- W4321483886 cites W2110158442 @default.
- W4321483886 cites W2121927366 @default.
- W4321483886 cites W2145096794 @default.
- W4321483886 cites W2161907179 @default.
- W4321483886 cites W2273561594 @default.
- W4321483886 cites W2326013034 @default.
- W4321483886 cites W2589027757 @default.
- W4321483886 cites W2590877996 @default.
- W4321483886 cites W2618530766 @default.
- W4321483886 cites W2798559986 @default.
- W4321483886 cites W2884144629 @default.
- W4321483886 cites W2902719825 @default.
- W4321483886 cites W2904103769 @default.
- W4321483886 cites W2904577146 @default.
- W4321483886 cites W2963081547 @default.
- W4321483886 cites W2963814095 @default.
- W4321483886 cites W2964082260 @default.
- W4321483886 cites W2970415236 @default.
- W4321483886 cites W2981551308 @default.
- W4321483886 cites W2990787829 @default.
- W4321483886 cites W2998853723 @default.
- W4321483886 cites W3009991223 @default.
- W4321483886 cites W3014707848 @default.
- W4321483886 cites W3033733028 @default.
- W4321483886 cites W3087291587 @default.
- W4321483886 cites W3102722370 @default.
- W4321483886 cites W3103321200 @default.
- W4321483886 cites W3115447952 @default.
- W4321483886 cites W3138516171 @default.
- W4321483886 cites W3161747711 @default.
- W4321483886 cites W3170881570 @default.
- W4321483886 cites W3207918547 @default.
- W4321483886 cites W4205523161 @default.
- W4321483886 cites W4226069877 @default.
- W4321483886 cites W4293193211 @default.
- W4321483886 cites W4300263211 @default.
- W4321483886 cites W4313026462 @default.
- W4321483886 doi "https://doi.org/10.1109/tci.2023.3244396" @default.
- W4321483886 hasPublicationYear "2023" @default.
- W4321483886 type Work @default.
- W4321483886 citedByCount "0" @default.
- W4321483886 crossrefType "journal-article" @default.
- W4321483886 hasAuthorship W4321483886A5007532242 @default.
- W4321483886 hasAuthorship W4321483886A5009136223 @default.
- W4321483886 hasAuthorship W4321483886A5038058634 @default.
- W4321483886 hasAuthorship W4321483886A5045082735 @default.
- W4321483886 hasAuthorship W4321483886A5071986802 @default.
- W4321483886 hasConcept C11413529 @default.
- W4321483886 hasConcept C119599485 @default.
- W4321483886 hasConcept C124851039 @default.
- W4321483886 hasConcept C127413603 @default.
- W4321483886 hasConcept C141379421 @default.
- W4321483886 hasConcept C153180895 @default.
- W4321483886 hasConcept C154945302 @default.
- W4321483886 hasConcept C160633673 @default.
- W4321483886 hasConcept C165801399 @default.
- W4321483886 hasConcept C41008148 @default.
- W4321483886 hasConcept C57273362 @default.
- W4321483886 hasConcept C66322947 @default.
- W4321483886 hasConcept C81363708 @default.
- W4321483886 hasConceptScore W4321483886C11413529 @default.
- W4321483886 hasConceptScore W4321483886C119599485 @default.
- W4321483886 hasConceptScore W4321483886C124851039 @default.
- W4321483886 hasConceptScore W4321483886C127413603 @default.
- W4321483886 hasConceptScore W4321483886C141379421 @default.
- W4321483886 hasConceptScore W4321483886C153180895 @default.
- W4321483886 hasConceptScore W4321483886C154945302 @default.
- W4321483886 hasConceptScore W4321483886C160633673 @default.
- W4321483886 hasConceptScore W4321483886C165801399 @default.
- W4321483886 hasConceptScore W4321483886C41008148 @default.
- W4321483886 hasConceptScore W4321483886C57273362 @default.
- W4321483886 hasConceptScore W4321483886C66322947 @default.
- W4321483886 hasConceptScore W4321483886C81363708 @default.
- W4321483886 hasFunder F4320321001 @default.
- W4321483886 hasFunder F4320335787 @default.
- W4321483886 hasFunder F4320336567 @default.
- W4321483886 hasLocation W43214838861 @default.
- W4321483886 hasOpenAccess W4321483886 @default.
- W4321483886 hasPrimaryLocation W43214838861 @default.
- W4321483886 hasRelatedWork W2136485282 @default.
- W4321483886 hasRelatedWork W2175746458 @default.
- W4321483886 hasRelatedWork W2546871836 @default.
- W4321483886 hasRelatedWork W2613736958 @default.
- W4321483886 hasRelatedWork W2726121760 @default.
- W4321483886 hasRelatedWork W2732542196 @default.