Matches in SemOpenAlex for { <https://semopenalex.org/work/W4224266925> ?p ?o ?g. }
- W4224266925 abstract "<sec> <title>BACKGROUND</title> As a popular probabilistic generative model, the generative adversarial network (GAN) has been successfully used not only in natural image processing but also in medical image analysis and computer-aided diagnosis. Despite its various advantages, the application of GANs to medical image analysis faces new challenges. The introduction of attention mechanisms, which resemble the human visual system in focusing on task-related local image areas to extract specific information, has drawn increasing interest. Recently proposed transformer-based architectures leverage the self-attention mechanism to encode long-range dependencies and learn highly expressive representations. This motivates us to summarize the applications of transformer-based GANs in medical image analysis. </sec> <sec> <title>OBJECTIVE</title> This review aimed to survey both GANs and attention mechanisms with particular application to medical image analysis. It also provided a brief introduction to some well-known GAN variations and attention mechanisms. </sec> <sec> <title>METHODS</title> To organize this review comprehensively, we conducted a literature search in PubMed, arXiv, Society of Photo-Optical Instrumentation Engineers (SPIE) Medical Imaging, IEEE International Symposium on Biomedical Imaging (ISBI), and Medical Imaging with Deep Learning (MIDL). We set the search deadline to March 30, 2022. </sec> <sec> <title>RESULTS</title> We reviewed recent advances in techniques combining various attention modules with different adversarial training schemes, and their applications in medical image segmentation, synthesis, and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model to detect lesion areas and precisely extract diagnosis-related feature information, thus providing a useful tool for medical image processing and diagnosis.
</sec> <sec> <title>CONCLUSIONS</title> This review indicates that research on GANs and attention mechanisms for medical image analysis is still at an early stage despite their great potential. We highlight that the attention-based generative adversarial network is an efficient and promising computational model for advancing future research and applications in medical image analysis. </sec>" @default.
- W4224266925 created "2022-04-26" @default.
- W4224266925 creator A5003759585 @default.
- W4224266925 creator A5054496800 @default.
- W4224266925 creator A5060950641 @default.
- W4224266925 creator A5067495807 @default.
- W4224266925 date "2022-04-06" @default.
- W4224266925 modified "2023-09-27" @default.
- W4224266925 title "Attention-based Generative Adversarial Network in Medical Imaging: A Review (Preprint)" @default.
- W4224266925 cites W138615131 @default.
- W4224266925 cites W1901129140 @default.
- W4224266925 cites W2158960952 @default.
- W4224266925 cites W2322534411 @default.
- W4224266925 cites W2331128040 @default.
- W4224266925 cites W2534086599 @default.
- W4224266925 cites W2604178507 @default.
- W4224266925 cites W2745006834 @default.
- W4224266925 cites W2750216653 @default.
- W4224266925 cites W2889871190 @default.
- W4224266925 cites W2899235874 @default.
- W4224266925 cites W2911091074 @default.
- W4224266925 cites W2962793481 @default.
- W4224266925 cites W2962825119 @default.
- W4224266925 cites W2962914239 @default.
- W4224266925 cites W2962965405 @default.
- W4224266925 cites W2962968835 @default.
- W4224266925 cites W2963073614 @default.
- W4224266925 cites W2963091558 @default.
- W4224266925 cites W2963261836 @default.
- W4224266925 cites W2963420686 @default.
- W4224266925 cites W2963470893 @default.
- W4224266925 cites W2963727650 @default.
- W4224266925 cites W2967223102 @default.
- W4224266925 cites W2972517644 @default.
- W4224266925 cites W2972944446 @default.
- W4224266925 cites W2977890283 @default.
- W4224266925 cites W2979533827 @default.
- W4224266925 cites W2982220924 @default.
- W4224266925 cites W2985312512 @default.
- W4224266925 cites W2995976346 @default.
- W4224266925 cites W3003254765 @default.
- W4224266925 cites W3004492706 @default.
- W4224266925 cites W3011794065 @default.
- W4224266925 cites W3024390022 @default.
- W4224266925 cites W3027707629 @default.
- W4224266925 cites W3035187956 @default.
- W4224266925 cites W3035235751 @default.
- W4224266925 cites W3047625747 @default.
- W4224266925 cites W3049603761 @default.
- W4224266925 cites W3079841961 @default.
- W4224266925 cites W3082317190 @default.
- W4224266925 cites W3092939492 @default.
- W4224266925 cites W3094100360 @default.
- W4224266925 cites W3098325931 @default.
- W4224266925 cites W3122682475 @default.
- W4224266925 cites W3124194096 @default.
- W4224266925 cites W3127696710 @default.
- W4224266925 cites W3174778426 @default.
- W4224266925 cites W3176717614 @default.
- W4224266925 cites W3191007476 @default.
- W4224266925 cites W3195539942 @default.
- W4224266925 cites W3199270857 @default.
- W4224266925 cites W3200891683 @default.
- W4224266925 cites W3201029014 @default.
- W4224266925 cites W3203480968 @default.
- W4224266925 cites W3209349412 @default.
- W4224266925 cites W4205309705 @default.
- W4224266925 cites W4226299522 @default.
- W4224266925 doi "https://doi.org/10.2196/preprints.38410" @default.
- W4224266925 hasPublicationYear "2022" @default.
- W4224266925 type Work @default.
- W4224266925 citedByCount "0" @default.
- W4224266925 crossrefType "posted-content" @default.
- W4224266925 hasAuthorship W4224266925A5003759585 @default.
- W4224266925 hasAuthorship W4224266925A5054496800 @default.
- W4224266925 hasAuthorship W4224266925A5060950641 @default.
- W4224266925 hasAuthorship W4224266925A5067495807 @default.
- W4224266925 hasConcept C108583219 @default.
- W4224266925 hasConcept C119857082 @default.
- W4224266925 hasConcept C153083717 @default.
- W4224266925 hasConcept C154945302 @default.
- W4224266925 hasConcept C2522767166 @default.
- W4224266925 hasConcept C2988773926 @default.
- W4224266925 hasConcept C31601959 @default.
- W4224266925 hasConcept C37736160 @default.
- W4224266925 hasConcept C39890363 @default.
- W4224266925 hasConcept C41008148 @default.
- W4224266925 hasConceptScore W4224266925C108583219 @default.
- W4224266925 hasConceptScore W4224266925C119857082 @default.
- W4224266925 hasConceptScore W4224266925C153083717 @default.
- W4224266925 hasConceptScore W4224266925C154945302 @default.
- W4224266925 hasConceptScore W4224266925C2522767166 @default.
- W4224266925 hasConceptScore W4224266925C2988773926 @default.
- W4224266925 hasConceptScore W4224266925C31601959 @default.
- W4224266925 hasConceptScore W4224266925C37736160 @default.
- W4224266925 hasConceptScore W4224266925C39890363 @default.
- W4224266925 hasConceptScore W4224266925C41008148 @default.
- W4224266925 hasLocation W42242669251 @default.
- W4224266925 hasOpenAccess W4224266925 @default.
- W4224266925 hasPrimaryLocation W42242669251 @default.