Matches in SemOpenAlex for { <https://semopenalex.org/work/W4366758861> ?p ?o ?g. }
Showing items 1 to 78 of 78, with 100 items per page.
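The predicate-object listing below can be reproduced programmatically. The following is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the standard SPARQL 1.1 JSON results protocol; the named-graph variable ?g from the pattern above is omitted for simplicity, and the LIMIT mirrors the page size shown here.

```python
# Minimal sketch: fetch the predicate-object pairs for W4366758861 from the
# SemOpenAlex SPARQL endpoint. Endpoint URL and LIMIT are assumptions.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W4366758861> ?p ?o .
}
LIMIT 100
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print each predicate-object pair, analogous to the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```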
- W4366758861 endingPage "19" @default.
- W4366758861 startingPage "1" @default.
- W4366758861 abstract "ABSTRACT: Recent research on transformer-based models has highlighted particular methods for medical image segmentation. However, the majority of transformer-based network designs used in computer vision applications have a large number of parameters and demand extensive training datasets. Inspired by the success of transformers in recent research, the U-Net-transformer approach has become one of the de facto ideas for overcoming these challenges. In this manuscript, a novel U-Net-transformer approach is proposed for heart image segmentation to address the problems of large parameter counts, limited datasets, over-segmentation, sensitivity to noise, and long training times. The framework incorporates a novel width- and height-wise axial attention mechanism to effectively provide positional embeddings and encode spatial flattening. Furthermore, a novel local and global spatial attention mechanism is proposed to effectively learn the local and global interactions between encoder features. Finally, we introduce a mechanism that fuses both contexts for better feature representation before they are passed to the decoder. The results demonstrate that our prototype provides a robust novel axial-attention mechanism. KEYWORDS: transformers; U-Net; heart segmentation; long-range dependencies; spatial encoding; positional encoding; axial attention; computed tomography (CT). Disclosure statement: No potential conflict of interest was reported by the author(s). Notes: 1. https://www.kaggle.com/datasets/. 2. https://www.synapse.org/#!Synapse:syn3193805/wiki/217752. 3. https://www.synapse.org/#!Synapse:syn3193805/wiki/217789. 4. https://www.creatis.insa-lyon.fr/Challenge/acdc/index.html. Additional information. Funding: This work was supported by the National Natural Science Foundation of China [grant numbers 61402204, 61572239 and 61772242]; the Research Fund for Advanced Talents of Jiangsu University [grant number 14JDG141]; the Qing Lan Project; the China Postdoctoral Science Foundation [grant number 2017M611737]; and the Zhenjiang Social Development Project [grant number SH2016029]. Notes on contributors: Addae Emmanuel Addo received the B.S. degree in computer science from Jiangsu University, Zhenjiang, Jiangsu, China, in 2018, where he is currently pursuing the master's degree in computer science. His research focuses on deep learning and medical image processing. Kashala Kabe Gedeon received his Ph.D. degree in Computer Science in 2021 from Jiangsu University, Zhenjiang, China. He is now a postdoctoral researcher and lecturer at the School of Computer Science and Telecommunication Engineering, Jiangsu University. Dr. Kashala has published extensively in peer-reviewed journals and has won several scientific awards. His current research interests include medical image processing and analysis, deep learning, transfer learning, and pattern recognition. Zhe Liu received her Ph.D. degree in Computer Science in 2012 from Jiangsu University, Zhenjiang, China. She is a visiting scholar at the Department of Radiology, University of Pittsburgh Medical Center, Pennsylvania, USA, and a professor at the School of Computer Science and Telecommunication Engineering, Jiangsu University, Zhenjiang. Her research interests include image processing, data mining, and pattern recognition. She is a member of CCF and IEEE." @default.
- W4366758861 created "2023-04-24" @default.
- W4366758861 creator A5034956382 @default.
- W4366758861 creator A5060231443 @default.
- W4366758861 creator A5060618041 @default.
- W4366758861 date "2023-04-22" @default.
- W4366758861 modified "2023-10-18" @default.
- W4366758861 title "Transformer-based heart organ segmentation using a novel axial attention and fusion mechanism" @default.
- W4366758861 cites W1901129140 @default.
- W4366758861 cites W1903029394 @default.
- W4366758861 cites W2517600007 @default.
- W4366758861 cites W2541451878 @default.
- W4366758861 cites W3007268491 @default.
- W4366758861 cites W3014974411 @default.
- W4366758861 cites W3046824133 @default.
- W4366758861 cites W3096609285 @default.
- W4366758861 cites W3097065222 @default.
- W4366758861 cites W3110880468 @default.
- W4366758861 cites W3134651880 @default.
- W4366758861 cites W3203480968 @default.
- W4366758861 cites W3204166336 @default.
- W4366758861 cites W3204255739 @default.
- W4366758861 cites W3206685025 @default.
- W4366758861 cites W3206770282 @default.
- W4366758861 cites W4226497331 @default.
- W4366758861 cites W4238961266 @default.
- W4366758861 cites W4283445811 @default.
- W4366758861 cites W4302363625 @default.
- W4366758861 doi "https://doi.org/10.1080/13682199.2023.2198394" @default.
- W4366758861 hasPublicationYear "2023" @default.
- W4366758861 type Work @default.
- W4366758861 citedByCount "0" @default.
- W4366758861 crossrefType "journal-article" @default.
- W4366758861 hasAuthorship W4366758861A5034956382 @default.
- W4366758861 hasAuthorship W4366758861A5060231443 @default.
- W4366758861 hasAuthorship W4366758861A5060618041 @default.
- W4366758861 hasConcept C111919701 @default.
- W4366758861 hasConcept C118505674 @default.
- W4366758861 hasConcept C119599485 @default.
- W4366758861 hasConcept C127413603 @default.
- W4366758861 hasConcept C153180895 @default.
- W4366758861 hasConcept C154945302 @default.
- W4366758861 hasConcept C165801399 @default.
- W4366758861 hasConcept C41008148 @default.
- W4366758861 hasConcept C66322947 @default.
- W4366758861 hasConcept C89600930 @default.
- W4366758861 hasConceptScore W4366758861C111919701 @default.
- W4366758861 hasConceptScore W4366758861C118505674 @default.
- W4366758861 hasConceptScore W4366758861C119599485 @default.
- W4366758861 hasConceptScore W4366758861C127413603 @default.
- W4366758861 hasConceptScore W4366758861C153180895 @default.
- W4366758861 hasConceptScore W4366758861C154945302 @default.
- W4366758861 hasConceptScore W4366758861C165801399 @default.
- W4366758861 hasConceptScore W4366758861C41008148 @default.
- W4366758861 hasConceptScore W4366758861C66322947 @default.
- W4366758861 hasConceptScore W4366758861C89600930 @default.
- W4366758861 hasFunder F4320321001 @default.
- W4366758861 hasFunder F4320321410 @default.
- W4366758861 hasFunder F4320321543 @default.
- W4366758861 hasFunder F4320327794 @default.
- W4366758861 hasLocation W43667588611 @default.
- W4366758861 hasOpenAccess W4366758861 @default.
- W4366758861 hasPrimaryLocation W43667588611 @default.
- W4366758861 hasRelatedWork W2033914206 @default.
- W4366758861 hasRelatedWork W2046077695 @default.
- W4366758861 hasRelatedWork W2146076056 @default.
- W4366758861 hasRelatedWork W2163831990 @default.
- W4366758861 hasRelatedWork W2275988210 @default.
- W4366758861 hasRelatedWork W2358941527 @default.
- W4366758861 hasRelatedWork W2385621972 @default.
- W4366758861 hasRelatedWork W2589098947 @default.
- W4366758861 hasRelatedWork W3003836766 @default.
- W4366758861 hasRelatedWork W4231964008 @default.
- W4366758861 isParatext "false" @default.
- W4366758861 isRetracted "false" @default.
- W4366758861 workType "article" @default.
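The abstract above describes a width- and height-wise axial attention mechanism inside a U-Net-transformer encoder. The following is a minimal, generic PyTorch sketch of that general idea only; it is not the authors' implementation, and the module name, the use of nn.MultiheadAttention, and all dimensions are assumptions made for illustration.

```python
# Generic sketch of width- and height-wise (axial) attention over a 2-D
# feature map. Not the paper's architecture; all names and sizes are assumed.
import torch
import torch.nn as nn


class AxialAttention2d(nn.Module):
    """Applies self-attention separately along the height and width axes,
    reducing the cost of full 2-D attention from O((HW)^2) to O(HW(H + W))."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map, e.g. from a U-Net encoder stage.
        b, c, h, w = x.shape

        # Height-wise attention: each column becomes a sequence of H tokens.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)      # (B*W, H, C)
        cols, _ = self.col_attn(cols, cols, cols)
        x = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)        # (B, C, H, W)

        # Width-wise attention: each row becomes a sequence of W tokens.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)       # (B*H, W, C)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)        # (B, C, H, W)
        return x


if __name__ == "__main__":
    layer = AxialAttention2d(channels=64, heads=4)
    features = torch.randn(2, 64, 32, 32)    # dummy encoder features
    print(layer(features).shape)              # torch.Size([2, 64, 32, 32])
```

Factoring attention along rows and columns keeps a global receptive field in each axis while avoiding the quadratic cost of full 2-D self-attention, which is the usual motivation for axial attention in segmentation backbones.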