Matches in SemOpenAlex for { <https://semopenalex.org/work/W3152803807> ?p ?o ?g. }
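The listing below shows the matching statements. As a minimal reproduction sketch, the same pattern could be run against the public SemOpenAlex SPARQL endpoint (assumed here to be https://semopenalex.org/sparql, per the SemOpenAlex documentation) with the SPARQLWrapper package; reading `?p ?o ?g` as a quad pattern, the `?g` variable is bound via a GRAPH clause:

```python
# Minimal sketch, assuming the public SemOpenAlex SPARQL endpoint and the
# SPARQLWrapper package (pip install sparqlwrapper). The GRAPH reading of
# the "?p ?o ?g" pattern is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://semopenalex.org/sparql")
endpoint.setQuery("""
    SELECT ?p ?o ?g WHERE {
      GRAPH ?g {
        <https://semopenalex.org/work/W3152803807> ?p ?o .
      }
    }
""")
endpoint.setReturnFormat(JSON)

# Each binding corresponds to one "W3152803807 <predicate> <object>" row below.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"], row.get("g", {}).get("value", ""))
```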
- W3152803807 endingPage "102058" @default.
- W3152803807 startingPage "102058" @default.
- W3152803807 abstract "Deep learning techniques hold promise to develop dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data as well as clinically in use conventional endoscope recording of the phantom colon with computed tomography(CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high precision 3D scanners, and a CT scanner were employed to collect data from eight ex-vivo porcine gastrointestinal (GI)-tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for colon, 12 sub-datasets for stomach, and 5 sub-datasets for small intestine, while four of these contain polyp-mimicking elevations carried out by an expert gastroenterologist. To verify the applicability of this data for use with real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full representation silicon colon phantom. Synthetic capsule endoscopy frames from stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propound Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module in order to dictate the network to focus on distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve the robustness under fast frame-to-frame illumination changes that are commonly seen in endoscopic videos. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state-of-the-art: SC-SfMLearner, Monodepth2, and SfMLearner. The codes and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1." @default.
- W3152803807 created "2021-04-26" @default.
- W3152803807 creator A5020331930 @default.
- W3152803807 creator A5021122445 @default.
- W3152803807 creator A5023181397 @default.
- W3152803807 creator A5023315356 @default.
- W3152803807 creator A5024675075 @default.
- W3152803807 creator A5025215159 @default.
- W3152803807 creator A5028533583 @default.
- W3152803807 creator A5037401826 @default.
- W3152803807 creator A5043211921 @default.
- W3152803807 creator A5050649130 @default.
- W3152803807 creator A5053610493 @default.
- W3152803807 creator A5061268857 @default.
- W3152803807 creator A5061782113 @default.
- W3152803807 creator A5068948069 @default.
- W3152803807 creator A5080050834 @default.
- W3152803807 creator A5083273390 @default.
- W3152803807 date "2021-07-01" @default.
- W3152803807 modified "2023-10-16" @default.
- W3152803807 title "EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos" @default.
- W3152803807 cites W1803059841 @default.
- W3152803807 cites W1967329161 @default.
- W3152803807 cites W1974195097 @default.
- W3152803807 cites W1987568141 @default.
- W3152803807 cites W1993267444 @default.
- W3152803807 cites W2008359794 @default.
- W3152803807 cites W2021088830 @default.
- W3152803807 cites W2034269173 @default.
- W3152803807 cites W2038874815 @default.
- W3152803807 cites W2057416234 @default.
- W3152803807 cites W2126060993 @default.
- W3152803807 cites W2148867806 @default.
- W3152803807 cites W2149680049 @default.
- W3152803807 cites W2150382645 @default.
- W3152803807 cites W2157777247 @default.
- W3152803807 cites W2193969413 @default.
- W3152803807 cites W2285968993 @default.
- W3152803807 cites W2292347270 @default.
- W3152803807 cites W2344087428 @default.
- W3152803807 cites W2586952804 @default.
- W3152803807 cites W2603211304 @default.
- W3152803807 cites W2609416538 @default.
- W3152803807 cites W2620841913 @default.
- W3152803807 cites W2790503911 @default.
- W3152803807 cites W2793904093 @default.
- W3152803807 cites W2801997348 @default.
- W3152803807 cites W2811195474 @default.
- W3152803807 cites W2943868926 @default.
- W3152803807 cites W2963591054 @default.
- W3152803807 cites W2963596017 @default.
- W3152803807 cites W2963773612 @default.
- W3152803807 cites W2964968086 @default.
- W3152803807 cites W2985775862 @default.
- W3152803807 cites W2989184872 @default.
- W3152803807 cites W3014713533 @default.
- W3152803807 cites W3033327942 @default.
- W3152803807 cites W3099097979 @default.
- W3152803807 cites W3100211012 @default.
- W3152803807 cites W3127501898 @default.
- W3152803807 cites W4255173569 @default.
- W3152803807 doi "https://doi.org/10.1016/j.media.2021.102058" @default.
- W3152803807 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/33930829" @default.
- W3152803807 hasPublicationYear "2021" @default.
- W3152803807 type Work @default.
- W3152803807 sameAs 3152803807 @default.
- W3152803807 citedByCount "57" @default.
- W3152803807 countsByYear W31528038072021 @default.
- W3152803807 countsByYear W31528038072022 @default.
- W3152803807 countsByYear W31528038072023 @default.
- W3152803807 crossrefType "journal-article" @default.
- W3152803807 hasAuthorship W3152803807A5020331930 @default.
- W3152803807 hasAuthorship W3152803807A5021122445 @default.
- W3152803807 hasAuthorship W3152803807A5023181397 @default.
- W3152803807 hasAuthorship W3152803807A5023315356 @default.
- W3152803807 hasAuthorship W3152803807A5024675075 @default.
- W3152803807 hasAuthorship W3152803807A5025215159 @default.
- W3152803807 hasAuthorship W3152803807A5028533583 @default.
- W3152803807 hasAuthorship W3152803807A5037401826 @default.
- W3152803807 hasAuthorship W3152803807A5043211921 @default.
- W3152803807 hasAuthorship W3152803807A5050649130 @default.
- W3152803807 hasAuthorship W3152803807A5053610493 @default.
- W3152803807 hasAuthorship W3152803807A5061268857 @default.
- W3152803807 hasAuthorship W3152803807A5061782113 @default.
- W3152803807 hasAuthorship W3152803807A5068948069 @default.
- W3152803807 hasAuthorship W3152803807A5080050834 @default.
- W3152803807 hasAuthorship W3152803807A5083273390 @default.
- W3152803807 hasConcept C153180895 @default.
- W3152803807 hasConcept C154945302 @default.
- W3152803807 hasConcept C19966478 @default.
- W3152803807 hasConcept C31972630 @default.
- W3152803807 hasConcept C41008148 @default.
- W3152803807 hasConcept C49441653 @default.
- W3152803807 hasConcept C5799516 @default.
- W3152803807 hasConcept C65909025 @default.
- W3152803807 hasConcept C90509273 @default.
- W3152803807 hasConceptScore W3152803807C153180895 @default.
- W3152803807 hasConceptScore W3152803807C154945302 @default.
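The abstract above mentions a brightness-aware photometric loss for robustness to fast frame-to-frame illumination changes. Purely as an illustrative sketch of that general idea — not the Endo-SfMLearner formulation, which is defined in the paper and the linked repository — one common approach is a closed-form per-image affine brightness alignment before taking an L1 photometric residual. The function name and the use of PyTorch are assumptions:

```python
# Illustrative only: a generic brightness-aware photometric loss in the spirit
# described by the abstract (NOT the exact Endo-SfMLearner loss).
import torch

def brightness_aware_photometric_loss(warped, target, eps=1e-6):
    """L1 photometric error after per-image affine brightness alignment.

    warped, target: (B, C, H, W) tensors in [0, 1]. Affine parameters (a, b)
    minimising ||a * warped + b - target||^2 in closed form compensate global
    frame-to-frame exposure shifts before the residual is computed.
    """
    B = warped.shape[0]
    w = warped.reshape(B, -1)
    t = target.reshape(B, -1)
    w_mean, t_mean = w.mean(1, keepdim=True), t.mean(1, keepdim=True)
    w_c, t_c = w - w_mean, t - t_mean
    # Least-squares fit: a = cov(w, t) / var(w), b = mean(t) - a * mean(w).
    a = (w_c * t_c).sum(1, keepdim=True) / (w_c.pow(2).sum(1, keepdim=True) + eps)
    b = t_mean - a * w_mean
    aligned = (a * w + b).reshape_as(warped)
    return (aligned - target).abs().mean()
```

The affine fit absorbs global exposure changes so the remaining residual reflects geometry and texture rather than illumination, which is the stated motivation for brightness awareness in endoscopic videos.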