Matches in SemOpenAlex for { <https://semopenalex.org/work/W4319334806> ?p ?o ?g. }
Showing items 1 to 79 of 79, with 100 items per page.
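The triple pattern above can also be fetched programmatically. A minimal sketch, assuming SemOpenAlex's public SPARQL endpoint lives at https://semopenalex.org/sparql (an assumption; check the service documentation if it differs):

```python
import requests

# Minimal sketch: run this page's triple pattern against SemOpenAlex.
# The endpoint URL is an assumption, not confirmed by this listing.
ENDPOINT = "https://semopenalex.org/sparql"
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W4319334806> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()

# Each binding corresponds to one "- W4319334806 <predicate> <object>" row below.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```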
- W4319334806 abstract "Knee arthroscopy is one of the most complex minimally invasive surgeries, it is routinely performed to treat a range of ailments and injuries to the knee joint. Its complex ergonomic design imposes visualization and navigation constraints, consequently leading to unintended tissue damage and a steep learning curve before surgeons gain proficiency. The lack of robust visual texture and landmark frame features further limit the success of image-guided approaches to knee arthroscopy Feature- and texture-less tissue structures of knee anatomy, lighting conditions, noise, blur, debris, lack of accurate ground-truth label, tissue degeneration, and injury makes semantic segmentation an extremely challenging task. To address this complex research problem this study reported the utility of reconstructed surface reflectance as a viable piece of information that can be used with cutting edge deep learning technique to achieve highly accurate segmented scenes. We proposed an intraoperative, two-tier deep learning method that makes full use of tissue reflectance information present within an RGB frame to segment texture-less images into multiple tissue types from knee arthroscopy video frames. Study included several cadaver knees experiments at the Medical and Engineering Research Facility (MERF), located within the Prince Charles Hospital campus, Brisbane Queensland. Data were collected from total five cadaver knees, among them three were male and one female. The donors were from 56–93 years old. Ageing related tissue degeneration, and some anterior cruciate ligament injury was observed in most cadaver knees. An arthroscopic image dataset was created and subsequently labelled by clinical experts. Study also included validation of a prototype stereo arthroscope, along with conventional arthroscope, to attain larger Field-of-View (FoV) and stereo vision. We reconstructed surface reflectance from camera responses which exhibited distinct spatial features at different wavelengths ranging from 380 to 730 nm in the RGB spectrum. Towards the aim to segment texture-less tissue types, this data was used within a two-stage deep learning model. The accuracy of the network was measured using dice coefficient score. The average segmentation accuracy for the tissue type ACL was 0.6625, for the tissue type bone it was 0.84, and for the tissue type meniscus it was 0.565. For the analysis, we excluded extremely poor quality of frames. Here, a frame is considered as of extremely poor quality when more that 50% of any tissue structures are over- or under- exposed due to non-uniform light exposure. Additionally, when only high quality of frames was considered during the training and validation stage, the average bone segmentation accuracy improved to 0.92 and the average ACL segmentation accuracy reached 0.73. These two tissue types, namely Femur bone and ACL have high importance inarthroscopy for tissue tracking. Comparatively, the previous work based on RGB data achieved a much lower average accuracy for femur, tibia, ACL, and meniscus were 0.78, 0.50, 0.41, 0.43 using the U-net and 0.79, 0.50, 0.51, 0.48 using the U-Net++. From this analysis, it is evident that our multi-spectral method outperforms the previously proposed methods and delivers a much better solution in acheiving automatic arthroscopic scene segmentation. The method which was based on deep learning model and requires reconstructed surface reflectance. 
It could provide tissue awareness in intra-operative manner which has a high potential to improve surgical precisions. It could be applied to other minimally invasive surgery as an online segmentation tool for training, aide, and guidance for surgeons as well as image guided surgeries." @default.
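The abstract reports accuracy as Dice coefficient scores. For reference, a minimal sketch of the metric on binary masks, Dice = 2|A ∩ B| / (|A| + |B|); the masks below are toy data for illustration, not data from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice score for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks (illustrative only):
pred  = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*4 / (4+6) = 0.8
```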
- W4319334806 created "2023-02-08" @default.
- W4319334806 creator A5006427323 @default.
- W4319334806 creator A5047346073 @default.
- W4319334806 creator A5078690675 @default.
- W4319334806 date "2023-02-01" @default.
- W4319334806 modified "2023-10-18" @default.
- W4319334806 title "Arthroscopic scene segmentation using multi-spectral reconstructed frames and deep learning" @default.
- W4319334806 cites W1964448844 @default.
- W4319334806 cites W2136817911 @default.
- W4319334806 cites W2165090305 @default.
- W4319334806 cites W2461069179 @default.
- W4319334806 cites W2521559113 @default.
- W4319334806 cites W2765276001 @default.
- W4319334806 cites W2795122939 @default.
- W4319334806 cites W2900793827 @default.
- W4319334806 cites W2982220924 @default.
- W4319334806 cites W2990439211 @default.
- W4319334806 cites W3003188686 @default.
- W4319334806 cites W3011783795 @default.
- W4319334806 cites W3095592978 @default.
- W4319334806 cites W3098736294 @default.
- W4319334806 cites W3125937743 @default.
- W4319334806 cites W3191492272 @default.
- W4319334806 cites W4280621764 @default.
- W4319334806 cites W4285270398 @default.
- W4319334806 cites W8423413 @default.
- W4319334806 doi "https://doi.org/10.1016/j.imed.2022.10.006" @default.
- W4319334806 hasPublicationYear "2023" @default.
- W4319334806 type Work @default.
- W4319334806 citedByCount "3" @default.
- W4319334806 countsByYear W43193348062023 @default.
- W4319334806 crossrefType "journal-article" @default.
- W4319334806 hasAuthorship W4319334806A5006427323 @default.
- W4319334806 hasAuthorship W4319334806A5047346073 @default.
- W4319334806 hasAuthorship W4319334806A5078690675 @default.
- W4319334806 hasBestOaLocation W43193348061 @default.
- W4319334806 hasConcept C105702510 @default.
- W4319334806 hasConcept C108583219 @default.
- W4319334806 hasConcept C138885662 @default.
- W4319334806 hasConcept C141071460 @default.
- W4319334806 hasConcept C154945302 @default.
- W4319334806 hasConcept C2776401178 @default.
- W4319334806 hasConcept C2779162959 @default.
- W4319334806 hasConcept C31972630 @default.
- W4319334806 hasConcept C41008148 @default.
- W4319334806 hasConcept C41895202 @default.
- W4319334806 hasConcept C71924100 @default.
- W4319334806 hasConcept C89600930 @default.
- W4319334806 hasConcept C91762617 @default.
- W4319334806 hasConceptScore W4319334806C105702510 @default.
- W4319334806 hasConceptScore W4319334806C108583219 @default.
- W4319334806 hasConceptScore W4319334806C138885662 @default.
- W4319334806 hasConceptScore W4319334806C141071460 @default.
- W4319334806 hasConceptScore W4319334806C154945302 @default.
- W4319334806 hasConceptScore W4319334806C2776401178 @default.
- W4319334806 hasConceptScore W4319334806C2779162959 @default.
- W4319334806 hasConceptScore W4319334806C31972630 @default.
- W4319334806 hasConceptScore W4319334806C41008148 @default.
- W4319334806 hasConceptScore W4319334806C41895202 @default.
- W4319334806 hasConceptScore W4319334806C71924100 @default.
- W4319334806 hasConceptScore W4319334806C89600930 @default.
- W4319334806 hasConceptScore W4319334806C91762617 @default.
- W4319334806 hasLocation W43193348061 @default.
- W4319334806 hasOpenAccess W4319334806 @default.
- W4319334806 hasPrimaryLocation W43193348061 @default.
- W4319334806 hasRelatedWork W1669643531 @default.
- W4319334806 hasRelatedWork W1982826852 @default.
- W4319334806 hasRelatedWork W2005437358 @default.
- W4319334806 hasRelatedWork W2008656436 @default.
- W4319334806 hasRelatedWork W2023558673 @default.
- W4319334806 hasRelatedWork W2110230079 @default.
- W4319334806 hasRelatedWork W2134924024 @default.
- W4319334806 hasRelatedWork W2517104666 @default.
- W4319334806 hasRelatedWork W2613186388 @default.
- W4319334806 hasRelatedWork W2790662084 @default.
- W4319334806 isParatext "false" @default.
- W4319334806 isRetracted "false" @default.
- W4319334806 workType "article" @default.
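The W/A/C identifiers throughout these triples follow OpenAlex's ID scheme, so each should also resolve against the OpenAlex REST API. A minimal sketch, assuming the standard https://api.openalex.org/works/{id} route:

```python
import requests

# Minimal sketch: resolve the record via the OpenAlex REST API, whose
# W/A/C identifiers these triples mirror.
resp = requests.get("https://api.openalex.org/works/W4319334806")
resp.raise_for_status()
work = resp.json()

print(work["display_name"])  # should match the title triple above
for authorship in work["authorships"]:
    author = authorship["author"]
    print(author["id"], author["display_name"])  # should match the creator triples
```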