Matches in SemOpenAlex for { <https://semopenalex.org/work/W4297523786> ?p ?o ?g. }
Showing items 1 to 60 of 60, with 100 items per page.
- W4297523786 endingPage "S19" @default.
- W4297523786 startingPage "S19" @default.
- W4297523786 abstract "Study Objectives: The Focused Assessment with Sonography in Trauma (FAST) exam is used to rapidly identify unstable trauma patients with intraperitoneal hemorrhage. Maintaining high diagnostic precision during a point-of-care FAST exam requires accurate and prompt recognition of relevant anatomical landmarks. In the cranial abdominal views (right upper quadrant, RUQ, and left upper quadrant, LUQ), these landmarks include solid organs (spleen, liver, kidney) and their interfaces with surrounding structures (the diaphragm and the inferior tips of the liver/spleen). In this study we describe the potential of an artificial intelligence (AI) model to detect these key organs. Methods: Study Design: Single-center, prospective, observational study approved by the Institutional Review Board. Data: Emergency Ultrasound (EUS) fellowship-trained emergency physicians acquired cine-loops of FAST exams in non-trauma volunteers (N=11) and trauma patients (N=21) using three transducers (Philips X5-1, S4-1U, and C5-2U). The subjects were aged 22-64 years, had a BMI range of 21-34, and 7/32 subjects had positive FAST exams. Image Annotation: Trained annotators labeled each image with bounding boxes around key features (liver, spleen, kidney, diaphragm, and caudal liver/spleen tip). The annotations were then adjudicated by at least two EUS fellowship-trained physicians. If none of the organs were present, the image was labeled as having insufficient image quality (IQ). Algorithm: An object-detection AI model (YOLOv3-Tiny) with low inference time and memory requirements was chosen with the goal of mobile-device deployment. The model was trained and tested to classify and localize key organs on individual images. RUQ data consisted of 8652 images from 17 subjects for algorithm training and 2131 images from 3 subjects for testing; LUQ data consisted of 6667 images from 19 subjects for training and 2593 images from 4 subjects for testing. For each organ, the accuracy of the AI-predicted bounding box was calculated. A true detection occurred if the model correctly predicted the presence of that organ in the image with sufficient overlap with the human-annotated bounding box, defined as an Intersection over Union (IoU) > 0.5 between the model prediction and the human annotation. Results: The model showed sufficient accuracy (> 0.8) for most organs in the RUQ and LUQ (Figure 1a). A trend toward higher accuracy was seen for the RUQ organs as well as for larger organs such as the liver, kidney, and spleen. Furthermore, the model identified insufficient-IQ images with high accuracy. Conclusions: This work demonstrates that lightweight AI-based organ detection is feasible for FAST exam imagery. Future work will further optimize model performance for the diaphragm and liver-tip features. The model may enable passive user guidance by facilitating key organ detection, thereby ensuring exam completeness and accuracy for less-trained users, including first responders in mass-casualty and combat scenarios. Acknowledgments: Funding and technical support for this work is provided by the Biomedical Advanced Research and Development Authority (BARDA), under the Assistant Secretary for Preparedness and Response (ASPR), within the U.S. Department of Health and Human Services (HHS), under ongoing USG Contract No. 75A50120C00097. For more information about BARDA, refer to https://www.medicalcountermeasures.gov/. The authors have no interests to disclose." @default.
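The true-detection criterion described in the abstract (an Intersection over Union above 0.5 between the model's predicted box and the human-annotated box) can be sketched as follows. This is an illustrative sketch only: the corner-coordinate box format `(x1, y1, x2, y2)` and the function names are assumptions, not the authors' implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_detection(pred_box, gt_box, threshold=0.5):
    """A predicted box counts as a true detection when its IoU with the
    annotated ground-truth box exceeds the threshold (0.5 in the study)."""
    return iou(pred_box, gt_box) > threshold
```

With this criterion, a prediction that covers only half of an organ's annotated box (and nothing outside it) has IoU 0.5 and would not count as a true detection, which is what makes the 0.5 threshold a meaningful localization requirement rather than a mere presence check.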
- W4297523786 created "2022-09-29" @default.
- W4297523786 creator A5001332475 @default.
- W4297523786 creator A5005202220 @default.
- W4297523786 creator A5028916346 @default.
- W4297523786 creator A5032796190 @default.
- W4297523786 creator A5041647723 @default.
- W4297523786 creator A5065668867 @default.
- W4297523786 creator A5078941771 @default.
- W4297523786 creator A5081580164 @default.
- W4297523786 creator A5088392700 @default.
- W4297523786 date "2022-10-01" @default.
- W4297523786 modified "2023-10-14" @default.
- W4297523786 title "42 Artificial Intelligence Model to Identify Organ Features for Guiding FAST Ultrasound Exams" @default.
- W4297523786 doi "https://doi.org/10.1016/j.annemergmed.2022.08.065" @default.
- W4297523786 hasPublicationYear "2022" @default.
- W4297523786 type Work @default.
- W4297523786 citedByCount "0" @default.
- W4297523786 crossrefType "journal-article" @default.
- W4297523786 hasAuthorship W4297523786A5001332475 @default.
- W4297523786 hasAuthorship W4297523786A5005202220 @default.
- W4297523786 hasAuthorship W4297523786A5028916346 @default.
- W4297523786 hasAuthorship W4297523786A5032796190 @default.
- W4297523786 hasAuthorship W4297523786A5041647723 @default.
- W4297523786 hasAuthorship W4297523786A5065668867 @default.
- W4297523786 hasAuthorship W4297523786A5078941771 @default.
- W4297523786 hasAuthorship W4297523786A5081580164 @default.
- W4297523786 hasAuthorship W4297523786A5088392700 @default.
- W4297523786 hasConcept C126838900 @default.
- W4297523786 hasConcept C143753070 @default.
- W4297523786 hasConcept C154945302 @default.
- W4297523786 hasConcept C19527891 @default.
- W4297523786 hasConcept C41008148 @default.
- W4297523786 hasConcept C71924100 @default.
- W4297523786 hasConceptScore W4297523786C126838900 @default.
- W4297523786 hasConceptScore W4297523786C143753070 @default.
- W4297523786 hasConceptScore W4297523786C154945302 @default.
- W4297523786 hasConceptScore W4297523786C19527891 @default.
- W4297523786 hasConceptScore W4297523786C41008148 @default.
- W4297523786 hasConceptScore W4297523786C71924100 @default.
- W4297523786 hasIssue "4" @default.
- W4297523786 hasLocation W42975237861 @default.
- W4297523786 hasOpenAccess W4297523786 @default.
- W4297523786 hasPrimaryLocation W42975237861 @default.
- W4297523786 hasRelatedWork W2013133633 @default.
- W4297523786 hasRelatedWork W2127510782 @default.
- W4297523786 hasRelatedWork W2373416058 @default.
- W4297523786 hasRelatedWork W2381371073 @default.
- W4297523786 hasRelatedWork W2467765637 @default.
- W4297523786 hasRelatedWork W2948634361 @default.
- W4297523786 hasRelatedWork W4239970430 @default.
- W4297523786 hasRelatedWork W4252259355 @default.
- W4297523786 hasRelatedWork W4256079608 @default.
- W4297523786 hasRelatedWork W4312298876 @default.
- W4297523786 hasVolume "80" @default.
- W4297523786 isParatext "false" @default.
- W4297523786 isRetracted "false" @default.
- W4297523786 workType "article" @default.