Matches in SemOpenAlex for { <https://semopenalex.org/work/W4225146199> ?p ?o ?g. }
- W4225146199 abstract "<sec> <title>BACKGROUND</title> Voice screening and diagnosis are processes used during voice disorder investigations. Both rely on a limited set of standardized tests, which are affected by the clinician’s experience and subjective judgment. Machine learning (ML) algorithms have been introduced and employed as an objective tool for screening and diagnosing patients’ voices. Numerous studies have investigated the effectiveness of ML algorithms in assessing and diagnosing voice disorders. </sec> <sec> <title>OBJECTIVE</title> This systematic review aims to assess the effectiveness of ML algorithms in screening and diagnosing voice disorders. </sec> <sec> <title>METHODS</title> An electronic search was conducted in five databases. We included studies that examined the performance (accuracy, sensitivity, and specificity) of any ML algorithm in detecting abnormal voice samples. Two reviewers independently selected the studies, extracted data from them, and assessed their risk of bias. The methodological quality of each study was assessed using the QUADAS-2 tool. Characteristics of the studies, populations, and index tests were extracted. Meta-analyses were conducted to pool the accuracy, sensitivity, and specificity of the ML techniques. Heterogeneity was addressed by excluding some studies and discussing its possible sources. </sec> <sec> <title>RESULTS</title> Of the 1409 records retrieved, 13 studies (4079 participants) were included in this review. Thirteen ML techniques were used in the included studies; the most commonly used was the support vector machine (SVM). The pooled accuracy, sensitivity, and specificity of ML techniques in screening voice disorders were 93%, 96%, and 93%, respectively. The least-squares SVM (LS-SVM) had the highest accuracy (99%), while the k-nearest neighbors (K-NN) classifier had the highest sensitivity (98%) and specificity (98%).
Quadratic discriminant analysis (QDA) achieved the lowest accuracy (91%), sensitivity (89%), and specificity (89%). </sec> <sec> <title>CONCLUSIONS</title> ML showed promising findings in screening voice disorders. However, the findings are not conclusive for diagnosing voice disorders because of the limited number of studies that used ML for diagnostic purposes; more investigation is needed. Accordingly, it might not be possible to use ML as a substitute for current diagnostic tools. Instead, it might be used as a decision-support tool that helps clinicians assess their patients, which could improve the management of voice disorders. </sec>" @default.
- W4225146199 created "2022-05-01" @default.
- W4225146199 creator A5006837999 @default.
- W4225146199 creator A5020810020 @default.
- W4225146199 creator A5032781929 @default.
- W4225146199 creator A5048456579 @default.
- W4225146199 creator A5057926319 @default.
- W4225146199 date "2022-04-04" @default.
- W4225146199 modified "2023-09-23" @default.
- W4225146199 title "The Effectiveness of Supervised Machine Learning in Screening and Diagnosing Voice Disorders: A Systematic Review and Meta-Analysis (Preprint)" @default.
- W4225146199 cites W1598514414 @default.
- W4225146199 cites W1919503377 @default.
- W4225146199 cites W1965586148 @default.
- W4225146199 cites W1969628835 @default.
- W4225146199 cites W1983276100 @default.
- W4225146199 cites W1988974600 @default.
- W4225146199 cites W1994997260 @default.
- W4225146199 cites W2010365089 @default.
- W4225146199 cites W2016726006 @default.
- W4225146199 cites W2018692972 @default.
- W4225146199 cites W2031072412 @default.
- W4225146199 cites W203535910 @default.
- W4225146199 cites W2050966660 @default.
- W4225146199 cites W2083921681 @default.
- W4225146199 cites W2092514808 @default.
- W4225146199 cites W2092923243 @default.
- W4225146199 cites W2107638293 @default.
- W4225146199 cites W2113650738 @default.
- W4225146199 cites W2123608681 @default.
- W4225146199 cites W2125435699 @default.
- W4225146199 cites W2132025473 @default.
- W4225146199 cites W2138041026 @default.
- W4225146199 cites W2143903753 @default.
- W4225146199 cites W2148244507 @default.
- W4225146199 cites W2148411079 @default.
- W4225146199 cites W2170517798 @default.
- W4225146199 cites W2411705124 @default.
- W4225146199 cites W2530177228 @default.
- W4225146199 cites W2530222296 @default.
- W4225146199 cites W2766329439 @default.
- W4225146199 cites W2784152731 @default.
- W4225146199 cites W2793278500 @default.
- W4225146199 cites W2806974377 @default.
- W4225146199 cites W2810958285 @default.
- W4225146199 cites W2896464508 @default.
- W4225146199 cites W2897150037 @default.
- W4225146199 cites W3031948080 @default.
- W4225146199 cites W3106009882 @default.
- W4225146199 cites W3125384922 @default.
- W4225146199 cites W4232510938 @default.
- W4225146199 cites W4236792319 @default.
- W4225146199 cites W984830536 @default.
- W4225146199 doi "https://doi.org/10.2196/preprints.38472" @default.
- W4225146199 hasPublicationYear "2022" @default.
- W4225146199 type Work @default.
- W4225146199 citedByCount "0" @default.
- W4225146199 crossrefType "posted-content" @default.
- W4225146199 hasAuthorship W4225146199A5006837999 @default.
- W4225146199 hasAuthorship W4225146199A5020810020 @default.
- W4225146199 hasAuthorship W4225146199A5032781929 @default.
- W4225146199 hasAuthorship W4225146199A5048456579 @default.
- W4225146199 hasAuthorship W4225146199A5057926319 @default.
- W4225146199 hasConcept C119857082 @default.
- W4225146199 hasConcept C12267149 @default.
- W4225146199 hasConcept C126322002 @default.
- W4225146199 hasConcept C142724271 @default.
- W4225146199 hasConcept C154945302 @default.
- W4225146199 hasConcept C17744445 @default.
- W4225146199 hasConcept C189708586 @default.
- W4225146199 hasConcept C199539241 @default.
- W4225146199 hasConcept C204321447 @default.
- W4225146199 hasConcept C2779473830 @default.
- W4225146199 hasConcept C3020132585 @default.
- W4225146199 hasConcept C41008148 @default.
- W4225146199 hasConcept C70437156 @default.
- W4225146199 hasConcept C71924100 @default.
- W4225146199 hasConcept C95190672 @default.
- W4225146199 hasConceptScore W4225146199C119857082 @default.
- W4225146199 hasConceptScore W4225146199C12267149 @default.
- W4225146199 hasConceptScore W4225146199C126322002 @default.
- W4225146199 hasConceptScore W4225146199C142724271 @default.
- W4225146199 hasConceptScore W4225146199C154945302 @default.
- W4225146199 hasConceptScore W4225146199C17744445 @default.
- W4225146199 hasConceptScore W4225146199C189708586 @default.
- W4225146199 hasConceptScore W4225146199C199539241 @default.
- W4225146199 hasConceptScore W4225146199C204321447 @default.
- W4225146199 hasConceptScore W4225146199C2779473830 @default.
- W4225146199 hasConceptScore W4225146199C3020132585 @default.
- W4225146199 hasConceptScore W4225146199C41008148 @default.
- W4225146199 hasConceptScore W4225146199C70437156 @default.
- W4225146199 hasConceptScore W4225146199C71924100 @default.
- W4225146199 hasConceptScore W4225146199C95190672 @default.
- W4225146199 hasLocation W42251461991 @default.
- W4225146199 hasOpenAccess W4225146199 @default.
- W4225146199 hasPrimaryLocation W42251461991 @default.
- W4225146199 hasRelatedWork W1996541855 @default.
- W4225146199 hasRelatedWork W2101819884 @default.
- W4225146199 hasRelatedWork W2423149877 @default.
- W4225146199 hasRelatedWork W2937631562 @default.
- W4225146199 hasRelatedWork W3107474891 @default.
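The abstract above reports pooled accuracy, sensitivity, and specificity across the included studies. As a minimal sketch of what those per-study metrics mean and how counts can be naively pooled, here is a small stdlib-only Python example; the confusion-matrix counts are invented for illustration and the simple sum-of-counts pooling is a simplification, not the bivariate meta-analytic model such reviews typically use:

```python
# Per-study screening metrics from a binary confusion matrix,
# plus a naive pooled estimate obtained by summing raw counts.
# All counts below are hypothetical, not taken from the review.

def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on disordered voices), specificity."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical (tp, fn, tn, fp) counts for three screening studies.
studies = [
    (90, 10, 85, 15),
    (48, 2, 45, 5),
    (180, 20, 170, 30),
]

def pooled(studies):
    """Pool by summing counts across studies before computing metrics."""
    tp, fn, tn, fp = (sum(col) for col in zip(*studies))
    return metrics(tp, fn, tn, fp)

if __name__ == "__main__":
    for i, s in enumerate(studies, 1):
        m = metrics(*s)
        print(f"study {i}: " + ", ".join(f"{k}={v:.1%}" for k, v in m.items()))
    print("pooled :", {k: round(v, 3) for k, v in pooled(studies).items()})
```

Summing counts weights each study by its sample size; published meta-analyses of diagnostic accuracy instead fit random-effects models that account for between-study heterogeneity, which is why the review discusses heterogeneity sources explicitly.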