Matches in SemOpenAlex for { <https://semopenalex.org/work/W4384130932> ?p ?o ?g. }
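The matches listed below can be retrieved programmatically. The following is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql accepts standard SPARQL GET requests and returns JSON results; the endpoint URL and the GRAPH rewriting of the quad pattern are assumptions, not part of the listing itself.

```python
# Minimal sketch: fetch all (?p, ?o, ?g) matches for work W4384130932
# from the SemOpenAlex SPARQL endpoint (endpoint URL is an assumption).
import requests

ENDPOINT = "https://semopenalex.org/sparql"
QUERY = """
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W4384130932> ?p ?o . }
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print predicate/object pairs, mirroring the listing below.
for row in response.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```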
- W4384130932 endingPage "3081" @default.
- W4384130932 startingPage "3081" @default.
- W4384130932 abstract "Dysphagia is a common geriatric syndrome that might induce serious complications and death. Standard diagnostics using the Videofluoroscopic Swallowing Study (VFSS) or Fiberoptic Evaluation of Swallowing (FEES) are expensive and expose patients to risks, while bedside screening is subjective and might lack reliability. An affordable and accessible instrumented screening is necessary. This study aimed to evaluate the classification performance of Transformer models and convolutional networks in identifying swallowing and non-swallowing tasks through depth video data. Different activation functions (ReLU, LeakyReLU, GELU, ELU, SiLU, and GLU) were then evaluated on the best-performing model. Sixty-five healthy participants (n = 65) were invited to perform swallowing (eating a cracker and drinking water) and non-swallowing tasks (a deep breath and pronouncing vowels: “/eɪ/”, “/iː/”, “/aɪ/”, “/oʊ/”, “/u:/”). Swallowing and non-swallowing were classified by Transformer models (TimeSFormer, Video Vision Transformer (ViViT)), and convolutional neural networks (SlowFast, X3D, and R(2+1)D), respectively. In general, convolutional neural networks outperformed the Transformer models. X3D was the best model with good-to-excellent performance (F1-score: 0.920; adjusted F1-score: 0.885) in classifying swallowing and non-swallowing conditions. Moreover, X3D with its default activation function (ReLU) produced the best results, although LeakyReLU performed better in deep breathing and pronouncing “/aɪ/” tasks. Future studies shall consider collecting more data for pretraining and developing a hyperparameter tuning strategy for activation functions and the high dimensionality video data for Transformer models." @default.
- W4384130932 created "2023-07-14" @default.
- W4384130932 creator A5010270380 @default.
- W4384130932 creator A5024029482 @default.
- W4384130932 creator A5034328556 @default.
- W4384130932 creator A5036742312 @default.
- W4384130932 creator A5047913546 @default.
- W4384130932 creator A5079045231 @default.
- W4384130932 creator A5079619989 @default.
- W4384130932 creator A5087421914 @default.
- W4384130932 date "2023-07-12" @default.
- W4384130932 modified "2023-09-30" @default.
- W4384130932 title "Transformer Models and Convolutional Networks with Different Activation Functions for Swallow Classification Using Depth Video Data" @default.
- W4384130932 cites W1536973336 @default.
- W4384130932 cites W1677182931 @default.
- W4384130932 cites W1964678094 @default.
- W4384130932 cites W1966716734 @default.
- W4384130932 cites W1977570898 @default.
- W4384130932 cites W2010315761 @default.
- W4384130932 cites W2023002118 @default.
- W4384130932 cites W2026612570 @default.
- W4384130932 cites W2031601906 @default.
- W4384130932 cites W2046537303 @default.
- W4384130932 cites W2086146067 @default.
- W4384130932 cites W2086304902 @default.
- W4384130932 cites W2089916011 @default.
- W4384130932 cites W2090037365 @default.
- W4384130932 cites W2097120324 @default.
- W4384130932 cites W2107878631 @default.
- W4384130932 cites W2125162604 @default.
- W4384130932 cites W2125250238 @default.
- W4384130932 cites W2141212940 @default.
- W4384130932 cites W2160246420 @default.
- W4384130932 cites W2294662980 @default.
- W4384130932 cites W2314996484 @default.
- W4384130932 cites W2507009361 @default.
- W4384130932 cites W2528182051 @default.
- W4384130932 cites W2586884333 @default.
- W4384130932 cites W2624958217 @default.
- W4384130932 cites W2739658573 @default.
- W4384130932 cites W2754998475 @default.
- W4384130932 cites W2765421612 @default.
- W4384130932 cites W2768205776 @default.
- W4384130932 cites W2786846101 @default.
- W4384130932 cites W2791514042 @default.
- W4384130932 cites W2810623657 @default.
- W4384130932 cites W2895586864 @default.
- W4384130932 cites W2898280479 @default.
- W4384130932 cites W2898898181 @default.
- W4384130932 cites W2899318048 @default.
- W4384130932 cites W2901795049 @default.
- W4384130932 cites W2948551291 @default.
- W4384130932 cites W2962834855 @default.
- W4384130932 cites W2963155035 @default.
- W4384130932 cites W2979632005 @default.
- W4384130932 cites W2990503944 @default.
- W4384130932 cites W3000238064 @default.
- W4384130932 cites W3009825789 @default.
- W4384130932 cites W3034572008 @default.
- W4384130932 cites W3044709444 @default.
- W4384130932 cites W3082264823 @default.
- W4384130932 cites W3130008318 @default.
- W4384130932 cites W3141564595 @default.
- W4384130932 cites W3174828871 @default.
- W4384130932 cites W3185107288 @default.
- W4384130932 cites W3195191735 @default.
- W4384130932 cites W3213509763 @default.
- W4384130932 cites W4205698889 @default.
- W4384130932 cites W4213123767 @default.
- W4384130932 cites W4214612132 @default.
- W4384130932 cites W4226017353 @default.
- W4384130932 cites W4226042788 @default.
- W4384130932 cites W4241220453 @default.
- W4384130932 cites W4247811648 @default.
- W4384130932 cites W4280522591 @default.
- W4384130932 cites W4283791586 @default.
- W4384130932 cites W4296280690 @default.
- W4384130932 cites W4306743325 @default.
- W4384130932 cites W4313328820 @default.
- W4384130932 cites W4318486313 @default.
- W4384130932 cites W4319763423 @default.
- W4384130932 cites W4319777935 @default.
- W4384130932 cites W4382468793 @default.
- W4384130932 doi "https://doi.org/10.3390/math11143081" @default.
- W4384130932 hasPublicationYear "2023" @default.
- W4384130932 type Work @default.
- W4384130932 citedByCount "0" @default.
- W4384130932 crossrefType "journal-article" @default.
- W4384130932 hasAuthorship W4384130932A5010270380 @default.
- W4384130932 hasAuthorship W4384130932A5024029482 @default.
- W4384130932 hasAuthorship W4384130932A5034328556 @default.
- W4384130932 hasAuthorship W4384130932A5036742312 @default.
- W4384130932 hasAuthorship W4384130932A5047913546 @default.
- W4384130932 hasAuthorship W4384130932A5079045231 @default.
- W4384130932 hasAuthorship W4384130932A5079619989 @default.
- W4384130932 hasAuthorship W4384130932A5087421914 @default.
- W4384130932 hasBestOaLocation W43841309321 @default.
- W4384130932 hasConcept C108583219 @default.
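The abstract above compares video classifiers (X3D being the best) and then compares activation functions on that model. The sketch below is an assumption-laden illustration, not the authors' implementation: it loads a generic X3D backbone from PyTorchVideo via torch.hub and swaps its default ReLU activations for LeakyReLU. The variant name "x3d_m", the clip shape, and the use of torch.hub are assumptions; pretrained weights are skipped.

```python
# Illustrative sketch only: swap ReLU -> LeakyReLU in an X3D video backbone.
import torch
import torch.nn as nn

# X3D-M from the PyTorchVideo hub (model name assumed; no pretrained weights).
model = torch.hub.load("facebookresearch/pytorchvideo", "x3d_m", pretrained=False)

def swap_relu(module: nn.Module) -> None:
    """Recursively replace every nn.ReLU submodule with nn.LeakyReLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(negative_slope=0.01))
        else:
            swap_relu(child)

swap_relu(model)

# Dummy forward pass: X3D-M expects clips shaped
# (batch, channels, frames, height, width), e.g. 16 frames at 224x224.
clip = torch.randn(1, 3, 16, 224, 224)
logits = model(clip)   # default Kinetics-400 classification head
print(logits.shape)    # torch.Size([1, 400])
```

For a swallowing vs. non-swallowing task as described in the abstract, the default classification head would then be replaced with a two-way output and the network fine-tuned on the depth-video clips; that step is omitted here because the paper's training setup is not reproduced in this listing.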