Matches in SemOpenAlex for { <https://semopenalex.org/work/W4306376994> ?p ?o ?g. }
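The pattern binds every predicate (?p), object (?o), and named graph (?g) recorded for this work. A minimal Python sketch of reproducing the listing below, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (not stated in this dump) and the third-party requests library; whether the endpoint exposes named graphs via GRAPH as written here is also an assumption:

```python
# Hedged sketch: query SemOpenAlex for all triples about one work.
# Endpoint URL and GRAPH-based quad access are assumptions, not part
# of this dump.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W4306376994> ?p ?o . }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print each predicate/object/graph binding, mirroring the list below.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"], row["g"]["value"])
```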
- W4306376994 endingPage "3335" @default.
- W4306376994 startingPage "3335" @default.
- W4306376994 abstract "Recent advances in machine learning and deep learning algorithms, together with enhanced computational capabilities, have revolutionized healthcare and medicine. Research on assistive technology has benefited from these advances in creating visual substitutes for people with visual impairment. People with visual impairment face several obstacles in reading printed text, which is normally substituted with a pattern-based tactile system known as Braille. Over the past decade, many wearable and embedded assistive devices and solutions have been created to facilitate reading text for people with visual impairment. However, assistive tools for comprehending the meaning embedded in images or objects are still limited. In this paper, we present a deep learning approach for people with visual impairment that addresses this issue by representing and illustrating images embedded in printed text in a voice-based form. The proposed system is divided into three phases: collecting input images, extracting features for training the deep learning model, and evaluating performance. The approach leverages two deep learning architectures, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network, for extracting salient features, captioning images, and converting written text to speech. The CNN detects features from the printed image and its associated caption. The LSTM network is used as a captioning tool to describe the content detected in the images. The generated captions and detected text are converted into voice messages delivered to the user via a Text-To-Speech API. The proposed CNN-LSTM model is investigated using various network architectures, namely GoogleNet, AlexNet, ResNet, SqueezeNet, and VGG16. The empirical results show that the CNN-LSTM model with the ResNet architecture achieved the highest image-caption prediction accuracy of 83%." @default. (a minimal sketch of this CNN-LSTM pipeline follows the listing)
- W4306376994 created "2022-10-17" @default.
- W4306376994 creator A5015700815 @default.
- W4306376994 creator A5034196986 @default.
- W4306376994 creator A5068512068 @default.
- W4306376994 creator A5080637374 @default.
- W4306376994 creator A5083814185 @default.
- W4306376994 creator A5085432173 @default.
- W4306376994 date "2022-10-16" @default.
- W4306376994 modified "2023-10-18" @default.
- W4306376994 title "Deep Learning Reader for Visually Impaired" @default.
- W4306376994 cites W2036020661 @default.
- W4306376994 cites W2036785686 @default.
- W4306376994 cites W2108598243 @default.
- W4306376994 cites W2112796928 @default.
- W4306376994 cites W2123436043 @default.
- W4306376994 cites W2568039188 @default.
- W4306376994 cites W2618530766 @default.
- W4306376994 cites W2766191760 @default.
- W4306376994 cites W2789804061 @default.
- W4306376994 cites W2885195348 @default.
- W4306376994 cites W2886388922 @default.
- W4306376994 cites W2886445457 @default.
- W4306376994 cites W2898741879 @default.
- W4306376994 cites W2902170910 @default.
- W4306376994 cites W2919358988 @default.
- W4306376994 cites W2926287779 @default.
- W4306376994 cites W3001112922 @default.
- W4306376994 cites W3011451419 @default.
- W4306376994 cites W3017628311 @default.
- W4306376994 cites W3044554477 @default.
- W4306376994 cites W3086271189 @default.
- W4306376994 cites W3089686247 @default.
- W4306376994 cites W3107886507 @default.
- W4306376994 cites W3108280614 @default.
- W4306376994 cites W3153187775 @default.
- W4306376994 cites W3157298521 @default.
- W4306376994 cites W3164780212 @default.
- W4306376994 cites W3196915641 @default.
- W4306376994 cites W3214300304 @default.
- W4306376994 cites W3214498996 @default.
- W4306376994 cites W3217117458 @default.
- W4306376994 cites W3217580385 @default.
- W4306376994 cites W4214587440 @default.
- W4306376994 cites W4220971325 @default.
- W4306376994 cites W4288901872 @default.
- W4306376994 cites W68733909 @default.
- W4306376994 doi "https://doi.org/10.3390/electronics11203335" @default.
- W4306376994 hasPublicationYear "2022" @default.
- W4306376994 type Work @default.
- W4306376994 citedByCount "7" @default.
- W4306376994 countsByYear W43063769942022 @default.
- W4306376994 countsByYear W43063769942023 @default.
- W4306376994 crossrefType "journal-article" @default.
- W4306376994 hasAuthorship W4306376994A5015700815 @default.
- W4306376994 hasAuthorship W4306376994A5034196986 @default.
- W4306376994 hasAuthorship W4306376994A5068512068 @default.
- W4306376994 hasAuthorship W4306376994A5080637374 @default.
- W4306376994 hasAuthorship W4306376994A5083814185 @default.
- W4306376994 hasAuthorship W4306376994A5085432173 @default.
- W4306376994 hasBestOaLocation W43063769941 @default.
- W4306376994 hasConcept C108583219 @default.
- W4306376994 hasConcept C111919701 @default.
- W4306376994 hasConcept C115961682 @default.
- W4306376994 hasConcept C149635348 @default.
- W4306376994 hasConcept C150594956 @default.
- W4306376994 hasConcept C154945302 @default.
- W4306376994 hasConcept C157657479 @default.
- W4306376994 hasConcept C17744445 @default.
- W4306376994 hasConcept C199539241 @default.
- W4306376994 hasConcept C2778802812 @default.
- W4306376994 hasConcept C28490314 @default.
- W4306376994 hasConcept C36464697 @default.
- W4306376994 hasConcept C41008148 @default.
- W4306376994 hasConcept C554936623 @default.
- W4306376994 hasConcept C81363708 @default.
- W4306376994 hasConceptScore W4306376994C108583219 @default.
- W4306376994 hasConceptScore W4306376994C111919701 @default.
- W4306376994 hasConceptScore W4306376994C115961682 @default.
- W4306376994 hasConceptScore W4306376994C149635348 @default.
- W4306376994 hasConceptScore W4306376994C150594956 @default.
- W4306376994 hasConceptScore W4306376994C154945302 @default.
- W4306376994 hasConceptScore W4306376994C157657479 @default.
- W4306376994 hasConceptScore W4306376994C17744445 @default.
- W4306376994 hasConceptScore W4306376994C199539241 @default.
- W4306376994 hasConceptScore W4306376994C2778802812 @default.
- W4306376994 hasConceptScore W4306376994C28490314 @default.
- W4306376994 hasConceptScore W4306376994C36464697 @default.
- W4306376994 hasConceptScore W4306376994C41008148 @default.
- W4306376994 hasConceptScore W4306376994C554936623 @default.
- W4306376994 hasConceptScore W4306376994C81363708 @default.
- W4306376994 hasIssue "20" @default.
- W4306376994 hasLocation W43063769941 @default.
- W4306376994 hasLocation W43063769942 @default.
- W4306376994 hasOpenAccess W4306376994 @default.
- W4306376994 hasPrimaryLocation W43063769941 @default.
- W4306376994 hasRelatedWork W2731899572 @default.
- W4306376994 hasRelatedWork W2999805992 @default.
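The abstract above describes a pipeline in which a CNN (best results with ResNet) extracts image features, an LSTM decodes them into a caption, and a Text-To-Speech API voices the result. The paper's own code is not part of this record, so the following PyTorch sketch is purely illustrative: the ResNet-50 variant, embedding and hidden dimensions, vocabulary size, and the single-layer LSTM decoder are all assumptions, not the authors' configuration.

```python
# Illustrative sketch of a CNN-LSTM captioning model as described in the
# abstract. All sizes (vocab, embed_dim, hidden_dim) and the ResNet-50
# backbone are assumptions for demonstration only.
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: ResNet backbone with the classifier head removed.
        resnet = models.resnet50(weights=None)  # pretrained weights optional
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.project = nn.Linear(resnet.fc.in_features, embed_dim)
        # LSTM decoder that generates the caption token by token.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)   # (B, 2048) pooled features
        feats = self.project(feats).unsqueeze(1)  # (B, 1, E)
        tokens = self.embed(captions)             # (B, T, E)
        # The image embedding acts as the first "token" seen by the LSTM.
        out, _ = self.lstm(torch.cat([feats, tokens], dim=1))
        return self.head(out)                     # (B, T+1, vocab_size)

model = CaptionModel(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 5000])
# Per the abstract, a decoded caption would then be sent to a
# Text-To-Speech API (e.g. gTTS) to produce the voice message.
```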