Matches in SemOpenAlex for { <https://semopenalex.org/work/W4386490908> ?p ?o ?g. }
Showing items 1 to 79 of 79, with 100 items per page.
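The listing below can be reproduced programmatically. Here is a minimal sketch that sends the same quad pattern to the public SemOpenAlex SPARQL endpoint; the endpoint URL and the `requests`-based approach are assumptions for illustration, not part of the listing itself.

```python
# Minimal sketch: fetch every (predicate, object, graph) triple for this work.
# The endpoint URL is an assumption based on SemOpenAlex's public service;
# the query mirrors the pattern shown in the header above.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint

QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W4386490908> ?p ?o . }
}
"""

resp = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```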
- W4386490908 abstract "Purpose With the continuous growth of online education, the quality of online educational videos has become an increasingly prominent issue, causing online learners to face knowledge confusion. Existing mechanisms for controlling the quality of online educational videos suffer from subjectivity and low timeliness. An important aspect of monitoring the quality of online educational videos is the analysis of their metadata features and log data. With the development of artificial intelligence, deep learning techniques with strong predictive capabilities offer new methods for predicting the quality of online educational videos, effectively overcoming the shortcomings of existing approaches. The purpose of this study is to find a deep neural network that can model both the dynamic and static features of a video and the relationships between videos, so as to achieve dynamic monitoring of the quality of online educational videos. Design/methodology/approach The quality of a video cannot be measured directly. Following previous research, the authors use engagement to represent the level of video quality. Engagement is normalized participation time, representing the degree to which learners tend to participate in a video. Using existing public data sets, this study designs an online educational video engagement prediction model based on dynamic graph neural networks (DGNNs). The model is trained on dynamic graph data constructed from each video's static features and the dynamic features generated after its release. It includes a spatiotemporal feature extraction layer composed of DGNNs, which effectively extracts the temporal and spatial features contained in the video's dynamic graph data. The trained model is used to predict learners' engagement with a video on day T after its release, thereby achieving dynamic monitoring of video quality. Findings Models whose spatiotemporal feature extraction layers are built from any of four types of DGNNs can accurately predict the engagement level of online educational videos. Of these, the model using the temporal graph convolutional neural network has the smallest prediction error. In dynamic graph construction, cosine similarity and Euclidean distance functions with reasonable threshold settings yield a structurally appropriate dynamic graph. The amount of historical time-series data used in training affects the model's predictive performance: the more historical data used, the smaller the prediction error of the trained model. Research limitations/implications A limitation of this study is that, owing to memory constraints, not all video data in the data set were used to construct the dynamic graph. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional. Originality/value The authors propose an online educational video engagement prediction model based on DGNNs that achieves dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism for various online educational resource platforms." @default.
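The abstract names two concrete constructions without spelling them out: engagement as normalized participation time, and a graph whose edges link videos with similar feature vectors via cosine-similarity and Euclidean-distance thresholds. The sketch below illustrates both under stated assumptions; the function names, the mean-watch-time normalization, the threshold values, and the OR-combination of the two similarity criteria are illustrative guesses, not the authors' code.

```python
# Illustrative sketch of the two constructions described in the abstract.
# All names, the normalization, and the thresholding rule are assumptions.
import numpy as np

def engagement(watch_seconds: np.ndarray, video_length_s: float) -> float:
    """Normalized participation time: mean watch time over video length,
    clipped to [0, 1]. The paper's exact normalization may differ."""
    return float(np.clip(watch_seconds.mean() / video_length_s, 0.0, 1.0))

def build_dynamic_graph(features: np.ndarray,
                        cos_thresh: float = 0.9,
                        dist_thresh: float = 1.0) -> np.ndarray:
    """Adjacency matrix over videos: connect i and j when cosine similarity
    exceeds cos_thresh or Euclidean distance falls below dist_thresh.
    Threshold values and the OR combination are assumptions."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.maximum(norms, 1e-12)  # guard against zero rows
    cos_sim = normed @ normed.T
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    adj = ((cos_sim > cos_thresh) | (dist < dist_thresh)).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj
```

One common way to make such a graph "dynamic" is to rebuild the adjacency matrix from each day's post-release features, so the node set stays fixed while edges shift over time; whether the paper does exactly this is not stated in the abstract.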
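The findings single out the temporal graph convolutional network (T-GCN) as the variant with the smallest prediction error. In the generic T-GCN recipe, a graph convolution captures spatial structure at each time step and a GRU aggregates the resulting per-node sequences; the minimal layer below follows that recipe as a sketch, with layer sizes and the prediction head chosen for illustration rather than taken from the paper.

```python
# Minimal T-GCN-style layer: per-step graph convolution + GRU over time.
# Follows the generic T-GCN recipe, not the paper's exact architecture.
import torch
import torch.nn as nn

class TGCNSketch(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)  # GCN feature transform
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)       # day-T engagement (assumed head)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_videos, T, in_dim) time series; adj: (num_videos, num_videos)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        a_norm = adj / deg                          # row-normalized adjacency
        h = torch.relu(self.proj(torch.einsum("ij,jtf->itf", a_norm, x)))
        out, _ = self.gru(h)                        # temporal aggregation
        return self.head(out[:, -1, :]).squeeze(-1)  # one score per video
```

Consistent with the reported finding that more historical time-series data lowers the error, a longer input window T here simply gives the GRU more context before the day-T prediction.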
- W4386490908 created "2023-09-07" @default.
- W4386490908 creator A5018767794 @default.
- W4386490908 creator A5029008623 @default.
- W4386490908 creator A5041579592 @default.
- W4386490908 creator A5073452348 @default.
- W4386490908 date "2023-09-08" @default.
- W4386490908 modified "2023-09-27" @default.
- W4386490908 title "Online educational video engagement prediction based on dynamic graph neural networks" @default.
- W4386490908 cites W2042995780 @default.
- W4386490908 cites W2165027149 @default.
- W4386490908 cites W2169570446 @default.
- W4386490908 cites W2901504064 @default.
- W4386490908 cites W3121087702 @default.
- W4386490908 cites W3191804333 @default.
- W4386490908 cites W3204508881 @default.
- W4386490908 cites W4210682500 @default.
- W4386490908 cites W4252894279 @default.
- W4386490908 cites W4281935227 @default.
- W4386490908 cites W4285736194 @default.
- W4386490908 cites W4321021744 @default.
- W4386490908 cites W4378639249 @default.
- W4386490908 doi "https://doi.org/10.1108/ijwis-05-2023-0083" @default.
- W4386490908 hasPublicationYear "2023" @default.
- W4386490908 type Work @default.
- W4386490908 citedByCount "0" @default.
- W4386490908 crossrefType "journal-article" @default.
- W4386490908 hasAuthorship W4386490908A5018767794 @default.
- W4386490908 hasAuthorship W4386490908A5029008623 @default.
- W4386490908 hasAuthorship W4386490908A5041579592 @default.
- W4386490908 hasAuthorship W4386490908A5073452348 @default.
- W4386490908 hasConcept C103910844 @default.
- W4386490908 hasConcept C111472728 @default.
- W4386490908 hasConcept C119857082 @default.
- W4386490908 hasConcept C132525143 @default.
- W4386490908 hasConcept C136764020 @default.
- W4386490908 hasConcept C138885662 @default.
- W4386490908 hasConcept C154945302 @default.
- W4386490908 hasConcept C162324750 @default.
- W4386490908 hasConcept C176217482 @default.
- W4386490908 hasConcept C21547014 @default.
- W4386490908 hasConcept C2779530757 @default.
- W4386490908 hasConcept C41008148 @default.
- W4386490908 hasConcept C49774154 @default.
- W4386490908 hasConcept C50644808 @default.
- W4386490908 hasConcept C80444323 @default.
- W4386490908 hasConcept C93518851 @default.
- W4386490908 hasConceptScore W4386490908C103910844 @default.
- W4386490908 hasConceptScore W4386490908C111472728 @default.
- W4386490908 hasConceptScore W4386490908C119857082 @default.
- W4386490908 hasConceptScore W4386490908C132525143 @default.
- W4386490908 hasConceptScore W4386490908C136764020 @default.
- W4386490908 hasConceptScore W4386490908C138885662 @default.
- W4386490908 hasConceptScore W4386490908C154945302 @default.
- W4386490908 hasConceptScore W4386490908C162324750 @default.
- W4386490908 hasConceptScore W4386490908C176217482 @default.
- W4386490908 hasConceptScore W4386490908C21547014 @default.
- W4386490908 hasConceptScore W4386490908C2779530757 @default.
- W4386490908 hasConceptScore W4386490908C41008148 @default.
- W4386490908 hasConceptScore W4386490908C49774154 @default.
- W4386490908 hasConceptScore W4386490908C50644808 @default.
- W4386490908 hasConceptScore W4386490908C80444323 @default.
- W4386490908 hasConceptScore W4386490908C93518851 @default.
- W4386490908 hasLocation W43864909081 @default.
- W4386490908 hasOpenAccess W4386490908 @default.
- W4386490908 hasPrimaryLocation W43864909081 @default.
- W4386490908 hasRelatedWork W1515308544 @default.
- W4386490908 hasRelatedWork W1992807924 @default.
- W4386490908 hasRelatedWork W2062427795 @default.
- W4386490908 hasRelatedWork W2313595856 @default.
- W4386490908 hasRelatedWork W2324261804 @default.
- W4386490908 hasRelatedWork W2354642172 @default.
- W4386490908 hasRelatedWork W2360553097 @default.
- W4386490908 hasRelatedWork W2361349944 @default.
- W4386490908 hasRelatedWork W2775669459 @default.
- W4386490908 hasRelatedWork W2960369171 @default.
- W4386490908 isParatext "false" @default.
- W4386490908 isRetracted "false" @default.
- W4386490908 workType "article" @default.