Matches in SemOpenAlex for { <https://semopenalex.org/work/W3127089628> ?p ?o ?g. }
Showing items 1 to 84 of 84, with 100 items per page.
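The triple pattern in the header can be reproduced programmatically. The sketch below builds the corresponding SPARQL SELECT query; the endpoint URL is an assumption based on SemOpenAlex's public SPARQL service, and the actual HTTP request is left as a comment.

```python
# Sketch: build the SPARQL query behind the listing above.
# SPARQL_ENDPOINT is an assumed URL for SemOpenAlex's public endpoint.
SPARQL_ENDPOINT = "https://semopenalex.org/sparql"

def build_query(work_iri: str) -> str:
    """Return a SPARQL SELECT listing every predicate/object pair of a work."""
    return (
        "SELECT ?p ?o WHERE { "
        f"<{work_iri}> ?p ?o . "
        "}"
    )

query = build_query("https://semopenalex.org/work/W3127089628")

# Issuing the request would look roughly like this (commented out to avoid
# a network call; parameter names follow common SPARQL-over-HTTP conventions):
# import json, urllib.request
# from urllib.parse import urlencode
# url = SPARQL_ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
# rows = json.load(urllib.request.urlopen(url))["results"]["bindings"]
```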
- W3127089628 endingPage "7008" @default.
- W3127089628 startingPage "7000" @default.
- W3127089628 abstract "Human behavior understanding plays an increasingly important role in human-centered Industrial Internet of Video Things (IIoVT) systems, driven by the deep integration of artificial intelligence and the video-based industrial Internet of Things. However, training a deep computation model with a large number of parameters requires expensive computational resources, including high-performance computing units and large memory, which limits its effectiveness and efficiency for IIoVT applications. In this article, a tensor-train-based deep model is presented for video human behavior understanding to meet the requirements of IIoVT applications. It achieves competitive accuracy and training efficiency, with potential for combining artificial intelligence with front-end IIoVT systems. On the one hand, to achieve desirable accuracy, we improve the conventional CNN and adopt a recurrent neural network mechanism to enhance the video representation over time, taking the correlation between consecutive deep features into consideration. On the other hand, to enhance the inference capacity between spatial and temporal features, we apply a self-critical reinforcement learning mechanism in the parameter learning stage. Meanwhile, to further reduce the parameter storage size and meet the requirements of deploying deep neural networks on edge devices, the tensor-train mechanism is used, which transforms the parameter matrix into a tensor space and applies tensor decomposition to decrease the number of parameters generated during training. Finally, we conduct extensive experiments to evaluate our scheme, and the results demonstrate that our method improves training efficiency and saves memory for the deep computation model while achieving better accuracy." @default.
- W3127089628 created "2021-02-15" @default.
- W3127089628 creator A5036284035 @default.
- W3127089628 creator A5041640890 @default.
- W3127089628 creator A5050167068 @default.
- W3127089628 creator A5060713232 @default.
- W3127089628 creator A5066930256 @default.
- W3127089628 date "2022-10-01" @default.
- W3127089628 modified "2023-10-01" @default.
- W3127089628 title "Hybrid Deep Model for Human Behavior Understanding on Industrial Internet of Video Things" @default.
- W3127089628 cites W1573040851 @default.
- W3127089628 cites W2139501017 @default.
- W3127089628 cites W2238723005 @default.
- W3127089628 cites W2321627895 @default.
- W3127089628 cites W2547835662 @default.
- W3127089628 cites W2613964630 @default.
- W3127089628 cites W2739107216 @default.
- W3127089628 cites W2810334075 @default.
- W3127089628 cites W2811266402 @default.
- W3127089628 cites W2930749509 @default.
- W3127089628 cites W2962934715 @default.
- W3127089628 cites W2963545907 @default.
- W3127089628 cites W2979723138 @default.
- W3127089628 cites W2981151606 @default.
- W3127089628 cites W3001393655 @default.
- W3127089628 cites W3006631416 @default.
- W3127089628 cites W3017790932 @default.
- W3127089628 cites W3022778813 @default.
- W3127089628 cites W3025245379 @default.
- W3127089628 cites W3033009913 @default.
- W3127089628 cites W3098745339 @default.
- W3127089628 doi "https://doi.org/10.1109/tii.2021.3058276" @default.
- W3127089628 hasPublicationYear "2022" @default.
- W3127089628 type Work @default.
- W3127089628 sameAs 3127089628 @default.
- W3127089628 citedByCount "7" @default.
- W3127089628 countsByYear W31270896282022 @default.
- W3127089628 countsByYear W31270896282023 @default.
- W3127089628 crossrefType "journal-article" @default.
- W3127089628 hasAuthorship W3127089628A5036284035 @default.
- W3127089628 hasAuthorship W3127089628A5041640890 @default.
- W3127089628 hasAuthorship W3127089628A5050167068 @default.
- W3127089628 hasAuthorship W3127089628A5060713232 @default.
- W3127089628 hasAuthorship W3127089628A5066930256 @default.
- W3127089628 hasConcept C105339364 @default.
- W3127089628 hasConcept C108583219 @default.
- W3127089628 hasConcept C111919701 @default.
- W3127089628 hasConcept C119857082 @default.
- W3127089628 hasConcept C147168706 @default.
- W3127089628 hasConcept C154945302 @default.
- W3127089628 hasConcept C41008148 @default.
- W3127089628 hasConcept C50644808 @default.
- W3127089628 hasConcept C97541855 @default.
- W3127089628 hasConceptScore W3127089628C105339364 @default.
- W3127089628 hasConceptScore W3127089628C108583219 @default.
- W3127089628 hasConceptScore W3127089628C111919701 @default.
- W3127089628 hasConceptScore W3127089628C119857082 @default.
- W3127089628 hasConceptScore W3127089628C147168706 @default.
- W3127089628 hasConceptScore W3127089628C154945302 @default.
- W3127089628 hasConceptScore W3127089628C41008148 @default.
- W3127089628 hasConceptScore W3127089628C50644808 @default.
- W3127089628 hasConceptScore W3127089628C97541855 @default.
- W3127089628 hasFunder F4320321001 @default.
- W3127089628 hasIssue "10" @default.
- W3127089628 hasLocation W31270896281 @default.
- W3127089628 hasOpenAccess W3127089628 @default.
- W3127089628 hasPrimaryLocation W31270896281 @default.
- W3127089628 hasRelatedWork W2795261237 @default.
- W3127089628 hasRelatedWork W3014300295 @default.
- W3127089628 hasRelatedWork W3164822677 @default.
- W3127089628 hasRelatedWork W4223943233 @default.
- W3127089628 hasRelatedWork W4225161397 @default.
- W3127089628 hasRelatedWork W4312200629 @default.
- W3127089628 hasRelatedWork W4360585206 @default.
- W3127089628 hasRelatedWork W4364306694 @default.
- W3127089628 hasRelatedWork W4380075502 @default.
- W3127089628 hasRelatedWork W4380086463 @default.
- W3127089628 hasVolume "18" @default.
- W3127089628 isParatext "false" @default.
- W3127089628 isRetracted "false" @default.
- W3127089628 magId "3127089628" @default.
- W3127089628 workType "article" @default.
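The abstract above relies on tensor-train decomposition to shrink a model's parameter tensors. As a rough illustration of the idea (a generic TT-SVD sketch, not the paper's implementation), the following NumPy code factors a d-way tensor into a chain of 3-way cores via sequential truncated SVDs:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into tensor-train cores by sequential SVD.

    Each core k has shape (r_{k-1}, dims[k], r_k); ranks are capped at
    max_rank, which is where the parameter savings come from.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, len(S))
        cores.append(U[:, :r_next].reshape(rank, dims[k], r_next))
        # Fold the remaining factor back into a matrix for the next split.
        mat = (np.diag(S[:r_next]) @ Vt[:r_next]).reshape(r_next * dims[k + 1], -1)
        rank = r_next
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

With `max_rank` large enough the reconstruction is exact (up to floating-point error); choosing a small `max_rank` trades accuracy for a much smaller parameter count, which is the trade-off the abstract describes for edge deployment.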