Matches in SemOpenAlex for { <https://semopenalex.org/work/W4313403568> ?p ?o ?g. }
Showing items 1 to 76 of 76, with 100 items per page.
- W4313403568 abstract "Abstract Action recognition is described as the capability of determining the action that a human exhibit in the video. Latest innovations in either deep-learning or hand-crafted methods substantially increased the accuracy of action recognition. However, there are many issues, which keep action recognition task far from being solved. The task of human action recognition persists to be complicated and challenging due to the high complexity associated with human actions such as motion pattern variation, appearance variation, viewpoint variation, occlusions, background variation and camera motion. This paper presents a computational approach for human action recognition using video datasets through different stages: Detection, tracking of human and recognition of actions. Human detection and tracking are carried out using Gaussian Mixture Model (GMM) and Kalman filtering respectively. Different feature extraction techniques such as Scale Invariant Feature Transform (SIFT), Optical Flow Estimation, Bi-dimensional Empirical Mode Decomposition (BEMD), Discrete Wavelet Transform (DWT) are used to extract optimal features from the video frames. The features are fed to the Convolutional Neural Network classifier to recognize and classify the actions. Three datasets viz. KTH, Weizmann and Own created datasets are used to evaluate the performance of the developed method. Using SIFT, BEMD and DWT multiple feature extraction technique, the proposed method is called Hybrid Feature Extraction – Convolutional Neural Network based Video Action Recognition (HFE-CNN-VAR) method. The results of the work demonstrated that the HFE-CNN-VAR method enhanced the accuracy of action classification. The accuracy of classification is 99.33% for Weizmann dataset, 99.01% for KTH dataset and 90% for own dataset. Results of the experiment and comparative analysis shows that proposed approach surpasses when compared with other contemporary techniques." @default.
- W4313403568 created "2023-01-06" @default.
- W4313403568 creator A5018719467 @default.
- W4313403568 creator A5073045989 @default.
- W4313403568 date "2022-12-19" @default.
- W4313403568 modified "2023-10-15" @default.
- W4313403568 title "Human Action Detection and Recognition: A Pragmatic Approach using Multiple Feature Extraction Techniques and Convolutional Neural Networks" @default.
- W4313403568 cites W110553271 @default.
- W4313403568 cites W1994048688 @default.
- W4313403568 cites W2002261403 @default.
- W4313403568 cites W2041750176 @default.
- W4313403568 cites W2043329319 @default.
- W4313403568 cites W2085201020 @default.
- W4313403568 cites W2104474489 @default.
- W4313403568 cites W2125337786 @default.
- W4313403568 cites W2282295506 @default.
- W4313403568 cites W2345308841 @default.
- W4313403568 cites W2419376963 @default.
- W4313403568 cites W2547151594 @default.
- W4313403568 cites W2549801412 @default.
- W4313403568 cites W2618530766 @default.
- W4313403568 cites W2750113774 @default.
- W4313403568 cites W2766708151 @default.
- W4313403568 cites W2771390659 @default.
- W4313403568 cites W2775651131 @default.
- W4313403568 cites W2775870354 @default.
- W4313403568 cites W2790490299 @default.
- W4313403568 cites W2796096089 @default.
- W4313403568 cites W2810765467 @default.
- W4313403568 cites W3024934424 @default.
- W4313403568 cites W3143569692 @default.
- W4313403568 cites W4241752359 @default.
- W4313403568 doi "https://doi.org/10.21203/rs.3.rs-2379758/v1" @default.
- W4313403568 hasPublicationYear "2022" @default.
- W4313403568 type Work @default.
- W4313403568 citedByCount "0" @default.
- W4313403568 crossrefType "posted-content" @default.
- W4313403568 hasAuthorship W4313403568A5018719467 @default.
- W4313403568 hasAuthorship W4313403568A5073045989 @default.
- W4313403568 hasBestOaLocation W43134035681 @default.
- W4313403568 hasConcept C115961682 @default.
- W4313403568 hasConcept C153180895 @default.
- W4313403568 hasConcept C154945302 @default.
- W4313403568 hasConcept C155542232 @default.
- W4313403568 hasConcept C31972630 @default.
- W4313403568 hasConcept C41008148 @default.
- W4313403568 hasConcept C52622490 @default.
- W4313403568 hasConcept C61265191 @default.
- W4313403568 hasConcept C81363708 @default.
- W4313403568 hasConcept C95623464 @default.
- W4313403568 hasConceptScore W4313403568C115961682 @default.
- W4313403568 hasConceptScore W4313403568C153180895 @default.
- W4313403568 hasConceptScore W4313403568C154945302 @default.
- W4313403568 hasConceptScore W4313403568C155542232 @default.
- W4313403568 hasConceptScore W4313403568C31972630 @default.
- W4313403568 hasConceptScore W4313403568C41008148 @default.
- W4313403568 hasConceptScore W4313403568C52622490 @default.
- W4313403568 hasConceptScore W4313403568C61265191 @default.
- W4313403568 hasConceptScore W4313403568C81363708 @default.
- W4313403568 hasConceptScore W4313403568C95623464 @default.
- W4313403568 hasLocation W43134035681 @default.
- W4313403568 hasOpenAccess W4313403568 @default.
- W4313403568 hasPrimaryLocation W43134035681 @default.
- W4313403568 hasRelatedWork W1582226822 @default.
- W4313403568 hasRelatedWork W2022942246 @default.
- W4313403568 hasRelatedWork W2059299633 @default.
- W4313403568 hasRelatedWork W2064297726 @default.
- W4313403568 hasRelatedWork W2076289882 @default.
- W4313403568 hasRelatedWork W2344014954 @default.
- W4313403568 hasRelatedWork W2602506882 @default.
- W4313403568 hasRelatedWork W2621332360 @default.
- W4313403568 hasRelatedWork W2950902107 @default.
- W4313403568 hasRelatedWork W2995914718 @default.
- W4313403568 isParatext "false" @default.
- W4313403568 isRetracted "false" @default.
- W4313403568 workType "article" @default.
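The abstract above describes a multi-stage pipeline: GMM-based human detection, Kalman-filter tracking, hybrid SIFT/BEMD/DWT feature extraction, and CNN classification. The following is a minimal sketch of the detection, tracking, and feature-extraction stages using OpenCV and PyWavelets; it is not the authors' implementation. The parameter values (history, varThreshold), the constant-velocity motion model, and the "haar" wavelet are illustrative assumptions, and the BEMD, optical-flow, and CNN stages are omitted for brevity.

```python
# Illustrative sketch of the pipeline stages described in the abstract of
# W4313403568. All parameter choices here are assumptions for demonstration.
import cv2
import numpy as np
import pywt  # PyWavelets, for the DWT stage

# Stage 1: human detection via Gaussian Mixture Model background subtraction.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

# Stage 2: tracking with a constant-velocity Kalman filter (state: x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

sift = cv2.SIFT_create()

def process_frame(frame):
    """Detect, track, and extract hybrid features from one BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detection: foreground mask -> largest contour as the person region.
    # Threshold out shadow pixels (marked 127 by MOG2's shadow detection).
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    # Tracking: correct the Kalman filter with the detected centre, then predict.
    centre = np.array([[x + w / 2], [y + h / 2]], dtype=np.float32)
    kf.correct(centre)
    kf.predict()

    roi = gray[y:y + h, x:x + w]

    # Hybrid feature extraction: SIFT descriptors and 2-D DWT sub-bands.
    # (BEMD and optical flow are omitted; OpenCV has no BEMD implementation.)
    _, sift_desc = sift.detectAndCompute(roi, None)
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(np.float32), "haar")

    # In the described method, such features would then be fed to a CNN
    # classifier to produce the action label.
    return sift_desc, (cA, cH, cV, cD)
```

A driver loop would read frames with `cv2.VideoCapture`, call `process_frame` on each, and aggregate the returned features over time before classification; how the paper fuses the per-frame features for the CNN is not specified in this record.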