Matches in SemOpenAlex for { <https://semopenalex.org/work/W4385626877> ?p ?o ?g. }
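The pattern above can be turned into an executable SPARQL query. A minimal sketch in Python, which only builds the query string; the public endpoint URL (`https://semopenalex.org/sparql`) is an assumption — check the SemOpenAlex documentation before POSTing the query there:

```python
# Build the SPARQL query corresponding to the quad pattern
# { <work> ?p ?o ?g } — i.e. all predicate/object pairs for the
# work, together with the named graph they appear in.
WORK = "https://semopenalex.org/work/W4385626877"
ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint, verify

query = f"""
SELECT ?p ?o ?g
WHERE {{
  GRAPH ?g {{ <{WORK}> ?p ?o . }}
}}
"""

print(query)
```

Sending `query` to the endpoint (e.g. via an HTTP POST with `Accept: application/sparql-results+json`) should return the triples listed below.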
- W4385626877 endingPage "13" @default.
- W4385626877 startingPage "1" @default.
- W4385626877 abstract "Appearance and motion are essential features in Video Salient Object Detection (VSOD) tasks. Most of the existing approaches utilize local features and thus fail to understand both appearance- and motion-specific semantics at the global level. Hence, these methods are unable to perform in unconstrained scenarios where multiple challenges, such as partial occlusion, motion blur, noise, cluttered backgrounds, etc., exist. Moreover, these approaches require substantial computational resources due to their complex structures, which limits their applicability to real-world deployment. To resolve these issues and to achieve a balance between accuracy and computational complexity, in this paper a Dilation Separable Convolution Network (DSCNet) is proposed, which is equipped with a Dilation Attention Fusion Module (DAFM), a Bi-directional Cross-modality Fusion Module (BCFM), and a Saliency Prediction Module (SPM) to extract enhanced multi-scale motion and appearance features without increasing the model complexity. Further, a Bi-directional Separable Convolution Network (BSC-Net), equipped with Separable Convolution Modules (SCMs) and FlowNet2.0, is proposed to utilize multi-scale contextual information across appearance cues and generate enhanced multi-scale motion maps. For faster and better training of the DSCNet model, we propose a novel Stochastic Gradient-based Firefly Algorithm (SGFA), which adaptively balances exploration and exploitation in multi-scale, cross-modal embedded sub-spaces. With the help of the proposed SGFA algorithm, the DSCNet+ model is constructed on top of DSCNet, which further improves the results in terms of training speed as well as other evaluation metrics. The proposed models are evaluated on six benchmark datasets, and a detailed comparative study is provided against sixteen state-of-the-art (SOTA) models.
One of the major highlights of the work is the significant performance of the proposed models on the most difficult DAVSOD-Diff dataset, which best reflects challenging real-world scenarios." @default.
- W4385626877 created "2023-08-08" @default.
- W4385626877 creator A5034834663 @default.
- W4385626877 creator A5063067846 @default.
- W4385626877 creator A5082365220 @default.
- W4385626877 date "2023-01-01" @default.
- W4385626877 modified "2023-09-29" @default.
- W4385626877 title "Novel Dilated Separable Convolution Networks for Efficient Video Salient Object Detection in the Wild" @default.
- W4385626877 cites W2030994305 @default.
- W4385626877 cites W2138682569 @default.
- W4385626877 cites W2154943049 @default.
- W4385626877 cites W2212077366 @default.
- W4385626877 cites W2470139095 @default.
- W4385626877 cites W2511458122 @default.
- W4385626877 cites W2558027072 @default.
- W4385626877 cites W2560474170 @default.
- W4385626877 cites W2564998703 @default.
- W4385626877 cites W2591696292 @default.
- W4385626877 cites W2610147486 @default.
- W4385626877 cites W2738760021 @default.
- W4385626877 cites W2798823518 @default.
- W4385626877 cites W2799157347 @default.
- W4385626877 cites W2799239273 @default.
- W4385626877 cites W2890853604 @default.
- W4385626877 cites W2895340898 @default.
- W4385626877 cites W2916797271 @default.
- W4385626877 cites W2931853599 @default.
- W4385626877 cites W2957408986 @default.
- W4385626877 cites W2963548592 @default.
- W4385626877 cites W2965638232 @default.
- W4385626877 cites W2967199722 @default.
- W4385626877 cites W2984144959 @default.
- W4385626877 cites W2986056979 @default.
- W4385626877 cites W2996803365 @default.
- W4385626877 cites W2997217064 @default.
- W4385626877 cites W2997487053 @default.
- W4385626877 cites W2999458807 @default.
- W4385626877 cites W3034320401 @default.
- W4385626877 cites W3035487542 @default.
- W4385626877 cites W3097337310 @default.
- W4385626877 cites W3097815369 @default.
- W4385626877 cites W3104844437 @default.
- W4385626877 cites W3106773277 @default.
- W4385626877 cites W3109908659 @default.
- W4385626877 cites W3110030584 @default.
- W4385626877 cites W3136838953 @default.
- W4385626877 cites W3175841511 @default.
- W4385626877 cites W3196248320 @default.
- W4385626877 cites W3196444763 @default.
- W4385626877 cites W3202285299 @default.
- W4385626877 cites W3204643350 @default.
- W4385626877 cites W3207101999 @default.
- W4385626877 cites W4214542306 @default.
- W4385626877 cites W4221142306 @default.
- W4385626877 cites W4285058230 @default.
- W4385626877 cites W4286370722 @default.
- W4385626877 cites W4312526532 @default.
- W4385626877 doi "https://doi.org/10.1109/tim.2023.3302911" @default.
- W4385626877 hasPublicationYear "2023" @default.
- W4385626877 type Work @default.
- W4385626877 citedByCount "0" @default.
- W4385626877 crossrefType "journal-article" @default.
- W4385626877 hasAuthorship W4385626877A5034834663 @default.
- W4385626877 hasAuthorship W4385626877A5063067846 @default.
- W4385626877 hasAuthorship W4385626877A5082365220 @default.
- W4385626877 hasConcept C11413529 @default.
- W4385626877 hasConcept C114614502 @default.
- W4385626877 hasConcept C132094186 @default.
- W4385626877 hasConcept C153180895 @default.
- W4385626877 hasConcept C154945302 @default.
- W4385626877 hasConcept C179799912 @default.
- W4385626877 hasConcept C2776151529 @default.
- W4385626877 hasConcept C2780757906 @default.
- W4385626877 hasConcept C31972630 @default.
- W4385626877 hasConcept C33923547 @default.
- W4385626877 hasConcept C41008148 @default.
- W4385626877 hasConcept C45347329 @default.
- W4385626877 hasConcept C50644808 @default.
- W4385626877 hasConcept C554190296 @default.
- W4385626877 hasConcept C76155785 @default.
- W4385626877 hasConceptScore W4385626877C11413529 @default.
- W4385626877 hasConceptScore W4385626877C114614502 @default.
- W4385626877 hasConceptScore W4385626877C132094186 @default.
- W4385626877 hasConceptScore W4385626877C153180895 @default.
- W4385626877 hasConceptScore W4385626877C154945302 @default.
- W4385626877 hasConceptScore W4385626877C179799912 @default.
- W4385626877 hasConceptScore W4385626877C2776151529 @default.
- W4385626877 hasConceptScore W4385626877C2780757906 @default.
- W4385626877 hasConceptScore W4385626877C31972630 @default.
- W4385626877 hasConceptScore W4385626877C33923547 @default.
- W4385626877 hasConceptScore W4385626877C41008148 @default.
- W4385626877 hasConceptScore W4385626877C45347329 @default.
- W4385626877 hasConceptScore W4385626877C50644808 @default.
- W4385626877 hasConceptScore W4385626877C554190296 @default.
- W4385626877 hasConceptScore W4385626877C76155785 @default.
- W4385626877 hasLocation W43856268771 @default.
- W4385626877 hasOpenAccess W4385626877 @default.
- W4385626877 hasPrimaryLocation W43856268771 @default.