Matches in SemOpenAlex for { <https://semopenalex.org/work/W4367016695> ?p ?o ?g. }
Showing items 1 to 62 of 62 with 100 items per page.
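The quad pattern above can be issued programmatically against a SPARQL endpoint. The sketch below builds the equivalent query and optionally posts it; the endpoint URL `https://semopenalex.org/sparql` is an assumption about the public SemOpenAlex service, so verify it before relying on this.

```python
# Minimal sketch for querying SemOpenAlex for all quads whose subject is a
# given work. ENDPOINT is an assumed URL for the public SPARQL service.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumption: public endpoint


def build_query(work_iri: str) -> str:
    """Return a SPARQL query matching { <work> ?p ?o ?g }, i.e. every
    predicate/object pair for the work, together with its named graph."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{work_iri}> ?p ?o . }} "
        "}"
    )


def run_query(query: str) -> dict:
    """POST the query and parse SPARQL JSON results (requires network)."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


query = build_query("https://semopenalex.org/work/W4367016695")
```

`run_query(query)` would then return the 62 bindings listed below as SPARQL JSON results.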
- W4367016695 endingPage "14" @default.
- W4367016695 startingPage "1" @default.
- W4367016695 abstract "Video visual relation detection (VidVRD) aims at abstracting structured relations in the form of <subject-predicate-object> from videos. This triplet formulation makes the search space extremely large and the relation distribution unbalanced. Existing works usually predict relationships from visual, spatial, and semantic cues. Among them, semantic cues explore the semantic connections between objects, which is crucial for transferring knowledge across relations. However, most of these works extract semantic cues by simply mapping object labels to classification features, which ignores the contextual surroundings and results in poor performance on low-frequency relations. To alleviate these issues, we propose a novel network, termed Contextual Knowledge Embedded Relation Network (CKERN), to facilitate VidVRD by establishing contextual knowledge embeddings for detected object pairs in relations from two aspects: commonsense attributes and prior linguistic dependencies. Specifically, we take the pair as a query to extract relational facts from a commonsense knowledge base, then encode them to explicitly construct semantic surroundings for relations. In addition, the statistics of object pairs with different predicates, distilled from large-scale visual relations, are taken into account to represent the linguistic regularity of relations. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed model." @default.
- W4367016695 created "2023-04-27" @default.
- W4367016695 creator A5022081570 @default.
- W4367016695 creator A5087631670 @default.
- W4367016695 date "2023-01-01" @default.
- W4367016695 modified "2023-09-24" @default.
- W4367016695 title "Video Visual Relation Detection with Contextual Knowledge Embedding" @default.
- W4367016695 doi "https://doi.org/10.1109/tkde.2023.3270328" @default.
- W4367016695 hasPublicationYear "2023" @default.
- W4367016695 type Work @default.
- W4367016695 citedByCount "0" @default.
- W4367016695 crossrefType "journal-article" @default.
- W4367016695 hasAuthorship W4367016695A5022081570 @default.
- W4367016695 hasAuthorship W4367016695A5087631670 @default.
- W4367016695 hasConcept C124101348 @default.
- W4367016695 hasConcept C154945302 @default.
- W4367016695 hasConcept C204321447 @default.
- W4367016695 hasConcept C23123220 @default.
- W4367016695 hasConcept C25343380 @default.
- W4367016695 hasConcept C27511587 @default.
- W4367016695 hasConcept C2781238097 @default.
- W4367016695 hasConcept C30542707 @default.
- W4367016695 hasConcept C33923547 @default.
- W4367016695 hasConcept C41008148 @default.
- W4367016695 hasConcept C41608201 @default.
- W4367016695 hasConcept C45357846 @default.
- W4367016695 hasConcept C4554734 @default.
- W4367016695 hasConcept C85407183 @default.
- W4367016695 hasConcept C94375191 @default.
- W4367016695 hasConceptScore W4367016695C124101348 @default.
- W4367016695 hasConceptScore W4367016695C154945302 @default.
- W4367016695 hasConceptScore W4367016695C204321447 @default.
- W4367016695 hasConceptScore W4367016695C23123220 @default.
- W4367016695 hasConceptScore W4367016695C25343380 @default.
- W4367016695 hasConceptScore W4367016695C27511587 @default.
- W4367016695 hasConceptScore W4367016695C2781238097 @default.
- W4367016695 hasConceptScore W4367016695C30542707 @default.
- W4367016695 hasConceptScore W4367016695C33923547 @default.
- W4367016695 hasConceptScore W4367016695C41008148 @default.
- W4367016695 hasConceptScore W4367016695C41608201 @default.
- W4367016695 hasConceptScore W4367016695C45357846 @default.
- W4367016695 hasConceptScore W4367016695C4554734 @default.
- W4367016695 hasConceptScore W4367016695C85407183 @default.
- W4367016695 hasConceptScore W4367016695C94375191 @default.
- W4367016695 hasLocation W43670166951 @default.
- W4367016695 hasOpenAccess W4367016695 @default.
- W4367016695 hasPrimaryLocation W43670166951 @default.
- W4367016695 hasRelatedWork W1518161249 @default.
- W4367016695 hasRelatedWork W1584662895 @default.
- W4367016695 hasRelatedWork W2120460904 @default.
- W4367016695 hasRelatedWork W2123448637 @default.
- W4367016695 hasRelatedWork W2181698829 @default.
- W4367016695 hasRelatedWork W2551237228 @default.
- W4367016695 hasRelatedWork W2751404079 @default.
- W4367016695 hasRelatedWork W2787051473 @default.
- W4367016695 hasRelatedWork W2904134584 @default.
- W4367016695 hasRelatedWork W3107474891 @default.
- W4367016695 isParatext "false" @default.
- W4367016695 isRetracted "false" @default.
- W4367016695 workType "article" @default.
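Each item in the listing follows the same shape: a subject ID, a predicate, an object (an ID or a quoted literal), and the trailing `@default.` graph marker. A small parser for that line format can be sketched as follows; the `- subject predicate object @default.` layout is assumed from the listing above, not from any official SemOpenAlex serialization.

```python
# Sketch: split one line of the listing into (subject, predicate, object).
# Assumes the "- <subject> <predicate> <object> @default." layout seen above.
def parse_line(line: str) -> tuple[str, str, str]:
    body = line.strip().lstrip("- ").removesuffix("@default.").strip()
    # The object may contain spaces (quoted literals), so split at most twice.
    subject, predicate, obj = body.split(" ", 2)
    return subject, predicate, obj


s, p, o = parse_line('- W4367016695 citedByCount "0" @default.')
# → ("W4367016695", "citedByCount", '"0"')
```

Note that the object is returned verbatim, so quoted literals keep their quotes; stripping them (and coercing types such as counts and dates) is left to the caller.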