Matches in SemOpenAlex for { <https://semopenalex.org/work/W2767990578> ?p ?o ?g. }
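The triple pattern in the header can be rerun directly against SemOpenAlex's public SPARQL endpoint. A minimal sketch in Python, assuming the endpoint URL https://semopenalex.org/sparql and standard SPARQL-protocol JSON results (the named-graph variable ?g is dropped for brevity):

```python
import requests

# Assumed public SPARQL endpoint for SemOpenAlex (not stated on this page).
ENDPOINT = "https://semopenalex.org/sparql"

# Same triple pattern as the header above: every predicate/object pair
# for the work W2767990578.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2767990578> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for binding in resp.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```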
- W2767990578 abstract "Video is one of the major media humans use to store information. As recording and storage devices become cheaper, enormous numbers of videos are generated. This unprecedented volume creates considerable new requirements for accessing videos. Therefore, how to perform video filtering, i.e., obtaining a set of relevant video clips from a video repository, has become a challenging research topic. In previous work, video filtering required the user to enter text to filter out irrelevant video clips, which for a long time made video filtering methods identical to document filtering methods. However, text-based video filtering has three limitations: (1) it ignores the rich content of the videos; (2) it is inapplicable when texts are absent, incomplete, or sparse; (3) it fails to support in-video filtering. These limitations leave text-based video filtering unable to meet the new requirements. In recent years, computers have become increasingly able to parse meaningful content from videos, and this non-textual content is complementary to text in many cases. Motivated by this, video filtering research has gradually shifted from text-based to non-textual-based approaches. Following this direction, we study how to systematically improve video filtering at three levels.
Frame-level. We propose to use detected visual objects to filter videos. In previous work, visual objects were obtained manually: humans were responsible for identifying the visual objects and connecting them across the videos. Obtaining visual objects this way is costly when the data keep changing. Therefore, we propose to leverage object detection to obtain visual objects automatically for frame-level filtering. However, object detection by itself cannot identify and connect visual objects the way a human can. To achieve that, we propose a hybrid method to identify and connect the visual objects, divided into local merge, propagation, and global merge. We evaluated the proposed method on a real-world dataset and studied two issues: (1) whether the identifications and connections were accurate, and (2) how the environment influenced the proposed method. The experimental results were promising and demonstrated that using detected visual objects for frame-level filtering is feasible.
Video-level. We discover a new, compact content set for surveillance video filtering. Surveillance video filtering, namely surveillance event detection (SED), is important for many safety and security applications: it aims to raise alarms for events in surveillance videos. Unlike classical video filtering, which extracts video content vectors from diverse sources, SED can leverage only motion content, and the previous state-of-the-art content set for surveillance consists of STIP and MoSIFT. In our study, we propose a new content set based on dense trajectories (DT) and improved dense trajectories (IDT). According to our analysis, the new content set captures both individual motions and crowd motions in surveillance footage, which leads to higher filtering accuracy in our experiments. Based on the new content set, we investigated how feature transformation, codebook training, the encoding process, and vector normalization influence filtering accuracy. The corresponding findings helped us win the TRECVID SED 2015 competition.
User-level. We propose to leverage a rich content set to filter videos. User-level filtering, namely video recommendation, performs personalized filtering for individuals based on user collaboration and video content vectors. Previous work combined user collaboration with text to filter videos, which usually makes filtering inaccurate when text is scarce. In our study, we make user collaboration work with state-of-the-art non-textual content vectors. We used diverse non-textual content vectors to represent the videos and reproduced existing methods on top of them. Through this reproduction, we found that all existing methods have significant drawbacks that limit filtering accuracy. To address these problems, we propose the collaborative embedding regression (CER) method for more accurate user-level video filtering. Based on CER, we further studied how to combine the results from multiple content types into a unified result. The experiments showed the high accuracy of the proposed methods in different scenarios. Additionally, a simulation experiment showed that filtering accuracy improves when text is scarce." @default.
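The frame-level "local merge" described in the abstract is, in essence, linking per-frame detections of the same object across adjacent frames. The abstract gives no algorithmic details; below is a minimal illustrative sketch assuming greedy intersection-over-union (IoU) matching between consecutive frames, a common tracking-by-detection baseline rather than the author's exact method:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_detections(frames: List[List[Box]], thr: float = 0.5) -> List[List[Tuple[int, int]]]:
    """Greedily chain detections across consecutive frames into tracks.

    Returns tracks as lists of (frame_index, detection_index) pairs.
    """
    tracks: List[List[Tuple[int, int]]] = []
    active: List[List[Tuple[int, int]]] = []
    for t, boxes in enumerate(frames):
        unmatched = set(range(len(boxes)))
        next_active = []
        for track in active:
            ft, fi = track[-1]
            # Best IoU match among detections not yet claimed in this frame.
            best, best_j = thr, None
            for j in unmatched:
                score = iou(frames[ft][fi], boxes[j])
                if score >= best:
                    best, best_j = score, j
            if best_j is not None:
                track.append((t, best_j))
                unmatched.discard(best_j)
                next_active.append(track)
            else:
                tracks.append(track)  # track ends; a "global merge" pass could revive it
        for j in unmatched:
            next_active.append([(t, j)])  # unmatched detection starts a new track
        active = next_active
    tracks.extend(active)
    return tracks
```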
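The video-level pipeline the abstract names (codebook training, encoding, vector normalization) follows the classic bag-of-visual-words recipe for motion descriptors. A minimal sketch, assuming hard-assignment encoding with a k-means codebook and power plus L2 normalization; the dissertation's actual encoding choices may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-ins for local motion descriptors (e.g., per-trajectory HOG/HOF/MBH
# blocks from DT/IDT); real descriptors would be extracted from the videos.
train_desc = rng.normal(size=(5000, 96))  # descriptors pooled over training clips
clip_desc = rng.normal(size=(300, 96))    # descriptors from one clip to encode

# 1) Codebook training: cluster training descriptors into visual words.
k = 64
codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(train_desc)

# 2) Encoding: hard-assign each clip descriptor to its nearest word and
#    accumulate a histogram (bag of visual words).
words = codebook.predict(clip_desc)
hist = np.bincount(words, minlength=k).astype(float)

# 3) Normalization: power ("signed square root") then L2, a common recipe
#    before feeding the vector to a linear classifier for event detection.
hist = np.sqrt(hist)
hist /= np.linalg.norm(hist) + 1e-9

print(hist.shape, round(float(hist.sum()), 3))
```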
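At the user level, the collaborative embedding regression (CER) idea can be illustrated as weighted matrix factorization in which item factors are regressed from content vectors, so that videos with no interactions can still be scored. The dissertation's exact objective is not reproduced on this page; the following is a minimal gradient-descent sketch of that general idea in NumPy, with all sizes and hyperparameter values assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: implicit user-video interactions R and per-video content
# vectors X (e.g., pooled non-textual features). All sizes are illustrative.
n_users, n_items, n_feat, k = 50, 40, 30, 8
R = (rng.random((n_users, n_items)) < 0.05).astype(float)  # observed interactions
X = rng.normal(size=(n_items, n_feat))                     # content vectors

# Confidence weights: observed interactions weigh more than missing ones.
a, b = 1.0, 0.01
C = np.where(R > 0, a, b)

# Latent factors: U for users, V for items; W regresses V from content X,
# so a brand-new video can be scored as U @ (W.T @ x).
U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
W = 0.1 * rng.normal(size=(n_feat, k))
lam_u, lam_v, lam_w, lr = 0.1, 1.0, 0.1, 0.01

for step in range(200):
    E = C * (U @ V.T - R)   # confidence-weighted prediction error
    D = V - X @ W           # deviation of item factors from the content fit
    grad_U = E @ V + lam_u * U
    grad_V = E.T @ U + lam_v * D
    grad_W = -lam_v * X.T @ D + lam_w * W
    U -= lr * grad_U
    V -= lr * grad_V
    W -= lr * grad_W

# Score a new video purely from its content vector (the text-scarce case
# the abstract highlights): one score per user.
x_new = rng.normal(size=n_feat)
scores = U @ (W.T @ x_new)
print(scores[:5])
```

The content-regression term lam_v * ||V - X W||^2 is what lets the factorization fall back on non-textual content when collaborative signal is scarce, which matches the abstract's claim that accuracy improves when text is scarce.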
- W2767990578 created "2017-11-17" @default.
- W2767990578 creator A5030364281 @default.
- W2767990578 date "2017-09-05" @default.
- W2767990578 modified "2023-09-27" @default.
- W2767990578 title "Multi-level Video Filtering Using Non-textual Contents" @default.
- W2767990578 cites W1497265063 @default.
- W2767990578 cites W1950136256 @default.
- W2767990578 cites W1995903777 @default.
- W2767990578 cites W2036718463 @default.
- W2767990578 cites W2117696503 @default.
- W2767990578 cites W2118097920 @default.
- W2767990578 cites W2130043461 @default.
- W2767990578 cites W2140942692 @default.
- W2767990578 cites W2142521298 @default.
- W2767990578 cites W2157881433 @default.
- W2767990578 cites W2158592639 @default.
- W2767990578 cites W2162272882 @default.
- W2767990578 cites W2251084241 @default.
- W2767990578 cites W2509893387 @default.
- W2767990578 doi "https://doi.org/10.14264/uql.2017.761" @default.
- W2767990578 hasPublicationYear "2017" @default.
- W2767990578 type Work @default.
- W2767990578 sameAs 2767990578 @default.
- W2767990578 citedByCount "0" @default.
- W2767990578 crossrefType "dissertation" @default.
- W2767990578 hasAuthorship W2767990578A5030364281 @default.
- W2767990578 hasBestOaLocation W27679905782 @default.
- W2767990578 hasConcept C106131492 @default.
- W2767990578 hasConcept C111919701 @default.
- W2767990578 hasConcept C126042441 @default.
- W2767990578 hasConcept C154945302 @default.
- W2767990578 hasConcept C177264268 @default.
- W2767990578 hasConcept C186644900 @default.
- W2767990578 hasConcept C199360897 @default.
- W2767990578 hasConcept C202474056 @default.
- W2767990578 hasConcept C23123220 @default.
- W2767990578 hasConcept C2780310081 @default.
- W2767990578 hasConcept C2781238097 @default.
- W2767990578 hasConcept C31972630 @default.
- W2767990578 hasConcept C41008148 @default.
- W2767990578 hasConcept C49774154 @default.
- W2767990578 hasConcept C76155785 @default.
- W2767990578 hasConcept C98045186 @default.
- W2767990578 hasConceptScore W2767990578C106131492 @default.
- W2767990578 hasConceptScore W2767990578C111919701 @default.
- W2767990578 hasConceptScore W2767990578C126042441 @default.
- W2767990578 hasConceptScore W2767990578C154945302 @default.
- W2767990578 hasConceptScore W2767990578C177264268 @default.
- W2767990578 hasConceptScore W2767990578C186644900 @default.
- W2767990578 hasConceptScore W2767990578C199360897 @default.
- W2767990578 hasConceptScore W2767990578C202474056 @default.
- W2767990578 hasConceptScore W2767990578C23123220 @default.
- W2767990578 hasConceptScore W2767990578C2780310081 @default.
- W2767990578 hasConceptScore W2767990578C2781238097 @default.
- W2767990578 hasConceptScore W2767990578C31972630 @default.
- W2767990578 hasConceptScore W2767990578C41008148 @default.
- W2767990578 hasConceptScore W2767990578C49774154 @default.
- W2767990578 hasConceptScore W2767990578C76155785 @default.
- W2767990578 hasConceptScore W2767990578C98045186 @default.
- W2767990578 hasLocation W27679905781 @default.
- W2767990578 hasLocation W27679905782 @default.
- W2767990578 hasOpenAccess W2767990578 @default.
- W2767990578 hasPrimaryLocation W27679905781 @default.
- W2767990578 hasRelatedWork W1863298567 @default.
- W2767990578 hasRelatedWork W1975092302 @default.
- W2767990578 hasRelatedWork W2033502036 @default.
- W2767990578 hasRelatedWork W2054904802 @default.
- W2767990578 hasRelatedWork W2182939304 @default.
- W2767990578 hasRelatedWork W2385043999 @default.
- W2767990578 hasRelatedWork W2395529639 @default.
- W2767990578 hasRelatedWork W2570628162 @default.
- W2767990578 hasRelatedWork W2610533690 @default.
- W2767990578 hasRelatedWork W2793476612 @default.
- W2767990578 hasRelatedWork W2914259938 @default.
- W2767990578 hasRelatedWork W2922402504 @default.
- W2767990578 hasRelatedWork W2926312400 @default.
- W2767990578 hasRelatedWork W2939519298 @default.
- W2767990578 hasRelatedWork W2951718857 @default.
- W2767990578 hasRelatedWork W2998931802 @default.
- W2767990578 hasRelatedWork W3046787930 @default.
- W2767990578 hasRelatedWork W346865265 @default.
- W2767990578 hasRelatedWork W2126026781 @default.
- W2767990578 hasRelatedWork W2184644653 @default.
- W2767990578 isParatext "false" @default.
- W2767990578 isRetracted "false" @default.
- W2767990578 magId "2767990578" @default.
- W2767990578 workType "dissertation" @default.