Matches in SemOpenAlex for { <https://semopenalex.org/work/W2803485280> ?p ?o ?g. }
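The pattern above is a quad pattern (subject, predicate, object, graph). A minimal sketch of how one might retrieve these matches with Python and SPARQLWrapper is shown below, assuming SemOpenAlex exposes a public SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption, not taken from this listing); the matched triples follow the sketch.

    # Minimal sketch: fetch all predicate/object/graph matches for this work.
    # Assumes a public SPARQL endpoint at https://semopenalex.org/sparql.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL

    QUERY = """
    SELECT ?p ?o ?g WHERE {
      GRAPH ?g { <https://semopenalex.org/work/W2803485280> ?p ?o . }
    }
    """

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    # Print each matched predicate/object pair with the graph it came from.
    for row in results["results"]["bindings"]:
        print(row["p"]["value"], row["o"]["value"], row.get("g", {}).get("value", ""))
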
- W2803485280 endingPage "25" @default.
- W2803485280 startingPage "15" @default.
- W2803485280 abstract "Estimating the salient areas of visual stimuli that are likely to attract viewers’ attention is a challenging task because of the high complexity of the cognitive processes involved. Many researchers have worked in this field and achieved substantial progress. Application areas ranging from computer vision and computer graphics to multimedia processing can benefit from saliency detection, since the detected saliency reflects the visual importance of different regions of the stimuli. Because 360 degree images and videos record the entire scene of the 3D world, their resolutions are usually very high. However, when watching 360 degree stimuli, observers see only the part of the scene inside the viewport, which is presented to their eyes through a Head Mounted Display (HMD). Streaming the whole video or rendering the whole scene may therefore waste resources. If the current field of view can be predicted, streaming and rendering effort can be concentrated on that region; furthermore, if salient areas within the scene can be predicted, finer processing can be applied to the visually important areas. The prediction of salient regions in traditional images and videos has been extensively studied, but conventional saliency prediction methods are not fully adequate for 360 degree content because it has some unique characteristics, and related work in this area is limited. In this paper, we study the problem of predicting the head movement, head–eye motion, and scanpaths of viewers watching 360 degree images in commodity HMDs. Three types of data are analyzed. The first is head movement data, which can be regarded as the movement of the viewport. The second is head–eye motion data, which combines the motion of the head with the movement of the eyes within the viewport. The third is the scanpath data of observers over the entire panorama, which records position as well as time information. Our model is designed to predict saliency maps for the first two and scanpaths for the third. Experimental results demonstrate the effectiveness of our model." @default.
- W2803485280 created "2018-06-01" @default.
- W2803485280 creator A5034718093 @default.
- W2803485280 creator A5043405654 @default.
- W2803485280 creator A5064168853 @default.
- W2803485280 date "2018-11-01" @default.
- W2803485280 modified "2023-10-10" @default.
- W2803485280 title "The prediction of head and eye movement for 360 degree images" @default.
- W2803485280 cites W1510835000 @default.
- W2803485280 cites W1537399482 @default.
- W2803485280 cites W1566135517 @default.
- W2803485280 cites W1983433674 @default.
- W2803485280 cites W1993721485 @default.
- W2803485280 cites W2006902234 @default.
- W2803485280 cites W2025520157 @default.
- W2803485280 cites W2028642107 @default.
- W2803485280 cites W2030031014 @default.
- W2803485280 cites W2031399404 @default.
- W2803485280 cites W2047264313 @default.
- W2803485280 cites W2055111849 @default.
- W2803485280 cites W2056380823 @default.
- W2803485280 cites W2057766517 @default.
- W2803485280 cites W2060875441 @default.
- W2803485280 cites W2066405128 @default.
- W2803485280 cites W2090405960 @default.
- W2803485280 cites W2096544166 @default.
- W2803485280 cites W2103189262 @default.
- W2803485280 cites W2103598646 @default.
- W2803485280 cites W2106848651 @default.
- W2803485280 cites W2120221190 @default.
- W2803485280 cites W2123118847 @default.
- W2803485280 cites W2135957164 @default.
- W2803485280 cites W2137053731 @default.
- W2803485280 cites W2148705846 @default.
- W2803485280 cites W2150593711 @default.
- W2803485280 cites W2158546915 @default.
- W2803485280 cites W2162216530 @default.
- W2803485280 cites W2168356304 @default.
- W2803485280 cites W2169949291 @default.
- W2803485280 cites W2529231049 @default.
- W2803485280 cites W2533370895 @default.
- W2803485280 cites W2576968754 @default.
- W2803485280 cites W2622036627 @default.
- W2803485280 cites W2624991012 @default.
- W2803485280 cites W2729179442 @default.
- W2803485280 cites W2731833015 @default.
- W2803485280 cites W2733904738 @default.
- W2803485280 cites W2743390484 @default.
- W2803485280 cites W2765448945 @default.
- W2803485280 cites W2767136499 @default.
- W2803485280 cites W2777280533 @default.
- W2803485280 cites W2791826171 @default.
- W2803485280 cites W2794680924 @default.
- W2803485280 cites W2794956947 @default.
- W2803485280 cites W2795350989 @default.
- W2803485280 cites W2796010929 @default.
- W2803485280 cites W2801570861 @default.
- W2803485280 cites W2810231665 @default.
- W2803485280 cites W2886299877 @default.
- W2803485280 cites W2962716998 @default.
- W2803485280 cites W2963339238 @default.
- W2803485280 cites W4239147634 @default.
- W2803485280 cites W4243022719 @default.
- W2803485280 doi "https://doi.org/10.1016/j.image.2018.05.010" @default.
- W2803485280 hasPublicationYear "2018" @default.
- W2803485280 type Work @default.
- W2803485280 sameAs 2803485280 @default.
- W2803485280 citedByCount "93" @default.
- W2803485280 countsByYear W28034852802018 @default.
- W2803485280 countsByYear W28034852802019 @default.
- W2803485280 countsByYear W28034852802020 @default.
- W2803485280 countsByYear W28034852802021 @default.
- W2803485280 countsByYear W28034852802022 @default.
- W2803485280 countsByYear W28034852802023 @default.
- W2803485280 crossrefType "journal-article" @default.
- W2803485280 hasAuthorship W2803485280A5034718093 @default.
- W2803485280 hasAuthorship W2803485280A5043405654 @default.
- W2803485280 hasAuthorship W2803485280A5064168853 @default.
- W2803485280 hasConcept C121332964 @default.
- W2803485280 hasConcept C121684516 @default.
- W2803485280 hasConcept C153050134 @default.
- W2803485280 hasConcept C154945302 @default.
- W2803485280 hasConcept C15744967 @default.
- W2803485280 hasConcept C169760540 @default.
- W2803485280 hasConcept C205711294 @default.
- W2803485280 hasConcept C24890656 @default.
- W2803485280 hasConcept C2775997480 @default.
- W2803485280 hasConcept C2776058522 @default.
- W2803485280 hasConcept C2780719617 @default.
- W2803485280 hasConcept C31972630 @default.
- W2803485280 hasConcept C41008148 @default.
- W2803485280 hasConceptScore W2803485280C121332964 @default.
- W2803485280 hasConceptScore W2803485280C121684516 @default.
- W2803485280 hasConceptScore W2803485280C153050134 @default.
- W2803485280 hasConceptScore W2803485280C154945302 @default.
- W2803485280 hasConceptScore W2803485280C15744967 @default.
- W2803485280 hasConceptScore W2803485280C169760540 @default.
- W2803485280 hasConceptScore W2803485280C205711294 @default.