Matches in SemOpenAlex for { <https://semopenalex.org/work/W1512974432> ?p ?o ?g. }
- W1512974432 abstract "Video camera systems are becoming popular in home environments, and they are often used in our daily lives to record family growth, small home parties, and so on. In home environments, however, the video content is greatly restricted by the fact that there is no production staff, such as a cameraman, editor, or switcher, as there is at broadcasting or television stations. When we watch a broadcast or television video, the camera work helps us not to lose interest in the content and to understand it easily, owing to the panning and zooming of the camera. This means that the camera work is strongly associated with the events on video, and the most appropriate camera work is chosen according to the events. Through camera work combined with event recognition, more interesting and intelligible video content can be produced (Ariki et al., 2006). Audio is a key index in digital videos that can provide useful information for video retrieval. In (Sundaram et al., 2000), audio features are used for video scene segmentation; in (Aizawa, 2005) (Amin et al., 2004), they are used for video retrieval; and in (Asano et al., 2006), multiple microphones are used for the detection and separation of audio in meeting recordings. In (Rui et al., 2004), an automated system is described for capturing and broadcasting lectures to an online audience, where a two-channel microphone is used for locating talking audience members in a lecture room. Many approaches are also possible for content production systems, such as generating highlights, summaries, and so on (Ozeke et al., 2005) (Hua et al., 2004) (Adams et al., 2005) (Wu, 2004) for home video content. In addition, some studies have focused on facial direction and facial expression for viewer behavior analysis.
(Yamamoto et al., 2006) proposed a system for automatically estimating the time intervals during which TV viewers have a positive interest in what they are watching, based on temporal patterns in facial changes using a Hidden Markov Model. In this chapter, we study home video editing based on audio and face emotion. In home environments, since it may be difficult for one person to record video continuously (especially at small home parties of just two people), the video content must be recorded automatically, without a cameraman. However, this may result in a large volume of video content. It therefore requires digital camera work that performs virtual panning and zooming by clipping frames from high-resolution images and controlling the frame size and position (Ariki et al., 2006). Source: Digital Video, Book edited by: Floriano De Rango, ISBN 978-953-7619-70-1, pp. 500, February 2010, INTECH, Croatia, downloaded from SCIYO.COM" @default.
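The "digital camera work" described in the abstract, virtual panning and zooming by clipping a moving sub-window from a high-resolution frame, can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the frame dimensions, window size, and linear pan path are assumptions chosen for the example.

```python
# Sketch of virtual panning: clip a moving sub-window from a high-resolution
# frame so a fixed camera can emulate a cameraman's pan. All parameters here
# (1080p source, 640x360 output window, linear pan path) are illustrative.
import numpy as np

def clip_frame(frame, cx, cy, out_w, out_h):
    """Extract an out_h x out_w window centred on (cx, cy), clamped to the frame."""
    h, w = frame.shape[:2]
    x0 = min(max(cx - out_w // 2, 0), w - out_w)
    y0 = min(max(cy - out_h // 2, 0), h - out_h)
    return frame[y0:y0 + out_h, x0:x0 + out_w]

def virtual_pan(frame, start, end, steps, out_w=640, out_h=360):
    """Yield a sequence of clipped frames panning linearly from start to end."""
    (sx, sy), (ex, ey) = start, end
    for t in np.linspace(0.0, 1.0, steps):
        cx = int(round(sx + t * (ex - sx)))
        cy = int(round(sy + t * (ey - sy)))
        yield clip_frame(frame, cx, cy, out_w, out_h)

if __name__ == "__main__":
    hi_res = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in 1080p frame
    clips = list(virtual_pan(hi_res, start=(400, 540), end=(1500, 540), steps=5))
    print(len(clips), clips[0].shape)
```

In a full system, the pan path would be driven by the detected events (e.g. who is speaking, estimated from audio direction or face emotion) rather than fixed endpoints, and each clipped window would be resampled to the output resolution.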
- W1512974432 created "2016-06-24" @default.
- W1512974432 creator A5009283470 @default.
- W1512974432 creator A5040766342 @default.
- W1512974432 creator A5060353005 @default.
- W1512974432 date "2010-02-01" @default.
- W1512974432 modified "2023-09-26" @default.
- W1512974432 title "Video Editing Based on Situation Awareness from Voice Information and Face Emotion" @default.
- W1512974432 cites W127001119 @default.
- W1512974432 cites W1491523767 @default.
- W1512974432 cites W2005419509 @default.
- W1512974432 cites W2032730655 @default.
- W1512974432 cites W2101357796 @default.
- W1512974432 cites W2118680786 @default.
- W1512974432 cites W2120194878 @default.
- W1512974432 cites W2120954940 @default.
- W1512974432 cites W2126374091 @default.
- W1512974432 cites W2143092130 @default.
- W1512974432 cites W2145073242 @default.
- W1512974432 cites W2147547466 @default.
- W1512974432 cites W2163904290 @default.
- W1512974432 cites W2164598857 @default.
- W1512974432 cites W2165067398 @default.
- W1512974432 cites W2180187800 @default.
- W1512974432 doi "https://doi.org/10.5772/8040" @default.
- W1512974432 hasPublicationYear "2010" @default.
- W1512974432 type Work @default.
- W1512974432 sameAs 1512974432 @default.
- W1512974432 citedByCount "2" @default.
- W1512974432 countsByYear W15129744322012 @default.
- W1512974432 countsByYear W15129744322015 @default.
- W1512974432 crossrefType "book-chapter" @default.
- W1512974432 hasAuthorship W1512974432A5009283470 @default.
- W1512974432 hasAuthorship W1512974432A5040766342 @default.
- W1512974432 hasAuthorship W1512974432A5060353005 @default.
- W1512974432 hasBestOaLocation W15129744321 @default.
- W1512974432 hasConcept C108803254 @default.
- W1512974432 hasConcept C108944566 @default.
- W1512974432 hasConcept C110157686 @default.
- W1512974432 hasConcept C121332964 @default.
- W1512974432 hasConcept C124913957 @default.
- W1512974432 hasConcept C127413603 @default.
- W1512974432 hasConcept C137402728 @default.
- W1512974432 hasConcept C151211776 @default.
- W1512974432 hasConcept C15336307 @default.
- W1512974432 hasConcept C166142869 @default.
- W1512974432 hasConcept C178790620 @default.
- W1512974432 hasConcept C185592680 @default.
- W1512974432 hasConcept C26517878 @default.
- W1512974432 hasConcept C2778263558 @default.
- W1512974432 hasConcept C2778344882 @default.
- W1512974432 hasConcept C2779662365 @default.
- W1512974432 hasConcept C2780310081 @default.
- W1512974432 hasConcept C31972630 @default.
- W1512974432 hasConcept C38652104 @default.
- W1512974432 hasConcept C41008148 @default.
- W1512974432 hasConcept C49774154 @default.
- W1512974432 hasConcept C62520636 @default.
- W1512974432 hasConcept C65483669 @default.
- W1512974432 hasConcept C68115822 @default.
- W1512974432 hasConcept C76155785 @default.
- W1512974432 hasConcept C78762247 @default.
- W1512974432 hasConcept C87829876 @default.
- W1512974432 hasConceptScore W1512974432C108803254 @default.
- W1512974432 hasConceptScore W1512974432C108944566 @default.
- W1512974432 hasConceptScore W1512974432C110157686 @default.
- W1512974432 hasConceptScore W1512974432C121332964 @default.
- W1512974432 hasConceptScore W1512974432C124913957 @default.
- W1512974432 hasConceptScore W1512974432C127413603 @default.
- W1512974432 hasConceptScore W1512974432C137402728 @default.
- W1512974432 hasConceptScore W1512974432C151211776 @default.
- W1512974432 hasConceptScore W1512974432C15336307 @default.
- W1512974432 hasConceptScore W1512974432C166142869 @default.
- W1512974432 hasConceptScore W1512974432C178790620 @default.
- W1512974432 hasConceptScore W1512974432C185592680 @default.
- W1512974432 hasConceptScore W1512974432C26517878 @default.
- W1512974432 hasConceptScore W1512974432C2778263558 @default.
- W1512974432 hasConceptScore W1512974432C2778344882 @default.
- W1512974432 hasConceptScore W1512974432C2779662365 @default.
- W1512974432 hasConceptScore W1512974432C2780310081 @default.
- W1512974432 hasConceptScore W1512974432C31972630 @default.
- W1512974432 hasConceptScore W1512974432C38652104 @default.
- W1512974432 hasConceptScore W1512974432C41008148 @default.
- W1512974432 hasConceptScore W1512974432C49774154 @default.
- W1512974432 hasConceptScore W1512974432C62520636 @default.
- W1512974432 hasConceptScore W1512974432C65483669 @default.
- W1512974432 hasConceptScore W1512974432C68115822 @default.
- W1512974432 hasConceptScore W1512974432C76155785 @default.
- W1512974432 hasConceptScore W1512974432C78762247 @default.
- W1512974432 hasConceptScore W1512974432C87829876 @default.
- W1512974432 hasLocation W15129744321 @default.
- W1512974432 hasLocation W15129744322 @default.
- W1512974432 hasOpenAccess W1512974432 @default.
- W1512974432 hasPrimaryLocation W15129744321 @default.
- W1512974432 hasRelatedWork W1512974432 @default.
- W1512974432 hasRelatedWork W1908593991 @default.
- W1512974432 hasRelatedWork W2043771096 @default.
- W1512974432 hasRelatedWork W2114639874 @default.
- W1512974432 hasRelatedWork W2318748556 @default.
- W1512974432 hasRelatedWork W2371107363 @default.