Matches in SemOpenAlex for { <https://semopenalex.org/work/W1827898602> ?p ?o ?g. }
Showing items 1 to 86 of 86, with 100 items per page.
- W1827898602 abstract "A large amount of the information in conversations comes from non-verbal cues such as facial expressions and body gestures. These cues are lost when we do not communicate face-to-face. But face-to-face communication does not have to happen in person: with video communication we can at least deliver information about the facial mimic and some gestures. This thesis is about video communication over distances, communication that can be made available even over networks with low capacity, since the bitrate needed for this kind of video communication is low. A visual image needs high quality and resolution to be semantically meaningful for communication, and delivering such video over networks requires that the video is compressed. The standard way to compress video images, used by H.264 and MPEG-4, is to divide each image into blocks and represent each block with mathematical waveforms, usually frequency features. These waveforms are quite good at representing any kind of video, since they do not resemble anything in particular; they are just frequency features. But since they are entirely generic, they cannot compress video enough to enable use over networks with limited capacity, such as GSM and GPRS. Another issue is that such codecs have high complexity because redundancy is removed through positional shifts of the blocks. High complexity and bitrate mean that a device has to consume a large amount of energy for encoding, decoding, and transmission of such video, and energy is a very important factor for battery-driven devices. These drawbacks of standard video coding mean that it is not possible to deliver video anywhere and anytime when it is compressed with such codecs. To resolve these issues we have developed a completely new type of video coding: instead of using mathematical waveforms for representation, we use faces to represent faces. This makes the compression much more efficient than waveform-based coding, even though the face models are person-dependent. 
By building a model of the changes in the face, the facial mimic, we can use this model to encode the images. The model consists of representative facial images, and we extract it with a powerful mathematical tool: principal component analysis (PCA). This coding has very low complexity, since encoding and decoding consist only of multiplication operations. The faces are treated as single encoding entities and all operations are performed on full images; no block processing is needed. These features mean that PCA coding can deliver high-quality video at very low bitrates, with low complexity for both encoding and decoding. With asymmetrical PCA (aPCA) it is possible to use only the semantically important areas for encoding while decoding full frames, or a different part of the frames. We show that a codec based on PCA can compress facial video to a bitrate below 5 kbps and still provide high quality; this bitrate can be delivered on a GSM network. We also show the possibility of extending PCA coding to the encoding of high-definition video." @default.
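The abstract's core idea — build a personal eigenspace of facial images offline, then encode each new frame as a projection (a single matrix multiplication) and decode by reconstruction — can be sketched as follows. This is an illustrative NumPy sketch, not the thesis's actual implementation; the frame size, component count, and function names are assumptions chosen for the demo.

```python
import numpy as np

def build_model(training_frames, n_components):
    """Derive the PCA model (mean frame + principal components) offline."""
    X = training_frames.reshape(len(training_frames), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal components in Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # components: (k, pixels)

def encode(frame, mean, components):
    """Encoding = one matrix-vector multiplication (projection onto the model)."""
    return components @ (frame.ravel() - mean)   # k coefficients per frame

def decode(coeffs, mean, components, shape):
    """Decoding = one multiplication plus adding the mean frame back."""
    return (components.T @ coeffs + mean).reshape(shape)

# Toy demo with synthetic 16x16 "frames": only k = 8 coefficients are
# transmitted per frame instead of 256 pixel values, which is where the
# bitrate saving comes from.
rng = np.random.default_rng(0)
frames = rng.random((50, 16, 16))
mean, comps = build_model(frames, n_components=8)
coeffs = encode(frames[0], mean, comps)
recon = decode(coeffs, mean, comps, (16, 16))
print(coeffs.shape)   # (8,)
```

Because both `encode` and `decode` reduce to a single matrix multiplication on the full frame, there is no block processing and no motion search, which is the source of the low complexity the abstract claims.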
- W1827898602 created "2016-06-24" @default.
- W1827898602 creator A5047886846 @default.
- W1827898602 creator A5067512153 @default.
- W1827898602 creator A5089574582 @default.
- W1827898602 date "2006-01-01" @default.
- W1827898602 modified "2023-09-22" @default.
- W1827898602 title "Ultra low bit-rate video communication : video coding = pattern recognition" @default.
- W1827898602 cites W1514573824 @default.
- W1827898602 cites W1554981240 @default.
- W1827898602 cites W1586638664 @default.
- W1827898602 cites W2025881896 @default.
- W1827898602 cites W2046911213 @default.
- W1827898602 cites W2082229127 @default.
- W1827898602 cites W2099732180 @default.
- W1827898602 cites W2113841383 @default.
- W1827898602 cites W2124243699 @default.
- W1827898602 cites W2138494898 @default.
- W1827898602 cites W2147885303 @default.
- W1827898602 cites W2156516654 @default.
- W1827898602 cites W2164598857 @default.
- W1827898602 cites W2295661697 @default.
- W1827898602 hasPublicationYear "2006" @default.
- W1827898602 type Work @default.
- W1827898602 sameAs 1827898602 @default.
- W1827898602 citedByCount "3" @default.
- W1827898602 countsByYear W18278986022013 @default.
- W1827898602 crossrefType "journal-article" @default.
- W1827898602 hasAuthorship W1827898602A5047886846 @default.
- W1827898602 hasAuthorship W1827898602A5067512153 @default.
- W1827898602 hasAuthorship W1827898602A5089574582 @default.
- W1827898602 hasConcept C105795698 @default.
- W1827898602 hasConcept C106030495 @default.
- W1827898602 hasConcept C154945302 @default.
- W1827898602 hasConcept C161765866 @default.
- W1827898602 hasConcept C179518139 @default.
- W1827898602 hasConcept C202474056 @default.
- W1827898602 hasConcept C23431618 @default.
- W1827898602 hasConcept C28490314 @default.
- W1827898602 hasConcept C31972630 @default.
- W1827898602 hasConcept C33923547 @default.
- W1827898602 hasConcept C41008148 @default.
- W1827898602 hasConcept C49774154 @default.
- W1827898602 hasConcept C65483669 @default.
- W1827898602 hasConcept C9390403 @default.
- W1827898602 hasConceptScore W1827898602C105795698 @default.
- W1827898602 hasConceptScore W1827898602C106030495 @default.
- W1827898602 hasConceptScore W1827898602C154945302 @default.
- W1827898602 hasConceptScore W1827898602C161765866 @default.
- W1827898602 hasConceptScore W1827898602C179518139 @default.
- W1827898602 hasConceptScore W1827898602C202474056 @default.
- W1827898602 hasConceptScore W1827898602C23431618 @default.
- W1827898602 hasConceptScore W1827898602C28490314 @default.
- W1827898602 hasConceptScore W1827898602C31972630 @default.
- W1827898602 hasConceptScore W1827898602C33923547 @default.
- W1827898602 hasConceptScore W1827898602C41008148 @default.
- W1827898602 hasConceptScore W1827898602C49774154 @default.
- W1827898602 hasConceptScore W1827898602C65483669 @default.
- W1827898602 hasConceptScore W1827898602C9390403 @default.
- W1827898602 hasLocation W18278986021 @default.
- W1827898602 hasOpenAccess W1827898602 @default.
- W1827898602 hasPrimaryLocation W18278986021 @default.
- W1827898602 hasRelatedWork W1506992296 @default.
- W1827898602 hasRelatedWork W1512887434 @default.
- W1827898602 hasRelatedWork W1566954690 @default.
- W1827898602 hasRelatedWork W1872133261 @default.
- W1827898602 hasRelatedWork W2035607988 @default.
- W1827898602 hasRelatedWork W2036303451 @default.
- W1827898602 hasRelatedWork W2039935167 @default.
- W1827898602 hasRelatedWork W2145603613 @default.
- W1827898602 hasRelatedWork W2182052066 @default.
- W1827898602 hasRelatedWork W2530643864 @default.
- W1827898602 hasRelatedWork W2604968823 @default.
- W1827898602 hasRelatedWork W2943447321 @default.
- W1827898602 hasRelatedWork W2951357900 @default.
- W1827898602 hasRelatedWork W3092937345 @default.
- W1827898602 hasRelatedWork W3103498076 @default.
- W1827898602 hasRelatedWork W366071148 @default.
- W1827898602 hasRelatedWork W2211415157 @default.
- W1827898602 hasRelatedWork W2323765352 @default.
- W1827898602 hasRelatedWork W2336687497 @default.
- W1827898602 hasRelatedWork W3017708410 @default.
- W1827898602 isParatext "false" @default.
- W1827898602 isRetracted "false" @default.
- W1827898602 magId "1827898602" @default.
- W1827898602 workType "article" @default.