Matches in SemOpenAlex for the quad pattern { <https://semopenalex.org/work/W3040982654> ?p ?o ?g . }
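The pattern above selects every predicate, object, and named graph for this work. A minimal sketch of how such a query could be built for the SemOpenAlex SPARQL endpoint (the endpoint URL `https://semopenalex.org/sparql` and the `GRAPH` wrapping are assumptions, not stated in the listing):

```python
def build_work_query(work_id: str) -> str:
    """Build a SPARQL query returning all ?p ?o ?g quads for a
    SemOpenAlex work, mirroring the pattern in the header above.
    Wrapping the triple in GRAPH ?g is one common way to expose
    the ?g variable; the listing itself uses shorthand quad syntax."""
    return (
        "SELECT ?p ?o ?g WHERE { GRAPH ?g { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "} }"
    )

# Hypothetical endpoint -- not confirmed by the listing.
ENDPOINT = "https://semopenalex.org/sparql"

query = build_work_query("W3040982654")
print(query)
```

The query string could then be sent to the endpoint with any SPARQL client (e.g. an HTTP POST with `query` as a form parameter), which is left out here to keep the sketch self-contained.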
- W3040982654 abstract "Over the past two decades, traditional block-based video coding has made remarkable progress and spawned a series of well-known standards such as MPEG-4, H.264/AVC and H.265/HEVC. On the other hand, deep neural networks (DNNs) have shown their powerful capacity for visual content understanding, feature extraction and compact representation. Some previous works have explored the learnt video coding algorithms in an end-to-end manner, which show the great potential compared with traditional methods. In this paper, we propose an end-to-end deep neural video coding framework (NVC), which uses variational autoencoders (VAEs) with joint spatial and temporal prior aggregation (PA) to exploit the correlations in intra-frame pixels, inter-frame motions and inter-frame compensation residuals, respectively. Novel features of NVC include: 1) To estimate and compensate motion over a large range of magnitudes, we propose an unsupervised multiscale motion compensation network (MS-MCN) together with a pyramid decoder in the VAE for coding motion features that generates multiscale flow fields, 2) we design a novel adaptive spatiotemporal context model for efficient entropy coding for motion information, 3) we adopt nonlocal attention modules (NLAM) at the bottlenecks of the VAEs for implicit adaptive feature extraction and activation, leveraging its high transformation capacity and unequal weighting with joint global and local information, and 4) we introduce multi-module optimization and a multi-frame training strategy to minimize the temporal error propagation among P-frames. NVC is evaluated for the low-delay causal settings and compared with H.265/HEVC, H.264/AVC and the other learnt video compression methods following the common test conditions, demonstrating consistent gains across all popular test sequences for both PSNR and MS-SSIM distortion metrics." @default.
- W3040982654 created "2020-07-16" @default.
- W3040982654 creator A5019706829 @default.
- W3040982654 creator A5038836690 @default.
- W3040982654 creator A5047516073 @default.
- W3040982654 creator A5048088572 @default.
- W3040982654 creator A5058572381 @default.
- W3040982654 creator A5073914656 @default.
- W3040982654 creator A5089957493 @default.
- W3040982654 date "2020-07-09" @default.
- W3040982654 modified "2023-09-23" @default.
- W3040982654 title "Neural Video Coding using Multiscale Motion Compensation and Spatiotemporal Context Model" @default.
- W3040982654 cites W1580389772 @default.
- W3040982654 cites W1861492603 @default.
- W3040982654 cites W2042829735 @default.
- W3040982654 cites W2099111195 @default.
- W3040982654 cites W2140199336 @default.
- W3040982654 cites W2146395539 @default.
- W3040982654 cites W2194775991 @default.
- W3040982654 cites W2560474170 @default.
- W3040982654 cites W2597747080 @default.
- W3040982654 cites W2604392022 @default.
- W3040982654 cites W2789625992 @default.
- W3040982654 cites W2892278106 @default.
- W3040982654 cites W2892806750 @default.
- W3040982654 cites W2893920456 @default.
- W3040982654 cites W2912268344 @default.
- W3040982654 cites W2935381027 @default.
- W3040982654 cites W2953318193 @default.
- W3040982654 cites W2962676454 @default.
- W3040982654 cites W2962750131 @default.
- W3040982654 cites W2962790638 @default.
- W3040982654 cites W2962891349 @default.
- W3040982654 cites W2963073821 @default.
- W3040982654 cites W2963149687 @default.
- W3040982654 cites W2963189365 @default.
- W3040982654 cites W2963449488 @default.
- W3040982654 cites W2963711615 @default.
- W3040982654 cites W2963782415 @default.
- W3040982654 cites W2964098744 @default.
- W3040982654 cites W2979873418 @default.
- W3040982654 cites W2981613960 @default.
- W3040982654 cites W2982853315 @default.
- W3040982654 cites W2984671549 @default.
- W3040982654 cites W2992051623 @default.
- W3040982654 cites W2997572967 @default.
- W3040982654 cites W2998444797 @default.
- W3040982654 cites W3001966324 @default.
- W3040982654 cites W3010647498 @default.
- W3040982654 cites W3016163932 @default.
- W3040982654 cites W3020741905 @default.
- W3040982654 cites W3034469748 @default.
- W3040982654 cites W3034802763 @default.
- W3040982654 cites W3102015846 @default.
- W3040982654 cites W3124452217 @default.
- W3040982654 cites W603908379 @default.
- W3040982654 doi "https://doi.org/10.48550/arxiv.2007.04574" @default.
- W3040982654 hasPublicationYear "2020" @default.
- W3040982654 type Work @default.
- W3040982654 sameAs 3040982654 @default.
- W3040982654 citedByCount "0" @default.
- W3040982654 crossrefType "posted-content" @default.
- W3040982654 hasAuthorship W3040982654A5019706829 @default.
- W3040982654 hasAuthorship W3040982654A5038836690 @default.
- W3040982654 hasAuthorship W3040982654A5047516073 @default.
- W3040982654 hasAuthorship W3040982654A5048088572 @default.
- W3040982654 hasAuthorship W3040982654A5058572381 @default.
- W3040982654 hasAuthorship W3040982654A5073914656 @default.
- W3040982654 hasAuthorship W3040982654A5089957493 @default.
- W3040982654 hasBestOaLocation W30409826541 @default.
- W3040982654 hasConcept C10161872 @default.
- W3040982654 hasConcept C105795698 @default.
- W3040982654 hasConcept C126838900 @default.
- W3040982654 hasConcept C128840427 @default.
- W3040982654 hasConcept C153180895 @default.
- W3040982654 hasConcept C154945302 @default.
- W3040982654 hasConcept C174493125 @default.
- W3040982654 hasConcept C179518139 @default.
- W3040982654 hasConcept C183115368 @default.
- W3040982654 hasConcept C31972630 @default.
- W3040982654 hasConcept C33923547 @default.
- W3040982654 hasConcept C41008148 @default.
- W3040982654 hasConcept C71924100 @default.
- W3040982654 hasConceptScore W3040982654C10161872 @default.
- W3040982654 hasConceptScore W3040982654C105795698 @default.
- W3040982654 hasConceptScore W3040982654C126838900 @default.
- W3040982654 hasConceptScore W3040982654C128840427 @default.
- W3040982654 hasConceptScore W3040982654C153180895 @default.
- W3040982654 hasConceptScore W3040982654C154945302 @default.
- W3040982654 hasConceptScore W3040982654C174493125 @default.
- W3040982654 hasConceptScore W3040982654C179518139 @default.
- W3040982654 hasConceptScore W3040982654C183115368 @default.
- W3040982654 hasConceptScore W3040982654C31972630 @default.
- W3040982654 hasConceptScore W3040982654C33923547 @default.
- W3040982654 hasConceptScore W3040982654C41008148 @default.
- W3040982654 hasConceptScore W3040982654C71924100 @default.
- W3040982654 hasLocation W30409826541 @default.
- W3040982654 hasOpenAccess W3040982654 @default.
- W3040982654 hasPrimaryLocation W30409826541 @default.
- W3040982654 hasRelatedWork W1508572252 @default.