Matches in SemOpenAlex for { <https://semopenalex.org/work/W2894865236> ?p ?o ?g. }
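The quad pattern above can be reproduced against SemOpenAlex's public SPARQL endpoint. The sketch below is a minimal example using the SPARQLWrapper package; the endpoint URL (https://semopenalex.org/sparql) and the JSON result handling are assumptions about the service, not part of the listing itself. The named-graph variable ?g is dropped for portability.

```python
# Minimal sketch: fetch all predicate/object pairs for work W2894865236
# from the SemOpenAlex SPARQL endpoint (endpoint URL is an assumption).
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery("""
    SELECT ?p ?o WHERE {
        <https://semopenalex.org/work/W2894865236> ?p ?o .
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

Each printed pair corresponds to one line of the listing below.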
- W2894865236 endingPage "369" @default.
- W2894865236 startingPage "351" @default.
- W2894865236 abstract "We present a deep-learning-based volumetric approach for performance capture using a passive and highly sparse multi-view capture system. State-of-the-art performance capture systems require either pre-scanned actors, a large number of cameras, or active sensors. In this work, we focus on the task of template-free, per-frame 3D surface reconstruction from as few as three RGB sensors, for which conventional visual hull or multi-view stereo methods fail to generate plausible results. We introduce a novel multi-view Convolutional Neural Network (CNN) that maps 2D images to a 3D volumetric field, and we use this field to encode the probability distribution of surface points of the captured subject. By querying the resulting field, we can instantiate the clothed human body at arbitrary resolutions. Our approach scales to different numbers of input images, yielding increased reconstruction quality as more views are used. Although trained only on synthetic data, our network generalizes to real footage from body performance capture. Our method is suitable for high-quality, low-cost full-body volumetric capture solutions, which are gaining popularity for VR and AR content creation. Experimental results demonstrate that our method is significantly more robust and accurate than existing techniques when only very sparse views are available." @default. (A sketch of the field-querying step appears after this listing.)
- W2894865236 created "2018-10-12" @default.
- W2894865236 creator A5005369320 @default.
- W2894865236 creator A5006442099 @default.
- W2894865236 creator A5007091110 @default.
- W2894865236 creator A5019560977 @default.
- W2894865236 creator A5025046851 @default.
- W2894865236 creator A5065092032 @default.
- W2894865236 creator A5076090670 @default.
- W2894865236 creator A5076768218 @default.
- W2894865236 creator A5081407963 @default.
- W2894865236 date "2018-01-01" @default.
- W2894865236 modified "2023-09-28" @default.
- W2894865236 title "Deep Volumetric Video From Very Sparse Multi-view Performance Capture" @default.
- W2894865236 cites W1482274752 @default.
- W2894865236 cites W1496316025 @default.
- W2894865236 cites W1517656524 @default.
- W2894865236 cites W1541388462 @default.
- W2894865236 cites W158943247 @default.
- W2894865236 cites W1593593811 @default.
- W2894865236 cites W1629010235 @default.
- W2894865236 cites W1644641054 @default.
- W2894865236 cites W1967554269 @default.
- W2894865236 cites W1989191365 @default.
- W2894865236 cites W1992475172 @default.
- W2894865236 cites W2005984284 @default.
- W2894865236 cites W2006262794 @default.
- W2894865236 cites W2040436296 @default.
- W2894865236 cites W2044618760 @default.
- W2894865236 cites W2058676365 @default.
- W2894865236 cites W2075402943 @default.
- W2894865236 cites W2081927584 @default.
- W2894865236 cites W2082145490 @default.
- W2894865236 cites W2089365261 @default.
- W2894865236 cites W2093768878 @default.
- W2894865236 cites W2109752307 @default.
- W2894865236 cites W2110434318 @default.
- W2894865236 cites W2113507517 @default.
- W2894865236 cites W2117888987 @default.
- W2894865236 cites W2119781527 @default.
- W2894865236 cites W2121253532 @default.
- W2894865236 cites W2122578066 @default.
- W2894865236 cites W2122633688 @default.
- W2894865236 cites W2125710345 @default.
- W2894865236 cites W2129404737 @default.
- W2894865236 cites W2131536073 @default.
- W2894865236 cites W2134484928 @default.
- W2894865236 cites W2142540472 @default.
- W2894865236 cites W2146506577 @default.
- W2894865236 cites W2148151066 @default.
- W2894865236 cites W2161778590 @default.
- W2894865236 cites W2167085613 @default.
- W2894865236 cites W2195191570 @default.
- W2894865236 cites W2207600644 @default.
- W2894865236 cites W2215643317 @default.
- W2894865236 cites W2339787603 @default.
- W2894865236 cites W2342277278 @default.
- W2894865236 cites W2348664362 @default.
- W2894865236 cites W2461005315 @default.
- W2894865236 cites W2483862638 @default.
- W2894865236 cites W2495603374 @default.
- W2894865236 cites W2518246072 @default.
- W2894865236 cites W2532511219 @default.
- W2894865236 cites W2544612547 @default.
- W2894865236 cites W2565662353 @default.
- W2894865236 cites W2573098616 @default.
- W2894865236 cites W2576289912 @default.
- W2894865236 cites W2579126418 @default.
- W2894865236 cites W2596210417 @default.
- W2894865236 cites W2598591334 @default.
- W2894865236 cites W2599802623 @default.
- W2894865236 cites W2604493845 @default.
- W2894865236 cites W2611820369 @default.
- W2894865236 cites W2623517302 @default.
- W2894865236 cites W2737762407 @default.
- W2894865236 cites W2738835886 @default.
- W2894865236 cites W2749324691 @default.
- W2894865236 cites W2753872511 @default.
- W2894865236 cites W2802758546 @default.
- W2894865236 cites W2962731536 @default.
- W2894865236 cites W2963600949 @default.
- W2894865236 cites W2963739349 @default.
- W2894865236 cites W2963995996 @default.
- W2894865236 cites W3102132650 @default.
- W2894865236 cites W4250595268 @default.
- W2894865236 doi "https://doi.org/10.1007/978-3-030-01270-0_21" @default.
- W2894865236 hasPublicationYear "2018" @default.
- W2894865236 type Work @default.
- W2894865236 sameAs 2894865236 @default.
- W2894865236 citedByCount "88" @default.
- W2894865236 countsByYear W28948652362018 @default.
- W2894865236 countsByYear W28948652362019 @default.
- W2894865236 countsByYear W28948652362020 @default.
- W2894865236 countsByYear W28948652362021 @default.
- W2894865236 countsByYear W28948652362022 @default.
- W2894865236 countsByYear W28948652362023 @default.
- W2894865236 crossrefType "book-chapter" @default.
- W2894865236 hasAuthorship W2894865236A5005369320 @default.
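The abstract stored above describes mapping multi-view RGB images to a volumetric field that encodes the probability that a 3D point lies on the subject's surface, then querying that field at arbitrary resolution to instantiate the clothed body. The sketch below illustrates only the generic query-and-extract step (dense grid evaluation followed by marching cubes); `field_net`, the chunk size, and the 0.5 iso-level are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of querying a learned volumetric occupancy field on a
# regular grid and extracting a surface mesh, in the spirit of the abstract.
# `field_net` (a trained multi-view CNN) and the 0.5 threshold are
# illustrative assumptions, not the authors' code.
import numpy as np
import torch
from skimage.measure import marching_cubes

@torch.no_grad()
def extract_surface(field_net, images, resolution=128, threshold=0.5):
    # Build a regular grid of 3D query points inside a unit bounding box.
    axis = np.linspace(-0.5, 0.5, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    points = torch.from_numpy(grid.reshape(-1, 3))

    # Query the field in chunks to bound memory; each query is assumed to
    # return, per point, the probability of lying on the captured surface.
    probs = torch.cat([
        field_net(images, chunk) for chunk in points.split(65536)
    ]).reshape(resolution, resolution, resolution).cpu().numpy()

    # Marching cubes turns the probability volume into a triangle mesh.
    verts, faces, normals, _ = marching_cubes(probs, level=threshold)
    return verts, faces, normals
```

Because the field is queried rather than stored at a fixed voxelization, raising `resolution` trades compute for finer surface detail, which is what allows the body to be instantiated "at arbitrary resolutions."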