Matches in SemOpenAlex for { <https://semopenalex.org/work/W2342006632> ?p ?o ?g. }
- W2342006632 endingPage "514" @default.
- W2342006632 startingPage "501" @default.
- W2342006632 abstract "Marker-less motion capture has seen great progress, but most state-of-the-art approaches fail to reliably track articulated human body motion with a very low number of cameras, let alone when applied in outdoor scenes with general background. In this paper, we propose a method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. The new algorithm combines the strengths of a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a unified pose optimization energy. The discriminative part-based pose detection method is implemented using Convolutional Networks (ConvNet) and estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials serve as the basis of a probabilistic extraction of pose constraints for tracking by using weighted sampling from a pose posterior that is guided by the model. In the final energy, we combine these constraints with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, since joint detection with a trained ConvNet is fast, and since our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables tracking of full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras. Our method is efficient and lends itself to implementation on parallel computing hardware, such as GPUs. We test our method extensively and show its advantages over related work on many indoor and outdoor data sets captured by ourselves, as well as data sets made available to the community by other research labs.
The availability of good evaluation data sets is paramount for scientific progress, and many existing test data sets focus on controlled indoor settings, do not feature much variety in the scenes, and often lack a large corpus of data with ground truth annotation. We therefore further contribute a new extensive test data set called MPI-MARCOnI for indoor and outdoor marker-less motion capture that features 12 scenes of varying complexity and varying camera count, and that provides ground truth reference data from different modalities, ranging from manual joint annotations to marker-based motion capture results. Our new method is tested on these data, and the data set will be made available to the community." @default.
- W2342006632 created "2016-06-24" @default.
- W2342006632 creator A5010680659 @default.
- W2342006632 creator A5019742155 @default.
- W2342006632 creator A5020328677 @default.
- W2342006632 creator A5037151839 @default.
- W2342006632 creator A5038832828 @default.
- W2342006632 creator A5051534545 @default.
- W2342006632 creator A5073861650 @default.
- W2342006632 creator A5080976060 @default.
- W2342006632 creator A5083141819 @default.
- W2342006632 date "2017-03-01" @default.
- W2342006632 modified "2023-10-12" @default.
- W2342006632 title "MARCOnI—ConvNet-Based MARker-Less Motion Capture in Outdoor and Indoor Scenes" @default.
- W2342006632 cites W1508437923 @default.
- W2342006632 cites W1551519658 @default.
- W2342006632 cites W1571716163 @default.
- W2342006632 cites W1952857803 @default.
- W2342006632 cites W1975961009 @default.
- W2342006632 cites W1994529670 @default.
- W2342006632 cites W2008009569 @default.
- W2342006632 cites W2014905483 @default.
- W2342006632 cites W2019660985 @default.
- W2342006632 cites W2020163092 @default.
- W2342006632 cites W2023633446 @default.
- W2342006632 cites W2026753976 @default.
- W2342006632 cites W2030536784 @default.
- W2342006632 cites W2032481801 @default.
- W2342006632 cites W2036545421 @default.
- W2342006632 cites W2045798786 @default.
- W2342006632 cites W2049381231 @default.
- W2342006632 cites W2071882725 @default.
- W2342006632 cites W2079846689 @default.
- W2342006632 cites W2080873731 @default.
- W2342006632 cites W2092146246 @default.
- W2342006632 cites W2097151019 @default.
- W2342006632 cites W2099333815 @default.
- W2342006632 cites W2100526149 @default.
- W2342006632 cites W2110645484 @default.
- W2342006632 cites W2112324691 @default.
- W2342006632 cites W2112796928 @default.
- W2342006632 cites W2113325037 @default.
- W2342006632 cites W2118025528 @default.
- W2342006632 cites W2119350939 @default.
- W2342006632 cites W2121969814 @default.
- W2342006632 cites W2123503110 @default.
- W2342006632 cites W2127689830 @default.
- W2342006632 cites W2128271252 @default.
- W2342006632 cites W2131263044 @default.
- W2342006632 cites W2135826343 @default.
- W2342006632 cites W2143487029 @default.
- W2342006632 cites W2146506577 @default.
- W2342006632 cites W2151103935 @default.
- W2342006632 cites W2152926413 @default.
- W2342006632 cites W2156094778 @default.
- W2342006632 cites W2157939923 @default.
- W2342006632 cites W2161778590 @default.
- W2342006632 cites W2161969291 @default.
- W2342006632 cites W2168415715 @default.
- W2342006632 cites W2169738563 @default.
- W2342006632 cites W2171125807 @default.
- W2342006632 cites W2172156083 @default.
- W2342006632 cites W2172192658 @default.
- W2342006632 cites W2535410496 @default.
- W2342006632 cites W276900368 @default.
- W2342006632 cites W3141021069 @default.
- W2342006632 cites W3141038539 @default.
- W2342006632 cites W4205409207 @default.
- W2342006632 doi "https://doi.org/10.1109/tpami.2016.2557779" @default.
- W2342006632 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/27116731" @default.
- W2342006632 hasPublicationYear "2017" @default.
- W2342006632 type Work @default.
- W2342006632 sameAs 2342006632 @default.
- W2342006632 citedByCount "56" @default.
- W2342006632 countsByYear W23420066322016 @default.
- W2342006632 countsByYear W23420066322017 @default.
- W2342006632 countsByYear W23420066322018 @default.
- W2342006632 countsByYear W23420066322019 @default.
- W2342006632 countsByYear W23420066322020 @default.
- W2342006632 countsByYear W23420066322021 @default.
- W2342006632 countsByYear W23420066322022 @default.
- W2342006632 countsByYear W23420066322023 @default.
- W2342006632 crossrefType "journal-article" @default.
- W2342006632 hasAuthorship W2342006632A5010680659 @default.
- W2342006632 hasAuthorship W2342006632A5019742155 @default.
- W2342006632 hasAuthorship W2342006632A5020328677 @default.
- W2342006632 hasAuthorship W2342006632A5037151839 @default.
- W2342006632 hasAuthorship W2342006632A5038832828 @default.
- W2342006632 hasAuthorship W2342006632A5051534545 @default.
- W2342006632 hasAuthorship W2342006632A5073861650 @default.
- W2342006632 hasAuthorship W2342006632A5080976060 @default.
- W2342006632 hasAuthorship W2342006632A5083141819 @default.
- W2342006632 hasConcept C10161872 @default.
- W2342006632 hasConcept C104114177 @default.
- W2342006632 hasConcept C114614502 @default.
- W2342006632 hasConcept C153180895 @default.
- W2342006632 hasConcept C154945302 @default.
- W2342006632 hasConcept C31972630 @default.
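The triple pattern at the top of this record (`{ <https://semopenalex.org/work/W2342006632> ?p ?o ?g. }`) can be reproduced programmatically. Below is a minimal Python sketch using only the standard library; it assumes the public SemOpenAlex SPARQL endpoint at `https://semopenalex.org/sparql` and the standard SPARQL JSON results format, so the endpoint URL and response handling should be verified before use:

```python
import json
import urllib.parse
import urllib.request

# Assumed public SPARQL endpoint for SemOpenAlex (verify before use).
ENDPOINT = "https://semopenalex.org/sparql"
WORK_URI = "https://semopenalex.org/work/W2342006632"


def build_query(work_uri: str) -> str:
    """Build a SPARQL query matching the pattern shown at the top of this record."""
    return f"SELECT ?p ?o WHERE {{ <{work_uri}> ?p ?o . }}"


def fetch_triples(work_uri: str, endpoint: str = ENDPOINT) -> list:
    """Run the query against the endpoint and return (predicate, object) pairs."""
    params = urllib.parse.urlencode({"query": build_query(work_uri)})
    req = urllib.request.Request(
        f"{endpoint}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # SPARQL JSON results: bindings hold one dict per variable per solution.
    return [
        (b["p"]["value"], b["o"]["value"])
        for b in data["results"]["bindings"]
    ]
```

Calling `fetch_triples(WORK_URI)` would return pairs such as `("https://purl.org/dc/terms/title", "MARCOnI—ConvNet-Based MARker-Less Motion Capture in Outdoor and Indoor Scenes")`, mirroring the predicate/object listing above (the exact predicate IRIs depend on the SemOpenAlex vocabulary and are not shown in this record).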