Matches in SemOpenAlex for { <https://semopenalex.org/work/W3100014584> ?p ?o ?g. }
Showing items 1 to 52 of 52, with 100 items per page.
- W3100014584 endingPage "100146" @default.
- W3100014584 startingPage "100146" @default.
- W3100014584 abstract "Recent advances in deep learning have greatly simplified the measurement of animal behavior and advanced our understanding of how animals and humans behave. The article previewed here provides readers with an excellent overview of the topic of motion capture with deep learning and will be of interest to the wider data science community. Recent advances in deep learning have greatly simplified the measurement of animal behavior and advanced our understanding of how animals and humans behave. The article previewed here provides readers with an excellent overview of the topic of motion capture with deep learning and will be of interest to the wider data science community. In “A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives,” published in Neuron on October 14, 2020,1Mathis A. Schneider S. Lauer J. Mathis M.W. A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives.Neuron. 2020; 108: 44-65Abstract Full Text Full Text PDF PubMed Scopus (9) Google Scholar Mathis et. al. do exactly what they say in the title. They provide a clear, thorough, and engaging discussion about the techniques and technologies available for motion capture of the positions of animals without the need for marking the subjects. In the article, the authors review the budding field of motion capture with deep learning, discuss the principles of those algorithms, highlight their potential as well as pitfalls for experimentalists, and provide a glimpse into the future. A great deal can be learned from videos of animals in the wild, where the animal behavior is not influenced by their knowledge that they are being observed. The low cost and size of digital video cameras means that there has been an explosion in the amount of footage captured, providing great opportunities but also significant challenges when it comes to exploring and analyzing this data. Extracting behavioral measurements non-invasively from video is a hard computational problem. Deep learning has tremendously advanced researchers’ ability to predict posture from video footage directly. The article starts with an overview of the principles of deep learning methods for markerless motion capture, including an overview of algorithms, datasets and data augmentation, model architectures, loss functions, and optimization. The discussion then moves into scope and applications and how the (current) packages work before moving into practical considerations for pose estimation (with deep learning). The authors include a very helpful and extensive discussion of potential pitfalls and how to avoid them, which is of value to anyone interested in markerless pose estimation and includes practical illustrations on the impact of corruption and data augmentation with the data. The question of “what to do with motion capture data?” is also discussed, as is the more focused topic of pose estimation specifically for neuroscience. The authors also provide glossaries of relevant terms from deep learning and hardware and provide an extensive list of over 180 references. This article is therefore of interest not only to neuroscientists, but to other researchers and data scientists working—or wishing to work—with pose estimation from video. Fundamentally, raw video is a collection of pixels that are static in their location and have varying value over time. 
Categorizing objects in the footage as collections of pixels that move or change together is possible, but better representations exist for analyzing behavior: identifying the properties of objects in the images, such as location, scale, and orientation. Pose-estimation algorithms are highly flexible functions that map video frames onto the coordinates of body parts. The identity of the body parts has semantically defined meaning (e.g., head, lower arm, upper arm), and the algorithms can group those pieces to assemble an individual, which allows them to identify the posture of multiple individuals simultaneously. “Keypoint-based pose estimation” takes advantage of the fact that the motion of humans and many other animals is determined by the geometric shapes and structures formed by pendulum-like movements of the extremities relative to their joints. A common demonstration of this idea is the use of stick-figure diagrams to illustrate poses, where lines represent limbs and bodies. Decomposing objects into keypoints with semantic meaning (for example, body parts in videos of human or animal subjects) allows researchers to convert a high-dimensional video signal into a collection of time series describing the movement of each keypoint, which is far easier to analyze. Keypoint-based pose estimation is also semantically meaningful for investigating behavior: for example, the alignment of the head relative to the body tells you which way an animal is looking (see the sketch after this listing). Previously, motion capture systems relied on physical markers to infer keypoints from videos. This was achieved by manually enhancing or physically marking the areas of interest (with colors, LEDs, or reflective markers). Markerless pose-estimation algorithms map raw video directly to these coordinates without the need for physical tags. Marker-based pose estimation requires special preparation and equipment, while the markerless method can be applied post hoc but typically requires ground-truth annotations of example images (i.e., a training set). Markerless techniques also permit the extraction of additional keypoints from the same (or different) video at a later stage, something that is not possible with markers. Pose estimation with deep learning relieves the user of much of the long, slow effort of digitizing keypoints, but it does not remove that effort completely: with markerless tracking, a much smaller dataset still needs to be annotated, saving time, and the resulting models can then be applied to new videos. Many new tools are being developed specifically to help users of pose-estimation packages analyze movement and behavioral outputs in a high-throughput manner. These include time series analysis and supervised and unsupervised learning tools, which are discussed in more detail in the article. Many of these packages existed before deep learning became commonplace and can now be leveraged more extensively with deep learning technology. Markerless motion capture has a great deal of future potential, as it can excel in complicated scenes, with diverse animals, and with whatever camera is available. The limiting factor is the human annotator's ability to reliably label keypoints: you need to be able to see what you want to track.
Past motion capture experiments were limited by the restrictions of computer vision, meaning that the environment had to be simplified dramatically; as a result, the animals being studied were in a very artificial environment even in the laboratory (e.g., no bedding, white or black walls, high contrast). Deep learning-based pose estimation allows researchers to study behavior in the real world, with all the variety of backgrounds and environments associated with it, providing a real and uninfluenced view. As the authors conclude, neuroscience and artificial intelligence (AI) have a long history of influencing each other, and research in neuroscience will likely contribute to making AI more robust. The analysis of animal motion is a highly interdisciplinary field with a long tradition, at the intersection of biomechanics, computer vision, medicine, and robotics. The recent advances in deep learning have greatly simplified the measurement of animal behavior and will greatly advance our understanding of how animals and humans behave. This article provides readers with an excellent overview of the topic of motion capture with deep learning and will be of interest to the wider data science community." @default.
- W3100014584 created "2020-11-23" @default.
- W3100014584 creator A5048560710 @default.
- W3100014584 date "2020-11-01" @default.
- W3100014584 modified "2023-09-26" @default.
- W3100014584 title "Preview of: A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives" @default.
- W3100014584 cites W3081605501 @default.
- W3100014584 doi "https://doi.org/10.1016/j.patter.2020.100146" @default.
- W3100014584 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/7691378" @default.
- W3100014584 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/33294876" @default.
- W3100014584 hasPublicationYear "2020" @default.
- W3100014584 type Work @default.
- W3100014584 sameAs 3100014584 @default.
- W3100014584 citedByCount "0" @default.
- W3100014584 crossrefType "journal-article" @default.
- W3100014584 hasAuthorship W3100014584A5048560710 @default.
- W3100014584 hasBestOaLocation W31000145841 @default.
- W3100014584 hasConcept C104114177 @default.
- W3100014584 hasConcept C154945302 @default.
- W3100014584 hasConcept C178790620 @default.
- W3100014584 hasConcept C185592680 @default.
- W3100014584 hasConcept C2777563447 @default.
- W3100014584 hasConcept C41008148 @default.
- W3100014584 hasConceptScore W3100014584C104114177 @default.
- W3100014584 hasConceptScore W3100014584C154945302 @default.
- W3100014584 hasConceptScore W3100014584C178790620 @default.
- W3100014584 hasConceptScore W3100014584C185592680 @default.
- W3100014584 hasConceptScore W3100014584C2777563447 @default.
- W3100014584 hasConceptScore W3100014584C41008148 @default.
- W3100014584 hasIssue "8" @default.
- W3100014584 hasLocation W31000145841 @default.
- W3100014584 hasLocation W31000145842 @default.
- W3100014584 hasLocation W31000145843 @default.
- W3100014584 hasOpenAccess W3100014584 @default.
- W3100014584 hasPrimaryLocation W31000145841 @default.
- W3100014584 hasRelatedWork W1821542529 @default.
- W3100014584 hasRelatedWork W1830151936 @default.
- W3100014584 hasRelatedWork W2104996629 @default.
- W3100014584 hasRelatedWork W2105769806 @default.
- W3100014584 hasRelatedWork W2144043954 @default.
- W3100014584 hasRelatedWork W2294598463 @default.
- W3100014584 hasRelatedWork W2511137960 @default.
- W3100014584 hasRelatedWork W2687972263 @default.
- W3100014584 hasRelatedWork W2799206379 @default.
- W3100014584 hasRelatedWork W3107474891 @default.
- W3100014584 hasVolume "1" @default.
- W3100014584 isParatext "false" @default.
- W3100014584 isRetracted "false" @default.
- W3100014584 magId "3100014584" @default.
- W3100014584 workType "article" @default.
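
The abstract's point that keypoints convert a high-dimensional video signal into easy-to-analyze time series, and that head-body alignment reveals which way an animal is looking, can be made concrete with a short sketch. The Python snippet below is illustrative only: the keypoint trajectories are synthetic, and the array layout and function names are assumptions for this sketch, not the primer's method or any particular package's API.

import numpy as np

rng = np.random.default_rng(0)
n_frames = 300

# Hypothetical keypoint time series for one animal (nose, neck, tail base),
# standing in for the output of a markerless pose estimator on a video.
# Shape: (n_frames, 3 keypoints, 2 image coordinates) -- a time series per
# keypoint rather than raw pixels.
keypoints = np.cumsum(rng.normal(0.0, 1.0, size=(n_frames, 3, 2)), axis=0)
nose, neck, tail = keypoints[:, 0], keypoints[:, 1], keypoints[:, 2]

def heading_angle(nose, neck, tail):
    """Signed angle (radians) per frame between the head vector
    (neck -> nose) and the body axis (tail -> neck)."""
    head = nose - neck
    body = neck - tail
    # atan2 of the 2D cross and dot products yields the signed angle.
    cross = head[:, 0] * body[:, 1] - head[:, 1] * body[:, 0]
    dot = np.sum(head * body, axis=1)
    return np.arctan2(cross, dot)

angles = heading_angle(nose, neck, tail)
print(f"head deviates from body axis by >45 deg in "
      f"{np.mean(np.abs(angles) > np.pi / 4):.0%} of frames")

Once keypoints are available, behavioral measures like this reduce to ordinary time series operations, which is exactly the simplification the primer emphasizes.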