Matches in SemOpenAlex for { <https://semopenalex.org/work/W1503195289> ?p ?o ?g. }
Showing items 1 to 45 of 45, with 100 items per page.
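The braces in the header are a SPARQL quad pattern: ?p and ?o bind the predicate and object of each statement about the work, and ?g binds the graph each statement comes from (rendered as @default in the listing below). A minimal sketch of reproducing this listing programmatically, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the standard SPARQL 1.1 protocol:

```python
import requests

# Assumption: SemOpenAlex exposes a standard SPARQL 1.1 HTTP endpoint here.
ENDPOINT = "https://semopenalex.org/sparql"

# The @default graph marker in the listing suggests the statements live in
# the default graph, so a plain triple pattern is enough to reproduce it.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W1503195289> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for b in resp.json()["results"]["bindings"]:
    print(b["p"]["value"], b["o"]["value"])
```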
- W1503195289 abstract "In recent years, robots with an imitation function have been proposed: such robots estimate the actions of a human being by a non-contact method and reproduce the same actions. However, very few systems can imitate the behaviour of the hand or the fingers (Bernardin et al., 2005). On the other hand, reports in neuroscience state that mirror neurons (Rizzolatti et al., 1996; Gallese et al., 1996), which participate in the actions imitated by chimpanzees, are activated only for actions of the hand and fingers, such as cracking peanut shells, placing a sheet of paper over an object, or tearing the sheet into pieces. Moreover, since human beings perform intelligent actions using their hands, artificially imitating the actions of hands and fingers is important for understanding the dexterity and intelligence of human hands from an engineering viewpoint. The actions involved when one “looks at an action performed by others and imitates it” include not only grasping or manipulating objects but also imitating the “shapes and postures of hands and fingers” of others, as in sign language and dancing motions. The latter actions are often more complicated and require greater dexterity than the former. To imitate “the shape of hands and fingers”, as in sign language, it is essential to estimate the shape of the hands and fingers. Furthermore, compared to imitating the actions of the lower limbs, estimating the shape of the hands is significantly more important and more difficult: the human hand, with its multi-joint structure, changes shape in a complicated manner and often performs actions with self-occlusion, in which one part of the body hides other parts from view. An artificial system could record the human hand for imitative behaviours with a multi-camera system that surrounds it with a large number of cameras; however, all animals that mimic the motions of others have only two eyes. To reproduce the actions of hands and fingers by imitating their behaviour, it is therefore desirable to adopt a monocular or binocular configuration. Conventional hand posture estimation systems can be roughly classified into two approaches. The first is a 3D-model-based approach (Rehg & Kanade, 1994; Kameda & Minoh, 1996; Lu et al., 2003), which extracts local characteristics, or the silhouette, from an image recorded by a camera and fits a 3D hand model, constructed in advance in a computer, to it. The second is a 2D-appearance-based approach (Athitsos & Sclaroff, 2002; Hoshino & Tanimoto, 2005) that …" @default.
- W1503195289 created "2016-06-24" @default.
- W1503195289 creator A5079916400 @default.
- W1503195289 date "2007-06-01" @default.
- W1503195289 modified "2023-10-01" @default.
- W1503195289 title "Copycat Hand - Robot Hand Generating Imitative Behaviour at High Speed and with High Accuracy" @default.
- W1503195289 cites W117943111 @default.
- W1503195289 cites W1635989058 @default.
- W1503195289 cites W2098500169 @default.
- W1503195289 cites W2141437475 @default.
- W1503195289 cites W2141733693 @default.
- W1503195289 doi "https://doi.org/10.5772/4874" @default.
- W1503195289 hasPublicationYear "2007" @default.
- W1503195289 type Work @default.
- W1503195289 sameAs 1503195289 @default.
- W1503195289 citedByCount "2" @default.
- W1503195289 countsByYear W15031952892017 @default.
- W1503195289 countsByYear W15031952892020 @default.
- W1503195289 crossrefType "book-chapter" @default.
- W1503195289 hasAuthorship W1503195289A5079916400 @default.
- W1503195289 hasBestOaLocation W15031952891 @default.
- W1503195289 hasConcept C130191384 @default.
- W1503195289 hasConcept C154945302 @default.
- W1503195289 hasConcept C41008148 @default.
- W1503195289 hasConceptScore W1503195289C130191384 @default.
- W1503195289 hasConceptScore W1503195289C154945302 @default.
- W1503195289 hasConceptScore W1503195289C41008148 @default.
- W1503195289 hasLocation W15031952891 @default.
- W1503195289 hasLocation W15031952892 @default.
- W1503195289 hasOpenAccess W1503195289 @default.
- W1503195289 hasPrimaryLocation W15031952891 @default.
- W1503195289 hasRelatedWork W1436644918 @default.
- W1503195289 hasRelatedWork W1563668253 @default.
- W1503195289 hasRelatedWork W2024385268 @default.
- W1503195289 hasRelatedWork W2126064811 @default.
- W1503195289 hasRelatedWork W2613429440 @default.
- W1503195289 hasRelatedWork W2748952813 @default.
- W1503195289 hasRelatedWork W2899084033 @default.
- W1503195289 hasRelatedWork W2991511628 @default.
- W1503195289 hasRelatedWork W4283011426 @default.
- W1503195289 hasRelatedWork W4327671695 @default.
- W1503195289 isParatext "false" @default.
- W1503195289 isRetracted "false" @default.
- W1503195289 magId "1503195289" @default.
- W1503195289 workType "book-chapter" @default.
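The sameAs and magId values above point at the same record in OpenAlex, so the listing can be cross-checked against the OpenAlex REST API. A sketch, assuming the public endpoint at https://api.openalex.org/works/W1503195289; note that OpenAlex returns abstracts as an abstract_inverted_index (word → positions) rather than plain text, so the abstract literal shown above has to be rebuilt word by word:

```python
import requests

# Assumption: the public, keyless OpenAlex REST API.
work = requests.get("https://api.openalex.org/works/W1503195289",
                    timeout=30).json()

print(work["display_name"])      # title literal above
print(work["publication_date"])  # "2007-06-01"
print(work["cited_by_count"])    # citedByCount "2"

# Rebuild the plain-text abstract from the inverted index.
inv = work.get("abstract_inverted_index") or {}
pairs = sorted((pos, word) for word, plist in inv.items() for pos in plist)
print(" ".join(word for _, word in pairs)[:200])
```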