Matches in SemOpenAlex for { <https://semopenalex.org/work/W4372054471> ?p ?o ?g. }
Showing items 1 to 51 of 51 with 100 items per page.
- W4372054471 abstract "Full text Figures and data Side by side Abstract Editor's evaluation Introduction Results Discussion Methods Data availability References Decision letter Author response Article and author information Metrics Abstract Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker’s goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field. Editor's evaluation This important study provides new information about the statistics of retinal motion patterns generated by human participants physically walking a straight path in real terrains that differ in ruggedness. State-of-the-art eye, head and body tracking allowed simultaneous assessment of eye movements, head movements and gait. Compelling evidence was provided for an asymmetrical gradient of flow speeds during the gait cycle of walking, tied predominantly to vertical gaze angle, together with a radial motion direction distribution tied mostly to horizontal gaze angle. This work, by describing fundamental properties of human visual motion statistics during natural behavior, should be of great interest to scientists who seek to understand the neural computations performed by walking humans, given certain behavioral goals. https://doi.org/10.7554/eLife.82410.sa0 Decision letter Reviews on Sciety eLife's review process Introduction A moving observer traveling through a stationary environment generates a pattern of motion that is commonly referred to as optic flow (Gibson, 1950; Koenderink, 1986). While optic flow is often thought of as a simple pattern of expansive motion centered on the direction of heading, this will be true for the retinal motion pattern only in the case of linear motion with gaze centered on heading direction, a condition only rarely met in natural behavior. The actual retinal motion pattern is much more complex and depends on both the three-dimensional structure of the environment and the motion of the eye through space, which in turn depends on the location of the point of gaze in the scene and the gait-induced oscillations of the body. The pervasive presence of self-motion makes it likely that the structure of motion processing systems is shaped by these patterns at both evolutionary and developmental timescales. This makes it important to understand the statistics of the actual motion patterns generated in the context of natural behavior. While much is known about motion sensitivity in the visual pathways, it is not known how those properties are linked to behavior and how they might be shaped by experience. To do this, it is necessary to measure the actual retinal motion input in the context of natural behavior. 
A similar point was made by Bonnen et al., 2020, who demonstrated that an understanding of the retinal images resulting from binocular viewing geometry allowed a better understanding of the way that cortical neurons might encode the 3D environment. Despite many elegant theoretical analyses of the way that observer motion generates retinal flow patterns, a detailed understanding has been limited by the difficulty of recording the visual input during locomotion in natural environments. In this article, we measure eye and body movements during locomotion in a variety of natural terrains and explore how they shape the properties of the retinal input. A number of studies have examined motion patterns generated by cameras moving through natural environments (Betsch et al., 2005; Zanker and Zeil, 2005), but these data do not accurately reflect the patterns incident on the human retinae because the movement of the cameras mimics neither the movements of the head nor the location of gaze. In natural locomotion, walkers gaze at different locations depending on the complexity of the terrain and the consequent need to find stable footholds (Matthis et al., 2018). Thus, task goals indirectly affect the motion input. In addition, natural locomotion is not linear. Instead, the head moves through a complex trajectory in space during the gait cycle while the point of gaze remains stable in the environment, and this imparts a complex pattern of rotation and expansion on the retinal flow, as recently described by Matthis et al., 2021. Retinal motion is generated by the compensatory rotations of the eye in space while the body moves forward during a step and gaze is held at a fixed location in space. To characterize the properties of this motion and how it depends on gaze behavior, we simultaneously recorded gaze and image data while subjects walked in a variety of different natural terrains. In addition, to fully characterize the retinal motion, we reconstructed a 3D representation of the terrain. This links the eye and body movements to the particular terrain and consequently allows calculation of the motion patterns on the retinae. Previous work on the statistics of retinal motion by Calow and Lappe simulated retinal flow patterns using estimates of gaze location and gait oscillations, together with a database of depth images (Calow and Lappe, 2007; Calow and Lappe, 2008). However, since terrain profoundly influences gaze deployment, the in situ data collection strategy used here measures how gaze location varies with terrain, allowing a more precise and realistic evaluation of the natural statistics than in previous studies. In this article, we focus on the interactions between gaze, body, and the resulting motion patterns. We find a stereotyped pattern of gaze behavior that emerges due to the constraints of the task, and this pattern of gaze, together with gait-induced head movements, drives much of the variation in the resulting visual motion patterns. Most importantly, because walkers stabilize gaze location in the world, the motion statistics result from the motion of the eye in space as it is carried forward by the body while counter-rotating to maintain stability. We calculate the statistics of the retinal image motion across the visual field. In addition, we describe the effects of changes in both vertical and lateral gaze angle, and also the effects of natural terrain structure, independent of gaze location.
Thus, a quantitative description of retinal image statistics requires an understanding of the way the body interacts with the world.
Results
Eye movements, first-person scene video, and body movements were recorded using a Pupil Labs mobile eye tracker and a Motion Shadow full-body IMU-based capture system. Eye movements were recorded at 120 Hz. The scene camera recorded at 30 Hz with 1920 × 1080 pixel resolution and a 100 deg diagonal field of view. The Shadow motion capture system recorded at 100 Hz and was used to estimate joint positions and orientations of a full 3D skeleton. Participants walked over a range of terrains two times in each direction. Examples of the terrains are shown in Figure 2a. In addition, a representation of the 3D terrain structure was reconstructed from the sequence of video images using photogrammetry, as described below in the section on optic flow estimation. Details of the procedure for calibrating and extracting integrated gaze, body, and terrain data are described in ‘Methods’, as well as in Matthis et al., 2018 and Matthis et al., 2021.
Oculomotor patterns during locomotion
Because it is important for understanding how the retinal motion patterns are generated, we first describe the basic pattern of eye movements during locomotion, which has been documented previously (Imai et al., 2001; Grasso et al., 1998; Authié et al., 2015). Figure 1a shows a schematic of the typical eye movement pattern. When the terrain is complex, subjects mostly direct gaze toward the ground a few steps ahead (Matthis et al., 2018). This provides visual information to guide upcoming foot placement. As the body moves forward, the subject makes a sequence of saccades to locations further along the direction of travel. Following each saccade, gaze location is held approximately stable in the scene for periods of 200–300 ms, so that visual information about upcoming foothold locations can be acquired while the subject moves forward during a step. A video example of the gaze patterns during locomotion, together with the corresponding retina-centered images and traces of eye-in-head angle, is given in Video 1. This video is taken from Matthis et al., 2021, who collected a subset of the data used in this article. While the walker is fixating and holding gaze stable at a particular location on the ground, the eye rotates slowly to offset the forward motion of the body. Figure 1b shows an excerpt of the vertical component of gaze during this characteristic gaze pattern. Stabilization is most likely accomplished by the vestibulo-ocular reflex, although other eye movement systems might also be involved. This is discussed further below and in Matthis et al., 2021.
Figure 1. Characteristic oculomotor behavior during locomotion. (a) Schematic of a saccade and subsequent gaze stabilization during locomotion when looking at the nearby ground. In the top left, the walker makes a saccade to an object further along the path. In the middle panel, the walker fixates (holds gaze) at this location for a time. The right panel shows the gaze angle becoming more normal to the ground plane during stabilization. (b) Excerpt of vertical gaze angle relative to gravity during a period of saccades and subsequent stabilization.
As participants move forward while looking at the nearby ground, they make sequences of saccades (indicated by the gaps in the trace) to new locations, followed by fixations where gaze is held stable at a location in the world while the body moves forward along the direction of travel (indicated by the lower velocity green traces). The higher velocity saccades were detected as described in the text, based on both horizontal and vertical velocity and acceleration. These are followed by slower counter-rotations of the eye in the orbit in order to maintain gaze at a fixed location in the scene (the gray time slices).
Video 1. Gaze behavior during locomotion. Visualization of visual input and eye and body movements during natural locomotion.
During the periods when gaze location is approximately stable in the scene, the retinal image expands and rotates, depending on the direction of the eye in space, carried by the body. It is these motion patterns that we examine here. An illustration of the retinal motion patterns resulting from forward movement accompanied by gaze stabilization is shown in Video 2, which also shows the stark difference between retinal motion patterns and motion relative to the head. This movie is also taken from Matthis et al., 2021. We segmented the eye movement record into saccades and fixations using an eye-in-orbit velocity threshold of 65 deg/s and an acceleration threshold of 5 deg/s². We use the term ‘fixation’ here to refer to the periods of stable gaze in the world separated by saccades; ‘gaze’ is the direction of the eye in the scene, and gaze location is where that vector intersects the ground plane. Note that historically the term ‘fixation’ has been used to refer to the situation where the head is fixed and the eye is stable in the orbit. However, whenever the head is moving and gaze is fixed on a stable location in the world, the eye will rotate in the orbit. Since head movements are ubiquitous in normal vision, we retain the term ‘fixation’ even when the eye rotates in the orbit as a consequence of stabilization mechanisms (see Lappi, 2016 for a discussion of the issues in defining fixations in natural behavior). The velocity threshold is quite high in order to accommodate the smooth counter-rotations during stabilization. Saccadic eye movements induce considerably higher velocities, but saccadic suppression and image blur render this information less useful for locomotor guidance, and the neural mechanisms underlying motion analysis during saccades are not well understood (McFarland et al., 2015). We consider the retinal motion generated by saccades separately, as described in ‘Methods’.
Video 2. Visual motion during locomotion. Visualization of eye- and head-centered visual motion during natural locomotion.
Incomplete gaze stabilization during a fixation will add image motion. Analysis of image slippage during the fixations revealed that stabilization of gaze location in the world was very good (see Figure 10 in ‘Methods’).
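As a concrete illustration of the segmentation step just described, the sketch below labels eye-trace samples as saccadic whenever the stated velocity or acceleration threshold is exceeded. This is a minimal sketch: the function name, the 120 Hz default sampling rate, and the rule combining the two thresholds are assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

def segment_saccades(theta_h, theta_v, fs=120.0,
                     vel_thresh=65.0, acc_thresh=5.0):
    """Label each sample as saccade (True) or fixation (False).

    theta_h, theta_v : eye-in-orbit angles in degrees (horizontal, vertical)
    fs               : sampling rate in Hz (120 Hz for the eye tracker here)
    The OR-combination of the velocity and acceleration criteria is an
    assumption of this sketch.
    """
    # Angular velocity from the horizontal and vertical components.
    vh = np.gradient(theta_h) * fs
    vv = np.gradient(theta_v) * fs
    speed = np.hypot(vh, vv)                  # deg/s
    accel = np.abs(np.gradient(speed)) * fs   # deg/s^2

    return (speed > vel_thresh) | (accel > acc_thresh)
```

Fixations are then the runs of samples between the labeled saccadic intervals.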
Retinal image slippage during fixations had a mode of 0.26 deg and a median of 0.83 deg. This image slippage reflects not only incomplete stabilization but also eye-tracker noise and some small saccades misclassified as fixations, so it is most likely an overestimate. In order to simplify the analysis, we first ignore image slip during a fixation and do the analysis as if gaze were fixed at the initial location for the duration of the fixation. In ‘Methods’, we evaluate the impact of this idealization and show that it is modest. There is variation in how far ahead subjects direct gaze between terrain types, as has been observed previously (Matthis et al., 2018), although the pattern of saccades followed by stabilizing eye movements is conserved. We summarize this behavior by measuring the angle of gaze relative to gravity and plot gaze angle distributions for the different terrain types in Figure 2. Consistent with previous observations, gaze location is moved closer to the body in the more complex terrains, with the median gaze angle in rocky terrain being approximately 45 deg, about 2–3 steps ahead, and that on pavement being directed to far distances, a little below the horizontal. Note that the distributions are all quite broad and sensitive to changes in the terrain, such as that between a paved road and a flat dirt path. Subtle changes like this presumably alter the visual information needed for foot placement. Individual subject histograms are shown in ‘Methods’. Between-subject variability is greatest in the bark and flat terrains, as might be expected from individual trade-offs between energetic costs, stability, and other factors. The bimodality of most of the distributions reflects the observation that subjects alternate between near and far viewing, presumably for different purposes (e.g. path planning versus foothold finding). These changes in gaze angle, in conjunction with the movements of the head, have an important effect on retinal motion speeds, as will be shown below. Thus, motion input indirectly stems from behavioral goals.
Figure 2. Gaze behavior depends on terrain. (a) Example images of the five terrain types. Sections of the hiking path were assigned to one of the five terrain types. The Pavement terrain included the paved parts of the hiking path, while the Flat terrain included the parts of the trail composed of flat packed earth. The Medium terrain had small irregularities in the path as well as loose rocks and pebbles. The Bark terrain, though similar to the Medium terrain, was given a separate designation, as it was generally flatter but large pieces of bark and occasional tree roots were strewn across the path. Finally, the Rocks terrain had significant path irregularities which required attention to locate stable footholds. (b) Histograms of vertical gaze angle (angle relative to the direction of gravity) across different terrain types. In very flat, regular terrain (e.g. pavement, flat), participant gaze accumulates at the horizon (90°). With increasing terrain complexity, participants shift gaze downward (30°–60°). Data are averaged over 10 subjects for rocky terrain and 8 subjects for the other terrains. Shaded error bars are ±1 SEM. Individual subject data are shown in ‘Methods’.
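A minimal sketch of how a vertical gaze angle like the one plotted in Figure 2b can be computed from a world-frame gaze direction follows. The z-up world frame, the function name, and the convention (0 deg = straight down, 90 deg = horizon) are assumptions taken from the description above, not the authors' code.

```python
import numpy as np

def vertical_gaze_angle(gaze_dir, down=(0.0, 0.0, -1.0)):
    """Angle (deg) between the gaze direction and gravity.

    0 deg  -> looking straight down along gravity
    90 deg -> looking at the horizon
    Assumes a z-up world frame with gravity along -z.
    """
    g = np.asarray(gaze_dir, dtype=float)
    g = g / np.linalg.norm(g)
    d = np.asarray(down, dtype=float)
    d = d / np.linalg.norm(d)
    cos_angle = np.clip(np.dot(g, d), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```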
Speed and direction distributions during gaze stabilization
The way the eye moves in space during the fixations, together with gaze location in the scene, jointly determines the retinal motion patterns. Therefore, we summarize the direction and speed of the stabilizing eye movements in Figure 3a and b. Figure 3a shows the distribution of movement speeds, and Figure 3b shows the distribution of gaze directions (rotations of the eye in the orbit). Rotations are primarily downward as the body moves forward, with rightward and leftward components resulting from both body sway and fixations to the left or right of the future path, occasioned by the need to change direction or navigate around an obstacle. There are a small number of upward eye movements resulting from vertical gait-related motion of the head, and possibly some small saccades that were misclassified as fixations. These movements, together with head trajectory and the depth structure of the terrain, determine the retinal motion. Note that individual differences in walking speed and gaze location relative to the body will affect these measurements, which are pooled over all terrains and subjects. Our goal here is simply to illustrate the general properties of the movements as the context for the generation of the retinal motion patterns.
Figure 3. Eye rotations during stabilization. (a) The distribution of speeds during periods of stabilization (i.e. eye movements that keep the point of gaze approximately stable in the scene). (b) A polar histogram of eye movement directions during these stabilizing movements. 270 deg corresponds to straight down in eye-centered coordinates, while 90 deg corresponds to straight up. Stabilizing eye movements are largely in the downward direction, reflecting the forward movement of the body. Some upward eye movements occur and may be due to misclassification of small saccades or variation in head movements relative to the body. Shaded region shows ±1 SEM across 10 subjects.
Optic flow estimation
In order to approximate the retinal motion input to the visual system, we first use a photogrammetry package called Meshroom to estimate a 3D triangle mesh representation of the terrain structure, as well as a 3D trajectory through the terrain, from the head camera video images. Using Blender (Blender Online Community, 2021), the 3D triangle mesh representations of the terrain are combined with the spatially aligned eye position and direction data. A virtual camera is then placed at the eye location and oriented in the same direction as the eye, and a depth image is acquired using Blender’s built-in z-buffer method. Thus, the depth image input at each frame of the recording is computed. These depth values per location on the virtual imaging surface are mapped to retinal coordinates based on their positions relative to the principal point of the camera. Thus, approximate depth at each location in visual space is known. Visual motion in eye coordinates can then be computed by tracking the movement of projections of 3D locations in the environment onto an image plane orthogonal to gaze, resulting from translation and rotation of the eye (see Longuet-Higgins and Prazdny, 1980 for the generalized approach). The retinal motion signal is represented as a 2D grid where grid points (x, y) correspond to polar retinal coordinates (θ, ϕ) by the relationship θ = atan2(y, x), ϕ = √(x² + y²). Thus, eccentricity in visual angle is mapped linearly to the image plane as a distance from the point of gaze.
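The projection-based computation just described can be sketched as follows: given the z-buffer depth at each image point and the eye's instantaneous translation and rotation, the image motion follows the generalized instantaneous motion field of Longuet-Higgins and Prazdny (1980). This is a sketch under one common sign convention for a pinhole eye with unit focal length; the function name and conventions are assumptions, not the authors' implementation.

```python
import numpy as np

def motion_field(x, y, depth, t, omega):
    """Instantaneous image motion at image point(s) (x, y).

    x, y  : image-plane coordinates for a pinhole eye with unit focal length
    depth : distance Z along the optical axis at each point (from the z-buffer)
    t     : (U, V, W), translational velocity of the eye in eye coordinates
    omega : (A, B, C), rotational velocity of the eye in eye coordinates

    Longuet-Higgins & Prazdny (1980) form: the translational term scales
    inversely with depth; the rotational term is depth-independent.
    """
    U, V, W = t
    A, B, C = omega
    xdot = (-U + x * W) / depth + A * x * y - B * (1 + x**2) + C * y
    ydot = (-V + y * W) / depth + A * (1 + y**2) - B * x * y - C * x
    return xdot, ydot
```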
At each (x, y) coordinate, there is a corresponding speed (in deg/s) and direction of movement, atan2(Δy, Δx).
Average motion speed and direction statistics
Subjects’ gaze angle modulates the pattern of retinal motion because of the planar structure of the environment (Koenderink and van Doorn, 1976). However, we first consider the average motion signal across all the different terrain types and gaze angles. We will then explore the effects of gaze angle and terrain more directly. The mean flow fields for speed and direction, averaged across subjects, for all terrains, are shown in Figure 4. While there will be inevitable differences between subjects caused by the different geometry resulting from different subject heights and idiosyncratic gait patterns, we have chosen to first average the data across subjects, since the current goal is to describe the general properties of the flow patterns resulting from natural locomotion across a ground plane. Individual subject data are shown in ‘Methods’.
Figure 4. Speed and direction of retinal motion signals as a function of retinal position. (a) Average speed of retinal motion signals as a function of retinal position. Speed is color mapped (blue = slow, red = fast). The average is computed across all subjects and terrain types. Speed is computed in degrees of visual angle per second. (b) Speed distributions at five points in the visual field: at the fovea and at four cardinal locations. The modal speed increases in all four cardinal locations, though more prominently in the upper/lower visual fields. Speed variability also increases in the periphery in comparable ways. (c) Average retinal flow pattern as a function of retinal position. The panel shows the integral curves of the flow field (black) and retinal flow vectors (green). Direction is indicated by the angle of the streamline drawn at a particular location. Vector direction corresponds to the direction in a 2D projection of visual space, where eccentricity from the direction of gaze in degrees is mapped linearly to distance in polar coordinates in the 2D projection plane. (d) Histogram of the average retinal motion directions (in c) as a function of polar angle. Error bars in (b) and (d) are ±1 SEM over 9 subjects.
Figure 4a shows a map of the average speed at each visual field location (speed is color mapped, with blue being the lowest velocity and yellow being the highest, and the contour lines indicating equal speed). This visualization demonstrates the low speeds near the fovea, with increasing speed as a function of eccentricity, a consequence of gaze stabilization. Both the mean and variance of the distributions increase with eccentricity, as shown by the speed distributions in Figure 4b. The increase is not radially symmetric. The lower visual field shows a steeper increase as a function of eccentricity than the upper visual field. This is a consequence of the increasing visual angle of the ground plane close to the walker. The left and right visual field speeds are even lower than those in the upper visual field, since the ground plane rotates in depth around a horizontal axis defined by the fixation point (see Figure 2). Average speeds in the lower visual field peak at approximately 28.8 deg/s (at 45 deg eccentricity), whereas the upper visual field peaks at 13.6 deg/s. Retinal motion directions in Figure 4c are represented by unit vectors. The average directions of flow exhibit a radially expansive pattern, as expected from the viewing geometry.
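For illustration, converting a per-frame flow field (ẋ, ẏ) into the speed and direction maps that are averaged for Figure 4 amounts to the following bookkeeping. The function name is an assumption, and the sketch does not reproduce the authors' exact averaging pipeline.

```python
import numpy as np

def flow_summary(xdot, ydot):
    """Speed and direction maps from one frame's flow field grid.

    Averaging these maps over frames and subjects yields plots like
    Figure 4a/c; histogramming per-frame values at selected retinal
    locations yields distributions like those in Figure 4b/d.
    """
    speed = np.hypot(xdot, ydot)                         # e.g. deg/s
    direction = np.degrees(np.arctan2(ydot, xdot)) % 360.0  # polar angle
    return speed, direction
```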
However, the expansive motion (directly away from center) is not radially symmetric. Directions are biased toward the vertical, with only a narrow band in the left and right visual field exhibiting leftward or rightward motion. This can be seen in the histogram in Figure 4d, which peaks at 90 deg and 270 deg. Again, this pattern results from a combination of the forward motion, the rotation in depth of the ground plane around the horizontal axis defined by the fixation point, and the increasing visual angle of the ground plane.
Effects of horizontal and vertical gaze angle on motion patterns
Averaging the data across the different terrains does not accurately reflect the average motion signals a walker might be exposed to in general, as it is weighted by the amount of time the walker spends in different terrains. It also obscures the effect of gaze angle in the different terrains. Similarly, averaging over the gait cycle obscures the effect of the changing angle between the eye and the head direction in space as the body moves laterally during a normal step. We therefore divided the data by gaze angle to reveal the effects of varying horizontal and vertical gaze angle. Vertical gaze angle, the angle of gaze in world coordinates relative to gravity, is driven by different terrain demands that cause the subject to direct gaze closer to or further from the body. Vertical gaze angles were binned between 60 and 90 deg, and between 17 and 45 deg. These bins reflect the top and bottom thirds of the distribution of vertical gaze angles. We did not calculate separate plots for individual subjects in this figure, as the goal is to show the kind and approximate magnitude of the transformation imposed by horizontal and vertical eye rotations. The effect of the vertical component of gaze angle can be seen in Figure 5. As gaze is directed more toward the horizon, the pattern of increasing speed as a function of eccentricity becomes more radially asymmetric, with the peak velocity ranging from less than 5 deg/s in the upper visual field to speeds in the range of 20–40 deg/s in the lower visual field. (Compare top and bottom panels of Figure 5a and b.) This pattern may be the most frequent one experienced by walkers, to the extent that smooth terrains are most common (see the distributions of gaze angles for flat and pavement terrain in Figure 2). As gaze is lowered to the ground, these peaks move closer together, the variance of the distributions increases in the upper/left/right fields, and the distribution of motion speeds becomes more radially symmetric. There is some effect on the spatial pattern of motion direction as well, with the density of downward motion vectors increasing at gaze angles closer to the vertical.
Figure 5. Effect of vertical gaze angle on retinal motion speed and direction. This analysis compares the retinal motion statistics for upper (60°–90°) vs. lower (17°–45°) vertical gaze angles. The upper vertical gaze angles correspond to far fixations, while the lower vertical gaze angles correspond to fixations closer to the body. (a) Average motion speeds across the visual field. (b) Five example distributions are shown, as in Figure 4. Looking at the ground near the body (i.e. lower vertical gaze angles) reduces the asymmetry between upper and lower visual fields. Peak speeds in the lower visual field are reduced, while speeds are increased in the upper visual field. (c) Average retinal flow patterns for upper and lower vertical gaze angles.
(d) Histograms of the average directions plotted in (c). While still peaking at vertical directions, the distribution of directions becomes more uniform as walkers look to more distant locations. Data are pooled across subjects.
Horizontal gaze angle is defined relative to the direction of travel. For a particular frame of the recording, the head velocity vector projected into a horizontal plane normal to gravity is treated as 0 deg, and the angle between this vector and the gaze vector projected into the same plane is the horizontal gaze angle (clockwise being positive when viewed from above). Horizontal gaze angle changes stem both from looks off the path and from the lateral movement of the body during a step. Body sway accounts for about ±12 deg of rotation of the eye in the orbit. Fixations to the right and left of the travel path deviate by about ±30 deg of visual angle. Data for all subjects were binned for horizontal gaze angles between –180 and –28 deg and from +28 to +180 deg. These bins represent the top and bottom eighths of the distribution of horizontal gaze angles. The effect of these changes can be seen in Figure 6. Changes in speed distributions are shown on the left (Figure 6a and b). The main effect is the tilt of the equal-speed contour lines in opposite directions, although the speed distributions at the five example locations are not affected very much. Changes in horizontal angle primarily influence the spatial pattern of motion direction. This can be seen on the right side of Figure 6 (in c and d), where rightward or leftward gaze introduces clockwise or counterclockwise rotation in addition to expansion. This makes motion directions more perpendicular to the radial direction at each retinal location as gaze becomes more eccentric relative to the translation direction. This corresponds to the curl signal introduced by the lateral sway of the body during locomotion or by fixations off the path (Matthis et al., 2021). An example o" @default.
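A minimal sketch of the horizontal gaze angle computation defined in the abstract above, assuming a z-up world frame; the function and variable names are illustrative rather than the authors' code.

```python
import numpy as np

def horizontal_gaze_angle(gaze_dir, head_vel, up=(0.0, 0.0, 1.0)):
    """Signed horizontal gaze angle (deg) relative to the direction of travel.

    Both vectors are projected into the horizontal plane normal to gravity;
    the projected head velocity defines 0 deg, and clockwise rotation
    (viewed from above) is positive. Assumes a z-up world frame.
    """
    up = np.asarray(up, dtype=float)

    def flatten(v):
        v = np.asarray(v, dtype=float)
        v = v - np.dot(v, up) * up      # remove the vertical component
        return v / np.linalg.norm(v)

    g = flatten(gaze_dir)
    h = flatten(head_vel)
    # Signed angle: positive when g is clockwise of h, viewed from above.
    sin_a = np.dot(np.cross(h, g), -up)
    cos_a = np.dot(h, g)
    return np.degrees(np.arctan2(sin_a, cos_a))
```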
- W4372054471 created "2023-05-07" @default.
- W4372054471 creator A5016714587 @default.
- W4372054471 date "2022-10-18" @default.
- W4372054471 modified "2023-09-24" @default.
- W4372054471 title "Editor's evaluation: Retinal motion statistics during natural locomotion" @default.
- W4372054471 doi "https://doi.org/10.7554/elife.82410.sa0" @default.
- W4372054471 hasPublicationYear "2022" @default.
- W4372054471 type Work @default.
- W4372054471 citedByCount "0" @default.
- W4372054471 crossrefType "peer-review" @default.
- W4372054471 hasAuthorship W4372054471A5016714587 @default.
- W4372054471 hasBestOaLocation W43720544711 @default.
- W4372054471 hasConcept C104114177 @default.
- W4372054471 hasConcept C105795698 @default.
- W4372054471 hasConcept C118487528 @default.
- W4372054471 hasConcept C154945302 @default.
- W4372054471 hasConcept C166957645 @default.
- W4372054471 hasConcept C205649164 @default.
- W4372054471 hasConcept C2776608160 @default.
- W4372054471 hasConcept C2780827179 @default.
- W4372054471 hasConcept C33923547 @default.
- W4372054471 hasConcept C41008148 @default.
- W4372054471 hasConcept C71924100 @default.
- W4372054471 hasConceptScore W4372054471C104114177 @default.
- W4372054471 hasConceptScore W4372054471C105795698 @default.
- W4372054471 hasConceptScore W4372054471C118487528 @default.
- W4372054471 hasConceptScore W4372054471C154945302 @default.
- W4372054471 hasConceptScore W4372054471C166957645 @default.
- W4372054471 hasConceptScore W4372054471C205649164 @default.
- W4372054471 hasConceptScore W4372054471C2776608160 @default.
- W4372054471 hasConceptScore W4372054471C2780827179 @default.
- W4372054471 hasConceptScore W4372054471C33923547 @default.
- W4372054471 hasConceptScore W4372054471C41008148 @default.
- W4372054471 hasConceptScore W4372054471C71924100 @default.
- W4372054471 hasLocation W43720544711 @default.
- W4372054471 hasOpenAccess W4372054471 @default.
- W4372054471 hasPrimaryLocation W43720544711 @default.
- W4372054471 hasRelatedWork W1830151936 @default.
- W4372054471 hasRelatedWork W1926323357 @default.
- W4372054471 hasRelatedWork W1983321931 @default.
- W4372054471 hasRelatedWork W2104996629 @default.
- W4372054471 hasRelatedWork W2144043954 @default.
- W4372054471 hasRelatedWork W2294598463 @default.
- W4372054471 hasRelatedWork W2511137960 @default.
- W4372054471 hasRelatedWork W2687972263 @default.
- W4372054471 hasRelatedWork W38039148 @default.
- W4372054471 hasRelatedWork W4236599619 @default.
- W4372054471 isParatext "false" @default.
- W4372054471 isRetracted "false" @default.
- W4372054471 workType "peer-review" @default.