Matches in SemOpenAlex for { <https://semopenalex.org/work/W4313013080> ?p ?o ?g. }
Showing items 1 to 51 of 51, with 100 items per page.
- W4313013080 abstract "Article Figures and data Abstract Editor's evaluation Introduction Results Discussion Materials and methods Data availability References Decision letter Author response Article and author information Metrics Abstract Many species of animals exhibit an intuitive sense of number, suggesting a fundamental neural mechanism for representing numerosity in a visual scene. Recent empirical studies demonstrate that early feedforward visual responses are sensitive to numerosity of a dot array but substantially less so to continuous dimensions orthogonal to numerosity, such as size and spacing of the dots. However, the mechanisms that extract numerosity are unknown. Here, we identified the core neurocomputational principles underlying these effects: (1) center-surround contrast filters; (2) at different spatial scales; with (3) divisive normalization across network units. In an untrained computational model, these principles eliminated sensitivity to size and spacing, making numerosity the main determinant of the neuronal response magnitude. Moreover, a model implementation of these principles explained both well-known and relatively novel illusions of numerosity perception across space and time. This supports the conclusion that the neural structures and feedforward processes that encode numerosity naturally produce visual illusions of numerosity. Taken together, these results identify a set of neurocomputational properties that gives rise to the ubiquity of the number sense in the animal kingdom. Editor's evaluation The current manuscript presents a computational model of numerosity estimation. The model relies on center-surround contrast filters at different spatial scales with divisive normalization between their responses. Using dot arrays as visual stimuli, the summed normalized responses of the filters are sensitive to numerosity and insensitive to the low-level visual features of dot size and spacing. Importantly, the model provides an explanation of various spatial and temporal illusions in visual numerosity perception. https://doi.org/10.7554/eLife.80990.sa0 Decision letter Reviews on Sciety eLife's review process Introduction Humans have an intuitive sense of number that allows numerosity estimation without counting (Dehaene, 2011). The prevalence of number sense across phylogeny and ontogeny (Feigenson et al., 2004) suggests common neural mechanisms that allow the extraction of numerosity information from a visual scene. While earlier empirical work highlighted the parietal cortex for numerosity representation (Nieder, 2016), growing evidence suggests that numerosity is processed at a much earlier stage. A recent study, using high-temporal resolution electroencephalography together with a novel stimulus design, demonstrated that early visual cortical activity is uniquely sensitive to the number (abbreviated as N) of a dot array in the absence of any behavioral response, but much less so to nonnumerical dimensions that are orthogonal to number (i.e., size and spacing, abbreviated as Sz and Sp, respectively; see Figure 1A; Park et al., 2016). Subsequent behavioral and neural studies showed that this early cortical sensitivity to numerosity indicates feedforward activity in visual areas V1, V2, and V3 (Fornaciai et al., 2017; Fornaciai and Park, 2021; Fornaciai and Park, 2018). These results suggest that numerosity is a basic currency of perceived magnitude early in the visual stream. Figure 1 Download asset Open asset Stimulus design and computational methods. 
(A) Properties of magnitude dimensions represented in three orthogonal axes defined by log-scaled number (N), size (Sz), and spacing (Sp) (Table 1). (B) Schematic illustration of the computational process from a dot-array image to the driving input (i.e., the model without divisive normalization), D, of the simulated neurons, versus the normalized response (i.e., the model with divisive normalization), R. A bitmap image of a dot array was fed into a convolutional layer with DoG filters in six different sizes (Equation 1). The resulting values, after half-wave rectification, represented the driving input. The neighborhood weight, defined by η, was multiplied by the driving input across all neurons and all filter sizes, and the summation of these weighted inputs served as the normalization factor (see Equations 2 and 3). This illustration of η shows the case where r is defined as twice the sigma of the DoG kernel. DoG, difference-of-Gaussians.
Table 1. Mathematical relationships between various magnitude dimensions.
Dimension | As a function of n, rd, rf | As a function of N, Sz, Sp
Individual area (IA) | π·rd² | log(IA) = 1/2·log(Sz) − 1/2·log(N)
Total area (TA) | n·π·rd² | log(TA) = 1/2·log(Sz) + 1/2·log(N)
Field area (FA) | π·rf² | log(FA) = 1/2·log(Sp) + 1/2·log(N)
Sparsity (Spar) | π·rf²/n | log(Spar) = 1/2·log(Sp) − 1/2·log(N)
Individual perimeter (IP) | 2π·rd | log(IP) = log(2√π) + 1/4·log(Sz) − 1/4·log(N)
Total perimeter (TP) | n·2π·rd | log(TP) = log(2√π) + 1/4·log(Sz) + 3/4·log(N)
Coverage (Cov) | n·rd²/rf² | log(Cov) = 1/2·log(Sz) − 1/2·log(Sp)
Closeness (Close) | π²·rd²·rf² | log(Close) = 1/2·log(Sz) + 1/2·log(Sp)
Note: n = number; rd = radius of individual dot; rf = radius of the invisible circular field in which the dots are drawn. (An algebraic check of these relations follows the abstract.)
Nevertheless, it is unclear how feedforward neural activity creates a representation of numerosity within these brain regions. Specifically, the view of numerosity as a discrete number of items seems incompatible with the primary modes of information processing in the brain, such as firing rates and population codes, which are continuous. Indeed, some authors assume that continuous nonnumerical magnitude information is encoded first and integrated to produce the representation of numerosity (Dakin et al., 2011; Gebuis et al., 2016; Leibovich et al., 2017). In contradiction to this view, however, recent empirical studies demonstrate that the magnitude of visual cortical activity is most sensitive to number and is relatively insensitive to other continuous dimensions such as size and spacing of a dot array (DeWind et al., 2019; Park, 2018; Paul et al., 2022; Van Rinsveld et al., 2020). What explains this insensitivity to spacing and size effects, despite robust sensitivity to number? Previous computational modeling studies offer some hints about this question. The computational model of Dehaene and Changeux, 1993 explains numerosity detection based on several neurocomputational principles. That model (hereafter D&C) assumes a one-dimensional linear retina (each dot is a line segment), and responses are normalized across dot size via a convolution layer that represents combinations of two attributes: (1) dot size, as captured by difference-of-Gaussians contrast filters of different widths; and (2) location, by centering filters at different positions. In the convolution layer, the filter that matches the size of each dot dominates the neuronal activity at the location of the dot owing to a winner-take-all lateral inhibition process. To indicate numerosity, a summation layer pools the total activity over all the units in the convolution layer.
While the D&C model provided a proof of concept for numerosity detection, it has several limitations as outlined in the discussion. Of these, the most notable is that strong winner-take-all in the convolution layer discretizes visual information (e.g., discrete locations and discrete sizes yielding a literal count of dots), which is implausible for early vision. As a result, the output of the model is completely insensitive to anything other than number in all situations, which is inconsistent with empirical data (Park et al., 2021). Recently, several deep-network-based models have been applied to numerosity perception (Creatore et al., 2021; Kim et al., 2021; Nasr et al., 2019; Stoianov and Zorzi, 2012; Testolin et al., 2020). Stoianov and Zorzi, 2012 developed a hierarchical generative model of the sensory input (images of object arrays) and demonstrated that after learning to generate its own sensory input, some units in the hidden layer were sensitive to numerosity irrespective of total area while other units were sensitive to total area irrespective of numerosity. This suggests an unsupervised learning mechanism for efficient coding of the sensory data that can extract statistical regularities of the input images. The authors provided some suggestions as to the specific neurocomputational principle(s) underlying the success of this model. For example, the first hidden layer developed center-surround representations of different sizes and the second layer developed a pattern of inhibitory connections to units in the first layer that encoded cumulative area. However, the development of center-surround detectors based on unsupervised learning is a common observation (Bell and Sejnowski, 1997), indicating that such results are not unique to displays of dot arrays, and are instead a natural byproduct of learning in the visual system. In a more recent study, Kim et al., 2021 found that sensitivity and selectivity to numerosity were well captured in a completely untrained convolutional neural network (AlexNet) (Krizhevsky et al., 2012), suggesting that a repeated process of convolution and pooling is capable of normalizing continuous dimensions and extracting numerosity information as a statistical regularity of an image. However, these are ‘black box’ models, and it is not always clear how these models work; these models contain many mechanisms, and it is not clear which mechanisms are crucial for producing numerosity-sensitive units. Rather than applying a complex multilayer learning model, we distill the neurocomputational principles that enable the visual system to be sensitive to numerosity while remaining relatively insensitive to nonnumerical visual features. These principles are simulated in a single-layer model that does not need to be trained. Consistent with prior work, we hypothesize that center-surround contrast filters at different spatial scales play an important role in numerosity perception. In addition to this ‘convolution’ of the input, most prior proposals entail some form of pooling or normalization (e.g., normalization between center-surround units). This can emerge across layers of visual processing, as often assumed in ‘max pooling’ layers of a convolutional neural network (Scherer et al., 2010), or it can occur within a layer, as in the strong winner-take-all lateral inhibition used in the Dehaene and Changeux, 1993 model. Furthermore, some models contain both within-layer normalization and between-layer max pooling (Krizhevsky et al., 2012). 
Although the functional form of within-layer normalization is similar to between-layer max pooling, it differs anatomically, placing the normalized response earlier in visual processing. In determining the neural mechanisms that are core to numerosity, we note that a moderate level of within-layer normalization is consistent with ‘divisive normalization’ (Carandini and Heeger, 2011), in which the response of each neuron reflects its driving input divided by the summation of responses from anatomically surrounding neurons (i.e., a normalization pool). This normalization is not as extreme as winner-take-all normalization and tends to preserve visual precision through graded activation responses. In the case of early vision, the normalization pool is spatially determined by retinotopic positions. Divisive normalization is known to exist throughout the cortex, reflecting the shunting inhibition of inhibitory interneurons that limit neural activation within a patch of cortex (Carandini and Heeger, 2011). A wealth of evidence indicates that divisive normalization is ubiquitous across species and brain systems and hence is thought to be a fundamental computation of many neural circuits. Thus, any theory of numerosity perception would be remiss not to include the effect of within-layer divisive normalization. To determine the contribution of divisive normalization to numerosity encoding, we implemented an untrained neural network with versus without divisive normalization as applied to center-surround filters at different spatial scales (e.g., as in V1) (Figure 1B). The output simulates the summation of synchronized postsynaptic activity of a large population of neurons at a pre-decisional stage, consistent with previous work (Fornaciai et al., 2017; Park et al., 2016). Our results show that (1) multiple, hierarchically organized center-surround filters of varying size make the network insensitive to spacing and that (2) divisive normalization implemented across network units makes the network additionally insensitive to size. Divisive normalization occurs not only over space but also over time (Huber and O’Reilly, 2003). Thus, we additionally implemented temporal divisive normalization to test if it explains the contextual effects of numerosity perception (Burr and Ross, 2008; Park et al., 2021). Results Center-surround convolution captures total pixel intensities and eliminates the effect of spacing Images of dot arrays that varied systematically across number, size, and spacing (see Materials and methods) were fed into a convolutional layer with difference-of-Gaussians (DoG) filters in six different sizes. The driving input, D, for each filter was the convolution of a DoG with the display image, in other words, a weighted sum of local pixel intensities (Figure 1B; see the sketch following the abstract). The summed driving input in each filter size showed different effects as a function of number, size, and spacing (Figure 2A), but when the driving input was summed across all filter sizes it was modulated equally strongly by number and size but not by spacing (Figure 2B), suggesting that the neural activity tracks total area (TA; see Table 1; Figure 2—figure supplement 1). The effect of spacing existed in the fourth and sixth largest filter sizes, largely indicating effects of field area and density, respectively (Figure 2A); however, the effects in these two filter sizes were in opposite directions, which made the overall effect very small.
These results illustrate that having multiple filter sizes is key to normalizing the spacing dimension. In sum, the driving input of the convolutional layer primarily captured total pixel intensity of the image regardless of the spatial configuration of dots. Figure 2 (with 3 supplements). Simulation results showing the effects of number (N), size (Sz), and spacing (Sp) on the driving input and normalized response of the network units. (A) Summed driving input (ΣD) separately for each of the six filter sizes as a function of N, Sz, and Sp (see Materials and methods for the specific values of s). (B) ΣD across all filters is modulated by both number and size but not by spacing. (C) Summed normalized response (ΣR) showed a near elimination of the Sz effect, leaving only the effect of N. The results were simulated using r=2σ and γ=2, but effects of Sz and Sp were negligible across all the tested model parameters (Figure 2—figure supplement 2). The value s on the horizontal axis indicates a median value for each dimension (see Materials and methods). Divisive normalization nearly eliminates the effect of size We next added divisive normalization to the center-surround model, with different parameter values (neighborhood size and amplification factor), to determine the conditions under which divisive normalization might reduce or eliminate the effect of size and whether it might alter the absence of spacing effects in the driving input. The driving input was normalized by a normalization factor defined as a weighted summation of activity over neighboring neurons and filter sizes (Equation 2; see the sketch following the abstract). The summed normalized responses, ΣR, were strongly modulated by number but much less so, if at all, by size and spacing (Figure 2C). The pattern of results was largely consistent across different parameter values for neighborhood size (r) and amplification factor (γ) of the normalization model (Figure 2—figure supplement 2); therefore, we chose moderate values of r (=2) and γ (=2) for subsequent simulations. As one way to quantify these modulatory effects, we performed a simple linear regression with ΣR as the dependent variable and mean-centered values of N as the independent variable (and likewise for Sz and Sp in separate regression models). The slope estimate was then divided by the intercept estimate so that these effects could be easily compared across different sets of images (see Figure 2—figure supplement 3). This baseline-adjusted regression slope for N, Sz, and Sp was 0.5771, 0.0646, and 0.0321, respectively. A multiple regression model with summed normalized responses as the dependent measure and the three orthogonal dimensions (N, Sz, and Sp) as the independent variables revealed a much larger coefficient estimate for N (b=13.68) than for Sz (b=1.541) and for Sp (b=0.7809). In sum, a modest degree of divisive normalization eliminated the effect of size and, at the same time, did not alter the absence of spacing effects. Divisive normalization across space explains various visual illusions Next, we considered whether the center-surround model with divisive normalization also explains some of the most well-known visual illusions of numerosity perception. If so, this would support the hypothesis that these visual illusions reflect early visual processing at the level of numerosity encoding, without requiring any downstream processing. In other words, early vision may be the root cause of both numerosity encoding and numerosity visual illusions.
Empirical studies have long shown that irregularly spaced arrays (compared with regularly spaced arrays) and arrays with spatially grouped items (compared with ungrouped items) are all underestimated (Frith and Frith, 1972; Ginsburg, 1976; van Oeffelen and Vos, 1982). These illusions were indeed captured by the inclusion of divisive normalization. Irregular arrays yielded a 5.98% reduction (Cohen’s d=4.23) and grouped arrays yielded a 2.99% reduction (d=10.02) in normalized response (Figure 3A–B). Note that, in the absence of divisive normalization, there was either no effect or an effect in the opposite direction (Figure 3—figure supplement 1). The underestimation effects in the normalized response can be explained by greater normalization when neurons with overlapping normalization neighborhoods are activated, with this greater overlap occurring in subregions of the images for irregular or grouped dots. This explanation is functionally similar to one provided by the ‘occupancy model’ (Allik and Tuulmets, 1991), but our results demonstrate that these effects emerge naturally within early visual processing. Figure 3 (with 7 supplements). Simulation of numerosity illusions. Normalized response of the network units influenced by the (A) regularity, (B) grouping, and (C) heterogeneity of dot arrays, as well as by (D) adaptation and (E) context. Error bars represent one standard deviation of the normalized response across simulations; however, the error bars in most cases were too small to be visualized. Spatial normalization effects (A, B, and C) were simulated with r=2 and γ=2. Temporal normalization effects (D, E) used these same parameter values in combination with ω=8 and δ=1. A relatively understudied visual illusion is the effect of heterogeneity of dot size on numerosity perception. A recent behavioral study demonstrated that the point of subjective equality was about 5.5% lower in dot arrays with heterogeneous sizes compared with dot arrays with homogeneous sizes (Lee et al., 2016). Consistent with this behavioral phenomenon, our simulations revealed that greater heterogeneity leads to greater underestimation (Figure 3C). As compared to the homogeneous array, a moderately heterogeneous array (labeled ‘less heterogeneous’) yielded a 1.14% reduction (d=2.43) and the more heterogeneous array yielded a 5.87% reduction (d=8.11) in the magnitude of the normalized response. This occurs because the summed normalized response of a single dot saturates as dot area increases (Figure 3—figure supplement 2), which interacts with the heterogeneity of the dot array. As heterogeneity is manipulated by making some dots larger and other dots smaller while keeping total area and numerosity constant, this saturating effect makes the overall normalized response smaller as a greater number of dots deviates from the average size (the gains from making some dots larger are not as great as the losses from making some dots smaller). As in the case of other illusions, the same analysis in the absence of divisive normalization fails to produce this illusion (Figure 3—figure supplement 1). Divisive normalization across time explains numerosity adaptation and context effects One of the most well-known visual illusions in numerosity perception is the adaptation effect (Burr and Ross, 2008).
We reasoned that numerosity adaptation might reflect divisive normalization across time, similar to adaptation with light or odor (Carandini and Heeger, 2011), which shifts the response curve and produces a contrast aftereffect. Closely related to temporal adaptation, the recently discovered temporal context effect in numerosity perception is an amplified neural response to changes in one dimension (e.g., changes in dot size) when observers experience a trial sequence with only changes in that dimension (Park et al., 2021). Therefore, we also applied the model with temporal normalization to the context effect. We modeled temporal divisive normalization for a readout neuron that is driven by the sum of the normalized responses across all units, ΣR. This summed total response (now referred to as M) was temporally normalized (M*) by the recency-weighted average of the driving input (Equation 4; see the sketch following the abstract). Temporal normalization shifts the sigmoid response curve horizontally along the dimension of M to maximize the sensitivity of M* based on the recent history of stimulation. Provided that the constant in the denominator is approximately equal to the current trial’s response, the results of spatial normalization reported above would not change with the addition of temporal normalization. Temporal normalization was assessed for a target array of 10 dots presented after an adaptor array of 5, 10, or 20 dots, with model parameters ω=8 and δ=1 (Figure 3D), across 32 simulations. Similar to behavioral results (Aagten-Murphy and Burr, 2016), the target of 10 dots was underestimated by 28.9% (d=18.04) when the adaptor was more numerous than the target and was overestimated by 26.6% (d=14.06) when the adaptor was less numerous than the target. This pattern held across all tested model parameters (Figure 3—figure supplement 3). It is important to note that the model does not ‘know’ the number of dots in the adaptor image. Instead, temporal divisive normalization compares the spatially normalized response of the current image to that of the adaptor image, and because the spatially normalized response is primarily sensitive to variation in number, there is a contrast effect (e.g., ‘adapt high’ reduces the response to the current image). Indeed, because the normalized response is less sensitive to variation in size or spacing, no adaptation effect emerges for those variables (Figure 3—figure supplement 4 and Figure 3—figure supplement 5). These results confirm that divisive normalization across space and time naturally produces numerosity adaptation. Using the same model and parameters of temporal normalization (Equation 4), we tested whether it can also explain longer-sequence context effects. Studies show that the effect of size is negligible in the context of a trial sequence that varies size, spacing, and number (Park et al., 2016), but that the effect of size becomes apparent when number and spacing are held constant while varying only size (Park et al., 2021). We simulated each of these contexts: the model saw a total of 400 dot arrays that varied across number, size, and spacing, or else it saw 400 dot arrays that differed only in size (Figure 3E). A total of 128 simulations were run for each context. In the context where all dimensions varied, the three levels of Sz had no linear association with M*; the 95% confidence interval of the ordinary-least-squares linear slope of M* as a function of Sz was [–0.0243, 0.0182], which includes 0.
In contrast, in the context where only size varied, M* was positively correlated with Sz, with a slope confidence interval of [0.00315, 0.00359], which excludes 0. This pattern held across all tested model parameters (Figure 3—figure supplement 6). This phenomenon can be explained by the adaptive shifting of the sigmoid response curve across trials. In the former case, because recent trials often have a larger or smaller total response than the current trial, the normalization for the current trial is more often pushed to the nonlinear parts of the normalization curve (e.g., closer to ceiling and floor effects). Thus, the temporally normalized response is relatively insensitive to the small effect of size (keeping in mind that the effect of size is made small by spatial divisive normalization). In contrast, when only size varies across trials, the total response of recent trials is more likely to be well matched to the total response of the current trial. As a result, the small effect of size is magnified in light of this temporal stability. Discussion Despite the ubiquity of number sense across animal species, it was previously unclear how unadulterated perceptual responses produce the full variety of numerosity perception effects. Recent empirical studies demonstrate that feedforward neural activity in early visual areas is uniquely sensitive to numerosity but much less so, if at all, to the dimensions of size and spacing, which are continuous nonnumerical dimensions that are orthogonal to numerosity. Despite recent advances showing that numerosity information can be extracted from a deep neural network (Kim et al., 2021; Nasr et al., 2019; Stoianov and Zorzi, 2012), precisely how early visual areas normalize the effects of size and spacing was unclear. The current study identified the key neurocomputational principles involved in this process. First, the implementation of multiple, hierarchically organized sizes of center-surround filters effectively normalizes spacing owing to offsetting factors (Figure 4A). On the one hand, relatively smaller filters that roughly match or are slightly bigger than each dot produce a greater response when the dots are farther apart because their off-surround receptive fields (RFs) do not overlap. On the other hand, relatively larger filters that cover most of the array produce a greater response when the dots are closer together because stimulation at the center of the on-surround RFs is maximized. When summing these opposing effects, which occur at different center-surround filter sizes, the overall neural activity is relatively invariant to spacing. Second, the implementation of divisive normalization reduces the effect of size by reducing activity at larger filter sizes that have overlapping normalization neighborhoods (Figure 4B). More specifically, an increase in size produces greater overall unnormalized activity because more filters (e.g., both larger and smaller) are involved in responding to larger dots whereas only smaller filters respond to small dots (Figure 2B). However, normalization dampens this increase. Critically, divisive normalization is a within-layer effect, reflecting recurrent inhibition between center-surround filters owing to inhibitory interneurons. Thus, the effect of dot size is eliminated in early visual responses. In sum, contrast filters at different spatial scales and divisive normalization naturally increase sensitivity to the number of items in a visual scene.
Because these neurocomputational principles are commonly found in visual animals, this suggests that visual perception of numerosity is a natural, emergent phenomenon. Figure 4. Simplified schematics explaining the mechanisms underlying the normalization of size and spacing. (A) As spacing increases (from top to middle row), the response of small center-surround filters increases (red and blue) whereas the response of large center-surround filters decreases (green), with these effects counteracting each other in the total response. (B) As dot size increases (from top to middle row), more filters are involved in responding to the dots, thereby increasing the unnormalized response (red and blue), but this results in a greater overlap in the neighborhoods and increases the normalization factor (yellow). These counteracting effects eliminate the size effect. A key result from the current model is that the summed normalized output of the neuronal activity is sensitive to numerosity but shows little variation with size and spacing. This pattern is consistent with neural studies finding similar results for the summed response of V1, V2, and V3 in the absence of any behavioral judgment (Fornaciai et al., 2017; Fornaciai and Park, 2018; Paul et al., 2022). However, this pattern is different from the behavior of prior deep neural network-based models of numerosity perception, which revealed many units in the deep layers that were sensitive to nonnumerical dimensions, along with a few that were numerosity sensitive (or selective). Although the few units that were sensitive to numerosity could explain behavior, the abundance of simulated neurons sensitive to nonnumerical dimensions is inconsistent with population-level neural activity, which fails to show sensitivity to these nonnumerical dimensions in early visual cortex (DeWind et al." @default.
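As a quick consistency check on Table 1 above, the log-scaled axes N, Sz, and Sp can be recovered as sums and differences of the logged measurable quantities; the short derivation below is plain algebra implied by the table, not text quoted from the paper.

```latex
% Sanity check of the Table 1 relations (pure algebra, not quoted from the paper).
% With IA = \pi r_d^2, TA = n\,IA, FA = \pi r_f^2, and Spar = FA/n, the three
% orthogonal log-scaled axes satisfy
\begin{align}
  \log Sz &= \log IA + \log TA, \\
  \log Sp &= \log FA + \log Spar, \\
  \log N  &= \log TA - \log IA \;=\; \log FA - \log Spar \;=\; \log n .
\end{align}
% Inverting these sums and differences yields the half-coefficient entries of
% Table 1, e.g. \log IA = \tfrac{1}{2}\log Sz - \tfrac{1}{2}\log N.
```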
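The Results text above describes the driving input D as the half-wave-rectified convolution of the dot-array image with DoG filters at six scales (Equation 1). Below is a minimal Python sketch of that stage; the sigma values, the center-to-surround ratio of 2, and the toy image are illustrative assumptions rather than the values given in the paper's Materials and methods.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def driving_input(image, sigmas=(1, 2, 4, 8, 16, 32), surround_ratio=2.0):
    """Half-wave-rectified DoG responses of an image at several spatial scales."""
    D = []
    for s in sigmas:
        # Convolving with a DoG kernel equals the difference of two Gaussian blurs.
        center = gaussian_filter(image.astype(float), sigma=s)
        surround = gaussian_filter(image.astype(float), sigma=surround_ratio * s)
        D.append(np.maximum(center - surround, 0.0))  # half-wave rectification
    return D

# Example: summed driving input for a blank 256x256 image with one crude "dot".
img = np.zeros((256, 256))
img[120:136, 120:136] = 1.0            # a square stand-in for a dot, purely illustrative
D = driving_input(img)
total_D = sum(d.sum() for d in D)      # summed across positions and filter sizes
```

Summing each returned map over positions gives a per-scale ΣD analogous to Figure 2A, and summing across scales gives a total driving input analogous to Figure 2B.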
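The abstract text describes spatial divisive normalization (Equations 2 and 3) as dividing each unit's driving input by a weighted summation over neighboring neurons across all filter sizes, with neighborhood size r and amplification factor γ. The exact functional form is not reproduced in this record, so the sketch below uses one standard divisive-normalization variant; the semisaturation constant eps and the multiplicative role given to gamma are assumptions.

```python
from scipy.ndimage import gaussian_filter

def normalized_response(D, sigmas=(1, 2, 4, 8, 16, 32), r=2.0, gamma=2.0, eps=1e-6):
    """Divide each unit's driving input by a spatially weighted pool of driving
    input gathered over neighboring positions and across all filter sizes."""
    # Normalization pool: blur each scale's driving input over a neighborhood of
    # width r times that scale's sigma, then sum the pooled activity across scales.
    pool = sum(gaussian_filter(d, sigma=r * s) for d, s in zip(D, sigmas))
    # eps is a small semisaturation constant (an assumption) that avoids division by zero.
    return [d / (eps + gamma * pool) for d in D]

# Summed normalized response for a single image (using driving_input from the sketch above):
# R = normalized_response(driving_input(img)); total_R = sum(x.sum() for x in R)
```

With this in place, the summed normalized response plays the role of ΣR, which the paper reports as tracking N while the Sz and Sp effects become negligible (Figure 2C).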
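The temporal-normalization passage describes a readout M = ΣR that is normalized (M*) by a recency-weighted history, shifting a sigmoid response curve (Equation 4, with parameters ω and δ). Equation 4 itself is not quoted in this record, so the Naka-Rushton form below and the roles assigned to omega and delta are assumptions, meant only to illustrate how a contrast aftereffect can arise.

```python
def temporal_normalization(M_sequence, omega=8.0, delta=1.0):
    """Temporally normalize a sequence of summed responses M (one per trial).

    omega : exponent steepening the sigmoid response curve (assumed role).
    delta : weight of the current trial in the recency-weighted history (assumed role).
    """
    history = None
    M_star = []
    for M in M_sequence:
        if history is None:
            history = M  # initialize the adaptation state with the first trial
        # Sigmoid (Naka-Rushton) response whose half-saturation point is the
        # recency-weighted history: a larger recent history shifts the curve
        # rightward and suppresses the current response (contrast aftereffect).
        M_star.append(M**omega / (M**omega + history**omega))
        # Recency-weighted update of the history toward the current trial.
        history = (history + delta * M) / (1.0 + delta)
    return M_star

# Example: the same target response is suppressed after a high adaptor and boosted
# after a low adaptor, mirroring the direction of the adaptation effect in Figure 3D.
adapt_low, target, adapt_high = 5.0, 10.0, 20.0   # stand-ins for summed responses, not dot counts
print(temporal_normalization([adapt_high, target])[-1]
      < temporal_normalization([adapt_low, target])[-1])  # True
```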
- W4313013080 created "2023-01-05" @default.
- W4313013080 creator A5040668657 @default.
- W4313013080 date "2022-07-20" @default.
- W4313013080 modified "2023-09-25" @default.
- W4313013080 title "Editor's evaluation: A visual sense of number emerges from divisive normalization in a simple center-surround convolutional network" @default.
- W4313013080 doi "https://doi.org/10.7554/elife.80990.sa0" @default.
- W4313013080 hasPublicationYear "2022" @default.
- W4313013080 type Work @default.
- W4313013080 citedByCount "0" @default.
- W4313013080 crossrefType "peer-review" @default.
- W4313013080 hasAuthorship W4313013080A5040668657 @default.
- W4313013080 hasBestOaLocation W43130130801 @default.
- W4313013080 hasConcept C111472728 @default.
- W4313013080 hasConcept C136886441 @default.
- W4313013080 hasConcept C138885662 @default.
- W4313013080 hasConcept C144024400 @default.
- W4313013080 hasConcept C154945302 @default.
- W4313013080 hasConcept C185592680 @default.
- W4313013080 hasConcept C19165224 @default.
- W4313013080 hasConcept C2779463800 @default.
- W4313013080 hasConcept C2780586882 @default.
- W4313013080 hasConcept C41008148 @default.
- W4313013080 hasConcept C8010536 @default.
- W4313013080 hasConceptScore W4313013080C111472728 @default.
- W4313013080 hasConceptScore W4313013080C136886441 @default.
- W4313013080 hasConceptScore W4313013080C138885662 @default.
- W4313013080 hasConceptScore W4313013080C144024400 @default.
- W4313013080 hasConceptScore W4313013080C154945302 @default.
- W4313013080 hasConceptScore W4313013080C185592680 @default.
- W4313013080 hasConceptScore W4313013080C19165224 @default.
- W4313013080 hasConceptScore W4313013080C2779463800 @default.
- W4313013080 hasConceptScore W4313013080C2780586882 @default.
- W4313013080 hasConceptScore W4313013080C41008148 @default.
- W4313013080 hasConceptScore W4313013080C8010536 @default.
- W4313013080 hasLocation W43130130801 @default.
- W4313013080 hasOpenAccess W4313013080 @default.
- W4313013080 hasPrimaryLocation W43130130801 @default.
- W4313013080 hasRelatedWork W121567045 @default.
- W4313013080 hasRelatedWork W1578309724 @default.
- W4313013080 hasRelatedWork W2063185616 @default.
- W4313013080 hasRelatedWork W2090485996 @default.
- W4313013080 hasRelatedWork W2186481386 @default.
- W4313013080 hasRelatedWork W2356313285 @default.
- W4313013080 hasRelatedWork W2516800609 @default.
- W4313013080 hasRelatedWork W2533072256 @default.
- W4313013080 hasRelatedWork W2794115703 @default.
- W4313013080 hasRelatedWork W4238075012 @default.
- W4313013080 isParatext "false" @default.
- W4313013080 isRetracted "false" @default.
- W4313013080 workType "peer-review" @default.