Matches in SemOpenAlex for { <https://semopenalex.org/work/W2019990018> ?p ?o ?g. }
- W2019990018 endingPage "224" @default.
- W2019990018 startingPage "215" @default.
- W2019990018 abstract "The goal of neuroeconomics is a model of human decision-making that unifies corresponding theories from economics, psychology and cognitive neuroscience (Glimcher 2011). The field is still rather nascent. Yet, hundreds of studies over the last 10–15 years have led to a series of path-breaking insights into human decision-making, empirical regularities have been established and a conceptual framework has started to emerge. This article highlights some of the methodological challenges neuroeconomic research faces and describes the data on which neuroeconomic studies are typically based. It then outlines some of the major types of data analysis and points out a few of the issues related to the interpretation of results. The presentation will necessarily be extremely selective and the reader is urged to consult the references for a more complete and in-depth coverage of the topics. An account of the history of neuroeconomics is given in Glimcher et al. (2009). Glimcher (2011) outlines a conceptual framework for the neuroeconomics research program. In Section 2, I will discuss some methodological issues neuroeconomists face. In Section 3, I will describe the types of data captured in typical neuroeconomics studies, with a focus on functional magnetic resonance imaging (fMRI). Data analysis will be discussed in Section 4 and interpretation in Section 5. Section 6 concludes. The aim of neuroeconomics is to unify economic theories with theories from psychology and cognitive neuroscience to improve our understanding of human decision-making. A natural question to arise is whether such a research program is possible, both from a conceptual and from a practical perspective. At this stage, the answer to this question is far from obvious. Once we assume that the neuroeconomics research program is feasible, we will want to ask how likely it is to succeed. Finally, we want to know whether neuroeconomics can strengthen its parent disciplines, including economics. Ultimately, all of these questions are empirical and only time will tell the answer. In this section, I will discuss some of the conceptual challenges neuroeconomists encounter. Neuroeconomists aim to explain how a change in environmental conditions results in a choice by identifying the physical mechanisms underlying the decision. In contrast to most other approaches to the study of human decision-making, neuroeconomics explicitly takes into account brain activation and other neurobiological data to constrain and test models. Thus, neuroeconomic theories will have a wider set of logical primitives than traditional economic theories that includes objects (natural kinds) in common with traditional economics (for example, choices and certain environmental variables), as well as objects from psychology and neuroscience (for example, neural firing rates). Neuroeconomists take the position that the higher levels of analysis of human decision-making, economics and psychology, are at least partially reducible to lower levels of analysis, such as neurobiology. In other words, they assume that there are at least some regularities that homomorphically map some economic objects to psychological and neurobiological objects; that is, there are reductive linkages between the economic level of analysis and the psychological and neurobiological levels of analysis. 
This is equivalent to assuming that at least some of the features of economic explanations of behaviour can be predicted from the levels of psychological and neurobiological analysis. Other features of higher-level explanation may be ‘emergent’ at the higher levels and cannot be predicted from lower levels. Whether such a partial reduction of economic analysis to psychological or neurobiological analysis is possible or not is an empirical question. In the meantime, we can ask ourselves how likely partial reducibility is. For there to be no reductive linkages at all, all the features at the higher levels of explanation, economics and psychology, must be emergent properties at those levels. That is, all of economics and psychology must be unpredictable from neurobiology in principle. As Glimcher (2011) points out, this is only true under an extreme set of assumptions, and to the best of our knowledge, such an extreme degree of unpredictability, or non-reducibility, has never before been observed in science. If we take the position that the economic level of analysis can be reduced to the psychological and neurobiological levels of analysis, at least partially, we should then ask ourselves how attractive the neuroeconomic research program is to economists, say. The consideration of other levels of analysis of human behaviour (that is, psychological or neurobiological) might be quite unattractive to an economist if the observation of choices were sufficient to develop and test a complete account of human behaviour. Given the state of neoclassical, revealed-preference models, such behavioural sufficiency seems unlikely, though. But even if choices were sufficient, the neuroeconomics research program might have its merits as it might provide a short cut in the development of a complete account of decision-making. The conception of knowledge whereby different levels of analysis of phenomena can be linked through partial reduction is sometimes called ‘consilience’ (Wilson 1998). The hope is that the process of linking the different levels of analysis of human behaviour will strengthen all of the parent disciplines. Many other areas of science have undergone such a process with great success. For example, researchers have used physics and chemistry to explain the structure of cells, or genetics and chemistry to explain certain aspects of evolutionary biology. Linking the economic level of analysis with the psychological and neurobiological levels of analysis requires reductive linkages between the different levels; that is, homomorphisms between the logical objects at the higher level of analysis and those at the lower level of analysis. Some of the objects of economic theories, like choices, expected values or beliefs, may be easier to map into the lower levels. Others, like preferences, may be more difficult to link. Sometimes, these attempts at reductive linkage will encounter theoretical (ontological) and practical limits, in which case a redefinition of objects may become necessary. The development of a conceptual framework for neuroeconomics, including a set of hypotheses linking economics with psychology and neurobiology, is still in its infancy. The reader is referred to Glimcher (2011) for an outline of what such a framework may look like. Neuroeconomists want to unravel the neural mechanisms underlying decisions, from stimulus presentation through to choice, feedback and learning. 
Most neuroeconomic studies involve a decision-making task, some of them refinements of tasks developed in experimental economics, in which participants make a series of choices over one or more options. Unlike in traditional economics experiments, however, researchers measure neuronal activity while participants are completing the task, typically using brain-imaging technologies. In addition, researchers typically also measure participants’ brain structure. Sometimes, additional physiological measurements, like heart rate or blood pressure, or biological samples, like DNA, are collected, depending on the theory that is being tested. The basic information-processing units of the central nervous system are neurons, very small self-sustaining units about one-thousandth of a centimetre in diameter that process and transmit information by chemical signalling. The brain contains tens of billions of them. Neurons come in different shapes, sizes and typical patterns of communication. Chemical signalling between neurons occurs via synapses, specialised connections with other cells. Most neuroeconomic studies model neuronal activity and, in the remainder of the article, we will focus on those models.1 The most direct measure of neuronal activity is action potentials, all-or-none electrochemical pulses. The rate of action potentials can be measured by placing microelectrodes either inside a neuron or next to the neuron's cell body in the extracellular space, a technique called ‘single-unit recording’. Some of the most important discoveries in neuroscience have been generated by studies using this technique. Due to its invasive nature, single-unit recording is rarely applied in humans. There are, however, a number of non-invasive techniques available to measure neuronal activity. Electroencephalography (EEG) uses electrodes applied to the scalp to measure changes in the electrical field in the brain region underneath. Magnetoencephalography (MEG) measures changes in the magnetic fields induced by neuronal activity. Both EEG and MEG have very high temporal resolution (milliseconds range) but are most sensitive to superficial cortical signals. Functional magnetic resonance imaging relies on a difference in the magnetic susceptibility of oxygenated and deoxygenated blood to measure a blood oxygen level-dependent (BOLD) signal, which in turn is related to brain activity. Functional magnetic resonance imaging has very high spatial resolution (1–10 mm) but low temporal resolution (one-to-two seconds). Spatial and temporal resolutions have to be traded off in the acquisition protocol. In general, the higher the spatial resolution, the lower the temporal resolution. Compared to the other imaging techniques described here, fMRI has a substantial advantage in resolving small structures and those that are deep in the brain, including several that are involved in reward processing. The technologies described above differ widely in both the upfront investments required and maintenance costs. Electroencephalography systems require low upfront investments and have low maintenance costs, whereas MEG and fMRI systems require very high investments (typically, several million dollars) and are expensive to maintain (several hundred thousand dollars per year). In addition, they require infrastructure that is typically only available in hospitals or purpose-built research facilities. Functional magnetic resonance imaging measures physiological changes in the brain that are correlated with neuronal activity. 
A person is placed in a strong and highly structured magnetic field and subjected to brief radiofrequency pulses of energy. Different chemical substrates respond to these pulses as a function of the local magnetic field. This allows the scanner to measure the chemical structure of brain tissue at any location inside the brain with very high precision. The physical properties of the signal measured by an MRI scanner are now very well understood.2 Relating information about the local chemical structure of the brain to neuronal activity, however, is significantly more complicated. Magnetic resonance imaging scanners cannot detect the local shifts in electrical equilibrium produced by the brain because these shifts lie well beyond the resolution of the devices. Instead, scanners measure brain activity indirectly by observing small changes in the local chemical environment induced by neuronal activity. When a brain cell becomes active, it consumes energy. This demand for energy leads to an increase in blood flow. The response of the vascular system to increased energy demand is now well characterised and approximates a linear process. More precisely, the vascular system responds to an impulse in demand with a graded increase in blood flow. This process is called the ‘haemodynamic response’. It starts about two seconds after the impulse in demand and peaks at a delay of about six seconds. The haemodynamic response regulates the density of the molecule haemoglobin, which carries oxygen to the cells. Haemoglobin has a magnetic signature that can be measured by the brain scanner. The BOLD signal contrast, therefore, is a consequence of a series of indirect effects. The MRI scanner thus allows the researcher to measure the haemodynamic response as a time series at almost any location in the brain. However, signal-to-noise considerations limit the precision of this measurement. In practice, with each measurement the scanner detects the local oxygenation of the blood in little cubes of brain tissue. The cubes are known as ‘voxels’ and are typically a few millimetres on each side. Hence, the BOLD signal in each voxel is an estimate of the average metabolic demand by all the neurons within that voxel, about 10 million neurons. This measurement is repeated at intervals of 1–10 seconds, intervals known as ‘repetition times’ (TRs). This enables researchers to construct a time series that reports average metabolic activity in each voxel in the brain. A typical brain scan contains 50 000 voxels; that is, 50 000 measurement points. The BOLD signal is acquired in slices, typically between 20 and 40. Each slice contains several thousand voxels. All slices acquired in one TR are called a ‘volume’. A scan therefore comprises multiple volumes, one per TR, which in turn contain a set of slices, with thousands of voxels each. A volume allows the researcher to reconstruct three-dimensional images of brain activation (that is, BOLD signals in different parts of the brain). Neuroeconomic studies are usually based on an event-related design in which each stimulus is presented as an individual event, or trial. There are two critical considerations in designing the structure of an experiment. One is related to the shape of the haemodynamic response function and the other is related to the signal-to-noise ratio of MRI scans. 
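As a rough illustration of the haemodynamic response described above, the sketch below evaluates a double-gamma response function of the kind commonly used in fMRI analysis. The specific shape parameters (6 and 16) and the undershoot ratio (1/6) are illustrative assumptions, not values taken from the article; the curve rises after stimulus onset, peaks at roughly five to six seconds and then dips below baseline.

```python
# Minimal sketch of a canonical double-gamma haemodynamic response function (HRF).
# The shape parameters below are illustrative assumptions, not values from the article.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak_shape=6.0, undershoot_shape=16.0, undershoot_ratio=1.0 / 6.0):
    """Evaluate a double-gamma HRF at times t (in seconds)."""
    response = gamma.pdf(t, a=peak_shape)            # main response, peaking ~5-6 s after onset
    undershoot = gamma.pdf(t, a=undershoot_shape)    # slower post-stimulus undershoot
    hrf = response - undershoot_ratio * undershoot
    return hrf / np.abs(hrf).max()                   # scale to unit peak amplitude

t = np.arange(0.0, 20.0, 0.1)    # the response has largely decayed by about 20 s
hrf = canonical_hrf(t)
print(f"peak at about {t[hrf.argmax()]:.1f} s after stimulus onset")
```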
The BOLD signal measured by the scanner is the convolution of the neuronal activity researchers are interested in, with a 20-second-long ‘haemodynamic response function’ that approximately takes the form of a gamma function. The BOLD signal response sets in about two seconds after a given neural region is stimulated and peaks at around six seconds. If the interval between stimuli is sufficiently long (that is, greater than about 10 seconds), the haemodynamic response will decay to baseline after each stimulus. If stimuli are presented closer together, the haemodynamic responses of the different stimuli have to be separated using special analysis procedures. There is, though, a lower limit on the time interval between two stimuli, below which it becomes very difficult, if not impossible, to distinguish the neural activities of the different events in the BOLD signal measurement. Hence, the spacing of events of interest in a trial is critical. Another consideration is related to the low signal-to-noise ratio of the BOLD signal. Functional magnetic resonance imaging data have a very low signal-to-noise ratio. First, the measured BOLD signal is very small compared with the total intensity of the MRI signal. Second, the task-related change in the BOLD signal is very small compared with other sources of spatial or temporal variability across and within images. In order to get a reliable signal, researchers typically administer a relatively large number of trials to each participant, repeating the same stimulus multiple times. The above considerations, combined with the high cost of scanning, significantly constrain the design of an fMRI experiment and represent one of the biggest challenges in neuroeconomic studies. Stimuli are typically administered through a pair of goggles or through a screen at the back of the scanner that the participant can view through a set of mirrors. The participant usually responds by pressing a button on a small console. An fMRI dataset is a large panel that contains a time series of measurements for each of the voxels and each of the participants. An experimental session typically lasts between 20 minutes and 2 hours per participant and measurements are taken at one-to-two second intervals. Most fMRI studies have a sample size (number of participants) of between 10 and 30. With scanning costs coming down, the size of datasets of fMRI studies is likely to increase in the future. In addition to functional data (that is, brain activation data acquired while the participants perform an experimental task), researchers typically also acquire a set of high-resolution anatomical images of the brain that are used to precisely locate brain regions. Researchers typically also acquire a set of behavioural data that includes characteristics of stimuli and responses. Behavioural data are usually analysed with standard econometric techniques. In this section, we will sketch the analysis of fMRI data.3 In order to minimise variability in the BOLD signal that is unrelated to the stimuli administered, the data acquired are preprocessed to compensate for specific sources of variability. Preprocessing can be considered separately from other aspects of data analysis because it is generally applied to all fMRI data in a similar manner, independent of the experimental design. The first few images of every scan are typically discarded to avoid saturation effects. 
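To make the convolution step concrete, the sketch below builds a predicted BOLD time course by convolving a stick function of hypothetical stimulus onsets with a double-gamma response function and sampling the result once per TR. The onset times, TR and run length are made-up values used only for illustration.

```python
# Minimal sketch: predicted BOLD regressor from hypothetical stimulus onsets.
# Onset times, TR and run length are made-up values for illustration only.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                  # fine time grid for the convolution (seconds)
tr = 2.0                                  # repetition time (seconds)
run_length = 120.0                        # hypothetical run length (seconds)
onsets = [10.0, 40.0, 70.0, 100.0]        # hypothetical stimulus onset times (seconds)

# Stick (impulse) function marking each stimulus event on the fine grid.
n = int(run_length / dt)
neural = np.zeros(n)
for onset in onsets:
    neural[int(onset / dt)] = 1.0

# Double-gamma HRF, truncated at 20 seconds (an assumed canonical shape).
t_hrf = np.arange(0.0, 20.0, dt)
hrf = gamma.pdf(t_hrf, a=6) - gamma.pdf(t_hrf, a=16) / 6.0

# Convolve, truncate to the run length and sample once per TR, as the scanner would.
predicted = np.convolve(neural, hrf)[:n]
regressor = predicted[::int(tr / dt)]     # one value per volume
print(regressor.shape)                    # (60,) volumes in this example
```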
In the first step, data are corrected for slight differences in the timing of data acquisition so that each data point reflects a measurement at exactly the same time point within one TR (volume). Second, data are corrected for head motion during data acquisition. The spatial and temporal corrections just described ensure that each voxel contains data from a single brain region, as sampled at regular intervals throughout the entire time series. Functional data are then mapped onto high-resolution and high-contrast structural (anatomical) images. This mapping is achieved by co-registration algorithms linking the functional images to high-resolution structural images from the same participant. Furthermore, in order to be able to compare data across participants, images of each participant's brain must be transformed so that they are all the same size and shape. This process is called ‘normalisation’. Most studies use the BOLD signal change as the primary dependent variable. Hypotheses will relate manipulations of one or more independent variables to changes in the BOLD signal. An experiment will contain at least two conditions, an experimental condition and a control condition, that will differ in the values of the independent variables. Following the onset of brief neuronal activity, the haemodynamic response function takes about six seconds to rise to its maximum. After the cessation of neural activity, it falls over a period of 5–10 seconds and then stabilises at a below-baseline level for an extended interval. The consistency of this pattern allows the researcher to predict the change in fMRI activation that should be evoked in an active voxel. With the help of standard correlation analysis, the researcher can quantify how well the observed data match a canonical haemodynamic response. This allows the researcher to identify voxels in which the fMRI time course reflects underlying neuronal activity. In this type of analysis, statistical tests are conducted on each voxel to evaluate its significance relative to the experimental hypothesis. Voxelwise analysis is typically based on a general linear model (GLM). The data are represented as a two-dimensional data matrix. The spatial structure of the data is not used in voxelwise analysis since the values of the parameter weights and the error term are calculated independently for each voxel. Instead, all voxels in a volume are arranged along one dimension. A design matrix specifies the linear model to be evaluated. The parameter matrix indicates the beta values for a given voxel. The regressors in the design matrix represent the hypothesised contributors to the fMRI time course. In the GLM, regressors associated with specific hypotheses are known as ‘experimental regressors’. In neuroeconomics, researchers will often use characteristics of stimuli, like payoff magnitude or probabilities, or related economic concepts, like expected values or variance, as covariates. Experimental conditions may be coded as indicators. In addition to regressors of interest, the design matrix often includes additional regressors associated with known non-experimental sources of variability (‘nuisance regressors’). It is important that the predictions of BOLD signal activation in the design matrix accurately reflect changes in the brain associated with the experimental manipulations. Most importantly, the design matrix should take into account the shape of the haemodynamic response following the stimulation of neuronal activity. 
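The voxelwise GLM described above can be sketched as follows. Both the data and the design matrix are simulated here purely for illustration; in a real analysis the data come from a preprocessed scan, the task regressor is the HRF-convolved prediction from the previous sketch, and researchers typically rely on dedicated packages (for example, SPM, FSL or nilearn) rather than hand-rolled least squares.

```python
# Minimal sketch: voxelwise general linear model estimated by ordinary least squares.
# The data, regressors and effect sizes below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_volumes, n_voxels = 200, 5000           # made-up scan dimensions

# Design matrix: an experimental regressor (in practice the HRF-convolved prediction),
# a slow linear drift as a nuisance regressor, and an intercept.
task = rng.standard_normal(n_volumes)     # placeholder for the HRF-convolved regressor
drift = np.linspace(-1.0, 1.0, n_volumes) # nuisance: scanner drift
X = np.column_stack([task, drift, np.ones(n_volumes)])

# Simulated BOLD data, arranged as a two-dimensional matrix of volumes x voxels.
Y = rng.standard_normal((n_volumes, n_voxels))
Y[:, :100] += 0.5 * task[:, None]         # the first 100 voxels 'respond' to the task

# One least-squares fit per voxel, computed for all voxels at once.
betas, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
print(betas.shape)                        # (3, 5000): one beta per regressor per voxel
```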
Researchers typically create a design matrix by convolving predicted neuronal activity with a standard haemodynamic response function. The parameters provide an estimate of the relative signal amplitude evoked by the experimental manipulation; that is, the size of the response in that voxel. To obtain a statistic, the value of the parameter is divided by the residual error. Under the null hypothesis, this quantity follows an F-distribution. When analysing fMRI data, researchers want to evaluate hypotheses about brain function. As fMRI provides no information about absolute levels of activation, only about changes in activation between two conditions, most research hypotheses involve a comparison of activation between two conditions. The statistical evaluation of whether the experimental manipulation evokes a significant change in activation is called a ‘contrast’. To test an experimental hypothesis, the researcher evaluates whether the experimental manipulation caused a significant change in the parameter weights of the GLM model. The form of the hypothesis determines the form of the contrast, or which parameter weight contributes to the test statistic. A contrast evaluates whether a set of regressors causes significant changes in the BOLD signal. The parameter weights are multiplied by the chosen contrast weights. The resulting quantity is scaled by the residual error and the scaled value is then evaluated (usually with a t-test) against a null hypothesis of zero. These single-condition contrasts are often referred to as ‘main effects’ of a condition. Sometimes, research hypotheses involve combining several different contrasts by using an F-test. One of the central, and most obvious, problems of fMRI data analysis is that of multiple comparisons. Given that researchers run regression models on tens of thousands of voxels, the probability of having no false-positives in an fMRI dataset, at the confidence levels typically employed, is close to zero. Therefore, researchers correct their results for multiple comparisons. A common approach to the correction of multiple comparisons involves minimising the number of false-positive results by controlling the family-wise error rate. The method typically employed is Bonferroni correction, which holds constant the overall probability of a false-positive, given the number of statistical tests conducted. To implement Bonferroni correction, the significance threshold, or alpha-value, is decreased proportionately to the number of independent statistical tests. Another approach to the correction of multiple comparisons is to use information about the sizes of any active clusters of voxels. Using cluster-size thresholding, a researcher adopts a relatively liberal alpha-value for voxelwise comparisons and then increases the conservatism of the test by only counting clusters as significant if they are at least as large as some threshold. Voxelwise analysis is appropriate for finding out which brain regions are part of a pattern of fMRI activation. If one is interested in what pattern of activation occurs in a particular brain region, region-of-interest (ROI) analysis is more appropriate. The establishment of a ROI is usually based on an a priori expectation about the likely involvement of that brain region in a task. If anatomical ROIs are chosen beforehand, then they can provide an unbiased estimate of activation within a given brain area. Such analysis has several advantages over voxelwise methods. 
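The effect of correcting for multiple comparisons can be illustrated with simulated voxelwise t-values. The voxel count, degrees of freedom and effect sizes below are assumptions chosen only to show how much more conservative a Bonferroni threshold is than an uncorrected one.

```python
# Minimal sketch: uncorrected versus Bonferroni-corrected voxelwise thresholds.
# The t-values, voxel count and degrees of freedom are simulated assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, dof = 50_000, 198                     # roughly the voxel count mentioned in the text
t_values = rng.standard_t(dof, size=n_voxels)   # null voxels
t_values[:100] += 4.0                           # a few truly active voxels

alpha = 0.05
t_uncorrected = stats.t.ppf(1 - alpha, dof)
t_bonferroni = stats.t.ppf(1 - alpha / n_voxels, dof)   # alpha divided by the number of tests

print(f"uncorrected (t > {t_uncorrected:.2f}): {int((t_values > t_uncorrected).sum())} voxels")
print(f"Bonferroni  (t > {t_bonferroni:.2f}): {int((t_values > t_bonferroni).sum())} voxels")
```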
Most importantly, because there are always far fewer ROIs than voxels, the total number of statistical tests is greatly reduced, which reduces the need for correction for multiple comparisons. Second, since each ROI combines data from many voxels, there will be a corresponding increase in the signal-to-noise ratio, to the extent that the ROI is functionally homogeneous. So far, we have focused on the issue of identifying areas of activation within a single participant's brain. Yet, nearly all fMRI studies involve multiple participants. To analyse data at the group level, most fMRI studies use a multi-stage random-effects analysis. Since the combination of data from multiple participants almost always treats independent variables as having fixed effects at earlier stages of analysis, random-effects analyses (at the inter-participant level) can also be considered mixed-effects analyses if information about the variability at the participant level is carried up to the inter-participant level. The analysis typically proceeds in three steps. In the first step, the researcher calculates summary statistics (for example, parameter estimates for regressors of interest) for the data from each run from each participant, independently. Then, in the second-level analysis, these statistics are combined from all runs performed with each participant, using a fixed-effects analysis. If a voxelwise approach is used, this creates a statistical map of the contrasts of interest for each participant. In the third level of analysis, the distribution of data from all the participants is itself tested for significance. As a rough approximation, this can be done using a t-test that evaluates whether the participants’ summary statistics are drawn from a distribution with a mean of zero. More powerful approaches incorporate variances from earlier levels to better estimate the true significance of the effects. Combining statistical tests from all voxels in the brain, researchers construct a statistical map of brain activation. These statistical maps are usually colour-coded according to the probability value associated with the t-value for each voxel. The maps are usually displayed on top of a base image that illustrates brain anatomy and they typically summarise the core results of a study. Researchers have at their disposal many other methods to analyse fMRI data to investigate, among other factors, the connectivity of brain regions or the structure of brain activation patterns. Techniques for connectivity analysis include structural equation modelling and dynamic causal modelling, which infers not just connectivity itself but also task-related changes in connectivity (Friston, Harrison and Penny 2003). Pattern analysis is typically performed using multi-voxel pattern classification (Hampton and O’Doherty 2007; Tusche, Bode and Haynes 2010). To date, neuroeconomic studies link behavioural, or economic, data with neurobiological data largely by correlating economic variables with brain activation. Such analyses allow researchers to infer which economic variables are represented in the brain, at what time during the decision process they are represented, and where in the brain they are represented. Researchers might also use economic variables to split their dataset, for example into easy and difficult decisions, and then compare brain activation in those subsets to investigate which brain networks are involved in the different conditions. 
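The third, group-level step of the random-effects analysis described above can be approximated by a one-sample t-test on per-participant contrast estimates. The sketch below uses simulated estimates for a single voxel or ROI; the sample size and effect size are made-up values.

```python
# Minimal sketch: group-level (random-effects) test on per-participant contrast estimates
# for a single voxel or ROI. The estimates below are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 20                                   # a typical fMRI sample size
# One summary statistic per participant (e.g. a contrast estimate from the
# lower-level, fixed-effects analysis of that participant's runs).
contrast_estimates = 0.3 + 0.5 * rng.standard_normal(n_participants)

# One-sample t-test against zero: is the effect reliable across participants?
t_stat, p_value = stats.ttest_1samp(contrast_estimates, popmean=0.0)
print(f"t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```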
To date, fMRI work has been pursuing three (not mutually exclusive) goals: (i) neuroanatomical localisation of cognitive processes; (ii) testing of theories of cognition; and (iii) testing of neural models (Coltheart 2009). It is probably fair to say that most existing studies in neuroeconomics focus on the first goal, although the aim clearly is to achieve the second and the third goals. At a practical level, fMRI analyses test whether a set of regressors is correlated with significant changes in the BOLD signal, with the aim of characterising the cognitive processes involved in the experimental condition. Many statistical methods of standard fMRI analyses are closely related to standard econometric methods and very similar issues arise when interpreting statistical results. In this section, we will focus on a few selected points that have been of particular concern in fMRI analyses. Results of fMRI analyses are often criticised on the basis that they are correlational and that it is impossible to infer that the activated regions are necessary or sufficient for the engagement of a given cognitive process; that is, the brain mechanism underlying behaviour in a given experiment. However, such criticism of the correlational nature of analyses often rests on an overly strict definition of hypothesis-testing. All hypotheses are based on the principle that the experimental manipulation causes changes in the dependent variable. However, the chain of causation does not have to be fully elaborated. Correlation is not equivalent to meaninglessness. When interpreting the results of standard fMRI analyses, researchers typically employ one of two types of inference: ‘forward’ inference and ‘reverse’ inference (Henson 2006; Poldrack 2006). Forward inference refers to the use of qualitatively different patterns of neuronal activity to distinguish between competing cognitive processes (Henson 2006). The idea rests on the assumption that a researcher can design experimental conditions that differ in the presence of a cognitive process according to one theory, but not another. If that is the case, then the observation of distinct patterns of neuronal activity associated with those conditions constitutes evidence in favour of the first theory. Sometimes, however, the reasoning takes the following form (Poldrack 2006): In the present study, when task comparison A was presented, brain area Z was active. In other studies, when cognitive process X was putatively engaged, then brain area Z was active. Thus, the activity of area Z in the present study demonstrates engagement of cognitive process X by task comparison A. Such a chain of arguments is called a reverse inference. It reasons from the presence of brain activation backwards to the engagement of a particular mental process. Without further auxiliary assumptions, the deduction is invalid. Reverse inference can nevertheless provide important information about brain processes. Using Bayesian approaches, for example, lets one characterise the factors that affect the quality of a reverse inference. In particular, confidence in a reverse inference can be improved by increasing the selectivity of the response in the brain ROI and by increasing the prior probability of the cognitive process in question. In this way, reverse inference can be used to at least generate new hypotheses (Poldrack 2006). 
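The Bayesian view of reverse inference sketched above can be made concrete with a simple application of Bayes' rule, in the spirit of Poldrack (2006). All of the probabilities below are made-up numbers; the point is only that the posterior probability of the cognitive process rises as the region's response becomes more selective and as the prior probability of the process increases.

```python
# Minimal sketch: Bayes' rule applied to reverse inference.
# All probabilities are made-up numbers used purely for illustration.

p_process = 0.5                 # prior probability that cognitive process X is engaged
p_act_given_process = 0.8       # P(area Z active | X engaged)
p_act_given_no_process = 0.2    # P(area Z active | X not engaged); lower values mean
                                # a more selective response in area Z

p_activation = (p_act_given_process * p_process
                + p_act_given_no_process * (1.0 - p_process))
p_process_given_activation = p_act_given_process * p_process / p_activation
print(f"P(process X engaged | area Z active) = {p_process_given_activation:.2f}")   # 0.80
```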
A related issue is the fact that many decision processes seem to involve a large number of brain regions simultaneously and ‘computations’ often seem to involve feed-forward processes across widely distributed regions. Examples include the ‘computation’ of value and the ‘generation’ of time in the brain. Furthermore, many neural processes act in the range of tens to hundreds of milliseconds, well below the temporal resolution of fMRI. Gaining a better understanding of the physical mechanism through which the perception of a stimulus causes a choice will require clever experimental set-ups, in combination with new types of data analysis, and potentially new types of data. Another concern with regard to the interpretation of fMRI results is the presence of confounding factors, particularly hidden causal factors. If there are only two experimental conditions and they differ in one property only, then any change in the dependent variable can be confidently attributed to the change in that property. Often, however, conditions differ in more than one way and there may be multiple explanations for experimental effects. This is often overlooked in the interpretation of fMRI results. As in any other experimental design, this problem is difficult, but possible, to master by selecting good experimental and control conditions. A final issue to highlight in the interpretation of fMRI analyses is the empirical validity and generalisability of experimental tasks. Many tasks are highly stylised and, in many cases, it is not clear how they generalise to behaviour outside the laboratory. A major challenge for neuroeconomists will be the development of experimental tasks that are predictive of important types of behaviour in the ‘real world’. It is early days for neuroeconomics. While the research program has not yet proposed an alternative theory of human decision-making, it has led to tremendous insights into human behaviour, including economic choices. It is too early, though, to render a judgement on the neuroeconomics research program. No doubt, any new model neuroeconomists propose will be evaluated relative to existing models of decision-making, including any of the models economics has proposed so far. A model can be evaluated in any number of ways. One of the criteria is parsimony. Another one is explanatory power, the degree to which the model explains the phenomenon of interest. I suppose that it will be the latter criterion, explanatory power of human behaviour inside and outside the laboratory, by which the success of the neuroeconomic research program will be judged. If it succeeds, we will have a better model of human decision-making and all the parent disciplines will benefit from it." @default.
- W2019990018 created "2016-06-24" @default.
- W2019990018 creator A5077035407 @default.
- W2019990018 date "2011-06-01" @default.
- W2019990018 modified "2023-10-15" @default.
- W2019990018 title "Neuroeconomics: Investigating the Neurobiology of Choice" @default.
- W2019990018 cites W1964859800 @default.
- W2019990018 cites W2025476272 @default.
- W2019990018 cites W2103388697 @default.
- W2019990018 cites W2151433332 @default.
- W2019990018 doi "https://doi.org/10.1111/j.1467-8462.2011.00638.x" @default.
- W2019990018 hasPublicationYear "2011" @default.
- W2019990018 type Work @default.
- W2019990018 sameAs 2019990018 @default.
- W2019990018 citedByCount "6" @default.
- W2019990018 countsByYear W20199900182012 @default.
- W2019990018 countsByYear W20199900182014 @default.
- W2019990018 countsByYear W20199900182015 @default.
- W2019990018 countsByYear W20199900182017 @default.
- W2019990018 countsByYear W20199900182020 @default.
- W2019990018 countsByYear W20199900182022 @default.
- W2019990018 crossrefType "journal-article" @default.
- W2019990018 hasAuthorship W2019990018A5077035407 @default.
- W2019990018 hasConcept C15744967 @default.
- W2019990018 hasConcept C169760540 @default.
- W2019990018 hasConcept C180747234 @default.
- W2019990018 hasConcept C188147891 @default.
- W2019990018 hasConcept C188660851 @default.
- W2019990018 hasConceptScore W2019990018C15744967 @default.
- W2019990018 hasConceptScore W2019990018C169760540 @default.
- W2019990018 hasConceptScore W2019990018C180747234 @default.
- W2019990018 hasConceptScore W2019990018C188147891 @default.
- W2019990018 hasConceptScore W2019990018C188660851 @default.
- W2019990018 hasIssue "2" @default.
- W2019990018 hasLocation W20199900181 @default.
- W2019990018 hasOpenAccess W2019990018 @default.
- W2019990018 hasPrimaryLocation W20199900181 @default.
- W2019990018 hasRelatedWork W1559548768 @default.
- W2019990018 hasRelatedWork W1971138776 @default.
- W2019990018 hasRelatedWork W2033415439 @default.
- W2019990018 hasRelatedWork W2059174210 @default.
- W2019990018 hasRelatedWork W2336787700 @default.
- W2019990018 hasRelatedWork W2735341844 @default.
- W2019990018 hasRelatedWork W2748952813 @default.
- W2019990018 hasRelatedWork W2899084033 @default.
- W2019990018 hasRelatedWork W4211181435 @default.
- W2019990018 hasRelatedWork W831272787 @default.
- W2019990018 hasVolume "44" @default.
- W2019990018 isParatext "false" @default.
- W2019990018 isRetracted "false" @default.
- W2019990018 magId "2019990018" @default.
- W2019990018 workType "article" @default.