Matches in SemOpenAlex for { <https://semopenalex.org/work/W4244280299> ?p ?o ?g. }
Showing items 1 to 53 of 53 with 100 items per page.
- W4244280299 endingPage "478" @default.
- W4244280299 startingPage "472" @default.
- W4244280299 abstract "Previous articleNext article FreeCommentAndrew CaplinAndrew CaplinNew York University and NBER Search for more articles by this author New York University and NBERPDFPDF PLUSFull Text Add to favoritesDownload CitationTrack CitationsPermissionsReprints Share onFacebookTwitterLinked InRedditEmailQR Code SectionsMoreIt is a privilege to discuss the vibrant field of survey measurement of probabilistic economic expectations in which Manski has played the essential pioneering role. Validations and other research developments related to these new measurements have been richly presented by Manski (2004), Hurd (2009), and now in Manski’s extensive article for the NBER Macroeconomics Annual. In my review, I take this material for granted and focus on some interesting facets of the intellectual background to this important field of research, stress the importance of improved measurement of beliefs, and present a very brief “best case” analysis of future developments. I focus on the importance of systematizing our understanding of “errors” in survey responses: I use the quote marks to indicate that this is generally hard to define, let alone to measure. The approach I suggest that has the most promise in this regard takes seriously how attentive survey respondents are to the questions posed.OriginsQualitative measurement of subjective beliefs has a long history in psychological research. In a typical survey question in this tradition, respondents place future events in such discrete categories as “possible,” “likely,” “unlikely,” and so forth. For social scientists engaged in quantitative research, these questions are fundamentally unsatisfactory. What do these scales mean to each respondent? If, as is sometimes the case, the respondent has something akin to a subjective probability of 80% in mind, is this possible, likely, very likely, or otherwise? Why would one expect different respondents to have the same subjective scales? Why would one expect these scales to be independent of context? Why not ask for quantitative answers to probability questions rather than insisting that they be boxed and camouflaged in some idiosyncratic and local manner?Given the ambiguities of the questions in the psychological tradition, it is hardly surprising that there was a move to quantification. Brier (1950) pioneered in the development of probabilistic weather forecasts to get beyond the ambiguity in claiming that it was “quite likely” to rain. In this case, the drive to quantification was statistical. By tracking past conditions and weather outcomes, the forecaster can directly check how often conditions analogous to those currently in force gave rise to rain in the past. This is naturally measured in proportionate terms, making the probabilistic statement a natural summary of the likely implications of current (and past) conditions for the weather tomorrow. Translating this into vague language has few obvious advantages.While the drive to quantification of probabilities in weather forecasting was pragmatic in nature, the corresponding drive in economics derived more from developments in economic theory. It was associated with advances in the theory of choice under uncertainty, in particular, expected utility theory. In fact, the early literature on belief measurement in experiments was reviewed by Savage (1971) in making his proposal for a “proper” scoring rule. 
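To make the statistical logic concrete, here is a minimal sketch of Brier's verification score: the mean squared distance between stated probabilities and realized 0/1 outcomes. The forecasts and outcomes below are hypothetical illustrations, not data from the article; the score is "proper" in Savage's sense, in that a forecaster minimizes its expected value by reporting true subjective probabilities.

    import numpy as np

    # Brier (1950) verification score for probabilistic forecasts.
    # Illustrative numbers, not from the article.
    forecasts = np.array([0.9, 0.7, 0.2, 0.5])  # stated chance of rain
    outcomes  = np.array([1,   1,   0,   1])    # 1 = it rained, 0 = it did not

    # Mean squared distance between stated probability and realization:
    # 0 is a perfect record; a constant 50% forecast scores 0.25.
    brier = np.mean((forecasts - outcomes) ** 2)
    print(f"Brier score: {brier:.3f}")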
With regard to survey measurement, Haavelmo was among the first to suggest the potential value of responses to quantitative questions on subjective beliefs concerning the values of future outcomes:

"It is my belief that if we can develop more explicit and a priori convincing economic models in terms of these variables … then ways and means can and will eventually be found to obtain actual measurements of such data." (Haavelmo 1958, 357)

It is interesting to note that Haavelmo suggested first developing "a priori convincing" models of belief formation before trying to obtain the corresponding measurements. Luckily, the profession has not followed the proposed sequential strategy. Indeed, it is hard to know in what sense, and how, we could become "convinced" by the a priori theories that we have developed to this point, including the Bayesian hypothesis and rational expectations. Rather than waiting for conviction to arrive, researchers wisely diversified their research portfolios by simultaneously developing methods for measuring subjective beliefs. This is the body of research that Manski surveys in his comprehensive paper.

The first to implement quantitative methods of belief measurement was Juster (1966), in the context of future car purchases. Juster considered how responses to traditional yes/no questions on purchase intentions (buy or not buy) were interpreted by respondents. He imagined that the response might reflect which outcome was perceived as more likely. But this limits predictive power: one would expect many who end up buying cars to have been ex ante less than 50% likely to do so, and others not to buy despite ex ante odds above 50%. Juster noted this problem with existing measures and concluded that it would be more predictive to let consumers indicate their purchase probabilities in more granular fashion. Even if not everyone has precise subjective beliefs, allowing more refined answers has the potential to reveal more features of the distribution of subjective likelihoods, and thereby to improve the quality of the signal that the survey provides the data analyst. So it turned out. Juster conducted a survey gathering just such data on the subjective likelihood of purchase and found that the resulting answers were indeed better predictors of purchase behavior than the yes/no responses to questions on purchase intentions.

The profession was very slow to internalize the importance of Juster's approach and its broad ramifications. One reason was skepticism about the value of such measurement in the broader profession. The traditional viewpoint among economists is that the legitimate subject of our field is choice behavior. In fact, the subjective expected utility theory of Savage even defines beliefs by the properties of idealized choice data. Absent incentives, survey responses are just cheap (and unmodeled) talk. Without a solid theory of what the answers to unincentivized survey questions mean, what is the point in posing them?

The problem with such a priori arguments against new forms of measurement is that they keep closed research doors that may be important to open. The scientifically correct way to deal with such arguments is therefore to ignore them. That is precisely what the profession in fact did, thanks to the far-sighted leadership of key surveys. The Health and Retirement Study (HRS) was particularly central.
Rather than one funeral at a time, much of the professional resistance to belief measurement was overcome one HRS wave at a time, for which credit is due to its founding fathers, Richard Suzman, Tom Juster, and Bob Willis. The first wave of the HRS was in 1992, and it is no coincidence that this literature took off at approximately the same time. It is only through the placement of expectations questions on all waves of the HRS and other panel surveys that their full value is coming to be appreciated. Corresponding questions are now posed in household panel surveys worldwide. As a result of the widespread adoption of quantitative probability measures, the qualitative approach has largely been replaced, even within psychometrics (see Budescu and Wallsten 1995).

Why Measure Beliefs Well, and How?

Manski and associates were key in designing, implementing, and testing the first truly custom-designed survey questions on subjective beliefs (Manski 1990; Dominitz and Manski 1997a, 1997b). These questions are designed around the beliefs that subjective expected utility theory places center stage in choice theory. Of equal importance is the fact that they were designed to allow application of estimation methods originally developed for standard random utility models of discrete choice. There is no imposed categorization at all. For a discrete event that may or may not occur, the typical question seeks a percentage chance response on the [0, 100] scale. When the question concerns the future value of some measurable quantity, such as the Dow Jones Index, the same format is used to elicit points on the distribution function, or the probability that the value will lie in some interval. By varying the threshold one can, in principle, get very close to recovering the full distribution of beliefs. It is this distribution that can then be used in subsequent analysis of choice behavior.
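As an illustration of how answers to threshold questions can approximate a full subjective distribution, here is a minimal sketch. The thresholds and percent chance responses are hypothetical, and the piecewise-linear interpolation is one simple choice among many.

    import numpy as np

    # Hypothetical elicited points on a subjective CDF: answers to
    # "What is the percent chance the index ends the year below t?"
    thresholds  = np.array([15000, 18000, 21000, 24000])
    answers_pct = np.array([5, 30, 70, 95])

    # Raw answers need not be monotone across questions; enforce it.
    cdf = np.maximum.accumulate(answers_pct / 100.0)

    def subjective_cdf(x):
        """Piecewise-linear interpolation through the elicited points."""
        return np.interp(x, thresholds, cdf, left=0.0, right=1.0)

    # Implied probability of an interval, usable in later choice analysis:
    p_mid = subjective_cdf(21000) - subjective_cdf(18000)
    print(f"P(18000 <= X <= 21000) = {p_mid:.2f}")

    # Implied median: first grid point where the interpolated CDF crosses 0.5.
    grid = np.linspace(15000, 24000, 1001)
    print(f"Implied median: {grid[np.searchsorted(subjective_cdf(grid), 0.5)]:.0f}")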
Given the backdrop of professional skepticism, it was important in the early days of the literature to validate (in broad terms) both the internal consistency and the external validity of these belief measurements. Many such validation exercises have now been conducted, and (for the most part) the verdict is positive, as detailed in Manski (2004). A few simple and central examples: Hurd and McGarry (1995, 2002) show that individuals and groups with higher subjective survival probabilities live longer, while Hurd and Rohwedder (2012) show that differences in expectations of future stock prices predict the direction of future stock purchases and sales.

While the literature clearly demonstrates the high information content of answers to probabilistic survey questions, their increasing use makes it important to identify sources of error. The best-studied error is the apparent overuse of the 50% focal answer. Efforts to model this are producing further innovations in measurement (Bruine de Bruin and Carman 2012; Manski and Molinari 2010). At the same time, there is awareness that details of the response interface impact responses. Numerical probability scales themselves are not universally loved or understood. Visual aids are being developed to present probabilistic constructs in as unambiguous a manner as possible (e.g., the "bins and balls" format of Delavande and Rohwedder [2008]). There are also many fine details to work out in terms of specifying the future variable of interest to an audience that is not trained in the professional jargon of our field and its acceptable ambiguities (GDP?). To reduce the scale of this challenge, cognitive interviews have been employed to great effect to improve the design of survey questions on inflation expectations (Armantier et al. 2013).

As measurement improves, richer theories can be tested. A virtuous circle is now being established in which advances in measurement are freeing researchers to measure with confidence how probabilistic beliefs change over time. This allows critical new questions related to updating to be answered. For example, Wiswall and Zafar (2015a, 2015b) use sequential surveys to understand how the provision of objective information on returns to schooling alters beliefs.

The methodological message is more general: the tasks of improving theory and improving measurement are better conducted in parallel than in series. We are at the very earliest days of this virtuous circle, and there is much cause for optimism about our ability to improve understanding of beliefs in coming decades as advances in theory and measurement are increasingly coordinated.

Modeling and Measuring Errors in Probabilistic Survey Responses

In the best case, future research will increasingly systematize our understanding of errors in survey responses and how they relate to behavior. In part, this will give us insight into systematic departures from the standard models of beliefs and updating. The profession (rightly, in my opinion) will not abandon Bayesian updating and rational expectations without solid evidence that alternative models better explain behavior. There are good reasons to doubt that standard behavioral data will be sufficiently compelling to launch alternative models with any confidence. However, if alternative models are supported by complementary data from surveys and from behavior, this will greatly enhance their credibility and their likelihood of survival in the intellectual marketplace.

Currently there are two key hurdles to the joint use of market and survey data to produce general-purpose and better empirically grounded models of beliefs. The first is theoretical and is a form of the Lucas critique. Models of beliefs must be suitable for policy analysis. Even if we infer belief dynamics in a given market from patterns in behavior and in survey responses, we do not know that these patterns will be robust to changes in policy and institutions. As in other fields, the need to understand counterfactual patterns drives a need to model why beliefs evolve as they do in a given setting, and how changes to that setting would impact the resulting pattern of beliefs. The second hurdle is the point noted above: we do not yet understand patterns of errors in survey responses. We must be very confident in the interpretation of answers before we treat them as strongly suggestive of new models of belief formation. Perhaps what we are uncovering is more the pattern of errors in responses to questions about subjective beliefs than actual features of behaviorally relevant belief dynamics.

In the best case that I envisage, we will make progress in both of the above areas in the upcoming years and decades. The reason is that economists are finally placing the modeling of attention center stage, in large part inspired by the work of Sims (1998, 2003). The rational inattention model that he introduced, which involves costs based on Shannon entropy, can be generalized in many ways (e.g., Caplin and Dean 2015; Caplin, Dean, and Leahy 2017). All that is really essential to recognize is that thinking is costly, so that we economize on its use.
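For concreteness, here is a minimal sketch of the Shannon-entropy rational inattention problem for two states and two actions, solved by the standard Blahut-Arimoto fixed-point iteration. The payoffs, prior, and attention cost below are illustrative assumptions, not parameters from any study cited here.

    import numpy as np

    # Payoff of each action (rows) in each state (columns); matching pays 1.
    U = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    prior = np.array([0.5, 0.5])  # prior over states
    lam = 0.5                     # marginal cost of attention (per nat)

    # Blahut-Arimoto: alternate between the logit-form conditional choice
    # probabilities P(a|w) proportional to P(a) * exp(U[a,w]/lam) and the
    # implied unconditional probabilities P(a) = sum_w prior(w) * P(a|w).
    p_a = np.full(2, 0.5)
    for _ in range(1000):
        weights = p_a[:, None] * np.exp(U / lam)
        p_a_given_w = weights / weights.sum(axis=0, keepdims=True)
        p_a_new = p_a_given_w @ prior
        if np.max(np.abs(p_a_new - p_a)) < 1e-12:
            break
        p_a = p_a_new

    print("P(action | state):")
    print(p_a_given_w)

As lam grows, the conditional choice probabilities collapse toward the prior (full inattention); as lam shrinks, they approach perfect state-matching. The same economizing logic extends to how much effort a respondent devotes to answering a survey question.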
This recognition can be very helpful in understanding patterns in the beliefs that we hold about the outside world: Why bother being well informed about the stock market if one does not have money to invest in it? It can be equally helpful in understanding survey responses, which may result from incomplete consideration of the corresponding reality. In years to come, we will increasingly model and measure patterns of errors in survey responses that are revealing of attentional effort. This will teach us a great deal about how to interpret responses, and about how best to use them in fitting models of the evolution of the beliefs that determine market outcomes.

Endnote

For acknowledgments, sources of research support, and disclosure of the author's material financial relationships, if any, please see http://www.nber.org/chapters/c13908.ack.

References

Armantier, O., W. Bruine de Bruin, S. Potter, G. Topa, W. van der Klaauw, and B. Zafar. 2013. "Measuring Inflation Expectations." Annual Review of Economics 5:273–301.
Brier, G. W. 1950. "Verification of Forecasts Expressed in Terms of Probability." Monthly Weather Review 78:1–3.
Bruine de Bruin, W., and K. G. Carman. 2012. "Measuring Risk Perceptions: What Does the Excessive Use of 50% Mean?" Medical Decision Making 32:232–36.
Budescu, D. V., and T. S. Wallsten. 1995. "Processing Linguistic Probabilities: General Principles and Empirical Evidence." In Decision Making from a Cognitive Perspective, Psychology of Learning and Motivation: Advances in Research and Theory, vol. 32, ed. J. Busemeyer, R. Hastie, and D. L. Medin, 275–318. Cambridge, MA: Academic Press.
Caplin, A., and M. Dean. 2015. "Revealed Preference, Rational Inattention, and Costly Information Acquisition." American Economic Review 105 (7): 2183–203.
Caplin, A., M. Dean, and J. Leahy. 2017. "Rationally Inattentive Behavior: Characterizing and Generalizing Shannon Entropy." NBER Working Paper no. 23652, Cambridge, MA.
Delavande, A., and S. Rohwedder. 2008. "Eliciting Subjective Probabilities in Internet Surveys." Public Opinion Quarterly 72 (5): 866–91.
Dominitz, J., and C. Manski. 1997a. "Perceptions of Economic Insecurity: Evidence from the Survey of Economic Expectations." Public Opinion Quarterly 61:261–87.
———. 1997b. "Using Expectations Data to Study Subjective Income Expectations." Journal of the American Statistical Association 92:855–67.
Haavelmo, T. 1958. "The Role of the Econometrician in the Advancement of Economic Theory." Econometrica 26 (3): 351–57.
Hurd, M. 2009. "Subjective Probabilities in Household Surveys." Annual Review of Economics 1:543–64.
Hurd, M., and K. McGarry. 1995. "Evaluation of the Subjective Probabilities of Survival in the Health and Retirement Study." Journal of Human Resources 30:S268–92.
———. 2002. "The Predictive Validity of Subjective Probabilities of Survival." Economic Journal 112:966–85.
Hurd, M., and S. Rohwedder. 2012. "Stock Price Expectations and Stock Trading." NBER Working Paper no. 17973, Cambridge, MA.
Juster, T. 1966. "Consumer Buying Intentions and Purchase Probability: An Experiment in Survey Design." Journal of the American Statistical Association 61:658–96.
Manski, C. 1990. "The Use of Intentions Data to Predict Behavior: A Best-Case Analysis." Journal of the American Statistical Association 85 (412): 934–40.
———. 2004. "Measuring Expectations." Econometrica 72:1329–76.
Manski, C., and F. Molinari. 2010. "Rounding Probabilistic Expectations in Surveys." Journal of Business and Economic Statistics 28:219–31.
Savage, L. J. 1971. "Elicitation of Personal Probabilities and Expectations." Journal of the American Statistical Association 66 (336): 783–801.
Sims, C. A. 1998. "Stickiness." Carnegie-Rochester Conference Series on Public Policy 49 (1): 317–56.
———. 2003. "Implications of Rational Inattention." Journal of Monetary Economics 50 (3): 665–90.
Wiswall, M., and B. Zafar. 2015a. "Determinants of College Major Choice: Identification Using an Information Experiment." Review of Economic Studies 82 (2): 791–824.
———. 2015b. "How Do College Students Respond to Public Information about Earnings?" Journal of Human Capital 9 (2): 117–69.

NBER Macroeconomics Annual, Volume 32, 2017. Sponsored by the National Bureau of Economic Research (NBER). DOI: https://doi.org/10.1086/696062. © 2018 by the National Bureau of Economic Research. All rights reserved." @default.
- W4244280299 created "2022-05-12" @default.
- W4244280299 creator A5058539967 @default.
- W4244280299 date "2018-04-01" @default.
- W4244280299 modified "2023-10-14" @default.
- W4244280299 title "Comment" @default.
- W4244280299 cites W1210092515 @default.
- W4244280299 cites W1507789815 @default.
- W4244280299 cites W1972190631 @default.
- W4244280299 cites W1982271591 @default.
- W4244280299 cites W1998274472 @default.
- W4244280299 cites W2026007934 @default.
- W4244280299 cites W2057763875 @default.
- W4244280299 cites W2073241381 @default.
- W4244280299 cites W2108915661 @default.
- W4244280299 cites W2117367485 @default.
- W4244280299 cites W2124482936 @default.
- W4244280299 cites W2135539075 @default.
- W4244280299 cites W2152904780 @default.
- W4244280299 cites W2742758214 @default.
- W4244280299 cites W3121397177 @default.
- W4244280299 cites W3123430860 @default.
- W4244280299 cites W3124038895 @default.
- W4244280299 cites W3125230084 @default.
- W4244280299 cites W4238337041 @default.
- W4244280299 cites W4252127527 @default.
- W4244280299 doi "https://doi.org/10.1086/696062" @default.
- W4244280299 hasPublicationYear "2018" @default.
- W4244280299 type Work @default.
- W4244280299 citedByCount "0" @default.
- W4244280299 crossrefType "journal-article" @default.
- W4244280299 hasAuthorship W4244280299A5058539967 @default.
- W4244280299 hasConcept C162324750 @default.
- W4244280299 hasConceptScore W4244280299C162324750 @default.
- W4244280299 hasLocation W42442802991 @default.
- W4244280299 hasOpenAccess W4244280299 @default.
- W4244280299 hasPrimaryLocation W42442802991 @default.
- W4244280299 hasRelatedWork W1502198272 @default.
- W4244280299 hasRelatedWork W1986173648 @default.
- W4244280299 hasRelatedWork W1998718379 @default.
- W4244280299 hasRelatedWork W2006758266 @default.
- W4244280299 hasRelatedWork W2017540542 @default.
- W4244280299 hasRelatedWork W2054677056 @default.
- W4244280299 hasRelatedWork W2061514737 @default.
- W4244280299 hasRelatedWork W2073254488 @default.
- W4244280299 hasRelatedWork W2084227502 @default.
- W4244280299 hasRelatedWork W2899084033 @default.
- W4244280299 hasVolume "32" @default.
- W4244280299 isParatext "false" @default.
- W4244280299 isRetracted "false" @default.
- W4244280299 workType "article" @default.