Matches in SemOpenAlex for { <https://semopenalex.org/work/W3123436326> ?p ?o ?g. }
- W3123436326 endingPage "C68" @default.
- W3123436326 startingPage "C1" @default.
- W3123436326 abstract "We revisit the classic semi‐parametric problem of inference on a low‐dimensional parameter θ₀ in the presence of high‐dimensional nuisance parameters η₀. We depart from the classical setting by allowing for η₀ to be so high‐dimensional that the traditional assumptions (e.g. Donsker properties) that limit complexity of the parameter space for this object break down. To estimate η₀, we consider the use of statistical or machine learning (ML) methods, which are particularly well suited to estimation in modern, very high‐dimensional cases. ML methods perform well by employing regularization to reduce variance and trading off regularization bias with overfitting in practice. However, both regularization bias and overfitting in estimating η₀ cause a heavy bias in estimators of θ₀ that are obtained by naively plugging ML estimators of η₀ into estimating equations for θ₀. This bias results in the naive estimator failing to be N^{-1/2} consistent, where N is the sample size. We show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ₀ can be removed by using two simple, yet critical, ingredients: (1) using Neyman‐orthogonal moments/scores that have reduced sensitivity with respect to nuisance parameters to estimate θ₀; (2) making use of cross‐fitting, which provides an efficient form of data‐splitting. We call the resulting set of methods double or debiased ML (DML). We verify that DML delivers point estimators that concentrate in an N^{-1/2}‐neighbourhood of the true parameter values and are approximately unbiased and normally distributed, which allows construction of valid confidence statements. 
The generic statistical theory of DML is elementary and simultaneously relies on only weak theoretical requirements, which will admit the use of a broad array of modern ML methods for estimating the nuisance parameters, such as random forests, lasso, ridge, deep neural nets, boosted trees, and various hybrids and ensembles of these methods. We illustrate the general theory by applying it to provide theoretical properties of the following: DML applied to learn the main regression parameter in a partially linear regression model; DML applied to learn the coefficient on an endogenous variable in a partially linear instrumental variables model; DML applied to learn the average treatment effect and the average treatment effect on the treated under unconfoundedness; DML applied to learn the local average treatment effect in an instrumental variables setting. In addition to these theoretical applications, we also illustrate the use of DML in three empirical examples." @default.
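The two ingredients named in the abstract (a Neyman-orthogonal score and cross-fitting) can be illustrated for the partially linear regression model Y = θ₀D + g₀(X) + U, D = m₀(X) + V. The sketch below is illustrative only and is not code from the work itself: it uses simulated data, a hand-rolled ridge regression as a stand-in for the ML nuisance learners, and the partialling-out score θ̂ = Σ V̂Ŵ / Σ V̂², where V̂ = D − m̂(X) and Ŵ = Y − ℓ̂(X) are residuals computed out-of-fold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated partially linear model: Y = theta0*D + g0(X) + U, D = m0(X) + V
# (illustrative data-generating process, not from the paper)
n, p, theta0 = 2000, 5, 0.5
X = rng.normal(size=(n, p))
D = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=n)
Y = theta0 * D + X @ np.array([0.0, 1.0, 1.0, 0.0, 0.5]) + rng.normal(size=n)

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """Hand-rolled ridge regression, standing in for any ML nuisance learner."""
    k = X_tr.shape[1]
    beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(k), X_tr.T @ y_tr)
    return X_te @ beta

def dml_plr(Y, D, X, n_folds=2):
    """DML for the partially linear model with cross-fitting."""
    n = len(Y)
    idx = rng.permutation(n)
    folds = np.array_split(idx, n_folds)
    V_hat, W_hat = np.empty(n), np.empty(n)
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Nuisances m_hat(X)=E[D|X] and l_hat(X)=E[Y|X] fit on auxiliary folds only
        V_hat[te] = D[te] - ridge_fit_predict(X[tr], D[tr], X[te])
        W_hat[te] = Y[te] - ridge_fit_predict(X[tr], Y[tr], X[te])
    # Neyman-orthogonal (partialling-out) estimator of theta0
    return float(V_hat @ W_hat / (V_hat @ V_hat))

theta_hat = dml_plr(Y, D, X)
```

Because the residualized score is insensitive to first-order errors in m̂ and ℓ̂, and cross-fitting removes the overfitting bias from reusing data, θ̂ concentrates near θ₀ at the N^{-1/2} rate even though each nuisance is estimated with a regularized learner.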
- W3123436326 created "2021-02-01" @default.
- W3123436326 creator A5015565298 @default.
- W3123436326 creator A5042942366 @default.
- W3123436326 creator A5050184772 @default.
- W3123436326 creator A5068383641 @default.
- W3123436326 creator A5079863455 @default.
- W3123436326 creator A5086970490 @default.
- W3123436326 creator A5091025078 @default.
- W3123436326 date "2018-01-16" @default.
- W3123436326 modified "2023-10-16" @default.
- W3123436326 title "Double/debiased machine learning for treatment and structural parameters" @default.
- W3123436326 cites W1460189015 @default.
- W3123436326 cites W1520595697 @default.
- W3123436326 cites W1730512236 @default.
- W3123436326 cites W1886275324 @default.
- W3123436326 cites W1963874316 @default.
- W3123436326 cites W1977675844 @default.
- W3123436326 cites W1981809679 @default.
- W3123436326 cites W1984547044 @default.
- W3123436326 cites W1988734172 @default.
- W3123436326 cites W2000008805 @default.
- W3123436326 cites W2014373672 @default.
- W3123436326 cites W2018543390 @default.
- W3123436326 cites W2022450888 @default.
- W3123436326 cites W2028995298 @default.
- W3123436326 cites W2045640115 @default.
- W3123436326 cites W2046977371 @default.
- W3123436326 cites W2051507405 @default.
- W3123436326 cites W2058248640 @default.
- W3123436326 cites W2062579211 @default.
- W3123436326 cites W2069119359 @default.
- W3123436326 cites W2074523758 @default.
- W3123436326 cites W2078658856 @default.
- W3123436326 cites W2079597004 @default.
- W3123436326 cites W2099730522 @default.
- W3123436326 cites W2100532505 @default.
- W3123436326 cites W2101746104 @default.
- W3123436326 cites W2104780889 @default.
- W3123436326 cites W2111162388 @default.
- W3123436326 cites W2116581043 @default.
- W3123436326 cites W2120846249 @default.
- W3123436326 cites W2129086422 @default.
- W3123436326 cites W2143966988 @default.
- W3123436326 cites W2148119495 @default.
- W3123436326 cites W2148596757 @default.
- W3123436326 cites W2150291618 @default.
- W3123436326 cites W2152001792 @default.
- W3123436326 cites W2163162137 @default.
- W3123436326 cites W2171443468 @default.
- W3123436326 cites W2244523147 @default.
- W3123436326 cites W2308122693 @default.
- W3123436326 cites W2398799638 @default.
- W3123436326 cites W2765398882 @default.
- W3123436326 cites W2886798606 @default.
- W3123436326 cites W2949148940 @default.
- W3123436326 cites W2950845368 @default.
- W3123436326 cites W2952150717 @default.
- W3123436326 cites W2952248799 @default.
- W3123436326 cites W2963278901 @default.
- W3123436326 cites W3098302339 @default.
- W3123436326 cites W3099550161 @default.
- W3123436326 cites W3103221895 @default.
- W3123436326 cites W3121832289 @default.
- W3123436326 cites W3122193054 @default.
- W3123436326 cites W3123785084 @default.
- W3123436326 cites W3124166904 @default.
- W3123436326 cites W3125188740 @default.
- W3123436326 cites W3125441349 @default.
- W3123436326 cites W3150893739 @default.
- W3123436326 cites W4233056867 @default.
- W3123436326 cites W4235461144 @default.
- W3123436326 cites W4247571494 @default.
- W3123436326 cites W4248240383 @default.
- W3123436326 cites W67506904 @default.
- W3123436326 cites W81897278 @default.
- W3123436326 doi "https://doi.org/10.1111/ectj.12097" @default.
- W3123436326 hasPublicationYear "2018" @default.
- W3123436326 type Work @default.
- W3123436326 sameAs 3123436326 @default.
- W3123436326 citedByCount "558" @default.
- W3123436326 countsByYear W31234363262016 @default.
- W3123436326 countsByYear W31234363262017 @default.
- W3123436326 countsByYear W31234363262018 @default.
- W3123436326 countsByYear W31234363262019 @default.
- W3123436326 countsByYear W31234363262020 @default.
- W3123436326 countsByYear W31234363262021 @default.
- W3123436326 countsByYear W31234363262022 @default.
- W3123436326 countsByYear W31234363262023 @default.
- W3123436326 crossrefType "journal-article" @default.
- W3123436326 hasAuthorship W3123436326A5015565298 @default.
- W3123436326 hasAuthorship W3123436326A5042942366 @default.
- W3123436326 hasAuthorship W3123436326A5050184772 @default.
- W3123436326 hasAuthorship W3123436326A5068383641 @default.
- W3123436326 hasAuthorship W3123436326A5079863455 @default.
- W3123436326 hasAuthorship W3123436326A5086970490 @default.
- W3123436326 hasAuthorship W3123436326A5091025078 @default.
- W3123436326 hasBestOaLocation W31234363261 @default.