Matches in SemOpenAlex for { <https://semopenalex.org/work/W2617763559> ?p ?o ?g. }
- W2617763559 endingPage "4406" @default.
- W2617763559 startingPage "4403" @default.
- W2617763559 abstract "One of the major roles of regulatory bodies is to enforce rules and thus maintain standards. They also often do research related to their missions, some of which might be used to establish the standards they are regulating and how they should be evaluated. This has led some to believe that, due to potential concerns of bias and conflicts of interest, regulatory bodies should not do evaluation methodology research related to their regulatory missions. This is the claim that is debated in this month's Point/Counterpoint. Arguing for the Proposition is Dev P. Chakraborty, Ph.D. Dr. Chakraborty earned his Ph.D. in solid state physics from the University of Rochester, New York in 1977 then, in 1979, began his career in medical physics working with Ivan Brezovich in the Department of Radiology, University of Alabama at Birmingham, AL, where he worked until 1988 before moving to the Department of Radiology, University of Pennsylvania, Philadelphia. He subsequently moved to the University of Pittsburgh, Pittsburgh, PA, in 1997, where he was Professor in the Department of Bioengineering before assuming his current position at ExpertCAD Analytics, LLC in 2016. He has published over 75 papers in peer-reviewed journals, many in the field of observer performance analysis. Arguing against the Proposition is Robert M. Nishikawa, Ph.D. Dr. Nishikawa received his B.Sc. in physics in 1981 and his M.Sc. and Ph.D. in Medical Biophysics in 1984 and 1990, respectively, all from the University of Toronto. While at the University of Chicago, he developed computer-aided diagnosis systems for classifying and detecting clustered calcifications in mammograms. He has seven patents on CAD-related technologies and has over 200 publications in breast imaging. He is currently a Professor and Director of the Clinical Translational Medical Physics Laboratory in the Department of Radiology at the University of Pittsburgh. He has won 24 awards including two for “best” paper, two innovation awards, and one teaching award. He is a fellow of the American Association of Physicists in Medicine, the Society of Breast Imaging, the College of American Institute for Medical and Biological Engineering, and a Distinguished Investigator, Academy of Radiology Research. His research interests are in computer-aided diagnosis, breast imaging, image quality assessment, and evaluation of medical technologies. The Food and Drug Administration (FDA) and the Center for Devices and Radiological Health (CDRH) both regulate imaging devices and claim leadership roles in how they are evaluated. To demonstrate that the CDRH leadership in imaging device evaluation research biases research in this area and results in suboptimal evaluation of new imaging devices, I will present a single extended example. CDRH scientists are leading proponents of FROC/ROC1, 2 methods for analyzing observer outcome studies. An alternative and often more efficacious approach is the JAFROC method3 pioneered in my laboratory. Does a computer-aided detection (CAD) manufacturer adopt evaluation methods developed by Chakraborty3 or does the manufacturer feel pressure to adopt the FDA's methods?1, 2 Chakraborty's methods/software (JAFROC) have been used in over 104 publications, but only 24 are from the US and none from the FDA. The chances that this low number is a fluke are astronomically small, especially given the much larger total numbers of published US studies relative to non-US studies. 
This is strong evidence the FDA has influenced US-researchers against using JAFROC. Most clinical trials, including the American College of Radiology Imaging Network (ACRIN) Digital Mammographic Imaging Screening Trial (DMIST),4 have used the lower power ROC paradigm for localization tasks, which is inappropriate and unethical:5 lower power means the study is either of dubious value or it is overly expensive. The location-specific method favored by the FDA1, 2 is based on the FROC curve: one can hardly do worse. FROC data consist of mark-rating pairs; marks are locations of suspicious regions and the rating is the associated confidence level. Based on a proximity criterion, a mark close to a lesion is scored as lesion localization (LL) and otherwise, it is non-lesion localization (NL). Lesion localization fraction (LLF) is defined as the number of LLs ≥ threshold divided by the total number of lesions. The non-lesion localization fraction (NLF) is the number of NLs ≥ threshold rating divided by the total number of images. The FROC curve (plot of LLF (ordinate) vs. NLF) rises with infinite slope from (0,0). The slope then decreases monotonically and the curve ends abruptly at an unpredictable point. The FROC is not contained within the unit square. This makes it impossible to define a meaningful area measure. The FROC is defined by marks: unmarked nondiseased cases, which represent perfect decisions, do not contribute to the area under the curve (AUC) under the FROC. In screening mammography, about 995 cases out of 1000 are nondiseased. The perfect radiologist, who marks all lesions and does not mark any nondiseased case, yields zero FROC AUC, receiving no credit for the 995 correct decisions. JAFROC is based on the AFROC (alternative-FROC) curve. The y-axis is similar to LLF, but the x-axis is the ROC false-positive fraction defined by the highest ratings on nondiseased cases, and the AFROC plot includes a connection from the uppermost operating point to (1,1). Unlike the FROC AUC, the AFROC AUC for the perfect observer is unity, not zero. JAFROC is ignored in FDA's Guidance Document,2 as are positive statements about JAFROC from the late Drs. Wagner and Metz,6 and there is not one reference to Chakraborty's work. The FDA's bias has doomed progress in breast cancer CAD (40,000 deaths/yr). Besides using incorrect FROC methodology, it has set a low (second reader) bar for CAD to be considered a “success”. The end result: massive clinical trials7 have shown that CAD is actually detrimental to the outcome and there has been a call to end CAD Medicare reimbursement.8 Regulation is necessary to balance the costs and benefits of implementing a product or activity. This raises two important issues. First, it is important to quantify costs and benefits accurately. Second, it is equally important for impartiality to acquire correct balances. The proposition directly addresses the second issue, but the first issue is necessary to discuss also. I will restrict my discussion to medical imaging devices for clarity. There are many well-established methods to determine the benefits of medical imaging devices.9 There are, however, situations where researchers need new evaluation methods, either for a new technology or to simplify tests for an existing type of technology. This requires research to develop and validate the new methodologies. The regulatory agencies need to understand the strengths and weaknesses of any tests presented to them as evidence for the effectiveness of a product. 
This would require regulatory agencies to either develop the expertise in-house or to rely on the scientific literature. That latter is insufficient for two reasons. First, regulatory science is not a well-funded branch of science. Therefore, unless the regulatory bodies perform the research, a disconnect may occur between developing the technologies and measuring their benefits and costs. This will either slow down approval of new technologies or lead to unbalanced regulations, or both. Second, reviewing the literature may be effective in understanding the basics of the evaluation methodology, but it is usually insufficient to understand the limitations of the method. Understanding the limitations is best done by applying the method, using simulations to a variety of situations, and evaluating the results. That is basically research and regulatory bodies benefit from conducting the studies themselves. While we can quantify benefits and costs, it is often difficult to decide on the proper balance of the two, particularly in an unbiased manner. Part of the difficulty arises from benefit and cost estimates not having the same units. A prime example of this, while not exactly in the regulatory domain, is the United States Preventative Services Task Force (USPSTF) recommendations on mammographic screening.10 We can evaluate the benefits of screening as lower mortality from breast cancer and costs as false-positive screens — recalling a woman for further imaging when, in fact, she does not have a breast cancer. It is not clear how to balance lives saved against more imaging and potentially an unnecessary biopsy. The USPSTF placed more weight on false-positive screens and chose not to recommend periodic screening for all women under the age of 50, compared to, for example, the American College of Radiology which supports annual screening of women 40 and older.11 Some proponents of screening argue that the USPSTF was biased in making their recommendations.12 There is no clear solution for this potential bias, but I do not believe that researching evaluation methodology is the right place to start. On the contrary, I believe there is less potential for bias when people are more knowledgeable — unless they are predisposed to a bias to begin with. Which is to say a bias can exist whether knowledge is obtained first hand or from reviewing the literature. I agree with my colleague that the FDA/CDRH needs to be current on the science. If regulatory science is not a well-funded branch of science, that makes it even more important to be current on the existing science, both from a revered in-house predecessor6 and from academia.3 I also agree that there is need for developing new evaluation methods, but then why is the new FDA/CRDH still wedded to the 1940s ROC paradigm; what is new about it? The “mechanistic” approach13 that they are enamored with does not advance the state-of-the-art in general-paradigm multireader multicase (MRMC) analysis, rather it explains and generalizes the variance-component decomposition used in Dorfman/Berbaum/Metz analysis14 in a mathematically appealing way. But, and this is the serious limitation, it applies only to the Wilcoxon ROC statistic; it is not even applicable to fitted ROC curves, let alone FROC methodology. In my Opening Statement, I cited the “power” imbalance when it comes to reviewing/vetting the work of the FDA/CDRH, and examples of questionable work. I could go on, especially how they validate methodologies. 
It is a brave and knowledgeable researcher who can properly review a paper15 listing as institution of origin: “NIBIB/CDRH Laboratory for the Assessment of Medical Imaging Systems”. Any applicant for an NIH grant in methodology development, and I see there is a recent funding opportunity announcement (PAR-17-125), would be well advised to cite this paper, never mind that it is about ROC analysis, while CAD provides FROC data, so at the very least the title of the paper is misleading. The cited work remains true to model observer philosophy, which assumes the lesion location is known, ignoring the fact that if location were known, there would be no need for a radiologist to find it. This entire debate would be of academic interest, but it was not for the implications for patient care: lives literally depend on the selection of proper imaging technology. Conducting ROC studies for search tasks is not only bad science but it is also unethical and a disservice to patients and taxpayers. My colleague Dev Chakraborty argues, I believe because it is not explicitly stated that the FDA, but principally the CDRH, is biased because it “forces” companies to use ROC analysis instead of JAFROC analysis, which Dev developed; and that this bias exists because members of the CDRH have done ROC research, but not FROC research. That is an interesting premise. Dev supports his assertion with statistics that are consistent with his view, but it does not constitute proof. Here is my prospective on Dev's claim of bias. First, I know many of the people at the CDRH. In my view, they are among the leaders in the field, both in terms of their scientific rigor and in their vision. The CDRH has a long history of significant and cutting edge research and establishing methodology for evaluating screen-film systems, digital systems, computer-aided diagnosis systems, ultrasound, and others. I have not seen signs of bias in my interactions with members of the CDRH. Certainly, the members have preferences, but they remain open-minded and fair. It is important to note that just as there are differences in approach between scientists in academia and industry, there are differences between scientists in the public service sector and academia (and industry). Scientists in the public are much more open to sharing data and ideas. Second, companies applying for FDA approval are, in my experience working with them, very conservative in their approach, and they basically follow any FDA precedent or previous approved applications. This is because the approval process can be time-consuming and expensive. Companies usually overpower their observer studies to include more readers and cases than what is required by an 80% power calculation. They do not want to risk having a null result because the observer study was underpowered. Furthermore, and more importantly, it is much easier and less risky just to copy a previously approved application. This will result in the same methods being perpetuated over time. So, when a company develops a new method, even if there are some benefits to it over existing techniques, they are less likely to use the new method in FDA submissions. This is the company's choice, not an FDA edict. So, while Dr. Chakraborty has presented evidence, it is all circumstantial and, until he produces a “smoking gun”, I believe that his assertion of bias at the CDRH is false. The authors have no relevant conflicts of interest to disclose." @default.
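The abstract above defines the FROC operating points (LLF, NLF), the AFROC figure of merit, and mentions the Wilcoxon ROC statistic. The sketch below illustrates those quantities on toy data; it is a minimal illustration only. The function names, the example ratings, and the convention of representing unmarked lesions and unmarked nondiseased cases by a rating of -inf are assumptions made here for clarity, not Chakraborty's JAFROC software or the FDA's MRMC tools.

```python
# Hypothetical sketch of the quantities defined in the abstract. Assumes the
# proximity criterion has already been applied, i.e., each mark is already
# classified as a lesion localization (LL) or a non-lesion localization (NL).
import numpy as np

def froc_points(ll_ratings, nl_ratings, n_lesions, n_images):
    """FROC operating points: at each threshold t,
    LLF = #(LL ratings >= t) / n_lesions and NLF = #(NL ratings >= t) / n_images."""
    ll = np.asarray(ll_ratings, float)
    nl = np.asarray(nl_ratings, float)
    thresholds = np.unique(np.concatenate([ll, nl]))[::-1]  # high to low
    llf = np.array([(ll >= t).sum() / n_lesions for t in thresholds])
    nlf = np.array([(nl >= t).sum() / n_images for t in thresholds])
    return nlf, llf  # abscissa, ordinate

def wilcoxon_auc(diseased, nondiseased):
    """Empirical ROC AUC via the Wilcoxon statistic:
    P(diseased rating > nondiseased rating) + 0.5 * P(tie)."""
    d = np.asarray(diseased, float)[:, None]
    n = np.asarray(nondiseased, float)[None, :]
    return (d > n).mean() + 0.5 * (d == n).mean()

def afroc_fom(ll_ratings, n_lesions, fp_per_nondiseased_case):
    """AFROC-style figure of merit: each lesion's rating is compared with the
    highest NL rating on each nondiseased case. Unmarked lesions and unmarked
    nondiseased cases enter as -inf, so the perfect observer scores 1.0."""
    ll = np.full(n_lesions, -np.inf)
    ll[:len(ll_ratings)] = ll_ratings  # lesions left unmarked stay at -inf
    return wilcoxon_auc(ll, fp_per_nondiseased_case)

# Toy example: 4 lesions (3 marked) and 5 nondiseased cases, of which 2 carry
# a false mark and 3 are unmarked (entering with rating -inf).
ll = [0.9, 0.8, 0.4]
fp = [0.7, 0.3, -np.inf, -np.inf, -np.inf]
print(afroc_fom(ll, 4, fp))                                # 0.775
print(froc_points(ll, [0.7, 0.3], n_lesions=4, n_images=5))
# A perfect observer (all lesions marked, no false marks) scores unity:
print(afroc_fom([1.0, 1.0, 1.0, 1.0], 4, [-np.inf] * 5))   # 1.0
```

Note how the toy perfect observer scores unity under the AFROC-style figure of merit, whereas its FROC "curve" degenerates to a vertical segment at NLF = 0 with zero area, which is exactly the asymmetry the opening statement criticizes: the FROC AUC gives no credit for correctly unmarked nondiseased cases.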
- W2617763559 created "2017-06-05" @default.
- W2617763559 creator A5026442912 @default.
- W2617763559 creator A5041154856 @default.
- W2617763559 creator A5079423135 @default.
- W2617763559 date "2017-06-28" @default.
- W2617763559 modified "2023-09-22" @default.
- W2617763559 title "Due to potential concerns of bias and conflicts of interest, regulatory bodies should not do evaluation methodology research related to their regulatory missions" @default.
- W2617763559 cites W2011531785 @default.
- W2617763559 cites W2017874571 @default.
- W2617763559 cites W2062652646 @default.
- W2617763559 cites W2064997673 @default.
- W2617763559 cites W2074996049 @default.
- W2617763559 cites W2086892210 @default.
- W2617763559 cites W2089114844 @default.
- W2617763559 cites W2095972041 @default.
- W2617763559 cites W2107691025 @default.
- W2617763559 cites W2125449863 @default.
- W2617763559 cites W2157927416 @default.
- W2617763559 cites W4293007919 @default.
- W2617763559 doi "https://doi.org/10.1002/mp.12373" @default.
- W2617763559 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/28547782" @default.
- W2617763559 hasPublicationYear "2017" @default.
- W2617763559 type Work @default.
- W2617763559 sameAs 2617763559 @default.
- W2617763559 citedByCount "1" @default.
- W2617763559 countsByYear W26177635592017 @default.
- W2617763559 crossrefType "journal-article" @default.
- W2617763559 hasAuthorship W2617763559A5026442912 @default.
- W2617763559 hasAuthorship W2617763559A5041154856 @default.
- W2617763559 hasAuthorship W2617763559A5079423135 @default.
- W2617763559 hasBestOaLocation W26177635591 @default.
- W2617763559 hasConcept C112930515 @default.
- W2617763559 hasConcept C144133560 @default.
- W2617763559 hasConceptScore W2617763559C112930515 @default.
- W2617763559 hasConceptScore W2617763559C144133560 @default.
- W2617763559 hasIssue "9" @default.
- W2617763559 hasLocation W26177635591 @default.
- W2617763559 hasLocation W26177635592 @default.
- W2617763559 hasOpenAccess W2617763559 @default.
- W2617763559 hasPrimaryLocation W26177635591 @default.
- W2617763559 hasRelatedWork W1515663861 @default.
- W2617763559 hasRelatedWork W1572277060 @default.
- W2617763559 hasRelatedWork W2000029124 @default.
- W2617763559 hasRelatedWork W2010076726 @default.
- W2617763559 hasRelatedWork W2019637006 @default.
- W2617763559 hasRelatedWork W2020321671 @default.
- W2617763559 hasRelatedWork W2417158417 @default.
- W2617763559 hasRelatedWork W2949918693 @default.
- W2617763559 hasRelatedWork W4252059530 @default.
- W2617763559 hasRelatedWork W4297729192 @default.
- W2617763559 hasVolume "44" @default.
- W2617763559 isParatext "false" @default.
- W2617763559 isRetracted "false" @default.
- W2617763559 magId "2617763559" @default.
- W2617763559 workType "article" @default.