Matches in SemOpenAlex for { <https://semopenalex.org/work/W3025091949> ?p ?o ?g. }
Showing items 1 to 74 of 74, with 100 items per page.
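The triple pattern above can also be issued programmatically. Below is a minimal sketch (not part of the original listing) that runs the same query for W3025091949 against what is assumed to be the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql; the endpoint URL, the omission of the graph variable ?g, and the 100-result limit mirroring the page size are assumptions for illustration.

```python
# Hedged sketch: fetch predicate/object pairs for work W3025091949 from the
# SemOpenAlex SPARQL endpoint (endpoint URL assumed for illustration).
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3025091949> ?p ?o .
}
LIMIT 100
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Each printed pair corresponds to one line of the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```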
- W3025091949 endingPage "803" @default.
- W3025091949 startingPage "802" @default.
- W3025091949 abstract "In the resource-intensive world of drug development for retinal disease, the ability to detect small, but significant, changes in retinal imaging is crucial for efficient evaluation of the safety and potential efficacy of investigational products. A big question these images address is whether a product is actually working. If not, then it’s best to “fail fast” and move on to the next suitable candidate.1Lendrem D.W. Lendrem B.C. Torching the haystack: modelling fast-fail strategies in drug development.Drug Discov Today. 2013; 18: 331-336Crossref Scopus (11) Google Scholar Currently, a move exists to identify changes in quantitative imaging biomarkers (e.g., something detectable on 3.9-μm axial resolution spectral-domain [SD] OCT scans) as validated outcome measures to determine a treatment effect. The rationale is that if we can detect a change on imaging reliably and accurately, then we can see the positive or negative effects of an investigational product. This move to quantitative image feature detection has led to an increase in time and expense to annotate and grade SD OCT images properly. For clinical trials, this is typically performed by highly trained human graders who work at image reading centers. As one might expect, a high amount of concentration and effort is expended by human graders to perform annotations on the 50 to 200 B-scans in each SD OCT cube that are acquired for every eye during most study visits. Moreover, a second human grader often is used to verify the work of the first grader to ensure that high-quality data are generated for the sponsor and, ultimately, the Food and Drug Administration regulators who will be reviewing these outputs. Time can be saved with the use of semiautomated processing, whereby an algorithm makes a first-pass attempt at annotating retinal images, and then those annotations are adjusted by a human grader (and often confirmed by a second grader).2Loo J. Fang L. Cunefare D. et al.Deep longitudinal transfer learning-based automatic segmentation of photoreceptor ellipsoid zone defects on optical coherence tomography images of macular telangiectasia type 2.Biomed Opt Express. 2018; 9: 2681-2698Crossref PubMed Scopus (41) Google Scholar Although the semiautomated approach cuts down on human time and expense, room for improvement exists. In this issue, Loo et al3Loo J. Clemens T.E. Chew E.Y. et al.Beyond performance metrics: automatic deep learning retinal OCT analysis reproduces clinical trial outcome.Ophthalmology. 2020; 127: 793-801Abstract Full Text Full Text PDF PubMed Scopus (19) Google Scholar (see page 793) apply a fully automated (that is, no humans involved) method to annotate SD OCT images and calculate the area of missing ellipsoid zone in a clinical trial dataset in which ciliary neurotrophic factor was administered to treat macular telangiectasia 2 (ClinicalTrials.gov identifier, NCT01949324).4Chew E.Y. Clemons T.E. Jaffe G.J. et al.Effect of ciliary neurotrophic factor on retinal neurodegeneration in patients with macular telangiectasia type 2: a randomized clinical trial.Ophthalmology. 2019; 126: 540-549Abstract Full Text Full Text PDF PubMed Scopus (76) Google Scholar They compared their deep learning automatic algorithm with the gold standard of human graders at a reading center performing semiautomated annotations of the area of ellipsoid zone defects on SD OCT images at baseline and at 24 months. 
Loo et al were able to show that the measurements generated by their machine algorithm were comparable with those generated by the rigorous reading center process, a wonderful achievement! This article is significant because it signals the coming sea change in the way clinical trial imaging data likely will be graded, measured, and analyzed. As more fully automated machine methods are developed that can essentially replicate the work of trained human graders, what will future reading centers look like? One can imagine centers that occupy much less square footage and whose reduced head count is composed of data scientists, machine learning engineers, and clinical experts who work together to generate the high-quality data on which sponsors and regulators rely. However, before this not-too-distant future becomes a reality, important questions still need to be answered. Although no one can dispute that it is easier for us to press a button and let a machine do the work, it will still take a lot of time and effort to generate these algorithms, fine-tune them, and validate them. In the end, will this really save any time or money? It would have been insightful for the authors to perform a cost analysis of the time and money spent by the traditional reading center process and compare that with the development, validation, and compute-instance costs of the fully automated pipeline. If it took an army of PhDs and hundreds of hours of cloud graphics processing unit (GPU) time to train the algorithm, was any money or time really saved? Moreover, we still don’t know whether the successfully developed algorithm can be transferred and used on different datasets or on data with different dimensions, such as SD OCT cubes from different machines, a varying number of B-scans per cube, or differing intervoxel dimensions. Ultimately, even if the monetary costs and time spent are equivalent between the traditional human grading system and the use of validated, regulatory-grade machine algorithms, I anticipate that machines will be able to perform measurements and annotations on surrogate imaging biomarkers that humans will never be able to perform, let alone see with our naked eyes or segment with our limited attention spans. For this reason, reading centers of the future certainly will embrace machine technologies to augment the important work that they are already doing. To be successful, they will need to work alongside sponsors and regulators to identify meaningful outcome measures that can be identified reproducibly by advanced deep learning algorithms. References: 1. Lendrem DW, Lendrem BC. Torching the haystack: modelling fast-fail strategies in drug development. Drug Discov Today. 2013;18:331-336. 2. Loo J, Fang L, Cunefare D, et al. Deep longitudinal transfer learning-based automatic segmentation of photoreceptor ellipsoid zone defects on optical coherence tomography images of macular telangiectasia type 2. Biomed Opt Express. 2018;9:2681-2698. 3. Loo J, Clemons TE, Chew EY, et al. Beyond performance metrics: automatic deep learning retinal OCT analysis reproduces clinical trial outcome. Ophthalmology. 2020;127:793-801. 4. Chew EY, Clemons TE, Jaffe GJ, et al. Effect of ciliary neurotrophic factor on retinal neurodegeneration in patients with macular telangiectasia type 2: a randomized clinical trial. Ophthalmology. 2019;126:540-549." @default.
- W3025091949 created "2020-05-21" @default.
- W3025091949 creator A5035477663 @default.
- W3025091949 date "2020-06-01" @default.
- W3025091949 modified "2023-10-16" @default.
- W3025091949 title "The Machines Are Coming: Implications for Image Reading Centers of the Future" @default.
- W3025091949 cites W2056974352 @default.
- W3025091949 cites W2803921547 @default.
- W3025091949 cites W2895138297 @default.
- W3025091949 cites W2997639937 @default.
- W3025091949 doi "https://doi.org/10.1016/j.ophtha.2020.03.003" @default.
- W3025091949 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/32444019" @default.
- W3025091949 hasPublicationYear "2020" @default.
- W3025091949 type Work @default.
- W3025091949 sameAs 3025091949 @default.
- W3025091949 citedByCount "2" @default.
- W3025091949 countsByYear W30250919492020 @default.
- W3025091949 countsByYear W30250919492023 @default.
- W3025091949 crossrefType "journal-article" @default.
- W3025091949 hasAuthorship W3025091949A5035477663 @default.
- W3025091949 hasConcept C111919701 @default.
- W3025091949 hasConcept C118552586 @default.
- W3025091949 hasConcept C13424479 @default.
- W3025091949 hasConcept C138885662 @default.
- W3025091949 hasConcept C147494362 @default.
- W3025091949 hasConcept C154945302 @default.
- W3025091949 hasConcept C17744445 @default.
- W3025091949 hasConcept C19527891 @default.
- W3025091949 hasConcept C199539241 @default.
- W3025091949 hasConcept C2776401178 @default.
- W3025091949 hasConcept C2780035454 @default.
- W3025091949 hasConcept C41008148 @default.
- W3025091949 hasConcept C41895202 @default.
- W3025091949 hasConcept C554936623 @default.
- W3025091949 hasConcept C64903051 @default.
- W3025091949 hasConcept C71924100 @default.
- W3025091949 hasConceptScore W3025091949C111919701 @default.
- W3025091949 hasConceptScore W3025091949C118552586 @default.
- W3025091949 hasConceptScore W3025091949C13424479 @default.
- W3025091949 hasConceptScore W3025091949C138885662 @default.
- W3025091949 hasConceptScore W3025091949C147494362 @default.
- W3025091949 hasConceptScore W3025091949C154945302 @default.
- W3025091949 hasConceptScore W3025091949C17744445 @default.
- W3025091949 hasConceptScore W3025091949C19527891 @default.
- W3025091949 hasConceptScore W3025091949C199539241 @default.
- W3025091949 hasConceptScore W3025091949C2776401178 @default.
- W3025091949 hasConceptScore W3025091949C2780035454 @default.
- W3025091949 hasConceptScore W3025091949C41008148 @default.
- W3025091949 hasConceptScore W3025091949C41895202 @default.
- W3025091949 hasConceptScore W3025091949C554936623 @default.
- W3025091949 hasConceptScore W3025091949C64903051 @default.
- W3025091949 hasConceptScore W3025091949C71924100 @default.
- W3025091949 hasIssue "6" @default.
- W3025091949 hasLocation W30250919491 @default.
- W3025091949 hasLocation W30250919492 @default.
- W3025091949 hasOpenAccess W3025091949 @default.
- W3025091949 hasPrimaryLocation W30250919491 @default.
- W3025091949 hasRelatedWork W2072753962 @default.
- W3025091949 hasRelatedWork W2348531541 @default.
- W3025091949 hasRelatedWork W2365235076 @default.
- W3025091949 hasRelatedWork W2748952813 @default.
- W3025091949 hasRelatedWork W2809632469 @default.
- W3025091949 hasRelatedWork W2899084033 @default.
- W3025091949 hasRelatedWork W3025091949 @default.
- W3025091949 hasRelatedWork W3160469062 @default.
- W3025091949 hasRelatedWork W3204793433 @default.
- W3025091949 hasRelatedWork W643246895 @default.
- W3025091949 hasVolume "127" @default.
- W3025091949 isParatext "false" @default.
- W3025091949 isRetracted "false" @default.
- W3025091949 magId "3025091949" @default.
- W3025091949 workType "article" @default.
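The quantitative outcome discussed in the abstract above, the area of missing ellipsoid zone (EZ), reduces at its simplest to counting "EZ absent" positions in an en-face projection of the segmented SD OCT cube and scaling by the lateral pixel footprint. The sketch below is not the authors' pipeline; the 6 × 6-mm scan extent and the cube dimensions in the example are illustrative assumptions.

```python
# Hedged sketch (not the published method): EZ defect area from a binary
# en-face loss map, given assumed lateral scan dimensions.
import numpy as np

def ez_defect_area_mm2(en_face_loss: np.ndarray,
                       scan_width_mm: float = 6.0,
                       scan_depth_mm: float = 6.0) -> float:
    """Area of EZ loss in mm^2 from a 2-D boolean en-face map.

    en_face_loss: shape (n_bscans, n_ascans); True where EZ is absent.
    scan_width_mm / scan_depth_mm: lateral extent of the cube (assumed 6 x 6 mm).
    """
    n_bscans, n_ascans = en_face_loss.shape
    pixel_area = (scan_width_mm / n_ascans) * (scan_depth_mm / n_bscans)
    return float(en_face_loss.sum()) * pixel_area

# Example: a 128 B-scan x 512 A-scan cube with a small central defect.
mask = np.zeros((128, 512), dtype=bool)
mask[60:68, 240:272] = True
print(f"EZ defect area: {ez_defect_area_mm2(mask):.3f} mm^2")
```

The hard part in practice is producing the en-face loss map itself from the raw B-scans; that segmentation step is what the deep learning algorithm described in the cited study automates in place of human graders.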