Matches in SemOpenAlex for { <https://semopenalex.org/work/W2356454710> ?p ?o ?g. }
- W2356454710 endingPage "1076" @default.
- W2356454710 startingPage "1067" @default.
- W2356454710 abstract "Evaluating the pathogenicity of a variant is challenging given the plethora of types of genetic evidence that laboratories consider. Deciding how to weigh each type of evidence is difficult, and standards have been needed. In 2015, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) published guidelines for the assessment of variants in genes associated with Mendelian diseases. Nine molecular diagnostic laboratories involved in the Clinical Sequencing Exploratory Research (CSER) consortium piloted these guidelines on 99 variants spanning all categories (pathogenic, likely pathogenic, uncertain significance, likely benign, and benign). Nine variants were distributed to all laboratories, and the remaining 90 were evaluated by three laboratories. The laboratories classified each variant by using both the laboratory’s own method and the ACMG-AMP criteria. The agreement between the two methods used within laboratories was high (K-alpha = 0.91) with 79% concordance. However, there was only 34% concordance for either classification system across laboratories. After consensus discussions and detailed review of the ACMG-AMP criteria, concordance increased to 71%. Causes of initial discordance in ACMG-AMP classifications were identified, and recommendations on clarification and increased specification of the ACMG-AMP criteria were made. In summary, although an initial pilot of the ACMG-AMP guidelines did not lead to increased concordance in variant interpretation, comparing variant interpretations to identify differences and having a common framework to facilitate resolution of those differences were beneficial for improving agreement, allowing iterative movement toward increased reporting consistency for variants in genes associated with monogenic disease. Evaluating the pathogenicity of a variant is challenging given the plethora of types of genetic evidence that laboratories consider. Deciding how to weigh each type of evidence is difficult, and standards have been needed. In 2015, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) published guidelines for the assessment of variants in genes associated with Mendelian diseases. Nine molecular diagnostic laboratories involved in the Clinical Sequencing Exploratory Research (CSER) consortium piloted these guidelines on 99 variants spanning all categories (pathogenic, likely pathogenic, uncertain significance, likely benign, and benign). Nine variants were distributed to all laboratories, and the remaining 90 were evaluated by three laboratories. The laboratories classified each variant by using both the laboratory’s own method and the ACMG-AMP criteria. The agreement between the two methods used within laboratories was high (K-alpha = 0.91) with 79% concordance. However, there was only 34% concordance for either classification system across laboratories. After consensus discussions and detailed review of the ACMG-AMP criteria, concordance increased to 71%. Causes of initial discordance in ACMG-AMP classifications were identified, and recommendations on clarification and increased specification of the ACMG-AMP criteria were made. 
The assessment of pathogenicity of genetic variation is one of the more complex and challenging tasks in the field of clinical genetics. It is now clear that enormous genetic variation exists in the human population. Most of this variation, including very rare variants, is unlikely to contribute substantively to human disease. For example, a typical genome sequence and reference genome have about 3.5 million differences, of which 0.6 million are rare or novel (ref. 1: Kohane I.S., Hsing M., Kong S.W. Taxonomizing, sizing, and overcoming the incidentalome. Genet. Med. 2012;14:399-404). As such, the challenge of interpreting the clinical significance of this variation is well recognized as a barrier to furthering genomic medicine (refs. 2 and 3: Evans B.J., Burke W., Jarvik G.P. The FDA and genomic tests--getting regulation right. N. Engl. J. Med. 2015;372:2258-2264; Rehm H.L., Berg J.S., Brooks L.D., Bustamante C.D., Evans J.P., Landrum M.J., Ledbetter D.H., Maglott D.R., Martin C.L., Nussbaum R.L., et al., for ClinGen. ClinGen--the Clinical Genome Resource. N. Engl. J. Med. 2015;372:2235-2242). We have previously reported both inconsistencies across laboratories in the classification of Mendelian-disease variants and high discordance in the use of a single simple classification system, whereby reviewers showed a bias toward overestimating pathogenicity (ref. 4: Amendola L.M., Dorschner M.O., Robertson P.D., Salama J.S., Hart R., Shirts B.H., Murray M.L., Tokita M.J., Gallego C.J., Kim D.S., et al. Actionable exomic incidental findings in 6503 participants: challenges of variant classification. Genome Res. 2015;25:305-315). Furthermore, recent analyses of variant classifications in ClinVar showed that for the 11% (12,895/118,169) of variants with two or more submitters, interpretations differed in 17% (2,229/12,895) (ref. 3). Inconsistency of the classification of variants across professional genetics laboratories has been reported elsewhere (ref. 5: Yorczyk A., Robinson L.S., Ross T.S. Use of panel tests in place of single gene tests in the cancer genetics clinic. Clin. Genet. 2015;88:278-282). These data highlight the need for a more systematic and transparent approach to variant classification. Laboratories performing and reporting the results of clinical genetic testing are now tasked with considering a plethora of types of genetic evidence, some applicable to all genes and others specific to individual genes and diseases.
To date, laboratories have developed their own methods of variant assessment because the prior American College of Medical Genetics and Genomics (ACMG) variant-reporting guidelines did not address the weighting of evidence for variant classification (ref. 6: Richards C.S., Bale S., Bellissimo D.B., Das S., Grody W.W., Hegde M.R., Lyon E., Ward B.E., and the Molecular Subcommittee of the ACMG Laboratory Quality Assurance Committee. ACMG recommendations for standards for interpretation and reporting of sequence variations: Revisions 2007. Genet. Med. 2008;10:294-300). Some laboratories assign points to types of evidence and generate a score (ref. 7: Karbassi I., Maston G.A., Love A., DiVincenzo C., Braastad C.D., Elzinga C.D., Bright A.R., Previte D., Zhang K., Rowland C.M., et al. A Standardized DNA Variant Scoring System for Pathogenicity Assessments in Mendelian Disorders. Hum. Mutat. 2016;37:127-134), and others define specific combinations of evidence that allow them to arrive at each classification category (ref. 8: Thompson B.A., Spurdle A.B., Plazzer J.P., Greenblatt M.S., Akagi K., Al-Mulla F., Bapat B., Bernstein I., Capellá G., den Dunnen J.T., et al., for InSiGHT. Application of a 5-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants in the InSiGHT locus-specific database. Nat. Genet. 2014;46:107-115) or use a Bayesian framework to combine data types into a likelihood ratio (ref. 9: Goldgar D.E., Easton D.F., Byrnes G.B., Spurdle A.B., Iversen E.S., Greenblatt M.S., and the IARC Unclassified Genetic Variants Working Group. Genetic evidence and integration of various data sources for classifying uncertain variants into a single model. Hum. Mutat. 2008;29:1265-1272). Still others have simply relied on expert judgment of the individual body of evidence on each variant to make a decision. Deciding how to categorize and weigh each type of evidence is challenging, and guidance has been needed. Making the task even more challenging is that the true pathogenicity is not known for most variants, and it is therefore difficult to validate approaches to variant assessment, particularly for addressing variants that have limited evidence. However, combining the collective experience of experts in the community to begin to build a more systematic and transparent approach to variant classification is essential, and this has led the ACMG and Association for Molecular Pathology (AMP) to develop a framework for evidence evaluation. The initial framework was published in early 2015 (ref. 10: Richards S., Aziz N., Bale S., Bick D., Das S., Gastier-Foster J., Grody W.W., Hegde M., Lyon E., Spector E., et al., and the ACMG Laboratory Quality Assurance Committee. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet. Med. 2015;17:405-424) and focused on variants in genes associated with Mendelian disease. The ACMG-AMP guidelines defined 28 criteria (each with an assigned code) that address evidence such as population data, case-control analyses, functional data, computational predictions, allelic data, segregation studies, and de novo observations.
Each code is assigned a weight (stand-alone, very strong, strong, moderate, or supporting) and direction (benign or pathogenic), and then rules guide users to combine these evidence codes to arrive at one of five classifications: pathogenic (P), likely pathogenic (LP), variant of uncertain significance (VUS), likely benign (LB), or benign (B). In some cases, the strength of individual criteria can be modified at the discretion of the curator, and the overall classification can be modified with expert judgment. As an example, a minor allele frequency (MAF) greater than the disease prevalence but less than 5% is coded benign strong (BS1); this is considered strong evidence against pathogenicity for a highly penetrant monogenic disorder and supports an LB classification when it is combined with at least one supporting line of evidence against pathogenicity (BP1–BP6). If BS1 is combined with another strong line of evidence against pathogenicity (BS2–BS4), this supports a B classification. Conversely, a variant predicted to be null (PVS1) would be classified as LP if it is absent from population databases (PM2) or P if it is observed to be de novo with confirmed paternity and maternity (PS2). If not enough lines of evidence are invoked to classify a variant as P, LP, LB, or B, or there are valid but contradictory lines of evidence, a variant is interpreted as a VUS. We set out to evaluate how the ACMG-AMP guidelines compare to individual laboratory approaches to variant classification and explore the variance in the use and interpretation of the pathogenicity criteria. Nine laboratories participating in the Clinical Sequencing Exploratory Research (CSER) consortium evaluated the use of the new ACMG-AMP guidelines and in-house interpretations to assess inter-laboratory concordance by either method of variant classification. Our goals were to evaluate consistency of the use of the ACMG-AMP codes and subsequent pathogenicity classification. Further, we used these criteria to analyze the basis for discordance and sought to reconcile differing implementations with an eye to guidance clarification. CSER is a National Human Genome Research Institute (NHGRI)- and National Cancer Institute (NCI)-funded consortium exploring the clinical use of genomic sequencing, developing best practices, and identifying obstacles to implementation. It is composed of nine clinical U-award sites focusing on all aspects of clinical sequencing, the ClinSeq project (ref. 11: Biesecker L.G., Mullikin J.C., Facio F.M., Turner C., Cherukuri P.F., Blakesley R.W., Bouffard G.G., Chines P.S., Cruz P., Hansen N.F., et al., and the NISC Comparative Sequencing Program. The ClinSeq Project: piloting large-scale genome sequencing for research in genomic medicine. Genome Res. 2009;19:1665-1674), and nine R-award sites focusing on ethical, legal, and social implications. Eight of the nine clinical U-award sites and ClinSeq participated in this exercise. These included laboratories performing exome and/or genome sequencing for the following projects: BASIC3 (Baylor College of Medicine, Houston), PediSeq (Children’s Hospital of Philadelphia), CanSeq (Dana Farber Cancer Institute, Boston), HudsonAlpha Institute for Biotechnology, MedSeq (Brigham and Women’s Hospital and Partners Healthcare, Boston), NextGen (Oregon Health Sciences University, Portland), NCGENES (University of North Carolina, Chapel Hill), and NEXT Medicine (University of Washington, Seattle).
Eight of the nine sites were accredited by the Clinical Laboratory Improvement Amendments (CLIA). Each site nominated 11 variants identified in their sequencing projects for this exercise. Submitted variants were single-nucleotide substitutions or small indels (<22 bp) in genes thought to be associated with Mendelian disease. Each site was instructed to provide a range of variants in each classification category with varying degrees of difficulty. Accepted classifications were B, LB, VUS, LP, and P. Each variant submission also included whether it was identified as a diagnostic result or an incidental finding. Any internal evidence that the submitting laboratory used to classify the variant—for example, the phenotype and family history of the proband or whether parental testing identified the variant as de novo—was also provided to all laboratories. Nine variants (two P, two LP, two VUS, two LB, and one B) were selected for distribution to all laboratories without the submitting laboratory’s classification; half were identified as incidental findings, half were identified as diagnostic findings, and one was from a carrier screen. The remaining 90 variants were randomly distributed to at least two other laboratories, enabling classifications from at least three laboratories for each variant. Each laboratory was asked to classify the pathogenicity by applying both their internal process and then the ACMG-AMP system. They were asked to document which ACMG-AMP criteria were invoked for the ACMG-AMP classification and note whether they found the classification of each variant difficult, moderate, or easy. Time taken for categorizing the variant was requested but not consistently recorded. In order to assess whether ACMG-AMP evidence codes were combined appropriately by the variant curator, we developed a pathogenicity calculator that combines the provided codes to generate a final classification. We used this calculator to compare the calculated ACMG-AMP classification based on tabulating the evidence codes provided by the laboratory with the final ACMG-AMP classification submitted by the laboratory. We shared these data with sites for consideration during consensus discussions and manually verified the results to identify which discrepancies were due to errors by the submitting laboratory and which were due to the use of judgment in overruling the ACMG-AMP classification. Descriptive statistics summarized the intra-laboratory classification concordance between the ACMG-AMP system and the laboratory’s own process and the inter-laboratory concordance both for each laboratory’s own process and for the ACMG-AMP system across laboratories. Additionally, we quantified the level of agreement. To do this, we considered the five-tier classification system in the following order—B, LB, VUS, LP, and P—and defined a one-step level of disagreement to be a range of classifications from one category to the next ordered category (e.g., from VUS to LP or LP to P); the maximum level included four steps (i.e., B to P). In addition, we tracked disagreements that were more likely to lead to medical-management differences (P or LP versus any of VUS, LB, and B) and disagreements less likely to affect clinical decision making (e.g., VUS versus LB or B, or confidence differences, such as B versus LB or P versus LP). 
To quantify the overall level of absolute agreement on ACMG-AMP and laboratory criteria within sites and agreement between sites using ACMG-AMP or laboratory criteria, we calculated Krippendorff’s alpha (K-alpha); ranging from 0 to 1, this generalized measure of absolute agreement corrects for chance responding and can handle any number of raters, scale of measurement, and missing data. Because it focuses on disagreement, it overcomes many of the weaknesses associated with other agreement measures, such as Cohen’s kappa (refs. 12-16: Hayes A.F., Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun. Methods Meas. 2007;1:77-89; Krippendorff K. Reliability in content analysis: Some common misconceptions and recommendations. Hum. Commun. Res. 2004;30:411-433; Krippendorff K. Content analysis: An introduction to its methodology. Second Edition. Sage, 2004; Brennan R.L., Prediger D.J. Coefficient kappa: Some uses, misuses, and alternatives. Educ. Psychol. Meas. 1981;41:687-699; Zwick R. Another look at interrater agreement. Psychol. Bull. 1988;103:374-378). In general, values of 0.80 and above are considered evidence of good agreement (ref. 14). We also calculated 95% confidence intervals (CIs) for K-alpha by using bootstrapping with 20,000 replications (ref. 17: Krippendorff K. (2013). Bootstrapping Distributions for Krippendorff’s Alpha for Coding of Predefined Units: Single-Valued cα and multi-valued mvα. http://web.asc.upenn.edu/usr/krippendorff/boot.c-Alpha.pdf). Two variants were excluded from the quantitative analyses and are not represented in Figures 1A and 1B; however, they are represented in the overall concordance shown in Figures 1C and 2. One variant was a low-penetrance allele (c.3920T>A [p.Ile1307Lys] [GenBank: NM_001127510.2] in APC [MIM: 611731]) for which several laboratories did not assign an ACMG-AMP classification, and the other variant (c.1101+1G>T [GenBank: NM_001005463.2] in EBF3 [MIM: 607407]) was a predicted loss-of-function variant in a gene for which there is no known association with disease. Neither of these two variants was relevant to this analysis of classifying high-penetrance variants for Mendelian conditions, for which the ACMG-AMP guidelines are intended. In addition, the two laboratories that had key personnel involved in the development of the ACMG-AMP recommendations were excluded from one study-wide sensitivity analysis to evaluate whether familiarity with the system affects concordance. Lastly, we performed a second sensitivity analysis by excluding the classifications of the submitting laboratory to determine the dependence of these results on a single laboratory and whether classification in a real case setting rather than only for the comparison study affects results.
(Figure 2. Distribution of 99 Variants Submitted for Assessment. Gray outlines illustrate the distribution of variant classifications submitted for assessment; green bars indicate calls agreed upon after initial review, blue bars calls agreed upon after email exchange, and black bars calls agreed upon after discussion on conference calls.)
We analyzed the lines of evidence used for each variant classification to identify how commonly specific evidence codes and classification rules were used across all of the variants, the overall agreement in the pattern of ACMG-AMP codes used across sites for each variant, and the consistency with which each ACMG-AMP code was used within each variant. These were determined with a frequency table, the mean of coefficient of variation (CV) values across variants with each ACMG-AMP code, and K-alpha values of ACMG-AMP codes within each variant. Descriptive statistics of how often the strength of each line of evidence was modified during variant interpretation were also calculated. The variants with discrepant classifications based on the ACMG-AMP guidelines were discussed via phone conferences (n = 23) or via email (n = 43). Variants were chosen for discussion by phone conference if they were interpreted by all nine laboratories or if they were discrepant by more than one category of disagreement. The laboratory that submitted each of these 23 variants presented the lines of evidence used by all laboratories and the rationale for using, not using, or altering the strength of a particular evidence code. Once all evidence was discussed, each laboratory was asked to provide a final classification. For variants for which only one laboratory was discordant for only a one-level difference, the discordant laboratory was asked to re-review their classification in light of the evidence used and classifications made by the other laboratories. The discordant laboratory then provided either a change or a decision to retain the original classification, including the rationale in both scenarios by email. During phone conferences and via email, laboratories had the opportunity to share any internal data that could have contributed to discordance. The intra-laboratory comparison of the laboratory process and the ACMG-AMP system for the 347 paired variant assessments is summarized in Figure 1A. The classifications matched for 275 of 347 (79%) variant assessments. Eleven of the 347 paired variant assessments (3.2%) differed by greater than one level. Overall, in 48 of the 72 (67%) discordant calls, the ACMG-AMP system calls were closer to VUS. Specifically, a classification of B or LB was more likely to result from using the laboratories’ own criteria than from using the ACMG-AMP criteria. The K-alpha value for agreement within laboratories ranged from 0.77 to 1.00 (average = 0.91; seven of nine laboratories had K-alpha > 0.90). Considering the inter-laboratory classification for 97 variants, there was no statistically significant difference in concordance across laboratories between classifications based on laboratory criteria and those based on ACMG-AMP criteria (lab K-alpha = 0.76, 95% CI = [0.73, 0.80]; ACMG-AMP K-alpha = 0.72, 95% CI = [0.68, 0.76]).
In other words, implementation of the ACMG-AMP criteria did not yield more consistent variant classification among these laboratories. All laboratories reviewing a given variant (three to nine laboratories per variant) agreed on 33 of the 97 variants (34%), whether they used the ACMG-AMP system or their own criteria. No significant difference was found in inter-laboratory concordance when the two laboratories that contributed to the ACMG-AMP classification recommendations were removed from the analysis (K-alpha lab = 0.77, 95% CI = [0.73, 0.80]; K-alpha ACMG-AMP = 0.70, 95% CI = [0.66, 0.74]) or when the site that submitted the variant classifications was removed from the analysis (K-alpha lab = 0.76, 95% CI = [0.71, 0.80]; K-alpha ACMG-AMP = 0.75, 95% CI = [0.71, 0.78]). The distribution of types of disagreement among laboratories using each method is shown in Figure 1B. A total of 43/194 (22%) classifications had category differences that are more likely to influence medical decision making (P or LP versus VUS, LB, or B), the majority of which (33) were P or LP versus VUS. An additional 36 classifications (19%) involved differences between VUS and LB or B, which could affect what the laboratory reports, given that many laboratories do not report LB or B results and that reporting VUS results could entail a lengthier disclosure process and uncertainty about follow-up. The remaining 25% of variant classifications were differences in the confidence of calls (P versus LP or B versus LB), which are unlikely to have an impact on clinical care. The interpretation of 33/99 (34%) variants was identical across all sites that used the ACMG-AMP guidelines. After either emails or conference calls among the reporting laboratories, consensus on variant classifications based on the ACMG-AMP guidelines was achieved for 70/99 (71%) variants. Twenty-one of the discrepant variants were resolved via email, and the remaining 16 were resolved during phone conferences. The distribution and sources of variant-interpretation consensus can be found in Figure 2; gray outlines show the original distribution of submitted variant interpretations. Figure 1C shows the distribution of types of disagreement among laboratories after the consensus effort. Of the 29 variants that remained discordant, 25 involved only one level of difference (15 were confidence differences, three differed between LP and VUS, and seven differed between VUS and LB). Of the four variants with greater than one level of difference, two involved a difference between P and VUS, LB, or B. The final classifications for the 70 variants for which consensus was achieved, and the range of classifications for the remaining 29 discordant variants, are presented in Table S1. Consensus discussions clarified the correct use of several ACMG-AMP lines of evidence; in some cases, laboratories had made errors in applying rules already described in the guidelines (Table 1). Although the ACMG-AMP guidelines suggest a VUS classification when conflicting pathogenic and benign lines of evidence are identified, some laboratories allowed one line of conflicting benign evidence of only a supporting level (e.g., computational predictions) to override otherwise strong evidence of pathogenicity. In these cases, consensus discussion led to the use of expert judgment, as described in the ACMG-AMP guidelines, to appropriately disregard the limited conflicting evidence, such as computational predictions.
For two variants, achieving concordant interpretations required one laboratory’s internal data. It was difficult to resolve the two variants that were excluded from the intra- and inter-laboratory analyses because the ACMG-AMP rules were not designed for low-penetrance variants (risk alleles) or variants in genes not clearly associated with the disorder. Some discrepancies in classification occurred because laboratories were interpreting the same variant for two different associated conditions, which have different disease frequencies. This led to a discordant use of the rules related to allele frequency.
Table 1. ACMG-AMP Rule Clarifications and Suggestions for Modification (each entry gives the rule, its description, and the clarifications and/or suggestions):
PVS1 (variant predicted null where LOF is a mechanism of disease): do not apply to variants that are near the 3′ end of the gene and escape nonsense-mediated decay.
PS1 (variant with the same amino acid change as a previously established pathogenic variant, regardless of nucleotide change): does not include the same variant being assessed because it is not yet pathogenic, and the rule is intended for variants with a different nucleotide change.
PS2 (de novo variant with confirmed maternity and paternity): apply this rule as moderate or supporting if the variant is mosaic and its frequency in tissue is consistent with the phenotype.
PS3 (variant shown to have a deleterious effect by a well-established functional study): reduce the strength for assays that are not as well validated or linked to the phenotype.
PM1 (variant located in a mutational hotspot and/or critical and well-established functional domain): not meant for truncations; more clarification is needed for applying this rule.
PM2, BS1 (variant absent in population databases or with an allele frequency too high for the disease): cannot assume longer indels would be detected by next-generation sequencing; use a published control dataset if its size is at least 1,000 individuals; cannot be applied for low-quality calls or non-covered regions; must define the condition and inheritance pattern.
PM3 (for recessive disorders, variant in trans with a pathogenic variant): invoke this rule as supporting if the phase is not established; can upgrade if more than one proband is reported.
PM4 (protein-length-changing variant): applicable for in-frame deletions, insertions, or stop-loss variants, but not frameshifts, nonsense, and splice variants.
PM5 (novel missense variant at amino acid with different pathogenic missense change): ensure pathogenicity of previously reported variant; suggest changing “novel” to “different” because some variants that are not novel might require assessment with this rule.
PP3, BP4 (variant with multiple lines of computational evidence): all lines must agree.
PP4 (the patient’s phenotype or family history is highly specific to the genotype): not mean" @default.
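The abstract above describes how weighted ACMG-AMP evidence codes are combined into one of five classifications and mentions a pathogenicity calculator built to check whether curators combined codes appropriately. The Python sketch below is a hypothetical, simplified illustration of that kind of calculator: it encodes a plain reading of the published ACMG-AMP combining rules, omits the strength modifications and expert-judgment overrides the text describes, and is not the consortium's actual tool.

# Minimal sketch (not the consortium's calculator) of combining ACMG-AMP
# evidence codes into P / LP / VUS / LB / B, per a simplified reading of the
# 2015 combining rules. Conflict handling is simplified: it flags a conflict
# only when the pathogenic and benign rule sets both fire; in practice,
# expert judgment may disregard weak conflicting evidence.
from collections import Counter

# Default strength of each code; curators may up- or downgrade these in practice.
STRENGTH = {
    "PVS1": "very_strong",
    **{c: "strong" for c in ("PS1", "PS2", "PS3", "PS4")},
    **{c: "moderate" for c in ("PM1", "PM2", "PM3", "PM4", "PM5", "PM6")},
    **{c: "supporting" for c in ("PP1", "PP2", "PP3", "PP4", "PP5")},
    "BA1": "stand_alone",
    **{c: "strong" for c in ("BS1", "BS2", "BS3", "BS4")},
    **{c: "supporting" for c in ("BP1", "BP2", "BP3", "BP4", "BP5", "BP6", "BP7")},
}

def classify(codes):
    """Combine a list of evidence codes into a five-tier classification."""
    path = Counter(STRENGTH[c] for c in codes if c.startswith("P"))
    ben = Counter(STRENGTH[c] for c in codes if c.startswith("B"))
    vs, st, mo, su = (path[k] for k in ("very_strong", "strong", "moderate", "supporting"))
    b_sa, b_st, b_su = ben["stand_alone"], ben["strong"], ben["supporting"]

    pathogenic = (
        (vs >= 1 and (st >= 1 or mo >= 2 or (mo == 1 and su >= 1) or su >= 2))
        or st >= 2
        or (st == 1 and (mo >= 3 or (mo == 2 and su >= 2) or (mo == 1 and su >= 4)))
    )
    likely_pathogenic = (
        (vs >= 1 and mo >= 1)
        or (st == 1 and 1 <= mo <= 2)
        or (st == 1 and su >= 2)
        or mo >= 3
        or (mo == 2 and su >= 2)
        or (mo == 1 and su >= 4)
    )
    benign = b_sa >= 1 or b_st >= 2
    likely_benign = (b_st == 1 and b_su >= 1) or b_su >= 2

    if (pathogenic or likely_pathogenic) and (benign or likely_benign):
        return "VUS"                      # contradictory evidence -> uncertain
    if pathogenic:
        return "P"
    if likely_pathogenic:
        return "LP"
    if benign:
        return "B"
    if likely_benign:
        return "LB"
    return "VUS"                          # criteria not met

# Examples given in the text: PVS1 + PM2 -> LP; PVS1 + PS2 -> P; BS1 + BP4 -> LB.
print(classify(["PVS1", "PM2"]))   # LP
print(classify(["PVS1", "PS2"]))   # P
print(classify(["BS1", "BP4"]))    # LB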
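The text also defines disagreement levels over the ordered scale B, LB, VUS, LP, P, and separates disagreements likely to affect medical management (P or LP versus any of VUS, LB, or B) from confidence-only or low-impact differences. The small helper below is a hypothetical illustration of those definitions, not code from the study.

# Hypothetical helpers mirroring the disagreement metrics described in the text.
ORDER = ["B", "LB", "VUS", "LP", "P"]   # ordered five-tier scale

def step_distance(calls):
    """Number of steps spanned by a set of classifications (0 to 4)."""
    idx = [ORDER.index(c) for c in calls]
    return max(idx) - min(idx)

def management_relevant(call_a, call_b):
    """True when exactly one of the two calls is P or LP (likely to change management)."""
    high = {"P", "LP"}
    return (call_a in high) != (call_b in high)

print(step_distance(["VUS", "LP", "P"]))   # 2
print(management_relevant("LP", "VUS"))    # True
print(management_relevant("P", "LP"))      # False (confidence difference only)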
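Agreement was quantified with Krippendorff's alpha, which handles multiple raters and missing ratings and corrects for chance agreement. The from-scratch sketch below computes nominal-scale alpha for a variants-by-laboratories table; treating the categories as nominal is an assumption (the text does not state the distance metric used), and the 95% bootstrap CIs (20,000 replications in the paper) could be layered on by resampling variants.

# Nominal Krippendorff's alpha: alpha = 1 - D_o / D_e, built from the
# coincidence matrix over units (variants) rated by multiple laboratories.
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(table):
    """table: list of units, each a list of ratings; use None for missing ratings."""
    coincidence = Counter()                     # o_ck weights
    for unit in table:
        ratings = [r for r in unit if r is not None]
        m = len(ratings)
        if m < 2:
            continue                            # units with <2 ratings carry no information
        for i, j in permutations(range(m), 2):  # ordered pairs from different raters
            coincidence[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in coincidence.items():
        n_c[c] += w
    n = sum(n_c.values())
    observed = sum(w for (c, k), w in coincidence.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n - 1)
    return 1.0 - observed / expected if expected else 1.0

# Toy example: three variants rated by up to three laboratories (None = not rated).
ratings = [
    ["P",   "P",   "LP"],
    ["VUS", "VUS", None],
    ["LB",  "B",   "LB"],
]
print(round(krippendorff_alpha_nominal(ratings), 3))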
- W2356454710 created "2016-06-24" @default.
- W2356454710 creator A5000356257 @default.
- W2356454710 creator A5006293085 @default.
- W2356454710 creator A5012005755 @default.
- W2356454710 creator A5012919211 @default.
- W2356454710 creator A5014144046 @default.
- W2356454710 creator A5028470777 @default.
- W2356454710 creator A5028969570 @default.
- W2356454710 creator A5030284017 @default.
- W2356454710 creator A5031879183 @default.
- W2356454710 creator A5038349072 @default.
- W2356454710 creator A5038545410 @default.
- W2356454710 creator A5040589454 @default.
- W2356454710 creator A5042514001 @default.
- W2356454710 creator A5044328665 @default.
- W2356454710 creator A5045615913 @default.
- W2356454710 creator A5045989105 @default.
- W2356454710 creator A5046265833 @default.
- W2356454710 creator A5049699006 @default.
- W2356454710 creator A5054258919 @default.
- W2356454710 creator A5057332639 @default.
- W2356454710 creator A5057829810 @default.
- W2356454710 creator A5059340927 @default.
- W2356454710 creator A5064496301 @default.
- W2356454710 creator A5069783162 @default.
- W2356454710 creator A5074214242 @default.
- W2356454710 creator A5075565472 @default.
- W2356454710 creator A5078888054 @default.
- W2356454710 creator A5079493528 @default.
- W2356454710 creator A5080516152 @default.
- W2356454710 creator A5085346874 @default.
- W2356454710 creator A5089918245 @default.
- W2356454710 creator A5091633479 @default.
- W2356454710 date "2016-06-01" @default.
- W2356454710 modified "2023-10-16" @default.
- W2356454710 title "Performance of ACMG-AMP Variant-Interpretation Guidelines among Nine Laboratories in the Clinical Sequencing Exploratory Research Consortium" @default.
- W2356454710 cites W1644197353 @default.
- W2356454710 cites W1823058240 @default.
- W2356454710 cites W1971571778 @default.
- W2356454710 cites W1976866489 @default.
- W2356454710 cites W1986505248 @default.
- W2356454710 cites W1986966812 @default.
- W2356454710 cites W2003873813 @default.
- W2356454710 cites W2044340012 @default.
- W2356454710 cites W2051978340 @default.
- W2356454710 cites W2061504941 @default.
- W2356454710 cites W2063752042 @default.
- W2356454710 cites W2068619217 @default.
- W2356454710 cites W2074393181 @default.
- W2356454710 cites W2095597256 @default.
- W2356454710 cites W2107918293 @default.
- W2356454710 cites W2108524651 @default.
- W2356454710 cites W2109378245 @default.
- W2356454710 cites W2112604819 @default.
- W2356454710 cites W2149490731 @default.
- W2356454710 cites W2167588661 @default.
- W2356454710 cites W2168462034 @default.
- W2356454710 cites W2327499427 @default.
- W2356454710 doi "https://doi.org/10.1016/j.ajhg.2016.03.024" @default.
- W2356454710 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/5005465" @default.
- W2356454710 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/27392081" @default.
- W2356454710 hasPublicationYear "2016" @default.
- W2356454710 type Work @default.
- W2356454710 sameAs 2356454710 @default.
- W2356454710 citedByCount "408" @default.
- W2356454710 countsByYear W23564547102016 @default.
- W2356454710 countsByYear W23564547102017 @default.
- W2356454710 countsByYear W23564547102018 @default.
- W2356454710 countsByYear W23564547102019 @default.
- W2356454710 countsByYear W23564547102020 @default.
- W2356454710 countsByYear W23564547102021 @default.
- W2356454710 countsByYear W23564547102022 @default.
- W2356454710 countsByYear W23564547102023 @default.
- W2356454710 crossrefType "journal-article" @default.
- W2356454710 hasAuthorship W2356454710A5000356257 @default.
- W2356454710 hasAuthorship W2356454710A5006293085 @default.
- W2356454710 hasAuthorship W2356454710A5012005755 @default.
- W2356454710 hasAuthorship W2356454710A5012919211 @default.
- W2356454710 hasAuthorship W2356454710A5014144046 @default.
- W2356454710 hasAuthorship W2356454710A5028470777 @default.
- W2356454710 hasAuthorship W2356454710A5028969570 @default.
- W2356454710 hasAuthorship W2356454710A5030284017 @default.
- W2356454710 hasAuthorship W2356454710A5031879183 @default.
- W2356454710 hasAuthorship W2356454710A5038349072 @default.
- W2356454710 hasAuthorship W2356454710A5038545410 @default.
- W2356454710 hasAuthorship W2356454710A5040589454 @default.
- W2356454710 hasAuthorship W2356454710A5042514001 @default.
- W2356454710 hasAuthorship W2356454710A5044328665 @default.
- W2356454710 hasAuthorship W2356454710A5045615913 @default.
- W2356454710 hasAuthorship W2356454710A5045989105 @default.
- W2356454710 hasAuthorship W2356454710A5046265833 @default.
- W2356454710 hasAuthorship W2356454710A5049699006 @default.
- W2356454710 hasAuthorship W2356454710A5054258919 @default.
- W2356454710 hasAuthorship W2356454710A5057332639 @default.
- W2356454710 hasAuthorship W2356454710A5057829810 @default.
- W2356454710 hasAuthorship W2356454710A5059340927 @default.
- W2356454710 hasAuthorship W2356454710A5064496301 @default.