Part of the process of verifying or validating a method to confirm that it is suitable for use is an assessment of precision. By this we mean the closeness of agreement between independent results of measurements obtained under stipulated conditions; it is solely related to the random error of measurements and has no relation to trueness/accuracy.4
There is some variation in the terminology used but for the purposes of this discussion, repeatability, also known as within-run precision, is defined as the closeness of agreement between results of successive measurements obtained under identical conditions. Reproducibility is at the other extreme and refers to the closeness of agreement between results of successive measurements obtained under changed conditions (time, operators, calibrators, reagents, and laboratory). For the purposes of this discussion reproducibility will not be considered, as it involves multiple laboratories. Instead total precision within a laboratory (within-laboratory precision) will be assessed.
While the term precision relates to the concept of variation around a central value, imprecision is actually what is measured. For a normal distribution the measure of imprecision is the standard deviation (SD). Alternatively one can use the variance, which is simply the square of the SD.
It is generally assumed in the laboratory that the variation associated with repeated analysis will follow a normal distribution, also known as the Laplace-Gaussian or Gaussian distribution. If this is true then, using the principle of analysis of variance components:

σ²total = σ²between-run + σ²within-run

where σ is the SD and σ² the variance.
Note, some authors refer to total variation as just the between-run component instead of the combined between-run and within-run variation shown above, so care must be taken to know which term is being referred to. CLSI now uses the term within-laboratory precision to denote the total precision within the same facility using the same equipment,1 and this term will be used for this concept throughout this paper.
In a production-like process, such as measuring an analyte in some matrix, the mean (μ) and SD (σ) are not known and can only be estimated. The estimates of μ and σ are usually denoted by x̄ and s respectively. For n measurements we have:

x̄ = Σxi / n

s = √[ Σ(xi − x̄)² / (n − 1) ]
The coefficient of variation (CV) is defined as:

CV (%) = (s / x̄) × 100
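These estimators are easy to check in code. A minimal Python sketch, using the three day-1 calcium replicates reported in Table 1 (variable names are illustrative):

```python
import statistics

# Day-1 calcium replicates from Table 1 (mmol/L)
replicates = [2.015, 2.013, 1.963]

x_bar = statistics.mean(replicates)   # estimate of the mean, x̄
s = statistics.stdev(replicates)      # sample SD, with the n - 1 denominator
cv_percent = s / x_bar * 100          # coefficient of variation, CV (%)

print(round(x_bar, 3))       # 1.997
print(round(cv_percent, 1))  # 1.5
```

Note that `statistics.stdev` uses the n − 1 denominator shown above; `statistics.pstdev` (the population form, dividing by n) would understate s for small samples.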
When evaluating the precision of an assay, the trivial approach for estimating repeatability at any given level is to perform 20 replicate analyses in a single run on a single day. Similarly, the within-laboratory precision is estimated by measuring a sample 20 times over multiple days. Unfortunately this approach is insufficient: it tends to underestimate imprecision, because the operating conditions in effect at the time may not reflect usual operating parameters.2
CLSI recommends an alternative approach that is described in documents EP05-A2 and EP15-A2.1,2 The two documents are intended for different purposes. EP05-A2 should be used to validate a method against user requirements, and is generally used by reagent and instrument suppliers to demonstrate the precision of their methods. In contrast, EP15-A2 is intended to verify that a laboratory’s performance is consistent with claims made by the manufacturer. Thus a laboratory may choose to use an EP15-A2 based assessment if it is verifying a method on an automated platform using the manufacturer’s reagents. However, for a method developed in-house a higher level of proof is required to validate the method, in which case EP05-A2 would be the appropriate guideline to use.
Various materials may be used to complete the assessment with either protocol. These include pooled patient samples, quality control material, or commercial standard material with known values. When using quality control samples, these should be different to those used to ensure the instrument is in control at the time of the assessment.
As the period of assessment is quite short, the total SD or within-laboratory SD derived from these experiments should not generally be used to define acceptability limits for internal quality control. For this, longer-term assessment is required.
The EP05-A2 protocol recommends that:
The assessment is performed on at least two levels, as precision can differ over the analytical range of an assay.
Each level is run in duplicate, with two runs per day over 20 days, and each run separated by a minimum of two hours.
There should be at least one quality control (QC) sample in each run. If QC material is being used for the precision assessment, it should be different to that used to control the assay.
The order of analysis of test materials and QC for each run or day should be changed.
To simulate actual operation, include at least ten patient samples in each run.
The EP15-A2 protocol is similar except that the experiment is undertaken with three replicates over five days for at least two levels. The reader is referred to the CLSI documents for details.1,2
When undertaking the assessment the data must be assessed for outliers, which are considered to be present if the absolute difference between replicates exceeds 5.5 times the SD determined in the preliminary precision test. If an outlier is found the pair should be rejected and the cause investigated and resolved before repeating the run. The figure of 5.5 is derived from the upper 99.9% value of the normalised range for differences between two populations.
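The outlier rule can be sketched as a simple check. In the sketch below the function name and the preliminary SD of 0.03 mmol/L are illustrative assumptions, not values from the text:

```python
def is_outlier_pair(x1, x2, preliminary_sd):
    """Flag a replicate pair whose absolute difference exceeds
    5.5 times the SD from the preliminary precision test."""
    return abs(x1 - x2) > 5.5 * preliminary_sd

# Assuming a preliminary-precision SD of 0.03 mmol/L (limit = 0.165):
print(is_outlier_pair(2.015, 2.013, 0.03))  # False: difference is 0.002
print(is_outlier_pair(2.015, 2.215, 0.03))  # True: difference is 0.200
```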
Estimation of Repeatability and Within-Laboratory Precision
The following example relates to the verification of performance of calcium according to EP15-A2 using a five day protocol. For the purposes of this example the results of only a single level are shown (Table 1).
Calcium results for level 1 (mmol/L).
Repeatability is estimated using the equation below:

sr = √[ Σd Σr (xdr − x̄d)² / (D(n − 1)) ]

where:
D = total number of days.
n = total number of replicates per day.
xdr = result for replicate r on day d.
x̄d = average of all replicates on day d.
The first step is to calculate the mean of the replicates for each day, then for each result subtract the mean for that day and square the resultant value. For example, on day 1 the average of the three values is (2.015 + 2.013 + 1.963)/3 = 1.997. The first replicate on day 1 is 2.015, so we calculate (2.015 – 1.997)² = 0.00032.
Table 2 shows the results of each of these calculations.
The sum of the squared differences is 0.0055, and we know D = 5 and n = 3. Thus:

sr = √[ 0.0055 / (5 × (3 − 1)) ] = √0.00055 = 0.023 mmol/L
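The calculation can be sketched in Python. Since Table 1 is not reproduced in full here, the within-day sum of squares of 0.0055 reported in the text is taken as given:

```python
from math import sqrt

D = 5               # total number of days
n = 3               # replicates per day
ss_within = 0.0055  # sum of squared deviations from each daily mean (Table 2)

# Repeatability: pooled within-day SD
s_r = sqrt(ss_within / (D * (n - 1)))
print(round(s_r, 3))  # 0.023 mmol/L
```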
The next step is to calculate the variance of the daily means (sb²) using the equation:

sb² = Σd (x̄d − x̿)² / (D − 1)

where:
D = total number of days.
x̄d = average of all replicates on day d.
x̿ = average of all results.
Using the values from our example the mean of all the results is 1.984 mmol/L. On day 1 the mean of the three replicates was 1.997 (see Table 2), so the square of the difference from the overall mean is (1.997 – 1.984)² = 0.000162.
Table 3 shows the results of the same calculation for the remaining days.
Summing the squares of the differences gives a total of 0.00127. Thus the variance of the daily means is:

sb² = 0.00127 / (5 − 1) = 0.000318
Finally, we can calculate the total or within-laboratory SD (sl) using the equation:

sl = √[ ((n − 1)/n) × sr² + sb² ]

where n is the number of replicates per day. The (n − 1)/n factor avoids double-counting: the variance of the daily means, sb², already contains a contribution of sr²/n from within-day variation.
Again, using the values from the example:

sl = √[ (2/3) × 0.00055 + 0.000318 ] = √0.000685 = 0.026 mmol/L
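Continuing the sketch, the within-laboratory SD follows from the two sums of squares given in the text:

```python
from math import sqrt

D, n = 5, 3
s_r2 = 0.0055 / (D * (n - 1))  # repeatability variance, sr^2
s_b2 = 0.00127 / (D - 1)       # variance of the daily means, sb^2

# Within-laboratory SD: sb^2 already includes sr^2 / n from within-day
# variation, hence the (n - 1)/n factor on the repeatability variance.
s_l = sqrt((n - 1) / n * s_r2 + s_b2)
print(round(s_l, 3))  # 0.026 mmol/L
```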
Evaluation of Results
As alluded to above, EP15-A2 is generally used to verify that a method is performing as is claimed by the manufacturer. Therefore the imprecision estimates calculated above must be compared to the manufacturer’s claim. If the repeatability and within-laboratory SD are less than that indicated by the manufacturer, then the user has demonstrated precision consistent with the claim and no further calculations are required. However, if the values achieved are greater than those reported by the manufacturer, a statistical test needs to be performed to determine whether this difference is statistically significant.
Repeatability Verification Value
In order to compare the estimated repeatability to a claimed value we can calculate the critical or verification value using the equation:

V = σr × √(C/v)

where:
σr is the claimed repeatability.
C is the 1 − α/q percentage point of the Chi-square distribution.
α is the false rejection rate and q is the number of levels tested.
v is the degrees of freedom, in this instance equal to D × (n − 1).
Using the example data and assuming the claimed repeatability is an improbably low CV of 1.1%, the claimed SD is 1.984 × 0.011 = 0.022 mmol/L, which is less than the estimated repeatability of 0.023 mmol/L. In order to calculate the verification value we must first calculate the repeatability degrees of freedom, equal to D × (n − 1). For the EP15-A2 protocol of three replicates over five days we get v = 5 × (3 − 1) = 10.
To calculate C, α is traditionally taken as 5% and q is equal to 2. Thus we need to find the 97.5% percentage point of the Chi-square distribution with 10 degrees of freedom. This can be looked up using statistical tables or by use of the spreadsheet function CHIINV(α/q,v) = CHIINV(0.05/2,10) = 20.48.3
Therefore the verification value is:

V = 0.022 × √(20.48/10) = 0.031 mmol/L
As the estimated repeatability is less than or equal to the verification value the data are consistent with the manufacturer’s claim.
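The verification test can be sketched as follows. To stay within the standard library, the Chi-square value is taken from statistical tables (CHIINV(0.05/2, 10) = 20.48, as in the text) rather than computed:

```python
from math import sqrt

D, n = 5, 3
sigma_r_claim = 0.022  # manufacturer's claimed repeatability SD (mmol/L)
s_r_est = 0.023        # repeatability estimated from the experiment (mmol/L)

v = D * (n - 1)        # repeatability degrees of freedom = 10
C = 20.48              # CHIINV(0.05/2, 10), from statistical tables

verification_value = sigma_r_claim * sqrt(C / v)
print(round(verification_value, 3))   # 0.031 mmol/L
print(s_r_est <= verification_value)  # True: consistent with the claim
```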
Within-Laboratory Precision Verification Value
An analogous method is used to compare the estimated within-laboratory SD to the manufacturer’s claim, with the verification value calculated using the equation:

V = σl × √(C/T)

where σl is the claimed within-laboratory or total SD and T is the effective degrees of freedom for the within-laboratory precision estimate. T is best calculated in a spreadsheet and is given by the Satterthwaite approximation:

T = [ ((n − 1)/n) × sr² + sb² ]² / { [ ((n − 1)/n) × sr² ]² / (D(n − 1)) + (sb²)² / (D − 1) }
For our example, assume a manufacturer’s claim of 1.2%, which corresponds to a SD of 0.024 mmol/L. Therefore:

T = (0.000685)² / [ (0.000367)² / 10 + (0.000318)² / 4 ] ≈ 12.1

C = CHIINV(0.05/2, 12) = 23.34

V = 0.024 × √(23.34/12.1) = 0.033 mmol/L
Again, as the within-laboratory SD is less than or equal to the verification value the data are consistent with the manufacturer’s claim.
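A sketch of the within-laboratory verification, again with the Chi-square value taken from tables (here for 12 degrees of freedom, T rounded down, as spreadsheet CHIINV truncates fractional degrees of freedom):

```python
from math import sqrt

D, n = 5, 3
s_r2 = 0.0055 / (D * (n - 1))     # repeatability variance, sr^2
s_b2 = 0.00127 / (D - 1)          # variance of the daily means, sb^2
s_l2 = (n - 1) / n * s_r2 + s_b2  # within-laboratory variance, sl^2

# Effective degrees of freedom (Satterthwaite approximation)
T = s_l2**2 / (((n - 1) / n * s_r2)**2 / (D * (n - 1))
               + s_b2**2 / (D - 1))

sigma_l_claim = 0.024  # claimed within-laboratory SD (1.2% of 1.984 mmol/L)
C = 23.34              # CHIINV(0.05/2, 12), from statistical tables

verification_value = sigma_l_claim * sqrt(C / T)
print(round(T, 1))                       # 12.1
print(round(verification_value, 3))      # 0.033 mmol/L
print(sqrt(s_l2) <= verification_value)  # True: consistent with the claim
```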
Special thanks to Amanda Caswell for her careful review of the manuscript.
Competing Interests: None declared.
1. Clinical and Laboratory Standards Institute. Evaluation of precision performance of quantitative measurement methods. Wayne, PA, USA: CLSI; 2004. CLSI document EP05-A2.
2. Clinical and Laboratory Standards Institute. User verification of performance for precision and trueness; approved guideline. 2nd ed. Wayne, PA, USA: CLSI; 2005. CLSI document EP15-A2.
3. Australasian Association of Clinical Biochemists Website. Resources – Tools. (Accessed 7 April 2008). www.aacb.asn.au.
4. Linnet K, Boyd JC. Selection and analytical evaluation of methods with statistical techniques. In: Burtis CA, Ashwood ER, Bruns DE, editors. Tietz Textbook of Clinical Chemistry and Molecular Diagnostics. 4th ed. St Louis: Elsevier Saunders; 2006. p. 357.