Using next generation electronic medical records for laboratory quality monitoring
There is a saying that a service or product can have only two of the following three features: cheap, fast, or good. The increasing commoditization of laboratory services often emphasizes the former two at the expense of quality.
Clinical laboratories produce results that physicians rely upon to make diagnostic and management decisions. The results generated by laboratories should meet certain quality specifications to be clinically fit for purpose. At present, this is monitored through a combination of internal quality control and external quality assurance systems. These complementary quality systems rely upon periodic testing of a sample with a known value and checking for significant deviation from the known/target value against defined control limits.
The internal quality control system was developed at a time when laboratory testing was performed manually and in small batches. Tests were often developed in-house, and the laboratory practitioner usually had a good analytical understanding of the methodology employed. This ensured that laboratory practitioners always examined and interpreted the raw analytical data, which helped detect any significant deviation in analytical performance before incorrect patient results were released. The relatively leisurely pace also gave laboratory practitioners ample time to troubleshoot any unusual analytical performance that was detected. Given the small number of patients being tested at the time, testing an internal quality control sample at the beginning, the end, or both ends of a batch was likely to detect significant performance deviations from the ‘in control’ state.
By contrast, modern-day laboratory practice is driven by highly automated instruments that perform multiple tests simultaneously at very high throughputs, which can run into thousands of patient samples per hour. Turnaround time from sample receipt in the laboratory to a reported result on the doctor’s desktop is often just a few hours. Furthermore, in an effort to simplify and automate laboratory testing, most of the analytical data generated by the instruments are analysed and reviewed in middleware using electronic rules and algorithms. In other words, the instruments represent convenient ‘black box’ solutions over which laboratory practitioners have little knowledge or control of the analytical process or data interpretation. These factors have greatly eroded the technical ability and independence of laboratory practitioners to detect and correct analytical deviations, and can lead to erroneous patient results being released.
Despite these changes in laboratory practice, the modern-day laboratory still largely employs the historical internal quality control practices, where a quality control sample is tested at the beginning of a run or at fixed intervals throughout the day. Internal quality control practices cannot be the same for a laboratory testing 300 samples per day and one testing 10,000 samples per day (1). The continued use of traditional internal quality control carries significant clinical risk of missed error detection, with the impact greatly amplified by the high test volumes of modern laboratories. It is unsurprising that large-scale laboratory errors are still being reported even in laboratories that employ ‘state of the art’ internal quality control systems (2-4). It has been shown that the historical internal quality control practices lack sufficient power to detect significant errors against the increasingly stringent quality specifications demanded by medical use (4,5).
Another casualty of modern, electronically driven medical practice is the reduced interaction between laboratory practitioners and clinical practitioners, particularly with the advent of electronic test ordering systems. The professional courtesy of providing clinical details along with laboratory requests is fast becoming a thing of the past; it is now more common to receive laboratory requests without clinical details. This makes it significantly harder for laboratory practitioners to interpret laboratory results or detect a trend in the right clinical context, without which a laboratory result is just a number without context or value.
It is clear that laboratory practices need to change. In particular, there is a need to adopt quality systems that continuously monitor the analytical performance of instruments. Such techniques include the moving sum of outliers (4), moving average (5-7), CUSUM-logistic regression (8), and average of delta (9). These techniques continuously calculate statistics from individual patient results to monitor trends in the population mean or SD that may signify significant shifts in all reported results and lead to potential misclassification of patients.
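To illustrate the general idea, rather than the validated procedures of references (4-9), the minimal sketch below implements a moving average monitor over a stream of patient glucose results. The window size, truncation limits, and control limits are illustrative assumptions only; in practice they must be derived and verified from each laboratory's own data.

```python
from collections import deque

# Illustrative parameters -- real limits must be derived from each
# laboratory's own data; the values below are assumptions.
WINDOW_SIZE = 50             # number of accepted patient results per window
TRUNCATION = (3.0, 8.0)      # accept glucose results in this range (mmol/L)
CONTROL_LIMITS = (4.8, 5.6)  # alarm if the window mean drifts outside

def moving_average_qc(results):
    """Yield (index, window_mean, in_control) for a stream of patient results.

    Results outside the truncation limits are excluded so that grossly
    abnormal patients do not dominate the statistic.
    """
    window = deque(maxlen=WINDOW_SIZE)
    for i, value in enumerate(results):
        if TRUNCATION[0] <= value <= TRUNCATION[1]:
            window.append(value)
        if len(window) == WINDOW_SIZE:
            mean = sum(window) / WINDOW_SIZE
            in_control = CONTROL_LIMITS[0] <= mean <= CONTROL_LIMITS[1]
            yield i, mean, in_control
```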
The main technical difficulty associated with these techniques is the underlying assumption that the patient population being tested is relatively stable, which may not always be true. This makes it challenging to determine whether a shift in the distribution is caused by a change in the patients being measured, for example more diabetic patients coming from a particular clinic on a certain day of the week, or by a true change in the analytical performance of the glucose method. As aptly put by some authors, the objective is to monitor the analytical performance of the method, not the patients being tested that day (10). While several approaches can increase the effectiveness of these techniques, including selecting a ‘normal’ population, applying truncation limits to the data (i.e., removing patients with abnormal results) (5), and using simulated annealing algorithms to optimize the monitoring parameters (10), they do not completely exclude the possibility of an underlying patient population shift. Furthermore, certain tests are not performed in patients from a ‘normal’ population (e.g., tumour markers, therapeutic drug monitoring, cardiac markers, endocrine hormones), further challenging the above assumption.
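To make this point concrete, the short simulation below (reusing the moving_average_qc sketch above, with purely illustrative distributions) shows how a change in patient mix alone, with no analytical fault, can push the window mean outside its limits.

```python
import random

random.seed(1)

# Purely illustrative simulation: a stable outpatient population is
# followed by a clinic day with many diabetic patients. The analytical
# method is unchanged, yet the window mean drifts and the monitor alarms.
stable = [random.gauss(5.2, 0.4) for _ in range(500)]
diabetic_clinic = [random.gauss(7.5, 1.5) for _ in range(200)]

alarms = [i for i, mean, in_control
          in moving_average_qc(stable + diabetic_clinic)
          if not in_control]
print(f"first alarm at result #{alarms[0]}" if alarms else "no alarm raised")
```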
The next-generation electronic medical record promises to bring together clinical databases that are traditionally organized into silos (11,12). This opens up the possibility for the laboratory to match its laboratory trends with clinical information. For example, a laboratory employing the moving sum of outliers technique may detect an increased number of patients with elevated insulin-like growth factor-1 (IGF-1) concentrations. By extracting the contemporaneous clinical information, it can be determined whether there has been a corresponding increase in diagnoses of acromegaly, which is a relatively rare disorder (prevalence: 50–60 per million population; incidence: 3–4 per million per year). If there has not, the laboratory can be confident that the observed shift in the number of patients with elevated IGF-1 levels is unlikely to be genuine and initiate a detailed investigation for analytical errors. It is not difficult to imagine the same scenario for therapeutic drug monitoring, where trends in drug concentrations can be matched with prescription patterns. It is even more tantalizing to think about the potential power of such tools when laboratories and health systems share the same information technology platform.
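A rough sketch of how such a cross-check might look is given below. It is not the published moving sum of outliers procedure (4), and the IGF-1 cut-off, window size, alarm threshold, and the form of the clinical data it consumes are all hypothetical.

```python
from collections import deque

# All thresholds here are hypothetical placeholders for illustration;
# real cut-offs and alarm limits must be derived from local data.
IGF1_UPPER_LIMIT = 50.0    # nmol/L, assumed upper reference limit
WINDOW_SIZE = 100          # patient results per moving window
MAX_EXPECTED_ELEVATED = 5  # alarm threshold for elevated results per window

def moving_sum_of_outliers(results):
    """Yield (index, n_elevated, alarm) over a stream of IGF-1 results."""
    flags = deque(maxlen=WINDOW_SIZE)
    for i, value in enumerate(results):
        flags.append(value > IGF1_UPPER_LIMIT)
        if len(flags) == WINDOW_SIZE:
            n_elevated = sum(flags)
            yield i, n_elevated, n_elevated > MAX_EXPECTED_ELEVATED

def corroborate_alarm(new_acromegaly_diagnoses):
    """Cross-check a QC alarm against contemporaneous clinical data.

    If elevated IGF-1 results rose without a matching rise in new
    diagnoses, an analytical cause is more likely than a genuine
    population shift and a technical investigation is warranted.
    """
    if new_acromegaly_diagnoses == 0:
        return "investigate analytical error"
    return "possible genuine population shift; review clinically"
```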
It is possible that internal quality control systems may become less relevant when such practice becomes commonplace. The role of the external quality assurance program will then become the periodic quality spot check of analytical systems, provided the materials have a matrix matching that of clinical samples and target values assigned by reference methods, are administered frequently enough, and have their results returned in a timely manner.
These new tools bring the exciting possibility of a laboratory practice that is responsive (fast) and leverages existing data (cheap) to improve the quality system (good).
Acknowledgments
Funding: None.
Footnote
Provenance and Peer Review: This article was commissioned by the editorial office, Journal of Laboratory and Precision Medicine for the series “Clinical Database in Laboratory Medicine Research Column”. The article has undergone external peer review.
Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/jlpm.2017.08.06). The series “Clinical Database in Laboratory Medicine Research Column” was commissioned by the editorial office without any funding or sponsorship. Tony Badrick served as Guest Editor of the series and serves as an unpaid editorial board member of Journal of Laboratory and Precision Medicine from December 2016 to November 2018. Tze Ping Loh served as Guest Editor of the series and serves as an unpaid editorial board member of Journal of Laboratory and Precision Medicine from February 2017 to January 2019. The authors have no other conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Yundt-Pacheco J, Parvin CA. Validating the performance of QC procedures. Clin Lab Med 2013;33:75-88. [Crossref] [PubMed]
- Algeciras-Schimnich A, Bruns DE, Boyd JC, et al. Failure of current laboratory protocols to detect lot-to-lot reagent differences: findings and possible solutions. Clin Chem 2013;59:1187-94. [Crossref] [PubMed]
- Loh TP, Lee LC, Sethi SK, et al. Clinical consequences of erroneous laboratory results that went unnoticed for 10 days. J Clin Pathol 2013;66:260-1. [Crossref] [PubMed]
- Liu J, Tan CH, Badrick T, et al. Moving sum of number of positive patient result as a quality control tool. Clin Chem Lab Med 2017; [Epub ahead of print]. [Crossref] [PubMed]
- Liu J, Tan CH, Loh TP, et al. Verification of out-of-control situations detected by "average of normal" approach. Clin Biochem 2016;49:1248-53. [Crossref] [PubMed]
- Hayashi S, Ichihara K, Kanakura Y, et al. A new quality control method based on a moving average of "latent reference values" selected from patients' daily test results. Rinsho Byori 2004;52:204-11. [PubMed]
- Usta M, Aral H, Mete Çilingirtürk A, et al. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9. Scand J Clin Lab Invest 2016;76:553-60. [Crossref] [PubMed]
- Sampson ML, Gounden V, van Deventer HE, et al. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results. Clin Biochem 2016;49:201-7. [Crossref] [PubMed]
- Jones GR. Average of delta: a new quality control tool for clinical laboratories. Ann Clin Biochem 2016;53:133-40. [Crossref] [PubMed]
- Ng D, Polito FA, Cervinski MA. Optimization of a Moving Averages Program Using a Simulated Annealing Algorithm: The Goal is to Monitor the Process Not the Patients. Clin Chem 2016;62:1361-71. [Crossref] [PubMed]
- Tolan NV, Parnas ML, Baudhuin LM, et al. "Big Data" in Laboratory Medicine. Clin Chem 2015;61:1433-40. [Crossref] [PubMed]
- Loh TP. Knowledge is power: harnessing clinical database for better informed laboratory medicine practice. J Lab Precis Med 2017;2:44. [Crossref]
Cite this article as: Loh TP, Badrick T. Using next generation electronic medical records for laboratory quality monitoring. J Lab Precis Med 2017;2:61.