Keywords: method evaluation, quality control, quality management, descriptive statistics, reference interval, diagnostic efficiency, predictive values, specificity, sensitivity, imprecision, inaccuracy

1. Introduction to method evaluation and quality management

The current practice of medicine shows that most medical decisions are made using laboratory data. It is therefore of great significance that the results produced by the laboratory be highly accurate. Determining and maintaining accuracy entails considerable cost and effort, and involves several approaches chosen according to the complexity of the test in question [1]. Before the decision-making process can begin, one must recognize the quality that is required and know how to measure it. Several statistical techniques enable the medical practitioner to measure the resulting quality. Before a new test is put into use, it must be determined whether the test performs acceptably: method evaluation is used to verify the acceptability of new methods before patient results are reported. Once a method has been implemented, the laboratory must ensure that it remains valid over time. Quality control is the process that maintains the validity of laboratory results over time. Both concepts, method evaluation and quality control, are essential constituents of quality management, which directs the entire testing process toward the chief goal of enhancing the accuracy of laboratory results [2]. This chapter presents the basic statistical concepts and provides a general overview of the procedures required to implement a new method and to ensure its continued accuracy.

2. Basic concepts of quality control

Clinical laboratories generate a wide range of results every day. This pool of clinical laboratory data must be summarized in order to monitor test performance. Quality control, the basis for tracking performance, relies on descriptive statistics, which involve three key concepts: measures of spread, shape, and center.

## 2.1. Descriptive statistics: measures of spread, shape, and center

On close examination, a collection of nearly identical items typically exhibits at least some differences in a given property, such as smoothness, color, potency, volume, weight, or size. Likewise, laboratory data will contain at least some measurement differences. For example, if the glucose in a specimen is measured a hundred times in a row, a range of results will emerge, because the values are affected by several sources of variation. Although the measurements differ, the resulting values form patterns that can be visualized and analyzed collectively. Laboratorians perceive and describe these patterns using graphical representations and descriptive statistics. When comparing and analyzing sets of laboratory data, the patterns can be described by their spread, shape, and center. Although comparing the centers of datasets is most common, comparing their spread is considerably more powerful: data dispersion enables laboratory practitioners to evaluate the predictability, or lack of it, in a laboratory test or measurement.

### 2.2. Measures of center





The three typically used descriptions of center are the mean, the median, and the mode. The mean is sometimes termed the average of the data values. The median is the "middle" point of the data and is frequently used with skewed data. The mode is rarely used to describe the center of the data, but it is often used to describe data that have two centers, that is, bimodal data. The mean of laboratory data is obtained by summing all the data values and dividing by the total number of samples or objects (Figure 1). Computing the median requires ranking the data values in ascending or descending order. If two values occupy the middle of the data, the median is the average of those two middle values. The mode is the most frequently occurring value in the dataset; it is often considered together with the shape of the data, as in bimodal distributions.
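As a minimal sketch of these definitions, the three measures can be computed with Python's standard statistics module. The glucose replicates below are hypothetical values invented purely for illustration:

```python
from statistics import mean, median, multimode

# Hypothetical replicate glucose results (mg/dL), for illustration only
glucose = [102, 99, 101, 100, 103, 101, 98, 101]

print("mean:  ", mean(glucose))       # sum of the values / number of values -> 100.625
print("median:", median(glucose))     # middle value after ranking; average of the two
                                      # middle values when n is even -> 101
print("mode:  ", multimode(glucose))  # most frequent value(s); returns more than one
                                      # value for bimodal data -> [101]
```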

### 2.3. Measures of spread

The spread of the data depicts how the data values are distributed. The spread also describes the relationship of all the data points to the mean. The common descriptions of spread are the range, the standard deviation (SD), and the coefficient of variation (CV). The range is simply the largest value in the dataset minus the smallest value, and it denotes the extremes of the data. The standard deviation is the most frequently used measure of variation. The SD and the variance denote the "average" distance from the center of the data (the mean) to every other value in the dataset.

Figure 1. Basic measurements of data include the center, spread, and shape [1].

Furthermore, the CV enables laboratorians to compare SDs that are expressed in different units. Computing a dataset's SD requires first computing the dataset's variance (s<sup>2</sup>). The variance is the average of the squared distances of all the values in the dataset from the mean of the set; as a measure of dispersion, it reflects the difference between each data value and the average of the data. The SD is then simply the square root of the variance. An additional way of expressing the SD is the CV, which is computed by dividing the SD by the mean of the data and multiplying the quotient by 100 to express it as a percentage (Figure 1). The CV simplifies the comparison of SDs of test results expressed in different concentrations and units. The CV is used extensively in summarizing QC data, and it can be less than 1% for highly precise analyzers.
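In symbols, the quantities described above take the standard sample forms, where $x_i$ are the $n$ data values and $\bar{x}$ is their mean (the $n-1$ denominator is the usual sample-variance convention):

$$
\text{Range} = x_{\max} - x_{\min}, \qquad
s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}, \qquad
\mathrm{SD} = s = \sqrt{s^2}, \qquad
\mathrm{CV} = \frac{s}{\bar{x}} \times 100\%
$$

For example, a QC material with a mean of 100 mg/dL and an SD of 1.5 mg/dL has a CV of 1.5%.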

The difference plot displays the bias (difference) between the test method values and the reference method values across the range of the dataset. The difference plot also enables simple comparison of the differences against previously established maximum limits [1]. The difference between the reference method and the test method represents the error. Comparison-of-methods (COM) experiments involve two types of error: systematic error and random error. Random error is present in nearly all measurements and can be either negative or positive; it can arise from environmental variation, the instrument, the reagent, and operator variation. Computing the random error requires calculating the SD of the data about the regression line. This error is the average distance between the regression line and the data: a larger random error implies more widely scattered data values, whereas if the data points lay perfectly on the regression line, the random error (the standard error) of the dataset would be zero. Systematic error, on the other hand, affects observations consistently and in one direction. The slope and the y-intercept provide estimates of the systematic error. Systematic error can be categorized into constant and proportional error. Constant systematic error exists when a continual difference between the test method and the comparative method values persists, irrespective of the concentration. Proportional error exists when the differences between the test method and the comparative method values are proportional to the analyte concentration. Whenever the slope is not equal to one, a proportional error is present in the dataset.
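As a brief sketch of these regression-based error estimates (the paired results below are invented for illustration, not real method-comparison data):

```python
import numpy as np

# Hypothetical paired results from a comparison-of-methods experiment:
# x = reference method, y = test method (same specimens, same units)
x = np.array([50.0, 80.0, 120.0, 160.0, 200.0, 250.0, 300.0])
y = np.array([52.1, 81.5, 123.9, 163.2, 205.8, 256.4, 309.0])

# Ordinary least-squares fit: y = slope * x + intercept
slope, intercept = np.polyfit(x, y, 1)

# Systematic error: a slope different from 1 suggests proportional error,
# an intercept different from 0 suggests constant error
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")

# Random error: SD of the data about the regression line (S_y|x)
residuals = y - (slope * x + intercept)
s_yx = np.sqrt(np.sum(residuals**2) / (len(x) - 2))
print(f"S_y|x (random error) = {s_yx:.2f}")
```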

4. Inferential statistics

Inferential statistics are the next degree of complexity beyond paired descriptive statistics. They are used to draw conclusions, or inferences, concerning the means or SDs of two datasets. Inferential statistics depend on the shape of the data's distribution: the distribution is key in determining which type of inferential test is appropriate for analyzing the data. Data showing a Gaussian distribution are normally analyzed with "parametric" tests, such as Student's t-test or analysis of variance (ANOVA). "Nonparametric" analyses are used for data that are not normally distributed. Reference interval studies mostly rely on nonparametric tests, since population data frequently show skewness [1]. As a precaution, note that an inappropriate analysis of sound data can lead the practitioner to a wrong conclusion.
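A minimal sketch of this decision, using SciPy and simulated datasets (the data and the 0.05 cutoff are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two simulated sets of results, e.g. the same control measured on two instruments
a = rng.normal(loc=100.0, scale=2.0, size=40)
b = rng.normal(loc=101.5, scale=2.0, size=40)

# Check the shape of the distributions first (Shapiro-Wilk normality test)
_, p_a = stats.shapiro(a)
_, p_b = stats.shapiro(b)

if p_a > 0.05 and p_b > 0.05:
    # Approximately Gaussian: parametric comparison of means (Student's t-test)
    print("t-test:", stats.ttest_ind(a, b))
else:
    # Not normally distributed: nonparametric alternative (Mann-Whitney U test)
    print("Mann-Whitney U:", stats.mannwhitneyu(a, b))
```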

5. Reference interval studies

Laboratory test data are used to make clinical diagnoses, manage therapy, and assess physiologic function. When interpreting laboratory data, clinicians compare the measured test result from a particular patient with a reference interval. A reference interval comprises the data values that define the range of observations. All normal ranges are reference intervals, but not all reference intervals are normal ranges. The following example asserts the validity of this statement.
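Before that example, it is worth noting how such an interval is commonly computed: a central 95% reference interval is often estimated nonparametrically from the 2.5th and 97.5th percentiles of results from a reference population. The skewed data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated results from a healthy reference population (right-skewed,
# as population data often are), in arbitrary units
reference_values = rng.lognormal(mean=3.0, sigma=0.25, size=200)

# Nonparametric central 95% reference interval:
# the 2.5th and 97.5th percentiles of the observed values
lower, upper = np.percentile(reference_values, [2.5, 97.5])
print(f"reference interval: {lower:.1f} to {upper:.1f}")
```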

