Blame the Laboratory: Understanding Analytical Error


John McConnell, Brian K. Nunnally, and Bernard McGarvey

Analysis and Control of Variation
Coordinated by John McConnell

Analysis and Control of Variation is dedicated to revealing weaknesses in existing approaches to understanding, reducing, and controlling variation, and to recommending alternatives that are not only based on sound science but also demonstrably work. Example situations are used to illustrate both problems and successful methodologies. The objective of the column is to combine sound science with proven practical advice. Reader comments, questions, and suggestions will help us fulfil our objective for this column. Case studies illustrating the successful reduction or control of variation submitted by readers are most welcome. Please send your comments and suggestions to column coordinator John McConnell at john@wysowl.com.au or coordinating editor Susan Haigney at shaigney@advanstar.com.

ABOUT THE AUTHORS
John McConnell is the coordinator of Analysis and Control of Variation and is the owner and director of Wysowl Pty Ltd. in Queensland, Australia. He may be reached at john@wysowl.com.au. Brian K. Nunnally, Ph.D., is in charge of process validation at Wyeth Research. He may be reached at nunnalb@wyeth.com. Bernard McGarvey, Ph.D., is process modeling group leader, Process Engineering Center, Eli Lilly and Company. He may be reached at mcgarvey_bernard@lilly.com. For more author information, go to gxpandjvt.com/bios.

KEY POINTS DISCUSSED
The following key points are discussed in this article:
- Before analytical error can be reduced, it must be understood
- The pharmaceutical industry has a history of confusing compliance with good quality
- When outsourcing analytical tests to a vendor laboratory, not only must compliance aspects be covered in the outsourcing protocols, but minimum standards for analytical variation must also be established in advance
- Validation studies and laboratory controls both suffer from the Hawthorne effect, resulting in estimates of analytical error that are lower than those found in routine day-to-day samples
- All laboratories have an obligation to provide their customers with test results that carry minimum analytical error
- In its 2008 draft process validation guidance, the US Food and Drug Administration recommends identification and control of variation in all manufacturing processes, including analytical processes
- Specifications are a necessary part of the business landscape, but using them as the sole basis for judging the quality of laboratory control data is an error
- The best estimates of analytical error are produced by blind controls.

INTRODUCTION
It often seems to laboratory managers that production managers, validation teams, discovery scientists, and development managers are trained to blame the laboratory when undesirable analytical results are noted. Too often, they are correct (1). As this article was being written, one of the authors was sent to an overseas manufacturing facility to help determine why the production of a critical compound was failing to meet requirements. He quickly discovered that the problem was not in the factory; it was in the laboratory. Excessive analytical error was confounding the data and leading to wasteful and non-productive investigations in the production areas. Reducing analytical error is vital in the pharmaceutical industry, but first it must be understood.

In many years of studying laboratory performance, the authors have yet to encounter a single laboratory in the pharmaceutical industry where the laboratory controls were stable at the outset. Worse, many laboratory managers were unconcerned with this finding, citing the fact that they generally met the compliance standards. On those rare occasions where they did not meet standards, a deviation was raised, corrective action was identified and taken, and the deviation was cleared.

This raises an important issue for the industry. Too often, compliance and good quality are confused. As one might expect, this problem is most obvious in highly regulated industries such as the pharmaceutical, food, and aerospace industries. For example, outsourcing of routine tests is becoming common in the industry. Some companies accomplish this outsourcing to vendors without problems. Others, however, find themselves struggling with significantly increased analytical error. Investigations of vendors regularly reveal that when qualifying poorer-performing vendors, the focus had been placed on compliance and not on analytical variation. It is rare for the contractual documents to make adequate comment about acceptable levels of analytical error, and the vendor is able to shelter behind the fact that the test remains compliant and that all contractual obligations are being met.

During validation, another set of problems is common. Validation trials are usually conducted under the best possible conditions. The number of instruments used is limited, the best analysts are used, and supervision of both the laboratory and manufacturing is undertaken by the best-qualified and most experienced people available. Then the validated process is handed over to production people who will, by necessity, increase the number of people taking samples, and to laboratories that will need to use more (and usually less qualified) analysts and more instruments. Sampling and analytical errors increase, and almost inevitably the validation data will contain short-term variation but no long-term variation such as reagent batch changes, staff turnover, or long-term wear and tear or degradation of instruments. Total variation increases, as do deviations. Costs increase and quality suffers.

There are many issues that need to be addressed to solve these problems. This article brings a focus to understanding analytical error.

THE CRITICAL NATURE OF ANALYTICAL ERROR
The areas of validation, production, and discovery are customers of the laboratory. These customers deserve the best possible quality, including the minimum analytical error that the laboratory can deliver. It is not possible to effectively validate a process, determine the quality of the product, or determine the effectiveness of discovery trials without data from the analytical laboratory. When so many critical decisions depend on the quality of the data, it is important for the laboratory to view validation, discovery, and production as customers and to deliver the best possible service and data quality to those customers. The discovery, development, and production personnel are expected to produce the highest quality product possible. Should not the laboratory be expected to do the same?

Understanding and reducing variation, including analytical error, requires statistical thinking. Any statistical approach can be no better than the data it uses. If a production process is in statistical control but the analytical process used to measure it is not, it is possible that the data will exhibit instability. This in turn will cause an investigation for causes of excessive variation in the validation, discovery, or production area, a hunt that is entirely wasted effort and doomed to fail because the problems are in the analytical area. An example of this can be seen in the Figure.

Figure: Trending laboratory controls (upper panel: manufacturing data; lower panel: laboratory controls).

The production area was hunting for reasons for a key variable to be hovering perilously close to the upper specification. They were wasting their time, because the upward trend in their data had nothing to do with the production process. It was a function of an upward drift in the analytical process, as evidenced by the drift up in the laboratory controls. Note also the two special causes, or disturbances, in the laboratory controls that coincide with out-of-control points in the production data.
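The confounding illustrated in the Figure is easy to reproduce numerically. The Python sketch below simulates an in-control production process measured through an assay whose bias drifts upward; the values, drift size, and upper specification are illustrative assumptions, not data from the article.

```python
# Minimal simulation (illustrative numbers only): a stable production process
# measured with an assay whose bias drifts upward. Reported results trend toward
# the upper specification even though the process itself has not changed, and the
# laboratory controls run on the same assay show the same drift.
import numpy as np

rng = np.random.default_rng(7)
n = 45                                          # consecutive lots / control runs
true_process = 98.0 + rng.normal(0, 0.3, n)     # stable process, mean 98.0 % assay
assay_drift = np.linspace(0.0, 1.5, n)          # slow upward analytical bias
reported_lots = true_process + assay_drift + rng.normal(0, 0.2, n)   # what production sees
lab_controls = 100.0 + assay_drift + rng.normal(0, 0.2, n)           # control of nominal value 100.0

upper_spec = 100.0                              # assumed upper specification limit
close = (reported_lots > upper_spec - 0.5).sum()

print(f"true process mean (first vs last 10 lots): "
      f"{true_process[:10].mean():.2f} vs {true_process[-10:].mean():.2f}")
print(f"reported mean     (first vs last 10 lots): "
      f"{reported_lots[:10].mean():.2f} vs {reported_lots[-10:].mean():.2f}")
print(f"lab control mean  (first vs last 10 runs): "
      f"{lab_controls[:10].mean():.2f} vs {lab_controls[-10:].mean():.2f}")
print(f"{close} of {n} reported lots sit within 0.5 of the upper specification")
```

The true process mean does not move; only the reported values do, and the laboratory controls reveal where the drift actually lives.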
THE CHANGING FDA THINKING
FDA increased the pressure on the pharmaceutical industry to better understand and reduce variation when it released its draft process validation guidance (2). The guidance states, in part, "Manufacturers should understand the sources of variation, detect the presence and degree of variation, understand the impact of variation on the process and ultimately on product attributes, and control the variation in a manner commensurate with the risk it represents to the process and product." There can be little doubt that FDA intended these comments to apply to laboratories as well as to production, development, and discovery areas. The guidance further states (2), "After establishing and confirming the process, manufacturers must maintain the process in a state of control over the life of the process, even as materials, equipment, production environment, personnel, and manufacturing procedures change." The chart for laboratory controls in the Figure clearly fails to meet these requirements.

In order to meet these new FDA guidelines, laboratories will need to demonstrate that the test is in a state of control. In order to meet their customers' requirements, laboratories will need to drive analytical error to minimum levels. Essentially, there are four methods to estimate analytical variation:
- Validation data
- Stability data
- Laboratory controls
- Blind controls.

The strengths and weaknesses of each of these methods are discussed in the following sections.

VALIDATION DATA
The most common method of determining analytical precision is the use of validation data. Validation data come from analytical studies using protocols containing pre-approved acceptance criteria. These studies involve replicate testing and may include multiple analysts, multiple instruments, multiple-day testing, and other parameters to determine assay variation. In many cases this is the easiest method, but it is usually poor practice. Validations are characterized by the Hawthorne effect. The Hawthorne effect is a phenomenon in which people concentrate harder when they believe they are being observed, or when they overestimate the extent to which their work is being assessed. The Hawthorne effect in validation studies results in underestimation of the actual long-term analytical error because of extreme attentiveness and the use of the most highly trained and experienced personnel. This extreme concentration and very high skill level cannot be maintained long term under normal laboratory conditions. It is well understood that analysts tend to give more focus to validation and control samples than to normal samples. In addition, medium- to long-term variation arising from aspects such as periodicity and reagent batch changes is not included in validation studies (1).

After validation has been completed, additional sources of variation will inevitably be introduced. Some reasons for this include the following:
- Expert analysts will be replaced by analysts with lower skill levels and/or less experience
- Staff turnover and training will impact analytical precision
- Batch changes in reagents will occur
- Normal wear and tear to equipment, and the associated maintenance issues, will take place
- Degradation from the optimal validation conditions will occur in even well-managed laboratories.

Validations are essential. If used carefully, in concert with an understanding that such high standards are unlikely to be maintained long term, precision studies for analytical validation help the laboratory to understand the ability of the analytical method to meet short-term variability objectives, provided that appropriate replicate strategies have been used. Replicate samples should straddle what are believed to be the primary sources of variability; between-analyst and between-instrument variation are examples. Validation studies should continue to work on reducing analytical error in replicates until the variability is less than the objectives contained in the validation protocol.

Validation estimates of analytical error can be useful, particularly when (at least initially) few other options exist. However, because of the impact of the Hawthorne effect, and because only short-term variation is included in validation studies, these studies are limited in their utility. The use of stability, control, or blind control data offers significant advantages for understanding the actual day-to-day variability of the method.
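The article does not prescribe a particular calculation for validation precision, but one common way to separate a between-analyst component from repeatability in a balanced replicate design is a one-way variance-components analysis. The sketch below is a minimal illustration under that assumption; the replicate values and the restriction to a single factor (analyst) are invented for clarity.

```python
# Minimal sketch (not the authors' method): repeatability and between-analyst
# variance components from balanced validation replicate data. Illustrative data.
import numpy as np

# rows = analysts, columns = replicate results for one analyte (e.g., % assay)
replicates = np.array([
    [99.1, 99.4, 99.2, 99.5],   # analyst A
    [99.8, 99.6, 99.9, 99.7],   # analyst B
    [99.0, 99.3, 99.1, 99.2],   # analyst C
])

k, n = replicates.shape                                  # k analysts, n replicates each
ms_within = replicates.var(axis=1, ddof=1).mean()        # within-analyst (repeatability) mean square
ms_between = n * replicates.mean(axis=1).var(ddof=1)     # between-analyst mean square

var_repeat = ms_within                                   # repeatability variance
var_analyst = max(0.0, (ms_between - ms_within) / n)     # between-analyst component

print(f"{k} analysts x {n} replicates")
print(f"repeatability SD       : {np.sqrt(var_repeat):.3f}")
print(f"between-analyst SD     : {np.sqrt(var_analyst):.3f}")
print(f"intermediate precision : {np.sqrt(var_repeat + var_analyst):.3f}")
```

The same pattern extends to between-instrument or between-day factors; the point is simply that replicates should be arranged to straddle the suspected sources of variability so that each component can be estimated.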

STABILITY DATA
One simple method for determining the analytical variability of an assay, often available to laboratory managers, is the use of stability data. Stability studies are a series of samples that represent the full-scale manufacturing process and are designed to generate data showing the degradation profile of the material. The samples are prepared shortly after the lot is produced and are aliquoted at the same time. Assay variation may be determined from stability data by compiling historical data generated during the stability study on non-degrading products. These data include testing by multiple analysts, using multiple instruments, on multiple days, and so on, thus yielding a good estimate of assay variation. Because stability samples are prepared at the same point in time, and because manufacturing variation is therefore eliminated, they can be an excellent means of estimating analytical variation, provided that sampling variation is minimal (which is usually, but not always, the case). The only other aspect about which one must be cautious is degradation of the material over time. Provided these conditions are met, stability studies can provide an effective estimate of the variability of the analytical assay.

Often, stability studies are a workable way to estimate variability when control and blind control data do not exist. They can also be a useful population to compare with control data in the absence of blind controls. Because the Hawthorne effect has been reduced, stability estimates of analytical error have advantages over validation data in most cases. However, stability studies do have weaknesses. In particular, they do not reveal day-to-day variability or any periodicity that may be present, because stability studies are conducted over long time periods. Care must also be taken if the parameters under examination are degrading over time. Under these circumstances the analysis can be difficult even when the fit of the model is good and the slopes are consistent, and it can be useless when the fit of the model is less than excellent or when the slopes are not consistent.
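As a rough illustration of the approach just described, the sketch below pools stability results from a single lot of a non-degrading product and, in case a slight consistent slope is present, also reports the standard deviation of the residuals about a fitted line. The timepoints and results are illustrative assumptions, not data from the article.

```python
# Minimal sketch (illustrative data): estimating analytical variability from
# stability results on one lot of a non-degrading product. If a slight,
# consistent slope is present, detrend first and use the residual SD.
import numpy as np

months = np.array([0, 3, 6, 9, 12, 18, 24])
results = np.array([99.2, 99.5, 99.1, 99.4, 99.0, 99.3, 99.2])   # % assay

raw_sd = results.std(ddof=1)                        # valid if truly non-degrading
slope, intercept = np.polyfit(months, results, 1)   # simple linear degradation model
residuals = results - (slope * months + intercept)
residual_sd = residuals.std(ddof=2)                 # ddof = 2 for the two fitted parameters

print(f"raw analytical SD       : {raw_sd:.3f}")
print(f"fitted slope            : {slope:+.4f} per month")
print(f"detrended analytical SD : {residual_sd:.3f}")
```

If the slope is effectively zero, the raw and detrended estimates agree; if the fit is poor or the slope differs from lot to lot, the estimate should not be relied on, as noted above.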
LABORATORY CONTROLS
Laboratory controls are representative samples used to demonstrate that the analytical procedure was in control when samples were being analysed. They are prepared from representative material and are run with each assay. The use of laboratory controls is usually superior to both validation and stability data for understanding analytical variability. A good laboratory control has several important characteristics. It is important to eliminate both sampling and process variation, so the control material must be both homogeneous and representative. The way a control sample presents to the laboratory should be identical to that of a production, development, or discovery sample; if the normal sample is a liquid, the control should also be a liquid. In addition, the control material should not degrade with time. Provided the rate of degradation over time has a strong element of predictability, there are ways to analyse laboratory controls that trend over time (including effective use of control charts), but analysis is simpler and more effective if the material is stable.

Control samples should be run in the same way a normal sample is run. For example, where replicates are averaged into a single reportable result, the control should be handled identically. The more the handling of controls departs from the handling of normal samples, the less useful they become. One example of improper laboratory control preparation comes from an analytical development area that had prepared a control for a sister laboratory, including diluting and aliquoting the control. They were quite proud of what they had done and expected the sister laboratory to be grateful to them for going the extra mile. The analytical development area personnel were disappointed when it was explained that diluting and aliquoting the control was inappropriate, because the first step in running a sample is to dilute it. By diluting and aliquoting the control in advance, they had removed crucial steps from the process and ensured that the control would not be run identically to normal samples.

The major weakness of laboratory control samples is the presence of the Hawthorne effect, especially when there are specifications, acceptance standards, or system suitability criteria related to the control. When these circumstances exist, most analysts will treat the control in a fundamentally different way from a normal sample. They tend to take more time and care with a control sample. As a consequence, the analytical error seen in laboratory controls is likely to be smaller than the error present in normal discovery, development, and production samples. This is the same problem noted earlier for validation studies.

Laboratory control data should be plotted on a control chart for analysis. Before the control data are released, the control chart should be reviewed. It is common practice for the control value to be compared to the specification or acceptance limits only. This is a poor practice (1, 3). Statistically significant signals will be missed if the data are not viewed on a properly prepared control chart. If the control chart exhibits a significant signal, the laboratory result should not be released until a laboratory investigation has been completed, even though specifications or acceptance standards have been met.
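A minimal sketch of the control-chart review described above is shown below: an individuals chart for laboratory control results, with limits derived from the average moving range. The control values and the choice of an individuals chart are assumptions for illustration; the article does not mandate a particular chart type.

```python
# Minimal sketch: individuals (I) chart for laboratory control results, with
# limits from the average moving range. Illustrative values, not article data.
import numpy as np

controls = np.array([100.1, 99.8, 100.0, 100.3, 99.9, 100.2, 100.0,
                     99.7, 100.1, 100.4, 100.0, 99.9, 101.6, 100.1, 100.2])

center = controls.mean()
mr_bar = np.abs(np.diff(controls)).mean()      # average moving range
sigma_hat = mr_bar / 1.128                     # d2 constant for subgroup size 2
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"center line {center:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
for i, x in enumerate(controls, start=1):
    if x > ucl or x < lcl:
        print(f"run {i}: control value {x} is a signal; "
              "investigate before releasing the associated results")
```

A chart like this, reviewed before results are released, catches drifts and special causes that a simple pass/fail comparison against specification or acceptance limits would miss.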

Implementing Laboratory Controls
Laboratory control studies are an important, if imperfect, approach to understanding analytical variability. Running a control in concert with the process samples is not only critical; it is often a non-negotiable requirement. Implementing a control and establishing limits is not a complex affair. The normal requirements for establishing limits are the following:
- 6 independent points for introductory limits
- 15 independent points for temporary limits
- 30 independent points for semi-permanent limits.

Because limits can change with the passage of time, it is usually not good practice to put limits into the test method unless the test method can be changed quickly and easily (usually defined as less than a week). An excellent place to put control limits is in job aids. Usually, this requires writing a protocol to control the creation, implementation, and version control of job aids.
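The staged limit scheme above (6, 15, and 30 independent points) lends itself to a small helper. The sketch below classifies the limit status and recomputes individuals-chart limits as control points accumulate; the function name, data, and use of moving-range limits are illustrative assumptions rather than anything prescribed in the article.

```python
# Minimal sketch: staged control limits (introductory / temporary / semi-permanent)
# recomputed as independent control points accumulate. Illustrative only.
import numpy as np

def control_limits(points):
    points = np.asarray(points, dtype=float)
    n = len(points)
    if n < 6:
        return None                                  # too few points to set limits
    stage = ("introductory" if n < 15 else
             "temporary" if n < 30 else
             "semi-permanent")
    sigma_hat = np.abs(np.diff(points)).mean() / 1.128   # moving-range sigma estimate
    center = points.mean()
    return stage, center - 3 * sigma_hat, center + 3 * sigma_hat

history = [100.1, 99.8, 100.0, 100.3, 99.9, 100.2, 100.0, 99.7]
stage, lcl, ucl = control_limits(history)
print(f"{len(history)} points -> {stage} limits: LCL {lcl:.2f}, UCL {ucl:.2f}")
```

Keeping this calculation in a version-controlled job aid, rather than in the test method itself, makes it easy to update the limits as more points accumulate.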
BLIND CONTROLS
Blind controls are control samples submitted to the laboratory in such a way that the laboratory does not realize the sample is actually a control; the laboratory treats the material as it would any other sample. They are prepared in a similar way to control samples, but are often not prepared by the analytical laboratory, so that they do not receive the special treatment that may occur with other control samples.

The central theme of this article is the understanding of analytical error. Of the three methods examined so far, two share a common problem: the Hawthorne effect. Blind controls overcome this problem because, although in all other respects they are similar to normal laboratory controls, from the analysts' perspective they are indistinguishable from normal discovery, development, and production samples. Because of this, analysts will treat the blind control in the same way they would treat any other sample. This eliminates the Hawthorne effect, and this is why blind controls give the best estimate of analytical error. Getting the samples into the laboratory in such a way that they appear to be normal samples is the most difficult aspect of implementing blind controls, but it is not an insurmountable problem.

The use of blind controls is rare in the pharmaceutical industry, probably mainly because of the compliance aspects of such a study. This is changing as more and more routine testing is outsourced to vendor laboratories by large pharmaceutical companies, and as companies become more aware of how significantly unnecessary analytical error can confound discovery, development, and production data. Usually, it is easier to get blind controls into a vendor laboratory in such a way that they are indistinguishable from normal samples. Regardless of whether the laboratory is internal or a vendor laboratory, one key to successfully implementing a blind control study is to keep the group organizing the study as small as possible. Again, the success of a blind control study rests on the analyst not being able to tell the difference between a production sample and a blind control. In most cases, even the laboratory manager is unaware of which samples are blind controls, even though he or she may be aware that blind controls will enter the laboratory from time to time.

Table: Comparison of various techniques used to determine analytical error (adapted from Reference 1).

Study type            Short- to medium-term     Long-term                 Eliminates the
                      variability estimate      variability estimate      Hawthorne effect
Validation study      +                         ---                       ---
Stability data        -                         ++                        -
Laboratory controls   ++                        +++                       --
Blind controls        +++                       +                         +++

The Table demonstrates why blind controls are a superior way to determine the actual analytical variability as it will be present in normal discovery, development, and production samples. The only limitation that might be associated with a blind control study is the possible lack of a long-term variability estimate, but even this limitation can be eliminated with a long enough series of blind controls.

Laboratory controls are nearly always available, but what if they are not? For example, in discovery areas where new molecules are being developed (and in some other circumstances), control material may not be available. This can be overcome by taking a portion of material from the first lot produced and setting it aside for control purposes. The mean of this material may forever remain fuzzy, but if the material meets the other requirements for a laboratory control, it will still give an effective estimate of the degree of analytical variation present, and it is the degree of precision, rather than the accuracy, that is nearly always the more important understanding.
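Once both routine laboratory controls and blind controls have accumulated, one simple way to see the Hawthorne effect in numbers is to compare their spreads. The sketch below does this with invented values; in practice a formal test, or a control chart of both series, could be used to judge whether the difference is larger than chance.

```python
# Minimal sketch (illustrative numbers): comparing the spread of routine
# laboratory controls with that of blind controls. A larger blind-control SD is
# consistent with the Hawthorne effect deflating the routine-control estimate.
import numpy as np

lab_controls = np.array([100.0, 99.9, 100.1, 100.0, 100.2, 99.8, 100.1, 99.9])
blind_controls = np.array([100.3, 99.5, 100.4, 99.6, 100.5, 99.4, 100.2, 99.7])

sd_lab = lab_controls.std(ddof=1)
sd_blind = blind_controls.std(ddof=1)

print(f"laboratory control SD     : {sd_lab:.3f}")
print(f"blind control SD          : {sd_blind:.3f}")
print(f"variance ratio (blind/lab): {(sd_blind / sd_lab) ** 2:.1f}")
```

The blind-control standard deviation is the better estimate of the error actually present in routine samples, and the gap between the two series indicates how much the routine controls flatter the method.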

This approach also works well where control material is available but some characteristics of the control material differ from normal samples. Again, this is most common in discovery and development areas.

The Table illustrates that if laboratory controls are supplemented by blind controls, a superior understanding of the actual day-to-day, week-to-week analytical error is revealed. This then provides the foundation for action to reduce this variation to minimal levels. Some approaches to conquering analytical error and providing excellent quality data to the laboratory's customers will be the subject of future articles in this column.

CONCLUSIONS
In order to comply with FDA guidelines, and to better understand analytical error, companies in the pharmaceutical industry have no choice other than to use statistical process control techniques to bring all their analytical processes into a state of statistical control (3). Because a high level of analytical error is a common problem in the pharmaceutical industry, it needs to be addressed. However, before jumping to solutions, it is first necessary to properly understand and characterise the analytical error present. Because an unstable analytical process has no known capability, we cannot calculate the probability of either passing or failing specifications in a statistically valid way until the test under examination is known to be stable (3, 4).

Discovery, development, and production people have a right to expect that the analytical systems on which they are so dependent are stable, with minimal analytical error. Properly understanding and characterising this variation, and then bringing our analytical processes into statistical control, are the first steps towards meeting that expectation. Specifications are a necessary part of the analytical landscape, but to use them as the sole means of determining data quality (for example, the quality of laboratory controls) is to beg for trouble. Laboratory managers who understand the strengths and weaknesses of each of the approaches to determining analytical variation, and who are most effective in characterising the variation present in each test, are in the best position to advance their attempts to conquer analytical variation and to better serve their customers.

REFERENCES
1. B.K. Nunnally and J.S. McConnell, Six Sigma in the Pharmaceutical Industry, CRC Press, 2007.
2. FDA, Guidance for Industry, Process Validation: General Principles and Practices, Draft Guidance, 2008.
3. J. McConnell, B. Nunnally, and B. McGarvey, "Variation: Past, Present, and Future," Journal of Validation Technology, Vol. 15, No. 2, Spring 2009.
4. W.A. Shewhart, Economic Control of Quality of Manufactured Product, Van Nostrand, 1931.