Computer and Software Validation Volume II
Table of Contents

- Maintaining the Validated State in Computer Systems - Orlando Lopez
- Software Validation: Can an FDA-Regulated Company Use Automated Testing Tools? - Janis V. Olson
- Considerations for Validation of Manufacturing Execution Systems - Chris Wubbolt and John Patterson
- John E. Lincoln
- Storm Clouds? Cloud Computing in a Regulated Environment - Robert H. Smith
- Computer System Compliance and Quality Planning - Bernard T. O'Connor
- John E. Lincoln
- Requirements Management - Orlando Lopez
- Integrating Risk Management into Computer System Validation - Timothy Fields
Maintaining the Validated State in Computer Systems

Orlando Lopez

INTRODUCTION

In the life science industries, considerable attention is paid to the validation of computer systems during the development of new systems and the associated infrastructure. Once a computer system has been deployed to support operations, its operational life begins. This is the period during which the relevant processes are in place to run the computer system in its operational environment, monitor it for satisfactory performance, and modify it as part of corrective, adaptive, perfective, and preventive maintenance.

Maintenance and operations threaten the validated state of a system because they alter the delivered, validated computer system to meet a changing environment and/or the changing needs of its users. The primary reasons system development is not complete without the operational phase are the presence of latent defects in the computer system and the changes that occur in its operational environment. Over 67% of the total cost of a system can be attributed to computer system maintenance (1), and a percentage of that budget will be assigned to the revalidation effort.

It is very common in the life science industry to observe that, a few years after a computer system is deployed, deficient operational supporting processes and/or the incorrect implementation of those processes nullify the validated state of the computer system and force remediation activities. This situation typically recurs across multiple computer systems, provoking a company-wide remediation project and adding cost to the operational life of many systems.

This paper provides a brief description of the typical operational life activities and processes that support preserving the validated state of computer systems performing regulated operations.
OPERATIONAL LIFE

After the system has been released for operation, the computer system operational life activities take over (2). The operational life is governed by two key processes: operation and maintenance. The operation process defines the activities of the organization that provides the service of operating a computer system in its live environment for its users. The maintenance process defines the activities of the organization that provides the service of managing modifications to the software product to keep it current and in operational fitness; this process includes, as applicable, the migration and retirement of the computer system. To maintain uniformity during the maintenance period, the maintenance activities must be governed by the same procedural controls followed during the development period. Written procedural controls must be available for the operation and maintenance of computer systems (3).

OPERATIONAL ACTIVITIES

Routine use of computer systems during the operational life requires procedural controls that describe how to perform operational activities. These operational procedural controls must be in place and approved by the appropriate individuals. Their execution should be monitored by the regulated user to verify accurate implementation and adherence, and they should be reviewed periodically in accordance with the local retention policy. In addition, it is vital that management ensure that the relevant associates are trained accordingly. Key operational procedural controls are:

Archiving: In the context of the regulated user, archives consist of records that have been selected for permanent or long-term preservation based on their evidentiary value. All computer system baselines should be archived in an environmentally controlled facility, as applicable, that is suitable for the material being archived and that is both secure and, where possible, protected from environmental hazards. A record of all archived materials should be maintained.

Backups: A backup process must be implemented to allow recovery of the system following any failure that compromises its integrity (4). The integrity and accuracy of backed-up data, and the ability to restore the data, should be checked during validation and monitored periodically (5). The frequency of backups depends on data criticality, the amount of stored data, and the frequency of data generation. Procedural controls establishing the backup process must be in place to ensure the integrity of backups (e.g., a secure storage location adequately separated from the primary storage location). Backups may be part of a more general disaster recovery plan.

Business Continuity: Business continuity procedural controls, including disaster recovery procedural controls, ensure minimal disruption in the case of loss of data or of any part of the system. It is necessary to ensure that the integrity of the data is not compromised during the return to normal operation. At the lowest level, the loss may be the accidental deletion of a single file, in which case procedural controls should be in place for restoring the most recently backed-up copy. At the other extreme, a disaster such as a fire could result in loss of the entire system. The procedural controls employed should be tested regularly, and all relevant personnel should be made aware of their existence and trained to use them. A copy of the procedural controls should be maintained off-site.

Infrastructure Maintenance: The procedural controls applicable to the preventive maintenance and repair of the infrastructure provide a mechanism for anticipating problems and, as a consequence, possible loss of data. In addition to typical infrastructure elements such as system-level software, servers, wide-area networks, local-area networks, and their associated components, the infrastructure includes uninterruptible power supplies (UPSs) and other emergency power generators. Modern infrastructure hardware usually requires minimal maintenance because electronic circuit boards, for example, are usually easily replaced, and cleaning may be limited to dust removal. Diagnostic software is usually available from the supplier to check the performance of the computer system and isolate defective integrated circuits. Maintenance procedural controls should be included in the organization's procedural controls. The availability of spare parts and access to qualified service personnel are important for the smooth operation of the maintenance program.

Problem Reporting: The malfunction or failure of computer system components, incorrect documentation, or improper operation that makes proper use of the system impossible for an undetermined period are some characteristics of the incidents that can affect the correct operation of a computer system. These system incidents may become non-conformances. In order to remedy problems quickly, a procedural control must be established for recording any computer system failures reported by the users of the system; it enables the reporting and registration of any problem the users encounter.

Problem Management: Reported problems can be filtered according to whether their cause lies with the user or with the system itself and then fed back into the appropriate part of the supplier's organization. In order to remedy problems quickly, a procedural control must be established for when the system fails or breaks down. Any failures, the results of the analysis of the failure, and, as applicable, any remedial actions taken must be documented. Problems that require a remedial action involving changes to any baseline are then managed through a change control process.

Retirement: The retirement of computer systems performing regulated operations is a critical process. The purpose of the retirement period is to replace or eliminate the current computer system and, if applicable, ensure the availability of the data it has generated for conversion, migration, or retirement.

Restore: A procedural control for regularly testing the restoration of backup data, to verify the integrity and accuracy of the data, must also be in place.

Security: Computer system security includes the authentication of users and access controls. Security is a key component for maintaining the trustworthiness of a computer system and its associated records. Security is an ongoing element to consider and is subject to improvement. In particular, after a system has been released for use, it should be constantly monitored to uncover any security violations. Any security violation must be followed up and analyzed, and proper action must be taken to avoid a recurrence.

Training: All staff maintaining, operating, and using computer systems that perform regulated operations must have documented evidence of training in their area of expertise. For users, the training will concentrate on the correct use of the computer system, security, and how to report any failure or deviation from the normal operating condition.
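The backup and restore controls above call for periodic, documented checks that restored data match what was backed up. A minimal sketch of such a restore test is shown below; the checksum choice (SHA-256), file layout, and function names are illustrative assumptions, not prescriptions from the article.

```python
import hashlib
from pathlib import Path


def file_checksum(path: Path) -> str:
    """SHA-256 checksum used as the integrity reference for a record."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def backup_file(source: Path, backup_dir: Path) -> Path:
    """Copy a file to the backup location and return the backup copy's path."""
    copy = backup_dir / source.name
    copy.write_bytes(source.read_bytes())
    return copy


def verify_restore(source: Path, backup_copy: Path, restore_dir: Path) -> bool:
    """Restore the backup copy and confirm its checksum matches the source."""
    restored = restore_dir / source.name
    restored.write_bytes(backup_copy.read_bytes())
    return file_checksum(restored) == file_checksum(source)
```

In practice the comparison result, date, and operator would be recorded as objective evidence for the periodic review.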
MAINTENANCE ACTIVITIES

The validated state of computer systems performing regulated operations is subject to threats from changes in the operating environment, which may be either known or unknown. The January 2011 revision of EudraLex Volume 4, Annex 11, Computerised Systems, establishes in Item 10 that any changes to a computerised system, including system configurations, should only be made in a controlled manner in accordance with a defined procedure (5). This statement is consistent with other regulations and guidelines, such as the US FDA Code of Federal Regulations, Title 21 (6, 7, 8), including 11.10; the World Health Organization (WHO) Technical Report Series, No. 937, Annex 4, Section 12 (9); and the Pharmaceutical Inspection Co-operation Scheme (PIC/S) PI guidance (Part Three) (10).

Section 3.3 of the WHO Technical Report (9) stipulates three events to cover during the maintenance of computer systems:

Verification and revalidation: After a suitable period of running a new system, it should be independently reviewed and compared with the system specification and functional specification. The periodic verification must include data checks, including any audit trail. Computer systems used to control, monitor, or record functions that may be critical to the safety of a product should be checked for accuracy at intervals of sufficient frequency to provide assurance that the system is under control. If part of a computerized system that controls a function critical to the safety of the product is found not to be accurate, then the safety of the product must be determined back to the last known date on which the equipment was accurate.

Change control: Modifications and adjustments to computer systems shall only be made in accordance with a defined procedural control that includes provisions for checking, approving, and implementing the modification and/or adjustment.
Checks: Data should be checked periodically to confirm that they have been accurately and reliably recorded and/or transferred. These checks relate to the following EU Annex 11 items: Data (11.5), Accuracy Checks (11.6), and Data Storage (11.7).
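A periodic data check of the kind described above can be sketched as a record-by-record comparison between the originating system and the system to which data were transferred. The record layout and key field below are hypothetical; a real check would also document each discrepancy for investigation.

```python
def periodic_data_check(source_records, target_records, key="sample_id"):
    """Compare target-system records against the source system.

    Returns the sorted list of key values whose records are missing from,
    or differ in, the target system, so each discrepancy can be reviewed.
    """
    source_by_key = {r[key]: r for r in source_records}
    target_by_key = {r[key]: r for r in target_records}
    discrepancies = [
        k for k, rec in source_by_key.items()
        if target_by_key.get(k) != rec  # missing or altered record
    ]
    return sorted(discrepancies)
```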
SUMMARY

Regulated applications and infrastructure must be well maintained during the operational phase to assure that systems remain in a validated state; doing so also reduces the cost of maintaining the computer systems. The operational phase comprises:

- Ensuring that system changes are performed according to the approved change control procedure
- Maintaining training records for users and technical support personnel
- Controlling user access
- Conducting periodic reviews based on established criteria
- Ensuring that data are backed up and archived in accordance with established schedules
- Identifying and tracking system problems and related corrective actions
- Maintaining configuration items

The above activities must be outlined in written and approved procedural controls.

About the Author

Orlando López, currently working with Smith & Nephew, Memphis, Tennessee, as an IT Manager, Global Regulatory/Compliance, has over 20 years of experience with pharmaceutical and medical device manufacturers. His role at Smith & Nephew includes serving as subject matter expert for quality and compliance oversight of new computer system implementation and maintenance. He is the author of two books: 21 CFR Part 11 - A Complete Guide to International Compliance, published by Sue Horwood Publishing Limited (www.suehorwoodpubltd.com), and Computer Infrastructure Qualification for FDA Regulatory Industries, published by Davis Healthcare International Publishing. He may be contacted at [email protected].

References

1. Zelkowitz et al., 1979.
2. O. López, 21 CFR Part 11: Complete Guide to International Computer Validation Compliance for the Pharmaceutical Industry, CRC Press: Boca Raton, FL, 2004.
3. ICH Q7, Good Manufacturing Practice Guide for Active Pharmaceutical Ingredients.
4. OMCL Network/EDQM of the Council of Europe, Validation of Computerised Systems - Core Document (July 2009).
5. EU Volume 4, Good Manufacturing Practice, Annex 11: Computerised Systems (Brussels, BE, June 2011).
6. Code of Federal Regulations, Title 21, Food and Drugs (Government Printing Office, Washington, DC).
7. Code of Federal Regulations, Title 21, Food and Drugs (Government Printing Office, Washington, DC).
8. Code of Federal Regulations, Title 21, Food and Drugs (Government Printing Office, Washington, DC).
9. WHO, Validation of Computerized Systems, Technical Report Series, No. 937, Annex 4, Appendix 5 (2006).
10. PIC/S, Good Practices for Computerised Systems in Regulated GXP Environments (2007).

Originally published in Journal of GXP Compliance, Volume 17, Number 2.
Software Validation: Can an FDA-Regulated Company Use Automated Testing Tools?

Janis V. Olson

INTRODUCTION: WHAT ARE SOFTWARE TEST TOOLS?

Software test tools help development and testing teams verify functionality, ensure the reliability and security of the software they develop, and investigate software bugs. Off-the-shelf tools are available for all stages of software development; examples include static code analyzers, record-and-replay tools, regression testing tools, and bug trackers. Some software testing tool vendors offer an integrated suite that starts with the gathering of requirements and continues through software development and testing throughout the life of a project, including support of the live system. Other vendors concentrate on a single part of the application development life cycle, such as testing alone.

HOW DO YOU USE A TOOL SO THE US FOOD AND DRUG ADMINISTRATION WILL ACCEPT IT?

FDA will accept the use of a software testing tool if the following are done:

- Know, understand, and document the intended uses of the tool
- Test and verify the tool in the context of the intended uses
- Document the validation of the tool
- Maintain the validated state

Know, Understand, and Document the Intended Uses of the Tool

Companies, project teams, and individuals only get what they want if they know what their requirements are for how they are going to use the tool. So the first task is to understand the intended use requirements and get agreement that this is how the tool will be used. Understand the native functionality of the tool when used as-is, and what needs to be configured or customized to get the performance needed. Plan what to do in a validation plan or a project plan; identify what will be done, by whom, and when. Then go to work on setting up the tool. Documentation of the process is the key to success. Document what can be done with the tool's native configurations, and any customizations needed, to assure that the tool will work as expected.
Review the native functions and the configurations and map them back to the intended use requirements to assure that all the needs stated in the intended use requirements have been covered. Document the intended uses of the tool, in the context of the processes that will use it, in written procedures. Procedures need to be written at the level needed by the people in the organization who will use them.

Test and Verify the Tool in the Context of the Intended Uses

To exercise the tool, use code that has already been tested or verified through other means. Determine the amount and extent of testing/verification that needs to be done based on the risk of failure of the tool. Low-risk tools still require testing and verification that the tool will meet the intended uses. Document the testing with test cases or scripts. Document the results, including the objective evidence of the actual results that can be compared to the expected results; do not just record that the tool worked as expected or passed. Use both good and poor code to test the tool, and keep those pieces of code as part of the test documentation. Do fault insertion to assure that the tool behaves properly in all the expected scenarios. Do testing in the context of the intended use. Understand that risk is based on the intended use of the tool. Know the tool's limitations.
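The mapping of intended use requirements to test cases described above can be checked mechanically. The sketch below flags requirements with no covering test; the ID scheme and data layout are illustrative assumptions, standing in for whatever traceability matrix a team actually maintains.

```python
def uncovered_requirements(requirements, test_cases):
    """Return intended-use requirement IDs not mapped by any test case.

    `test_cases` maps a test-case ID to the list of requirement IDs it
    verifies (a hypothetical layout for a simple traceability matrix).
    """
    covered = set()
    for req_ids in test_cases.values():
        covered.update(req_ids)
    # Any requirement outside the covered set has no objective evidence.
    return sorted(set(requirements) - covered)
```

An empty result is the condition to demonstrate before declaring the tool's intended uses covered.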
Each tool will solve problems but also create potential issues. Understand what these potential issues are and their consequences. Mitigate the possible risks, possibly as part of the procedures for the use of the tool.

Document the Validation of the Tool

Validation is more than just the testing; it is the linking of all the activities and documentation to one another. The intended use requirements are mapped to the tool's native functions and any special configurations and/or customizations. They are also mapped to the testing scripts. The executed scripts carry the documentation of the objective evidence of the results that can be compared to the expected results. Once all this documentation is together and linked, it is the assembled evidence that the tool is validated for its intended uses. Write a summary report of the effort. Train the users and use the tool.

Maintain the Validated State

Once the tool is validated, monitor its use and its results. If the users find additional uses that were not validated, those will need to be validated prior to use. Likewise, if users find new scenarios that were unknown at the time of the original validation, the behavior of the tool under these new scenarios must be tested and the additional documentation placed in the validation file. Monitor for additional issues and their consequences, and evaluate the risks. Document new risks as part of the risk analysis.

TWO EXAMPLES OF COMMONLY USED TESTING TOOLS

Static Code Analyzer

A static code analyzer is a commonly used code review tool. FDA recommends the use of static code analyzers in high-risk applications, especially for medical device software. Static code analyzers will also flag issues that software engineers know are not genuine defects in the code; many false positives are found that the engineers then have to evaluate and investigate. Static code analyzers have coding rules that they enforce.
If the code does not conform to those rules, the software engineers will get many issues to review and resolve. Once the engineers have resolved them, the information on the false positives routinely identified by the tool can be used so that the next time the same false error is found in the same way by the static code analyzer, there is no need to do another investigation. Some companies change their coding standards to conform to the tool for all future development. However, because they often have built code on old code bases, they maintain a list of false errors they will ignore or not investigate fully; naturally, this is based on the risk of not investigating those errors. Static code analyzers also allow software engineers or the tool vendor to make custom configurations. These customizations tell the analyzer to ignore some of the false errors that the tool finds but that the developers know are not issues with the code.

A company should do extensive research to determine the best static code analyzer for its needs. Code modules with common errors, complexities, and poor coding practices, as well as modules with good code, should be developed if not already available to the software engineers. Several different manufacturers' static code analyzers should then be evaluated, and the tool that finds the most types of errors and issues should be chosen. Limitations of the chosen static code analyzer should be documented so that other verification methods can be used to find errors the analyzer missed or was not intended to find. A list of common false errors the tool finds should be compiled, together with the investigation into each one, the results of the investigations, and what the software engineers should do with each false error when it is found in their code modules. These items become part of the standard operating procedures for the use of the tool.
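The false-positive list described above amounts to a reviewed baseline that filters analyzer output, so only previously uninvestigated findings trigger new work. A minimal sketch follows; identifying a finding by a (file, rule, line) triple is an assumption for illustration, since real analyzers have their own suppression formats.

```python
def new_findings(analyzer_findings, reviewed_false_positives):
    """Filter static-analyzer output against a documented false-positive baseline.

    Each finding is a (file, rule, line) tuple; only findings absent from
    the reviewed baseline require a fresh investigation.
    """
    baseline = set(reviewed_false_positives)
    return [f for f in analyzer_findings if f not in baseline]
```

The baseline itself, and the risk-based justification for each entry, would live in the SOPs for the tool rather than in code.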
Record and Replay

Another commonly used tool is an automated record-and-replay testing tool. These tools record each keystroke as a tester runs a test manually. The testing team can then schedule testing to be performed at any time in the future, and the tool will replay each keystroke as if a tester were doing it manually. The advantage of this approach is that the documentation accumulated while recording, such as screen shots, not only shows that the code passed the test when run manually but is also used for comparison against the tool's output the very first time it is run, demonstrating that the tool is in fact running the manual tests, and running them correctly.

To optimize testing, many of these tools allow testers to modify the recorded keystrokes rather than rekeying them, to insert negative testing, or to make changes when the code changes and the testing steps run by the tool must change with it. Modifications to the keystrokes made outside the record feature of the tool should be documented and independently reviewed to assure that the changes are done appropriately and correctly; the documented keystrokes are, in effect, the code for the tool. After validation of the tool, testers can have the tool continue to provide the documented evidence of the test results, or choose to have the tool record only that the tests passed or how they failed. All testing failures will need to be investigated and resolved.

SOFTWARE TESTING TOOLS ARE BUSINESS DRIVERS AND HAVE BENEFITS

The use of software testing tools will increase the quality of software applications by efficiently detecting functional, performance, and security issues. Compared to manual testing, which may be inconsistent because of human error, testing tools are also more consistent and repeatable in following the documented testing procedures and processes. Many of the tools can measure software performance throughout the development, testing, and maintenance of the software product. Other tools can assist in conducting and documenting risk analysis and in benchmarking against other software.

The use of these tools can improve a company's time-to-market or time-to-implementation because of more frequent and consistent testing, finding, and fixing of issues and bugs. This improves product lifecycle management processes and reduces total cost of ownership. Once the tool is in a validated state, use it over and over. Companies can be confident that the tool is giving them the information they need to demonstrate to FDA, to their customers, and to all their stakeholders that the software is being tested consistently and effectively.

About the Author

Janis V. Olson (Halvorsen) is Vice President of Regulatory and Quality Services, a global team of FDA compliance experts. Janis worked for FDA for over twenty-two years in various positions. She currently works with EduQuest, assisting companies with FDA regulation compliance and with understanding computer system development and validation.

Originally published in Journal of Validation Technology, Volume 18, Number 3.
Considerations for Validation of Manufacturing Execution Systems

Chris Wubbolt and John T. Patterson

INTRODUCTION

Improvements in computer-based control systems, driven by technological breakthroughs in integrated circuit manufacturing and software design, have been essential to the development of the modern pharmaceutical manufacturing facility. Automation of processes has improved efficiency, reduced costs, and increased compliance in the manufacture of pharmaceutical products. The manufacturing execution system (MES) represents one of the more recent offspring of these technological improvements and is capable of a high degree of integration with different types of equipment and systems, including shop-floor automation equipment (e.g., DCS, PLC) and business management systems such as enterprise resource planning (ERP) systems. The MES also enables critical functionality, including electronic batch records (EBR), that helps ensure the highest level of consistency and reproducibility, thereby ensuring product quality and the corresponding assurance of patient safety. Because of the favorable business case provided by the MES for pharmaceutical and medical device products, its use within these types of manufacturing environments is increasing. A common architecture for an MES is shown in Figure 1.

VALIDATION STRATEGY AND APPROACH

To ensure the appropriate validation of an MES design, it is essential to have a well-defined validation strategy that is based on company validation and quality standard operating procedures (SOPs) and further detailed in an approved quality assurance plan (QAP) or equivalent validation or quality planning document.
GAMP 5 provides an excellent basis for the development of a company-wide quality management system that comprehensively addresses requirements for the validation, implementation, and use of computerized systems, including explanations of key principles as well as templates and additional guidance and instruction (see Reference).

[Figure 1: Common architecture for an MES. The figure shows the MES with its specifications, test documents, and reports; workstation hardware (processor, RAM, disk space); application software and operating system; network software and hardware; user/technical manuals and SOPs; and connected instrumentation and equipment.]

For an MES, several elements that would normally be considered in a validation or quality assurance plan are as follows:

- System scope, including interfaces with other IT/automation systems
- System description and intended use
- Roles and responsibilities during the validation effort, including required approvals on deliverables
- Governing quality management system and system development lifecycle (SDLC) policies and SOPs, including:
  - Computer system validation policy and procedures
  - Risk management
  - Configuration management
  - Change control
  - Training
- MES vendor involvement (additional discussion pertaining to vendor involvement is provided below)
- Testing and qualification strategy
- SDLC deliverables, including:
  - Quality assurance plan
  - Requirements
  - Functional specifications
  - System design specifications
  - Risk assessments
  - Design qualification
  - Configuration management
  - Qualification protocols
  - Qualification and test summary reports
  - Electronic batch record (EBR) development SOPs
  - Part 11 compliance documentation, including electronic record and electronic signature applicability assessments
  - Security and access controls, including user groups and roles
  - Definition and description of batch end reports
  - Development phase issue management (e.g., test incident reports)
  - Documentation management
  - Validation summary reporting
- Incident, change, and problem management SOPs (operational phase)
- Back-up and recovery processes
- EBR operational phase change management SOPs
- Periodic review reports
- Decommissioning (if required)

LEVERAGING MES VENDOR QUALITY PROCESSES

Implementation and validation of an MES is a considerable undertaking and requires significant organizational resources.
The ability to leverage the MES vendor's design, development, testing and verification, and quality processes and documentation may help streamline the validation and implementation processes while maintaining a high level of quality and compliance. The MES vendor's quality processes and documentation must be formally assessed to allow leveraging of existing documentation and testing that the vendor may have conducted. Typically, for critical systems such as an MES, an on-site audit must be conducted to adequately assess the vendor's quality systems and system development lifecycle methodology. A vendor audit normally includes assessment of the following vendor processes:

- Overall quality system
- Organizational structure
- System development lifecycle methodology, including:
  - System functional and detailed design specifications
  - Coding standards and code review
  - Unit testing
  - Integration and system testing
- Configuration management
- Incident, problem, and change management
- Release management processes, including major, minor, and patch releases
- Training program
- Customer support

The ability to leverage a vendor's previous efforts depends on the quality of the vendor's documentation and processes, particularly those related to the vendor's SDLC. Although software vendors are typically not regulated by the US Food and Drug Administration (unless the vendor produces software used in medical devices), if vendor documentation, including testing, is to be leveraged, it must be at a GMP quality level. GMP quality documentation processes typically include adherence to applicable SOPs, version control, approved documents, and an appropriate level of quality review and approval. Testing and incident resolution, including review and approval of test results, should also be carefully evaluated during the vendor assessment and audit process.
The results of the vendor assessment are typically summarized within a vendor self-assessment or audit report depending on the criticality of the vendor supplied software. If vendor documentation and processes are utilized during the validation process, the vendor self-assessment, audit report, or supporting documentation should clearly indicate the quality processes and documentation that were assessed. If the vendor s processes and documentation are deemed adequate, it may be possible to leverage the vendor s documentation and previous SDLC activities during the MES validation effort. If quality issues are identified during the vendor assessment, the ability to leverage vendor activities and documentation may be limited. The amount of vendor documentation to be used during the validation effort should be documented within the project validation or quality plan, or risk assessment documentation, along with appropriate justifications. GLOBAL VS. LOCAL IMPLEMENTATIONS For multi-site organizations, another consideration is whether to develop individual (i.e., stand alone) MES instances versus a global instance, which could better facilitate the highest level of standardization within the organization, as well as allowing leveraging of resources. Although this may not be a priority, particularly for smaller organizations, significant efficiencies can be gained through the use of a single (or minimum) number of global instances from which all other local MES instances can be more efficiently configured. If such a global and local approach is pursued, it is important to consider how their respective quality and validation planning processes will work and coordinate with each other, including roles and responsibilities within the local and global organizations. For example, to ensure the ability to update the global instance independent of the local instances, it is considered 11
Figure 2: MES validation process. [Flow diagram from system identification through quality/validation planning, user requirements and design specifications, configuration and build, unit and integration testing, installation/functional/user acceptance testing, validation report and SOPs, to operation and maintenance with change management, incident/problem management, and periodic review.]

beneficial to have a separate quality or validation plan in place for any global instances. The local MES quality or validation plan will also need to appropriately reference and align with the global quality and validation plan, because it is assumed that the local instances will leverage significant testing and qualification (e.g., operational qualification [OQ]) from the global instance, both of which may be needed as documented evidence during regulatory inspections. One final consideration is whether the global instance is for testing purposes only, or whether it is also used for manufacture of product. Either approach can be used, and the choice ultimately depends on what makes sense for the organization, although the global approach will require more rigorous project management, communication, and configuration management controls and processes.

RISK MANAGEMENT

Another important consideration is the use of risk management techniques to ensure that failure scenarios are identified and appropriate mitigations are put in place prior to use of the MES in pharmaceutical or medical device manufacturing. Such risk assessments should be used wherever possible, particularly when there may be limited knowledge of the possible failure scenarios, and should be focused on failure scenarios that can impact product quality or patient safety.
For example, risk assessments of interfaces between the MES and business ERP systems can be effective in identifying possible failure points, which can then be used to ensure appropriate monitoring is put in place prior to the commencement of manufacturing. Mitigation strategies identified as a result of the risk assessments may include, but are not limited to, additional testing (or, in some cases, reduced testing or testing that leverages vendor-supplied documentation), system re-design, procedural controls, or increased monitoring of system operations. After a mitigation strategy is implemented, it is important to verify its effectiveness. One way to assess the effectiveness of a mitigation strategy is to re-rank the risk scenario to determine whether the mitigation has reduced the risk to a lower priority.
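Re-ranking a risk scenario can be made concrete with a simple risk priority number (RPN). The sketch below is illustrative only: the 1-5 scales, the thresholds, and the example scores are assumptions, not values prescribed by GAMP 5 or ICH Q9.

```python
# Hypothetical sketch: re-rank a failure scenario after mitigation using
# an RPN (severity x occurrence x detection, each rated 1-5). The scales
# and priority thresholds are illustrative assumptions.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

def priority(score):
    if score >= 48:
        return "high"
    if score >= 16:
        return "medium"
    return "low"

# Example MES-to-ERP interface failure scenario, before mitigation
before = rpn(severity=4, occurrence=3, detection=4)   # poor detectability

# After adding interface monitoring, failures are detected much sooner
after = rpn(severity=4, occurrence=3, detection=1)

# Mitigation is considered effective if the scenario drops to a lower priority
mitigation_effective = after < before and priority(after) != priority(before)
```

Here severity and occurrence are unchanged (monitoring does not prevent the failure), but improved detection lowers the priority, which is exactly the re-ranking check described above.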
Risk Assessment Example

The use of a risk assessment to focus validation efforts on critical activities may include, for example, leveraging unit testing or development testing in place of operational qualification or system testing during the validation effort. The risk assessment should be completed following an approved SOP and documented accordingly, typically within a risk assessment report. In addition, the unit or development testing must be of GMP quality and completed in a controlled testing environment. A risk assessment that identifies low-risk functionality (as defined by the risk assessment SOP) may allow unit or development testing to be used in place of operational qualification or integrated system testing. Low-risk functions are typically those that have minimal impact on product quality, patient safety, or data integrity.

SYSTEM DEVELOPMENT LIFECYCLE

One final consideration for the MES validation and quality planning process is the type of software development process (e.g., traditional waterfall, rapid prototyping) that will be used. In general, if only very limited configuration changes are planned, a traditional waterfall method is probably more appropriate. However, if more sophisticated software configuration or some customization is needed, a more efficient iterative or rapid prototyping method should be used, appropriately managed using pre-approved SOPs and work instructions to ensure control of the software development process.

OPERATIONAL CONTROLS

Once the initial MES validation is completed, it is equally important that operational phase processes, including change, incident, and problem management and periodic review, along with appropriate business and quality governance, be established to ensure that the MES maintains its validated state.
It is critical that such operational phase processes exhibit the following characteristics:
- Managed by defined processes, including approved SOPs and work instructions
- IT-based incident and problem management systems aligned with any business corrective action and preventive action (CAPA) systems
- Change management processes that include both business and regulatory quality involvement
- Periodic review performed at appropriate time intervals
- Risk assessments revisited as appropriate based on incidents or problems
- Ongoing auditing of activities by an independent quality group.

When properly implemented, the MES validation process will include both initial development and operational phase processes as described in Figure 2.

SUMMARY

In summary, the considerations for validation of an MES are similar to those for any other type of IT or automation system used in pharmaceutical manufacturing. However, it is important to understand that in many cases the technological complexity of an MES compared with many other control technologies, including its interfaces with other systems, may increase the need for more rigorous review, testing, and oversight of the MES validation process. This increase in technical complexity, and the prevalent use of paperless batch records afforded by MES, will demand the highest level of inspection readiness during regulatory (e.g., EMA, FDA) reviews.

REFERENCE

ISPE, GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems, February 2008.

Originally Published in Journal of Validation Technology Volume 18 Number 1
About the Authors

Chris Wubbolt is a principal consultant at QACV Consulting specializing in computer systems compliance, including validation, as well as quality assurance activities such as auditing, training, and Six Sigma quality improvement processes. He may be reached by e-mail at [email protected].

John T. Patterson is the Senior Director of IT Compliance and is responsible for the overall regulatory compliance and inspection readiness of IT and automation capabilities within the Manufacturing and Supply Chain function of Merck & Co. He may be reached by e-mail at [email protected].
Lifecycle Considerations for Device Software

John E. Lincoln

INTRODUCTION

This discussion addresses methodology and documentation recommended by the US Food and Drug Administration to effectively address lifecycle considerations in software development. This includes steps to verify (or test) and validate software or firmware in a medical device used to assist in performance, provide a display, accept input, or perform similar functions. It also applies to software that is the actual product, such as imaging software. This discussion is based on the model provided in the FDA guidance document General Principles of Software Validation of January 11, 2002; much of the following information is taken from this document (1). The basic principles discussed can also be applied to software used in processes, production equipment, facilities, test equipment, and the quality management system (QMS) under the current good manufacturing practices (CGMPs), and in regulated industries other than devices. Note that QMS issues also require addressing the issues outlined in 21 CFR Part 11, Electronic Records/Electronic Signatures (2). While most books in the software industry treat the lifecycle as the development lifecycle up to product release, FDA and the International Organization for Standardization (ISO) expect the lifecycle to go far beyond development and initial release and to include updates, distribution and control, and other activities through final retirement and decommissioning. FDA-regulated companies frequently neglect these latter areas.

SOFTWARE IS DIFFERENT FROM HARDWARE

Software is different from hardware. This seems obvious, but it bears emphasis because it often is not given the consideration required in a company's CGMP systems. Conscious recognition of this fact is crucial to all product and software lifecycle activities. While software shares many of the same engineering tasks as hardware, it has some important differences.
Some of these differences are listed in the guidance as follows:
- The vast majority of software problems are traceable to errors made during the design and development process. While the quality of a hardware product is highly dependent on design, development, and manufacture, the quality of a software product is dependent primarily on design and development, with minimal concern for software manufacture. Software manufacturing consists of reproduction that can be easily verified.
- One of the most significant features of software is branching (i.e., the ability to execute alternative series of commands based on differing inputs). This feature is a major contributing factor to another characteristic of software: its complexity. Even short programs can be complex and difficult to fully understand.
- Testing alone cannot fully verify that software is complete and correct in most cases. In addition to testing, other verification techniques and a structured and documented development process should be combined to ensure a comprehensive validation approach.
- Unlike hardware, software is not a physical entity and does not wear out. In fact, software may improve with age, as latent defects are discovered and removed. However, as software is constantly updated and changed, such improvements are sometimes countered by new defects introduced
into the software during the change.
- Unlike some hardware failures, software failures occur without advance warning. The software branching that allows it to follow differing paths during execution may hide some latent defects until long after a software product has been introduced into the marketplace.
- Another related characteristic of software is the speed and ease with which it can be changed. This factor can lead both software and non-software professionals to believe that software problems can be corrected easily. Combined with a lack of understanding of software, it can lead managers to believe that tightly controlled engineering is not needed as much for software as it is for hardware. In fact, the opposite is true. Because of its complexity, the development process for software should be even more tightly controlled than for hardware, in order to prevent problems that cannot be easily detected later in the development process.
- Seemingly insignificant changes in software code can create unexpected and significant problems elsewhere in the software program. The software development process should be sufficiently well planned, controlled, and documented to detect and correct unexpected results from software changes.
- Given the high demand for software professionals and the highly mobile workforce, the software personnel who make maintenance changes to software may not have been involved in the original software development. Therefore, accurate and thorough documentation is essential (1).

For these and other reasons, software engineering needs an even greater level of managerial scrutiny and control than hardware engineering throughout their respective lifecycles, and the software lifecycle takes on even greater importance (1).

SOFTWARE LIFECYCLE ACTIVITIES

The FDA guidance does not recommend the use of any specific software lifecycle model.
Software developers are expected to establish a software lifecycle model that is appropriate for their product, with consideration of the product's risk to the end user, clinician, or patient, and in harmony with their company's standard operating procedures (SOPs) and structure. The software lifecycle model should cover the software from its birth to its retirement. Activities in a typical software lifecycle model as defined by the FDA guidance include the following:
- Quality planning
- System requirements definition
- Detailed software requirements specification
- Software design specification
- Construction or coding
- Testing or verification
- Installation
- Operation and support
- Maintenance
- Retirement
- Systems, validation, and controlled documentation to support the above (1).

Verification, testing, and other tasks that support software validation occur during many of these activities. A lifecycle model organizes these activities in various ways and provides a framework for monitoring and controlling the software development project. Several software lifecycle models (e.g., waterfall, spiral, rapid prototyping, and incremental development) are defined in FDA's Glossary of Computerized System and Software Development Terminology (3). If a product is sold outside the US and carries a CE mark, the company's notified body may have its own expectations as well. Developing and executing a software lifecycle program requires that the company write an SOP on the subject that includes either of the following:
- Pointers to other documents that address each of the bullet points defined as part of the software's lifecycle
- Excerpts from the appropriate documents, either in the SOP or in a separate document (the approach favored by the author), which would still require either some pointing or referencing, or a tight change control or "where-used" system.
Change control must ensure that when one of those linked documents is updated, the update is reviewed for possible impact on the SOP or
separate lifecycle document. A company may opt for some variant that results in a self-contained SOP, with either an attachment or a separate lifecycle document tied to a specific software application, revision, or release; a device master record (DMR) document; or a similar controlled document defined by the SOP. One of the FDA guidance documents on devices containing commercial off-the-shelf (COTS) software illustrates, by means of a flow chart, how a company should consider product hazards throughout the product's lifecycle (4). Whatever method a company initially selects, it can be developed, tried and refined, continuously improved, or ultimately completely revised as use and change history dictate. The software lifecycle as expected by FDA is broader than the scope of software validation in the strictest definition of that term. Planning, verification, testing, traceability, configuration management, and many other aspects of good software engineering discussed under lifecycle management are important activities that together help to support a final conclusion that software is validated at any point in its lifecycle. In any such activity there must be an integration of software lifecycle management and risk management activities. Based on the intended use and the safety risk associated with the software to be developed, the software developer should determine the specific approach, the combination of techniques to be used, and the level of effort to be applied. Software validation and verification activities must be conducted throughout the entire software lifecycle. When the software is developed by someone other than the device manufacturer (e.g., off-the-shelf software or contracted software development), the software developer may not be directly responsible for compliance with FDA regulations.
In that case, the party with regulatory responsibility (i.e., the device manufacturer) needs to assess the adequacy of the off-the-shelf software developer's activities and determine what additional efforts by the company or the contractor are needed to establish that the software is validated for the device manufacturer's intended use and meets regulatory requirements.

CHANGE CONTROL THROUGHOUT THE SOFTWARE LIFECYCLE

An FDA analysis of medical device recalls revealed that approximately 8% of recalls are attributable to software failures. Of those software-related recalls, the majority (~79%) were caused by software defects that were introduced when changes were made to the software after its initial production and distribution (1). Software validation and other related good software engineering practices over the product's or software's lifecycle are viewed by FDA as a principal means of avoiding such defects and the resultant recalls. Change control, including verification and regression testing or validation of software changes, needs to be a prominent part of the company's software lifecycle management program. Software may be developed in-house or under contract; however, software is frequently purchased off-the-shelf for a particular intended use. All production or quality system software, even if purchased off-the-shelf, must have documented requirements that fully define its intended use, and against which testing results and other evidence can be compared to show that the software is validated for that intended use. FDA requirements on software lifecycle issues such as development, changes, verification, and validation are not new. Validation of software using the principles and tasks discussed here has been conducted in many segments of the software industry for more than 30 years.
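The requirement that even off-the-shelf software have documented requirements against which testing results can be compared is commonly implemented as a traceability check: every requirement must be covered by passing test evidence. The sketch below illustrates one such check; the requirement IDs, test-case names, and record layout are illustrative assumptions, not a prescribed FDA format.

```python
# Hypothetical sketch: verify that every documented requirement is covered
# by at least one passing test case. IDs and record layout are illustrative
# assumptions.

requirements = ["URS-001", "URS-002", "URS-003"]

test_evidence = [
    {"test": "TC-10", "covers": "URS-001", "result": "pass"},
    {"test": "TC-11", "covers": "URS-002", "result": "pass"},
    {"test": "TC-12", "covers": "URS-002", "result": "fail"},
]

def uncovered(requirements, evidence):
    """Return requirements with no passing test evidence -- each one is a
    gap in the objective evidence that the software meets its intended use."""
    covered = {e["covers"] for e in evidence if e["result"] == "pass"}
    return [r for r in requirements if r not in covered]

gaps = uncovered(requirements, test_evidence)
```

Here URS-003 has no passing evidence and would be flagged before a conclusion of "validated for its intended use" could be drawn.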
REQUIREMENTS AND SPECIFICATIONS

The quality system regulation states that design input requirements must be documented and that specified requirements must be verified. A documented software requirements specification provides a baseline for both validation and verification. The software validation process cannot be completed without an established software requirements specification (see 21 CFR 820.3(z) and (aa) and 820.30(f) and (g)) (5). During audits, FDA will carefully review requirements and specifications lists against lifecycle activities and validation test cases to ensure that the basic points are understood, addressed, validated by test cases, and fully documented. The company should do no less under a formal documented system.

VERIFICATION AND VALIDATION (V&V)
The quality system regulation is harmonized with ISO 8402:1994, which treats verification and validation as separate and distinct terms. On the other hand, many software engineering journal articles and textbooks use the terms verification and validation interchangeably, or in some cases refer to software verification, validation, and testing (VV&T) as if it were a single concept with no distinction among the three terms. The author generally defines verification as a specific test, inspection, checklist, or similar activity. The author generally defines validation as a collection of verification activities, destructive or non-destructive, that shows scientifically that the specified requirements comprising the system's objectives have been consistently met. A conclusion that software is validated is highly dependent upon comprehensive software testing, inspections, analyses, and other verification tasks performed at each stage of the software development lifecycle. Testing of device software functionality in a simulated use environment (increasingly stressed by FDA) and user-site testing are typically included as components of an overall design validation program for a software-automated device.

RISK-BASED

Software V&V are difficult because a developer cannot test forever, and it is hard to know how much evidence is enough. In large measure, software validation is a matter of developing a level of confidence that the device meets all requirements and user expectations for the software-automated functions and features of the device. The level of confidence, and therefore the level of software validation, verification, and testing effort needed, will vary depending upon the safety risk (hazard) posed by the automated functions of the device. Expected risk analysis tools are outlined in ISO 14971 (which addresses devices) and ICH Q9 (which addresses pharmaceuticals and is useful for basic principles for devices).
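The idea that the level of V&V effort scales with the safety risk of the automated function can be sketched as a simple mapping from hazard class to planned verification tasks. The classes and task sets below are illustrative assumptions for discussion, not a classification taken from ISO 14971 or the FDA guidance.

```python
# Hypothetical sketch: scale planned verification tasks with the safety
# risk (hazard) of an automated function. Hazard classes and task lists
# are illustrative assumptions, not a regulatory scheme.

VV_TASKS = {
    "low":    ["code review", "unit testing"],
    "medium": ["code review", "unit testing", "integration testing"],
    "high":   ["code review", "unit testing", "integration testing",
               "system testing", "simulated-use testing"],
}

def planned_tasks(hazard):
    """Return the verification tasks planned for a given hazard class."""
    if hazard not in VV_TASKS:
        raise ValueError(f"unknown hazard class: {hazard}")
    return VV_TASKS[hazard]
```

A real program would derive the hazard class from a documented risk analysis and record the task selection, with justification, in the validation plan.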
Additional guidance regarding safety risk management for software may be found in Section 4 of FDA's Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, and in ANSI/AAMI/ISO 14971 and IEC 60601-1-4 (6-8).

DEFECT PREVENTION

Software quality assurance needs to focus on preventing the introduction of defects into the software development and lifecycle processes, not on trying to test quality into the software code after it is written or changed. Software testing is limited in its ability to surface all latent defects in software code. The complexity of most software prevents it from being exhaustively tested. Software testing is necessary but, in most cases, not sufficient by itself to establish confidence that the software is fit for its intended use. To establish that confidence, software developers should use a mixture of methods and techniques to prevent software errors and to detect the software errors that do occur. The best mix of methods throughout the software's life depends on many factors, including the development environment, application and intended use, project scope, programming language, and product end-use risk. Software validation is a major element of software maintenance throughout its lifecycle, and thus takes place within the environment of an established software lifecycle. The software lifecycle is usually defined generally by SOP and specifically by a dedicated plan or plans. Both the systemic lifecycle SOP and the product- or process-specific plan list the software engineering tasks and documentation necessary to support the software validation efforts and associated activities from cradle to grave. In addition, the software lifecycle plan and resultant activities contain specific verification and validation tasks appropriate for the intended use of the software.
The final conclusion that the software is validated at any point in time should be based on evidence collected from planned efforts conducted throughout the software lifecycle up to that point in time.

SOFTWARE VALIDATION AFTER CHANGES

The guidance states that, due to the complexity of software, a seemingly small local change may have a significant global system impact. When any change is made to the software, the validation status of the software needs to be re-established. Whenever software is changed, a validation analysis should be conducted, not just to validate the individual change, but also to determine the extent and impact of that change on the entire software system (1). The software developer then conducts an appropriate level of software regression testing to show that unchanged but
vulnerable portions of the system have not been adversely affected. Design controls and appropriate regression testing provide the confidence that the software is validated after a software change (1). Validation documentation should be sufficient to demonstrate that all software validation plans and procedures have been completed successfully.

DESIGN REVIEWS

Design reviews should be held throughout the software's lifecycle, especially at important junctures. Reviews are required during the development process to verify the completion of one key milestone prior to moving on to the next. The review team must have at least one independent member (i.e., one who has no vested interest in the outcome reached during the review process). Documented design reviews should also be held for regression testing of proposed or implemented changes, new revisions or releases, major hardware platform changes, and decommissioning. Validation activities should be conducted using the basic quality assurance precept of independence of review: assign or contract review members who are not involved in a particular design or its implementation, but who have sufficient knowledge to evaluate the project and conduct the verification and validation activities (1). The guidance states that answers to some key questions should be documented during formal design reviews. These include:
- Have the appropriate tasks and expected results, outputs, or products been established for each software lifecycle activity?
- Do the tasks and expected results, outputs, or products of each software lifecycle activity comply with the requirements of other software lifecycle activities in terms of correctness, completeness, consistency, and accuracy? Do they satisfy the standards, practices, and conventions of that activity? Do they establish a proper basis for initiating tasks for the next software lifecycle activity?
(1)

ACTIVITIES AND TASKS

Software validation and other lifecycle activities are accomplished through a series of milestones and tasks that are planned and executed at various stages of the software development lifecycle, including maintenance and change. These tasks may be one-time occurrences or may be iterated many times, depending on the lifecycle model used and the scope of changes made as the software project progresses (1). For each of the software lifecycle activities, there are certain typical tasks that support a conclusion that the software is validated and works as intended. However, the specific tasks to be performed, their order of performance, and the iteration and timing of their performance will be dictated by the specific software lifecycle model that is selected and defined by SOP, and by the safety risk associated with the software application. For low-risk applications, certain tasks may not be needed at all. However, the software developer should at least consider each of these tasks and should define and document which tasks are or are not appropriate for the specific application. This discussion (and the FDA guidance document [1]) is meant to be generic and is not intended to prescribe any particular software lifecycle model or any particular order in which tasks are to be performed. Once a software product has been baselined (i.e., approved), any change to that product should have its own mini lifecycle plan, including testing. Testing of a changed software product requires additional effort. Not only should testing demonstrate that the change was implemented correctly, it should also demonstrate that the change did not adversely impact other parts of the software product.

MAINTENANCE AND SOFTWARE CHANGES

As applied to software, the term maintenance does not mean the same as when applied to hardware.
Software maintenance includes corrective, perfective, and adaptive maintenance, but does not include preventive maintenance actions or software component replacement (1). The software validation guidance states that changes made to correct errors and faults in the software are corrective maintenance; changes made to improve the performance, maintainability, or other attributes of the software system are perfective maintenance; and software changes to make the software system usable in a changed environment are adaptive maintenance. When changes are made to a software system,
either during initial development or during post-release maintenance, sufficient regression analysis and testing should be conducted to demonstrate that portions of the software not involved in the change were not adversely impacted. This is in addition to testing that evaluates the correctness of the implemented change. The specific validation effort necessary for each software change is determined by the type of change, the development products affected, and the impact of those products on the operation of the software (1). Whether production or quality system software is developed in-house by the device manufacturer, developed by a contractor, or purchased off-the-shelf, it should be developed using the basic principles outlined elsewhere in the guidance. The device manufacturer has latitude and flexibility in defining how validation of that software will be accomplished. The software developer defines a lifecycle model. Validation is typically supported by verifications of the outputs from each stage of that software development lifecycle and by checking for proper operation of the finished software in the device manufacturer's intended use environment.

REGRESSION TESTING

Regression analysis and testing are employed to provide assurance that a change has not created problems elsewhere in the software product. Regression analysis is the determination of the impact of a change based on review of the relevant documentation (e.g., software requirements specification, software design specification, source code, test plans, test cases, and test scripts) in order to identify the necessary regression tests to be run. Regression testing is the rerunning of test cases that a program has previously executed correctly, and comparing the current results to the previous results, in order to detect unintended effects of a software change.
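Regression testing as just defined, rerunning previously passing cases and comparing current results against the recorded baseline, can be sketched in a few lines. The function under test and the baseline format below are illustrative assumptions, not taken from the guidance.

```python
# Hypothetical sketch of regression testing: rerun cases that previously
# passed and flag any whose current output differs from the recorded
# baseline. The dose-volume function is an illustrative stand-in.

def dose_volume(dose_mg, concentration_mg_per_ml):
    """Stand-in device-software function: volume (mL) for a requested dose."""
    return dose_mg / concentration_mg_per_ml

# Baseline: (inputs, previously observed correct output)
baseline = [
    ((500, 50), 10.0),
    ((250, 50), 5.0),
    ((100, 25), 4.0),
    ((75, 50), 1.5),
]

def regression_failures(func, baseline):
    """Return (inputs, expected, actual) for cases that no longer match."""
    return [(args, expected, func(*args))
            for args, expected in baseline
            if func(*args) != expected]

# Unchanged software: every baseline case still matches
failures = regression_failures(dose_volume, baseline)

# A "seemingly small" change -- rounding introduced -- drifts from baseline
def dose_volume_changed(dose_mg, concentration_mg_per_ml):
    return round(dose_mg / concentration_mg_per_ml)

caught = regression_failures(dose_volume_changed, baseline)
```

Only the fractional-dose case drifts, which is exactly the kind of unintended effect that comparing current results to previous results is meant to surface.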
Regression analysis and regression testing should also be employed when using integration methods to build a software product, to ensure that newly integrated modules do not adversely impact the operation of previously integrated modules.

HOW MUCH VALIDATION EVIDENCE IS NEEDED?

The level of validation effort should be commensurate with the risk posed by the product or automated operation and with the lifecycle stage of the software. In addition to risk, other factors influence the level of validation testing. These include the complexity of the product or process software, the degree to which the user or device manufacturer depends on that product or automated process to produce a safe and effective outcome, and the software's use and reliability history up to that point. The company then determines the nature and extent of testing needed as part of the next validation effort. Documented requirements and a product risk analysis (tied to the end user of the resultant device) help to define the scope of the evidence needed to show that the software is validated for its intended use. These decisions should be recorded in the software documentation, preferably tied to specific product risk document sections, paragraphs, or line items.

BENEFITS OF SOFTWARE LIFECYCLE ACTIVITIES

Software lifecycle activities and their validation are a major part of the development and maintenance of software throughout its lifecycle. They play a critical role in assuring the quality of device software and software-automated operations. Software validation can increase the usability and reliability of the device, resulting in decreased failure rates, fewer recalls and corrective actions, less risk to patients and users, and reduced liability for device manufacturers. Well-documented software validation can also reduce long-term costs by making it easier and less costly to reliably modify software and revalidate software changes.
Software maintenance can represent a large percentage of the total cost of software over its entire lifecycle. An established, comprehensive software validation process helps to reduce this long-term cost by reducing the cost of validation for each subsequent release of the software.

REFERENCES
1. FDA, Guidance Document, General Principles of Software Validation; Final Guidance for Industry and FDA Staff, January 11, 2002.
2. FDA, 21 CFR Part 11, Electronic Records; Electronic Signatures; Final Rule, 62 Federal Register 13430, March 20, 1997.
3. FDA, Glossary of Computerized System and Software
Development Terminology, Division of Field Investigations, Office of Regional Operations, Office of Regulatory Affairs, August 1995.
4. FDA, Guidance for Industry, FDA Reviewers and Compliance on Off-the-Shelf Software Use in Medical Devices, Office of Device Evaluation, Center for Devices and Radiological Health, September 1999.
5. FDA, Quality System Regulation, 21 CFR Part 820, Medical Devices; Current Good Manufacturing Practice (CGMP); Final Rule.
6. ANSI/AAMI/ISO 14971:2007/2009, Medical devices - Application of risk management to medical devices.
7. FDA, Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, Office of Device Evaluation, Center for Devices and Radiological Health, May 11, 2005.
8. IEC 60601-1-4, Medical electrical equipment, Part 1-4: General requirements for safety - Collateral standard: Programmable electrical medical systems, ed. 1.1.

GENERAL REFERENCES
FDA, Design Control Guidance for Medical Device Manufacturers, Center for Devices and Radiological Health, March 1997.
FDA, Do It by Design: An Introduction to Human Factors in Medical Devices, Center for Devices and Radiological Health, 1996.

About the Author

John E. Lincoln, principal consultant, J. E. Lincoln and Associates LLC, assists companies in the design and implementation of complete 21 CFR 111, 210, 211, and 820 and ISO quality management systems that are fully CGMP-compliant and have passed FDA audits. He may be reached by e-mail.

ARTICLE ACRONYM LISTING
CDRH: FDA Center for Devices and Radiological Health
CGMPs: Current Good Manufacturing Practices (for devices, 21 CFR Part 820, Quality System Regulation)
CFR: US
Code of Federal Regulation
COTS: Commercial Off-The-Shelf (Software)
FDA: US Food and Drug Administration
IDE: FDA Investigational Device Exemption
ISO: International Organization for Standardization
QA: Quality Assurance
RA: Regulatory Affairs
R&D: Research and Development
SOP: Standard Operating Procedure
V&V (or V[T]&V): Verification [Testing] and Validation

Originally Published in Journal of Validation Technology Volume 18 Number 2
Storm Clouds? Cloud Computing in a Regulated Environment
Robert H. Smith

INTRODUCTION
It is hard to look at anything computer-related today and not see bold headlines proclaiming the power of Cloud computing. One sees all kinds of messages suggesting we should be moving to the Cloud: "Journey to the Cloud," "Why you should have a Cloud computing strategy," and "Majority of CTOs have a Cloud strategy" are but a few recent banners. But in a US Food and Drug Administration (FDA)-regulated environment, should we be moving to the Cloud? What actually is Cloud computing? Is it easy? Should it be? What is FDA's opinion? What would a journey to the Cloud look like in a regulated company?

CLOUD COMPUTING DEFINED
Precisely defined, Cloud computing possesses all of the following elements:
On-demand self-service: fully automated acquisition and productivity
Resource pooling: shared computing resources
Rapid elasticity: the ability to add or remove resources seamlessly
Utility billing: pay-per-use or metered consumption

On-Demand Self-Service
The central idea here is that if users want a computing environment, they can get one and be quickly, if not immediately, productive. For example, a project manager on a compliance project decides that the team needs a collaboration tool (like a wiki) or an issue-tracking tool. With a few clicks, it's on its way. You can imagine this is similar to the shopping cart on a popular website: a few clicks and your electronic application is on its way and ready to be used.

Resource Pooling
Today's computers are massively powerful and typically under-used. Resource pooling builds on the idea that under-utilized physical computers can each run hundreds of virtual computers: one set of physical hardware runs many virtual machines. This is called multi-tenancy: one computer with many distinct compute workloads running.
You can think of this like adding an operation to a station on an assembly line. If a station on the line completes an operation and then waits three minutes for the next tray, you can add three more minutes of work to that station without impacting performance: the station now performs two operations, but at no additional cost.

Rapid Elasticity
If you need more computing resources, you get them. If the application needs additional
CPU (central processing unit) power, additional memory, or additional storage, the Cloud environment can provide it. This can often take place without the user needing to do anything; the application and storage can move in the Cloud to a new location where the resources are available. Conversely, computing resources can be turned in when not needed. To sum up, Cloud computing offers users a way to add or release computing resources based on need. One can think of this as a perfectly elastic labor pool: each morning one could walk out front of the plant and get exactly the number of workers needed, even if that was triple or half of yesterday's.

Utility Billing
The electricity service at our homes is a familiar model that applies to Cloud billing. There is usually some base fee or minimum to use the service; after that you can use all you want, you just have to pay for it. One main difference is that for electricity and water we often pay more per unit the more we use, while in Cloud computing one will often pay less per additional compute unit (usually measured in things like the amount of storage or number of bytes transferred). Generally, the more one uses the more one pays in total, and the less one uses the less one pays in total.

CLOUD COMPUTING MODELS
Before we explore these points from a regulated company's point of view, let's look at how Cloud services can be made available. There are four basic models, as follows:

Public Cloud. In the public Cloud, services are available to anyone who wants them and can pay for them. A growing number of organizations offer these services.

Private Cloud. In this model, an internal IT organization builds and manages the infrastructure, creating an internal version of the public Cloud.

Out-Sourced Private Cloud.
In this model, the organization contracts a third-party organization that specializes in providing the Private Cloud capability.

Hybrid Cloud. In this model, two of the models above are combined.

Done correctly, Cloud computing is cost effective. The cost savings come on many fronts, including staff, hardware, software, supporting infrastructure, and efficient utilization of those elements by sharing them across a pool of users. For common application types, one can have a highly functioning environment in less than 30 minutes. In many organizations, a similar environment could take weeks to months to procure and could easily cost ten times more. The compelling case for use results from this agility and low cost. To many technical pundits, Cloud computing also represents a fundamental change in the computing environment. Unisys recently gave a webcast presentation with the analogy that in the 1700s, a mill needed to be located near a river or stream so that it could use the flowing water to generate mechanical power. That model defined early logging and mineral operations well into the 1900s. Steam engines and other advances improved the geographical options, but the model was still one of self-generated power. In the 1900s, the rapid growth and acceptance of off-premises generated energy transformed the landscape; today, organizations maintain only small emergency power-generating capabilities, if any at all. Unisys and others predict that a similar transformation is rapidly occurring in the computing space. The vision for the future is that much of the computing we rely on today will be done off-premises by Cloud providers. Cloud computing is certainly a compelling and interesting paradigm change. The simple fact that a user can get a powerful and capable computing platform or application in a few minutes, cheaply, and for a low fee, makes the timeline for this paradigm to take root a question of when, not if.

A Word of Caution
Readers must be cautious.
It is early in the Cloud marketplace and there are healthy amounts of hype. Many vendors are just putting "Cloud" in front of what they were already doing, or they are rushing immature offerings into the market. "Buyer beware" is prudent advice today; Forrester Research calls this "Cloud washing" (1). There is a similar paradigm, but very different offering, called software-as-a-service (SaaS). SaaS may or may
not be implemented using true Cloud technology, so it is important to peel off the marketing hype. A SaaS company may host all the computing power it needs in a traditional data center, have perfect controls, and a solid contract that could make it an excellent candidate for a regulated company. At the same time, the same software hosted in a public Cloud would not be a good candidate for a regulated company. SaaS is a similar idea: you can buy software on a service basis as you need it and pay for what you use. But SaaS can be implemented in many ways, and it is essential to understand the vendor's offering in the context of the other points this column discusses. In environments regulated by such agencies or rules as the Credit Card Act, FDA, the Health Insurance Portability and Accountability Act (HIPAA), and the Sarbanes-Oxley Act of 2002 (SOX), not all the implications of Cloud computing are well understood or even defined. Regulators may or may not align with some of the basic premises of Cloud computing. Cloud computing is a promising option because many vendors offer basic applications and computing platforms, and many are easy to use and compelling; however, in the GXP landscape there is more to consider. Promise is not enough. For example, Amazon's S3 (Simple Storage Service, probably the most popular Cloud service today) can be mis-configured and inadvertently expose data, and there are widespread reports that tools are being developed to exploit these mis-configurations to gain access to data. For most organizations, GXP or not, that kind of business risk would be unacceptable.

THE REGULATED ENVIRONMENT
If one takes the time to read FDA warning letters, it is clear that for both small and large organizations it is easy to stray outside the intent of the regulations and good practices. We do not have guidance from FDA on Cloud computing; however, some GXP pundits and even FDA representatives have expressed concern.
The concern of governing bodies with Cloud concepts makes sense. For example, governing bodies want access to supporting electronic data and systems. What if the data are located in a country where the agency does not have jurisdiction? To have jurisdiction, the data must reside where FDA, through the US Marshals Service, has jurisdiction. This raises a complex issue: some part of the data, or the data in context, may reside in a place where there is no direct jurisdiction. As a practical matter, FDA has the power to take the product off the market, but that may not be enough if there is a concern about patient safety or some level of misconduct. Related to this, let's say FDA would like to inspect backup tapes. That is a reasonable request if data integrity is in question for any reason. But let's say that Company A's data is at a Cloud provider. Because the Cloud is multi-tenant, those backup tapes may contain data from 5 or 50 other companies. If the agency is investigating data integrity, it probably is not interested in a lot of pre-processing before it gets the data. So what is it going to get? Probably all the data. If you have nothing at all to do with Company A, do you want FDA's investigation to include your data? It is possible that user data are segregated; that can solve part, but not all, of the problem. FDA is becoming more thorough and has started to look at audit and event logs. Reviewing the data in context requires access to applications, systems, and potentially other tools that are not likely to be segregated. These also may be missing other key information or may not be retained for sufficiently long periods. Next is the question of validation for intended use. At some level, in a Cloud multi-tenant offering, all companies would be using the same features and functions in the same way.
But it is likely that at the detailed level, the configurations, settings, and options will be different, or the user may require different interfaces to other systems. Who controls the validated state? Who is the subject matter expert during an inspection? Who defines the retention and archive policies? These questions have to be answered and understood within the constraints of the regulations and each company's tolerance for risk. In the regulated landscape, we can't assume, and we certainly have learned we can't rely on vendor claims or marketing hype. Related to understanding risk, the legal landscape is evolving. There are many complex, and at times conflicting, requirements that companies face. In a Cloud environment the location of data and processing is mobile: they can move from system to system and from geography to geography. The mobility of data and computing can put companies at risk or clearly place them in violation of 21 CFR Part 11 and other regulations. Data
controls that do not violate one region's laws can violate another's. An analogous industry, the Payment Card Industry (PCI), is consistently improving its security posture for both brand protection and economic reasons. We can compare the pharmaceutical industry to the PCI industry, as both have self-governance and government regulation similar to GXP companies. Even though it is constantly improving, the PCI Security Standards Council is warning merchants about the complexities of protecting credit card data running in virtualized systems and cautioning that some configurations may make it nearly impossible for organizations to achieve compliance. Over time these issues will be addressed, but they provide a reference example of the complexities of meeting regulations using this emerging paradigm. The PCI Security Standards Council warned organizations against mixing virtual machines of different security levels when protecting credit card data, suggesting that isolating systems containing cardholder data might be impossible if the in-scope and out-of-scope software components are hosted on the same hypervisor (i.e., the software layer that allows many virtualized systems to run on one physical system). The PCI guidance prohibits different security levels from co-existing on the same server. While one can argue this point technically, it still shows how industry guidance can conflict with Cloud offerings. Companies planning to use these technologies need to understand how regulators interpret the technology.

JOURNEY TO THE CLOUD
The Cloud has great promise. The following are some things to consider before you start the journey. First, according to Unisys and Forrester, most organizations take three to five years to make the transition (2). That timeline is for non-regulated companies. A regulated company may take longer.
In the GXP arena, companies need to consider quite carefully the lack of clear guidance and be aware that some regulators have expressed reservations. These reservations have sound rationale based on related findings in non-Cloud environments that may be exacerbated by some vectors of Cloud offerings (e.g., resource pooling and rapid elasticity). These need to be well understood before using Cloud offerings. A plan or strategy to take advantage of the benefits of the Cloud should include the following:

Server consolidation. Companies should have a plan to move from physical devices to internal virtualized computing. This is a first step and can take one to three years. It is also a good time to perform application consolidation.

Automated administration and deployment of systems. The focus should be on defining the smallest number of systems and related system policies. Generally, fewer system policies lead to better policies that are applied more consistently. Keeping the number low also helps prevent mix-ups and reduces the opportunity to select improperly. Development of standard IT policies (that address governing-body requirements) will be done during this phase.

Build an internal Private Cloud operated by your IT department. This is important to ensure that core assumptions have been tested and processes are documented. Smaller companies should get a thorough outside review and audit of the Private Cloud. Larger organizations that have internal audit departments should likewise practice inspections and ensure that all relevant data and evidence are available, sufficient, and robust enough to demonstrate adherence to GXP regulations and company policy. Specific operational policy adjustments and supporting document adjustments are almost certainly required.

Augment personnel skills. This can include retraining, new additions, or outside assistance. This is an essential part of the process.
Cloud computing is a shift in paradigm and will require a change management program for the IT team. It also introduces new roles related to architecture and security.

Define your business goals and optimize accordingly. Different Cloud offerings are targeted at different users, some technical and some non-technical.

Review vendors carefully. The contract and legal teams must be involved. The regulatory implications of Public Cloud computing may be insurmountable in the short term. Private Cloud providers may be able to meet your company's requirements, but the contract and data protection elements must be well thought
out and tested. If the agreement ends, how do you get the data back with supporting audit trails? Can you show both in context? Will you be able to do so in 5, 10, 50, or 100 years? What is the disaster recovery plan? How is disaster recovery practiced? Does the disaster recovery plan make assumptions that align with your company's risk model? Does the Cloud provider have a law enforcement notification policy? Does it have a customer notification policy in the event of a security incident? What are the controls and guarantees? Make sure to verify that legal provisions can be translated into technical milestones. For example, can you get a backup tape back on your site and the data extracted in context to give to an agency that requests it? Does the provider have robust change control, and do you have access to the records? Make sure your risk management, compliance, and regulatory affairs teams understand the new risks introduced. For some applications, anything beyond a Private Cloud offering may be too extreme. It is important not to assume that using Cloud computing automatically means a more secure and compliant system. As an example of how it could be worse, one mistake on the part of the provider could affect tens or hundreds of systems because of the inherent re-use found in a Cloud offering.

FINAL THOUGHTS
Secure Cloud computing is evolving. A working group, the Cloud Security Alliance, is doing excellent work in this area and making tremendous progress quickly. It is interesting to note that their security guidance document is already at version three; this shows their agility and commitment, but also the velocity of change as Cloud computing marches forward (3). Cloud computing and Private Cloud computing offer society tremendous benefits.
There are legitimate examples in non-regulated companies of returns on investment (ROIs) of 300% and up, making this an attractive way to take cost out of the business and improve service. Public Cloud offerings are probably too risky for regulated applications until the industry matures. Private Cloud strategies offer promise and significant ROIs through better computer resource utilization, lower power consumption, and better staff utilization. However, a prudent plan will be to view this as a new paradigm. The new paradigm requires new roles, new processes, and a refined risk model so that leaders in GXP environments do not find themselves in a bad storm.

REFERENCES
1. James Staten, Cloud Is Defined, Now Stop the Cloudwashing, Forrester Blogs, October.
2. John Brand, Mapping Out The Journey To Private Cloud Enablement, Forrester Research, December 1.
3. Cloud Security Alliance, Press Release, Cloud Security Alliance unveils 2011 initiatives at CSA Summit at RSA, February 15, 2011.

Originally Published in Journal of Validation Technology Volume 15 Number 4

About the Author
Robert Smith is Director of Technology Services at the University of California, Riverside. Robert has 25 years of software and systems experience, including start-up, FDA/GXP-regulated, internal-use, and commercial systems. He holds CISSP and PMP credentials. Robert can be reached at [email protected].
Building High Quality Software
Robert H. Smith

INTRODUCTION
User satisfaction (i.e., how much the user likes the software) is just one aspect of software quality. Other aspects include the number of bugs, the number of requirements met, and ease of use. How do we build high quality software in a regulated environment? It can be quite challenging as the industry is squeezed economically and IT departments everywhere are chartered to do more with less to save money. IT departments are not the only ones doing more with less: the business units that will use the software are also challenged to provide increased productivity with fewer dollars. No company, or organization within a company, has more funding than it needs. What are some workable strategies to reduce costs and increase quality? How can teams balance demands to reduce costs, meet increasingly complex regulations, and demonstrate agility? Good software quality must, of course, be built in and not tested in at the end. So how do we build good quality software? This discussion addresses the following key questions:
What will make the system easy to use?
Can complex business processes be simplified?
How are the business processes likely to change?
Who are the true subject matter experts (SMEs)?
Who are the users?
Does the team have access to good software designers?
What are the security needs?
Where are the risks?

Taking time at the start of a project to consider questions like these can make the difference between a system with high quality and one with low quality (i.e., a system that does not accomplish what was intended or what everyone hoped it would do).

VALIDATION TESTING AND QUALITY TESTING
Before we tackle these questions and their importance to quality software, let us consider traditional testing at the end. In the last two years, the teams with whom I have been working have adopted a new strategy: separate validation testing and quality testing.
Validation testing is defined to demonstrate intended use; in other words, does the software meet its requirements? Quality testing is basically everything else. Validation testing and quality testing are separated because validation testing is expensive. It's expensive because everything is formal: all test cases must be reviewed and approved, results must be reviewed and approved, and evidence of performing these reviews and approvals must be kept. These are good practices designed to meet regulations. Quality testing can have different objectives. A testing lead can tell a tester, "This is a critical process; try all the things you can think of to break it." A good tester (i.e., one who understands the domain, knows the things users will do, and knows how the software works) can really excel here. For example, suppose your company has a policy that only the Microsoft Internet Explorer browser can be used and users cannot install any toolbars or add-ons. Realistically, your validation test cases are certainly not going to cover odd combinations of browser add-ons. But these little gadgets can and do affect systems, and it is in these types of cases that testers can experiment and find ways users may affect a system unexpectedly. A few years ago we had a user (there was no such policy at the time) install two "remember my password" toolbars. These two toolbars appended the current password attempt to the last. While the password was no good, it grew and grew and
eventually caused a system crash. No company is going to write formal validation test cases like that for non-critical systems. However, these are the kinds of things that good testers can find and correct for far less money than trying to formally envision this type of bug up front. By separating validation testing (more expensive) and quality testing (less expensive), you get more testing and, therefore, a better chance for an optimum outcome. Just like the products we make in the life sciences business, quality is best designed in at the start of a software project.

What Will Make the System Easy to Use?
This seems like an obvious question, but it is often overlooked. For example, in the development of a brand new large system, some great techniques like rapid prototyping and iterative design were used. This process led to a robust system delivered on time and on budget. But as the system progressed, more and more functionality was added to the core use case. The result was a workflow that was too time-consuming and too tedious for the high-volume users. The system had step-by-step panels that walked occasional users through the process of releasing a document in the global regulated environment, but there was a hidden use case for high-volume users. Had the users and their respective needs been better defined, a better 1.0 system could have been delivered.

Can Complex Business Processes Be Simplified?
Software bugs often hide in complexity. Complex logic is hard to maintain, and it is hard to anticipate how it will be used. Let's look at an example that ties three easy requirements together and see what might happen:
1. The system shall support a variable number of sites read from a configuration table in the database. Note: for testing purposes, consider 0 sites, 1 site, 25 sites (the current number), and 50 sites.
2. The system shall support a variable number of document prefixes read from a configuration table in the database.
Note: for testing purposes, consider 1, 35 (the current number), and 50 prefixes.
3. The system shall support a variable number of site + prefix specific approval workflows read from a configuration table in the database. Note: for testing purposes, consider 1, 7 (the current number), and 15 workflows.

At the large end we'd need to iterate through 37,500 test scenarios: 50 sites x 50 prefixes x 15 workflows. This problem screams for automation; certainly people are not going to write and execute 37,500 test iterations by hand, so a team might set off writing automated tests. But would it not be better to ask why we are letting each site determine its own rules for approving documents? What if we took a global approach? A total of 37,500 possibilities is a quality problem. How are we going to test? What if we design quality in up front and align on a simpler global business process? By aligning to one business process across all sites, we can reduce the problem to 750 iterations. If we then narrow the approval routes to only three or four, the problem becomes 150 to 200 iterations, something reasonable from a system test point of view. There is also a great business advantage to simplifying the process: business process simplification is a great way to design in quality. It is easier for users to understand, easier to implement, easier to test, and, finally, easier to maintain.

How Are the Business Processes Likely to Change?
Knowing where and how business processes are likely to change can help the design and test teams focus their testing and reviews. Often the person changing the system, particularly when using third-party developers, is not the person who designed it. The person implementing a downstream change will most likely be at a disadvantage in terms of knowing how the system works and thus more likely to make mistakes.
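The scenario arithmetic in the simplification example above is easy to check mechanically. A minimal sketch (the helper name and the use of a plain cross product are illustrative assumptions, not part of the article's method):

```python
from itertools import product

def scenario_count(sites, prefixes, workflows):
    # Each test scenario is one combination of site, prefix, and workflow,
    # so the scenario space is the cross product of the three option sets.
    return len(list(product(range(sites), range(prefixes), range(workflows))))

# Worst case from the requirements notes: 50 sites x 50 prefixes x 15 workflows.
print(scenario_count(50, 50, 15))  # 37500

# One global business process across all sites.
print(scenario_count(1, 50, 15))   # 750

# Approval routes narrowed to three or four.
print(scenario_count(1, 50, 3))    # 150
print(scenario_count(1, 50, 4))    # 200
```

Counting before testing makes the design trade-off visible: simplifying the process shrinks the test space far more cheaply than automating an enormous one.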
Of course, documentation helps, but we have all read a document that was perfectly clear to the author yet completely vague to the reader. A perfect example is a "some assembly required" toy. These toys can certainly be assembled, and we have no doubt that some person somewhere can follow the instructions, yet we stare at them trying to figure out how to put pin A in hole 1c while rotating part A clockwise.

Who Are the True Subject Matter Experts?
Often a proxy represents the true subject matter expert (SME). Teams follow the proxy and build software to
specification and user (proxy SME) feedback. Then the real SME says, "Why can't it do oxicution flips? It must be able to do those on furfle day." The team goes off to find some way to squeeze oxicution flips in on furfle day. They find bugs later when they realize furfle day happens three more times than they ever knew and all the reports are wrong. For example, a large software company was supplying security software to a leading South Pacific bank. The software company's SME told the team they must have MSI (Microsoft System Installer). The team could not figure out why this "must have" requirement was coming from their SME. The team kept asking all kinds of different questions and got answers that did not add up. The software company's installer was very elaborate because it performed a wide range of operations that would replace or update the software company's offering and most competitors' offerings as well. Changing the installer would have made it impossible to meet the bank's timetable and would have introduced significant risk for a large roll-out. The team finally convinced their SME proxy to give them access to the ultimate decision makers. As it turned out, there was a checklist from Microsoft of the features desired in early roll-out applications to qualify for the early Windows adopter incentives. A desirable item on a checklist (it was not even in the agreement) had become a requirement that would cause the software company to fail. The real requirement was "Make sure the application is eligible for the Microsoft early adopter incentives." Instead, the team got a set of nice-to-have things that made logical sense to the proxy, but the proxy did not know the real business problem: get the Microsoft incentive. The team was able to catch this and helped the bank roll out on time. But it shows how dangerous a proxy can be, even one acting quite logically.

Who Are the Users?
In the document management system mentioned earlier, the team started building the system thinking most of the use would be distributed among quality engineers, manufacturing engineers, design engineers, and regulatory affairs (RA) professionals. With the exception of the RA team, these were occasional users. Thus the team used a wizard-step-the-user-through-the-process approach. Mid-project, the decision was made to delay primary use by those users and instead let most of the processing be done by the central document management group. The members of the document management group were everyday users and experts. They did not like all the panels and all the clicks; the system slowed them down. They were not satisfied. For the document management group, the software quality was perceived as poor. At the same time, the same features made the software friendly and easy for the occasional user. The answers to the software quality problems outlined above are not more testing, better testers, or a clever test approach. These problems are best resolved up front, or addressed immediately when they arise, and not left to testing for evaluation.

Does the Team Have Access to Good Software Designers?
Often teams are handicapped by a perception that anyone can design and test software. This is simply not true, but it is a convenient myth. For example, teams often protest at trying to figure out what might change. There are several techniques to address this question; some that are used across quality systems can also be applied to IT systems and the processes they support. A recent example involved a document management system, where we needed to understand the strategy around the quality systems and manufacturing systems as the company became global. IT understood that we were looking to expand and rationalize our global manufacturing footprint. Accordingly, we asked, "Will this document management system ever need to run in a language other than English?"
We were told "No." IT asked, "Are you sure?" The answer was still "English only." However, less than six months later, a goal to start manufacturing in Costa Rica, a Spanish-speaking site, was announced. Our organization could have anticipated this need, and the developers of our new system could have avoided some of the problems the system now faces had this knowledge been available earlier.
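Anticipating this kind of change is largely a design choice: if user-facing text is looked up from a message catalog rather than hard-coded, adding a language later is data work, not a code change. A minimal sketch of the idea (the keys and translations here are invented for illustration; real systems typically use a library such as gettext):

```python
# User-facing strings live in per-locale dictionaries instead of being
# embedded in the application logic.
MESSAGES = {
    "en": {"release_doc": "Release document", "approve": "Approve"},
    "es": {"release_doc": "Liberar documento", "approve": "Aprobar"},  # added later
}

def t(key, locale="en"):
    # Look up a string for the locale, falling back to English for
    # unknown locales or untranslated keys.
    return MESSAGES.get(locale, MESSAGES["en"]).get(key, MESSAGES["en"][key])

print(t("approve"))        # Approve
print(t("approve", "es"))  # Aprobar
```

The application code only ever calls t(); shipping a Spanish site means adding one dictionary, not touching every screen.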
An engineer was asked to do a final code review before a deployment. He did the review and reported that the code looked fine. It seemed to work and do all the things enterprise code should do. It had reasonable error handling and reasonable security. It appeared that the job was done correctly. But because the engineer was truly talented, he also read the requirements and noted that while the code was fine, good actually, it had one small problem: in some cases, the code did not do what the requirements said it should do. He was the third person to review the code and the only one to see this problem. Make sure you have access to the right software engineers and designers; quality can depend on it. A good software tester should be a master of looking at complex business processes and then at the software to imagine what the user might do. These insights should be translated into the up-front processes. A good software tester has great value.

What Are the Security Needs?
Security is not a decoration that gets added at the end. Good security often conflicts with other principles of quality software. For example, users do not want to re-login to change roles, but they may need to do so. If a use case involves changing roles in a system that must be Part 11 compliant, then the system probably needs to require credentials on the role change. As a result, an important quality use case emerges out of a security requirement: take users back to where they were when they have to re-authenticate. Do not make them start over, having to remember the case number they were working on. As previously covered in this column (1), good security is a challenge, but it has to be addressed up front to deliver a quality software system.

Where Are the Risks?
We should spend most of our time addressing most of the risk. At virtually any seminar or class related to life sciences, we will hear: know, manage, and mitigate your risks.
Software quality risk management requires thought and anticipating human behavior. Humans are funny. They can know and understand what to do, then, without hesitation, do something completely contrary to what they know. Here is an example of this from the software development world. How easy to use is your defect management system? If the defect management system is hard to use or causes the users grief, they just don't use it. A tester who finds a bug will walk down the hall to the engineer and say, "Hey, I was testing and found this nasty bug." The engineer agrees and promises to fix it. The tester is happy he found a bad bug that will be fixed and goes back to work. Of course, the engineer gets sick and the tester completely forgets he found this particular bug. The organization deploys with the bug. Often this will get a root cause assigned like "test resources did not follow the process." Instead, it should say, "The defect management software takes 15 minutes to enter a defect, and everyone avoids it when they can. They are focused on productivity, and the system hinders them." Here the risk is that defects do not get captured because the software is too time-consuming to use. Make sure you understand the actual risks.

Related to this, many teams do not understand risks and risk management. Teams should spend some time understanding and managing their risks. At a former company, a colleague of mine introduced several techniques to manage project risks. What his work showed was that teams that studied their risks did an excellent job of predicting and managing impact to their projects compared to teams that did not. Interestingly, he also found that these teams were poor predictors of the particular risks that would impact them. He speculated it was because they managed identified risks well enough to make other risks more likely to impact them. But the lesson was clear: teams that understand and manage risk do better than those who do not.
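The risk-management practice described above is often operationalized with a simple risk register: score each risk by likelihood and severity, then spend effort on the highest scores first. The sketch below is a generic illustration of that idea with hypothetical risks and scales; it is not a method prescribed by this article or by GAMP.

```python
# Illustrative risk register: score = likelihood x severity on 1-5 scales,
# then rank descending so effort goes to the biggest risks first.
# The risks and scores shown are hypothetical examples.

risks = [
    {"risk": "Defects go unreported because the tool is slow", "likelihood": 4, "severity": 4},
    {"risk": "Core vendor function loses content",             "likelihood": 1, "severity": 5},
    {"risk": "Configuration error allows deletion",            "likelihood": 4, "severity": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["severity"]

# Address most of the risk first: sort descending by score.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f'{r["score"]:>2}  {r["risk"]}')
```

Even a register this simple makes the team's risk conversation explicit and reviewable, which is the point of the techniques described above.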
Commercial off-the-shelf software has an interesting risk profile. For example, Documentum is a commercial enterprise content management system used by millions of people worldwide. If an organization is going to validate Documentum, where does it spend the money? It makes absolutely no sense to write extensive requirements and test cases to show that Documentum does not lose content. Why? There is no risk there. Where is the risk? That is the real question. Documentum can be configured with and without version integrity. Documentum can be configured with a very broad set of security settings. Making a mistake in configuration is probably one thousand times more likely and more
devastating than any new bug a particular organization might discover through validation testing. The regulations require validation for intended use; therefore, basic operations need a set of requirements and test cases. But it is even more important that controls be in place to manage the configuration. A team implementing Documentum needs to spend its time on the configuration because that is where the risk is hidden. If the team makes a mistake in configuration, the users might overwrite important data or allow deletion when they should not. In a case like this, the configuration specification and related test cases are the most important because that is where the organization is at risk, not whether Documentum can check in or check out a document.

Readers should partner with their IT departments and make sure that they understand the risks and testing strategy. It is not a one-dimensional strategy and cannot produce a simple, meaningless answer like, "Just validate it." Take time to dig in and understand risks as they relate to your system, process, and environment. A bug may be less dangerous to your organization than a system or process that is hard to use. When things are hard to use, human nature takes over and users try to avoid the system. In our regulatory world, that is a serious compliance problem. For managers and business process owners, the ISPE GAMP 5 Guide (2) has an excellent reference in Appendix D5. For business systems analysts and test professionals, the GAMP Good Practice Guide: Testing of GxP Systems is also an excellent resource (3).

FINAL THOUGHTS

A risk-based approach to software quality is a necessity. Risk hides in process complexity. It hides in last-minute changes to software that were not thought out before the real SME showed up. Risk hides when a team spends its time and money on extensive testing of low-risk operations that do not advance system quality.
Risk hides when users avoid the system or look for ways to avert the system's controls to get things done. Modern enterprise applications need an upfront investment in quality. Testing resources need to be balanced between formal validation testing and software quality testing to optimize the system. By balancing formal validation testing and quality testing, project dollars can be guided to the areas that represent the most risk, yielding the best and highest-quality system (i.e., easy to use, addressing complex business processes, designed to anticipate change, fulfilling the needs of SMEs and users, and with appropriate security).

REFERENCES

1. Smith, Robert, "Information Security: A Critical Business Function," Journal of GXP Compliance, Autumn.
2. ISPE, GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems, Appendix D5, ISPE, February.
3. ISPE, GAMP Good Practice Guide: Testing of GxP Systems, ISPE, December.

Originally published in Journal of GXP Compliance, Volume 15, Number 2.
About the Author

Robert H. Smith is an application technical lead responsible for quality systems software development at Abbott Vascular. Prior to this, he was Sr. Director, Engineering at Symantec Corporation, where he was responsible for developing enterprise client, host, and server-based corporate security products as well as the Symantec and Norton LiveUpdate offering. Robert has more than 25 years of software development experience, including VC start-ups funded by The Mayfield Fund, Granite Capital, and Wasatch Venture Fund, and holds CISSP and PMP credentials. Robert can be reached by email.
Computer System Compliance and Quality Planning

Bernard T. O'Connor

INTRODUCTION

Quality planning, as specified in the quality plan, is critical to the success of a project. Despite its importance, the sufficiency of the project plan and its impact on quality planning are often not well understood. It has been surprising over the years to find that some project managers balk at preparing a project plan. This has been the case whether they were leading a small Excel spreadsheet project or a large system development program with substantial hardware and multi-domain software elements. Some of the personal traits and limitations that these project managers share appear repeatedly. They are great at making lists, but they find it difficult to time-phase the items in the list into a progressive sequence of tasks and work products that form the system life cycle. This is the basis for both a project plan and a quality plan.

WHY THE ABSENCE OF A PLAN?

There are multiple possible reasons for the absence of planning. Perhaps the project managers do not consider computer systems development as a process. They do not see the lifecycle sequence that runs in phases and stages from concept to validation to maintenance. Many of these people believe that an MS Project schedule prepared by the development managers themselves is good enough. After all, it usually is a good timeline of tasks that a project manager (PM) can use for accountability. But what if they left something off as unimportant when it was actually very critical? What about the fact that critical tasks by support groups usually are not shown on such a schedule? They are often there only by inference. Some of those PMs believed that they had all of the necessary information in their heads and, therefore, they really didn't need a plan. The old familiar anguished cry of "but everybody knew" is heard again and again in those cases.
Once, when receiving a corrective action and preventive action (CAPA) for a missing project plan, a PM looked up in surprise and said defiantly, "But they know what they are doing!" It is interesting that we live in the so-called Information Age, yet some people do not recognize the importance of dispersing information to the team. Those anti-plan PMs do not realize that without a plan, many important pieces of information, such as constraints, standards, regulations, responsibilities, deliverables, support groups, etc., are not documented or approved by the key people, and are therefore not formally or completely presented to the team. It is not just the team that doesn't receive this information, but also the subcontractors, support groups, and so forth. This is information that is critical to the success of any project in meeting its cost, quality, performance, and schedule goals, and it often remains in someone's desk drawer, computer, or head. The following is a discussion of the development items that must be addressed
in a quality plan (QP) or a software quality assurance plan (SQAP) for a computer system, and how such a plan can be created and made fully representative of the project's needs.

SOFTWARE COMPLIANCE AND THE QUALITY PLAN

A computer system includes not only the computer hardware but also the software and peripheral devices that are necessary to make the system function. The computer and its peripherals stand out and can be easily identified, but the system software is often not so obvious. If you have purchased a computer system from a commercial vendor, extensive acceptance testing by the user is usually not performed unless it is a component of a larger, more complex system. Such systems are typically accepted based on installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ) validation. This usually consists of a functional test that treats the system as a black box. Such an acceptance test only requires executing a suite of inputs and evaluating the system outputs for compliance to the published performance requirements. The user interface with the system is usually a graphical user interface (GUI) that is friendly and specific, so the testing usually goes smoothly. The software that the system uses is usually represented by a single commercial identifier, version number, service packs, etc. Vendor data, an acceptance test protocol, and a report are usually sufficient. This is true for basic off-the-shelf (OTS) software applications such as MS Excel spreadsheets, but not for complex commercial-off-the-shelf (COTS) software such as LIMS, SAP, etc.

THE QUALITY PLAN AND THE PROJECT MANAGEMENT PLAN

Assuring that the project performance goals are met is clearly the project manager's responsibility. It is also the responsibility of the quality and compliance professionals to support the PM to ensure that compliance to the project goals is achieved.
Equally important is compliance to regulatory and company procedural requirements that may not be cited in the project plan. It is quality's major goal to assure that the computer system software was developed correctly, in terms of faithfulness to customer and design requirements, through adequate design control. This is a responsibility that the quality assurance (QA) or quality control (QC) team cannot fulfill without a detailed and accurate QP or SQAP. This does not mean plans that have been prepared hastily from a template and filled with boilerplate content. One of the primary requirements for a quality plan is the existence of a well-prepared project management plan (PMP) and an equally well-prepared project schedule. The PMP must clearly define the tasks to be accomplished, who is doing them, and what deliverables they will produce. This information is necessary for quality assurance to know where its compliance role is to be directed and how much effort will be needed. Creation of the user requirements specification (URS) is next in the process, providing the equipment functionality from which quality and regulatory impact can be identified.

AN INTEGRATED PLAN

Project management and quality management have a symbiotic relationship. The QP cannot direct and implement its activities without system development having created a PMP. The plan defines the following:

- Applicable standards and regulations, responsibilities, etc.
- Definitions of the design tasks to be accomplished
- Schedule for the sequence and timing of those tasks.

For less complex systems, the project manager and quality professionals may decide to create an integrated plan that covers both disciplines (i.e., development and quality). There are many advantages to the integrated plan in that the system development activities are shown in their working relationship and not as abstractions. Approval is concurrent with this type of plan.
SEPARATE PLANS

Quality assurance and compliance departments may choose to create a quality plan independent of the PMP to cope with the expansion of software types in a complex system. In a company that is subject to US Food and Drug Administration regulation, the role of the QA or compliance department is critical and must be carefully defined. One of the first QA tasks in the approval cycle, therefore, is to confirm that the PMP provides the information that is necessary and sufficient to prepare the QP. At a minimum, a PMP must answer the following questions:

- Why? What is the goal or goals addressed by the project? Why is it being authorized?
- What? What are the tasks that will be performed on the project? What are the deliverables? What are the restrictions, standards, and other items that will impact the team's efforts? What resources are required? Which development lifecycle will be used? What capabilities must the team members have?
- Who? Who will be involved, and what will their responsibilities be on the project? How will they be organized? Who will provide supporting services for the project?
- When? What is the project timeline? When will task milestones be completed?

In some organizations, the term "project plan" sometimes simply refers to a Gantt chart or some other document that shows project activities along a timeline. For a computer system project, this simply cannot be the case. A project schedule is only one component of a true project plan. For software quality engineers or compliance professionals to fully understand the development process of a computer system, they must first understand all of the elements of the system life cycle (SLC) and the software development life cycle (SDLC). What may be surprising when first preparing a QP or SQAP is how many elements of the SDLC are not specifically about writing the code or configuring the system itself. Instead, they support the implementation effort either through design requirements input or design requirements verification. This revelation often conflicts with the software developers' view. They believed that preparing the various instructions, algorithms, etc., as code or configuration was the whole ball game.
The importance of code and configuration preparation is understandable: the project will meet or miss its schedule based on successful implementation and release of the code for design transfer and system launch. But coding and configuration are not the whole ball game. Without correct design input, extensive rework or scrapping of the code can destroy a project schedule. To examine all of the work products and documents that are created in addition to the code and configuration, let us examine the process of developing a system and its software and their role in creating the PMP and the QP or SQAP.

PROJECT MANAGEMENT PLAN LIFE CYCLE ORGANIZATION

As we discussed previously, it is not unusual for someone to find it difficult to prepare a quality plan. These individuals may not be familiar with the structure of the plans. At the other extreme, there are others who dilute or obscure key things in the plan by trying to fill every section of a standard industry template with loads of non-relevant or excess material (sometimes known as boilerplate content). Aside from the Why, What, Who, and When information contained in the QP or SQAP, the following are three other factors for success that are particularly applicable to preparing a plan:

- The plan should be appropriate for the organization using it, which means developing it early and collaboratively with all of the relevant departments and work groups
- The plan should provide a framework, such as the correct life cycle, for establishing the quality objectives and their sequence
- Last but not least, the plan should be communicated to and understood by everyone in the project, program, or applicable organizations.

APPROPRIATENESS

If you have anything dealing with software that will exceed more than one or two topics in a QP, bite the bullet and prepare an SQAP. You will be surprised how the SQAP will grow as the list of necessary things to cover expands to adequately cover the topic.
FRAMEWORK

While there are many legitimate section topics that should be in a QP, it is the organization of them that is critical. Aside from the introductory material necessary
to identify what the plan is addressing, and a section for miscellaneous but important items such as design, regulatory, or statutory constraints or applicable general statements, the critical section is the system life cycle methodology: the sequential framework on which all tasks and work products are hung. Many people make the mistake of putting development instructions or other how-to information in the introductory sections of the plan (particularly the responsibilities section). This really belongs in the SDLC section. Perhaps they should start with the development section first and not work on the introductory sections until the QP is done. Also, if the SLC or SDLC methodology used is already proceduralized, reference the procedure in the plan rather than duplicating the description of the methodology.

COMMUNICATION

The requirement to reach everyone means that the plan must be created, approved, and published (released and placed in change control). It must be understood by all who will play a role. Input from the cross-functional team will assure that the basic outline is understood and the key points fully addressed. If training is required, insist on it. Do not fall into the trap of believing that "they know what to do" when the supervisors and managers hesitate to send their people to the training classes. Above all, do not depend on management alone to make sure the message is getting out to all. Manage quality and compliance actively by walking around and asking meaningful questions of the team members.

THE PLAN OUTLINE

To effectively review and screen the individual contributions of the PMP to the QP and ensure their applicability, the SQA and compliance professionals must have a clear understanding of the tasks the QP or SQAP must direct and how it must be organized. These task drivers must flow from the PMP. If they do not, the PMP must be challenged.
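The requirement that every QP task driver flow from the PMP can be checked mechanically: list the QP's task drivers alongside the PMP task each one traces to, and flag any driver with no PMP source. The sketch below is only an illustration of that cross-check; all task names are hypothetical, and it is not a tool prescribed by this article.

```python
# Illustrative trace check: every quality-plan (QP) task driver should trace
# to a task in the project management plan (PMP). Task names are hypothetical.

pmp_tasks = {
    "Write user requirements specification",
    "Prepare design specification",
    "Code and configure system",
    "Execute IQ/OQ/PQ protocols",
}

# QP task driver -> the PMP task it flows from.
qp_task_drivers = {
    "Review user requirements specification": "Write user requirements specification",
    "Audit design specification":             "Prepare design specification",
    "Witness IQ/OQ/PQ execution":             "Execute IQ/OQ/PQ protocols",
    "Review supplier audit report":           "Audit supplier",  # no PMP source
}

# A QP driver whose source task is missing from the PMP means the PMP
# must be challenged before the QP can be completed.
unsourced = [qp for qp, src in qp_task_drivers.items() if src not in pmp_tasks]
print("QP drivers without a PMP source:", unsourced)
```

Surfacing the orphaned driver is exactly the "challenge the PMP" moment described above, made explicit as data rather than left to memory.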
The following is a broad outline of a system life cycle:

- Planning
- Requirements development
- Design
- Build
- Test
- Implement (into production)
- Maintain
- Retire.

Typically, the PMP would cover all but the last two items, although it would cover preparation for the maintenance phase (e.g., training people, writing procedures for using and maintaining the system, doing backups, etc.).

SOFTWARE DEVELOPMENT LIFE CYCLE WORK PRODUCTS

Work product documents used or created in the SDLC occur in a fairly fixed sequence within a phase and are wholly dependent on the task output of a previous step. The task sequence is largely fixed by the task handoffs from inside or outside of the team. As the old saying goes: you cannot test software until it has been created, no matter how hard you try. Using the phases that were defined in the plan outline section, we have the basis for the SDLC.

SQAP Outline

The SDLC established by the organization's procedures can be shown as it may be integrated with the other sections of the SQAP:

- Introduction. Description of the software portion of the project.
- Project deliverables. Software to be created, purchased, subcontracted, documented, and tested.
- Project objectives. System software goals established by the software requirements document.
- Applicable standards. Quality, design, or regulatory documents that govern your tasks. Determine the need for evidence of compliance to the QP.
- Lifecycle phases and descriptions. Use the lifecycle that was previously described and, with procedural and instructive text, show the work product tasks and reviews for each phase.
- Schedule. Time-phase the preparation, review, approval, and hand-off of the team's software outputs in the lifecycle, and prepare a schedule.
- Team structure and responsibilities. Determine
what individual team skills are needed to design, prepare, and test the software.
- Team interfaces. Determine task output to those inside and outside of the team in terms of deliverables, records, and tools.

CONCLUSION

System development is like a coin. There is one overall development process with two separate sides: design and compliance. The different faces of a coin often mask a complex meaning on each side. The hidden complexity of the many tasks involved in the system life cycle requires that the quality professionals ensure that the project team remains steadfast to the commitments made at the start of the project. It is the quality professional's mission to ensure that quality is built into the system. Remember Juran's definition of quality: fit for its purpose. The best start in achieving a computer system design that remains true to its original requirements is the preparation of solid and meaningful project and quality plans that unambiguously spell out a testable, compliant design: a computer system design that will permit clear pass or fail judgments to be made at the end of the system development process.

ARTICLE ACRONYM LISTING

- COTS: Commercial Off-the-Shelf
- GUI: Graphical User Interface
- IQ: Installation Qualification
- OQ: Operational Qualification
- OTS: Off-the-Shelf
- PM: Project Manager
- PMP: Project Management Plan
- PQ: Performance Qualification
- QA: Quality Assurance
- QC: Quality Control
- QP: Quality Plan
- SDLC: Software Development Life Cycle
- SLC: System Life Cycle
- SQAP: Software Quality Assurance Plan

Originally published in Journal of GXP Compliance, Volume 15, Number 1.
Validation of Software In-Product or As-Product

John E. Lincoln

KEY POINTS

The following key points are discussed:

- Methodology and documentation required by the US Food and Drug Administration to effectively verify, test, and validate software or firmware that is in a medical device
- Software validation is required for software used in-product, as-product, in processes, in facilities, or in quality systems
- FDA has specified 11 elements of documentation recommended for software validation
- Software terminology definitions are provided
- All FDA-regulated companies should have a master validation plan that addresses all processes, equipment, and product requiring validation, frequency of revalidation, and methodologies
- Contents of the software verification and validation documentation model are described
- Test cases for Part 11 compliance must be developed
- All testing must be documented in a retrievable validation package that is filed in a quality assurance documentation center.

INTRODUCTION

This issue of Device Validation Forum addresses the methodology and documentation required by the US Food and Drug Administration to effectively verify and validate software or firmware that is in a medical device to assist it to perform its functions, provide a display, accept input, or provide similar functionality. It also discusses the situation where software is the actual product, such as imaging software. Although this discussion is based on the model provided in the FDA's guidance document on the submission of a 510(k) for devices using software (1), it can also be used for software used in processes, product equipment, facilities, test equipment, and even the quality management system (QMS) under the current good manufacturing practice (CGMP) guidelines. This requires additional validation under 21 CFR Part 11, Electronic Records/Electronic Signatures.
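The key points above note that test cases for Part 11 compliance must be developed; one central expectation, a secure, time-stamped audit trail that identifies who altered a record and preserves the prior value, can be illustrated with a toy sketch. Everything below (the `RecordStore` class and its fields) is hypothetical, a sketch of the idea rather than a real Part 11 implementation.

```python
import datetime

# Toy record store with an audit trail, illustrating the Part 11 test-case
# ideas discussed in this article: alterations must be date/time stamped,
# carry a user ID, and leave the original value retrievable. All names here
# are hypothetical.

class RecordStore:
    def __init__(self):
        self.records = {}
        self.audit_trail = []

    def create(self, rec_id, value, user):
        self.records[rec_id] = value
        self._log("create", rec_id, user)

    def alter(self, rec_id, new_value, user):
        old = self.records[rec_id]
        self._log("alter", rec_id, user, old=old)  # archive the prior value
        self.records[rec_id] = new_value

    def _log(self, action, rec_id, user, old=None):
        self.audit_trail.append({
            "action": action,
            "record": rec_id,
            "user": user,                                      # who did it
            "old_value": old,                                  # prior value
            "timestamp": datetime.datetime.now().isoformat(),  # when
        })

store = RecordStore()
store.create("LOT-001", "pass", user="jdoe")
store.alter("LOT-001", "fail", user="asmith")

entry = store.audit_trail[-1]
assert entry["user"] == "asmith"     # alteration carries a user ID
assert entry["old_value"] == "pass"  # original value remains retrievable
assert entry["timestamp"]            # alteration is date/time stamped
```

The assertions at the end mirror the kind of verification elements a Part 11 test case would record against the real system.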
Software validation can be defined as the company's program for the verification (and testing) and validation (V&V) of software (commercial off-the-shelf [COTS], contract-developed, or in-house developed) to be used as follows:

- In-product
- As-product
- In processes or facilities
- In the QMS.

While we are focusing on software in or as a medical device, we will also discuss software under the CGMPs in other areas, such as process and equipment V&V and CGMP records generation and retention V&V, which would also involve 21 CFR Part 11. This is because the V&V methodology and the model presented are useful across all these disciplines, not just in or as a medical device. The device software itself may be used in other company processes (e.g., for testing that device), requiring more rigorous V&V due to the higher risk involved. Software V&V is under design and change control as applicable, for the development of new products, processes, and facilities or the management of changes to existing products, processes, or facilities. The purpose and use of the 11 elements of documentation under the FDA scheme (1) as an all-purpose software V&V model will be discussed. Software V&V involves all company disciplines (i.e., marketing, research and development [R&D], manufacturing engineering, production, quality assurance
and regulatory affairs [QA/RA], document control, materials management, information technology [IT], and personnel and training). However, it is usually R&D, QA, engineering, or a dedicated software programming, test, and verification function that is responsible for designing the protocols and test cases and administering the validation. As with any system, a company's product, process, and facility validation systems and procedures, including software, must be developed, monitored, and updated under an actively involved senior management team, together with the QA/RA function, to be effective. Senior management also has the ultimate responsibility for ensuring that the system is understood, implemented, and maintained at all levels of the organization. This includes areas of the company that tend to feel that the CGMPs may not apply to them (e.g., R&D, software programming, and IT). These areas may feel that regulations stifle creativity. This can be avoided by including a CGMP component in the job description of all company personnel, and by recognizing that the CGMPs protect a company's intellectual property (IP) by forcing documentation.

DEFINITIONS

Software has its own unique terminology. The following is a listing of some of the more commonly used terms and the definitions used by the author:

- Black box: testing of the results of software function; treating the software's logic and operation as a black box whose operation can only be verified by its outputs in controlling the function of the hardware or process.

- COTS: commercial off-the-shelf software; purchased software that is commonly available and not custom or customized.

- Hybrid QMS system: an electronic software program used by the company as a hybrid system, as defined by 21 CFR Part 11. As defined by the company, this system allows for the electronic generation and review of documents. A hard copy is then printed out for final review and approval, requiring manual, handwritten signatures and dates. The hard copy original then becomes the controlled copy. Little, if any, Part 11 V&V is required if the hard copy is truly the used and controlled document.

- Macros: formulas used in spreadsheets. In-house developed macros would have more supporting data and documentation. Formulas should be verified against a textbook or a calculator, and the verification should be documented. Macros should be printed out, signed, and dated, and the documents should be retained as part of the V&V activity and document package.

- Verification: this article defines verification as inspection or test activities. Verification becomes part of the design history file (DHF) (3). For example, white box testing would be a verification activity under software validation.

- Validation: this article defines validation as a collection of verification activities, combined into a validation package and defined by a validation protocol. It may also involve destructive testing and verification activities used to establish specified operating parameters (e.g., sterilization validation, pouch sealer validation, and similar). Software validation shall ensure that software-driven processes, actions, functions, or outputs conform to defined user needs and intended uses. It shall include testing of outputs under actual or simulated use conditions. All software-driven products, processes, and systems must be validated. Validation is the sum total of a collection of verification activities, often including destructive testing, as well as the incorporation of checklists, etc. It includes reference to and utilization of risk analysis, where appropriate. It is documented in the DHF or retained in the document center. Software V&V must be risk-based, because software is so complex, with so many lines of code and logic paths, that it is resource-prohibitive to V&V every possibility. It is important to define the critical outputs and address them in the protocol's test cases.
- White box: verification of software by a line-by-line review of the actual code for proper operation. There must be no obvious errors in programs, loops, etc. White box review should be performed by skilled programmers who are not part of that software's design or QC team.
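The Macros definition above says spreadsheet formulas should be verified against a textbook or a calculator, and that the verification should be documented. A minimal sketch of that practice, using a hypothetical formula and data:

```python
# Illustrative macro verification: recompute the spreadsheet formula's
# result independently and record the comparison. The "macro" here is a
# stand-in function computing a mean; the data values are hypothetical.

def spreadsheet_mean(values):
    """Stands in for the spreadsheet macro under verification."""
    return sum(values) / len(values)

def independent_mean(values):
    """Textbook re-derivation: accumulate and divide, no shared code."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

data = [98.2, 101.5, 99.8, 100.1]
result = spreadsheet_mean(data)
expected = independent_mean(data)

# Document the verification outcome, as the V&V package requires.
record = {
    "formula": "mean",
    "result": result,
    "expected": expected,
    "pass": abs(result - expected) < 1e-9,
}
assert record["pass"]
```

In practice the printed, signed, and dated record of this comparison, not the code itself, is what is retained in the V&V document package.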
MASTER VALIDATION PLAN

All FDA-regulated companies should have a master validation plan (MVP), which addresses all processes, equipment, and product requiring validation, the frequency of revalidation, and methodologies. The MVP should also address software validation issues. Where hardware or equipment and its software are to be validated together, the validation protocol may combine the requirements of hardware or equipment validation, such as design qualification (DQ), installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ) or similar requirements, with hardware qualifications and software black box validation. If the code is custom, then white box validation would also be included. The hardware and software V&V may then be performed together.

SOFTWARE VALIDATION

As-product or in-product software, and software controlling process, production, or test equipment, should address the model containing the 11 elements recommended by FDA for 510(k) submissions for product containing software. The author contacted the developer of the guidance document at the Center for Devices and Radiological Health (CDRH) several years ago and has discussed this model with Agency CSOs on company audits. All those spoken with had no problem seeing that model used for all software V&V, because the requirements and terms are based on common software industry practices and terminology. The author has successfully used this approach for software reading human brain waves and transferring thought to a cursor's movement on a computer screen. The contents of the software V&V documentation model are as follows:

1. Risk
2. Level of concern
3. Software description
4. Software requirements specification (SRS)
5. Design requirements specification
6. Architecture
7. Design and development history
8. Verification (or testing) and validation (V[T]&V)
9. Traceability
10. Unresolved anomalies (bugs)
11. Revision and release numbers.
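Before release, a validation package can be screened for coverage of the 11 documentation elements just listed. The sketch below is an illustrative completeness check; the package contents shown are hypothetical, and a real package would be reviewed against the FDA guidance itself.

```python
# Illustrative completeness check: does a validation package address all
# 11 documentation elements of the FDA 510(k) software model discussed in
# this article? The sample package contents are hypothetical.

ELEMENTS = [
    "Risk", "Level of concern", "Software description",
    "Software requirements specification", "Design requirements specification",
    "Architecture", "Design and development history",
    "Verification and validation", "Traceability",
    "Unresolved anomalies", "Revision and release numbers",
]

package = {
    "Risk", "Level of concern", "Software description",
    "Software requirements specification", "Architecture",
    "Verification and validation", "Traceability",
    "Revision and release numbers",
}

missing = [e for e in ELEMENTS if e not in package]
print("Missing elements:", missing)
```

A non-empty `missing` list flags the gaps that must be closed (or justified) before the package is filed in the documentation center.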
Software that supports a company's QMS requires the use of the principles of 21 CFR Part 11, Electronic Records/Electronic Signatures, as applicable.
PRODUCT, EQUIPMENT, PROCESS, AND FACILITY VALIDATIONS
Upon determination that a new product or a change to an existing product is desired, the originator (i.e., management, marketing, R&D, or manufacturing engineering department) would initiate a meeting with representatives of affected plant activities. Major product software development or changes should be in a formal validation package, whereas minor changes may be covered in a lab book. The suggested format for the validation package can be found in the Appendix.
21 CFR PART 11
In order to develop test cases for V&V of compliance to 21 CFR Part 11, Electronic Records/Electronic Signatures, for the QMS, take each of the subpart requirements and reframe them as questions, as follows:
- Can invalid or altered records be determined?
- Is the system capable of producing accurate and complete hard copies of electronic records?
- Are e-records readily retrievable throughout their retention period?
- Is system access limited to authorized personnel?
- Does the system create and maintain a secure, timestamped audit trail? Does it record date, time, entries, and actions for any activity that creates, modifies, or deletes electronic records?
- Are changed or deleted records archived and retrievable?
The questions should then be reframed into test case elements (see Figure). Note: Include all 21 CFR Part 11 elements as
questions. Some will be answered by the validation of the software. Others will be answered by reference to offline systems and procedures; reference the standard operating procedure (SOP) or other controlled document.
Figure: Test case elements.
Risk: The audit trail for altered records is high risk based on regulatory compliance and lack of hardcopy backup.
Verify whether an electronic record can be altered and whether alterations can be determined:
Verification element: Alter a record, and check the record database for alterations.
Expected outcome: Original record exists. Does an altered record also exist? If so: alteration date/time stamped; alteration has user ID; audit trail complete.
Observed outcome: ____
Tested by: ____ Date: ____ Verified by: ____ Date: ____
VALIDATION DOCUMENTATION PACKAGE
Address the 11 elements discussed previously with the supporting documentation. If 21 CFR Part 11 applies, include those findings and test cases. For additional suggestions on content and suggested format, see related articles in past issues of this Journal or FDA guidance documents on process and equipment validation.
Project Leaders
Once a validation project is determined to be required, a project leader will be assigned. The project leader is responsible for drafting the protocol and supporting the generation and gathering of the resulting documentation. The project leader does not need to be a computer programmer, but should be computer-literate and familiar with what is being validated and with the regulatory requirements. That person can tap into resources to augment their knowledge and will direct the gathering, documenting, development, running, and compiling of the V&V and the validation package.
Post-Approvals
FDA-regulated product is not to be approved to leave the company's physical control (and quarantine) until
validations have been fully post-approved. These post-approval activities may be performed by in-house personnel, contracted services or labs, or a combination. However, ultimate responsibility for the accuracy of the data and for the interpretation and use of the results and conclusions rests solely with the company and its senior management team.
Completed Validation Files and Their Maintenance
The completed, approved validation package is routed to the quality assurance (QA) department and document center for filing. It is made available for review, training, reference, or audit and may be supplemented by an addendum. Prior to the start of any validation effort, a line must be drawn in the sand, where revisions, changes, upgrades, and service packs are frozen. Any future change, including Internet-based changes, must be administered under change control. Such change control should include the following in addition to the company-defined change control system:
- Cloud-delivered changes are captured off-line
- Reviewed by capable personnel
- Reviewed against past V&V by QA
- Review documented, signed off, and dated
- Documentation could be retained in a log.
If re-validation is required after review (regression testing), it must be completed prior to implementation of the change. Whether or not a revalidation is required after change implementation, the system must be monitored for any unexpected changes after implementation.
REFERENCES
1. FDA, Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, US Food and Drug Administration, May 11.
2. FDA, Guidance for Industry, FDA Reviewers and Compliance on Off-The-Shelf Software Use in Medical Devices, US Food and Drug Administration, September 9.
3. Lincoln, John, Medical Device Development Under Design Control, Journal of Validation Technology, Vol. 16, No.
1, Winter.
GENERAL REFERENCES
FDA, Quality System Regulation, 21 CFR 820.
FDA, 21 CFR Part 11, Electronic Records and Electronic Signatures.
ISO, ISO 13485:2003, Medical Devices - Quality Management Systems - Requirements for Regulatory Purposes, 7.2, 7.3, 7.5, 7.6, and 8.2.
ISO, ISO 14971:2007 [2009], Medical Devices - Application of Risk Management to Medical Devices.
ARTICLE ACRONYM LISTING
CDRH - Center for Devices and Radiological Health
CGMP - Current Good Manufacturing Practices
CFR - US Code of Federal Regulations
COTS - Commercial Off-The-Shelf
CSO - Consumer Safety Officer
DQ - Design Qualification
DSMICA - FDA's Division of Small Manufacturers, International and Consumer Assistance
FDA - US Food and Drug Administration
FMEA - Failure Mode and Effects Analysis
GUI - Graphical User Interface
HMI - Human Machine Interface
IDE - Investigational Device Exemption
IP - Intellectual Property
ISO - International Organization for Standardization
IT - Information Technology
MVP - Master Validation Plan
OQ - Operational Qualification
PQ - Performance Qualification
QA - Quality Assurance
QMS - Quality Management System
RA - Regulatory Affairs
R&D - Research and Development
SOP - Standard Operating Procedure
SRS - Software Requirements Specification
V&V - Verification and Validation
V[T]&V - Verification [Testing] and Validation
APPENDIX: VALIDATION PACKAGE SUGGESTED FORMAT
1.0 Pre-Approvals (optional; use especially if major
resources are involved)
2.0 Description or Purpose of the V&V
3.0 Software Documentation
3.1 Reference to applicable risk management file or report.
3.2 Level of concern: minor, moderate, or major, based on the product risk file, ISO 14971, failure mode and effect analysis (FME[C]A), or similar risk document.
3.3 Software description: generalized description of what the software is to facilitate or accomplish.
3.4 Software requirements specification (SRS): a list of all requirements and issues that the software is required to address in order to function as intended in its desired application. If COTS software, list only those functions needed for the specific application(s). List the desired platform (e.g., PC, network, PLC), interfaces (e.g., human, hardware, network, Internet), programming language, if known, applicable standards to be met, etc. Note: This is a very important part of the validation package, because it forms the basis for what needs to be validated and the test cases developed.
3.5 Design requirements specifications: a description of how the software will be developed in order to address and meet the SRS to the degree possible given any design constraints.
3.6 Architecture: generally a block or flow diagram of all software modules and their data flows, human and machine interfaces (HMI), graphical user interfaces (GUI), and software or hardware interfaces.
3.7 Design and development history: record of the actual software development, modification, or macro development, etc. Include project Gantt charts, team members, milestones, tasks, time lines, changes, etc. The history should include references to or copies of any programming conventions used by the vendor.
Appendix Figure 1: Risk (tie to specific line in product risk management file per ISO 14971).
Operation / Expected Outcome / Observed Outcome
Operation 1 / xxxxxxxxxxxx / xxxxxxxxxxxxxxx
[additional operations]
Tested By: ____ Date: ____ Verified By: ____ Date: ____
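The Appendix Figure 1 row format (operation, expected outcome, observed outcome, plus tester and verifier sign-off) can be sketched as a simple record; this is an illustration in Python with hypothetical field names, not a prescribed structure:

```python
from dataclasses import dataclass

@dataclass
class BlackBoxTestRow:
    operation: str
    expected_outcome: str
    observed_outcome: str = ""  # filled in during test execution
    tested_by: str = ""
    verified_by: str = ""

    def complete(self) -> bool:
        # A row is complete only when an outcome was observed and
        # both the tester and the verifier have signed off.
        return bool(self.observed_outcome and self.tested_by and self.verified_by)

    def passed(self) -> bool:
        return self.complete() and self.observed_outcome == self.expected_outcome
```

A protocol would then be a list of such rows, one per operation, with "[additional operations]" appended as testing proceeds.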
If the software is in a device, or is the device, then this and the other listed activities are to be performed in accordance with 21 CFR Design Control. COTS software documentation will generally have minimal information, because it is usually considered proprietary by the vendor. Documentation should be commensurate with the degree of risk, as supported by specific references to a product risk management file (ISO 14971), and with the amount of information the vendor will supply. Often, contacting the software vendor's technical support function and explaining the need to satisfy FDA validation requirements may elicit additional nonconfidential documentation. Products with a very large installed base in the field generally have less risk. Reference may be made to consumer reports, tests, and potential or prevalent
problems experienced in the field by users. Document whether such problems would affect the company's usage, and, if so, how such issues will be addressed in use at the company. Limited availability of documentation from the software vendor may have to be offset by more extensive black box testing under V&V.
3.8 V[T]&V: the actual verification (testing) and validation protocol and test cases, test results, and test conclusions. Include both white and black box testing, as applicable. The following is a possible format for black box testing.
Appendix Figure 2: Test case examples of startup, shutdown, cool down, and unplanned power outage ("pull plug") restart/recovery. Expected outcome in each case: cycled into the correct machine state, with gate and safety functions operational.
3.9 Traceability: use of a matrix or common paragraph numbering to show how each requirement is addressed in each document, especially the SRS, but also the SDS, design and development, and V&V. This is an important tool, increasingly referred to by auditors, to prove that all software requirements have been considered and addressed throughout the validation.
3.10 Unresolved anomalies: these are identified program bugs that have been determined to be benign to the specific use to which the program will be subjected. Such bugs are usually left in because the software programmers or design team have determined that any reprogramming to eliminate such bugs may induce another set of problems.
3.11 Revisions and release numbers: all revisions to the software in its development, and upgrades during its lifecycle, are tracked by revision numbers. Those revisions that are determined
to be released for actual use in processes, products, or to manage the QMS are further designated by a release number. Revision and release numbers are controlled under design control and change control.
Originally published in the Journal of Validation Technology, Volume 17, Number 3.
About the Author
John E. Lincoln, principal consultant, J. E. Lincoln and Associates LLC, assists companies in the design and implementation of complete 21 CFR 111, 210, 211, and 820 and ISO quality management systems that are fully CGMP-compliant and have passed FDA audits. He may be reached by e-mail at [email protected].
Requirements Management
Orlando Lopez
INTRODUCTION
The majority of computer system failures are due to poor requirements gathering and management. This includes not understanding the business impact, what the customer wants, and how the end-user is to interact with the computer system. This situation negatively affects subsequent development activities, associated work products, and quality. A requirement is a need or expectation that is stated, generally implied, or obligatory. Generally implied means that it is common practice for the relevant stakeholders (i.e., business managers, end-users, software engineers, support people) such that its absence would be deemed an obvious shortcoming. The term requirement defines a bounded characterization of the system scope that can be generated by different relevant stakeholders. It includes the information essential to communicate an understanding of the problem and to support the relevant stakeholders in its resolution. Various types of requirements include product, functionality, performance, regulatory, legal, reliability, supportability, usability, security, and non-functional requirements. According to ASTM E, requirements come from four areas: product knowledge, process knowledge, regulatory agencies, and company quality standards. Requirements may be captured as user scenarios, function and feature lists, analysis models, or specifications. These work products are used as a framework to select the supplier or integrator, to develop other work products associated with the development of the computer system, and to develop the user acceptance test. Requirements engineering strives to understand what the relevant stakeholders need and want before beginning to design and build a solution.
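The classification above (requirement types, and the four source areas) can be captured in a simple record. This is an illustrative Python sketch; the class and field names are ours:

```python
from dataclasses import dataclass

# The four source areas named above.
SOURCE_AREAS = {"product knowledge", "process knowledge",
                "regulatory agencies", "company quality standards"}

@dataclass
class Requirement:
    req_id: str
    statement: str
    req_type: str      # e.g., "functionality", "performance", "regulatory"
    source_area: str   # must be one of SOURCE_AREAS

    def __post_init__(self):
        # Reject requirements whose provenance is not one of the four areas.
        if self.source_area not in SOURCE_AREAS:
            raise ValueError(f"unknown source area: {self.source_area}")
```

Recording the source area per requirement makes the later traceability-to-origin discussion mechanical rather than archaeological.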
Tools that software engineers can leverage include Six Sigma, voice of the customer (VOC), quality function deployment (QFD), failure mode and effect analysis (FMEA), the requirements analysis matrix (RAM), orthogonal arrays (OA), cause and effect analysis (C-E), the design structure matrix (DSM), the Pugh matrix, and so on.
"User requirements specifications should describe the required functions of the computerized system and be based on documented risk assessment and GMP impact. User requirements should be traceable throughout the lifecycle." (EU Annex 11)
The requirements engineering tasks comprise the following:
- Inception. Context-free questions are used to establish a basic understanding of the problem, the perspectives of the relevant stakeholders who want a solution, the nature of the solution, and the effectiveness of the collaboration between the relevant stakeholders and developers.
- Elicitation. Relevant stakeholders describe the product objectives, what is to be done, how the product fits into business needs, and how the product is used on a daily basis.
- Elaboration. A refined technical model of software function, behavior, and information is developed.
- Negotiation. Requirements are categorized and organized into subsets, relations among requirements are identified, and requirements are reviewed
for correctness and prioritized based on the relevant stakeholders' needs.
- Specification. Requirements are derived from or provided by the relevant stakeholders, describing the function, performance, and development constraints for a computer system. These work products should be in a form suitable for requirements management through the lifecycle and beyond.
- Requirements validation. Formal technical reviews examine the specification work products to ensure requirement quality and that all work products conform to agreed-upon standards for the process, project, and products.
- Requirements management. Activities that help a project team to identify, control, and track requirements and requirement changes as the project proceeds.
Figure 1 depicts a typical systems development distribution showing the percentage of effort needed for each key developmental activity. Except for requirements management, front-end activities should take between 40-50% of the system development lifecycle (SDLC) distribution. The importance of managing requirements is depicted in Figure 2. This figure shows the cost to find and fix defects at different points in the system lifecycle (SLC). For example, finding and fixing an error during the requirements phase costs 0.75 unit. Finding and fixing the same error during system testing will cost around 10 units. This discussion addresses product requirements activities. Product requirements include high-level features or capabilities that the business team has committed to delivering to a customer. Product requirements do not specify how the features or the capabilities will be designed. Project requirements are beyond the scope of this paper.
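Using only the two values cited above (0.75 unit at the requirements phase, about 10 units at system test), the late-detection penalty is a simple ratio:

```python
# Costs in relative units, taken from the Figure 2 discussion above.
cost_requirements_phase = 0.75
cost_system_test = 10.0

# How much more expensive the same defect becomes when caught late.
penalty = cost_system_test / cost_requirements_phase
print(round(penalty, 1))  # 13.3: roughly a 13x cost multiplier
```

Field-use costs sit higher still on the log-scale curve, but no figure for that phase is quoted in the text, so none is assumed here.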
REQUIREMENTS MANAGEMENT PURPOSE
The purpose of the requirements management process is to maintain and control a current, correct, and documented common understanding of the relevant stakeholders, intended product use, development resources, constraints, and needed computer system capabilities that were captured and baselined in the requirements documentation.
Figure 1: Systems development distribution. (Source: Roger S. Pressman, Software Engineering: A Practitioner's Approach, 7th ed., McGraw-Hill, NY; used with permission by Roger S. Pressman.) Front-end activities (customer communication, analysis, design, review and modification): 40-50%. Construction activities (coding or code generation): 15-20%. Testing and installation (unit, integration, white-box, black-box, regression): 30-40%.
Figure 2: Cost to find and fix a defect (log scale), plotted across the requirements, design, code, test, system test, and field use phases.
The requirements management process describes the tasks necessary to correct, update, and control a requirements specification (RS). The result of the requirements management process is an updated, organized set of documented requirements throughout the system lifecycle that do the following:
Figure 3: Requirements management activities: obtain an understanding of requirements; obtain commitment to requirements; manage requirements changes; maintain bidirectional traceability of requirements (traceability matrix); identify inconsistencies between project work and requirements.
- Supports the relevant stakeholders' needs, goals, and objectives
- Remains within a well-defined scope
- Identifies and quantifies impacts of changes, including those of scope, schedule, cost, hardware, and staffing
- Controls the current, documented RS.
A requirements management plan may be necessary for highly-complex and highly-critical projects. The requirements management plan is used to document the activities required to effectively manage project requirements from traceability to delivery. The requirements management plan is created during the planning phase of the project. Its intended audience is the project manager, project team, project sponsor, and any senior leaders whose support is needed to carry out the plan. According to the capability maturity model integration (CMMI), activities that are part of requirements management and can be elements of the plan are as follows:
- Obtain an understanding of requirements
- Obtain commitment to requirements
- Manage requirements changes
- Maintain bidirectional traceability of requirements
- Ensure alignment between project work and requirements.
"A software requirements traceability analysis should be conducted to trace software requirements to (and from) system requirements and to risk analysis results." (US FDA, General Principles of Software Validation)
Figure 3 provides the relationships between these activities.
OBTAINING AN UNDERSTANDING OF REQUIREMENTS
The intent of these activities is to establish criteria for the project, to identify appropriate requirements providers, to set criteria for the evaluation and acceptance of requirements, and to ensure through an analysis or review process that the established criteria are met. Those who receive requirements analyze them with the providers to ensure that a compatible, shared understanding is reached on the meaning of the requirements. The result of these analyses and dialogs is an approved RS, including the definition of the system boundaries. Example work products for this practice include the following:
- Lists of criteria for distinguishing appropriate requirements providers
- Criteria for evaluation and acceptance of requirements (see Table I)
- Results of analyses against criteria.
Lack of evaluation and acceptance criteria often results in inadequate verification, costly rework (see Figure 2), or customer rejection.
OBTAINING COMMITMENT TO REQUIREMENTS
Once the understanding of the requirements is established, this practice deals with agreements and commitments among those who have to carry out the activities necessary to implement the requirements. The main deliverable after getting the commitment of the relevant stakeholders is the RS, a document approved by all relevant stakeholders. The RS establishes the relevant stakeholders' requirements baseline and retains changes of need and their origin throughout the system lifecycle. It is the basis for traceability to the computer system requirements and forms the definitive source of information about requirements for subsequent
systems and communications with stakeholders. Later, in the thick of development, the RS is critical in preventing scope creep or other unnecessary changes. As the system evolves, each new feature opens a world of new possibilities, so the RS anchors the team to the original vision and permits a controlled discussion of scope change. While many organizations use only documents to manage requirements, others manage their requirements baselines using software tools. These tools allow requirements to be managed in a database, and usually have functions to automate traceability (e.g., by allowing electronic links to be created between parent and child requirements, or between test cases and requirements), electronic baseline creation, version control, and change management. Usually such tools contain an export function that allows a specification document to be created by exporting the requirements data into a standard document application. The RS must include an overview of the process in order to familiarize the infrastructure and application developers with the user, business process, and data acquisition requirements of the system, and any special considerations for the project. The system functionality must be well defined at the outset in order to provide the prospective supplier or integrator with enough information to prepare a detailed and meaningful proposal. Specifically, for data acquisition systems, the RS must include data definitions; data usage information; data storage, retention, and security requirements; and operational requirements and constraints.
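The tool capabilities described above (a requirements database, electronic parent/child links, and document export) can be sketched minimally. This Python illustration is ours; real requirements management tools expose far richer baselining, versioning, and change-management features:

```python
class RequirementsStore:
    """Toy requirements database with traceability links and export."""

    def __init__(self):
        self.requirements = {}   # requirement id -> requirement text
        self.links = []          # (parent_id, child_id) traceability links

    def add(self, req_id, text):
        self.requirements[req_id] = text

    def link(self, parent_id, child_id):
        # Electronic link between a parent requirement and a child
        # requirement (or a test case derived from it).
        self.links.append((parent_id, child_id))

    def export(self):
        """Export the baseline as plain text for a specification document."""
        lines = [f"{rid}: {text}"
                 for rid, text in sorted(self.requirements.items())]
        return "\n".join(lines)
```

The `export` method mirrors the export function mentioned above: the managed data, not a hand-edited document, is the source of truth for the generated specification.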
The RS addresses the following:
- The scope of the system and strategic objectives
- Process overview, sequencing requirements, operational checks
- Sufficient information to enable the supplier or integrator to work on a solution to the problem (e.g., device-driven sequencing, the methods required for the presentation of data, data security, data backup, data and status reporting and trending, etc.)
- Redundancy and error-detection protocol
- Operating environment
- Interfaces (e.g., to field devices, data acquisition systems, reports, and HMI), input/output lists, communications protocols, and data link requirements
Table I: Attributes of requirements.
- Unambiguous: All requirements must have only one interpretation.
- Verifiable: Clearly and properly stated, and complete; a human being or a machine must be able to verify that the system correctly enacts the stated requirements.
- Traceable: The requirements must be traceable from other design and test documents. To facilitate traceability, each requirement should be uniquely identified.
- Modifiable: Unanticipated changes must be able to be made easily. Each requirement must be tied to business values, identified as a priority for the customer, and appropriate to implement by the development team.
- Usable: The RS must be usable by not only the development team but also the subsequent maintenance teams who will be called upon to modify and change the system.
- Consistent: Individual requirements must not conflict with each other. Requirements must also be consistent with the overall architectural approach and quality attribute priorities.
- Complete: The RS must contain clear descriptions of all features and functions of the system. It must clearly contain definitions of all known situations the system could encounter.
- Information gained from operators and supervisors on the system design requirements and expectations, in order to influence how the system is designed and operated
- Type of control and process operations to be performed
- Data storage requirements
- Transaction and data timing requirements and considerations
- Regulatory requirements
- Preliminary evaluation of the technology
- Feasibility study and preliminary risk assessment
- Safety and security considerations
- Security and other requirements
- Non-functional requirements (e.g., SLC
development standards, programming language standards, program naming convention standards, etc.).
Figure 4: Manage requirements changes. Input: requirement change request. Process: manage requirements changes, impact analysis. Outputs: updated requirements, changed interface documents, updated traceability matrix, notification of change to affected parties.
Each requirement in the RS should have the attributes depicted in Table I. The requirements should be verified or tested during the SDLC. In a classic V model, the requirements are tested during the user acceptance testing, aka the performance qualification (PQ) in the US Food and Drug Administration-regulated industries context. This testing should include the verification of the procedural controls associated with the system that were identified in the RS.
MANAGING REQUIREMENTS CHANGES
Once commitments to the requirements are established and plans have been made, the project will transform from planning mode to execution mode. During the project execution phase, possible conditions precipitating a change are as follows:
- A new requirement is identified
- The software project is re-scoped (requirements added, dropped, or modified)
- An inconsistency is identified between the capabilities of the product developed and the documented requirements.
"The quality system regulation requires a mechanism for addressing incomplete, ambiguous, or conflicting requirements. Each requirement (e.g., hardware, software, user, operator interface, and safety) identified in the software requirements specification should be evaluated for accuracy, completeness, consistency, testability, correctness, and clarity." (US FDA, 21 CFR (c))
Figure 4 depicts the inputs and outputs of the management of requirements changes. The entry criterion to this process is that a requirement change request has been submitted.
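The disposition step of the change process (accept, reject, or defer, followed by updates to the controlled artifacts) might be sketched as follows; this is an illustrative Python function with invented names, not a prescribed workflow:

```python
def disposition_change_request(impact_analysis, accepted):
    """Return the work items generated when a change request is dispositioned.

    impact_analysis: dict of impact/feasibility findings
    accepted: True (accept), False (reject), or None (defer)
    """
    if accepted is None:
        return ["defer: revisit at next review"]
    if not accepted:
        return ["record rejection rationale"]
    # Accepted: the changed RS plus updates to the related artifacts
    # named in the text (traceability matrix, notifications, and change
    # requests for other controlled products where applicable).
    items = ["update requirements specification",
             "update traceability matrix",
             "notify affected parties"]
    if impact_analysis.get("other_products_affected"):
        items.append("raise change requests for other controlled products")
    return items
```

The point of encoding the disposition is that no path exits the process without a recorded outcome, matching the exit criteria listed below.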
The tasks associated with this process include the following:
- Analyze the change request for impact and feasibility
- Prepare an impact analysis statement
- Determine the disposition of the change request (e.g., accept, reject, defer).
If the change request is accepted, then:
- Generate the changed RS and change requests for other controlled products, as applicable and necessary
- Implement the change; also make the necessary modifications to related artifacts: RS, modeling diagrams, QFD, generic specifications, and so on
- Verify and validate the changed RS
- Update and distribute updated documentation as necessary.
The exit criteria of this process are as follows:
- A decision was made through the applicable process not to proceed with the change
- Updates to requirements were made, distributed, and controlled
- Other work product change requests have been generated as required.
The impact analysis process, part of managing requirements changes, itself has multiple steps. This process must be clearly defined and communicated to the customer as well as to the project team. The high-level steps of the change impact analysis process include the following:
- Get the change request
- Expand the demand statement using VOC analysis (i.e., who, what, when, where, why, and how)
- Analyze engineering impacts; the "how" yields clues about impacted features
- Identify, using for example quality function deployment (QFD), the associated functionality and design parameters; these design parameters are the engineering items to be scrutinized to determine their impact
- In review meeting(s), analyze whether a problem requires
only a hardware or machine tweak or whether it requires a software change
- Analyze software impacts: using the QFD, identify all the functionalities (use cases) that are impacted; for each use case, analyze the graphical user interface (GUI) and controller changes required using the traceability analysis
- Conduct risk analysis, for example based on failure mode and effect analysis (FMEA), so the potential regressive damage due to the change is minimized
- Create the change request impact analysis document and review it with the customer.
MAINTAINING BIDIRECTIONAL TRACEABILITY OF REQUIREMENTS
Requirements traceability (Figure 5) is concerned with documenting the origin of a requirement as well as the relationships between requirements and other development artifacts, such as designs, models, analysis results, code, and test plans, cases, procedures, or results.
Figure 5: Requirements traceability (requirements, design, code, test).
The purpose of requirements traceability is to facilitate the following:
- Overall quality
- Understanding of the product
- Tests necessary to verify
- Ability to manage change.
The intent of this practice is to maintain the bidirectional traceability of requirements for each level of product decomposition (see Figure 6). The following two types of traceability are recommended:
- Forward traceability (i.e., to all documents spawned by the SRS). This depends upon each requirement in the SRS having a unique name or reference number.
- Backward traceability (i.e., to previous stages of development). This depends upon each requirement explicitly referencing its source in earlier documents.
Figure 6: Forward and backward traceability (requirement specification, design description, test case).
The forward traceability of the RS is especially important when the software product enters the operation and maintenance phase. As code and design documents are modified, it is essential to ascertain the complete set of requirements that may be affected by those modifications.
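Forward and backward traceability can be demonstrated with two small maps: forward links from requirements to spawned artifacts, and a derived backward view from each artifact to its source. The Python sketch below uses invented identifiers; gaps in either direction signal incomplete coverage:

```python
def build_backward(forward):
    """Derive backward traceability: artifact -> source requirements."""
    backward = {}
    for req, refs in forward.items():
        for ref in refs:
            backward.setdefault(ref, []).append(req)
    return backward

def untested(forward, test_prefix="TS-"):
    """Requirements with no forward link to any test script."""
    return [r for r, refs in forward.items()
            if not any(ref.startswith(test_prefix) for ref in refs)]

# Illustrative data: REQ-2 traces to design but to no test script.
forward = {"REQ-1": ["DES-1", "TS-1"], "REQ-2": ["DES-2"]}
backward = build_backward(forward)
print(untested(forward))   # ['REQ-2']: not fully verified
print("TS-9" in backward)  # False: TS-9 would be an orphan test with no source
```

This is exactly the check bidirectional traceability enables: every source requirement is completely addressed forward, and every lower-level artifact traces back to a valid source.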
It should be possible to trace back to the origin of each requirement, and every change made to the requirement should therefore be documented in order to achieve traceability. Requirements traceability is also useful when conducting impact assessments of changes to requirements, design, or other configured items. When the requirements are managed well, traceability can be established from the source requirement to its lower-level requirements and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements can be traced to a valid source. Table II shows the links that are to be maintained. To demonstrate bidirectional traceability, evidence of sorting (and use) by each column is to be captured. Traceability is an essential aspect of the verification activities in the SLC, and it is an important input into design reviews. A formal design review is recommended to confirm that requirements are fully specified and appropriate before extensive software design efforts begin. Table III depicts the traceability activities during a typical SLC. The practice of requirements traceability is iterative. It is conducted throughout the SLC and involves the following activities:
- Functionality: defining functional requirements
based on technical assumptions or client needs
- Traceability: setting up and maintaining a requirements traceability system
- Log: updating the traceability matrix regularly (at least weekly) with new information
- Communicating: communicating regularly (at least weekly) with stakeholders about the status of requirements
- Log: recording the data used to track the requirements in the traceability matrix
- Communication: communicating to stakeholders that a particular requirement has been completed.
Table II: Example of links to be maintained. Columns: Unique Requirement ID | Requirement Description | Design Reference | Module/Configured Item Reference | Release Reference | Test Script Name/Step Number Reference.
The traceability of requirements can be built into documentation and code without having to have a separate traceability document. Three common approaches are: a traceability matrix, using computer databases to evaluate traceability, and building inherent traceability into the structure of the documentation and code. Software developers have flexibility in how they want to implement traceability.
ENSURING ALIGNMENT BETWEEN PROJECT WORK AND REQUIREMENTS
This practice focuses on maintaining consistency between requirements, project plans, and work products and, in the case of inconsistencies, initiating corrective actions to resolve them. For example, the project plan should be revised if changes to the requirements are identified as impacting the schedule. Corrective actions taken by the project to resolve inconsistencies can also result in changes to project plans and supplier agreements.
SUMMARY
It is necessary to identify, control, and track requirements before design and construction of a computer system can begin.
Requirements management consists of the following five activities:
- Obtaining an understanding of requirements
- Obtaining commitment to requirements
- Managing requirements changes
- Maintaining bidirectional traceability of requirements
- Ensuring alignment between project work and requirements.

The relevance of requirements management to the successful management of a computer system implementation project is underscored by the fact that over 40% of the SDLC is spent on managing requirements. The main activities of requirements management are to meet the VOC and to keep the requirements specification updated during the SDLC. The quality and completeness of the requirements contribute to the accuracy of the specification. The success of computer systems implementation and maintenance depends on these three tasks. As depicted in Figure 2, a practical and accurate requirements specification will contribute to the successful implementation of the computer system, on time and within budget. Requirements management can achieve this success if it is maintained throughout the system lifecycle.
Table III: Traceability activities during a typical SLC.

- Requirements phase: software requirements to system requirements (and vice versa); software requirements to risk analysis.
- Design phase: design specification to software requirements (and vice versa). A traceability analysis should be conducted to verify that the software design implements all of the software requirements. As a technique for identifying where requirements are not sufficient, the traceability analysis should also verify that all aspects of the design are traceable to software requirements. An analysis of communication links should be conducted to evaluate the proposed design with respect to hardware, user, and related software requirements.
- Construction/source code/configuration phase: source code to design specification (and vice versa); test cases to source code and to design specification. A source code traceability analysis is an important tool to verify that all code is linked to established specifications and established test procedures. Such an analysis should be conducted and documented to verify that:
  - Each element of the software design specification has been implemented in code
  - Modules and functions implemented in code can be traced back to an element in the software design specification and to the risk analysis
  - Tests for modules and functions can be traced back to an element in the software design specification and to the risk analysis
  - Tests for modules and functions can be traced to the source code for the same modules and functions.
- Developer's software testing: unit (module) tests to detailed design; integration tests to high-level design; system tests to system requirements. Control measures, such as a traceability analysis, should be used to ensure that the intended coverage is achieved.
- User acceptance testing: user acceptance tests to system requirements. In a typical V model, this is the ultimate test: each requirement must map to at least one test case. Again, control measures such as a traceability analysis should be used to ensure that the intended coverage is achieved.
- Maintenance: after the computer system has been delivered for operations, forward traceability of the RS is especially important as the software product enters the maintenance phase. As part of the activities to conclude a project, the requirements traceability matrix must be updated with the as-built information. The objective is an up-to-date matrix describing how the system's final design satisfies the functional, business, security, and technical specifications in the requirements document. As code and design documents are modified, it is essential to be able to ascertain the complete set of requirements that may be affected by those modifications. With each new project, a new cycle of the requirements management process begins.
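The bidirectional checks described above (every requirement maps to at least one test case, and every test traces back to a valid source requirement) can be automated over even a very simple matrix. The following is a minimal sketch, not a prescribed implementation; the requirement and test IDs are hypothetical:

```python
# Hypothetical, minimal traceability records. A real matrix would also carry
# the columns shown in Table II (design, module, and release references).
requirements = {"REQ-001": "Audit trail on record changes",
                "REQ-002": "Unique user accounts"}
tests = {"TC-01": ["REQ-001"],          # test case -> requirements it verifies
         "TC-02": ["REQ-002"],
         "TC-03": ["REQ-002"]}

def check_bidirectional(requirements, tests):
    """Return (requirements with no test, tests citing an unknown requirement)."""
    covered = {r for refs in tests.values() for r in refs}
    untested = sorted(set(requirements) - covered)
    orphan_tests = sorted(t for t, refs in tests.items()
                          if not all(r in requirements for r in refs))
    return untested, orphan_tests

untested, orphans = check_bidirectional(requirements, tests)
assert not untested and not orphans   # intended coverage is achieved
```

A check of this kind can be run whenever the matrix is updated with as-built information, so that gaps in coverage surface before the formal traceability analysis.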
TOOLS
The following is a sampling of requirements management tools. The tools mentioned do not represent an endorsement and are presented for information purposes only. In most cases, tool names are trademarked by their respective developers.
- Cybernetic Intelligence GmbH, EasyRM, www.easy-rm.com
- IBM Rational RequisitePro, an IBM tool used for requirements management
- Integrated Chipware, RTM
- Rational Software, Rational RequisitePro, www.rational.com
- Siemens, Systems Engineering and Requirements (Teamcenter)
- Telelogic DOORS, a tool used for requirements management.

GENERAL REFERENCES
ASTM, ASTM E, Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment, August.
Bonney, Joseph, "Requirements Engineering: Our Best Practices," StickyMinds.com.
Carnegie Mellon University, Software Engineering Institute, Capability Maturity Model Integration (CMMI) for Software, Rev 1.3.
IEEE, IEEE Std 1233, Guide for Developing System Requirements Specifications.
ISO, ISO 9001, Quality Management Systems Requirements.
ISO, ISO/IEC 9126, Software Engineering Product Quality.
ISO, ISO/IEC, Systems and Software Engineering Software Life Cycle Processes.
ISPE/GAMP, "GAMP Traceability for GxP Regulated Application," Pharmaceutical Engineering, Vol 26 No 1, January/February.
NASA, Software Engineering Documents.
US Centers for Disease Control and Prevention (CDC), Unified Process Practices Guide Requirements Management, UP Version 06/30/07, www2.cdc.gov/cdcup/.
FDA, General Principles of Software Validation; Final Guidance for Industry and FDA Staff, June.

Additional regulatory or guidance references associated with this topic include: PIC/S Guidance on Good Practices for Computerised Systems in Regulated GxP Environments (PI 011-3); US FDA 21 CFR ; US FDA 21 CFR (g), 21 CFR (i), and 21 CFR 11.10(a).

DISCLAIMERS
The information contained in this article is provided in good faith and reflects the personal views of the author. These views do not necessarily reflect the perspective of the publisher of this article. No liability can be accepted in any way. The information provided does not constitute legal advice. The recommendations to implement requirements management, as described in this article, are purely from the standpoint and opinion of the author and should serve as a suggestion only. There are many other ways to implement requirements management. Some descriptions are based on the listed guidelines, with judicious editing where necessary to fit the context of this manuscript.

ARTICLE ACRONYM LISTING
C-E: Cause and Effect Analysis
CMMI: Capability Maturity Model Integration
DSM: Design Structure Matrix
FMEA: Failure Mode and Effect Analysis
GUI: Graphical User Interface
OA: Orthogonal Array
PQ: Performance Qualification
QFD: Quality Function Deployment
RAM: Requirements Analysis Matrix
RS: Requirements Specification
SLC: System Lifecycle
SDLC: System Development Lifecycle
VOC: Voice of the Customer

Originally Published in Journal of Validation Technology Volume 17 Number 2
About the Author
Orlando López, currently working with Smith & Nephew, Memphis, Tennessee, as an IT Manager, Global Regulatory/Compliance, has over 20 years of experience with pharmaceutical and medical device manufacturers. His role at Smith & Nephew includes serving as subject matter expert for quality and compliance oversight on new computer system implementation and maintenance. He is the author of two books: 21 CFR Part 11 - A Complete Guide to International Compliance, published by Sue Horwood Publishing Limited (www.suehorwoodpubltd.com), and Computer Infrastructure Qualification for FDA Regulated Industries, published by Davis Healthcare International Publishing. He may be contacted by e-mail at [email protected].
Integrating Risk Management into Computer System Validation
Timothy Fields

INTRODUCTION
The last decade has brought about a number of changes in how pharmaceutical companies address validation, driven primarily by regulatory changes and the economy. Rather than focusing on the documented-evidence aspect of validation, companies and regulators are shifting the focus to where the biggest risk lies and managing accordingly. In 2005, the International Conference on Harmonization (ICH) issued ICH Q9 Quality Risk Management and followed it with ICH Q10 Quality Management Systems (2). These two documents, along with ICH Q8 Pharmaceutical Development, set the stage for using a risk-based approach to validation. In 2011, the European Commission revised Annex 11 to the European Union (EU) Good Manufacturing Practices (GMPs) to increase the focus on risk. The shift in focus toward a risk-based approach and management responsibility should result in more appropriate validation efforts rather than paperwork.

RISK
Risk is the combination of the probability of occurrence of harm and the severity of that harm. To manage risk, one must first identify it. Risk management provides a proactive approach to identifying, controlling, or mitigating risks. Use of a risk management approach also aids the decision-making process as risks and their potential consequences are assessed. Risk assessments and risk management can be used throughout the computer system validation lifecycle.

PLANNING
EU Annex 11 Section 4.1 states: "The validation documentation and reports should cover the relevant steps of the life cycle. Manufacturers should be able to justify their standards, protocols, acceptance criteria, procedures and records based on their risk assessment [emphasis added]" (4).
Risk assessment can be used to determine whether a computer system requires validation and how much validation effort and resources should be applied to the system. When the US Food and Drug Administration issued Code of Federal Regulations Title 21 Part 11 on electronic records and electronic signatures in 1997, many companies used risk assessments to determine which computer systems to validate in order to comply with Part 11 and to prioritize their validation efforts. These risk assessments need to be documented to support the decisions made. Risk assessments should be reviewed with management so that the decisions and their potential consequences are understood and accepted.

Risk assessments can be used to define the scope of the validation efforts. For example, a warehouse inventory system that is part of a financial system may involve different risk assessments: the financial system may be assessed for risk to the business, while the warehouse inventory system may carry compliance risks. The scope of the validation effort may therefore differ for the different components of the overall system. Two frequently asked questions are: "Do I need to validate this system?" and "How much validation do I need to do?" Appropriate use of risk assessment and risk management tools can aid in answering such questions.

As most companies today purchase software rather than develop it in-house, the risks of selecting a software supplier should be considered, particularly if the software is custom made. An onsite audit may be performed to mitigate the risk by assessing the controls in place at the software supplier to ensure that high-quality software is being developed and well documented.

DEFINING REQUIREMENTS
Defining the functions of a computer system should use risk assessment to ensure that critical functions are appropriately defined. For example, risk assessments can help determine whether to implement a computer system or use paper.
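The scoping questions above ("Do I need to validate this system?" and "How much validation do I need to do?") are often answered by combining the two components of risk defined earlier: severity of harm and probability of occurrence. The matrix, labels, and effort mapping below are illustrative assumptions only; neither ICH Q9 nor Annex 11 prescribes a particular scale:

```python
# Illustrative 3x3 risk matrix: risk class from severity and probability.
LEVELS = ("low", "medium", "high")

def risk_class(severity: str, probability: str) -> str:
    """Combine severity of harm and probability of occurrence into a class."""
    s, p = LEVELS.index(severity), LEVELS.index(probability)
    matrix = [["low",    "low",    "medium"],   # severity = low
              ["low",    "medium", "high"],     # severity = medium
              ["medium", "high",   "high"]]     # severity = high
    return matrix[s][p]

# Hypothetical mapping from risk class to validation effort.
EFFORT = {"low":    "supplier assessment and limited testing",
          "medium": "risk-based subset of validation deliverables",
          "high":   "full validation with documented traceability"}

print(EFFORT[risk_class("high", "medium")])   # a high-risk system
```

The point of such a sketch is not the specific thresholds but that the decision, once made, is documented and reviewable with management, as the text recommends.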
Risks to be considered when defining the requirements include risk to a patient or end user. Could system failure result in harm to the end user or patient? This may not always be evident when assessing systems. Certainly a computer malfunction of a pacemaker could be fatal, while failure of a training database is less likely to have a significant impact on a patient.

Safety risks should be considered. For example, if a building management system controls the airflow as part of a hazardous material containment system, failure could result in safety concerns. Even the definition of alarms should be carefully considered, as nuisance alarms can often lead to ignoring all alarms.

Compliance risks should also be considered. As mentioned earlier, compliance with 21 CFR Part 11 or other aspects of the CGMPs should be assessed. Compliance risk assessments are frequently used to determine which systems require validation.

Data integrity risks should be considered. What happens if data are corrupted or lost? What happens if the data are overwritten? When defining requirements, it is critical to define the data type and the quantity of data that the system will be handling. The frequency of data collection and the length of time that data must be stored should be defined and included in risk assessments.

DESIGN PHASE
During the design phase, risk assessments can be used to determine the impact of data transfer between systems. For example, if data are passed from one system to another, is there a risk that one system will truncate values or apply different rounding rules? There is also a risk that the design will not meet the requirements, in which case validation and compliance may be affected.

CONSTRUCTION PHASE
As mentioned earlier, if custom code is being developed, the risk associated with the developer should be considered, including the need for an audit and frequent follow-up.
Risk mitigation may include performing additional validation work or requiring additional documentation from the developer to support validation and ensure compliance.

TESTING PHASE
The testing phase is probably the lifecycle stage in which risk assessments are used most frequently. Risk assessments are performed to determine the amount of testing needed to demonstrate that the system is in a state of control and is performing as intended. Risk assessments can also be used to evaluate testing failures and errors and to make decisions about the criticality of each failure or error. If different environments (e.g., development vs. production) are used, an assessment should be made to demonstrate that testing performed in the development environment is equivalent to testing in the production environment. Such an assessment may also determine that certain tests need to be repeated in the production environment regardless of whether they passed in the development environment.

OPERATIONAL AND MAINTENANCE PHASE
The operational and maintenance lifecycle phase includes a number of tasks that also lend themselves to risk management. The amount and level of training required for personnel who will be using the system can be assessed. Different levels of access may result in different training needs and should be assessed. The ability to access the system without training can present a risk and should be evaluated.

Access to the system should also be considered as part of system security, and the risks associated with defining access rights should be assessed. Other risks to security, such as virus protection, should be considered. If employees have laptops that they are allowed to take home and then reconnect to the system, the risks associated with potential viruses should be assessed. Access from remote locations should be reviewed to ensure that system security is not breached or, if it is, that controls are in place to minimize data integrity issues.
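One way to control the training and access risks described above is a simple gate: access for a role is granted only when the required curriculum for that role is complete. The role names and course codes below are hypothetical, purely to illustrate the control:

```python
# Hypothetical mapping of system role -> required training curriculum.
REQUIRED_TRAINING = {
    "analyst": {"GMP-BASICS", "SYS-USER"},
    "admin":   {"GMP-BASICS", "SYS-USER", "SYS-ADMIN", "SECURITY"},
}

def may_grant_access(role: str, completed: set) -> bool:
    """Grant the role only if every required training item is complete."""
    return REQUIRED_TRAINING[role] <= completed   # subset check

# An admin candidate missing security training is refused access.
print(may_grant_access("admin", {"GMP-BASICS", "SYS-USER"}))   # False
```

In practice such a check would sit in the account-provisioning procedure, with the risk assessment determining which roles need which training.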
Cloud computing is being used by pharmaceutical companies today. The risks associated with system access, changes, and data integrity and confidentiality should be assessed. For example, what security is in place for storage of clinical trial databases on a cloud server? Can someone access records that show patient information and thus violate confidentiality? Or, worse yet, can these records be deleted, either purposefully or accidentally?

Change control is an obvious use of risk assessments. Each change should be assessed for its associated risk. Changes should be assessed for impact on other systems, functionality, documentation, validation, and data integrity. Changes to a system that shares data with another system need to be assessed to ensure that the data sharing and the data itself will not be impacted by the change. Configuration changes should be assessed to determine the risks associated with such changes. The addition of new hardware or expansion of the system should be assessed to determine the potential impact on the system, including any risks associated with a potential slowdown of the system.

BACK-UP AND ARCHIVAL
Risk assessments can be used to make decisions about how systems will be backed up and the back-up frequency. Changes to the amount of data collected and to system usage should be assessed for potential impact on the frequency and method of system back-ups. If a decision is made to collect data more frequently and store them on a server, the server storage capacity may fill up faster and increase the risks of crashing the system, overwriting data, or failing to record data. More frequent downloads or back-ups, or additional storage capacity, may be needed. Risk assessments can be presented to management to ensure that decisions regarding back-ups are appropriately based on the risk.

Where the back-up data will be stored should be considered. Storage in the same building as the system may be risky: what would happen to the data if an explosion occurred? Similar risks are associated with longer term data archival. The decision of where archived data will be stored should be based on a risk assessment, including access to the archived data and the security of the data. The length of time that the data need to be stored should be assessed and considered when determining the media on which the archived information will be stored. Retrieval of archived data should also be considered in the risk assessment.
For example, as technology changes, it should be verified that the data can still be restored in a usable fashion when needed, especially data that are stored for 20 or more years.

DOCUMENTATION
If risk assessments are a part of the validation lifecycle, how are these assessments documented? The use of risk assessments should be defined in the validation plan. The plan should clearly state when and how risk assessments will be used. The specific tools (failure mode and effects analysis, fault tree analysis, fishbone diagrams, etc.) may vary depending on the objective of the risk assessment and therefore may not be defined in the validation plan. The documentation method and risk assessment tools may vary from lifecycle phase to phase. The key is to use risk assessments but not to let them become the focus of the system implementation or validation. The completed risk assessments should be used to help guide informed decisions by management and should be maintained with the validation documentation.

PERIODIC RE-ASSESSMENTS
It is beneficial to periodically review the risk assessments that were completed and the controls that were implemented as a result. Review of system performance and system errors may lead to the need to perform additional risk assessments. Including risk assessments in routine change control can help mitigate unexpected system errors or unexpected risks.

CONCLUSIONS
Validation efforts should focus on what is important (i.e., riskiest), with risk assessments used as a means of ensuring that the appropriate systems are being validated and that the appropriate level of effort is being expended to validate them. Management must be able to make informed decisions regarding validation efforts, and the use of risk assessments can help in making such decisions.
Integrating risk assessments into the validation lifecycle and documenting the basis for what was done also provides a level of assurance to management and regulatory authorities that the system was properly defined, designed, built, tested, operated, and maintained.

REFERENCES
1. ICH, ICH Q9, Quality Risk Management.
2. ICH, ICH Q10, Quality Management Systems.
3. ICH, ICH Q8(R2), Pharmaceutical Development.
4. European Commission, Annex 11 to Vol. 4, Good Manufacturing Practice, Medicinal Products for Human and Veterinary Use: Computerised Systems.
5. FDA, 21 CFR Part 11, Electronic Records; Electronic Signatures, Final Rule, Federal Register 62 (13430).
Originally Published in Journal of Validation Technology Volume 19 Number 3

About the Author
Tim Fields is the Vice President of Quality at Protein Sciences Corporation, Meriden, CT. Mr. Fields joined Protein Sciences in 2010 as Director of Compliance and Training and subsequently served as Senior Director of Quality Operations before becoming VP of Quality. He has more than 30 years of experience in the pharmaceutical industry, including more than 13 years at Pfizer and 16 years as a GMP compliance consultant.
Electronic records and electronic signatures in the regulated environment of the pharmaceutical and medical device industries
White Paper No 01 I December 2010 Implementation of 21 CFR Part 11 in the epmotion Software Electronic records and electronic signatures in the regulated environment of the pharmaceutical and medical device
A Model for Training/Qualification Record Validation within the Talent Management System
A Model for Training/Qualification Record Validation within the Talent Management System IN THIS PAPER: Meeting 21 CFR Part 11 and Annex 11 Requirements Delivering Qualification Transcripts During Audits
MHRA GMP Data Integrity Definitions and Guidance for Industry January 2015
MHRA GMP Data Integrity Definitions and Guidance for Industry Introduction: Data integrity is fundamental in a pharmaceutical quality system which ensures that medicines are of the required quality. This
ICH guideline Q10 on pharmaceutical quality system
September 2015 EMA/CHMP/ICH/214732/2007 Committee for Human Medicinal Products Step 5 Transmission to CHMP May 2007 Transmission to interested parties May 2007 Deadline for comments November 2007 Final
Request for Proposal for Application Development and Maintenance Services for XML Store platforms
Request for Proposal for Application Development and Maintenance s for ML Store platforms Annex 4: Application Development & Maintenance Requirements Description TABLE OF CONTENTS Page 1 1.0 s Overview...
Training Course Computerized System Validation in the Pharmaceutical Industry Istanbul, 16-17 January 2003. Change Control
Training Course Computerized System Validation in the Pharmaceutical Industry Istanbul, 16-17 January 2003 Change Control Wolfgang Schumacher Roche Pharmaceuticals, Basel Agenda Change Control Definitions
WHITEPAPER: SOFTWARE APPS AS MEDICAL DEVICES THE REGULATORY LANDSCAPE
WHITEPAPER: SOFTWARE APPS AS MEDICAL DEVICES THE REGULATORY LANDSCAPE White paper produced by Maetrics For more information, please contact global sales +1 610 458 9312 +1 877 623 8742 [email protected]
OPERATIONAL STANDARD
1 of 11 1. Introduction The International Safe Transit Association (ISTA), a non-profit association whose objective is to prevent product damage and excess packaging usage within the distribution environment.
Computer System Configuration Management and Change Control
Computer System Configuration Management and Change Control Using Risk-Based Decision Making to Plan and Implement IT Change Justin J. Fisher Senior Manager, BT Quality and Compliance Pfizer Agenda 1.
DNV GL Assessment Checklist ISO 9001:2015
DNV GL Assessment Checklist ISO 9001:2015 Rev 0 - December 2015 4 Context of the Organization No. Question Proc. Ref. Comments 4.1 Understanding the Organization and its context 1 Has the organization
TrackWise - Quality Management System
TrackWise - Quality Management System Focus area: Electronic Management of CAPA Systems in the Regulated Industry May 11, 2007 Yaniv Vardi VP, Operations Sparta Systems Europe, Ltd. Agenda Sparta Systems
Aligning Quality Management Processes to Compliance Goals
Aligning Quality Management Processes to Compliance Goals MetricStream.com Smart Consulting Group Joint Webinar February 23 rd 2012 Nigel J. Smart, Ph.D. Smart Consulting Group 20 E. Market Street West
Exhibit F. VA-130620-CAI - Staff Aug Job Titles and Descriptions Effective 2015
Applications... 3 1. Programmer Analyst... 3 2. Programmer... 5 3. Software Test Analyst... 6 4. Technical Writer... 9 5. Business Analyst... 10 6. System Analyst... 12 7. Software Solutions Architect...
CIP-010-2 Cyber Security Configuration Change Management and Vulnerability Assessments
CIP-010-2 Cyber Security Configuration Change Management and Vulnerability Assessments A. Introduction 1. Title: Cyber Security Configuration Change Management and Vulnerability Assessments 2. Number:
The Configuration Management process area involves the following:
CONFIGURATION MANAGEMENT A Support Process Area at Maturity Level 2 Purpose The purpose of is to establish and maintain the integrity of work products using configuration identification, configuration
Domain 1 The Process of Auditing Information Systems
Certified Information Systems Auditor (CISA ) Certification Course Description Our 5-day ISACA Certified Information Systems Auditor (CISA) training course equips information professionals with the knowledge
December 21, 2012. The services being procured through the proposed amendment are Hosting Services, and Application Development and Support for CITSS.
Justification for a Contract Amendment to Contract 2012-01: Interim Hosting and Jurisdiction Functionality for the Compliance Instrument Tracking System Service (CITSS) December 21, 2012 Introduction WCI,
SaaS Adoption Lifecycle in Life-Sciences Companies
www.arisglobal.com A White Paper Presented By ArisGlobal SaaS Adoption Lifecycle in Life-Sciences Companies by Achal Verma, Associate Director - Program Delivery, Cloud Services Abstract With increasing
Standardizing Best Industry Practices
MEDICAL DEVICES Current market conditions have created a highly competitive and challenging environment for the medical device industry. With stricter FDA regulatory oversight, increasing material costs
Risk-Based Validation of Commercial Off-the-Shelf Computer Systems
Risk-Based Validation of Commercial Off-the-Shelf Computer Systems Published by Advanstar Communications in Journal of Validation Technology May 2005, Vol. 11, No. 3 Supplied by (*) www.labcompliance.com
Cost of Poor Quality:
Cost of Poor Quality: Analysis for IT Service Management Software Software Concurrent Session: ISE 09 Wed. May 23, 8:00 AM 9:00 AM Presenter: Daniel Zrymiak Key Points: Use the Cost of Poor Quality: Failure
Internal Quality Management System Audit Checklist (ISO9001:2015) Q# ISO 9001:2015 Clause Audit Question Audit Evidence 4 Context of the Organization
Internal Quality Management System Audit Checklist (ISO9001:2015) Q# ISO 9001:2015 Clause Audit Question Audit Evidence 4 Context of the Organization 4.1 Understanding the organization and its context
The PNC Financial Services Group, Inc. Business Continuity Program
The PNC Financial Services Group, Inc. Business Continuity Program 1 Content Overview A. Introduction Page 3 B. Governance Model Page 4 C. Program Components Page 4 Business Impact Analysis (BIA) Page
The FDA recently announced a significant
This article illustrates the risk analysis guidance discussed in GAMP 4. 5 By applying GAMP s risk analysis method to three generic classes of software systems, this article acts as both an introduction
SOFTWARE-IMPLEMENTED SAFETY LOGIC Angela E. Summers, Ph.D., P.E., President, SIS-TECH Solutions, LP
SOFTWARE-IMPLEMENTED SAFETY LOGIC Angela E. Summers, Ph.D., P.E., President, SIS-TECH Solutions, LP Software-Implemented Safety Logic, Loss Prevention Symposium, American Institute of Chemical Engineers,
DeltaV Capabilities for Electronic Records Management
January 2013 Page 1 DeltaV Capabilities for Electronic Records Management This paper describes DeltaV s integrated solution for meeting FDA 21CFR Part 11 requirements in process automation applications
GxP Process Management Software. White Paper: Software Automation Trends in the Medical Device Industry
GxP Process Management Software : Software Automation Trends in the Medical Device Industry Introduction The development and manufacturing of a medical device is an increasingly difficult endeavor as competition
AS9100 Quality Manual
Origination Date: August 14, 2009 Document Identifier: Quality Manual Revision Date: 8/5/2015 Revision Level: Q AS 9100 UNCONTROLLED IF PRINTED Page 1 of 17 1 Scope Advanced Companies (Advanced) has established
ISO/IEC 17025 QUALITY MANUAL
1800 NW 169 th Pl, Beaverton, OR 97006 Revision F Date: 9/18/06 PAGE 1 OF 18 TABLE OF CONTENTS Quality Manual Section Applicable ISO/IEC 17025:2005 clause(s) Page Quality Policy 4.2.2 3 Introduction 4
Shiny Server Pro: Regulatory Compliance and Validation Issues
Shiny Server Pro: Regulatory Compliance and Validation Issues A Guidance Document for the Use of Shiny Server Pro in Regulated Clinical Trial Environments June 19, 2014 RStudio, Inc. 250 Northern Ave.
ISO 20000-1:2005 Requirements Summary
Contents 3. Requirements for a Management System... 3 3.1 Management Responsibility... 3 3.2 Documentation Requirements... 3 3.3 Competence, Awareness, and Training... 4 4. Planning and Implementing Service
How To Use A Court Record Electronically In Idaho
Idaho Judicial Branch Scanning and Imaging Guidelines DRAFT - October 25, 2013 A. Introduction Many of Idaho s courts have considered or implemented the use of digital imaging systems to scan court documents
Data Management Implementation Plan
Appendix 8.H Data Management Implementation Plan Prepared by Vikram Vyas CRESP-Amchitka Data Management Component 1. INTRODUCTION... 2 1.1. OBJECTIVES AND SCOPE... 2 2. DATA REPORTING CONVENTIONS... 2
QUESTIONS FOR YOUR SOFTWARE VENDOR: TO ASK BEFORE YOUR AUDIT
QUESTIONS FOR YOUR SOFTWARE VENDOR: TO ASK BEFORE YOUR AUDIT Heather Longden Senior Marketing Manager Waters Corporation Boston Chapter Educational Meeting June 2016 About Waters Lab Informatics Separations
How to implement a Quality Management System
How to implement a Quality Management System This whitepaper will help you to implement a Quality Management System (QMS), based on Good Manufacturing Practice (GMP), ISO 9001 or ISO 13485 within your
STS Federal Government Consulting Practice IV&V Offering
STS Federal Government Consulting Practice IV&V Offering WBE Certified GSA Contract GS-35F-0108T For information Please contact: [email protected] 2007 by STS, Inc. Outline Background on STS What is IV&V?
Union County. Electronic Records and Document Imaging Policy
Union County Electronic Records and Document Imaging Policy Adopted by the Union County Board of Commissioners December 2, 2013 1 Table of Contents 1. Purpose... 3 2. Responsible Parties... 3 3. Availability
<name of project> Software Project Management Plan
The document in this file is adapted from the IEEE standards for Software Project Management Plans, 1058-1998, which conforms to the requirements of ISO standard 12207 Software Life Cycle Processes. Tailor
ISO 9001 (2000) QUALITY MANAGEMENT SYSTEM ASSESSMENT REPORT SUPPLIER/ SUBCONTRACTOR
Page 1 of 20 ISO 9001 (2000) QUALITY MANAGEMENT SYSTEM ASSESSMENT REPORT SUPPLIER/ SUBCONTRACTOR SUPPLIER/ SUBCONTRACTOR NAME: ADDRESS: CITY AND STATE: ZIP CODE: SUPPLIER/MANUFACTURER NO PHONE: DIVISION:
Back to index of articles. Qualification of Computer Networks and Infrastructure
Back to index of articles Qualification of Computer Networks and Infrastructure R.D.McDowall McDowall Consulting Validation of computerised systems generally focuses on the providing documented evidence
ISO 9001:2015 Internal Audit Checklist
Page 1 of 14 Client: Date: Client ID: Auditor Audit Report Key - SAT: Satisfactory; OBS: Observation; NC: Nonconformance; N/A: Not Applicable at this time Clause Requirement Comply Auditor Notes / Evidence
unless the manufacturer upgrades the firmware, whereas the effort is repeated.
Software Validation in Accredited Laboratories A Practical Guide Gregory D. Gogates Fasor Inc., 3101 Skippack Pike, Lansdale, Pennsylvania 19446-5864 USA [email protected] www.fasor.com Abstract Software
Guidance for Industry. Q10 Pharmaceutical Quality System
Guidance for Industry Q10 Pharmaceutical Quality System U.S. Department of Health and Human Services Food and Drug Administration Center for Drug Evaluation and Research (CDER) Center for Biologics Evaluation
LOW RISK APPROACH TO ACHIEVE PART 11 COMPLIANCE WITH SOLABS QM AND MS SHAREPOINT
LOW RISK APPROACH TO ACHIEVE PART 11 COMPLIANCE WITH SOLABS QM AND MS SHAREPOINT Implementation of MS SharePoint provides companywide functionalities for general document management and workflow. The use
Build (develop) and document Acceptance Transition to production (installation) Operations and maintenance support (postinstallation)
It is a well-known fact in computer security that security problems are very often a direct result of software bugs. That leads security researches to pay lots of attention to software engineering. The
CONTENTS. 1 Introduction 1
Prelims 25/7/06 1:49 pm Page iii CONTENTS List of Tables List of Figures Preface 1 1 2 Infrastructure Lifecycle Approach Recommendation and Conceptualization Design Design Reviews Development and Integration
Computerised Systems. Seeing the Wood from the Trees
Computerised Systems Seeing the Wood from the Trees Scope WHAT IS A COMPUTERISED SYSTEM? WHY DO WE NEED VALIDATED SYSTEMS? WHAT NEEDS VALIDATING? HOW DO WE PERFORM CSV? WHO DOES WHAT? IT S VALIDATED -
TG 47-01. TRANSITIONAL GUIDELINES FOR ISO/IEC 17021-1:2015, ISO 9001:2015 and ISO 14001:2015 CERTIFICATION BODIES
TRANSITIONAL GUIDELINES FOR ISO/IEC 17021-1:2015, ISO 9001:2015 and ISO 14001:2015 CERTIFICATION BODIES Approved By: Senior Manager: Mpho Phaloane Created By: Field Manager: John Ndalamo Date of Approval:
CDC UNIFIED PROCESS JOB AID
CDC UNIFIED PROCESS JOB AID Independent Verification & Validation Activities Document Purpose This Job Aid is a brief document listing the items to be noted, checked, remembered, and delivered when completing
Information Security Policies. Version 6.1
Information Security Policies Version 6.1 Information Security Policies Contents: 1. Information Security page 3 2. Business Continuity page 5 3. Compliance page 6 4. Outsourcing and Third Party Access
Your Software Quality is Our Business. INDEPENDENT VERIFICATION AND VALIDATION (IV&V) WHITE PAPER Prepared by Adnet, Inc.
INDEPENDENT VERIFICATION AND VALIDATION (IV&V) WHITE PAPER Prepared by Adnet, Inc. February 2013 1 Executive Summary Adnet is pleased to provide this white paper, describing our approach to performing
A Pragmatic Approach to the Testing of Excel Spreadsheets
A Pragmatic Approach to the Many GxP critical spreadsheets need to undergo validation and testing to ensure that the data they generate is accurate and secure. This paper describes a pragmatic approach
IT General Controls Domain COBIT Domain Control Objective Control Activity Test Plan Test of Controls Results
Acquire or develop application systems software Controls provide reasonable assurance that application and system software is acquired or developed that effectively supports financial reporting requirements.
