QRM and the System Classification “Big Reshuffle”: Don’t Miss the Forest for the Trees
Recently, under its new Insights offering, KenX published an article, The Big Reshuffle – Impact Assessments and System Criticality, discussing the implications of the change in approach from System Level Impact Assessment (SLIA) and Component Criticality Assessment (CCA) to System Classification (SC) and System Risk Assessment (SRA) in ISPE Baseline Guide Volume 5, Commissioning & Qualification, 2nd Edition (2019). It is, on the whole, an excellent article and well worth the time to read. In describing the impact of the “big reshuffle”, the article addresses some common practices and paradigms that I believe merit further discussion, particularly with respect to understanding the role of System Classification and System Risk Assessment in the application of Quality Risk Management (QRM) principles to the integrated Commissioning & Qualification (C&Q) process.
Based on input from owner company focus groups and C&Q and QRM SMEs, the article indicates several perceived concerns with SC/SRA, including that SC/SRA are not as robust as SLIA/CCA, that use of SC/SRA could thus lead to “gaps” due to details from CCA being missed, that “direct” and “not direct” classification results in ambiguity, that SRA is a duplication of effort, and that a lack of documented CQAs and CPPs precludes performance of SRA. The article then discusses several responses: a paradigm shift (“reshuffle”) in the understanding of system classification and impact assessments, applying SC/SRA as a primer for the verification testing strategy, viewing SRA as a scalable QRM tool, and using SC/SRA to “connect the dots” from product and process to control strategy.
While I agree at a high level with these responses, I think the article misses an opportunity to explain how the QRM-based approach using SC/SRA is more robust, more rigorous, and more streamlined, in that the article allows some assumptions, misunderstandings, and vestiges of the legacy C&Q approach to stand; namely, that SLIA (and, therefore, SC) is intended to be a “filter” to determine which systems QRM should be applied to, that SRA is analogous to and intended to be a replacement for CCA, and that understanding of CQAs and CPPs is a new requirement in the C&Q process.
CQAs and CPPs as C&Q Inputs
Starting with the latter: the article indicates that, while noteworthy, the lack of availability of CQAs and CPPs in an accessible format is an industry issue that is outside the scope of C&Q. I fundamentally disagree with this conclusion. The QRM-based integrated C&Q process described by the revised Baseline Guide assumes and requires product and process knowledge (PPK) – i.e. CQAs and CPPs – as an input to the process. Without CQAs and CPPs, QRM cannot be applied to the C&Q process. I would argue further that the legacy C&Q approach similarly required this knowledge. Both the SLIA and CCA criteria provide the following caveat (emphasis added): “These criteria should be used to inform a judgment based on a comprehensive understanding of the product, process and the nature of the system. They should not be used to replace the exercise of informed judgment by appropriately qualified personnel.”
Thus, even in the legacy C&Q approach described by the original Baseline Guide, understanding of the product and process – i.e. CQAs and CPPs – was required.
I discussed this article with a lead author for both the original and revised Baseline Guide, who stated:
The article correctly noted that many companies lacked initial documentation of CQA/CPP at the start of a project and incorrectly viewed the SC process as a question-based approach to classify systems as direct for continuing on the Guide Process. The article missed the point that once the decision is made to design a facility around a defined process (as a CMO does) that one can associate the CPPs associated with the process steps, with most likely 99% accuracy, and apply a RA to the designated system. As such, SC is not a required QRM C&Q prior process step. Therefore, it is not and was not designed to be a key determination factor for connecting the dots in control strategy. (Note, the Guide includes a list of systems and normally associated CQAs/CPPs in an appendix).
One might describe the legacy C&Q approach as a partial application of QRM to the C&Q process. CQAs and CPPs are identified as part of the process but are not used as the basis for risk assessment. SLIA and CCA are used as a form of “bottom-up” risk assessment based on understanding of the system/component rather than a “top-down” risk assessment based on understanding of product quality and process-related risks to product quality. Thus, system and component impact to product quality are determined, but specific process-related quality risks are neither identified nor assessed. As a result, risk evaluation becomes a binary: a system has impact, or it does not; a component is critical, or it is not. Similarly, risk control is merely implicit: components assessed as “critical” comprise the risk control strategy. Accordingly, verification of the risk control strategy – ensuring that testing is commensurate with risk – is also merely implicit (and binary): critical components are commissioned and qualified; non-critical components are commissioned-only.
So, this process incorporates risk-based rationale and decision-making, but that rationale and those decisions are binary. As made clear in regulatory guidance, we must understand risk not as a binary but as a continuum. Further, the legacy C&Q approach rests on an implicit understanding of processes and systems that does not explicitly tie the risk control strategy to specific, identified process risks to product quality.
By contrast, System Risk Assessment identifies specific process risks to identified product quality – i.e. specific CQA failures related to process failure to maintain specific CPPs. Those risks are assessed qualitatively (high/medium/low) rather than as a binary, and specific risk controls – design controls, procedural controls, alarms, etc. – are identified to mitigate those risks to an acceptable level. Thus, the verification strategy can be tied directly to an explicit risk control strategy, and the rigor (effort, formality, and documentation) of verification testing can be demonstrated to be commensurate with the degree of risk (and with the degree of applied risk mitigation).
System Impact/Component Criticality Assessment as a QRM “Filter”
In the legacy C&Q approach, SLIA was often implemented as a “filter” or “gatekeeper” for further application of QRM. Systems would first be classified, and then systems classified as direct impact would undergo CCA. The assessed component criticality then dictated the verification strategy.
The original Baseline Guide suggested that CCA should be performed on indirect, and sometimes on no-impact, systems, to ensure that such systems “have not subsequently acquired a critical function as the detailed design has progressed to conclusion.” Informed SME judgment was intended to be used as the sanity check to ensure that system classification remained correct; systems that contain critical components should, by definition, be classified as direct impact.
This is the inherent danger both in assessing risk from a system/component perspective rather than from a product/process perspective and in using system impact (and to a lesser extent, component criticality) assessment as a QRM “filter”: a change in design definition or system boundaries could cause process-related risks to product quality to be missed and/or could lead to an inappropriate verification testing strategy. This inherent danger is even more readily apparent when applying the same approach to the SC/SRA process in the revised Baseline Guide. Unfortunately, the Baseline Guide itself states that SRA is performed for systems identified as direct impact through System Classification. More on that later.
The article discusses perceived concerns with SRA as a replacement for CCA. The article states, “It must be noted, nevertheless, that the SRA is not a direct replacement for the CCA. It is more like a lens, through which aspects that impact on CQA and CPP are viewed.” However, this directly contradicts the Baseline Guide, which states, “In the previous edition of this Baseline® Guide, component classification could be carried out as a subset of system classification. Component classification is no longer necessary since the System Risk Assessment…identifies CDEs for the system in a more efficient manner.”
I think part of this disparity comes from the view – as reflected by industry representatives referenced in the article – that SRA is merely a repackaged FMEA. Many of the concerns regarding SRA may stem from conflating the purpose and outputs of these two risk tools. SRA is intended to be a direct replacement for CCA; SRA is not intended to be a direct replacement for FMEA.
As stated by the previously referenced Baseline Guide lead author:
There also appears to be a good bit of confusion concerning the “SRA”. The primary motivation in developing the rationale for the SRA was to take advantage of the reality that most equipment/systems used by pharma are of relatively standard/generic design and most likely have most (80%) of the controls/CAs and CDEs to implement the CPPs designed in. This has a positive benefit in changing the approach of RA from “getting down into the weeds” of identifying as many failure modes as possible to focusing on the CPPs being controlled by the system and having adequate SMEs assess the adequacy of the existing system “control strategy” for a defined product. If unacceptable risk is identified, additional controls can be added. The SRA suggested format does this by inclusion of identification of engineering and automation design elements (CDEs), procedural controls, and also identifies critical instruments associated with critical CPP alarms. As noted in the article, FMEA is usually the tool of choice but often not the most desirable tool for the scope of the RA.
The purpose of FMEA is to assess the impact of various failure modes quantitatively based on severity, likelihood of occurrence, and detectability. The output of FMEA is a relative ranking of process risks according to Risk Priority Number (RPN), the purpose of which is to inform decision-making regarding the focus of risk mitigation. The purpose of SRA is to identify specific process-related risks to product quality, to identify the applied risk controls strategy, and to evaluate residual risk. The outputs of SRA are 1) a list of identified process risks to product quality, 2) the risk profile (qualitative risk assessment), and 3) the identified risk control strategy (CAs/CDEs, procedural controls, and alarms).
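The contrast between these two tools can be made concrete. The sketch below is purely illustrative: the field names, rating scales, and example values are hypothetical, not taken from the Baseline Guide or the article. It shows FMEA's quantitative RPN calculation (severity × occurrence × detectability) alongside the kind of qualitative record an SRA produces: an identified risk, its qualitative rating, and the controls that comprise the risk control strategy.

```python
# Illustrative sketch only -- field names, scales, and example values are
# hypothetical, not taken from the Baseline Guide or the article.

def fmea_rpn(severity: int, occurrence: int, detectability: int) -> int:
    """FMEA output: a Risk Priority Number used to rank failure modes."""
    return severity * occurrence * detectability

# FMEA ranks failure modes quantitatively:
rpn = fmea_rpn(severity=8, occurrence=3, detectability=5)  # 8 * 3 * 5 = 120

# SRA, by contrast, records a qualitative risk profile plus the specific
# controls (design, procedural, alarms) that mitigate the identified risk:
sra_entry = {
    "cqa": "Sterility",                           # product quality attribute at risk
    "cpp": "Sterilization hold temperature",      # process parameter controlling it
    "risk": "Hold temperature drops below setpoint",
    "risk_level": "High",                         # qualitative: High/Medium/Low
    "controls": {
        "design": ["Jacket temperature control loop (CDE)"],
        "procedural": ["Cycle review SOP"],
        "alarms": ["Low-temperature critical alarm"],
    },
}
```

The structural difference mirrors the difference in purpose: FMEA yields a relative ranking for prioritizing mitigation effort, while the SRA entry carries the identified risk, its profile, and the control strategy forward as direct inputs to the verification strategy.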
Understanding this purpose and these outputs addresses the misconception that SC/SRA can lead to “gaps” and eliminates the confusion regarding the manner in which critical design elements and critical alarms are identified. Criticality is directly linked to product quality through identified process-related risks to product quality and the risk control strategy identified to mitigate those risks. If a component or alarm either contributes a risk to product quality or acts as a mitigation for an identified risk, then it is, by definition, critical. Further, rather than being viewed as redundant exercises, SRA can be used to inform/as an input to process FMEA, or vice versa, depending on when each activity is performed.
Indirect Impact Classification Ambiguity
The article discusses the inconsistency in C&Q strategy within the industry with respect to “indirect impact” systems as assessed via SLIA in the legacy C&Q process, indicating that some firms commission these systems while others commission and qualify these systems – speculating that the difference may be a matter of the degree of (quality) risk aversion on the part of a given firm. However, the original Baseline Guide was prescriptive on this point, stating that “’Indirect Impact’ …systems and their components are designed, installed, and commissioned according to GEP only.”
This industry confusion and inconsistency further demonstrates the limitation of applying QRM based on a binary understanding of risk and based on a component-focused rather than a product-focused perspective.
The “CAI Way”
To address several of these questions, concerns, and points of confusion, CAI recommends applying QRM to the integrated C&Q process as described in the revised Baseline Guide, but with some subtle, yet key, changes in perspective:
1. Risk assessment should be performed by process, rather than by system
2. System classification is redundant with robust risk assessment
3. System classification should not be used as a QRM “filter” or “gatekeeper”
The problem with assessing risk by system rather than by process is three-fold: one, the process development risk assessment that initially identifies CQAs, CPPs, and the baseline risk control strategy (i.e. initial CAs) is developed prior to, and independently of, the eventual process Basis of Design; two, differentiating between systems is highly sensitive to the definition of system boundaries; and three, system-based assessment can miss risks that lie outside defined system boundaries, whereas assessing risk by process yields a more holistic understanding. Further, in terms of the overall process, system distinctions are largely arbitrary. They serve primarily as a means to facilitate design and delivery of the facilities, utility systems, and equipment that comprise the process. Thus, it is advantageous to assess risk, if not for the process as a whole, at least by functional area rather than by system.
Where a project has already identified specific systems as part of the Basis of Design, the risk profile and associated risk controls for those systems can be incorporated into the risk assessment to facilitate evaluation. For most scenarios, the SRA template in the Baseline Guide can serve as a useful tool.
Obviously, this ideal approach must account for the current industry reality that CQAs and CPPs (and the process development risk assessment that should identify them) are often not well-documented. Mitigation strategies exist, but they are outside the scope of this blog post.
When we understand the relationship between product quality (CQAs), process-related risks to product quality (e.g. CPPs), the risk control strategy (Critical Aspects), and the design definition (CDEs), it should make sense why system classification is a redundant activity in terms of the integrated C&Q process. SRA outputs include identified process risks and the design controls, procedural controls, and alarms that comprise the risk control strategy. Each of those risks and each of those risk controls can be associated with a system. Thus, any system that includes an identified risk or an identified risk control is, by definition, direct impact – and any system that does not is, by definition, not direct impact. Further, this understanding is both more robust and more explicit than the output of system classification.
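The classification rule described above reduces to a simple membership test. The sketch below is illustrative only: the system and risk names are hypothetical, and the rule it encodes is the one stated in the text – any system containing an identified risk or an identified risk control is, by definition, direct impact.

```python
# Illustrative sketch only -- system names are hypothetical. The rule encoded
# here comes from the text: a system is direct impact if and only if it
# contains an identified risk or an identified risk control from the SRA.

# Hypothetical SRA outputs: the systems whose boundaries contain an
# identified process risk or an identified risk control, respectively.
systems_with_risks = {"Bioreactor skid", "WFI loop"}
systems_with_controls = {"Bioreactor skid", "Clean steam generator"}

def is_direct_impact(system: str) -> bool:
    """Direct impact iff the system holds an identified risk or risk control."""
    return system in systems_with_risks or system in systems_with_controls

assert is_direct_impact("Bioreactor skid")        # holds a risk and a control
assert is_direct_impact("Clean steam generator")  # holds a control only
assert not is_direct_impact("Plant steam")        # holds neither
```

Because the classification falls out of the SRA outputs rather than being performed as a separate question-based exercise, it is explicitly traceable to identified risks – which is the sense in which the text calls standalone system classification redundant.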
Therefore, since system classification is a “bottom-up” form of risk assessment not directly tied to product and process knowledge and is less robust than SRA, it should not be seen as a “filter” or “lens” and should not be used as a “gatekeeper” to determine further QRM application. That is not to say that system classification can’t serve a useful purpose. In fact, SC can be incredibly useful as a project delivery tool, especially in early project phases. As stated in the Baseline Guide, “Projects are commonly divided up into systems to facilitate construction management, document collation, turnover, and C&Q.” Identifying direct impact systems using SC can be useful to prioritize long-lead design focus, schedule, and effort (such as hygienic piping systems vs plant piping systems) and to define contractual obligations for system testing, documentation, and handover requirements for vendors and fabricators.
Arriving at the Same Conclusion
Having said all that, I want to give credit to the article and to its authors for a thoughtful, well-written discussion of the issues at hand. Ultimately, I believe that we arrive at the same conclusion on some of the more critical points. As the article states:
Modern validation is a series of activities planned, designed and executed to demonstrate that the controls you have implemented as a result of risk-based decision-making are valid and protect the patient. The SCA and SRA concentrates qualification rationale on quality, via CQA/ CPPs with an efficient route to test planning. This supports a state of control within the pharmaceutical manufacturing space which is critical to achieving quality while still allowing scope to innovate and continually improve.
As the article recognizes, product and process knowledge must ultimately drive our approach to C&Q, and therefore documented CQAs and CPPs must serve as inputs to the C&Q process. The article notes that the underlying intent of SC/SRA is to drive C&Q scope, strategy, and decision-making based on understanding the definition of product quality, process-related risks to product quality, and the risk control strategy. Thus, proper application of SC/SRA leads to an understanding of Qualification as verification that installation and operation of the risk control strategy is fit for intended use to ensure that the systems perform as intended to deliver CPPs. The article correctly places C&Q in the context of both ICH Q8/Q9/Q10 and the Process Validation lifecycle, and notes that applying this understanding results in a more robust, more focused, and more streamlined C&Q process. And that is exactly the sort of “reshuffle” that could benefit our industry, and ultimately, our patients.
About the Author:
Chip Bennett, PMP
Chip is a consultant and a PMI® Certified Project Management Professional (PMP) with over 20 years of experience as a validation engineer and project manager in the pharmaceutical and regulated non-pharmaceutical industries. He is the lead subject matter expert for commissioning, qualification, and validation program development, and a subject matter expert in Quality Risk Management, aseptic manufacturing, cleaning validation, quality systems, and owner project management. Chip is responsible for developing and implementing QRM-based Commissioning and Qualification programs and projects, with a focus on assessing and training clients regarding the transition to risk-based approaches.