Don’t Get Stuck In A Clinical Trial Rut:
Study Challenges, Practical Solutions, and Three Reasons Why You Need to Change
The entire biopharmaceutical industry is focused like never before on looking critically at the mega-spending of global development operations and on doing more with less. In an international environment of shifting regulatory sands, the goal is to be both effective and efficient while generating products and producing results. We are often asked to provide broad industry best-practice insights to assist with this effort, so we chose our top findings to encourage self-examination and heighten adoption of best practices. We will touch upon three key areas acknowledged as problematic, and then provide some practical ideas to consider.

1. Over-Designing Protocols

Ken Getz, from the Tufts Center for the Study of Drug Development, presented preliminary data from a recent study examining the costs associated with protocols. His figures revealed that, among the large pharma companies surveyed, respondents believed 30% of the data commonly collected in clinical trials are not critical to meeting the safety and efficacy goals of their protocols. Even without counting the downstream effects of verifying and managing those data, which respondents did not include in their approximations, the cost of collecting that information was conservatively estimated at over $1.5 million per protocol in site payments alone.1 Although the data are preliminary and likely to change as they mature, knowing that this money is wasted right out of the gate opens a long-overdue discussion about unsustainable legacy practices. We need to ask ourselves why we collect information that is not core to the protocol's key objectives of learning about and confirming the product's safety and efficacy.

This mindset is representative of the No. 1 issue that continues to pervade this industry once a product reaches its development stage: there is little critical judgment built into most processes. In this instance, protocol designers confront two disturbing precedents. First, there is little to no challenge of the status quo, no one asking why a physical exam, vital signs, and other routine measurements are collected at every visit as a historical record when long-resolved symptoms are otherwise verbally reported … or denied. In many instances there would be a heated discussion if these examinations were challenged. Second, it is rare that scientific mindsets are aligned with (or even in the presence of) business outlooks within the protocol design team; for instance, the recognition is often lacking that spending $8,000 per patient to collect information about a secondary or tertiary objective will not produce an adequate return on investment, while spending $100 to collect information for the primary objective will.

Julian Jenkins, VP in the Center for Project and Study Excellence at GSK, suggests several checks to manage this costly dilemma. First, appoint protocol review committee members who are competent in two key areas: individuals who can be highly critical of the information collected and will ask "why" in a non-judgmental way, so that the core safety and efficacy requirements are clear and defensible, and who fully understand the cost ramifications of performing procedures that may have some future scientific relevance but no obvious economic return on investment. Second, enlist the study coordinators who will actually execute the protocol to reality-check it. They are the most pragmatic, detail-oriented evaluators of the feasibility of data acquisition and can distinguish what adds value from what creates busywork.
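To make the review committee's cost question concrete, the cross-check can be reduced to simple bookkeeping: map every procedure in the schedule of assessments to the objectives it supports, project its total cost across visits and planned enrollment, and flag spend that supports nothing beyond secondary or exploratory aims. The sketch below is a minimal, hypothetical illustration of that tally; the procedures, costs, objective mappings, and the protocol_cost_by_criticality helper are invented for this example and are not drawn from Designer or any other specific tool.

```python
# Toy cross-check of protocol procedures against study objectives.
# All procedures, costs, and objective mappings below are hypothetical.

PLANNED_PATIENTS = 100

# Each procedure: cost per administration, number of visits at which it is
# performed, and the objectives it actually supports.
procedures = {
    "primary efficacy assay": {"cost": 100, "visits": 4,  "objectives": {"primary"}},
    "physical exam":          {"cost": 150, "visits": 10, "objectives": {"safety"}},
    "exploratory biomarker":  {"cost": 800, "visits": 10, "objectives": {"exploratory"}},
    "quality-of-life survey": {"cost": 50,  "visits": 6,  "objectives": {"secondary"}},
}

CORE = {"primary", "safety"}  # objectives the protocol cannot do without

def protocol_cost_by_criticality(procedures, n_patients):
    """Split projected procedure spend into core vs. non-core objectives."""
    core_cost = non_core_cost = 0
    for name, p in procedures.items():
        total = p["cost"] * p["visits"] * n_patients
        if p["objectives"] & CORE:
            core_cost += total
        else:
            non_core_cost += total
            print(f"Review: '{name}' supports no core objective "
                  f"(projected spend ${total:,})")
    return core_cost, non_core_cost

core, non_core = protocol_cost_by_criticality(procedures, PLANNED_PATIENTS)
print(f"Core-objective spend:   ${core:,}")
print(f"Non-core spend at risk: ${non_core:,}")
```

Even a back-of-the-envelope tally like this puts a dollar figure on the "nice-to-have" data before the protocol is finalized, which is exactly the conversation the review committee needs to force.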
At an FDA public hearing on April 23, 2012, Andreas Koester, who leads clinical trial innovation at the Janssen Pharmaceutical Companies of Johnson & Johnson, suggested that efficacy data standards should be a collaborative effort between industry and the FDA, and that previously submitted studies could inform near-term answers on endpoints and trial designs to guide standardization even further.2

In another corner of the standardization world, the BRIDG Model project maintains an online library of standardized protocols for wider use; it seeks to provide translation from the protocol to other documents (such as study reports) and defines a base set of common elements across protocols that clearly form a "data layer" and can populate a database.3 There are clearly multiple efforts calling for similar outcomes. As enlightenment on these matters is the goal of this epistle, the challenge for the average protocol designer is to recognize these initiatives and harness some of their power within his or her own organization.

Lastly, Medidata Solutions Worldwide is offering a new product, called Designer, that addresses the business-science divide. Designer provides a method to cross-check objectives against procedures and calculates procedure costs, yielding a rapid ROI estimate for the esoteric assays that tickle the fancy of the academically oriented protocol designer who wants to test a hypothesis.4 For companies already using Rave as their EDC solution, it is probably worth having a look at this product to evaluate the business case for using it.

2. Confusing "Built-In" Quality vs. Taking Calculated Risks

The FDA has been vocal, and should be congratulated on taking the lead, in recognizing the massive, expensive, and largely unnecessary efforts associated with current clinical trial monitoring practices, while simultaneously amplifying the vigor with which sponsors must oversee outsourcing to CROs. Yet there seems to be a reluctance to publicly be the first to take a "risk-based approach," because the conservative mindset pervasive in clinical research, coupled with a reticence to take chances, not only limits the options but also defeats the goal. The common practices of on-site visits and detailed review and verification of every data field consume enormous resources, yet they have been in place for over 25 years.

The risk-based monitoring guidance5 urges a pre-defined, thoughtful approach to determining a prospective, protocol-specific plan for oversight, incorporating quality-by-design principles. It suggests using costly on-site time judiciously for study staff management and hands-on work such as test article accountability, and harnessing the efficiency of technology to support centralized monitoring of the consistency, completeness, and trending of data. Sponsors are urged to identify sites that need more aggressive management, which is now possible with the ubiquitous use of EDC, but it seems there is reluctance to pre-define which "risks" are acceptable, due to a fear that FDA inspectors will catch small errors or omissions and send warning letters as a result. This fear instills enough uncertainty that movement toward change is not discernible.
We believe this concern is not a good excuse for delaying a shift, especially because the FDA has requested submission of the monitoring plan, signaling its willingness to evaluate and weigh in on the approach, and because about 95% of data collected in EDC are unchanged by the SDV processes employed in monitoring.6 Until the approach is tried on several studies and feedback on the process is broadly and transparently presented, the reticence will continue. We urge those of you who are trying the new approach to be brave and to share your experience. In the meantime, we will suggest one approach to consider.

Quality-by-design monitoring would define several key risk factors and make use of an objective, scale-based scoring system, monitored remotely on a real-time basis. Key factors assessed at site initiation and during early enrollment would inform the calibration of ongoing risk. The sponsor would define alert levels to trigger more aggressive approaches, which could include third-party QC to assess and mitigate risks. Quality-based factors would include prior experience (or the lack thereof) with the investigator, data cleanliness, adherence to study procedures, a proactive problem-solving approach, and others. These factors, coupled with centralized monitoring of patient enrollment, speed of data entry, consistency, quality, and trending, would be used to trigger extra visits to sites that need them and less frequent visits to low-risk sites.

Andreas Koester of Janssen proposed creating an integrated end-of-Phase-2 quality management plan that would describe all sponsor oversight activities specific to the compound under investigation and would then serve as the basis for FDA submission, input and review, and inspections. It's another simple and practical approach, and we think it's worth trying, especially if it prevents issues from being noted during subsequent regulatory inspections.
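To make the scoring idea concrete, here is a minimal sketch of how scale-based quality factors and centrally monitored metrics might be combined into a single site risk score that maps to a pre-defined monitoring response. The factor names, weights, and alert thresholds are hypothetical placeholders, not a standard; a real plan would pre-specify them in the protocol-specific monitoring plan.

```python
# Minimal sketch of a site risk score for quality-by-design monitoring.
# Factor names, weights, and thresholds are hypothetical, not a standard.

# Each factor is scored 0 (no concern) to 4 (serious concern); the weight
# reflects how strongly it should influence monitoring intensity.
FACTOR_WEIGHTS = {
    "investigator_experience": 2.0,  # prior experience (or lack thereof)
    "data_cleanliness": 3.0,         # query and error rates
    "protocol_adherence": 3.0,       # deviations from study procedures
    "problem_solving": 1.0,          # proactive vs. reactive site staff
    "enrollment_pace": 1.5,          # centrally monitored
    "data_entry_lag": 1.5,           # days from visit to EDC entry
}

# Pre-defined alert levels mapping the score to a monitoring response.
ALERT_LEVELS = [
    (0.30, "routine: remote review, infrequent on-site visits"),
    (0.60, "elevated: targeted SDV, schedule an extra on-site visit"),
    (1.01, "high: third-party QC visit and corrective action plan"),
]

def site_risk(scores):
    """Weighted average of factor scores, normalized to the range 0..1."""
    total_weight = sum(FACTOR_WEIGHTS.values())
    weighted = sum(FACTOR_WEIGHTS[f] * min(max(s, 0), 4) / 4
                   for f, s in scores.items())
    return weighted / total_weight

def monitoring_response(risk):
    """Return the pre-defined action for the first threshold the risk falls under."""
    for threshold, action in ALERT_LEVELS:
        if risk < threshold:
            return action
    return ALERT_LEVELS[-1][1]

# Example: a newer site that is slow to enter data but otherwise adequate.
site_101 = {
    "investigator_experience": 3, "data_cleanliness": 1,
    "protocol_adherence": 1, "problem_solving": 2,
    "enrollment_pace": 2, "data_entry_lag": 4,
}
risk = site_risk(site_101)
print(f"Site 101 risk {risk:.2f}: {monitoring_response(risk)}")
```

The particular numbers matter less than the fact that the factors, weights, and trigger levels are written down in advance, so both the sponsor and an inspector can see why a given site received more, or fewer, visits.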
3. Believing Tools and Technology Will Solve Problems with Processes or People

When we conduct workshops on a topic like vendor management, we share examples of tools such as vendor oversight plans. It's always startling when participants assume that the tool will confer the knowledge and wisdom needed to manage vendors, and that getting a copy of the tool will solve all their problems. Our third no-no is failing to recognize that using a tool, whether it's a template timeline or a $2 million CTMS, is not a replacement for simple, value-added processes and well-qualified people. This dangerous assumption dramatically expands the likelihood of waste and inefficiency in technology projects without fixing the underlying gap.

Acquisition and implementation of a clinical trial management system is a very expensive undertaking. We have seen countless examples of an organization purchasing the system and then failing to integrate it into work processes. The hard work of changing the process, and of including the individuals who will use the system at every step, is underestimated. It is pure change management and must be included in any technology adoption project. Business and quality processes can be the best foundation or the worst tangle of convoluted hoops that teams must jump through, expending enormous energy and time to stay compliant with either 2,000 high-level procedures that effectively say nothing or one 98-page tome.

We have worked with companies where geographically separated teams conduct clinical trials completely differently, and have done so for years, because they have never integrated their processes, effectively doing everything twice on a global study because their SOPs call for it. We have also worked with companies that have no business process for executing and storing vendor contracts and cannot find them in anticipation of a due diligence exercise. It is rare that internal employees will find the time, or be able, to look objectively at what works and what doesn't. Meanwhile, years can pass in which funds that could be allocated to high-value development activities are instead spent filling out checklists or collecting useless forms to stay within the procedure.

Across each of these big-ticket issues, the origins are a complex combination of fear, an inability to recognize when an approach no longer works or adds value, reluctance to let go of old ways, and unconscious incompetence on the part of individuals assigned to important management responsibilities. We always look to senior management to diagnose the issues that trickle down and cause these challenges, and typically there are areas for enlightenment.

Bridging the Gap

We see two key challenges with implementing these ambitious ideas: there are disconnects between the executive level making the proposals and the rank and file's ability to interpret and implement those ideas, and only in big pharma is there a mandate for change with resources focused on making that change happen. Small companies with a "heads-down, lean-and-mean" approach at every level don't have the knowledge, bandwidth, or funds to appoint a clinical innovation officer, and CROs have no incentive to slim down their part of the bloat. The overarching concern is that in an industry where rewards have historically been geared toward embracing the collective conservatism of the herd rather than breaking from the pack and taking calculated risks, no one is willing to be "first." It is time for individuals to adjust their behavior, be brave, and welcome the opportunity to take the lead in change.

References:
1. Presentation by Ken Getz, Fellow at the Tufts Center for the Study of Drug Development, during the Americas Medidata User Group annual meeting, 2012. www.mdsol.com/conferences/mug/amug.html
2. https://collaboration.fda.gov/p96362676/
3. http://www.bridgmodel.org/
4. http://www.mdsol.com/products/designer.htm
5. Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring, FDA, August 2011. http://www.fda.gov/downloads/Drugs/…/Guidances/UCM269919.pdf
6. Personal communication, Steve Young, Medidata Solutions Worldwide.

Laurie Halloran, BSN, MS, President and CEO, Halloran Consulting Group

Halloran Consulting Group is a specialty management consulting firm for the life sciences industry. For more information, visit hallorancg.com.