Conducting clinical trials is a highly specialized business that requires equally specialized supporting software. The combination of complex functional and technical requirements with stringent regulatory demands means that standard cross-industry applications are rarely adequate.
Data collection
A need for specialization has created a market of mostly small vendors offering relatively niche solutions, meaning that companies implementing clinical-trial systems must expend a lot of effort to integrate the different products.

The systems to consider

Major application systems typically used to support the planning, conduct, and analysis of clinical trials include: clinical trial management systems (CTMS), electronic data capture (EDC), clinical data management systems (CDMS), laboratory information management systems (LIMS), interactive voice response systems (IVRS), safety reporting, and electronic patient diaries. A number of these systems are often put into place and used simultaneously, and, as a result, the number of required interfaces can become quite large very quickly. The problem is further exacerbated by the fact that at least half of these application systems may need to take data feeds from external sources, such as a contract research organization (CRO). In addition, each of these systems implicitly needs to send information to the data warehouse for analysis.

It's clear that designing, building, validating, and maintaining such a large number of interfaces is a daunting task. In the real world, the task typically never finishes, leaving both a backlog of unfinished work and a legacy of aging systems that nobody dares to touch. With each system having an average of four interfaces to other systems, replacing any one of them requires a great deal of surgical precision. Before we start building next decade's obsolete code, let's see if there is a simpler architecture that reduces the number of interfaces needed in the first place. This can be done by changing from a point-to-point strategy to a hub-and-spoke strategy, as shown in the diagrams on the next page.
The diagrams are a little oversimplified in that a handful of point-to-point integrations are still likely to be desirable, but in general, this approach represents a pragmatic and workable solution.
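The arithmetic behind the two topologies is easy to sketch. The following is a minimal illustration with hypothetical function names and system counts (the worst case, where every pair of systems exchanges data, is assumed; the real count depends on which pairs actually need interfaces):

```python
# Illustrative interface arithmetic for the two integration topologies.
# Assumes the worst case, in which every pair of systems exchanges data.

def point_to_point_interfaces(n_systems: int) -> int:
    """Every pair of systems gets its own interface: n * (n - 1) / 2."""
    return n_systems * (n_systems - 1) // 2

def hub_and_spoke_interfaces(n_systems: int) -> int:
    """Each non-hub system needs only one interface, to the hub."""
    return n_systems - 1

for n in (5, 7, 10):
    print(f"{n} systems: point-to-point {point_to_point_interfaces(n)}, "
          f"hub-and-spoke {hub_and_spoke_interfaces(n)}")
```

The quadratic growth of the point-to-point count is why the problem worsens so quickly as systems are added: going from 7 systems to 10 adds 24 potential point-to-point interfaces but only 3 spokes.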
Choosing a data-collection hub
Since the data collection hub implicitly has a large number of interfaces and supports a large number of functions, choosing the right technology is clearly a significant and long-term decision. There are pros and cons to the two possible choices.

First, let's work on the assumption that the data collection hub is a commercial, off-the-shelf application package. Historically, the hub has been the CDMS, which was installed to handle paper-based studies and then extended to handle a variety of other electronic feeds, such as laboratory data. But with EDC now being adopted on an enterprise scale, many companies process all of their clinical data electronically or plan to do so within the next few years. If the primary purpose of a CDMS is, therefore, on the point of becoming obsolete, should its role as the integration hub also be challenged? It makes more sense for the EDC system to become the integration hub, since that is where the bulk of the source data will be collected.

The decision pits the old-fashioned but proven against the new but risky. All commercial CDMSs are at least 10 years old, so they are well-established but hardly state of the art. They are written in older technologies and have architectures that expect simple data entry mechanisms and limited workflow. For instance, if a protocol requires pregnancy information only for female patients, a paper-based system will present the pregnancy fields to every patient and thus allow entry of a pregnant male. By contrast, an EDC system is more likely to include workflow features that collect the pregnancy information only if the patient is female. As site users demand a user interface that is as friendly as the other websites they visit, such configurability has proven to be remarkably important.
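The workflow difference can be made concrete with a small sketch. The field and function names below are hypothetical, not any vendor's API; they simply contrast a static paper-style form with a conditional EDC-style one:

```python
# Hypothetical sketch contrasting a static paper CRF with a conditional
# EDC form; field names are illustrative only.

def paper_crf_fields() -> list:
    # A printed form carries every field for every patient, so nothing
    # prevents a site from recording a pregnancy for a male patient.
    return ["sex", "pregnancy_status"]

def edc_form_fields(patient: dict) -> list:
    # EDC workflow: the pregnancy field is rendered only when the
    # patient is female, so the invalid combination cannot be entered.
    fields = ["sex"]
    if patient.get("sex") == "F":
        fields.append("pregnancy_status")
    return fields

print(edc_form_fields({"sex": "M"}))  # ['sex']
print(edc_form_fields({"sex": "F"}))  # ['sex', 'pregnancy_status']
```

The point of the sketch is that the conditional logic lives in the form definition itself, which is exactly the kind of configurability a CDMS built around paper-era data entry was never designed to express.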
If the EDC system must be constantly mapped back to a data management system that lacks these features, then either the user interface must be compromised or the mapping process becomes increasingly inefficient.

The risk of switching to an EDC system as the integration hub is that EDC products are less mature and may not be quite ready for such an all-encompassing role. An EDC product may have architectural limitations that become apparent only when new interfaces are designed or built, the underlying data model may prove obscure or poorly documented, and it may be hard to inherit the application's security mechanisms. Probably the biggest concern is performance. If a custom interface runs against the same database that supports interactive data entry, there is a risk that poorly written code could consume large amounts of the available CPU or place locks on critical database tables, thereby impairing the performance of the EDC application itself.

Despite these risks, the market is clearly gravitating toward EDC applications as the integration hub for clinical data. The expectation that an EDC application will handle all of a company's clinical trials requires that many of these architectural and performance issues be solved anyway. The growing EDC boom means that EDC vendors have the revenue to fund the necessary product development, leading to a rapid rate of innovation. This innovation fuels product differentiation, which in turn fuels additional revenue. Given the stronger funding base, the leading EDC vendors will build the necessary integration capabilities fairly rapidly. By contrast, the rate of innovation for CDMS is slow or nonexistent; the products are being treated as cash cows by their vendors and are going nowhere.

Source: Keith Howells is VP, Engineering, of Medidata Solutions Worldwide Inc., New York.
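The locking concern described above is usually addressed by keeping each read transaction short. The following is a minimal sketch, assuming a hypothetical lab_results table in SQLite (a real extraction would run against the EDC vendor's actual schema): the export reads in keyset-paginated batches, so no single query holds locks long enough to stall interactive data entry.

```python
# Sketch of a well-behaved custom interface: export rows in small
# keyset-paginated batches so no single query or transaction runs long
# enough to starve interactive data entry. The lab_results table is
# hypothetical; a real EDC schema would differ.
import sqlite3

def export_in_batches(conn, batch_size=500):
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, value FROM lab_results WHERE id > ? "
            "ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            return
        yield rows
        last_id = rows[-1][0]  # resume just past the last key seen

# Demo with an in-memory database standing in for the EDC database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lab_results (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO lab_results (value) VALUES (?)",
    [(float(i),) for i in range(1200)],
)
batches = list(export_in_batches(conn, batch_size=500))
print([len(b) for b in batches])  # [500, 500, 200]
```

Whether such discipline can be enforced on third-party interface code is exactly the architectural question an EDC vendor must answer before taking on the hub role.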
Medidata Solutions helps the world’s leading pharmaceutical, biotechnology, medical-device, and research organizations maximize the value of their clinical research investments. For more information, visit mdsol.com.
PharmaVOICE welcomes comments about this article. E-mail us at [email protected].

"As companies aggressively adopt e-clinical solutions, two key areas have to be considered: how to integrate all of the different systems and whether the EDC system can simultaneously act as an integration hub."
Keith Howells, Medidata Solutions

[Figure: Data Collection Architecture Models. Left: current point-to-point integration; right: proposed hub-and-spoke integration. The automation of the clinical research process requires a variety of systems, including protocol authoring tools, electronic data capture (EDC), clinical trial management systems (CTMS), laboratory information management systems (LIMS), randomization, clinical supplies tracking, and data warehousing and analysis, each typically supplied by a specialist vendor with a niche product. The left-hand diagram shows a typical set of application systems and the interfaces they require; changing from a point-to-point strategy to a hub-and-spoke strategy reduces the number of interfaces needed.

Interfaces as shown: 23
Interfaces to external sources: 6
Interfaces to data warehouse: 11
Total: 40]