Risk-Based Monitoring Guidance



In August 2013, the FDA issued risk-based monitoring guidance for sponsors of investigational new drug/device trials, detailing a risk-based approach to monitoring safety and efficacy in clinical studies. As with all guidances, it isn’t a flat-out endorsement, a recommended-procedure document, or an enforceable list of requirements; “sponsors can use a variety of approaches to fulfill their responsibilities for monitoring” investigator conduct and performance.

The guidance is 19 pages long, so I don’t intend to repeat it in this post. Please download and read it if you have not already done so (link below). In this post, I’ll just hit the highlights and reiterate some of the background. More importantly, I hope my summary spurs some dialogue in the ClinOps Toolkit community: Why do we send monitors out to sites at all? Are we getting what we paid for? Are we incentivizing the right skill-set and activities on-site?

Download Here ==> Oversight of Clinical Investigations

Why risk-based monitoring guidance now?

With the increasing utilization of Electronic Data Capture (EDC), rapid communications (e.g., webinars, e-mail), electronic source data/EMR, and improved data analytics, sponsors have more visibility into study conduct and data in real time. Now more than ever, we can review eligibility earlier and remotely monitor for more safety signals. We can audit screening data, be on the lookout for enrollment criteria that are too restrictive in practice, and make adjustments to the protocol if required. We can easily aggregate safety data and provide more oversight during conduct.

Trials have also become more complex. More sponsors are using adaptive trial designs. Global trials allow for increased reach for enrollment/exposure but also greater “dispersion” of the data. A single on-site monitor will be challenged to detect a safety signal or protocol execution error that may be systemic and affecting other trial sites; this is where remote monitoring comes in.

We can use statistics to look for different data patterns and atypical or unexpected data, and improve data quality and integrity with targeted on-site or remote monitoring. We can actually write statistical programs to detect deviations, excursions, missing/late data, implausible data, suspicious data, and other ratios of interest, and deploy resources to investigate and remediate, as appropriate.
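To make that concrete, here’s a minimal sketch of the kind of centralized check I have in mind. Everything in it is hypothetical (the site numbers, column names, and the two-standard-deviation cutoff are mine, not the guidance’s):

```python
import pandas as pd

# Hypothetical site-level metrics pulled from an EDC export.
sites = pd.DataFrame({
    "site": ["101", "102", "103", "104", "105"],
    "visits": [120, 95, 140, 60, 110],
    "missing_fields": [14, 8, 55, 5, 12],   # CRF fields still blank
    "aes_reported": [9, 7, 2, 6, 8],        # adverse events entered
})

sites["missing_rate"] = sites["missing_fields"] / sites["visits"]
sites["ae_rate"] = sites["aes_reported"] / sites["visits"]

# Flag sites more than 2 standard deviations from the study mean on
# either metric -- a crude screen, but enough to direct a targeted
# remote review or on-site visit.
for metric in ("missing_rate", "ae_rate"):
    mean, sd = sites[metric].mean(), sites[metric].std()
    sites[metric + "_flag"] = (sites[metric] - mean).abs() > 2 * sd

print(sites[["site", "missing_rate", "ae_rate",
             "missing_rate_flag", "ae_rate_flag"]])
```

In practice you’d tune the metrics and thresholds to the trial, but the point stands: let the flags, not the calendar, decide where the next targeted review happens.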

All Studies? It’s not really set in stone…

Actually, the guidance supports tailoring the risk-based approach to the needs of a specific trial. Risk indicators can include the phase, the enrollment rate, the complexity of the trial, the indication, the safety expectations, etc. (check out page 13 and “Factors to Consider when Developing a Monitoring Plan”). The “risk-based approach is dynamic, more readily facilitating continual improvement in trial conduct and oversight.” Basically, you look at your past experiences in clinical trials and develop a plan for overseeing the trial. Then you start the trial, check in, and adapt that plan as needed. The guidance supports tailoring the monitoring plan along the way: you set out to do it one way in the beginning, and then the risk-based approach encourages you to shift resources and oversight as dictated by the needs and execution of the trial.
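To illustrate the “develop a plan, then adapt it” idea (and only to illustrate it; the indicators come from the guidance’s list of factors, but the weights and visit cadences below are invented), a starting monitoring plan might be driven by a simple risk score that you re-run as the trial accrues experience:

```python
# Hypothetical risk-scoring sketch; the indicators echo the guidance's
# "Factors to Consider", but the weights and cutoffs are invented.
RISK_WEIGHTS = {
    "first_in_human": 3,       # early-phase safety uncertainty
    "adaptive_design": 2,      # protocol complexity
    "global_sites": 2,         # data "dispersion" across regions
    "novel_endpoint": 2,
    "high_enrollment_rate": 1,
}

def monitoring_tier(trial_flags):
    """Map a trial's risk indicators to an oversight cadence."""
    score = sum(RISK_WEIGHTS.get(flag, 0) for flag in trial_flags)
    if score >= 6:
        return "frequent on-site visits + weekly central review"
    if score >= 3:
        return "quarterly on-site visits + monthly central review"
    return "central monitoring with triggered on-site visits"

# Re-evaluate as the trial runs -- the plan is meant to adapt.
print(monitoring_tier({"adaptive_design", "global_sites", "novel_endpoint"}))
```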

How do we monitor now?

Page 3 of the guidance discusses current approaches to monitoring: who is visiting the sites, how often, and what their objectives are. In a nutshell, sponsors are using a variety of approaches, and notably “academic centers, cooperative groups, and government organizations use on-site monitoring less extensively.”

There are monitoring tasks that are best achieved with an on-site visit. The guidance recognizes, for example, that many compliance checks, quality assessments of conduct and documentation, assessing “the familiarity of the site’s study staff with the protocol and required procedures”, and discrepancy identification are “particularly helpful early in a study”.

So a monitor just does source data verification (SDV) and that’s it?

As a monitor, I never actually considered my primary responsibility to be identifying data-entry errors (and the guidance cites a study finding that 90% of the errors on-site monitors catch could be found via centralized monitoring). SDV is time-consuming, and I spent a lot of my visit doing it. To be honest, I spent a lot more of my time negotiating with (begging, really) the data-entry personnel to get things current in the EDC. What I propose I did best, and what central monitoring still can’t achieve, was discovering data that was present in the source but missing from the Case Report Form (CRF). As a monitor, I always felt that my role as a cheerleader/sticky-note queen (or pest?) encouraging coordinators to catch up on back-entry brought more value to the sponsor than the actual data comparisons I did while on-site.

As a program manager, I see this phenomenon repeated most times I deploy a monitor to a site. I’ll see data-entry rates that are inconsistent within a region and send a monitor to investigate, and lo and behold, there will be a flurry of entry in the days prior to the visit, during the visit itself, and for about a week afterwards. I’ve been considering pulling some audit-trail data from the last large trial I worked on to graph data modifications by timestamp and show this pattern graphically (a future post!). I worked earlier in my career as a data manager, so I’m a total data junkie. Crunching data and running algorithms and metrics is a blast, but you can’t analyze what you never captured: garbage in, garbage out, and you have to be wary of false discovery rates and other multiple-comparisons problems. I still rely very heavily on monitors to be on-site, keep the study present in site staff’s minds, and check that the data entry is complete and reflective of what actually happened in conduct.
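For the curious, that audit-trail graph is only a few lines of work. Here’s a sketch assuming a hypothetical export (audit_trail.csv, with one modified_at timestamp per CRF change) and an example visit date:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical audit-trail export: one row per CRF data modification.
audit = pd.read_csv("audit_trail.csv", parse_dates=["modified_at"])
visit_date = pd.Timestamp("2013-09-16")  # example monitoring visit

# Count modifications per day, centered on the visit date.
audit["days_from_visit"] = (audit["modified_at"].dt.normalize()
                            - visit_date).dt.days
daily = (audit.loc[audit["days_from_visit"].between(-28, 28)]
              .groupby("days_from_visit").size())

# A spike just before/after day 0 is the "flurry of entry" pattern.
daily.plot(kind="bar")
plt.xlabel("Days from monitoring visit")
plt.ylabel("Data modifications")
plt.tight_layout()
plt.show()
```

The same data frame makes it easy to run formal site-level comparisons too, but with many sites and metrics you’d want a multiple-comparisons correction (e.g., Benjamini–Hochberg) before acting on the “significant” flags.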

I’m calling for more monitoring (just not necessarily more resources on-site). Can I get a “Hear, hear!”?

As a monitor, I felt obligated to be present on-site and available remotely for oversight: providing encouragement, recommending documentation best practices, identifying non-adherence to regulations, and confirming understanding and proper execution of the protocol. I am hopeful that implementation of this guidance validates and incentivizes development of that site-manager partnering skill-set rather than the SDV-only audit mindset. Monitors are more valuable than just showing up, comparing the source against the CRF, and simply asking, “Does this match that?”

As the sponsor’s eyes on the ground, I still believe there is no substitute for early and frequent monitoring; reducing the number and frequency of on-site visits is not an objective for me. If anything, I’ll be adding remote resources to help with central statistical monitoring and site management. I’ll be ensuring that the resources I use for on-site monitoring focus on effective monitoring rather than SDV, through co-monitoring, sponsor/ambassador visits, and ongoing review of activities. In a nutshell, I interpret this guidance as agreement: 100% SDV is not the best use of my monitors’ time on-site.

What’s next on the blog?

I was a little amused when reading the guidance’s position on “communication of monitoring results” (see page 15), which discusses how we document and report monitoring activities. I don’t hear a lot of chatter yet from the RBM advocates about how we will review centralized monitoring or even reconcile what was done. In my trials, my team strives for more transparency across roles. For example, medical monitors, data managers, statisticians, and other stakeholders are made aware of “significant monitoring issues.” Monitoring report review is completed by Clinical Operations, but we share operations insights across a number of functional groups. I think there are easily a few more topics to explore in the ClinOps Toolkit community regarding how we document monitoring outcomes and turn them into actions for the study team or clinical investigators.

I was also pleased to see on the last page (almost as a footnote, section D of page 19) a very brief discussion of “Clinical Investigator and Site Selection and Initiation.”  There is no greater predictor of operational success in a trial than thoughtful qualification and initiation of trial sites.  I have a lot more thoughts on this topic and look forward to connecting with more readers this Fall to chat about statistical approaches and improved feasibility.


About The Author

Nadia

Nadia Bracken, lead contributor to the Lead CRA blog and the ClinOps Toolkit blog, is a Clinical Program Manager in the San Francisco Bay Area.