Recruiting for Your Pragmatic Clinical Study
1. Welcome & Setting the Stage
The "gold standard" prospective randomized controlled trial tells us how well a therapy works in an ideal and controlled setting, but what about in the nuanced and complicated real world?
Published on January 11, 2019
Tune in every week for the newest episode (blog post) of Recruiting for Your Pragmatic Clinical Study
“In theory, there’s no difference between theory and practice. In practice, there is.” - Often attributed to Yogi Berra

“A clinical trial showed that drug X improves an outcome by 25% compared to placebo, but how does it compare with drug Y which improved the outcome in a different trial?”

“How well do drugs X and Y work when the adherence is less than that found in the clinical trials?”

“What patient, healthcare setting, geography-based and other treatment factors are associated with better outcomes?”

“When multiple treatment options are available for a clinical condition, which is best for improving outcomes that patients care about?”

Welcome to TeloChain’s Real-World Healthcare Insights, where we’ll explore how to discover the value in healthcare therapeutics--pharmaceuticals and medical devices--through real-world clinical studies. The series will look at what makes up value (roughly, health outcomes per dollar); how value is created, and from what perspective; the practical world of value metrics; how to gain clinical and market insights into how value is created in the messy real world (in other words--clinical study design and interpretation); and how to develop value-based arrangements or contracts so that you charge--or pay--for the value you actually get.

Who is this for? Understanding and maximizing healthcare value is important to everyone - patients, clinicians, delivery systems, researchers, payers and policy-makers! However, TeloChain’s Real-World Healthcare Insights will focus initially on perspectives of drug/device makers and healthplan payers.

What can I expect? TeloChain’s Real-World Healthcare Insights will be organized like TV shows as Series containing multiple episodes. A series will have 6-8 episodes that explore the series topic (Series 1 is about the value of, and successful recruiting for, pragmatic clinical studies). We’ll publish a new episode weekly. You’re invited to comment in the section below each episode. You can also reach out to us to know more or talk with us directly--HERE.

OK, let’s get started! The quote (maybe Yogi’s...maybe not) at the top sets the stage: The prospective randomized clinical trial (PRCT) is considered the gold standard for showing--with a high degree of confidence--whether a therapy “works” (achieves a specified improvement in one or more outcomes). It’s also the only way to establish that the treatment caused the results because it incorporates several elements that together mitigate bias (NOTE 1). PRCTs are an essential part of drug/device regulatory approval. Value (outcomes per dollar) can even be inferred if the data are sufficiently granular about resource utilization and costs.


So what’s not to like about the gold standard? Beyond being costly and slow to produce results, PRCTs offer a limited prediction of how well therapies work in the messy real world. Think of the PRCT as answering the question: “How well does this therapy work in patients who meet the study’s inclusion and exclusion criteria, have the subjects’ demographics, are treated and monitored in the kinds of delivery systems required for clinical trials--and are the kinds of people who volunteer to be randomized?” (NOTE 2)

We want to know how well the treatment works on patients who don’t quite meet the PRCT inclusion and exclusion criteria, or have a different demographic, or are less adherent, or are treated in settings with variably-informed clinicians rushed for time and squishier quality control! Importantly, most PRCTs that are part of product approval compare treatment to placebo--but in the real world, there’s often more than one reasonable treatment option.

Lies, damned lies--and design? How you ask the question determines (or at least impacts) what kind of answer you get. This is so important that we’ll dedicate the next episode to the topic of study designs vs. insights. For now, let’s jump to the design we’ll feature in Real-World Healthcare Insights: the pragmatic study (often called ‘real-world’), especially one of its flavors--the widely-distributed pragmatic study (WDPS).

The WDPS is a window into how well therapies work in the messy conditions of a world in which there’s a wide variety of patients (and their clinical scenarios, prior treatments and preferences, and adherence patterns); clinicians (primary care vs. specialists, practice styles, up-to-dateness); healthcare delivery systems (urban/suburban/rural; small vs. large; primary care vs. multispecialty; community vs. academic; data completeness and integration; and quality systems); and healthcare coverage (e.g., high- vs. low-deductibles; tight vs. inclusive networks).

Some factors may affect the link between treatment and outcomes in the real world. Importantly, these factors may be differently distributed between treated and comparison (untreated or usual-care) patients, as well as among treated patients.

If properly planned and delivered (optimally with a technology platform that facilitates this kind of study), a WDPS can be that window: perhaps not as bulletproof as the gold-standard PRCT, but complementing it with the usefulness of its reach.

Where do we go from here? Like PRCTs, pragmatic studies (including the WDPS) are coordinated by contract research organizations (CROs), which may subcontract certain activities for which a particular type of expertise is required. This series will focus on one such activity-set: recruiting for a WDPS, using (though not limited to) healthplan demographic and claims data, which permits efficient screening of millions of individuals or thousands of clinicians who may be candidates for the study. (NOTE 3)


What’s included in recruiting for a WDPS? We’ll focus on the steps and activities inside the figure’s pink marquee: converting the plain-English inclusion and exclusion criteria (listed in the study protocol) into an initial screen based on healthplan (and possibly additional) data; reaching out to and educating physicians who may have patients who match the criteria; enrolling those physicians in the study; having them match patients to the criteria; and all the activities required for patients to enroll and consent to participate. Failure to recruit a sufficient number of subjects quickly enough is a common roadblock to a study’s success.
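To make the initial data screen concrete, here is a toy sketch in Python. Everything in it--the member records, diagnosis codes, field names, and criteria--is hypothetical and vastly simplified; a real screen runs over millions of de-identified healthplan records under HIPAA-compliant governance. The idea is the same, though: translate the plain-English inclusion/exclusion criteria into data filters, then rank physicians by their concentration of candidate patients.

```python
# Toy claims-based screen (hypothetical fields, codes, and criteria).
from collections import Counter

# Hypothetical de-identified member records: age, diagnosis codes, and
# the physician most often seen.
members = [
    {"id": "M1", "age": 58, "dx": {"E11.9"}, "physician": "DrA"},
    {"id": "M2", "age": 44, "dx": {"E11.9", "N18.4"}, "physician": "DrA"},
    {"id": "M3", "age": 61, "dx": {"E11.9"}, "physician": "DrB"},
    {"id": "M4", "age": 35, "dx": {"I10"}, "physician": "DrB"},
]

INCLUDE_DX = {"E11.9"}      # inclusion: type 2 diabetes (illustrative)
EXCLUDE_DX = {"N18.4"}      # exclusion: advanced CKD (illustrative)
MIN_AGE, MAX_AGE = 40, 75   # inclusion: age window (illustrative)

def passes_screen(m):
    """Apply the plain-English inclusion/exclusion criteria as data filters."""
    return (MIN_AGE <= m["age"] <= MAX_AGE
            and bool(m["dx"] & INCLUDE_DX)
            and not m["dx"] & EXCLUDE_DX)

candidates = [m for m in members if passes_screen(m)]

# Rank physicians by their concentration of candidate patients -- these
# are the clinicians worth approaching first in the recruiting campaign.
by_physician = Counter(m["physician"] for m in candidates)
print(by_physician.most_common())  # [('DrA', 1), ('DrB', 1)]
```

In this sketch M2 is screened out by the exclusion code and M4 by the inclusion criteria, leaving one candidate patient each for the two physicians; at real scale, the same ranking step surfaces the practices with the densest pools of qualifying patients.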

The example illustrates that physicians who are likely to have a concentration of qualifying patients are ‘recruited,’ and that the physicians, in turn, identify qualifying patients and discuss participation in the study with them (some pragmatic studies may not require patient consent, as determined by your Institutional Review Board).

The widely-distributed pragmatic study recruitment architecture

We’ll walk you through recruitment (importantly, focusing on how to use data effectively and the capabilities a WDPS platform must embody to drive a successful recruiting campaign). We’ll cover not only what works, but also shed some light on what doesn’t--the gotchas (hope you don’t mind saving time, angst, and money in your recruiting effort).

Why do we need to do this? Don’t we learn enough from those gold-standard prospective randomized clinical trials? How can I get the market and clinical insights we need to drive strategy, prove value, and improve the outcomes of our products? Tune in for episode 2: What insights can different study designs give us?

Do you want to know more? Reach out to us! We're here to help you maximize your return on intelligence.


  1. Together, these elements mitigate selection bias (differing tendencies to respond among those treated vs. not) and observer bias (differing patient responses based on whether the observer knows which treatment a patient is allocated to). Proper application of the PRCT design acceptably reduces differences between treatment (or treatment and placebo) groups to the point where the probabilities of erroneously concluding a treatment does (type I error) or doesn’t (type II error) work can be calculated in advance.
  2. Volunteering to be randomized may be a different kind of bias than volunteering to be treated (or being offered the treatment in the first place), but at least the latter reflects what happens in the real world.
  3. We’ll dive into the topic of healthplan data in future episodes - it takes a lot of expertise to use it to full advantage, and there are plenty of ways to stub your toe on it (as we’ll discuss). Recruiting from healthplan data may be set within a relationship between manufacturer and one or more healthplans, and there are emerging ways, outside a direct healthplan partnership, that such data may be used to support recruiting. In all cases, HIPAA compliance is, of course, mandatory.
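As an aside on NOTE 1’s point that type I and type II error probabilities “can be calculated in advance”: here is a hedged sketch of the standard two-proportion sample-size approximation, with made-up numbers (placebo response 25%, treatment response 35%, two-sided alpha of 0.05, 80% power). The function name and scenario are illustrative, not from any particular trial.

```python
# Toy sample-size calculation for comparing two proportions
# (illustrative numbers only; real trial planning involves much more).
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per arm to detect p1 vs. p2 at the given
    two-sided type I error (alpha) and power (1 - type II error)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for the type I error
    z_b = z.inv_cdf(power)           # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

print(round(n_per_arm(0.25, 0.35)))  # prints 326
```

The point of the sketch: demanding higher power (a lower type II error) or a smaller alpha drives the required enrollment up, which is exactly why the recruiting roadblock discussed above matters so much to a study’s feasibility.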