Recruiting for Your Pragmatic Clinical Study
2. Can Your Study or Analysis Answer Your Burning Questions?
In clinical studies, how you ask the question and go about finding the answers determines what kinds of questions you can validly answer. What are the strengths and limitations of different ways of "asking the question" (study design)? An emerging design has the potential to transform how we understand--and impact--healthcare outcomes in the real world.
Series 1, Episode 2 | Published on January 11, 2019
"What is the question?" - Plato (1)

“How well does treatment X work compared to treatment Y in patients with type 2 diabetes with poor glycemic control despite 6 months of lifestyle management and metformin? Are patients who visited an endocrinologist in the past 18 months likelier to do better on X or Y?”

“Which asthmatics would be better off managed by a specialist?”

“How and why is disease Y treated differently in different settings, and to what extent does that make a clinical or economic difference?”

“What are the characteristics of doctors whose patients are referred late vs. early for sequencing of their cancer genome?”

Welcome back to TeloChain’s Real-World Healthcare Insights, where we explore how to discover the value in healthcare therapeutics - pharmaceuticals and medical devices - through real-world clinical studies. This series focuses on efficient and effective recruiting--critical to your study’s success. Orientation to the series: HERE.

Cut to the punchline: Start by asking meaningful and clearly defined questions based on your overall objectives. Then plan your study, based on the advantages and disadvantages of the various designs, in light of what you want to find out. We’ll focus on pragmatic (real-world) studies. You’ll likely work with a CRO to define and refine your questions and outcome metrics, study structure, and--importantly--recruiting of subjects (patients), a common roadblock to success.

Clinical studies aim to answer questions about how a drug or device is used, and how well it works, in a defined clinical scenario, such as those above. The insights gained from studies support many kinds of decision-making--in treatment, marketing, and policy.

The process of answering clinical (or market insight) questions involves study design, which includes decisions like (2):

  • What you want to know (your questions)
  • Whether your study will look backward (retrospective) or forward (prospective)
  • Whether your study will be observational or experimental
  • What you’ll compare to what (e.g. treatment to placebo or some other treatment; treatment across different healthcare settings; or treatments and outcomes for people with different clinical scenarios, such as moderate vs. severe heart failure)
  • What outcomes you want to measure and how
  • What data you’ll use to identify people, clinicians, clinical scenarios, tests, medications, health services, outcomes...
  • If your study is an experiment, how you’ll recruit subjects (patients) and clinicians
  • How you’ll capture the treatments and results
  • Your results-analysis plan

All these (and more!) are formalized in the study protocol--a recipe for how you’ll go about getting answers to your burning questions. Getting these decisions right will make your life a whole lot easier as you design and do your study. It’s enormously helpful to engage outside expertise during this set-up process, both to make sure the design has a good shot at shedding light on your questions, and--critically!--to ensure they are the right questions based on the intent of your study.

Today’s point: the kinds of questions you can validly answer about a therapy (e.g. how it’s used, how effective it is) depend on how you look for the answers--in other words, upon study design and methodology.

The prospective randomized controlled trial--"The" or "a" gold standard?

The fundamental question all studies aim to answer is “how well does this treatment do compared to X?” where X can be placebo, usual or standard of care, some other treatment, or some other clinical setting.

In the mid-20th century, the prospective randomized controlled trial (PRCT) was developed to answer “Does this work? If so, how well?” by incorporating elements that together mitigate selection bias (systematic baseline differences between those who receive a treatment and those who don’t, affecting their likelihood of responding) and observer bias (responses or assessments that differ depending on whether the patient or observer knows which treatment a patient is allocated to). (3) Proper application of the PRCT design reduces differences between treatment (or treatment and placebo) groups to the point where the probabilities of erroneously concluding a treatment works when it doesn’t (type I error) or doesn’t work when it does (type II error) can be calculated in advance.

[Figure: How the PRCT aims to neutralize sources of bias that might influence the relationship between treatment and outcomes; biases are named across the lower panel.]
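Those pre-calculable error probabilities are what drive trial sample sizes. As a minimal sketch (the effect size, alpha, and power below are illustrative assumptions, not values from any particular trial), here’s the standard normal-approximation formula for a two-arm comparison in Python:

```python
import math
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-arm trial comparing means.

    Standard normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value tied to the type I error rate
    z_beta = norm.ppf(power)           # tied to power = 1 - type II error rate
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative: detecting a modest standardized effect of 0.3 at 80% power
print(n_per_arm(0.3))  # about 175 patients per arm
```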

The PRCT transformed how we approach questions about outcomes; it’s a foundation of today’s evidence-based medicine. PRCTs are an essential part of new drug approvals, and the methodology is well-established, yet evolving. (4) But they are sometimes limited in providing insights into how well therapies work in the messy real world. Think of the PRCT as answering the question: “How well does this therapy work in patients who meet the study’s inclusion and exclusion criteria, possess the pre-specified or happenstance demographics, are treated and monitored in the kinds of delivery systems required for clinical trials--and volunteer to be randomized?”

[Figure: The real world is messier than clinical trials. 1: People who are both using and would benefit from the therapy. 2: People who meet the PRCT criteria and are both using and would benefit from the therapy. 3: People who meet the PRCT criteria and would benefit from the therapy, but aren’t using it. 4: People who meet the PRCT criteria and are using the therapy, but shouldn’t be (won’t benefit, or are at risk of harm).]

As Yogi Berra supposedly (5) said, “In theory, there’s no difference between theory and practice. In practice, there is.” So what about “in practice,” where all health care happens, and where we want to know how well the treatment works on patients who don’t quite meet the PRCT inclusion and exclusion criteria... or have a different demographic... or are less adherent... or are treated in settings with variably informed clinicians, rushed for time, and squishier quality control?

Importantly, most PRCTs that are part of product approval compare treatment to placebo--but in the real world, there’s often more than one reasonable treatment option.

Studying the messy real world: Our focus is on pragmatic (“real-world”) studies, but first, let’s set the table by looking at what kinds of questions study designs can (validly) answer, and how their results can be used.

What insights can we gain from the different study designs?
  • PRCTs are used to demonstrate treatment efficacy – how well it works in a well-specified population, under controlled circumstances, usually against a placebo or ‘usual care’.
  • Prospective randomized pragmatic clinical trials (PRPCTs) are used to demonstrate treatment effectiveness – how well it works in the patients it will generally be offered to, under usual clinical circumstances, against alternative treatments or usual care.
  • Prospective pragmatic observational studies (PPOS) don’t randomize, but can yield valuable insights about effectiveness if carefully designed. Potential for bias, confounding of results, and random noise (6) must be scrupulously attended to. A PPOS must have a control (no-treatment or alternative-treatment) situation, such as practices that didn’t (or were assigned to not) take up the new treatment, or the outcomes of patients before the treatment was available. Strategies in the planning, execution, and analysis phases can help with this (see the sketch after this list for one common analysis-phase strategy), but are not as bulletproof as randomization. Still, a well-done PPOS can yield actionable insights about how well a treatment works in the real world to support clinical, and possibly financial, decision-making. PPOS may also be useful in informing value-based arrangements, or in helping healthcare systems decide when or how to implement new treatments or clinical programs (such as quality improvement or condition management).
  • Widely-distributed pragmatic studies (WDPS)--often a type of PPOS (though they can randomize)--are emerging as a way to gain insights into the use, outcomes, and value of one or more treatments across a wide gamut of clinical scenarios, patient characteristics, and healthcare settings. If properly planned, executed, and analyzed, WDPS have great potential for deepening our ability to deliver personalized healthcare. But--as we’ll explore--WDPS pose new challenges in recruiting and data management.
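To make the PPOS confounding problem concrete: one common analysis-phase strategy is propensity-score weighting, which reweights treated and untreated patients so the groups look comparable on measured characteristics. Here’s a minimal sketch in Python on simulated data (the single “severity” confounder and the simulated effect sizes are illustrative assumptions, not from any real study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# One measured confounder: disease severity drives both treatment choice and outcome
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-severity))             # sicker patients are treated more often
treated = rng.binomial(1, p_treat)
outcome = 1.0 * treated - 2.0 * severity + rng.normal(size=n)  # true treatment effect = +1.0

# Naive comparison is badly biased: the treated group is sicker at baseline
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Estimate propensity scores, then compare inverse-probability-weighted group means
X = severity.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
ipw = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))

print(f"naive: {naive:.2f}  IPW-adjusted: {ipw:.2f}  (true effect: 1.00)")
```

The same logic extends to many measured confounders; the catch, unlike randomization, is that weighting can only balance what you actually measured.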

Retrospective observational studies (ROS) can yield valuable insights on how treatments have been used in clinical practice; clinicians’ decision-making processes; patients’ treatment journeys; and clinical and economic outcomes. But PPOS considerations apply even more to ROS, and the type of data available is likely to be more limited (the choice of data types strongly influences what questions you can ask). ROS can be very useful in determining whether to go ahead with a prospective study.

Which design to use depends on your perspective and how the results will be used.

Ready to take the plunge into pragmatic studies? Where are you going to find the patients? What data will you use to find them? What are the strengths, limitations, and “gotchas” for the various kinds of data--especially the very useful healthplan data? Why would you want to do a study with healthplan data anyway? Tune in for Episode 3: Why use healthplan data for recruiting and outcomes analysis?

Do you want to know more? Reach out to us!

NOTES

  1. OK, maybe that’s not an exact quote. Plato was Socrates’ student, and Socrates had a habit of annoying people by getting them to clarify what they really wanted to know, thus getting to a question that was meaningful to the asker, and that was capable of being answered.
  2. This list is not intended to be a complete listing of what must be included in a study (and documented in its protocol). Entire textbooks have been written on this topic, and because you’re reading this, we assume you’re familiar with what goes into designing a study. We chose the items in our list because we’ll refer to them in this series as factors that highlight the value of pragmatic (real-world) clinical studies.
  3. The first controlled trial (which compares treatments, or treatment to no treatment) is generally attributed to James Lind’s 1747 study comparing various scurvy treatments (including lemons and oranges, vinegar, elixir vitriol, sea water, and a paste of honey and a drug). What made the design controlled was that the sailors had similar degrees of scurvy and were fed the same standard sailor’s diet. The first placebo-controlled trial was planned in 1863 by Austin Flint, and what is usually considered the first study with all key elements of the PRCT was performed in 1946 to study the effects of streptomycin on tuberculosis, published in the British Medical Journal in 1948. For an illuminating dive into the history of the modern clinical trial, see Bhatt A. Evolution of clinical research: A history before and after James Lind. Perspect Clin Res 2010;1(1):6-10.
  4. For example, the adaptive platform trial. See: Stern AD, Mehta S. Adaptive platform trials: The clinical trial of the future? Harvard Bus Rev Sep 26, 2017; and Berry SM, Connor JT, Lewis RJ. The platform trial: An efficient strategy for evaluating multiple treatments. JAMA 2015;313(16):1619-20.
  5. Maybe not: https://www.snopes.com/quotes/berra/practicetheory.asp.
  6. A study protocol’s statistical analysis plan should include an assessment of the risk of bias, confounding, and noise, and how these factors will be detected, measured, and minimized. Briefly: bias prejudices the results through unequal distribution of some outcome-affecting characteristic between intervention and control groups; confounding occurs when one or more factors influence both the therapy and the outcome; and noise refers to the amount of randomness in the results, making any signal (effect of the treatment) difficult to detect.