“How well does treatment X work compared to treatment Y in patients with type 2 diabetes with poor glycemic control despite 6 months of lifestyle management and metformin? Are patients who visited an endocrinologist in the past 18 months likelier to do better on X or Y?”
“Which asthmatics would be better off managed by a specialist?”
“How and why is disease Y treated differently in different settings, and to what extent does that make a clinical or economic difference?”
“What are the characteristics of doctors whose patients are referred late vs. early for sequencing of their cancer genome?”
Welcome back to TeloChain’s Real-World Healthcare Insights, where we explore how to discover the value in healthcare therapeutics--pharmaceuticals and medical devices--through real-world clinical studies. This series focuses on efficient and effective recruiting--critical to your study’s success. Orientation to the series: HERE.
Cut to the punchline: Start by asking meaningful and clearly-defined questions based on your overall objectives. Then plan your study, based on the advantages and disadvantages of the various designs, in light of what you want to find out. We’ll focus on pragmatic (real-world) studies. You’ll likely work with a CRO to define and refine your questions and outcome metrics, study structure, and--importantly--recruiting of subjects (patients), a common roadblock to success.
Clinical studies aim to answer questions about how well a drug or device is used or works in a defined clinical scenario, such as those above. The insights gained from studies are used to support many kinds of decision-making--in treatment, marketing and policy.
The process of answering clinical (or market insight) questions involves study design, which includes decisions like (2):
All these (and more!) are formalized in the study protocol--a recipe for how you’ll go about getting answers to your burning questions. Getting these decisions right will make your life a whole lot easier as you design and run your study. It’s enormously helpful to engage outside expertise during this set-up process, both to make sure the design has a good shot at shedding light on your questions, and--critically!--to ensure those questions are the right ones given the intent of your study.
The prospective randomized controlled trial--“the” or “a” gold standard?
The fundamental question all studies aim to answer is “how well does this treatment do compared to X?” where X can be placebo, usual or standard of care, some other treatment, or some other clinical setting.
In the mid-20th century, the prospective randomized controlled trial (PRCT) was developed to answer “Does this work? If so, how well?” by incorporating elements that together mitigated selection bias (differing tendencies to respond among those treated vs. not) and observer bias (differing patient responses based on whether the observer knows which treatment a patient is allocated to). (3) Proper application of the PRCT design reduces differences between treatment (or treatment and placebo) groups to the point where the probabilities of erroneously concluding a treatment does (type I error) or doesn’t (type II error) work can be calculated in advance.
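To make "calculated in advance" concrete, here is a minimal sketch of how a trial's sample size follows from pre-specified type I and type II error rates, using the standard normal-approximation formula for comparing two response rates. The response rates (50% vs. 65%) are purely illustrative assumptions; real trials use validated power software and adjust for dropout, interim analyses, and other design features.

```python
from math import ceil, sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to detect a difference between response rates p1 and p2,
    two-sided test at significance level alpha (type I error) with the
    given power (1 - type II error). Normal approximation; illustrative only."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for type I error
    z_beta = NormalDist().inv_cdf(power)           # critical value for type II error
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: how many patients per arm to detect
# a 50% vs. 65% response rate at alpha = 0.05 with 80% power?
n = sample_size_two_proportions(0.50, 0.65)
print(n)  # per-group sample size
```

Note the trade-off the formula encodes: demanding a smaller type II error (higher power) or a smaller detectable difference drives the required sample size up, which is exactly why these error probabilities must be fixed before the trial starts.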
The PRCT transformed how we approach questions about outcomes; it’s a foundation of today’s evidence-based medicine. PRCTs are an essential part of new drug approvals. The methodology is well-established, yet evolving. (4) But they are sometimes limited in providing insights into how well therapies work in the messy real world. Think of the PRCT as answering the question: “How well does this therapy work in patients who meet the study’s inclusion and exclusion criteria, possess the pre-specified or happenstance demographics, are treated and monitored in the kinds of delivery systems required for clinical trials--and volunteer to be randomized?”
As Yogi Berra supposedly (5) said, “In theory, there’s no difference between theory and practice. In practice, there is.” So what about “in practice,” where all health care happens, and where we want to know how well the treatment works on patients who don’t quite meet the PRCT inclusion and exclusion criteria... or have a different demographic... or are less adherent... or are treated in settings with variably-informed clinicians rushed for time and squishier quality control?
Importantly, most PRCTs that are part of product approval compare treatment to placebo--but in the real world, there’s often more than one reasonable treatment option.
Studying the messy real world: Our focus is on pragmatic (“real-world”) studies, but first, let’s set the table by looking at what kinds of questions study designs can (validly) answer, and how their results can be used.
Retrospective observational studies (ROS) can yield valuable insights on how treatments have been used in clinical practice; clinicians’ decision-making processes; patients’ treatment journeys; and clinical and economic outcomes. But PPOS considerations apply even more to ROS, and the type of data available is likely to be more limited (the choice of data types strongly influences what questions you can ask). ROS can be very useful in determining whether to go ahead with a prospective study.
What design to use depends on the perspective and how the results will be used:
Ready to take the plunge into pragmatic studies? Where are you going to find the patients? What data will you use to find them? What are the strengths, limitations, and “gotchas” of the various kinds of data--especially the very useful healthplan data? Why would you want to do a study with healthplan data anyway? Tune in for Episode 3: Why use healthplan data for recruiting and outcomes analysis?
Do you want to know more? Reach out to us!
NOTES