Last month, as Republicans debated Medicaid reform, media outlets reported on a striking academic working paper. Authored by Angela Wyse of Dartmouth and Bruce Meyer of the University of Chicago, the study claimed that Obamacare’s Medicaid expansion saved thousands of low-income Americans’ lives.
But last week, the Coalition for Evidence-Based Policy posted an assessment of the paper. Its “No-Spin Evidence Review” contended that the researchers had deviated from their own “pre-analysis plan”—a document in which researchers spell out how they plan to assess their data—and that the findings should be viewed as tentative and preliminary.
While Wyse and Meyer should have explicitly discussed the changes in their paper, there’s a reasonable case that their adjustments were justified. The episode should prompt conservatives to think carefully about how to reform Medicaid—and how best to make the case for those reforms.
Medicaid is a joint federal-state program that provides health insurance to poorer Americans. Traditionally, Washington has required participating states to cover “core” groups, such as the disabled and poor children, while allowing flexibility to cover some additional low-income people. The federal government covers at least half of each state’s Medicaid spending, with poorer states receiving a higher matching rate. Still, wealthier states have often been more aggressive in expanding coverage to draw down more federal dollars.
Obamacare’s Medicaid expansion, launched in 2014, encouraged states to cover all adults earning up to 138 percent of the poverty level. Rather than applying the standard matching rates, Washington initially covered the full cost for this new population, gradually reducing its share to 90 percent. As a result, most states received far more generous funding to cover the less-vulnerable expansion population than for the core groups Medicaid was originally designed to serve. Many states eagerly accepted the funds—a few had already expanded coverage on their own—though some, mostly Republican-led, chose to hold out.
In their paper, Wyse and Meyer combine several highly sensitive datasets to assess the impact of the Medicaid expansion. They use IRS records to track income, administrative data to measure Medicaid enrollment, and Social Security Administration (SSA) data to determine who is living and who has died, among other sources.
Their analysis is relatively straightforward. The researchers examined non-disabled adults aged 19–59 with incomes below 138 percent of the poverty level—the group targeted by the Medicaid expansion—and compared mortality trends in states that expanded Medicaid with those that did not. They found a 2.5 percent decline in mortality in expansion states relative to non-expansion states, amounting to roughly 27,000 lives saved through mid-2022. Unfortunately, they could not say which specific causes of death declined, because SSA data do not contain that information.
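To see the logic of that comparison in miniature, here is a rough sketch (not the authors’ actual code, and with entirely invented numbers) of how one might net out mortality changes in expansion states against those in non-expansion states:

```python
# Toy illustration of the expansion vs. non-expansion comparison.
# All figures are invented for demonstration; none come from the paper.

# Hypothetical deaths per 100,000 low-income, non-disabled adults per year
mortality = {
    ("expansion", "pre-2014"): 320.0,
    ("expansion", "post-2014"): 312.0,
    ("non-expansion", "pre-2014"): 322.0,
    ("non-expansion", "post-2014"): 322.0,
}

# How much mortality changed over time within each group of states
change_expansion = mortality[("expansion", "post-2014")] - mortality[("expansion", "pre-2014")]
change_non_expansion = mortality[("non-expansion", "post-2014")] - mortality[("non-expansion", "pre-2014")]

# The estimated effect is the gap between those two changes: how much more
# mortality fell in expansion states than it would have otherwise.
effect = change_expansion - change_non_expansion
baseline = mortality[("expansion", "pre-2014")]
print(f"Estimated effect: {effect:+.1f} deaths per 100,000 ({effect / baseline:+.1%})")
```

The actual study, of course, works person by person in administrative data, with statistical controls; the sketch only captures the intuition of subtracting out whatever was happening in states that never expanded.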
To confirm their findings, the researchers compared mortality rates in the expansion population with those of individuals who initially had higher incomes. If expansion states had coincidentally experienced some unrelated, positive health trend when the policy took effect, it would likely have benefited both groups. As expected, the higher-income group saw only a slight uptick in Medicaid enrollment—likely due to some earning less over time—and no measurable decline in mortality.
So, what’s the problem? Buckle in for a bit of statistical minutiae.
If someone analyzes two variables—say, temperature and ice cream sales—and reports that the correlation is “statistically significant at the 5 percent level,” it means there’s only a 5 percent (or one-in-20) chance that such a strong relationship would appear by luck alone if no real connection existed. This is a key concept in social science, but it becomes less meaningful when researchers run multiple analyses on datasets with many variables. If I run 20 different tests and one yields a result that would occur by chance only one time in 20, that’s not especially impressive—it’s exactly what you’d expect.
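A quick back-of-the-envelope calculation (purely illustrative, and using nothing from the paper itself) shows why: if each of 20 independent tests has a 5 percent chance of a false alarm, the odds that at least one comes up “significant” by luck alone are roughly 64 percent.

```python
import random

# Back-of-the-envelope illustration of the multiple-comparisons problem.
# Purely hypothetical; nothing here uses the Wyse-Meyer data.

ALPHA = 0.05      # conventional 5 percent significance threshold
TESTS = 20        # number of independent analyses run
TRIALS = 100_000  # number of simulated "studies"

# Exact probability that at least one of 20 null tests looks "significant"
exact = 1 - (1 - ALPHA) ** TESTS
print(f"Chance of at least one spurious 'significant' result: {exact:.0%}")  # ~64%

# Simulation: run 20 tests on pure noise, over and over
random.seed(0)
hits = sum(
    any(random.random() < ALPHA for _ in range(TESTS))
    for _ in range(TRIALS)
)
print(f"Simulated frequency: {hits / TRIALS:.0%}")  # also ~64%
```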
Scientists have become increasingly aware of this problem in recent years, and one common solution is to pre-register their analysis plan. By stating in advance exactly how they intend to analyze the data, researchers offer reassurance that they didn’t run 19 other versions and simply publish the one with results they liked.
In Wyse and Meyer’s pre-analysis plan, they stated that their “base sample” would include all low-income adults aged 19–59—not just the non-disabled. They listed disability, instead, among other types of “subgroups,” such as race and income categories, that they also planned to analyze. Of course, if the non-disabled subgroup is just one in a sea of analyzed groups, rather than the main target of the study, the evidentiary value of statistically significant findings associated with that group becomes more tentative.
Seen in this light, one might focus first on the results for the full sample—where the mortality reduction was smaller and only borderline significant, hovering around the 10 percent level rather than the stricter 5 percent threshold—and view the stronger findings for the non-disabled subsample as less definitive. According to the “No-Spin Evidence Review,” the authors should have written that:
The mortality reduction approached, but fell short of, statistical significance, and is therefore best viewed as tentative. We found larger mortality reductions among the subgroup of low-income adults without disabilities, and a possible mortality increase among the subgroup with disabilities. These subgroup findings should be considered preliminary until confirmed in future research, as they could have appeared by chance given the multiple subgroups examined.
But there’s a reasonable counterargument: the disabled were already eligible for public health insurance before the Medicaid expansion and thus weren’t actually affected by the policy. The researchers should have excluded them from the outset—and erred in failing to do so in their pre-analysis plan. At the very least, the non-disabled shouldn’t be treated as just one subgroup among many. Excluding individuals who weren’t affected by the policy being studied is fundamentally different from, say, exploring whether results vary by race.
I reached out to Wyse and Meyer for their response to these criticisms, and they replied, in part:
To answer your question of why the non-disabled sample became the main sample, we were new to pre-analysis plans. Neither [of us] had done one before (or since) and we were not as careful as we should have been. We quickly realized that it did not make sense to be looking at the effect of expanding Medicaid for people who already had Medicaid or Medicare (disabled was determined in our data by receipt of [Supplemental Security Income] or Social Security Disability insurance which come with Medicaid or Medicare). We thus did all our analyses with the non-disabled sample for whom it was possible to gain public health insurance coverage under the expansions. Nevertheless, we felt bound to also report the full sample because we listed it in our pre-analysis plan.
As for the possible increase in mortality among the disabled—implied by the gap between the full-sample and non-disabled results—the researchers noted that they did not analyze the disabled group separately. Running analyses on such a large dataset is time-consuming, they explained, and all published results must pass privacy screening by the Census Bureau. They added, “We suspect the confidence intervals would be wide and that you shouldn’t conclude that Medicaid significantly increased mortality for the disabled.”
Nonetheless, critics have sometimes contended that expanding Medicaid could undermine care for the original “core” population. Further study of this possibility is warranted.
Where does all this fit into the broader debate over Medicaid reform, once again raging on Capitol Hill?
Conservatives have long sought to downplay the significance of Medicaid coverage, citing, for example, an experiment in Oregon that found the program produced limited health benefits. They’ve also rightly noted that, for many, the alternative to Medicaid isn’t dying in the street but receiving uncompensated care—ultimately covered by charity or taxpayer-funded hospital subsidies.
The political reality, however, is that Medicaid isn’t going anywhere—especially now that Republicans draw more support from working-class voters. Whatever its impact on mortality and health outcomes, the program provides basic coverage to roughly one-fifth of Americans and serves as a crucial funding source for hospitals.
President Trump himself recently warned Congress not to “f*** around with Medicaid.” Even as House Republicans’ tax bill would impose work requirements on able-bodied enrollees, it would leave Obamacare’s expansion of the program in place. Some in the Senate are leery of any reform that could cut benefits or harm rural hospitals.
The strongest case for Medicaid reform isn’t that the program serves no purpose. It’s that the current funding system is dysfunctional, unfair, and excessively costly, with many states gaming the rules to maximize their share at others’ expense.
The federal government shouldn’t subsidize able-bodied adults more generously than the disabled, nor should wealthy, high-spending states enjoy unlimited access to matching federal funds. Even if Medicaid saves lives, it must be run fairly—and cost-effectively.