Why Do Eligible Individuals Fail to Claim Benefits and How Can Policy Interventions Fix It?

  • Writer: Greg Thorson
  • 5 hours ago
  • 6 min read

Bendtsen (2026) examines whether reducing administrative burdens increases the take-up of social benefits. He analyzes data from 51 field experiments covering 187 treatment effect sizes across multiple countries and programs. Using a meta-analytic framework, he compares interventions that reduce learning demands (information provision) with those that reduce compliance demands (application assistance), and distinguishes between application and actual receipt outcomes. He finds that interventions raise application rates by about 10.1 percentage points but actual take-up by only 4.8 points. Interventions that reduce compliance burdens are most effective, increasing take-up by roughly 8.3 percentage points, compared to about 3.4 points for information-based approaches.


Why This Article Was Selected for The Policy Scientist

The question of why eligible individuals fail to receive public benefits has broad implications for state capacity, inequality, and policy effectiveness. Bendtsen (2026) addresses a central implementation failure that undermines otherwise well-designed programs, making this work especially timely amid rising administrative complexity and expanded benefit systems. The article contributes by consolidating a fragmented experimental literature and clarifying that compliance burdens—not just information gaps—are the primary constraint. The dataset is strong, drawing on 51 field experiments, though heavily U.S.-centered. The reliance on randomized field experiments is a clear strength, supporting credible causal inference.


Full Citation and Link to Article

Bendtsen, K.-E. (2026). Increasing take-up of social benefits: A meta-analysis of field experiments. Journal of Policy Analysis and Management, 45(2), e70085. https://doi.org/10.1002/pam.70085 


Central Research Question

The article investigates whether reducing administrative burdens can meaningfully increase the take-up of social benefits among eligible individuals. Specifically, Bendtsen (2026) asks not only whether such interventions are effective, but also which types of interventions—those that reduce learning demands (information provision) versus compliance demands (direct assistance)—produce larger effects. A further dimension of the inquiry distinguishes between outcomes measured at different stages of the process: initial application versus final receipt of benefits. This distinction is central, as it allows the study to assess whether policy interventions merely encourage entry into the system or actually enable individuals to successfully navigate bureaucratic processes and receive benefits. The broader aim is to synthesize a fragmented empirical literature and identify consistent patterns across diverse institutional and policy contexts.


Previous Literature

The study builds on a substantial body of work examining the persistent gap between eligibility for social benefits and actual participation. Prior research has identified several barriers to take-up, including lack of information, administrative complexity, stigma, and transaction costs. Foundational contributions by Moffitt (1983) and Currie (2004), along with more recent work by Bhargava and Manoli (2015), emphasize informational frictions and behavioral responses to program design. The administrative burden framework developed by Herd and Moynihan (2019) provides a unifying conceptual lens, categorizing barriers into learning, compliance, and psychological costs.


At the same time, the empirical literature has increasingly relied on randomized field experiments to test interventions aimed at increasing take-up. These studies, often conducted in specific programmatic or geographic contexts, have produced mixed findings. Some demonstrate that simple informational nudges can increase participation, while others find limited or no effects. Recent meta-analytic efforts, such as DellaVigna and Linos (2022), have attempted to synthesize evidence on behavioral interventions more broadly, but without a focused treatment of social benefit take-up. Bendtsen contributes by consolidating this literature within a single framework and systematically comparing intervention types and outcome stages, thereby clarifying sources of heterogeneity across studies.


Data

The analysis draws on a dataset of 51 field experimental studies, yielding 187 treatment effect estimates. These studies span a wide range of social benefits, including tax credits, food assistance programs, subsidized health insurance, housing benefits, and student financial aid. While the dataset includes international cases, it is heavily concentrated in the United States, which accounts for the majority of studies and effect sizes.


Each study provides a causal estimate of the effect of a specific intervention on benefit take-up, typically measured as the difference in participation rates between treatment and control groups. The dataset is harmonized to focus on binary outcomes—whether individuals applied for or received benefits—thereby ensuring comparability across studies. Interventions are coded along two key dimensions: whether they target learning demands (e.g., providing information or reminders) or compliance demands (e.g., offering assistance with applications), and whether the outcome is measured at the application stage or the final receipt stage.
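
As a concrete illustration of this coding scheme, a single harmonized record might look like the sketch below; the field names and values are hypothetical, invented for exposition rather than drawn from the actual dataset.

```python
# One harmonized effect-size record under the two-dimensional coding
# described above. All names and numbers here are hypothetical.
record = {
    "study": "Hypothetical food-assistance outreach RCT",
    "effect_pp": 6.0,              # treatment-control difference, percentage points
    "variance": 2.1,               # sampling variance of the estimate
    "demand_type": "compliance",   # "learning" (information) or "compliance" (assistance)
    "outcome_stage": "receipt",    # "application" or "receipt"
}
```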


The dataset is notable for its reliance on randomized controlled trials, which enhances internal validity. However, the heterogeneity in program types, populations, and institutional settings introduces challenges for external validity and cross-context generalization.


Methods

The study employs a three-level meta-analytic framework to estimate average treatment effects and assess variation across intervention types and outcome measures. The primary outcome is the intention-to-treat (ITT) effect: the percentage-point difference in outcomes between everyone assigned to treatment and everyone assigned to control, regardless of whether assigned individuals actually engaged with the intervention. This approach ensures consistency across studies and avoids complications associated with varying compliance rates or treatment intensities.


The model incorporates both within-study and between-study heterogeneity, allowing for more precise estimation of average effects while accounting for clustering of multiple treatment arms within individual studies. Observations are weighted by the inverse of their sampling variance, giving greater influence to more precise estimates derived from larger samples. Cluster-robust standard errors further adjust for within-study dependence.
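
To make the weighting logic concrete, here is a minimal sketch in Python of standard inverse-variance random-effects pooling (the DerSimonian-Laird estimator). This is a simplified two-level illustration, not the three-level model the paper estimates, and the effect sizes and variances below are invented placeholders rather than values from Bendtsen's data.

```python
import numpy as np

# Hypothetical ITT effects (percentage points) and their sampling
# variances from five studies -- placeholders, not Bendtsen's data.
y = np.array([9.5, 3.2, 7.8, 1.4, 12.0])   # effect estimates
v = np.array([4.0, 1.5, 2.2, 0.9, 6.5])    # sampling variances

# Fixed-effect inverse-variance weights and pooled mean.
w = 1.0 / v
y_fe = np.sum(w * y) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
k = len(y)
Q = np.sum(w * (y - y_fe) ** 2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects weights: precise studies still count more, but the
# between-study variance flattens the weighting across studies.
w_re = 1.0 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled effect = {y_re:.2f} pp (SE = {se_re:.2f}), tau^2 = {tau2:.2f}")
```

The paper's three-level model adds a further variance component for multiple effect sizes nested within the same study, which is the clustering problem the cluster-robust standard errors described above are meant to address.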


Importantly, the analysis is explicitly comparative rather than causal at the meta level. While each underlying study provides a causal estimate, the aggregation of results across diverse contexts introduces potential confounding from unobserved heterogeneity. The study also conducts several robustness checks, including models excluding outliers, restricting the sample to U.S.-based studies, and adjusting for potential publication bias using established techniques. These steps strengthen confidence in the stability of the findings, though they do not eliminate all sources of bias.
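
The article describes its publication-bias adjustments only as established techniques, so the specific procedure is not spelled out here. One widely used diagnostic in this family is Egger's regression test, sketched below with the same kind of invented placeholder numbers; an intercept far from zero signals funnel-plot asymmetry, the typical footprint of small, imprecise studies reporting unusually large effects.

```python
import numpy as np

# Placeholder effect estimates and standard errors (not from the paper).
y = np.array([9.5, 3.2, 7.8, 1.4, 12.0])
se = np.sqrt(np.array([4.0, 1.5, 2.2, 0.9, 6.5]))

# Egger's test: regress standardized effects on precision.
# A nonzero intercept suggests funnel-plot asymmetry, consistent
# with publication bias or other small-study effects.
t = y / se                  # standardized effects
prec = 1.0 / se             # precision
X = np.column_stack([np.ones_like(prec), prec])
beta, *_ = np.linalg.lstsq(X, t, rcond=None)
print(f"Egger intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")
```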


Findings/Effect Sizes

The results indicate that interventions aimed at increasing take-up are generally effective, with an overall average effect of approximately 6.7 percentage points across all studies. However, substantial heterogeneity exists depending on both the type of intervention and the stage at which outcomes are measured.


A central finding is that it is significantly easier to increase application rates than actual benefit receipt. Interventions increase applications by an average of 10.1 percentage points, compared to only 4.8 percentage points for final take-up. This suggests that many interventions succeed in prompting initial engagement but fail to help individuals navigate subsequent administrative hurdles required for benefit receipt. The gap between application and receipt highlights the cumulative nature of administrative burdens and indicates that barriers persist beyond initial awareness or motivation.


The study also finds that interventions reducing compliance demands are more effective than those targeting learning demands. On average, compliance-reducing interventions increase actual take-up by approximately 8.3 percentage points, compared to 3.4 percentage points for information-based interventions. This difference is substantial and consistent across model specifications, suggesting that simplifying processes or providing direct assistance is more impactful than merely improving information access.


Additional analyses reveal that effect sizes are larger in earlier studies and may decline over time, a pattern consistent with the “decline effect” observed in other experimental literatures. Adjustments for publication bias reduce estimated effects but do not eliminate them, with corrected estimates still indicating positive and statistically significant impacts. These findings reinforce the conclusion that administrative interventions can improve take-up, while also suggesting that reported effects may be somewhat overstated in the published literature.


Conclusion

The study concludes that administrative burdens play a central role in limiting access to social benefits and that reducing these burdens can meaningfully increase participation. However, the effectiveness of interventions depends critically on their design. Interventions that address compliance demands—by simplifying processes or providing direct assistance—produce larger and more consistent effects than those focused solely on information provision.


Equally important, the distinction between application and actual receipt underscores the need to evaluate policy interventions at multiple stages of the administrative process. Interventions that appear successful based on application metrics may overstate their true impact if they do not translate into completed enrollment. This has implications for both research design and policy evaluation, suggesting that outcome measures should align more closely with ultimate program goals.


While the study provides a comprehensive synthesis of existing evidence, it also highlights limitations in the current literature, including geographic concentration, heterogeneity across contexts, and potential publication bias. Nevertheless, the consistent use of randomized field experiments across studies strengthens the credibility of the findings and distinguishes this literature from observational analyses. Overall, the article advances understanding of how administrative processes shape policy outcomes and provides a structured framework for evaluating interventions aimed at improving access to public benefits.


