

Do Academic Promise Pledges Help or Harm Student Achievement?

  • Writer: Greg Thorson
  • 2 days ago
  • 5 min read

Wright, Arora, and Wright (2025) examine whether a non-binding commitment pledge combined with goal setting affects student achievement. They ask if having students state a target grade, identify study actions, and sign a pledge improves academic performance. The study uses a randomized field experiment in four Principles of Macroeconomics sections at a public university, analyzing survey responses and official course records. Although treated students pledged higher grades and more study time, their outcomes worsened. Test scores fell (about −0.28 SD on the final exam and −0.27 SD on midterm 2), overall course grades declined (−0.23 SD), and pass rates dropped by roughly 15 percentage points.


Why This Article Was Selected for The Policy Scientist

This study addresses a policy question of wide relevance: whether low-cost behavioral interventions can reliably improve academic outcomes. As institutions face pressure to raise completion rates while containing costs, commitment pledges and goal-setting exercises are increasingly attractive. The article is timely given the rapid adoption of “student success” nudges across higher education. It contributes to the literature by providing causal evidence that such interventions may generate unintended adverse effects, extending prior work on goals and commitment devices. The dataset—linking baseline surveys to official course records—is well suited to the research design. While drawn from a single university context, the findings plausibly generalize to similar gateway courses. The randomized field experiment provides strong causal identification.

Full Citation and Link to Article

Wright, N. A., Arora, P., & Wright, J. (2025). I Promise to Work Hard: The Impact of a Non-Binding Commitment Pledge on Academic Performance. Education Finance and Policy, 1–54. https://doi.org/10.1162/EDFP.A.30


Central Research Question

The study investigates whether a low-cost behavioral intervention—combining goal setting with a non-binding commitment pledge—causally affects academic performance in a college course. Specifically, the authors examine whether asking students to state a target grade, identify concrete actions (such as study time and attendance), and sign a pledge to work toward that grade improves measurable outcomes. The intervention is motivated by behavioral economic theories suggesting that explicit commitments and salient goals may help individuals overcome self-control problems, reduce procrastination, and increase effort. The central question is therefore not descriptive but causal: does this pledge-and-goal mechanism change exam scores, course grades, and pass rates relative to a control group?


Previous Literature

The article is situated at the intersection of behavioral economics and education policy. Prior research has frequently shown that goal setting can enhance performance by creating reference points that motivate effort, particularly among individuals exhibiting present bias or loss aversion. Studies in educational contexts have reported improvements in outcomes when students articulate academic goals, whether framed at the course level or for specific tasks. However, more recent work has revealed important boundary conditions. Some evidence indicates that externally induced goals may be ineffective or even counterproductive, especially when goals are overly ambitious or insufficiently aligned with students’ beliefs about their capabilities.


The pledge component draws on a parallel literature on commitment devices. Non-binding pledges have been examined across domains such as charitable giving, tax compliance, loan repayment, and health behaviors. These studies often find modest positive or null effects, attributed to mechanisms including guilt aversion and preferences for promise keeping. Yet theoretical and empirical work also highlights that non-binding commitments can fail when reneging is easy or when unmet commitments trigger discouragement. Within education, relatively few studies isolate the causal impact of pledges, and fewer still analyze pledges paired explicitly with goal setting. This study contributes by experimentally evaluating the joint effect of both mechanisms on student achievement.


Data

The empirical analysis relies on two primary data sources: a baseline survey administered early in the semester and official academic records obtained from course assessments. The survey captures demographic characteristics, prior academic performance (including GPA), study habits, employment status, hours worked, and predicted course grades. These measures allow the authors to assess balance across treatment groups and control for pre-treatment characteristics. Academic records provide objective outcomes: scores on two midterm exams, homework performance, final exam results, overall course grades, assignment completion, and pass/fail status.


The experiment was conducted in four sections of a Principles of Macroeconomics course at a public university. The course structure was standardized across instructors, with common textbooks, assessments, grading weights, and automated scoring for multiple-choice exams. Approximately 82 percent of enrolled students participated in the survey, and the analytic sample includes those who completed the course. Summary statistics indicate that students were relatively high-achieving at baseline, with average GPAs above 3.3 and a substantial share reporting strong prior academic standing. Importantly, statistical tests show no significant differences between treatment and control groups on observed characteristics, supporting the validity of random assignment.


Methods

The authors implement a randomized field experiment embedded within a routine course setting. After completing baseline survey items, students were randomly assigned by Qualtrics software to treatment or control conditions. Control-group students reported only their expected course grade. Treated students, by contrast, selected a preferred target grade, specified intended behaviors (attendance rates and study hours), and signed a non-binding pledge committing to work toward their goal. Instructors were unaware of individual treatment status during the semester, minimizing the risk of differential treatment or grading bias.


The primary estimation strategy employs regression models with section fixed effects and pre-treatment covariates to estimate the average treatment effect. Outcomes are standardized to facilitate interpretation across assessments. Robust standard errors account for heteroskedasticity. To strengthen causal claims given the moderate sample size, the authors conduct randomization inference tests, repeatedly simulating placebo assignments to evaluate whether observed effects could plausibly arise by chance. Additional analyses include ordered probit models examining predicted versus pledged grades, mediation models exploring behavioral pathways, and heterogeneity tests assessing differential effects across subgroups defined by GPA, class standing, employment intensity, and gender.
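
For readers who want a concrete sense of this estimation strategy, the sketch below illustrates a fixed-effects regression followed by a simple randomization inference test. It is a minimal illustration on simulated placeholder data; the variable names (final_exam_z, treated, section, gpa), the covariates, and the specification are assumptions for demonstration, not the authors' actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: one row per student (illustrative names only).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "final_exam_z": rng.normal(size=n),       # standardized outcome
    "treated": rng.integers(0, 2, size=n),    # 1 = pledge + goal-setting arm
    "section": rng.integers(1, 5, size=n),    # course section (fixed effects)
    "gpa": rng.normal(3.3, 0.4, size=n),      # pre-treatment covariate
})

# Average treatment effect with section fixed effects, a baseline covariate,
# and heteroskedasticity-robust (HC1) standard errors.
fit = smf.ols("final_exam_z ~ treated + gpa + C(section)", data=df).fit(cov_type="HC1")
observed_effect = fit.params["treated"]

# Randomization inference: permute the treatment labels many times and ask how
# often a placebo assignment yields an effect at least as large as observed.
n_draws = 1000
placebo_effects = np.empty(n_draws)
for i in range(n_draws):
    df["placebo"] = rng.permutation(df["treated"].to_numpy())
    placebo_fit = smf.ols("final_exam_z ~ placebo + gpa + C(section)", data=df).fit()
    placebo_effects[i] = placebo_fit.params["placebo"]

ri_p_value = np.mean(np.abs(placebo_effects) >= np.abs(observed_effect))
print(f"Observed effect: {observed_effect:.3f}  RI p-value: {ri_p_value:.3f}")
```

Because the placebo draws ignore the real assignment, the resulting p-value reflects how surprising the observed estimate would be under the sharp null of no treatment effect, which is the logic that makes randomization inference attractive when samples are moderate in size.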


Findings/Size Effects

The intervention produced clear behavioral responses but adverse academic outcomes. Treated students pledged higher target grades and reported greater intended study commitments, indicating that the mechanism successfully altered stated aspirations. However, performance measures moved in the opposite direction. The treatment effect on the final exam was approximately −0.28 standard deviations, and the overall course grade declined by roughly 0.23 standard deviations. The probability of passing the course fell by about 15 percentage points. These estimates were statistically significant and robust to alternative specifications and randomization inference procedures.


Disaggregated analyses reveal that the negative effects emerged primarily after the first midterm. There was no detectable treatment difference on midterm 1, but treated students experienced larger declines on midterm 2 and the final exam. Homework scores also fell by approximately 0.28 standard deviations. The authors interpret this pattern as consistent with a discouragement mechanism. Many students underperformed relative to their predicted or pledged outcomes on the first exam. For treated students, the pledge may have increased the psychological salience of this gap, amplifying disappointment and reducing subsequent effort.


Interaction analyses show that students who slightly or moderately underperformed on midterm 1 exhibited the largest negative treatment effects. Students who either met expectations or severely underperformed displayed smaller or statistically insignificant differences, suggesting nonlinear responses to feedback. Mediation models indicate that declines in homework performance and subsequent exam scores explain a substantial share of the total treatment effect. Heterogeneity tests further show that adverse effects were concentrated among freshmen and sophomores, students with GPAs above the median, and those working longer hours. No systematic differences were observed across gender.


Conclusion

The study provides causal evidence that a non-binding commitment pledge combined with goal setting can reduce academic performance under certain conditions. While the intervention increased aspirational targets and intended effort, objective outcomes deteriorated. The authors argue that when students face early setbacks, pledges tied to ambitious goals may heighten disappointment, leading to reduced engagement. The findings complicate assumptions that behavioral nudges are uniformly beneficial, highlighting the possibility of unintended negative effects.


More broadly, the article underscores the importance of careful intervention design. Behavioral mechanisms may interact with feedback, expectations, and perceived control in ways that reverse intended benefits. The randomized experimental design strengthens the credibility of these conclusions, offering internally valid estimates of treatment effects. Although conducted within a specific institutional context, the results are relevant for educators and policymakers considering similar goal-and-pledge strategies, particularly in high-stakes introductory courses where early assessments strongly shape trajectories.

1 Comment

srollins25
2 days ago
Rated 5 out of 5 stars.

I very much enjoyed the article and the summary. I am curious about your thoughts on the "stick and carrot" approach rather than non-binding agreements. As of now, my understanding and belief is that we hold to "stick and carrot," with more stick than carrot, in most, though not all, educational settings. I would also offer that this could lead to risk-versus-reward analysis from the perspectives of both students and learning institutions. Thoughts?



The Policy Scientist

Offering Concise Summaries* of the Most Recent, Impactful Public Policy Research

*Summaries Powered by ChatGPT
