Assessing Nudge Scalability: Two Lessons from Large-Scale RCTs (with Hengchen Dai, Maria Han, Naveen Raja, Sitaram Vangala, and Daniel Croymans)
- Date: Jun 8, 2022
- Time: 04:00 PM (Local Time Germany)
- Speaker: Silvia Saccardo (Carnegie Mellon University)
- Location: Zoom meeting
- Room: Please contact Zita Green for Zoom link: green@coll.mpg.de
Field experimentation and behavioral science
have the potential to inform policy. Yet, many initially promising ideas show
substantially lower efficacy at scale, reflecting the broader issue of the
instability of scientific findings. Here, we identify two important factors
that can explain variation in estimated intervention efficacy across
evaluations and help policymakers better predict behavioral responses to
interventions in their settings. To do so, we leverage data from (1) two
randomized controlled trials (RCTs; N=187,134 and 149,720) that we conducted to
nudge COVID-19 vaccinations, and (2) 111 nudge RCTs involving approximately 22
million people that were conducted by either academics or a government agency.
Across these datasets, we find that nudges’ estimated efficacy is higher when
outcomes are more narrowly (vs. broadly) defined and measured over a shorter
(vs. longer) horizon, which can partially explain why nudges evaluated by
academics show substantially larger effect sizes than nudges evaluated at scale
by the government agency. Further, we show that nudges’ impact is smaller among
individuals with low baseline motivation to act, a finding that is masked when
focusing only on average effects. Altogether, we highlight that considering how
intervention effectiveness is measured and who is nudged is critical to
reconciling differences in effect sizes across evaluations and assessing the
scalability of empirical findings.