Applebaum's article reflects much of the usual New York Times cheerleading for behaviorism and nudge/nanny programs.
Reading the report, I came away more approving of some aspects than readers of this blog might expect, and a bit more skeptical of others than Applebaum's article suggests.
- The bottom line is spam. The government wants to send you letters, email, and text messages to sell its programs. The limits and objections to the program are pretty obvious once you recognize that fact. Spam gets ineffective pretty quickly, and once we start getting spam from 150 different programs nudging us to do different things, spam will get even more ineffective even more quickly.
- If it's a good idea for the government to send us spam email and text messages, why are academic behavioral scientists the ones to do it, not professional spammers (sorry, "direct marketers")? The actual end result of this is more employment and consulting contracts for academic behavioral economics.
- The numbers in the report are surprisingly small. Sending spam raises the number of people taking advantage of some program from 2% to 2.2%, which can be sold as a 10 percent increase. Even I, somewhat of a skeptic to start, am amazed how small the effects are. And both the before and after numbers are incredibly small. The big news in this report is that we're full of government programs that only a few percent of the eligible people are taking advantage of! That might be great news for the budget, but shocking news about their effectiveness.
"Research from behavioral science demonstrates that seemingly small barriers to engagement, such as hard-to-understand information, burdensome applications, or poorly presented choices, can prevent programs from working effectively for the very people they are intended to serve." (xi)

Well, that seems completely unobjectionable. Anyone who has tried to fill out any government form or understand any government program, regulation, the tax code, or much of anything else can sympathize with the idea that it is insanely complex and obscure. And duh, that complexity is hindering its effectiveness.
It also seems breathtakingly obvious. Do we really need "research from behavioral science" to know that?
It also seems a little paternalistic of government. A little Google searching suggests the word "byzantine" comes from crusaders' complaints about the complexity of the Byzantine empire, and "red tape" goes back to the 1400s. This has been going on a long time. Are government programs absurdly complicated simply because the bureaucrats who run them are so dumb they don't know that complex stuff doesn't work?
It's easy to suspect that many parts of our government, like the tax code, are deliberately complex and obscure, to keep us peasants from figuring out what's really going on, and to keep an army of government bureaucrats, lobbyists, attorneys, and various fixers employed. That will be harder to fix than parachuting in some academic consultants to craft spam emails.
Though the report trumpets "behavioral science" as having all the answers, most of the actual programs involve testing 9 or 10 bright ideas, and then reporting the most successful one.
"This process of translation requires constant evaluation and feedback. SBST works with agencies to, where possible, rigorously test the impact of these insights on program outcomes before implementing them widely. In this way, SBST can learn about what works, what works best, and what does not work. To achieve this goal, SBST often implements randomized trials."

Again, these are wise words. Again, they are blisteringly obvious. Again, the fact that most government programs do NO retrospective analysis, no quantitative evaluation of methods, no measurement at all of this sort is damning by its absence. Google, Amazon, and Facebook are constantly trying different ways of presenting information and picking the winners.
Well, enough whining. Maybe by coming in with a gloss of "behavioral science" and a big program, they can get agencies to clean up their acts a bit, raise participation in good programs above the 5-percent range, and do a modicum of analysis. (Though with this branch of psychology in a crisis of replicability, whether that piece of marketing is wise is another good question.)
But as I read over the successes trumpeted in the report, they sadly melted away. Start with perhaps the most important, and the biggest success: getting low-income kids to college. The effort was a big success, you read early on,
"helping more low-income students get to college each year." (iii)

In fact, thanks to the pilot programs alone,

"college is now more readily accessible to millions of American families." (xi)

Hmm. "Accessible" doesn't actually mean "accessed."
What did they actually do? One problem, "summer melt," is that kids start applying to colleges, don't fill out all the forms over the summer, and then don't show up. (The report says they don't go to college "because" they did not fill out the forms, but offers no evidence for this strong statement. People who are not going to go to college for other reasons also don't fill out forms.) To help,
"uAspire sent a series of eight text messages informed by behavioral insights to a random sample of students over the summer months, boosting college enrollment by 3.1 percentage points (from 64.9 percent to 68.0 percent). The impact of the texts was particularly large for the lowest-income students, who saw a 5.7 percentage point increase in college enrollment (from 66.4 percent to 72.1 percent; see Figure 6)..." (p. 9)

I've seen hyperbole before, but a three to five percentage point increase in takeup in response to nagging text messages, in a small sample in an experiment, is a long way from having already made "college more readily accessible to millions of American families."
And how many of those extra students made it past the first week? How do we know they "successfully" enrolled in college? None of these studies reports any follow-up.
Of course, it's hard to object. If all it takes is some text messages to get 72 rather than 66 percent of low income kids to make it to the first day of classes, that's good. Any parent of a teenager will be quick to tell you that nagging is vitally necessary for this demographic.
The other effects in this report are unbelievably small. Even I would have thought behavioralism could improve things more than this. And even I thought government programs were more effective.
"[SBST] sent approximately 720,000 unenrolled Servicemembers one of nine email variants; the most effective message nearly doubled the rate at which Servicemembers signed up for TSP. Emails informed by behavioral insights led to roughly 4,930 new enrollments and $1.3 million in savings in just the first month after the emails were sent..."

Let's see: 4,930/720,000 is about 0.7 percent. That must be "doubled" from about 0.35 percent.
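A quick back-of-the-envelope check of that arithmetic. This is a sketch using only the report's quoted figures (720,000 emails, roughly 4,930 new enrollments); the "implied baseline" is my own inference from the claim that the rate "nearly doubled":

```python
# Sanity-check the TSP email numbers quoted above.
emails_sent = 720_000      # unenrolled Servicemembers emailed (report's figure)
new_enrollments = 4_930    # new TSP enrollments in the first month (report's figure)

takeup = new_enrollments / emails_sent          # fraction who enrolled after the email
print(f"Post-email take-up: {takeup:.2%}")      # about 0.68%

# If that "nearly doubled" the prior rate, the baseline was roughly half of it:
implied_baseline = takeup / 2
print(f"Implied prior rate: {implied_baseline:.2%}")  # about 0.34%
```

So even the headline "nearly doubled" result moves take-up by roughly a third of a percentage point.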
On college loans,
"SBST and FSA sent a reminder email to over 100,000 borrowers who had missed their first payments. The reminder email led to a 29.6 percent increase in the fraction of borrowers making a payment in the first week after it was sent, from 2.7 to 3.5 percent."

An increase from 2.7 percent (catastrophically low) to 3.5 percent (only disastrously low) is a "29.6 percent increase."
The prize winners:
- "Farms that were sent a personalized letter were 22 percent more likely to obtain a loan, representing an increase from 0.09 to 0.11 percent."
- "SBST and the Department of Health and Human Services (HHS) sent one of eight behaviorally designed letter variants to each of more than 700,000 individuals. Those sent the most effective version of the letter were 13.2 percent more likely to enroll in health insurance than those sent no letter, with enrollment rates of 4.56 and 4.03 percent, respectively."
In addition to the unbelievably tiny rates, it does sound a bit like "Eight magicians reporting ESP abilities were tested. The best of the eight was able to predict 5 out of 10 cards in a row..."
I do have to commend the report for honesty: at least they put the tiny percentages in the summary rather than just presenting the percent increases in tiny percentages. They make for great case studies in statistics classes on the danger of selling an increase from 0.09% to 0.11% as a big 22% increase. But these are tiny, tiny effects.
With these examples, you see my point. What is this about? In a word, spam. The government wants to send you spam emails, spam letters, and spam text messages.
OK, let's use the polite word "marketing." But once it's put that way, a reaction becomes obvious. Yes, the government needs to do a better job marketing its programs. But is hiring a bunch of behavioral-science academics the best way to do this, or would it be more effective to hire some real marketing consultants? I'm sure it's better for the academic behavioral scientists who want to feel important or score government contracts, but really, if we're going to be scientific, we should let any good marketing organization compete with the behavioralists in the writing of spam.
Equally obvious, spam might be effective the first time, but rapidly falls off. No wonder we're talking about raising 4.03% to 4.56% responses. The Nigerian princes with a gift for you have been consulting with the same behavioralists.
Spam may be quite difficult to scale. Once all 46 job training programs start sending weekly nudges, how effective will they be? Once you have to wade through nudges to buy an electric car, "clean diesel" (whoops), put solar panels on your roof, fill out your taxes, sign up for that 529 college savings plan, eat more cheese, and on and on, will each have any impact anymore? And once the con artists and spammers learn to simulate government emails, and anti-spam programs weed them out, what happens?
This will be harder because so many programs work at cross purposes. I note wryly that the report starts with "boosting retirement savings nationwide" on page 1. Of course, the Administration's economic policies have been desperately trying to get Americans not to save but to spend instead, from trillion-dollar stimulus to ultra-low interest rates, for 8 years running. So which is it? Or will we soon get contradictory nudges?
In sum, yes, a simpler bureaucracy would be nice. If a few forms get simpler and a few people get help, it's hard to object. If bureaucracies start regularly monitoring the effectiveness of their programs, even better. But is America's problem right now really not enough spam?
Will it have a big effect? For now, this seems mostly a device for behavioral-science academics to get funding for more research, chewing up a small amount of Federal dollars and doing little harm along the way.
The good news: I expected grand plans for the Federal Department of Nudging. The effort so far seems limited to trying to get existing government programs to work better.