Program Evaluations Are a Waste of Money

Business schools teach aspiring managers to avoid "information bias" — that is, the tendency to seek more information even when it will have no effect on one's decision-making. That sounds like an obvious lesson, but it's not one the federal government has learned. Lawmakers routinely pay for formal evaluations of social programs, apparently knowing all the while that the results will not affect their support for those programs.

From job training to preschool, this year's House, Senate, and White House budget proposals all continue to offer funding for programs that have performed poorly on the government's own evaluations. It is a wasteful, disingenuous approach to social policy, but it need not continue. If we were to tie funding directly to the results of evaluations, the whole conversation about program evaluation would become more serious.

The perfect case study is Head Start, the oldest federal preschool program. The Head Start Impact Study — a state-of-the-art, multi-site experimental evaluation set in motion by a law Bill Clinton signed in 1998 — came with a price tag of $28 million. Rationally, lawmakers should not have paid for that study unless they expected the results to affect their support for the program. If the Impact Study showed Head Start to be effective, they should want to increase funding and look for ways to expand the program's reach. If it failed to show effectiveness, they should presumably want to eliminate the program, or at least decrease support and redirect some of the funding toward back-to-the-drawing-board research.

Rationality did not prevail. The Impact Study failed to show lasting effects, yet Head Start is still alive and well. In fact, a couple of months after the study's final results were released, the Obama administration proposed increasing funding for Head Start, touting the "success" of the program and the "historic investments" the administration had already made in it. The White House did not say what it meant by "success," but clearly it must have been judging Head Start on some criteria that the Impact Study did not cover. So why pay for the study in the first place?

Head Start's defenders argue that the Impact Study is not capturing "sleeper effects" that will emerge later in the participants' lives. So if the Impact Study had shown positive effects, they would have said, "We should support Head Start because of these positive effects." Instead, they say, "We should support Head Start because of sleeper effects suggested by other research." Since the decision is the same either way, the Impact Study was a waste of taxpayer money.

Another way that the White House deflected the Impact Study's results was to cite its upcoming rewrite of performance standards for Head Start providers. However, a follow-up to the main Impact Study found that variation in Head Start program quality had no significant effect on student outcomes. That was apparently no problem for the administration. When its new standards were finally proposed this summer, there was no reference to the follow-up report's findings. Again, the Impact Study appears remarkably useless to the very government that funded it.

Democrats and Republicans share the blame. The legislation that authorized the Impact Study passed with large majorities of both parties. And, like the White House, both houses of the Republican-controlled Congress proposed budgets this year that would fully fund Head Start. So there is a bipartisan consensus in Washington both for evaluating Head Start and for disregarding the results of that evaluation.

Dropping the studies altogether would be preferable to paying for them and then ignoring the results. The better solution, however, would be to legally tie program funding to the evaluations. Make the existence of Head Start and other programs contingent on showing impacts on pre-specified outcome measures. That would require lawmakers to be clear about the reasons they support or oppose particular programs. If they protest that the benefits of their favorite program are not necessarily captured by a formal study, the natural question would be, "Since the study has no chance of changing your mind, why do you want taxpayers to fund it?"

There would be practical difficulties, of course. One can imagine the special pleading that would follow a poor evaluation: "My favorite program almost achieved its required impact, so we shouldn't penalize it." A stubborn Congress might pass new legislation that simply restores funding to pre-evaluation levels. But the purpose of tying dollars to results is not so much to force an immediate policy change as it is to generate a more serious discussion about what we expect from social programs. It's a discussion that is long overdue.

Jason Richwine is a public-policy analyst in Washington, D.C.
