Robert VerBruggen - September 1, 2015

ProPublica has an interesting new analysis:

Prices for The Princeton Review's online SAT tutoring packages vary substantially depending on where customers live. If they type some ZIP codes into the company's website, they are offered The Princeton Review's Premier course for as little as $6,600. For other ZIP codes, the same course costs as much as $8,400.

...

ProPublica tested whether The Princeton Review prices were tied to different characteristics of each ZIP code, including income, race and education level. When it came to getting the highest prices, living in a ZIP code with a high median income or a large Asian population seemed to make the greatest difference.

The analysis showed that higher income areas are twice as likely to receive higher prices than the general population. ... Customers in areas with a high density of Asian residents were 1.8 times as likely to be offered higher prices, regardless of income.

Apparently Princeton Review sets its prices using a system that takes into account things like the cost of doing business and the "competitive attributes" of different places. The tutors typically live in the same area as the students, even though the tutoring is offered online as well as in person.

ProPublica ties this in with concerns about race and online transactions, quoting a White House report worrying that "algorithmic decisions raise the specter of 'redlining' in the digital economy — the potential to discriminate against the most vulnerable classes of our society under the guise of neutral algorithms."

Is Princeton Review's system — which is not an algorithm, and draws distinctions among entire cities, states, and regions rather than individual neighborhoods — a guise, or does it really just reflect supply and demand? The notion that Asians use test prep disproportionately lurks in the background of the article (there's even a reference to Tiger Moms in the title), but it never comes to the fore.

So, here's a chart with data from an academic study, via Inside Higher Ed (hat tip to Education Realist):

Basically, Asians, especially East Asians, use test prep much more than kids from other racial and ethnic groups — especially whites and Hispanics, who together constitute about 80 percent of the population. Therefore, all else equal, we should expect areas with high concentrations of Asians to have higher test-prep prices, based on nothing more than the law of supply and demand. (By the way, the high use of test prep among blacks is consistent with other studies.)

ProPublica also notes the concept of "disparate impact," which happens when a seemingly non-discriminatory business practice — in this case, charging higher prices in places with higher demand — affects some racial groups more than others. In some areas of the law, disparate impact can make a business's policy presumptively illegal, with the company given the burden of showing that its policy is a business necessity.

The disparate-impact theory doesn't (yet) apply to online pricing, and respecting the laws of supply and demand would seem to be a business necessity, but this does raise a fundamental question about the doctrine: If racial groups differ in terms of how they interact with various businesses, why would we expect business practices to affect all racial groups equally?

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Questioning the Common Core

Frederick M. Hess - September 1, 2015

This summer, Republican presidential aspirants have roundly criticized the Common Core reading and math standards as federally supported and educationally unsound — and been lambasted by Beltway pundits for having the temerity to do so. The Washington Post editorial page laments that Republican "ideologues have so disfigured Common Core that supporters ... now dare not speak its name." The vice president for education at the Center for American Progress charges that "opponents of the Common Core have embarked on misinformation campaigns." Frank Bruni of the New York Times dismisses GOP candidates as "excessively alarmed."

In fact, the concerns the candidates are airing are legitimate and relevant.

The Common Core started with a reasonable and easy-to-like premise: American students would benefit if more states chose to use similar reading and math standards. From there, advocates made three giant mistakes. First, they convinced themselves that the Common Core's transformative potential meant it was worth rushing into place — with aggressive support from the federal government via the Race to the Top program and more than $350 million to support Common Core tests. Second, they then switched gears and publicly insisted that the Common Core was really just an innocuous, technical exercise, and that public discussion and debate were unnecessary. Third, when skeptics raised questions about any of this, they dismissed them as either malicious or obtuse.

As Common Core has gone from big idea to more prosaic reality, some who liked the notion's promise have been troubled by what they've seen in practice. Indeed, the fact that Republican governors like Scott Walker and Chris Christie have shifted from support to opposition looks pretty reasonable — and hardly evidence of perfidy — when seen in this light. In fact, rather than spreading misinformation, it turns out that Republican candidates are correct that the Common Core is not just about "higher" standards, but also about how teachers teach and how students are taught.

Common Core's architects saw a window of opportunity to bring transformative change to American schooling. They quite consciously bundled their standards with a dozen "instructional shifts" that were supposedly essential for effective implementation. These shifts stretch well beyond the usual stuff of reading and math standards; they include dictates specifying that:

• "Informational text" should account for 50 percent of reading in elementary school and 70 percent of reading by high school, while fiction and poetry should account for no more than 50 percent of elementary reading and just 30 percent of high-school reading.

• "Close reading" (modeled on the way graduate students in literature deconstruct texts) should be the model for how students approach text.

• "Conceptual math" (think of the picture-driven worksheets that have garnered online notoriety) should be the foundation of math instruction.

Now, changes like these can make good sense, at least for some students, at some times, and when done well. But it's flat wrong to suggest that these don't change how students are taught, and it's ridiculous to dismiss those concerned about the changes as misinformed (especially because the changes are frequently not done well). Indeed, it's some of these shifts — like students reading fewer novels and having to wrestle with incomprehensible math worksheets — that have occasioned so much blowback. The supposedly misguided critics often seem more tuned in to the reality of the Common Core than the cheerleaders are.

Indeed, five years ago, when the Common Core was still brand spanking new, Chester E. Finn Jr. and Mike Petrilli (then president and vice president, respectively, of the Thomas B. Fordham Institute) — both ardent champions of the Common Core — acknowledged, "Standards often end up like wallpaper. They sit there on a state website, available for download, but mostly they're ignored." What really matters, they noted, is how standards affect state tests, curricula, teacher evaluations, and what teachers and students do — in other words, the stuff that gets brushed aside when advocates insist the Common Core is nothing more than an innocuous commitment to "higher" standards.

The Common Core was never intended to be a mere exercise in wallpapering. Its champions intended for Common Core to be the backbone for an extensive set of changes to testing, instruction, textbooks, teacher evaluation, and much else. There's nothing wrong with that aim; but there is something wrong with advocates denouncing those who call them on it, especially with the hubris and tone-deafness that have helped turn a reasonable notion into a divisive one.

There's much to be said for common state reading and math standards, in the abstract, and reasonable people can make a cogent case for the Common Core, in particular. But reasonable people can also look at the Common Core and see a federally supported effort to impose goofy instructional practices and half-baked reforms on America's schools. Perhaps that's why support for the Common Core has dropped steadily in recent years. In any event, raising these concerns is neither alarmist nor misinformed. Ultimately, advocates would do well to acknowledge that the shaky status of their enterprise is due more to their tactics and the Common Core itself than to the criticisms voiced by Republican contenders for the presidency.

Frederick M. Hess is director of education policy studies at the American Enterprise Institute. His books include Common Core Meets Education Reform (Teachers College Press 2013).

GOP Lawmakers and the 'Gig Economy'

Ian Adams - September 1, 2015

The partisan battle over the "gig economy" — epitomized by ride-sharing services like Uber and Lyft and space-sharing services like Airbnb — has crept into the early stages of the presidential election. Republican candidates like Jeb Bush and Marco Rubio have voiced their support for these new platforms, while Democratic candidates — Hillary Clinton in particular — have expressed concern about the contractor-dependent model upon which gig-economy firms rely.

The Los Angeles Times has proudly declared 2016 to be "the first Uber election." But despite the attention paid to the issue, regulating the gig economy has not, thus far, been a federal concern. While the candidates offer platitudes, the gritty details of policy have been the domain of states and municipalities.

In the nation's big cities — Democrats control 13 of the 15 most populous — gig-economy firms have been on the defensive. New York mayor Bill de Blasio sought unsuccessfully to rein in Uber by freezing the firm's growth in the city for one year. Philadelphia, another bastion of Democratic governance, undertook its own effort to crack down on Uber by seizing UberX vehicles. (UberX is Uber's low-cost option, relying on part-time drivers using their own vehicles.) Despite those efforts, more than 1 million rides have been furnished by the service in the City of Brotherly Love.

At the state level — where Republicans maintain control of 33 governorships and 68 of the 98 partisan legislative chambers — gig-economy firms have fared better, if for no other reason than that Republicans' top priority has been to address questions surrounding the legality of the gig economy.

Looking solely to ridesharing laws, the trend is clear. Of the 24 states in which Republicans maintain a "trifecta" (the governor's office and both legislative chambers), two-thirds have passed laws legalizing ridesharing. Of the seven Democratic-trifecta states, only California has passed similar legislation. Among the 31 states with Republican governors, 19 have passed such laws, and among the 18 states with Democratic governors, only seven have.

It's evident that the gig economy is a priority where the Republican party is in control. But Republicans have not always covered themselves in glory when it comes to these new enterprises. In Kansas, the legislature fought to require that ridesharing drivers maintain otherwise optional "comp and collision" insurance, which ensures compensation for the driver in the event of an accident. The benefit of the coverage accrues directly to the driver, and the ridesharing companies cried foul about having to provide security for lending institutions with liens on the vehicles. What's more, the enhanced coverage level was not recommended by the National Association of Insurance Commissioners in its compromise white paper on the issue because of the costs associated with it. Only after Uber ceased operating in Kansas was a compromise reached that instead calls on Uber to notify drivers of the benefits of such coverage.

Now Utah — home to Republican legislative super-majorities and an early adopter of legislation to accommodate ride-sharing — is on the verge of making exactly the same misstep. Such blunders might be better attributed to cock-up than conspiracy. Republican states like Utah fundamentally want to see the gig economy succeed, but they are prone to poor execution.

Moving forward, it behooves red states to fine-tune their approach. The political stakes are high. Rural and suburban Republicans, the party's core constituencies, happen to be among the least likely to avail themselves of these new services. Instead, the electoral upside that could accrue to the GOP by embracing the gig economy lies in the services' decidedly urban user base. Using a free-market message to disrupt Democrats where they are most comfortable would be truly innovative.

Ian Adams is senior fellow and Western region director of the R Street Institute.

Do Federal Prisoner Reentry Grants Work?

David B. Muhlhausen - August 31, 2015

The Department of Labor has released the results of its two-year evaluation of the federal Reintegration of Ex-Offenders (RExO) grants, which are designed to help ex-offenders find employment and reduce recidivism. The findings offer important insight into how the nation helps the nearly 600,000 prisoners released back into society each year.

The prognosis for these individuals staying away from crime is not good. Over two-thirds of former prisoners are rearrested within three years. Given the high likelihood that former prisoners will continue their old ways, Americans naturally assume that providing employment-focused training will help these ex-cons reintegrate into society as law-abiding citizens.

But the RExO evaluation provides evidence that the grants are ineffective. While disappointing, the results are not surprising: Failure is the norm for federal social programs.

The RExO program, which began in 2005, provides grants to local organizations to administer employment-focused prisoner-reentry programs. The rigorous evaluation assessed the effectiveness of federal grants to 24 local employment-based reentry programs. Almost 4,700 former prisoners were randomly assigned to program and control groups. In addition to funding from their own sources, each of the programs received over $2.9 million in RExO grants over a five-year period.

The services received by the participants turned out to have only a slight effect on employment and earnings, and virtually no impact on recidivism.

One year after random assignment, the program group was 3.5 percentage points more likely than the control group to have worked at all during the year. During the following year, the program group's employment rate was only 2.6 percentage points higher, a difference that was not statistically significant at the traditionally accepted level. On average, the program group earned $883 more than the control group over the two-year period.

The services provided by the RExO grantees failed to reduce participants' rates of rearrest, conviction, and reincarceration. Over two years, 42 percent and 43.2 percent of the program and control groups were arrested, respectively — a statistically insignificant difference. Further, the services failed to have an impact on convictions for new crimes — including violent, property, and drug crimes. Members of the program group were no more or less likely to be admitted to prison for new crimes or parole/probation violations.

The results cast significant doubt on the widespread belief that helping released prisoners find employment is the best way to keep them out of prison in the future. Unfortunately for public-policy purposes, it seems that nothing so straightforward will suffice.

Criminologist Ray Paternoster and his colleagues have posited a new theory: Changing an offender's identity from criminal to law-abiding citizen is a complex process that needs to precede finding legitimate employment. For instance, former prisoners need to realize that criminal offending is more costly than beneficial. Once this realization occurs, the individual can adopt a more pro-social identity that eschews "quick and easy money," such as theft and drug dealing, in favor of conventional employment.

There is some evidence to support this theory. Based on a sample of 783 recidivist males from Norway, criminologists Torbjørn Skardhamar and Jukka Savolainen found that most of the offenders gave up criminal behavior before finding legitimate employment and that becoming employed was not linked to reduced criminal behavior. Giving up on the criminal lifestyle precedes finding and maintaining employment.

If the perspective of Paternoster and colleagues is a more accurate explanation of the process of giving up crime, then helping released prisoners find employment before they are ready to give up criminal behavior may be unproductive. And this means that prisoner-reentry efforts that rely mainly on job training are not likely to succeed.

David B. Muhlhausen is a research fellow for empirical policy analysis in the Center for Data Analysis, of the Institute for Economic Freedom and Opportunity, at the Heritage Foundation and author of Do Federal Social Programs Work?

Do Guns Cause Violence?

Robert VerBruggen - August 27, 2015

At The Week, my old friend Michael Brendan Dougherty makes a "conservative case for reforming America's sick gun culture." (He and I were roommates in a disturbingly messy four-bachelor Fairfax townhouse about a decade ago.) He supports the idea of an armed citizenry, and believes people should be able to own guns to protect themselves. But in light of yesterday's events, he would make it a requirement for anyone buying a gun to have some sort of training or socialization in the culture of gun clubs.

Of course, this is a political nonstarter — it amounts to background checks on steroids, where not only your criminal history but also your character and training can disqualify you. We do have criminal checks for gun-dealer sales (and yesterday's killer apparently passed one), but attempts to require these checks on sales between private individuals, a modest measure I tentatively support, have flopped outside a handful of liberal states. It's hard to see Dougherty's plan faring even that well. And then there's the question of whether such a rule would square with the Second Amendment as the Supreme Court interprets it.

But it's worth engaging with Dougherty's assumptions about the broad link between guns and violence, because they're shared by vast swathes of the population. So, here's some of the wisdom I've accumulated in more than a decade of following gun research.

Dougherty writes:

America is really the only nation that is orderly with an almost unchallengeable state, and yet has a gun-death rate similar to much poorer Latin American nations experiencing low-grade civil wars and disorder.

Yes, many of our firearm-related deaths are suicides. But our firearm-related homicide rate is noticeably higher than every comparable industrialized nation. And furthermore, there seems to be a strong correlation between reduced access to firearms and a reduced rate of suicide.

"Gun deaths" are a pet peeve of mine, and Dougherty only partly addresses my concern when he admits that they include suicides. The notion that guns and "gun deaths" go together is practically tautological, and unhelpful to boot. A country with no guns by definition has no gun deaths, but that doesn't mean it has fewer violent deaths overall.

To start with a point of agreement, I'm somewhat sympathetic to his point about suicides. Unlike with homicides — where a gun can enable one or prevent one — the effect of guns on suicide can only be bad. There's decent research suggesting that gun ownership does modestly increase suicide rates; suicide can be impulsive, so it's not true that someone without a gun will necessarily find another way. But many people — including me, and I'm guessing most conservatives in general — find repugnant the idea of reducing people's "access to firearms," not on the basis of any demonstrated suicide risk, but simply on the off chance that they might use a gun to harm themselves.

If not "gun deaths," what about "firearm-related homicide"? This too is a nearly useless concept, because gun homicides and non-gun homicides interact with each other. Someone who can't get a gun may simply kill with a different weapon instead. (Even in gun-drenched America, about a third of murders are committed with no gun.) And someone who can get a gun might defend himself against an assailant who doesn't have one. So we should always focus first on total violence, not gun violence, even when we're looking for the effects of guns.

The simple correlation between gun ownership and violence often disappears entirely when you take this into account, as I've shown with data on both states in the U.S. and developed countries. This shows that guns are not a primary driver of differences in murder rates — whatever effect they have is drowned out in the data by things like demographic differences, culture, and so forth.

Using complicated statistical techniques, you can try to tease the effect of guns out of this mess, and some researchers have purported to do so. But as statistical techniques become more complicated, they also become more subjective and run the risk of falling victim to political motivations. The two fundamental laws of gun studies are: One, if a given author reaches a pro- or anti-gun result in one study, all his future results will point in the same direction; two, if it appears in a public-health journal, the results will suggest guns are bad. Relatedly, a general note of caution is always in order when it comes to social science: It's impossible to "control" for everything besides guns that might affect violence, especially culture.

Essentially, the tools currently available to scientists aren't precise enough to resolve this debate, leaving too much wiggle room for researchers to reach the conclusions they want. We don't have consensus, but rather groups of researchers reaching conflicting results. Here's a criticism of the study linked above, for example. 

We see a similar thing in the debate over shall-issue concealed-carry laws, under which any civilian without a serious criminal record can get licensed to carry a gun. Some state laws are incredibly permissive — a few don't even require permits or training, and I got my Virginia license on the basis of a Wisconsin hunter's-safety certification I earned when I was 12. For all the state knew, I hadn't touched a gun in more than 15 years.

This would seem to be a prime example of the anyone-can-pack-heat culture Dougherty wants to reform. But as with the research on gun ownership, 20 years of studies on these laws have taught us almost nothing. Some studies suggest the laws reduce crime. Others suggest they have no effect. Still others say they increase crime. And even the most recent study reaching the anti-gun conclusion admitted that the results are incredibly sensitive. The most the authors could say is that the results are anti-gun if you use the techniques they happen to prefer.

I said we've learned almost nothing. What we have learned is this: A bunch of states started letting almost any random person walk around with a gun, and if anything good or bad resulted, it doesn't reliably show up in the data. That's something in itself.

Other ways of studying gun restrictions are even less conclusive. For example, the "public health" crowd is quite fascinated by "case-control" studies, where they compare people who got murdered with demographically similar people who didn't get murdered, and pretend it means something that the people who got murdered were more likely to own guns. And studies looking at states before and after they implemented gun-control measures range from interesting if only suggestive to laughably bad.

I'm not the only person to reach the conclusion that the role of guns in violence is rather subtle. One interesting example is the Harvard psychology professor Steven Pinker. He's no fan of the NRA; he's from Canada, for God's sake. But in his book about the decline of violence, The Better Angels of Our Nature, the discussion of "weaponry and disarmament" is practically a footnote — about one page in an 800-page tome, relegated to a section about the "forces that one might have thought would be important [in major trends in violence] ... but as best as I can tell turned out not to be." He doesn't even bother to "endorse the arguments for or against gun control," and he writes that "human behavior is goal-directed, not stimulus-driven," adding that "anyone who is equipped to hunt, harvest crops, chop firewood, or prepare salad has the means to damage a lot of human flesh." Similarly, in Ghettoside, her interesting exploration of black-on-black crime in LA, the journalist Jill Leovy writes — in an actual footnote — that "guns are not a root cause of black homicide." The criminologist Gary Kleck tends to be highly skeptical of claims that guns make a difference, on net, one way or the other.

In short, yes, it's possible that confining gun ownership to the people willing to jump through various government hoops might have some marginal effect on violence. But that effect will probably be so small as to be difficult to detect, and there may be no effect at all.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Birthright Citizenship Encourages Assimilation

Alex Nowrasteh - August 27, 2015

Many Republicans are falling over themselves to echo Donald Trump's call to end birthright citizenship. Experts will be debating the legality of this for some time — many say a constitutional amendment would be needed — but the real-world impact of birthright citizenship is more important than the legal nuances. Granting citizenship to those born here is an insurance policy for a broken immigration system: It encourages the children of illegal immigrants to assimilate.

Currently, there are roughly 4 million U.S.-born children of illegal immigrants and 17 million minor children of legal immigrants. Those already born wouldn't be affected by a repeal, but roughly 1 million babies are born every year to immigrants. As immigration attorney Margaret Stock wrote, "If proponents of changing the Fourteenth Amendment have their way, every baby born in America will now face a bureaucratic hurdle before he or she gets a birth certificate." That's a huge number of newborns to annually condemn to automatic illegal status — and doing so would substantially increase the number of illegal immigrants in the country.

That would be bad enough, but the bigger problems would emerge later, as this larger population of illegal immigrants would assimilate more slowly. Assimilation, or the politically correct term "integration," mostly occurs in the second and third generations. Denying citizenship to children of immigrants would deny them legal equality in the United States, stunting their ability to culturally and economically assimilate.

Imagine being born and growing up here and being constantly reminded that you are not a citizen and will likely never be one. That scenario is theoretical for Americans, but Koreans born in Japan have experienced just that, and the results are ugly. The Korean minority, called zainichi, are a legal underclass discriminated against by the government. This causes deep resentment and a proneness to crime and political extremism. The zainichi population grew even though Japan has virtually zero legal immigration. By contrast, Korean immigrants and their descendants have thrived in the United States, where their U.S.-born children are citizens.

And successful assimilation isn't limited to Korean Americans. According to research from University of Washington professor Jacob Vigdor, immigrants and their children from all backgrounds are culturally, linguistically, and economically assimilating today at about the same rate that immigrants assimilated 100 years ago. Nobody today thinks the descendants of the Italian, Polish, or Russian immigrants of early last century failed to assimilate.

The negative effects of making citizenship much harder or impossible to attain go way back. Republican Rome tightened its citizenship rules after the Second Punic War ended in 202 BC. Romans turned their backs on a previous open-door policy that allowed noble families to immigrate and naturalize while also granting citizenship to loyal allies. The new immigration restrictions led to an uprising in cities pushing for Roman citizenship — one of the stranger civil wars in history. To quiet the unrest, Rome finally reinstated the older rules that had served it so well.

America doesn't face a revolt of allies demanding citizenship, but it does face millions of illegal immigrants, their U.S.-born children, and the challenge of assimilating them. There will always be some illegal immigrants in the United States, regardless of reform or levels of enforcement. Birthright citizenship is an insurance policy that guarantees their children will assimilate instead of simmer on the margins of society.

We are in the midst of a failed immigration policy that has produced around 12 million illegal immigrants. Now is not the time to cancel birthright citizenship and its benefits.

Alex Nowrasteh is the immigration policy analyst at the Cato Institute's Center for Global Liberty and Prosperity.

The Case for Utility Price Caps

Steve Pociask - August 27, 2015

Just last month, the U.S. Energy Information Administration announced that natural gas had surpassed coal — for the first time ever — as the main source of electricity generation. The news may have sparked delight for those who want to see the end of coal, and for those who view the recent boom in domestic natural-gas production as a means to lower consumer prices.

To others, however, the news brought puzzlement. While consumer gas prices fell by 24 percent from 2005 to May 2015, residential electricity costs rose by 37 percent. The stark divergence in prices has left policymakers and regulators wondering — if natural-gas prices are falling, and if natural gas is becoming the most important input in electricity generation, why are consumer utility prices still rising and by so much?

There are a number of easy explanations for why electricity prices are not dramatically decreasing — such as regulatory mandates that are increasing operational costs and shutting down lower-cost coal-fired plants, as well as the investment costs that are needed to improve the basic infrastructure of the power grid and protect plants from terrorist attacks, particularly cyber attacks. These are obvious costs that all electric utilities face.

Not so obvious, but very significant, is that many electric utilities are still regulated in much the same way as they were over 70 years ago. That form of regulation, "rate-of-return regulation," guarantees a "fair return" for public-utility investments in plant and equipment, and it has long been known to create incentives to run up costs. According to reports from earlier this year, some utilities are accumulating excess capital for the purpose of increasing their profits, not for serving the public.

Corporations are always under pressure to increase shareholder value. For rate-of-return electric companies, regulators try to prevent unreasonable utility profits by setting an allowed rate of return, say 10 percent, on the public-utility rate base. If the utility is a large one and requires more plant and equipment to serve its customers, it earns 10 percent of that larger base, which means more profit to cover its investment. Therein lies the incentive problem — public utilities get more profit by making more capital investments, needed or not.
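To make that incentive concrete, here is a minimal sketch of the arithmetic described above; the 10 percent return and the dollar figures are hypothetical, chosen only for illustration, not taken from any actual utility filing.

```python
# Illustrative sketch of the rate-of-return incentive described above.
# All numbers are hypothetical.

ALLOWED_RETURN = 0.10  # the regulator's "fair return," e.g. 10 percent

def allowed_profit(rate_base_dollars: float) -> float:
    """Profit the utility is permitted to earn on its rate base."""
    return ALLOWED_RETURN * rate_base_dollars

lean = allowed_profit(2_000_000_000)    # $2 billion in plant and equipment
padded = allowed_profit(3_000_000_000)  # same utility after adding a $1 billion plant

print(f"Lean rate base:   ${lean:,.0f} allowed profit per year")
print(f"Padded rate base: ${padded:,.0f} allowed profit per year")
# Every extra dollar of capital spending adds ten cents of allowed profit,
# whether or not the investment was needed to serve customers.
```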

As a recent example, Warren Buffett-owned NV Energy is in the midst of a regulatory-approval process with the Nevada Public Utility Commission to explore building a new billion-dollar natural-gas plant, rather than purchasing excess energy from other suppliers, as it has done in the past. While purchasing energy may be cheaper for NV Energy's customers, there is no money to be made for the utility. Building a new energy plant, on the other hand, would increase NV Energy's rate base, increase its profits, and potentially raise consumer electricity bills.

In testimony before the Nevada Public Utility Commission on June 10, the president of Wynn Resorts testified that NV Energy had announced to investors that it would grow its profits by spending more money. He estimated that the public utility has grabbed more net income than the entire Las Vegas Strip. Looks like NV Energy may be the best bet on the Strip.

Rate-of-return regulation has long been regarded as wasteful, encouraging over-investment and "gold plating" by public utilities. In the 1960s, economists began to refer to the waste as the Averch-Johnson Effect, whereby utilities invest and accumulate excess capital stock in order to "pad" their rate base and increase profits.

Several studies emerged in the 1970s that proposed ways to make utility regulation more efficient by mimicking how competitive markets work. One such regulatory reform, price caps, automates changes in utility prices by keeping utilities from increasing rates faster than market costs, thereby encouraging productivity improvements. If utilities are able to outperform the market and cover the productivity factor, they can keep any additional income as profit. In other words, price caps would give consumers lower prices and give utilities a profit incentive to become more efficient.
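The article does not spell out the exact formula, but price-cap regimes are commonly written as an inflation-minus-productivity ("CPI minus X") rule. The sketch below assumes that form; the rate, inflation figure, and X factor are hypothetical.

```python
# Minimal sketch of a price-cap rule, assuming the common "inflation minus a
# productivity factor" (CPI - X) formulation. Numbers are hypothetical.

def max_allowed_price(current_price: float, inflation: float, x_factor: float) -> float:
    """Next period's price may rise no faster than inflation minus the productivity offset."""
    return current_price * (1 + inflation - x_factor)

price = 0.12       # current residential rate, dollars per kWh
inflation = 0.03   # 3 percent growth in market-wide input costs
x_factor = 0.02    # 2 percent expected industry productivity gain

cap = max_allowed_price(price, inflation, x_factor)
print(f"Next year's rate may not exceed ${cap:.4f} per kWh")
# A utility that cuts its own costs by more than the 2 percent X factor keeps
# the difference as profit; one that falls short absorbs the gap itself.
```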

While all major telephone companies and some electric utilities have moved to price-cap regulation over the last two decades or so, rate-of-return regulation persists for some electric utilities. This means that some utilities continue to misallocate resources and over-invest in capital equipment, which pushes these unnecessary costs onto the backs of ratepayers. Price caps would give utilities the incentive to control costs and treat capital stock as just another input of producing electricity.

It is time to end archaic rate-of-return regulations and, in the absence of effective competition, move to price-cap regulation. That reform would simplify regulatory oversight, keep consumer costs lower, and allow utilities to increase profits through efficiency.

Steve Pociask is president of the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization. Twitter: @consumerpal

Clean Power Plan: Acid Rain Part 2?

Ross McKitrick - August 26, 2015

In a recent speech in Washington, D.C., EPA administrator Gina McCarthy dismissed potential criticism of the costs of the new Clean Power Plan by pointing to America's success in reducing sulfur dioxide (SO2) emissions associated with acid rain. She said (correctly) that over the past 40 years, the U.S. slashed SO2 emissions while maintaining a growing economy. She warned darkly of "special interest critics" who would claim the new rules would be a threat to the economy. "They were wrong in the '90s when they said exactly the same thing," she claimed.

Some SO2 cost estimates were indeed too high. In 1990, the U.S. passed the Clean Air Act Amendments (CAAA), which introduced a cap-and-trade system to reduce sulfur air pollution. Critics warned that it would cost hundreds of dollars per ton of abatement, yet when the permits started trading, the price soon fell below market expectations and stayed there through the late 1990s and into the early 2000s.

But the factors that caused this do not apply to CO2.

A coal-fired power plant has four options for reducing SO2 emissions: switch to low-sulfur coal, install flue-gas desulfurization systems ("scrubbers"), switch to a cleaner fuel like natural gas, or scale back operations. The latter two are the costliest. The first two are relatively inexpensive but do not work for CO2. There are no scrubbers for CO2, and there is no such thing as low-carbon coal. (Well, actually, there is: It's called water, and it doesn’t burn very well.)

Unanticipated developments also played a role in driving down the cost of SO2 abatement. Prior to the 1990s, power plants in the eastern U.S. got most of their coal from nearby mines, whose coal is high in sulfur. At the time that acid-rain legislation was being debated, railway deregulation was also being proposed, but it was not clear whether it would actually occur or how much competition would emerge in haulage. As it turned out, deregulation did happen, and increased competition substantially reduced the cost of moving low-sulfur coal from Wyoming to power plants in the East and Southeast.

Further, since power-plant operators did not anticipate this, they invested heavily in scrubbers during Phase I of the acid-rain program (1990 to 2000). In 1995, as the twin effects of scrubbers and cheap rail transport hit the market, emissions from units subject to the CAAA plunged far below expectations, taking permit prices with them. Since Phase I permits were bankable, power plants built up a large inventory to use in later years, and this kept prices low even as the cap was reduced in Phase II, which began in 2000.

The story changed after 2000. Permit prices had been projected to be $500-700 per ton in Phase II. As the stock of banked permits declined, prices trended up to $500 per ton by summer 2004, then shot to over $1500 per ton in late 2005 and early 2006 as generators coped with surging power demand and the expectation of further tightening of the emission cap. McCarthy seems conveniently to have forgotten this part of the story.

The situation changed again a year later when the EPA began to develop the Clean Air Interstate Rule (CAIR). This was a plan to group the permits by region in order to address the concentration of effects in downwind states. In 2008, as the recession hit and power demand fell, the average price paid in the annual EPA permits auction fell to $390 — below forecasts, but not dramatically so, considering the depth of the recession, which obviously could not have been foreseen in the '90s.

But a surprise court decision in July of that year blocking implementation of CAIR caused the permits market to collapse. By the next winter the regulatory uncertainty and the recession combined to push prices below $70. Needless to say, no one could have foreseen this, either. And the court battle came about because of the interstate differences in targets, which also do not apply to CO2, since concentrations are globally uniform.

Since 2010, uncertainty over the future form of the rule, the lingering effects of the financial crisis, and the rapid development of shale gas have caused SO2 permit prices to drop to a few dollars per ton. Until the EPA develops an interstate trading rule that satisfies the courts, the SO2 market is all but defunct.

It is wishful thinking to suppose that warnings about the costs of cutting CO2 emissions can be ignored, always and everywhere, just because some early estimates of SO2 control costs were too high, over some intervals. The main factors causing the overestimates do not apply to CO2, and absent these, SO2 permit prices would have been in line with, and occasionally far higher than, forecasts. Warnings about the economic impacts of the Clean Power Plan need to be taken seriously.

Ross McKitrick is an adjunct scholar at the Cato Institute.

Congress vs. Campus Speech Restrictions

Thomas K. Lindsay - August 25, 2015

Of late, there has been a deluge of news accounts detailing gross violations of free speech and debate on American campuses. From campus speech codes, to commencement speaker "dis-invitations," to naked ideological indoctrination in the classrooms, our universities, whose defining mission is the unfettered, nonpartisan quest for truth, are instead becoming havens for conformism, empty shells of the Socratic ideal from which they originally sprang.

But this oppressive regime may be beginning to crumble, at least if some members of the U.S. Congress have their way. In June, the House Judiciary Committee's Subcommittee on the Constitution and Civil Justice held a hearing titled, "First Amendment Protections on Public College and University Campuses," which investigated the extent to which free speech is still protected on taxpayer-funded campuses.

The findings from the investigation were not heartening, to put it mildly. As a result, Rep. Bob Goodlatte (R., Va.), chair of the House Judiciary Committee, recently sent a pointed letter to 162 public colleges and universities whose policies fail to ensure the First Amendment rights of their professors and students.

The House committee's list of freedom-suppressing public schools comes from research conducted by the nonprofit Foundation for Individual Rights in Education (FIRE), whose announced mission is to protect intellectual liberty on America's campuses. Surveying FIRE's list of offenders, we find a number of public flagships, among them the University of Alabama, the University of Georgia, the University of Iowa, the University of Kansas, the University of Michigan-Ann Arbor, and Ohio State University. In my home state of Texas, taxpayers fund ten named offenders, among them the state's two flagship institutions, the University of Texas-Austin and Texas A&M University-College Station.

It is illegal for any public college or university to maintain and enforce speech codes that violate the First Amendment-guaranteed rights of faculty and students. At the June Subcommittee on the Constitution and Civil Justice hearing, Greg Lukianoff, FIRE's president, testified that "speech codes — policies prohibiting student and faculty speech that would, outside the bounds of campus, be protected by the First Amendment — have repeatedly been struck down by federal and state courts. Yet they persist, even in the very jurisdictions where they have been ruled unconstitutional. The majority of American colleges and universities maintain speech codes."

Of the schools nationwide in violation of the First Amendment, the 162 recipients of the House committee's letter were found to be the worst offenders. Chairman Goodlatte writes, "In FIRE's Spotlight on Speech Codes 2015, your institution received a 'red light' rating. According to FIRE, a 'red light' institution 'is one that has at least one policy that both clearly and substantially restricts freedom of speech.'" Hence, Goodlatte writes "to ask what steps your institution plans to take to promote free and open expression on its campus(es), including any steps toward bringing your speech policies in accordance with the First Amendment."

The named offenders have until August 28 to reply to Chairman Goodlatte's inquiry. How they choose to respond will determine the committee's course of action.

With this strong move by the House committee, we witness the academic world turned upside down: Academic freedom has always been supported, and rightly, as a defense against anti-intellectual pressure brought on universities by the political branches. The deeper defense of academic freedom is its indispensability to the nonpartisan truth-seeking that defines higher education's mission. But what happens when those who would deprive students and faculty of their First Amendment freedoms are within the universities themselves? This, unfortunately, is the crisis in which many universities find themselves today. For the solution, Congress has taken it upon itself to educate the educators in what those who supervise our universities should already know, namely, that when intellectual oppression rises, scientific progress and democratic deliberation decline.

Given the stakes involved, it is encouraging to see that there is growing bipartisan support for restoring freedom on our campuses. While Representative Goodlatte is a Republican, in the past year, two Democratic governors — Terry McAuliffe of Virginia and Jay Nixon of Missouri — have signed legislation banning "free-speech zones" at all public universities in their states. As I have argued previously, in America, under the First Amendment to the Constitution, everywhere should be a free-speech zone, not simply the restricted (and restrictive) spaces that the majority of universities today unconstitutionally deign to provide for students.

Although legislative action might prove necessary in the event that universities decline the House committee's plea to follow the Constitution, it would be heartbreaking if these institutions had to be compelled by a political branch to jettison their political agendas and return to disinterested inquiry. It would mean that American higher education has so lost any sense of its defining — and ennobling — purpose that it now has to be guided by those outside it, rather than guiding them, as it ought.

As a former university professor, I have seen firsthand the effect that the intolerance on our campuses has on the minds and souls of our students. As is the case in political regimes that suppress free speech, university policies that stifle debate produce an atmosphere of anxiety, distrust, and ultimately cynicism among those who suffer it. "Students' education suffers when colleges and universities infringe on free speech," observed Azhar Majeed, director of FIRE's Individual Rights Education Program.

Rightly said. Fear, intimidation, and uniformity are usurping the free, robust inquiry and debate that is the lifeblood of a genuine institution of higher learning, undermining both academic truth-seeking and democracy, which depends on an informed citizenry. The effect of campus-promoted intolerance is to jettison an informed, independent-minded citizenry and to replace it with a cowed, guilty, uncritical herd. From the students suffering under this regime will in time come our nation's leaders. Will they be able to face without blinking the profound moral challenges that every generation must face?

If so, it won't be due to their education. It will be in spite of it.

Thomas K. Lindsay directs the Centers for Tenth Amendment Action and Higher Education at the Texas Public Policy Foundation and is editor of SeeThruEdu.com. He was deputy chairman of the National Endowment for the Humanities under George W. Bush.

Courts Worsen the Pension Mess

Josh B. McGee - August 24, 2015

Court cases are creating a perilous standard for addressing the public-pension mess.

In May, Illinois's highest court said the state's constitution forbids even modest changes to the pension system. The next month, New Jersey's supreme court gave Governor Chris Christie carte blanche to refuse to pay into the state's pension funds.

These are two different courts, interpreting the laws of two different states. But if this signifies the approach courts will take elsewhere, it's the worst of all possible worlds. Eliminating options for reform while letting politicians underfund benefits puts workers and taxpayers between the proverbial rock and hard place. Workers may be forced to watch their retirement security go from squeezed to crushed, and taxpayers could be stuck with rising taxes, fewer services, and a weakened local economy.

Two things have to happen. First, leaders need to immediately adopt responsible, workable plans to adequately fund benefit promises, and second, everyone needs to work together to identify the changes necessary to create fair, sustainable pension systems for the future. Unfortunately, courts are encouraging leaders to do the exact opposite.

And the results of inaction are all too predictable.

Chicago illustrates the impact that pension mismanagement can have. In May, Moody's dropped the city's credit rating by two notches to junk status. The ratings agency also left the city on notice for future downgrades if it did not take concrete steps to deal with its looming fiscal crisis.

While Mayor Rahm Emanuel protested the downgrade, there is little disagreement between Moody's and the mayor regarding the city's significant financial challenges. The mayor acknowledged that "Chicago's financial crisis is very real and at our doorsteps." The primary point of disagreement seems to be the magnitude of the impact Chicago's underfunded pensions will have on the city's finances.

The city's four pension funds currently have less than half the money they need to make good on the retirement benefits public workers have already earned.

The city must contribute a lot more money to keep the funds from running out of cash in the relatively near term, a circumstance that would result in retirees' relying on direct budgetary payments from the city.

But there are only four levers the city might use to ameliorate the dire situation: tax increases, reductions to public services, changes to future retirement benefits, and restructuring of other debt. And the Illinois supreme court — interpreting a provision of the state constitution that says membership in a public pension program "shall be an enforceable contractual relationship, the benefits of which shall not be diminished or impaired" — recently took one option off the negotiating table, striking down a law that modestly reduced benefits for current workers and retirees. (For example, the law ended automatic cost-of-living increases for retirees and raised retirement ages for current workers.) This severely restricts the city's ability to find a solution without significant impacts on the other three, which of course should worry Chicago's creditors — Moody's primary concern.

So why does the mayor take issue with Moody's? The city may believe it is not on the hook for making pension payments above and beyond what is currently specified in statute. But the legally required contributions for some of the city's funds are so low that, with minimum payments, they will run out of money in relatively short order. Thus, with immediate benefit cuts off the table, the status quo is very likely to persist until the funds simply run out of money.

This could mark the beginning of a worrisome trend for workers. Given that retirement-plan sponsors in many jurisdictions appear to have very little flexibility to negotiate concessions from workers, what happens if sponsors simply force the issue and allow the funds to run out of money? Will the courts force governments to make benefit payments directly from annual budgets?

The tentative answer, at least in New Jersey, appears to be no. The New Jersey supreme court recently ruled that the state's 2011 commitment to adequately fund retirement benefits did not create an enforceable contract with workers, even though a number of members of the legislature have said that was their intent. Shortly thereafter, Governor Christie said flat-out that he would let the pension funds run dry unless workers agreed to concessions. This is political blackmail at its worst, and makes an already-underfunded system all the more precarious for workers.

What's more, it is not clear how this strategy protects taxpayers. In 2014, contributions to New Jersey's pension plans totaled $4.5 billion, but pension benefit payments were $9.4 billion — in other words, annual contributions would need to more than double just to make benefit payments. Even if workers agreed to concessions, it is unlikely that the savings would be enough to cover the immediate cash-flow deficit without service cuts or increased revenue. And ignoring the problem only makes the potential impacts worse.

It is unclear whether courts in other states will take a different tack, but the New Jersey ruling certainly does raise questions about the judiciary's willingness to force policymakers to appropriate dollars specifically for pensions.

The recent court rulings highlight a significant flaw in the structure of our current public retirement systems. The benefits workers earn are not directly connected to annual contributions or investment earnings. And since benefit payments are distant, this creates both the incentive and the opportunity for governments to understate the cost of benefits, systematically undermining the sustainability of the retirement system and in turn the security of the benefits workers have rightly earned.

Unfortunately, those who should be working to protect workers' benefits, including the pension plans themselves and the actuaries they hire, have too often aided governments in this endeavor. All of this should lead workers to ask, "What good is a benefit promise if there is not an equally strong funding commitment to back it up?"

It is time to stop engaging in pension brinksmanship and begin a real discussion about comprehensive reform.

Josh B. McGee is a senior fellow at the Manhattan Institute and vice president of public accountability at the Laura and John Arnold Foundation.

Program Evaluations Are a Waste of Money

Jason Richwine - August 21, 2015

Business schools teach aspiring managers to avoid "information bias" — that is, the tendency to seek more information even when it will have no effect on one's decision-making. That sounds like an obvious lesson, but it's not one the federal government has learned. Lawmakers routinely pay for formal evaluations of social programs, apparently knowing all the while that the results will not affect their support for those programs.

From job training to preschool, this year's House, Senate, and White House budget proposals all continue to offer funding for programs that have performed poorly on the government's own evaluations. It is a wasteful, disingenuous approach to social policy, but it need not continue. If we were to tie funding directly to the results of evaluations, the whole conversation about program evaluation would become more serious.

The perfect case study is Head Start, the oldest federal preschool program. The Head Start Impact Study — a state-of-the-art, multi-site experimental evaluation set in motion by a law Bill Clinton signed in 1998 — came with a price tag of $28 million. Rationally, lawmakers should not have paid for that study unless they expected the results to affect their support for the program. If the Impact Study shows Head Start is effective, they should want to increase funding and look for ways to expand the program's reach. If Head Start is not proven effective, lawmakers should presumably want to eliminate the program, or at least decrease support and redirect some of the funding toward back-to-the-drawing-board research.

Rationality did not prevail. The Impact Study failed to show lasting effects, yet Head Start is still alive and well. In fact, a couple of months after the study's final results were released, the Obama administration proposed increasing funding for Head Start, touting the "success" of the program and the "historic investments" the administration had already made in it. The White House did not say what it meant by "success," but clearly it must have been judging Head Start on some criteria that the Impact Study did not cover. So why pay for the study in the first place?

Head Start's defenders argue that the Impact Study is not capturing "sleeper effects" that will emerge later in the participants' lives. So if the Impact Study had shown positive effects, they would have said, "We should support Head Start because of these positive effects." Instead, they say, "We should support Head Start because of sleeper effects suggested by other research." Since the decision is the same either way, the Impact Study was a waste of taxpayer money.

Another way that the White House deflected the Impact Study's results was to cite its upcoming rewrite of performance standards for Head Start providers. However, a follow-up to the main Impact Study found that variation in Head Start program quality had no significant effect on student outcomes. That was apparently no problem for the administration. When its new standards were finally proposed this summer, there was no reference to the follow-up report's findings. Again, the Impact Study appears remarkably useless to the very government that funded it.

Democrats and Republicans share the blame. The legislation that authorized the Impact Study passed with large majorities of both parties. And, like the White House, both houses of the Republican-controlled Congress proposed budgets this year that would fully fund Head Start. So there is a bipartisan consensus in Washington both for evaluating Head Start and for disregarding the results of that evaluation.

Dropping the studies altogether would be preferable to paying for them and then ignoring the results. The better solution, however, would be to legally tie program funding to the evaluations. Make the existence of Head Start and other programs contingent on showing impacts on pre-specified outcome measures. That would require lawmakers to be clear about the reasons they support or oppose particular programs. If they protest that the benefits of their favorite program are not necessarily captured by a formal study, the natural question would be, "Since the study has no chance of changing your mind, why do you want taxpayers to fund it?"

There would be logistical difficulties, of course. One can imagine the special pleading that would follow a poor evaluation: "My favorite program almost achieved its required impact, so we shouldn't penalize it." A stubborn Congress might pass new legislation that simply restores funding to pre-evaluation levels. But the purpose of tying dollars to results is not so much to force an immediate policy change as it is to generate a more serious discussion about what we expect from social programs. It's a discussion that is long overdue.

Jason Richwine is a public-policy analyst in Washington, D.C.

The Latest Climate Kerfuffle

Patrick Michaels - August 20, 2015

Are political considerations superseding scientific ones at the National Oceanic and Atmospheric Administration?

When confronted with an obviously broken weather station that was reading way too hot, the agency replaced the faulty sensor — but refused to adjust the bad readings the sensor had already recorded. And when dealing with "the pause" in global surface temperatures, now in its 19th year, the agency threw away satellite-sensed sea-surface temperatures, substituting questionable data that showed no pause.

The latest kerfuffle is local, not global, but happens to involve probably the most politically important weather station in the nation, the one at Washington's Reagan National Airport.

I'll take credit for this one. I casually noticed that monthly average temperatures at National were running a couple of degrees further above their 1981-2010 averages than those at Dulles were.

Temperatures at National are almost always higher than those at Dulles, 19 miles away. That's because of the well-known urban warming effect, as well as an elevation difference of 300 feet. But the weather systems that determine monthly average temperature are, in general, far too large for there to be any significant difference in the departure from average at two stations as close together as Reagan and Dulles. Monthly data from recent decades bear this out — until, all at once, in January 2014 and every month thereafter, the departure from average at National was greater than that at Dulles.

The average monthly difference for January 2014 through July 2015 is 2.1 degrees Fahrenheit, which is huge when talking about things like record temperatures. For example, National's all-time record last May was only 0.2 degrees above the previous record.
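
To make the comparison concrete, here is a minimal Python sketch of the departure-from-normal arithmetic described above. The station values and normals below are hypothetical placeholders, not actual National or Dulles observations; the point is only the mechanics of comparing each station against its own 1981-2010 average.

    # Minimal sketch: compare two stations' departures from their own
    # 1981-2010 monthly normals. All numbers below are hypothetical.

    def departures(monthly_means, normals):
        """Departure from the station's monthly normal for each (year, month)."""
        return {ym: temp - normals[ym[1]] for ym, temp in monthly_means.items()}

    def departure_gap(nat_means, nat_normals, dul_means, dul_normals):
        """National's departure minus Dulles's, month by month.

        For stations this close together, the gap should hover near zero;
        a persistent positive gap suggests a warm bias at National.
        """
        nat = departures(nat_means, nat_normals)
        dul = departures(dul_means, dul_normals)
        return {ym: round(nat[ym] - dul[ym], 1) for ym in nat if ym in dul}

    # Hypothetical January 2014 values (degrees Fahrenheit):
    national = {(2014, 1): 34.9}
    dulles = {(2014, 1): 30.2}
    national_normals = {1: 36.0}
    dulles_normals = {1: 33.4}

    print(departure_gap(national, national_normals, dulles, dulles_normals))
    # A gap of about 2.1 degrees: National running warm relative to its own normal.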

Earlier this month, I sent my findings to Jason Samenow, a terrific forecaster who runs the Washington Post's weather blog, Capital Weather Gang. He and his crew verified what I found and wrote up their version, giving due credit and adding other evidence that something was very wrong at National. And, in remarkably quick action for a government agency, the National Weather Service swapped out the sensor within a week and found that the old one was reading 1.7 degrees too high, close enough to the 2.1-degree difference we had observed.

But the National Weather Service told the Capital Weather Gang that there will be no corrections, despite the fact that the disparity began abruptly 19 months ago and varied little thereafter. It said correcting for the error wouldn't be "scientifically defensible." Therefore, people can and will cite the May record as evidence of dreaded global warming with impunity. Only a few weather nerds will know the truth. Over a third of this year's 37 days of 90-degree-plus heat, a tally that gives us a remote chance at the all-time record, should also be thrown out, putting this summer rightly back into normal territory.

Refusing to make a simple adjustment to these obviously-too-hot data is politically unwise. With all of the claims that federal science is being biased in service of the president's global-warming agenda, the agency should bend over backwards to expunge erroneous record-high readings.

In July, by contrast, NOAA had no problem adjusting the global temperature history. In that case, the method they used guaranteed that a growing warming trend would substitute for "the pause." They reported in Science that they had replaced the pause (which shows up in every analysis of satellite and weather balloon data) with a significant warming trend.

Normative science says a trend is "statistically significant" if there's less than a 5 percent probability that it would happen by chance. NOAA claimed significance at the 10 percent level, something no graduate student could ever get away with. There were several other major problems with the paper. As Judy Curry, a noted climate scientist at Georgia Tech, wrote, "color me 'unconvinced.'"
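
To see what that convention means in practice, here is a small illustrative Python sketch. It fits an ordinary least-squares trend to made-up annual anomalies (not NOAA's data) and checks the resulting p-value against both the conventional 5 percent threshold and the looser 10 percent one.

    # Illustrative only: a linear-trend significance check on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = np.arange(1998, 2015)
    # Hypothetical anomalies: a weak trend buried in year-to-year noise.
    anomalies = 0.005 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

    result = stats.linregress(years, anomalies)
    print(f"trend = {result.slope:.4f} deg/yr, p-value = {result.pvalue:.3f}")

    # A trend is conventionally called significant only if p < 0.05.
    print("significant at the 5 percent level: ", result.pvalue < 0.05)
    print("significant at the 10 percent level:", result.pvalue < 0.10)

A trend that clears the 10 percent bar but not the 5 percent bar is, by the usual convention, not statistically significant.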

Unfortunately, following this with the kerfuffle over the Reagan temperature records is only going to "convince" even more people that our government is blowing hot air on global warming.

Patrick Michaels is director of the Center for the Study of Science at the Cato Institute.

Government Intervention Is Becoming Obsolete

Fred E. Foldvary - August 20, 2015

Much government intervention has no economic rationale and is due instead to pressure from special interests. However, some interventions have a public-welfare justification, backed by conventional economic theory. Textbooks in the field normally present four such rationales: asymmetric information, external effects, public goods, and monopoly.

Advances in technology are fast rendering these arguments obsolete.

"Asymmetric information" means that in an exchange, one party has much more knowledge than the other. When one buys a used car or computer, the seller could take advantage of the buyer's ignorance. Therefore, says standard theory, the market fails.

But ignorance creates a demand for both information and assurance. The economy provides consumers with information through such channels as Consumer Reports, Angie's List, and Yelp reviews. Advancing technology makes that information more abundant and cheaper: the websites of consumer publications let users search from a computer rather than go to a library and look up printed articles. Markets also provide assurance through warranties, guarantees, and sellers' desire to preserve a good reputation.

"External effects" are uncompensated effects on others; the standard example is pollution. In a pure market, pollution constitutes trespass and invasion of another's property, and is subject to a liability rule that makes the producer pay for the damage, making the cost internal.

But some property rights, such as those for fish in the ocean, have historically not been feasible to define and enforce.

Here too, advancing technology, such as electronic fencing and tagging, is providing a solution. Even when government is involved in reducing pollution, better technology can replace regulations (on gasoline, engines, and smog, for example) with pricing, as remote sensors measure actual pollution and photograph license plates. Private associations and firms can also use such technology to get polluting car owners to compensate for their emissions and help pay for the roads.

"Public goods" are items that are non-rival, meaning that their use by one person does not diminish the use of others. One more person viewing a city fireworks show does not prevent others from viewing it. Standard economic theory posits market failure due to free riders: An entrepreneur cannot privately build a dam to protect a city from floods, because some people will refuse to pay, figuring that the dam will protect them whether they contribute or not.

Already, private contractual communities such as homeowners' associations can and do provide such collective goods. And better technology such as electronic tolling now makes private provision more feasible, as private roads and parking can more easily collect the needed fees, while also eliminating congestion with prices just high enough to enable traffic flow and parking.

Monopoly can indeed result in higher prices, but there can be benefits to large firms, such as providing standard formats for software. Also, even dominant firms need to innovate in order to maintain market share, and excessively high prices induce competition. Here too, better technology helps to address the problem. Examples include cheaper generation of electricity on a small scale, including solar generators, and the recycling of water. Both of these examples reduce the need for regulated "natural monopolies" that have high fixed costs.

The effects of advancing technology on the rationales for governmental programs were presented in the 2003 book "The Half-Life of Policy Rationales," edited by Daniel Klein and myself. Eric Hammer and I updated this research in a 2015 working paper published by the Mercatus Center at George Mason University, "How Advancing Technology Keeps Reducing Interventionist Policy Rationales."

The prevailing market-failure theory and the government programs that claim justification from such theory are increasingly obsolete, and both theorists and practitioners need to take note.

Fred E. Foldvary teaches economics at San Jose State University, California, and is the coauthor (with Eric Hammer) of a recent working paper, "How Advances in Technology Keep Reducing Interventionist Policy Rationales," published by the Mercatus Center at George Mason University.

Adam Rosenberg - August 19, 2015

This month marks the 80th birthday of the Social Security program. For decades, the program has been a vital lifeline for retirees, the disabled, and their families and has lifted tens of millions of Americans out of poverty. 

The program faces financial problems, though. The Disability Insurance trust fund is expected to deplete its reserves in late 2016, and even if its finances are intermingled with the old age program, the combined Social Security trust funds are projected to go insolvent by 2034. When these trust funds run out of money, benefit payments will need to be cut or delayed to hold spending to incoming revenue. 

Making Social Security financially secure will require an informed debate about the choices involved, but myths are often recited to obstruct progress on reform. Here are four common myths.

Myth #1: Social Security does not face a large funding shortfall.

Fact: The Social Security trust fund is projected to run out by 2034 and faces a shortfall of 2.7 to 4.4 percent of total wage income over the next 75 years. 

Due to population aging, Social Security is projected to have a relatively large shortfall over the long term, which will deplete the trust fund reserves by 2034. Keeping Social Security solvent for 75 years would require the equivalent of a 20 percent (2.6 percentage point) immediate payroll tax increase or 16 percent immediate benefit cut, according to the Social Security trustees. Needed adjustments will grow over time.
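
As a quick check on the trustees' arithmetic: the combined Social Security payroll tax rate is currently 12.4 percent of taxable payroll (employee and employer shares combined), so a 2.6-percentage-point increase works out to roughly the 20 percent relative increase cited above.

    # Back-of-the-envelope check of the figures cited by the trustees.
    current_rate = 12.4   # combined OASDI payroll tax, percent of taxable payroll
    increase = 2.6        # percentage-point increase cited above

    relative_increase = increase / current_rate
    print(f"{relative_increase:.0%}")   # prints 21%, roughly the 20 percent cited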

While the shortfall can certainly be closed with targeted spending and revenue changes, it should not be downplayed or ignored.

Myth #2: Today's workers will not receive Social Security benefits.

Fact: Even if policymakers do nothing, the program could still pay about three-quarters of benefits.

Simply put, the only way that future beneficiaries would receive zero benefits from Social Security is if the program were eliminated. When the trust fund goes insolvent, as the trustees project will happen in 2034, it would still be able to pay benefits from incoming revenue, which they forecast would equal 79 percent of scheduled benefits in that year, declining to 73 percent over time. 

Trust fund insolvency does not mean that benefits would disappear but rather that they would be reduced from their scheduled level by about one-fifth. That would clearly be a dramatic cut in benefits, particularly since it would happen quickly, but it would not be the same thing as benefits going away entirely.
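
Put in concrete terms, here is a small illustrative calculation. The $1,500 scheduled monthly benefit is a hypothetical example, not an official figure; the payable shares are the trustees' projections cited above.

    # Illustrative: what a roughly one-fifth cut at insolvency looks like.
    scheduled_benefit = 1500.0    # dollars per month, hypothetical
    payable_2034 = 0.79           # projected payable share at insolvency
    payable_later = 0.73          # projected payable share later in the period

    print(f"At insolvency: ${scheduled_benefit * payable_2034:,.0f} per month")
    print(f"Later in the 75-year window: ${scheduled_benefit * payable_later:,.0f} per month")
    print(f"Implied cut at insolvency: {1 - payable_2034:.0%}")   # about one-fifth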

Myth #3: Social Security would be fine if we hadn't "raided the trust fund."

Fact: The program's financial shortfall stems from a growing mismatch between benefits paid and incoming revenue, not the fact that the funds were borrowed.

As a result of surpluses accumulated during the 1990s and 2000s, the Social Security trust fund currently holds $2.8 trillion of assets, which are invested in special U.S. Treasury bonds. Many argue that the surpluses were used to mask the size of deficits outside the program, allowing lawmakers to enact more tax cuts and spending increases than they otherwise would. In that sense, it could be argued that lawmakers "raided the trust fund," but in an accounting sense, no actual money has been taken out of the Social Security trust fund. The $2.8 trillion of assets will be available to cover program deficits until that money runs out.

Solvency projections take into account the trust fund assets but show that they are dwarfed by the shortfall over the next 75 years. The Social Security trustees project that the program will spend $13.5 trillion more (on a "present value basis," which accounts for interest and inflation) than it raises over the next 75 years, relying on the trust fund to finance $2.8 trillion of it. Lawmakers need to reduce the program's deficits to ensure solvency.
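
For readers unfamiliar with the term, a "present value" simply discounts future dollar flows back to today at an assumed interest rate, so that deficits decades away count for less than deficits next year. A toy sketch of the mechanics (the cash flows and the 2.5 percent rate are made up, not the trustees' assumptions):

    # Toy present-value calculation; numbers are illustrative only.
    def present_value(cash_flows, rate):
        """Discount annual cash flows (year 1, 2, ...) back to today."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    deficits = [100.0] * 75   # hypothetical annual deficits, in billions
    print(f"{present_value(deficits, 0.025):,.0f} billion in present value")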

Myth #4: Fixing Social Security is too hard.

Fact: Social Security reform options are well-known, and incremental adjustments, enacted soon, can secure the program for future generations.

While Social Security reform may be difficult to accomplish politically, the options, from a policy standpoint, are well-known and sufficient to keep the program solvent without fundamentally changing its structure.

There are countless options for changing the program's various parameters, and the Social Security Chief Actuary annually publishes a list of 121 such options. The chief actuary has also evaluated Social Security reform proposals going back two decades. Ordinary citizens can weigh the various factors and come up with their own Social Security plan using "The Reformer," an online tool from the Committee for a Responsible Federal Budget (CRFB).

There are plenty of well-known and quantifiable options to ensure that Social Security remains financially sound for the next 80 years and beyond. 

Adam Rosenberg is a policy analyst at the Committee for a Responsible Federal Budget, a nonpartisan organization committed to educating the public about issues that have significant fiscal-policy impact. These are just four of the eight myths that CRFB tackled in a recent paper.

Inequality and the Veil of Ignorance

Courtney Such - August 10, 2015

America's income gap is much debated. But a new paper — invoking the famous "veil of ignorance" theory of philosopher John Rawls, who is much beloved on the left — suggests it may not be as dramatic as many believe. The paper suggests that global inequality, not inequality within advanced nations, is what should concern the adherents of this theory as they make policy.

We talked with the paper's co-authors, Federal Reserve Bank of Minneapolis consultants V.V. Chari and Christopher Phelan, to learn more. The interview has been shortened and edited for clarity.


How would you explain the "veil of ignorance" theory in layman's terms?

Chari: John Rawls argued that the sensible way to make moral judgments about political issues is to imagine that none of us knows our current position in society. Our current position does not influence what we think is desirable; we adopt a perspective of neutral observation.

A way to imagine a neutral observer is to imagine that all of us are transported to Mars — or some other planet — and then we decide on a social arrangement, and then we are randomly reassigned, possibly as someone quite different from who we are. If we happen to be currently rich, there is some chance we could end up as somebody who grew up in an inner city and had an underprivileged education. Or, if you're a poor person, you could be a rich person. That is the "veil of ignorance" construct: it was invented to allow us to make judgments without letting our personal circumstances influence our thinking.


When you apply this theory to income inequality, what happens?

Phelan: If you take income inequality as being exogenous — exogenous is a term we use to mean like it just fell from the sky — then you would just say, "bad." You want to just raise the lowest person up. But in reality there's a tradeoff between inequality and income levels where, if you try to get rid of inequality too much, there just won't be enough to hand out. People make a tradeoff between income inequality and, let's say, economic growth.

Chari: The basic message of our paper is, imagine all the persons in the world are transported to Mars and we are going to be randomly reassigned. There's a very good chance, roughly a 20 percent chance, that we'll end up as a relatively poor person in India. There's a smaller chance, but still a significant one, that you might end up as somebody in Africa or Latin America. So, sitting in our position on Mars when we are deciding on these social arrangements, we've got to ask ourselves, "How will that social arrangement help or hurt me if I end up in Chad or if I end up in Manhattan?"

Given that there are a lot more people who live in poor countries than in relatively rich countries, the odds are pretty good that, when deciding on social arrangements while living on Mars, you would be very concerned about global inequality. That is what would concern us, to a first approximation. We would set up social arrangements that would provide a lot of opportunities in the event that we happened to be reborn in a desperate country, and we would be less concerned about our prospects if we happened to be cast into Denmark or Sweden.
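
The rough odds Chari describes follow directly from population shares. A small illustrative sketch, using round 2015 population figures in billions (approximations for the sake of the example, not census data):

    # Illustrative reassignment odds behind the veil, from rough 2015 populations.
    populations = {
        "India": 1.3,
        "China": 1.4,
        "Africa": 1.2,
        "Latin America": 0.63,
        "United States": 0.32,
        "Rest of world": 2.5,
    }

    world_total = sum(populations.values())
    for region, share in sorted(populations.items(), key=lambda kv: -kv[1]):
        print(f"{region}: {share / world_total:.0%} chance of landing there")

On these numbers, the chance of being reassigned to India is a bit under 20 percent, which is the figure Chari uses.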


How does this factor into the current debate?

Chari: Our paper is directed at some subset of people who argue that because income inequality in developed countries has increased dramatically, we have to engage in more extensive redistribution inside the United States or Sweden or France. What we are saying is, it's perfectly fine to make that argument — if what you acknowledge upfront is, look, I'm selfishly interested only in the wellbeing of people in the United States.

The political system responds to those who have that consideration. But some people say, not only is this good policy, it's an ethically desirable policy — and it's only when people make that last argument that we say their thinking is not well grounded in the discipline of ethics as envisioned by John Rawls.

All we are saying is, if you believe in Rawls's principles, then, first, you ought to be celebrating the extraordinary decline in worldwide inequality that has occurred over the last 35-40 years — the biggest improvement in human prosperity in the history of humankind. Second, you ought to be advocating very extensively for policies that make poor people in poor countries better off. Somebody below the poverty line in the United States is, by the standards of the world income distribution, extraordinarily affluent. You have to care much more about poor people in Chad than you do about poor people in Mississippi.


Your study discusses global trade. How does this play into your argument?

Phelan: Let's say you're a furniture maker in North Carolina, working in the part of the factory that requires low-skilled labor, or a worker in a North Carolina textile factory. Those industries lost out when we opened up our trade to the rest of the world. For the most part, there isn't much textile industry left in the U.S. — it's all moved overseas. Well, the people in those industries were among the relatively poor in the U.S.

If you apply the veil-of-ignorance criterion to just the people in the United States, it gives bad policy. It means the poorest of us get poorer. If you apply it to the whole world, it's a good policy, because it makes the poorest of the whole world richer.


What are your suggestions for getting redistribution policies right?

Chari: I think that economics has good lessons and messages. "Increase trade" is one, "increase research and development" is another, and, somewhat more controversially, "increase immigration from very poor countries to very rich countries" is a third. These are all policies that I think would make poor people better off, and they are the kinds of policies that people who advance an ethical point of view ought to be dealing with. I'm not necessarily one of them; I'm just an economist perusing those ethics.

The policies they typically end up advocating are policies that restrict immigration and restrict the ability of rich societies to become more prosperous and share their additional knowledge with people in poor countries. For example, advances in, say, cell phones — innovations like the iPhone — have made some people in the United States and Finland and so on extraordinarily rich. They've also dramatically improved the functioning of markets in Africa and have brought immeasurable benefits to those people. So, in some sense, those kinds of innovations have increased inequality within the developed world but have reduced worldwide inequality, and therefore they should be applauded from the perspective of the veil of ignorance.

Phelan: I'm not personally convinced that inequality in the country right now is something that needs to be fixed. There is a tradeoff in society between inequality and growth. If you try to ensure everybody gets everything, leveling all differences, you would remove the incentive to get an education, to work hard, to take risks.

A poor person in our country right now is actually relatively wealthy compared to a person in that position 100 years ago, and relatively wealthy compared to the world income distribution. They have cars, air conditioning, houses — food deprivation used to be a big deal. It's not that it never happens in the United States, but the fraction of people who literally have trouble getting enough calories to get through the day has shrunk dramatically, to almost nothing. It's not nothing, but it's getting close.


Courtney Such is a RealClearPolitics intern.
