
Do Guns Cause Violence?

Robert VerBruggen - August 27, 2015

At The Week, my old friend Michael Brendan Dougherty makes a "conservative case for reforming America's sick gun culture." (He and I were roommates in a disturbingly messy four-bachelor Fairfax townhouse about a decade ago.) He supports the idea of an armed citizenry, and believes people should be able to own guns to protect themselves. But in light of yesterday's events, he would make it a requirement for anyone buying a gun to have some sort of training or socialization in the culture of gun clubs.

Of course, this is a political nonstarter — it amounts to background checks on steroids, where not only your criminal history but also your character and training can disqualify you. We do have criminal checks for gun-dealer sales (and yesterday's killer apparently passed one), but attempts to require these checks on sales between private individuals, a modest measure I tentatively support, have flopped outside a handful of liberal states. It's hard to see Dougherty's plan faring even that well. And then there's the question of whether such a rule would square with the Second Amendment as the Supreme Court interprets it.

But it's worth engaging with Dougherty's assumptions about the broad link between guns and violence, because they're shared by vast swathes of the population. So, here's some of the wisdom I've accumulated in more than a decade of following gun research.

Dougherty writes:

America is really the only nation that is orderly with an almost unchallengeable state, and yet has a gun-death rate similar to much poorer Latin American nations experiencing low-grade civil wars and disorder.

Yes, many of our firearm-related deaths are suicides. But our firearm-related homicide rate is noticeably higher than every comparable industrialized nation. And furthermore, there seems to be a strong correlation between reduced access to firearms and a reduced rate of suicide.

"Gun deaths" are a pet peeve of mine, and Dougherty only partly addresses my concern when he admits that they include suicides. The notion that guns and "gun deaths" go together is practically tautological, and unhelpful to boot. A country with no guns by definition has no gun deaths, but that doesn't mean it has fewer violent deaths overall.

To start with a point of agreement, I'm somewhat sympathetic to his point about suicides. Unlike with homicides — where a gun can enable one or prevent one — the effect of guns on suicide can only be bad. There's decent research suggesting that gun ownership does modestly increase suicide rates; suicide can be impulsive, so it's not true that someone without a gun will necessarily find another way. But many people — including me, and I'm guessing most conservatives in general — find repugnant the idea of reducing people's "access to firearms," not on the basis of any demonstrated suicide risk, but simply on the off chance that they might use a gun to harm themselves.

If not "gun deaths," what about "firearm-related homicide"? This too is a nearly useless concept, because gun homicides and non-gun homicides interact with each other. Someone who can't get a gun may simply kill with a different weapon instead. (Even in gun-drenched America, about a third of murders are committed with no gun.) And someone who can get a gun might defend himself against an assailant who doesn't have one. So we should always focus first on total violence, not gun violence, even when we're looking for the effects of guns.

The simple correlation between gun ownership and violence often disappears entirely when you take this into account, as I've shown with data on both states in the U.S. and developed countries. This shows that guns are not a primary driver of differences in murder rates — whatever effect they have is drowned out in the data by things like demographic differences, culture, and so forth.

Using complicated statistical techniques, you can try to tease the effect of guns out of this mess, and some researchers have purported to do so. But as statistical techniques become more complicated, they also become more subjective and run the risk of falling victim to political motivations. The two fundamental laws of gun studies are: One, if a given author reaches a pro- or anti-gun result in one study, all his future results will point in the same direction; two, if it appears in a public-health journal, the results will suggest guns are bad. Relatedly, a general note of caution is always in order when it comes to social science: It's impossible to "control" for everything besides guns that might affect violence, especially culture.

Essentially, the tools currently available to scientists aren't precise enough to resolve this debate, leaving too much wiggle room for researchers to reach the conclusions they want. We don't have consensus, but rather groups of researchers reaching conflicting results. Here's a criticism of the study linked above, for example. 

We see a similar thing in the debate over shall-issue concealed-carry laws, under which any civilian without a serious criminal record can get licensed to carry a gun. Some state laws are incredibly permissive — a few don't even require permits or training, and I got my Virginia license on the basis of a Wisconsin hunter's-safety certification I earned when I was 12. For all the state knew, I hadn't touched a gun in more than 15 years.

This would seem to be a prime example of the anyone-can-pack-heat culture Dougherty wants to reform. But as with the research on gun ownership, 20 years of studies on these laws have taught us almost nothing. Some studies suggest the laws reduce crime. Others suggest they have no effect. Still others say they increase crime. And even the most recent study reaching the anti-gun conclusion admitted that the results are incredibly sensitive. The most the authors could say is that the results are anti-gun if you use the techniques they happen to prefer.

I said we've learned almost nothing. What we have learned is this: A bunch of states started letting almost any random person walk around with a gun, and if anything good or bad resulted, it doesn't reliably show up in the data. That's something in itself.

Other ways of studying gun restrictions are even less conclusive. For example, the "public health" crowd is quite fascinated by "case-control" studies, where they compare people who got murdered with demographically similar people who didn't get murdered, and pretend it means something that the people who got murdered were more likely to own guns. And studies looking at states before and after they implemented gun-control measures range from interesting if only suggestive to laughably bad.

I'm not the only person to reach the conclusion that the role of guns in violence is rather subtle. One interesting example is the Harvard psychology professor Steven Pinker. He's no fan of the NRA; he's from Canada, for God's sake. But in his book about the decline of violence, The Better Angels of Our Nature, the discussion of "weaponry and disarmament" is practically a footnote — about one page in an 800-page tome, relegated to a section about the "forces that one might have thought would be important [in major trends in violence] ... but as best as I can tell turned out not to be." He doesn't even bother to "endorse the arguments for or against gun control," and he writes that "human behavior is goal-directed, not stimulus-driven," adding that "anyone who is equipped to hunt, harvest crops, chop firewood, or prepare salad has the means to damage a lot of human flesh." Similarly, in Ghettoside, her interesting exploration of black-on-black crime in LA, the journalist Jill Leovy writes — in an actual footnote — that "guns are not a root cause of black homicide." The criminologist Gary Kleck tends to be highly skeptical of claims that guns make a difference, on net, one way or the other.

In short, yes, it's possible that confining gun ownership to the people willing to jump through various government hoops might have some marginal effect on violence. But that effect will probably be so small as to be difficult to detect, and there may be no effect at all.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Birthright Citizenship Encourages Assimilation

Alex Nowrasteh - August 27, 2015

Many Republicans are falling over themselves to echo Donald Trump's call to end birthright citizenship. Experts will be debating the legality of this for some time — many say a constitutional amendment would be needed — but the real-world impact of birthright citizenship is more important than the legal nuances. Granting citizenship to those born here is an insurance policy for a broken immigration system: It encourages the children of illegal immigrants to assimilate.

Currently, there are roughly 4 million U.S.-born children of illegal immigrants and 17 million minor children of legal immigrants. Those already born wouldn't be affected by a repeal, but roughly 1 million babies are born every year to immigrants. As immigration attorney Margaret Stock wrote, "If proponents of changing the Fourteenth Amendment have their way, every baby born in America will now face a bureaucratic hurdle before he or she gets a birth certificate." That's a huge number of newborns to annually condemn to automatic illegal status — and doing so would substantially increase the number of illegal immigrants in the country.

That would be bad enough, but the bigger problems would emerge later, as this larger population of illegal immigrants would assimilate more slowly. Assimilation, or the politically correct term "integration," mostly occurs in the second and third generations. Denying citizenship to children of immigrants would deny them legal equality in the United States, stunting their ability to culturally and economically assimilate.

Imagine being born and growing up here and being constantly reminded that you are not a citizen and will likely never be one. That scenario is theoretical for Americans, but Koreans born in Japan have experienced just that and the results are ugly. The Korean minority, called zainichi, are a legal underclass discriminated against by the government. This causes deep resentment and a proneness to crime and political extremism. The zainichi grew even though Japan has virtually zero legal immigration. By contrast, Korean immigrants and their descendants have thrived in the United States where their U.S.-born children are citizens.

And successful assimilation isn't limited to Korean Americans. According to research from University of Washington professor Jacob Vigdor, immigrants and their children from all backgrounds are culturally, linguistically, and economically assimilating today at about the same rate that immigrants assimilated 100 years ago. Nobody today thinks the descendants of the Italian, Polish, or Russian immigrants of early last century failed to assimilate.

The negative effects of making citizenship much harder or impossible to attain go way back. Republican Rome tightened its citizenship rules after the Second Punic War ended in 202 BC. Romans turned their backs on a previous open-door policy that allowed noble families to immigrate and naturalize while also granting citizenship to loyal allies. The new immigration restrictions led to an uprising in cities pushing for Roman citizenship — one of the stranger civil wars in history. To quiet the unrest, Rome finally reinstated the older rules that had served it so well.

America doesn't face a revolt of allies demanding citizenship, but it does face millions of illegal immigrants, their U.S.-born children, and the challenge of assimilating them. There will always be some illegal immigrants in the United States, regardless of reform or levels of enforcement. Birthright citizenship is an insurance policy that guarantees their children will assimilate instead of simmer on the margins of society.

We are in the midst of a failed immigration policy that has produced around 12 million illegal immigrants. Now is not the time to cancel birthright citizenship and its benefits.

Alex Nowrasteh is the immigration policy analyst at the Cato Institute's Center for Global Liberty and Prosperity.

The Case for Utility Price Caps

Steve Pociask - August 27, 2015

Just last month, the U.S. Energy Information Administration announced that natural gas had surpassed coal — for the first time ever — as the main source of electricity generation. The news may have sparked delight for those who want to see the end of coal, and for those who view the recent boom in domestic natural-gas production as a means to lower consumer prices.

To others, however, the news brought puzzlement. While consumer gas prices fell by 24 percent from 2005 to May 2015, residential electricity costs rose by 37 percent. The stark divergence in prices has left policymakers and regulators wondering — if natural-gas prices are falling, and if natural gas is becoming the most important input in electricity generation, why are consumer utility prices still rising and by so much?

There are a number of easy explanations for why electricity prices are not dramatically decreasing — such as regulatory mandates that are increasing operational costs and shutting down lower-cost coal-fired plants, as well as the investment costs that are needed to improve the basic infrastructure of the power grid and protect plants from terrorist attacks, particularly cyber attacks. These are obvious costs that all electric utilities face.

Not so obvious, but very significant, is that many electric utilities are still regulated in much the same way as they were over 70 years ago. That form of regulation, "rate-of-return regulation," guarantees a "fair return" for public-utility investments in plant and equipment, and it has long been known to create incentives to run up costs. According to reports from earlier this year, some utilities are accumulating excess capital for the purpose of increasing their profits, not for serving the public.

Corporations are always under pressure to increase shareholder value. For rate-of-return electric companies, regulators try to prevent unreasonable utility profits by setting a rate of return, say 10 percent, on the size of the public-utility rate base. If the utility is a large one and requires more plant and equipment to serve its customers, it earns 10 percent of the larger base, which means more profit to cover its investment. Therein lies the incentive problem — public utilities get more profits by making more capital investments, needed or not.
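The incentive problem is easy to see in a few lines of arithmetic. The sketch below is purely illustrative: the 10 percent figure is the example rate used above, but the rate-base amounts are hypothetical.

```python
# Illustrative sketch of the rate-of-return incentive: allowed profit
# is a fixed percentage of the rate base, so adding plant and
# equipment (needed or not) raises profit in absolute terms.
ALLOWED_RETURN = 0.10  # the 10 percent example rate from the text

def allowed_profit(rate_base_dollars: float) -> float:
    """Profit the regulator permits under rate-of-return regulation."""
    return ALLOWED_RETURN * rate_base_dollars

# A $1 billion rate base yields $100 million in allowed profit;
# doubling the capital stock doubles the allowed profit with it.
small_utility = allowed_profit(1_000_000_000)
big_utility = allowed_profit(2_000_000_000)
print(small_utility, big_utility)
```

Nothing in the formula rewards serving customers more cheaply; the only way to earn more is to enlarge the base, which is exactly the distortion described above.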

As a recent example, the Warren Buffett-owned NV Energy is in the midst of a regulatory-approval process with the Nevada Public Utility Commission to explore building a new billion-dollar natural-gas plant, rather than purchasing excess energy from other suppliers, as it has done in the past. While purchasing energy may be cheaper for NV Energy's customers, there is no money to be made for the utility. Building a new energy plant, on the other hand, would increase NV Energy's rate base, increase its profits, and potentially raise consumer electricity bills.

In testimony before Nevada's Public Utility Commission on June 10, the president of Wynn Resorts testified that NV Energy had announced to investors that it would grow its profits by spending more money. He estimated that the public utility has grabbed more net income than the entire Las Vegas Strip combined. Looks like NV Energy may be the best bet on the Strip.

Rate-of-return regulation has been long regarded as wasteful, encouraging over-investment and "gold plating" by public utilities. In the 1960s, economists began to refer to the waste as the Averch-Johnson Effect, where utilities invest and accumulate excess capital stock in order to "pad" their rate base and increase profits.

Several studies emerged in the 1970s that proposed ways to make utility regulation more efficient by mimicking how competitive markets work. One such regulatory reform, price caps, automates changes in utility prices by keeping utilities from increasing rates faster than market costs, thereby encouraging productivity improvements. If utilities are able to outperform the market and cover a productivity factor, they can keep any additional income as profits. In other words, price caps would give consumers lower prices and provide utilities profit incentives to be more efficient.
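A common form of price-cap regulation, often described as "inflation minus X," can be sketched as follows. The numbers here are hypothetical, not drawn from any actual tariff.

```python
# Hypothetical "inflation minus X" price cap: rates may rise with a
# market cost index, less a required productivity factor X.
def capped_price(current_price: float, inflation: float, x_factor: float) -> float:
    """Maximum allowed price next period under the cap."""
    return current_price * (1 + inflation - x_factor)

# With 3 percent inflation and a 2 percent productivity factor, a
# $0.12/kWh rate may rise at most 1 percent. A utility that cuts its
# own costs faster than 2 percent keeps the difference as profit,
# which is the efficiency incentive price caps are meant to create.
print(round(capped_price(0.12, 0.03, 0.02), 4))
```

The contrast with rate-of-return regulation is the point: the allowed price no longer depends on how much capital the utility has accumulated, so padding the rate base buys nothing.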

While all major telephone companies and some electric utilities have moved to price-cap regulation over the last two decades or so, rate-of-return regulation persists for some electric utilities. This means that some utilities continue to misallocate resources and over-invest in capital equipment, which pushes these unnecessary costs onto the backs of ratepayers. Price caps would give utilities the incentive to control costs and treat capital stock as just another input in producing electricity.

It is time to end archaic rate-of-return regulations and, in the absence of effective competition, move to price-cap regulation. That reform would simplify regulatory oversight, keep consumer costs lower, and allow utilities to increase profits through efficiency.

Steve Pociask is president of the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization. Twitter: @consumerpal

Clean Power Plan: Acid Rain Part 2?

Ross McKitrick - August 26, 2015

In a recent speech in Washington, D.C., EPA administrator Gina McCarthy dismissed potential criticism of the costs of the new Clean Power Plan by pointing to America's success in reducing sulfur dioxide (SO2) emissions associated with acid rain. She said (correctly) that over the past 40 years, the U.S. slashed SO2 emissions while maintaining a growing economy. She warned darkly of "special interest critics" who would claim the new rules would be a threat to the economy. "They were wrong in the '90s when they said exactly the same thing," she claimed.

Some SO2 cost estimates were indeed too high. In 1990, the U.S. passed the Clean Air Act Amendments (CAAA), which introduced a cap-and-trade system to reduce sulfur air pollution. Critics warned that it would cost hundreds of dollars per ton of abatement, yet when the permits started trading, the price soon fell below market expectations and stayed there through the late 1990s and into the early 2000s.

But the factors that caused this do not apply to CO2.

A coal-fired power plant has four options for reducing SO2 emissions: switch to low-sulfur coal, install flue-gas desulfurization systems ("scrubbers"), switch to a cleaner fuel like natural gas, or scale back operations. The latter two are the costliest. The first two are relatively inexpensive but do not work for CO2. There are no scrubbers for CO2, and there is no such thing as low-carbon coal. (Well, actually, there is: It's called water, and it doesn't burn very well.)

Unanticipated developments also played a role in driving down the cost of SO2 abatement. Prior to the 1990s, power plants in the eastern U.S. got most of their coal from nearby mines, which are high in sulfur. At the time that acid-rain legislation was being debated, railway deregulation was also being proposed, but it was not clear whether it would actually occur or how much competition would emerge in haulage. As it turned out, deregulation did happen, and increased competition substantially reduced the cost of moving low-sulfur coal from Wyoming to power plants in the East and Southeast.

Further, since power-plant operators did not anticipate this, they invested heavily in scrubbers during Phase I of the acid-rain program (1990 to 2000). In 1995, as the twin effects of scrubbers and cheap rail transport hit the market, emissions from units subject to the CAAA plunged far below expectations, taking permit prices with them. Since Phase I permits were bankable, power plants built up a large inventory to use in later years, and this kept prices low even as the cap was reduced in Phase II, which began in 2000.

The story changed after 2000. Permit prices had been projected to be $500-700 per ton in Phase II. As the stock of banked permits declined, prices trended up to $500 per ton by summer 2004, then shot to over $1500 per ton in late 2005 and early 2006 as generators coped with surging power demand and the expectation of further tightening of the emission cap. McCarthy seems conveniently to have forgotten this part of the story.

The situation changed again a year later when the EPA began to develop the Clean Air Interstate Rule (CAIR). This was a plan to group the permits by region in order to address the concentration of effects in downwind states. In 2008, as the recession hit and power demand fell, the average price paid in the annual EPA permits auction fell to $390 — below forecasts, but not dramatically so, considering the depth of the recession, which obviously could not have been foreseen in the '90s.

But a surprise court decision in July of that year blocking implementation of CAIR caused the permits market to collapse. By the next winter the regulatory uncertainty and the recession combined to push prices below $70. Needless to say, no one could have foreseen this, either. And the court battle came about because of the interstate differences in targets, which also do not apply to CO2, since concentrations are globally uniform.

Since 2010, uncertainty over the future form of the rule, the lingering effects of the financial crisis, and the rapid development of shale gas have caused SO2 permit prices to drop to a few dollars per ton. Until the EPA develops an interstate trading rule that satisfies the courts, the SO2 market is all but defunct.

It is wishful thinking to suppose that warnings about the costs of cutting CO2 emissions can be ignored, always and everywhere, just because some early estimates of SO2 control costs were too high, over some intervals. The main factors causing the overestimates do not apply to CO2, and absent these, SO2 permit prices would have been in line with, and occasionally far higher than, forecasts. Warnings about the economic impacts of the Clean Power Plan need to be taken seriously.

Ross McKitrick is an adjunct scholar at the Cato Institute.

Congress vs. Campus Speech Restrictions

Thomas K. Lindsay - August 25, 2015

Of late, there has been a deluge of news accounts detailing gross violations of free speech and debate on American campuses. From campus speech codes, to commencement speaker "dis-invitations," to naked ideological indoctrination in the classrooms, our universities, whose defining mission is the unfettered, nonpartisan quest for truth, are instead becoming havens for conformism, empty shells of the Socratic ideal from which they originally sprang.

But this oppressive regime may be beginning to crumble, at least if some members of the U.S. Congress have their way. In June, the House Judiciary Committee's Subcommittee on the Constitution and Civil Justice held a hearing titled, "First Amendment Protections on Public College and University Campuses," which investigated the extent to which free speech is still protected on taxpayer-funded campuses.

The findings from the investigation were not heartening, to put it mildly. As a result, Rep. Bob Goodlatte (R., Va.), chair of the House Judiciary Committee, recently sent a pointed letter to 162 public colleges and universities whose policies fail to ensure the First Amendment rights of their professors and students.

The House committee's list of freedom-suppressing public schools comes from research conducted by the nonprofit Foundation for Individual Rights in Education (FIRE), whose announced mission is to protect intellectual liberty on America's campuses. Surveying FIRE's list of offenders, we find a number of public flagships, among them the University of Alabama, the University of Georgia, the University of Iowa, the University of Kansas, the University of Michigan-Ann Arbor, and Ohio State University. In my home state of Texas, taxpayers fund ten named offenders, among them the state's two flagship institutions, the University of Texas-Austin and Texas A&M University-College Station.

It is illegal for any public college or university to maintain and enforce speech codes that violate the First Amendment-guaranteed rights of faculty and students. At the June Subcommittee on the Constitution and Civil Justice hearing, Greg Lukianoff, FIRE's president, testified that "speech codes — policies prohibiting student and faculty speech that would, outside the bounds of campus, be protected by the First Amendment — have repeatedly been struck down by federal and state courts. Yet they persist, even in the very jurisdictions where they have been ruled unconstitutional. The majority of American colleges and universities maintain speech codes."

Of the schools nationwide in violation of the First Amendment, the 162 recipients of the House committee's letter were found to be the worst offenders. Chairman Goodlatte writes, "In FIRE's Spotlight on Speech Codes 2015, your institution received a 'red light' rating. According to FIRE, a 'red light' institution 'is one that has at least one policy that both clearly and substantially restricts freedom of speech.'" Hence, Goodlatte writes "to ask what steps your institution plans to take to promote free and open expression on its campus(es), including any steps toward bringing your speech policies in accordance with the First Amendment."

The named offenders have until August 28 to reply to Chairman Goodlatte's inquiry. How they choose to respond will determine the committee's course of action.

With this strong move by the House committee, we witness the academic world turned upside down: Academic freedom has always been supported, and rightly, as a defense against anti-intellectual pressure brought on universities by the political branches. The deeper defense of academic freedom is its indispensability to the nonpartisan truth-seeking that defines higher education's mission. But what happens when those who would deprive students and faculty of their First Amendment freedoms are within the universities themselves? This, unfortunately, is the crisis in which many universities find themselves today. For the solution, Congress has taken it upon itself to educate the educators in what those who supervise our universities should already know, namely, that when intellectual oppression rises, scientific progress and democratic deliberation decline.

Given the stakes involved, it is encouraging to see that there is growing bipartisan support for restoring freedom on our campuses. While Representative Goodlatte is a Republican, in the past year, two Democratic governors — Terry McAuliffe of Virginia and Jay Nixon of Missouri — have signed legislation banning "free-speech zones" at all public universities in their states. As I have argued previously, in America, under the First Amendment to the Constitution, everywhere should be a free-speech zone, not simply the restricted (and restrictive) spaces that the majority of universities today unconstitutionally deign to provide for students.

Although legislative action might prove necessary in the event that universities decline the House committee's plea to follow the Constitution, it would be heartbreaking if these institutions had to be compelled by a political branch to jettison their political agendas and return to disinterested inquiry. It would mean that American higher education has so lost any sense of its defining — and ennobling — purpose that it now has to be guided by those outside it, rather than guiding them, as it ought.

As a former university professor, I have seen firsthand the effect that the intolerance on our campuses has on the minds and souls of our students. As is the case in political regimes that suppress free speech, university policies that stifle debate produce an atmosphere of anxiety, distrust, and ultimately cynicism among those who suffer it. "Students' education suffers when colleges and universities infringe on free speech," observed Azhar Majeed, director of FIRE's Individual Rights Education Program.

Rightly said. Fear, intimidation, and uniformity are usurping the free, robust inquiry and debate that are the lifeblood of a genuine institution of higher learning, undermining both academic truth-seeking and democracy, which depends on an informed citizenry. The effect of campus-promoted intolerance is to jettison an informed, independent-minded citizenry and to replace it with a cowed, guilty, uncritical herd. From the students suffering under this regime will in time come our nation's leaders. Will they be able to face without blinking the profound moral challenges that every generation must face?

If so, it won't be due to their education. It will be in spite of it.

Thomas K. Lindsay directs the Centers for Tenth Amendment Action and Higher Education at the Texas Public Policy Foundation. He was deputy chairman of the National Endowment for the Humanities under George W. Bush.

Courts Worsen the Pension Mess

Josh B. McGee - August 24, 2015

Court cases are creating a perilous standard for addressing the public-pension mess.

In May, Illinois's highest court said the state's constitution forbids even modest changes to the pension system. The next month, New Jersey's supreme court gave Governor Chris Christie carte blanche to refuse to pay into the state's pension funds.

These are two different courts, interpreting the laws of two different states. But if this signifies the approach courts will take elsewhere, it's the worst of all possible worlds. Eliminating options for reform while letting politicians underfund benefits puts workers and taxpayers between the proverbial rock and hard place. Workers may be forced to watch their retirement security go from squeezed to crushed, and taxpayers could be stuck with rising taxes, fewer services, and a weakened local economy.

Two things have to happen. First, leaders need to immediately adopt responsible, workable plans to adequately fund benefit promises, and second, everyone needs to work together to identify the changes necessary to create fair, sustainable pension systems for the future. Unfortunately, courts are encouraging leaders to do the exact opposite.

And the results of inaction are all too predictable.

Chicago illustrates the impact that pension mismanagement can have. In May, Moody's dropped the city's credit rating by two notches to junk status. The ratings agency also left the city on notice for future downgrades if it did not take concrete steps to deal with its looming fiscal crisis.

While Mayor Rahm Emanuel protested the downgrade, there is little disagreement between Moody's and the mayor regarding the city's significant financial challenges. The mayor acknowledged that "Chicago's financial crisis is very real and at our doorsteps." The primary point of disagreement seems to be the magnitude of the impact Chicago's underfunded pensions will have on the city's finances.

The city's four pension funds currently have less than half the money they need to make good on the retirement benefits public workers have already earned.

The city must contribute a lot more money to keep the funds from running out of cash in the relatively near term, a circumstance that would result in retirees' relying on direct budgetary payments from the city.

But there are only four levers the city might use to ameliorate the dire situation: tax increases, reductions to public services, changes to future retirement benefits, and restructuring of other debt. And the Illinois supreme court — interpreting a provision of the state constitution that says membership in a public pension program "shall be an enforceable contractual relationship, the benefits of which shall not be diminished or impaired" — recently took one option off the negotiating table, striking down a law that modestly reduced benefits for current workers and retirees. (For example, the law ended automatic cost-of-living increases for retirees and raised retirement ages for current workers.) This severely restricts the city's ability to find a solution without significant impacts on the other three, which of course should worry Chicago's creditors — Moody's primary concern.

So why does the mayor take issue with Moody's? The city may believe it is not on the hook for making pension payments above and beyond what is currently specified in statute. But the legally required contributions for some of the city's funds are so low that, with minimum payments, they will run out of money in relatively short order. Thus, with immediate benefit cuts off the table, the status quo is very likely to persist until the funds simply run out of money.

This could mark the beginning of a worrisome trend for workers. Given that retirement-plan sponsors in many jurisdictions appear to have very little flexibility to negotiate concessions from workers, what happens if sponsors simply force the issue and allow the funds to run out of money? Will the courts force governments to make benefit payments directly from annual budgets?

The tentative answer, at least in New Jersey, appears to be no. The New Jersey supreme court recently ruled that the state's 2011 commitment to adequately fund retirement benefits did not create an enforceable contract with workers, even though a number of members of the legislature have said that was their intent. Shortly thereafter, Governor Christie said flat-out that he would let the pension funds run dry unless workers agreed to concessions. This is political blackmail at its worst, and makes an already-underfunded system all the more precarious for workers.

What's more, it is not clear how this strategy protects taxpayers. In 2014, contributions to New Jersey's pension plans totaled $4.5 billion, but pension benefit payments were $9.4 billion — in other words, annual contributions would need to more than double just to make benefit payments. Even if workers agreed to concessions, it is unlikely that the savings would be enough to cover the immediate cash-flow deficit without service cuts or increased revenue. And ignoring the problem only makes the potential impacts worse.
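The cash-flow arithmetic here is simple enough to check directly. A minimal sketch, using only the 2014 figures quoted above (everything else would be an assumption):

```python
# New Jersey pension cash flow, 2014, in billions of dollars,
# using the figures quoted in the article.
contributions = 4.5
benefit_payments = 9.4

# How much annual contributions would have to grow just to
# cover current benefit payments, before any new accruals.
ratio = benefit_payments / contributions
print(f"Contributions would need to grow {ratio:.2f}x")  # prints "... 2.09x"
```

The ratio comes out just above 2, which is the article's point: contributions would need to more than double merely to match the benefits already flowing out.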

It is unclear whether courts in other states will take a different tack, but the New Jersey ruling certainly does raise questions about the judiciary's willingness to force policymakers to appropriate dollars specifically for pensions.

The recent court rulings highlight a significant flaw in the structure of our current public retirement systems. The benefits workers earn are not directly connected to annual contributions or investment earnings. And since benefit payments are distant, this creates both the incentive and the opportunity for governments to understate the cost of benefits, systematically undermining the sustainability of the retirement system and in turn the security of the benefits workers have rightly earned.

Unfortunately, those who should be working to protect workers' benefits, including the pension plans themselves and the actuaries they hire, have too often aided governments in this endeavor. All of this should lead workers to ask, "What good is a benefit promise if there is not an equally strong funding commitment to back it up?"

It is time to stop engaging in pension brinksmanship and begin a real discussion about comprehensive reform.

Josh B. McGee is a senior fellow at the Manhattan Institute and vice president of public accountability at the Laura and John Arnold Foundation.

Program Evaluations Are a Waste of Money

Jason Richwine - August 21, 2015

Business schools teach aspiring managers to avoid "information bias" — that is, the tendency to seek more information even when it will have no effect on one's decision-making. That sounds like an obvious lesson, but it's not one the federal government has learned. Lawmakers routinely pay for formal evaluations of social programs, apparently knowing all the while that the results will not affect their support for those programs.

From job training to preschool, this year's House, Senate, and White House budget proposals all continue to offer funding for programs that have performed poorly on the government's own evaluations. It is a wasteful, disingenuous approach to social policy, but it need not continue. If we were to tie funding directly to the results of evaluations, the whole conversation about program evaluation would become more serious.

The perfect case study is Head Start, the oldest federal preschool program. The Head Start Impact Study — a state-of-the-art, multi-site experimental evaluation set in motion by a law Bill Clinton signed in 1998 — came with a price tag of $28 million. Rationally, lawmakers should not have paid for that study unless they expected the results to affect their support for the program. If the Impact Study shows Head Start is effective, they should want to increase funding and look for ways to expand the program's reach. If Head Start is not proven effective, lawmakers should presumably want to eliminate the program, or at least decrease support and redirect some of the funding toward back-to-the-drawing-board research.

Rationality did not prevail. The Impact Study failed to show lasting effects, yet Head Start is still alive and well. In fact, a couple of months after the study's final results were released, the Obama administration proposed increasing funding for Head Start, touting the "success" of the program and the "historic investments" the administration had already made in it. The White House did not say what it meant by "success," but clearly it must have been judging Head Start on some criteria that the Impact Study did not cover. So why pay for the study in the first place?

Head Start's defenders argue that the Impact Study is not capturing "sleeper effects" that will emerge later in the participants' lives. So if the Impact Study had shown positive effects, they would have said, "We should support Head Start because of these positive effects." Instead, they say, "We should support Head Start because of sleeper effects suggested by other research." Since the decision is the same either way, the Impact Study was a waste of taxpayer money.

Another way that the White House deflected the Impact Study's results was to cite its upcoming rewrite of performance standards for Head Start providers. However, a follow-up to the main Impact Study found that variation in Head Start program quality had no significant effect on student outcomes. That was apparently no problem for the administration. When its new standards were finally proposed this summer, there was no reference to the follow-up report's findings. Again, the Impact Study appears remarkably useless to the very government that funded it.

Democrats and Republicans share the blame. The legislation that authorized the Impact Study passed with large majorities of both parties. And, like the White House, both houses of the Republican-controlled Congress proposed budgets this year that would fully fund Head Start. So there is a bipartisan consensus in Washington both for evaluating Head Start and for disregarding the results of that evaluation.

Dropping the studies altogether would be preferable to paying for them and then ignoring the results. The better solution, however, would be to legally tie program funding to the evaluations. Make the existence of Head Start and other programs contingent on showing impacts on pre-specified outcome measures. That would require lawmakers to be clear about the reasons they support or oppose particular programs. If they protest that the benefits of their favorite program are not necessarily captured by a formal study, the natural question would be, "Since the study has no chance of changing your mind, why do you want taxpayers to fund it?"

There would be logistical difficulties, of course. One can imagine the special pleading that would follow a poor evaluation: "My favorite program almost achieved its required impact, so we shouldn't penalize it." A stubborn Congress might pass new legislation that simply restores funding to pre-evaluation levels. But the purpose of tying dollars to results is not so much to force an immediate policy change as it is to generate a more serious discussion about what we expect from social programs. It's a discussion that is long overdue.

Jason Richwine is a public-policy analyst in Washington, D.C.

The Latest Climate Kerfuffle

Patrick Michaels - August 20, 2015

Are political considerations superseding scientific ones at the National Oceanic and Atmospheric Administration?

When confronted with an obviously broken weather station that was reading way too hot, the agency replaced the faulty sensor — but refused to adjust the bad readings it had already taken. And when dealing with "the pause" in global surface temperatures, now in its 19th year, the agency threw away satellite-sensed sea-surface temperatures, substituting questionable data that showed no pause.

The latest kerfuffle is local, not global, but happens to involve probably the most politically important weather station in the nation, the one at Washington's Reagan National Airport.

I'll take credit for this one. I casually noticed that the monthly average temperatures at National were departing from their 1981-2010 averages by a couple of degrees more than those at Dulles — in the warm direction.

Temperatures at National are almost always higher than those at Dulles, 19 miles away. That's because of the well-known urban warming effect, as well as an elevation difference of 300 feet. But the weather systems that determine monthly average temperature are, in general, far too large for there to be any significant difference in the departure from average at two stations as close together as Reagan and Dulles. Monthly data from recent decades bear this out — until, all at once, in January 2014 and every month thereafter, the departure from average at National was greater than that at Dulles.

The average monthly difference for January 2014 through July 2015 is 2.1 degrees Fahrenheit, which is huge when talking about things like record temperatures. For example, National's all-time record last May was only 0.2 degrees above the previous record.

Earlier this month, I sent my findings to Jason Samenow, a terrific forecaster who runs the Washington Post's weather blog, Capital Weather Gang. He and his crew verified what I found and wrote up their version, giving due credit and adding other evidence that something was very wrong at National. And, in remarkably quick action for a government agency, the National Weather Service swapped out the sensor within a week and found that the old one was reading 1.7 degrees too high. Close enough to 2.1, the observed difference.

But the National Weather Service told the Capital Weather Gang that there will be no corrections, despite the fact that the disparity began abruptly 19 months ago and varied little thereafter. It said correcting for the error wouldn't be "scientifically defensible." So people can and will cite the May record as evidence for dreaded global warming with impunity; only a few weather nerds will know the truth. More than a third of this year's 37 days of 90-degree-plus heat (a total that gives us a remote chance of breaking the all-time record) should also be thrown out, putting this summer rightly back into normal territory.

It is really politically unwise not to do a simple adjustment on these obviously-too-hot data. With all of the claims that federal science is being biased in service of the president's global-warming agenda, the agency should bend over backwards to expunge erroneous record-high readings.

In July, by contrast, NOAA had no problem adjusting the global temperature history. In that case, the method they used guaranteed that a growing warming trend would substitute for "the pause." They reported in Science that they had replaced the pause (which shows up in every analysis of satellite and weather balloon data) with a significant warming trend.

The norm in science is to call a trend "statistically significant" if there's less than a 5 percent probability that it would occur by chance. NOAA claimed significance at the 10 percent level, something no graduate student could ever get away with. There were several other major problems with the paper. As Judy Curry, a noted climate scientist at Georgia Tech, wrote, "color me 'unconvinced.'"
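The gap between the two thresholds can be made concrete with a small sketch. The helper function and the p-value of 0.08 are hypothetical, chosen only to show how a trend can pass the looser 10 percent test while failing the conventional 5 percent one:

```python
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Declare a trend statistically significant when its p-value
    falls below the chosen threshold (alpha)."""
    return p_value < alpha

# A hypothetical trend with p = 0.08 fails the conventional
# 5 percent test but passes a looser 10 percent test --
# exactly the distinction at issue here.
print(is_significant(0.08))              # False at alpha = 0.05
print(is_significant(0.08, alpha=0.10))  # True at alpha = 0.10
```

The choice of alpha is a convention, not a law of nature, which is precisely why relaxing it without comment invites the charge of moving the goalposts.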

Unfortunately, following this with the kerfuffle over the Reagan temperature records is only going to "convince" even more people that our government is blowing hot air on global warming.

Patrick Michaels is director of the Center for the Study of Science at the Cato Institute.

Government Intervention Is Becoming Obsolete

Fred E. Foldvary - August 20, 2015

Much government intervention has no economic rationale and is due instead to pressure from special interests. However, some interventions have a public-welfare justification, backed by conventional economic theory. Textbooks in the field normally present four such rationales: asymmetric information, external effects, public goods, and monopoly.

Advances in technology are fast rendering these arguments obsolete.

"Asymmetric information" means that in an exchange, one party has much more knowledge than the other. When one buys a used car or computer, the seller could take advantage of the buyer's ignorance. Therefore, says standard theory, the market fails.

But ignorance creates a demand for both information and assurance. The economy provides consumers information through such channels as Consumer Reports, Angie's List, and Yelp reviews. Advancing technology provides greater and cheaper information. The websites of consumer publications enable users to computer-search, rather than having to go to libraries and look up printed articles. Markets also provide assurance through warranties, guarantees, and sellers' desire to preserve a good reputation.

"External effects" are uncompensated effects on others; the standard example is pollution. In a pure market, pollution constitutes trespass and invasion of another's property, and is subject to a liability rule that makes the producer pay for the damage, making the cost internal.

But some property rights, such as for fish in the oceans, have not historically been feasible.

Here too, advancing technology, like electronic fencing and tagging, is providing a solution. Even when government is involved in reducing pollution, better technology can replace regulations (such as on gasoline, engines, and smog) with pricing when remote sensors measure actual pollution and photograph the license plates. Private associations and firms can also use such technology to get polluting car owners to compensate for their emissions and help pay for the roads.

"Public goods" are items that are non-rival, meaning that their use by one person does not diminish the use of others. One more person viewing a city fireworks show does not prevent others from viewing it. Standard economic theory posits market failure due to free riders: An entrepreneur cannot privately build a dam to protect a city from floods, because some people will refuse to pay, figuring that the dam will protect them whether they contribute or not.

Already, private contractual communities such as homeowners' associations can and do provide such collective goods. And better technology such as electronic tolling now makes private provision more feasible, as private roads and parking can more easily collect the needed fees, while also eliminating congestion with prices just high enough to enable traffic flow and parking.

Monopoly can indeed result in higher prices, but there can be benefits to large firms, such as providing standard formats for software. Also, even dominant firms need to innovate in order to maintain market share, and excessively high prices induce competition. Here too, better technology helps to address the problem. Examples include cheaper generation of electricity on a small scale, including solar generators, and the recycling of water. Both of these examples reduce the need for regulated "natural monopolies" that have high fixed costs.

The effects of advancing technology on the rationales for governmental programs were presented in the 2003 book "The Half Life of Policy Rationales," edited by Daniel Klein and myself. Eric Hammer and I recently updated this research in 2015 in a working paper published by the Mercatus Center at George Mason University, "How Advancing Technology Keeps Reducing Interventionist Policy Rationales."

The prevailing market-failure theory and the government programs that claim justification from such theory are increasingly obsolete, and both theorists and practitioners need to take note.

Fred E. Foldvary teaches economics at San Jose State University in California and is the coauthor (with Eric Hammer) of a recent working paper, "How Advances in Technology Keep Reducing Interventionist Policy Rationales," published by the Mercatus Center at George Mason University.

Four Myths About Social Security

Adam Rosenberg - August 19, 2015

This month marks the 80th birthday of the Social Security program. For decades, the program has been a vital lifeline for retirees, the disabled, and their families and has lifted tens of millions of Americans out of poverty. 

The program faces financial problems, though. The Disability Insurance trust fund is expected to deplete its reserves in late 2016, and even if its finances are intermingled with the old age program, the combined Social Security trust funds are projected to go insolvent by 2034. When these trust funds run out of money, benefit payments will need to be cut or delayed to hold spending to incoming revenue. 

Making Social Security financially secure will require an informed debate about the choices involved, but myths are often recited to obstruct progress on reform. Here are four common myths.

Myth #1: Social Security does not face a large funding shortfall.

Fact: The Social Security trust fund is projected to run out by 2034 and faces a shortfall of 2.7 to 4.4 percent of total wage income over the next 75 years. 

Due to population aging, Social Security is projected to have a relatively large shortfall over the long term, which will deplete the trust fund reserves by 2034. Keeping Social Security solvent for 75 years would require the equivalent of a 20 percent (2.6 percentage point) immediate payroll tax increase or 16 percent immediate benefit cut, according to the Social Security trustees. Needed adjustments will grow over time.
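The trustees' payroll-tax figure can be sanity-checked against the current combined Social Security tax rate of 12.4 percent (employee plus employer shares). That rate is not stated in the article, so treat this as a back-of-the-envelope sketch rather than the trustees' own calculation:

```python
# Combined Social Security payroll tax, percent of covered wages.
# (The 12.4 percent rate is background knowledge, not from the article.)
current_rate = 12.4
increase = 2.6        # percentage-point increase cited by the trustees

# Express the percentage-point increase as a relative increase.
relative = increase / current_rate
print(f"{relative:.0%} relative increase")  # prints "21% relative increase"
```

The result lands at about 21 percent, close to the trustees' "20 percent" figure; the small difference presumably reflects rounding or a slightly different base in the official projection.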

While the shortfall can certainly be closed with targeted spending and revenue changes, it should not be downplayed or ignored.

Myth #2: Today's workers will not receive Social Security benefits.

Fact: Even if policymakers do nothing, the program could still pay about three-quarters of benefits.

Simply put, the only way that future beneficiaries would receive zero benefits from Social Security is if the program were eliminated. When the trust fund goes insolvent, as the trustees project will happen in 2034, it would still be able to pay benefits from incoming revenue, which they forecast would equal 79 percent of scheduled benefits in that year, declining to 73 percent over time. 

Trust fund insolvency does not mean that benefits would disappear but rather that they would be reduced from their scheduled level by about one-fifth. That would clearly be a dramatic cut in benefits, particularly since it would happen quickly, but it would not be the same thing as benefits going away entirely.

Myth #3: Social Security would be fine if we hadn't "raided the trust fund."

Fact: The program's financial shortfall stems from a growing mismatch between benefits paid and incoming revenue, not the fact that the funds were borrowed.

As a result of surpluses accumulated during the 1990s and 2000s, the Social Security trust fund currently holds $2.8 trillion of assets, which are invested in special U.S. Treasury bonds. Many argue that the surpluses were used to mask the size of deficits outside the program, allowing lawmakers to enact more tax cuts and spending increases than they otherwise would. In that sense, it could be argued that lawmakers "raided the trust fund," but in an accounting sense, no actual money has been taken out of the Social Security trust fund. The $2.8 trillion of assets will be available to cover program deficits until that money runs out.

Solvency projections take into account the trust fund assets but show that they are dwarfed by the shortfall over the next 75 years. The Social Security trustees project that the program will spend $13.5 trillion more (on a "present value basis," which accounts for interest and inflation) than it raises over the next 75 years, relying on the trust fund to finance $2.8 trillion of it. Lawmakers need to reduce the program's deficits to ensure solvency.

Myth #4: Fixing Social Security is too hard.

Fact: Social Security reform options are well-known, and incremental adjustments, enacted soon, can secure the program for future generations.

While Social Security reform may be politically difficult to accomplish, the options, from a policy standpoint, are well-known and are sufficient to keep the program solvent without fundamentally changing its structure.

There are countless options available for changing various parameters of the program, and the Social Security Chief Actuary annually publishes a list of 121 such options. The chief actuary has also evaluated Social Security reform proposals going back two decades. Ordinary citizens can weigh the various factors and come up with their own Social Security plan using "The Reformer," an online tool from the Committee for a Responsible Federal Budget (CRFB).

There are plenty of well-known and quantifiable options to ensure that Social Security remains financially sound for the next 80 years and beyond. 

Adam Rosenberg is a policy analyst at the Committee for a Responsible Federal Budget, a nonpartisan organization committed to educating the public about issues with significant fiscal-policy impact. These are just four of the eight myths that CRFB tackled in a recent paper.

Inequality and the Veil of Ignorance

Courtney Such - August 10, 2015

America's income gap is much debated. But a new paper — invoking the famous "veil of ignorance" theory of philosopher John Rawls, who is much beloved on the left — suggests it may not be as dramatic as many believe. The paper suggests that global inequality, not inequality within advanced nations, is what should concern the adherents of this theory as they make policy.

We talked with the paper's co-authors, Federal Reserve Bank of Minneapolis consultants V.V. Chari and Christopher Phelan, to learn more. The interview has been shortened and edited for clarity.

How would you explain the "veil of ignorance" theory in layman's terms?

Chari: John Rawls argued that the sensible way to make moral judgments about political issues is to imagine that none of us knows our current position in society. Our current position does not influence what we think is desirable; we adopt a perspective of neutral observation.

A way to imagine a neutral observer is to imagine that all of us are transported to Mars — or some outside planet — and then we decide on a social arrangement, and then we are randomly reassigned as possibly someone quite different from who we are. If we happen to be currently rich, there is some chance we could end up as somebody who grew up in an inner city and had an underprivileged education. Or, if you're a poor person, you could be a rich person. So that was the "veil of ignorance" construct that was invented to allow us to make judgments without letting our personal circumstances influence that thinking.

When you apply this theory to income inequality, what happens?

Phelan: If you take income inequality as being exogenous — exogenous is a term we use to mean like it just fell from the sky — then you would just say, "bad." You want to just raise the lowest person up. But in reality there's a tradeoff between inequality and income levels where, if you try to get rid of inequality too much, there just won't be enough to hand out. People make a tradeoff between income inequality and, let's say, economic growth.

Chari: The basic message of our paper is, imagine all the people in the world are transported to Mars and we are going to be randomly reassigned. There's a very good chance, roughly a 20 percent chance, that we'll end up as a relatively poor person in India. There's a smaller chance, but still a significant one, that you might end up as somebody in Africa or Latin America. So, sitting on Mars deciding on these social arrangements, we've got to ask ourselves, "How will that social arrangement help or hurt me if I end up in Chad or if I end up in Manhattan?"

Given that there are a lot more people who live in poor countries than in relatively rich ones, the odds are pretty good that, when deciding on social arrangements while sitting on Mars, you would be very concerned about global inequality. That is what would concern us as a first approximation. We would set up social arrangements that provide a lot of opportunities in the event that we happened to be reborn in a desperate country, and we would be less concerned about our prospects if we happened to be cast into Denmark or Sweden.

How does this factor in to the current debate?

Chari: Our paper is directed at some subset of people who argue that because income inequality in developed countries has increased dramatically, we have to engage in more extensive redistribution inside the United States or Sweden or France. What we are saying is, it's perfectly fine to make that argument — if what you acknowledge upfront is, look, I'm selfishly interested only in the wellbeing of people in the United States.

The political system responds to those who have that consideration. But some people say, not only is this good policy, it's an ethically desirable policy — and it's only when people make that last argument that we say their thinking is not well grounded in the discipline of ethics as envisioned by John Rawls.

All we are saying is, if you believe in Rawls's principles, then, step one, you ought to be celebrating the extraordinary decline in worldwide inequality that has occurred over the last 35-40 years — the biggest improvement in human prosperity in the history of humankind. Second, you ought to be advocating very extensively for policies that make poor people in poor countries better off. Somebody below the poverty line in the United States is, by the standards of the world income distribution, extraordinarily affluent. You have to care much more about poor people in Chad than you do about poor people in Mississippi.

Your study discusses global trade. How does this play into your argument?

Phelan: Let's say you're a furniture maker in North Carolina, or a low-skilled worker in a textile factory there. Those industries were decimated when we opened up our trade to the rest of the world. For the most part, there isn't much textile industry left in the U.S. — it's all moved overseas. And the people in those industries were among the relatively poor in the U.S.

If you apply the veil-of-ignorance criterion to just the people in the United States, it leads to bad policy. It means the poorest of us get poorer. If you apply it to the whole world, it's good policy, because it makes the poorest of the whole world richer.

What are your suggestions for getting redistribution policies right?

Chari: I think that economics has good lessons and messages. "Increase trade" is one, "increase research and development" is another, and, somewhat more controversially, "increase immigration from very poor countries to very rich countries" is a third. These are all devices that I think would make poor people better off, and those are the kinds of policies that people who advance an ethical point of view ought to be dealing with. (I'm not necessarily one of them; I'm just an economist exploring those ethics.)

The policies they typically end up advocating are policies that restrict immigration, and that restrict the ability of rich societies to become prosperous and share their additional knowledge with people in poor countries. For example, advances in cell phones — innovations like the iPhone — have made some people in the United States and Finland and so on extraordinarily rich. They have also dramatically improved the functioning of markets in Africa and brought immeasurable benefits to those people. So, in some sense, those kinds of innovations have increased inequality within the developed world but reduced worldwide inequality, and therefore they should be applauded from the perspective of the veil of ignorance.

Phelan: I'm not personally convinced that inequality in this country right now is something that needs to be fixed. There is a tradeoff in society between inequality and growth. If you try to ensure everybody gets everything and level all differences, you remove the incentive to get an education, to work hard, to take risks.

A person who is poor in our country right now is actually relatively wealthy compared with a person in that position 100 years ago, and relatively wealthy compared with the world income distribution. They have cars, air conditioning, houses — real deprivation used to be a big deal. It's not that it never happens in the United States, but the fraction of people who literally have trouble getting enough calories to get through the day has shrunk dramatically, to almost nothing. It's not nothing, but it's getting close.

Courtney Such is a RealClearPolitics intern.

Medicare Devours the Federal Government

John R. Graham - August 7, 2015

In the last few years, the Medicare trustees' annual financial report has been met with complacency. Because Medicare's fiscal problems appear not to be worsening, people think they can stop worrying. Nothing could be further from the truth.

Indeed, the trustees themselves insist that, "notwithstanding recent favorable developments, current-law projections indicate that Medicare still faces a substantial financial shortfall that will need to be addressed with further legislation. Such legislation should be enacted sooner rather than later to minimize the impact on beneficiaries, providers, and taxpayers."

In 2014, Medicare's taxes and premiums added up to $342 billion — but its spending amounted to $600 billion, just short of defense- and security-related spending. The program accounts for 11 percent of federal tax and fee revenue but 17 percent of federal spending.

Medicare's finances are unnecessarily confusing because of the artificial distinction between the Hospital Insurance Trust Fund (Part A) and the Supplemental Medical Insurance Trust Fund (Part B, for physician payment, and Part D, for outpatient prescription drugs).

The Hospital Insurance Trust Fund is deceptive. It is financed by payroll taxes, and for many years the revenue was more than what was required to pay hospital claims. The government spent the surplus on other things. When it took a million dollars out of the drawer labeled "Medicare" and transferred the money to the drawer labeled "Navy," it left a $1 million Treasury note in its place. Absurdly, the pile of notes resulting from these transfers is called the Medicare Trust Fund.

Suppose Mr. & Mrs. Smith put aside some of their income for a college fund for their four kids. Then they decide to spend the money on a vacation. So they replace the money with an IOU stating, "The Smith family will pay its kids' college tuition." Nobody would consider that an asset. The money is gone.

Remarkably, even with no real money in the trust fund, it is going bust. According to the trustees, "the HI trust fund has not met the Trustees' formal test of short range financial adequacy since 2003." Since 2008 payroll taxes have not covered Medicare's hospital claims. So Medicare has been giving "Trust Fund" IOUs back to the Treasury (which the Treasury redeems by issuing more IOUs to investors). The last "Trust Fund" IOU will be turned in by 2030. What then?

And then there is Part B, financed by a combination of beneficiaries' premiums and general revenues, which pays (among other costs) physicians' claims. Until this year, payments to physicians were governed by a fantasy formula called the Sustainable Growth Rate (SGR). At least once a year Congress passed a short-term boost to physicians' pay so that they would earn enough to keep seeing Medicare patients, but the practice made long-term spending projections a laughingstock.

Earlier this year, Congress passed a long-term increase, hiking doctors' pay for ten years and adding $141 billion to the deficit. But this so-called fix is still unrealistic, because it merely kicks the can down the road a decade. According to the trustees, the bonuses

are scheduled to expire in 2025, resulting in a significant one-time payment reduction for most physicians.

In addition, the law specifies the physician payment update amounts for all years in the future, and these amounts do not vary based on underlying economic conditions, nor are they expected to keep pace with the average rate of physician cost increases.

By 2025 at the latest (and perhaps as early as 2018 by my reckoning), organized medicine will once again declare the payment system broken and demand more deficit-financed pay hikes.

For this and other reasons, the report also includes an "illustrative alternative" (that is, "realistic") scenario, in which long-term Medicare spending will be 50 percent higher than the official estimate — 9.1 percent of Gross Domestic Product in 2089, instead of "just" 6 percent.

Okay, that is 74 years from now. However, Medicare has been with us for half a century, and its fiscal problems have been recognized for decades. Neither the people nor the politicians have taken the first step to fixing its finances. The complacent response to the latest trustees' report suggests its warnings will pass unheeded once again.

John R. Graham is an Independent Institute senior fellow and a senior fellow at the National Center for Policy Analysis.

Why White Men Get Shot by Cops

Robert VerBruggen - August 6, 2015

Yesterday, a man attacked a theater, dousing moviegoers with pepper spray and assaulting one person with a hatchet. The victim survived with a minor injury, but when SWAT officers showed up, the attacker went after them with a pistol and was killed. It turned out the gun, while it looked realistic, was an Airsoft model. It's a toy designed to shoot soft pellets at other people for fun.

The man has been identified as a white male with a history of serious mental illness. His name was Vincente David Montano. The first and last names are often Spanish (they can also be Italian), so he may be classifiable as Hispanic or Latino as well, depending on where his ancestors actually came from and how broadly the terms are defined.

This case may hold the key to a statistical pattern I've been pointing out for nearly a year now: Once one adjusts for murder rates, a good proxy for the most serious violent crime, whites are actually more likely to be shot by police than blacks are. The same is true if you instead adjust for rates of cop-killing. Blacks, while just 13 percent of the general population, are about 25-30 percent of those shot by police, half of murderers, and 40-45 percent of cop killers. The sociologist and former police officer Peter Moskos has noticed the same thing, as has my former National Review colleague David French.

"Suicide-by-cop," and mental illness more generally, might be a missing piece of the puzzle. Over the past decade and a half, the CDC's WONDER database puts overall suicide rates at 14.5 per 100,000 for non-Hispanic whites, 5.1 for Hispanics, and 5.2 for blacks; for serious mental illness, government survey data put rates at 4.2 percent for whites, 4.4 percent for Hispanics, and 3.4 percent for blacks. (The mental-illness survey tries to measure illness directly; it doesn't just ask about previous diagnoses.) To some degree, police-shooting numbers might reflect these disparities — with a skew toward whites and, for mental illness but not suicide, Hispanics — rather than rates of violent crime.

This is a hard theory to test more directly. We don't even know what percentage of police shootings are suicide-by-cop incidents. There are some obvious tells, such as when a suspect menaces cops with a fake gun, but in other cases suspects actually discharge weapons in officers' direction and researchers have to make an educated guess about intent. Yesterday's incident illustrates another difficulty as well — Montano's final act was suicide, but he plainly also intended to harm others, seeing as he attacked someone with a hatchet. Estimates of the share of police shootings that are suicides range from 10 percent to more than one-third.

Good racial breakdowns of these incidents are hard to come by. Many suicide-by-cop studies focus on a single state or police department. And then there are efforts like the 2009 study that generated the "more than one-third" estimate cited above, which had a big and diverse sample ... that the authors conceded was "nonrandom." For what it's worth, in cases where the race was known, 18 percent of suicides-by-cop involved blacks, 46 percent whites, and 29 percent Hispanics. The figures for whites and Hispanics seem particularly implausible as national estimates.

The Washington Post has some helpful new numbers, though. Reporters are tracking every police killing in the country this year, and while they don't explicitly classify cases as suicide-by-cop, they do keep track of whether the person had a known history of mental illness. So far, about a third of whites killed by police did (93 of 287), compared with only about a sixth of blacks (23 of 143) and a fifth of Hispanics (19 of 91). The black-white gap in particular is substantial, though perhaps mental illness is more likely to go undiagnosed, or unnoted in police and media reports, among minorities. 

The reporters are also tracking whether people were shot holding "toy" guns. (I've placed "toy" in quotation marks because they include BB and pellet guns; these are not lethal, usually, but unlike Airsoft guns they are not toys.) These cases aren't always suicide-by-cop — see Tamir Rice and John Crawford III, both black — and there aren't enough of them in the Post's data to say anything with confidence. But the early numbers are also consistent with the theory: 5 percent of whites (14 people), 3 percent of Hispanics (3 people), and 2 percent of blacks (3 people) were holding these guns.
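Assuming the Post counts quoted above are complete, the cited shares can be recomputed directly (a quick sketch of the arithmetic, not part of the Post's own analysis):

```python
# Recomputing the shares cited above from the Washington Post counts
# quoted in the text (2015 police killings, as of this writing).
killed = {"white": 287, "black": 143, "hispanic": 91}
mental_illness = {"white": 93, "black": 23, "hispanic": 19}
toy_gun = {"white": 14, "black": 3, "hispanic": 3}

for group, total in killed.items():
    mi_share = mental_illness[group] / total
    toy_share = toy_gun[group] / total
    print(f"{group}: mental illness {mi_share:.0%}, 'toy' gun {toy_share:.0%}")
# white: mental illness 32%, 'toy' gun 5%
# black: mental illness 16%, 'toy' gun 2%
# hispanic: mental illness 21%, 'toy' gun 3%
```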

I think the above amounts to a strong circumstantial case. And either way, this topic deserves a lot more study: If we're going to give police-shooting statistics more scrutiny — and we should — we need to find the correct baseline to compare them against. Racially, violent-crime rates skew one way, while suicide and mental-illness rates skew in the opposite direction. This makes it hard to tell whether police-shooting disparities reflect officers' bias or something else entirely.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Philip A. Wallach, Brookings Institution - August 6, 2015

The EPA's final Clean Power Plan is radically different from the version proposed a year ago, with big consequences for which states will face the most costly paths toward compliance in 2030. Whereas the proposed rule gave coal-dependent states a break in many ways, the final rule does not, which means those states — generally already hostile to the plan — now face a much more difficult path to compliance.

Understanding the derivation of the state targets in the proposed Clean Power Plan, released in June 2014, was hard work — we gave explaining the EPA's method a shot here at FixGov. The results of that opaque method were somewhat clearer: states that emitted the most were generally asked to do the least. As I argued repeatedly, that situation created fairness concerns that were destined to undermine the rule's political foundation, as states overwhelmingly thought they were being treated unfairly.

Though the new rule is complex and thick enough (1,560 pages as currently typeset) to make understanding its ins and outs a challenge, the basic structure is much more straightforward than in the proposal. Basically, the EPA has set carbon emissions standards for two types of plants: for fossil fuel-fired steam generating units, 1,305 lbs CO2/MWh, and for stationary combustion turbines, 771 lbs CO2/MWh. (Where those numbers came from is sure to be controversial, both legally and politically, and will be the subject of a future post here — but let's put that aside for now.) Each state's target is then set by taking a weighted average of those emission standards across its current (2012) fossil fuel-fired electrical generating units. States must devise their own plans to reach those targets, using just about any combination of measures they see fit, as well as preparing federally enforceable fallback plans to regulate each plant directly. Vexing questions, including some about the puzzling treatment of nuclear energy and the role of energy efficiency measures, have been resolved by deriving the rule's targets much more directly from the mix of fossil fuel emitters.
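The target arithmetic described above amounts to a generation-weighted average of the two source-category standards; the generation shares in the sketch below are hypothetical, used only to show the mechanics:

```python
# Hedged sketch of the rate-target arithmetic described above: a state's
# rate target is the generation-weighted average of the two source-category
# standards, applied to its 2012 fossil-fired fleet.
STEAM_STD = 1305.0   # lbs CO2/MWh, fossil fuel-fired steam units
TURBINE_STD = 771.0  # lbs CO2/MWh, stationary combustion turbines

def state_rate_target(steam_mwh: float, turbine_mwh: float) -> float:
    """Weighted-average emission rate target for a hypothetical state."""
    total = steam_mwh + turbine_mwh
    return (steam_mwh * STEAM_STD + turbine_mwh * TURBINE_STD) / total

# A coal-heavy mix (80% steam generation) yields a looser rate target
# than a gas-heavy mix (20% steam):
print(round(state_rate_target(80, 20), 1))  # -> 1198.2
print(round(state_rate_target(20, 80), 1))  # -> 877.8
```

The design choice matters: because the weighting uses each state's own 2012 fleet, coal-heavy states get higher (looser) rate targets in absolute terms — but, as the next paragraphs note, they also have the furthest to go from where they stand today.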

The new mix of targets is far easier to defend as equitable — which of necessity means it is much harsher on states that have done less to move toward carbon-efficient energy production to date. For a great many states (those shaded in green), that actually means their 2030 target in the final rule is less stringent than it was in the proposed rule. Other states (those shaded in red) will have to do considerably more than originally proposed. That includes Kentucky, the home state of the leader of the "just say no to the Clean Power Plan" strategy, Senate Majority Leader Mitch McConnell (R). Last year's proposal asked the state for just a 19 percent reduction in carbon intensity by 2030 (relative to its 2012 baseline), but the final rule demands a 41 percent cut. On the other end of the spectrum, Idaho was originally tasked with becoming 73 percent more efficient by 2030, but under the final rule needs only to get 10 percent more efficient than its baseline — easily achievable under business-as-usual conditions for the hydro-heavy state.
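For concreteness, the percent figures just cited are proportional cuts from each state's 2012 baseline emission rate; the rates in this sketch are hypothetical, chosen only to produce cuts of the cited sizes:

```python
# Percent cut from a 2012 baseline emission rate to a 2030 target rate.
# The rates below are hypothetical, for illustration only.
def percent_cut(baseline_rate: float, target_rate: float) -> float:
    return 100 * (1 - target_rate / baseline_rate)

print(round(percent_cut(2000, 1180)))  # -> 41, a Kentucky-sized cut
print(round(percent_cut(1000, 900)))   # -> 10, an Idaho-sized cut
```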

It was possible to see the proposed rule's targets as politically accommodating to those parts of the country that have shown little interest in improving their carbon intensity; but the final rule's targets do quite the opposite. That's easy to see in the following map, which shows the proportional improvement in carbon efficiency each state is required to make (relative to 2012 baseline) by 2030.

Roughly speaking, states that have already taken many actions to improve their carbon efficiency (especially embracing renewables and natural gas) are tasked with smaller additional reductions (e.g., California, states in the Northeast), while states that have done less and are still more coal-dependent are asked to do more (e.g., Illinois, Montana, North Dakota). 

That makes plenty of intuitive and economic sense, but it is sure to make certain states dig in their heels against the rule even harder than they already have. The EPA may well have decided that their opposition was a certainty in any case, so the extra requirements won't generate any extra enmity.

Stay tuned here for future posts about the final Clean Power Plan's legal future and more about its political ramifications.

Note: 2012 carbon efficiency baseline and final rule targets taken from EPA documents available here; proposed rule targets available here, presented in map form here. Alaska, Hawaii, and Vermont are all currently exempt from the Clean Power Plan.

Philip Wallach is a Fellow in Governance Studies and the author of the upcoming book, To the Edge: Legality, Legitimacy, and the Responses to the Financial Crisis of 2008 (Brookings Institution Press, 2015). This piece originally appeared on Brookings's FixGov blog.

An Alternative to 'Ban the Box'

Greg Glod - August 5, 2015

When President Obama called for justice reform at last month's NAACP national convention, offender reentry was among the areas he highlighted. Specifically, Obama endorsed a federal "ban the box" law, which would prohibit federal employers from inquiring about applicants' criminal histories at the initial stages of the hiring process. Currently, 18 states have enacted such legislation for public employers, and seven have extended the regulation to the private sector as well.

The rationale behind "ban the box" is just and is backed by data. In many cases, a criminal history makes it nearly impossible to find decent work after release from incarceration or while under community supervision. Studies indicate that stable employment is a leading factor in determining who will reoffend. It follows that allowing ex-offenders an opportunity to move beyond their criminal pasts will make our streets safer and save taxpayers millions in corrections costs, while broadening the tax base with a larger workforce.

However, "ban the box" has several drawbacks that make it a less-than-ideal solution, not only for businesses but also for ex-offenders. Rather than being applied to the federal government as Obama suggested, it should be rolled back and replaced with a superior alternative: letting ex-offenders earn the right to seal their records and assert on job applications that they were never arrested, charged, or convicted.

The first problem with "ban the box" is that it places additional government regulation on public entities and private businesses. The laws are often incredibly convoluted, with the line between legal and illegal hiring practices drawn based on the misdeeds of the offender, not the employer. This equates to more money spent on administrative red tape in order to ensure compliance, rather than investing in new hiring or innovative technologies.

In Illinois, for example, a public or private employer with at least 15 employees cannot ask about an individual's criminal history until the applicant has been selected for an interview or, if there is no interview, until the applicant has been pre-selected for the job. This may seem straightforward; however, Chicago has a separate, tougher "ban the box" rule. Chicago's ordinance extends to businesses with fewer than 15 employees, and also requires certain city agencies to take into account several factors before hiring individuals with criminal backgrounds — such as the nature of the offense, the applicant's criminal history, and the individual's age when convicted. The ordinance also forces small businesses to tell applicants who weren't hired whether the decision was due to their criminal record. Thus, a business owner with entities in and outside of Chicago must establish different hiring practices and training for each location to shield him- or herself from hefty fines and litigation.

Companies such as Walmart, Home Depot, and Koch Industries have all taken the admirable step of eliminating questions regarding criminal history from their applications in recent years. But there's a difference between private-sector decisions and government regulation here: "Ban the box" does not prevent employers from considering criminal history at a later point in the hiring process, so when it's forced on employers who have no intention of hiring ex-offenders, it merely ends up wasting the time of both the businesses and the job applicants.

The alternative is nondisclosure. In general, nondisclosure or "record sealing" allows an individual to petition the court to seal their criminal record from the general public, while allowing law enforcement and employers in sensitive industries, such as health care, education, and finance, to see through the seal. If the judge determines that the ex-offender is indeed on the straight and narrow, the person can legally state to most employers, housing agents, and licensing centers that they have not been arrested, charged, or convicted of the offense. This places personal responsibility and the costs of reentry with the individual, rather than having the government enact further restrictions on business.

Recently, Texas passed legislation that allows for first-time offenders to receive orders of nondisclosure, so long as their offenses did not involve sex crimes, domestic abuse, or other serious violence. This legislation can serve as a model for states and the federal government to combat the cycle of recidivism that plagues our criminal-justice system, while not expanding the government's control over business.

Greg Glod is a policy analyst for Right on Crime as well as the Center for Effective Justice at the Texas Public Policy Foundation.
