
FAA Needs To Get Back On Course

Steve Pociask - November 21, 2014

Air traffic congestion often raises safety concerns for passengers. In the last year, U.S. airlines flew 753 million passengers both domestically and internationally. As Thanksgiving Day approaches, airline travel will reach its most hectic pace across the country, with Los Angeles International and Chicago O’Hare predicted to be the two busiest domestic airports. On top of the holiday bustle, there are reports that flight delays could soon reach their worst levels in twenty years. While air safety should always be the regulatory priority, recent policy changes at the Federal Aviation Administration (FAA) have raised some serious questions, and the flying public deserves some answers.

Recall when Ronald Reagan fired more than 11,000 striking government air traffic controllers in 1981? Now most of the air traffic controllers who were hired to replace the strikers face mandatory retirement. In fact, according to a U.S. Department of Transportation Office of Inspector General report, more than 11,700 air traffic controllers will retire by 2021. While that should be enough to get the FAA geared up to meet this growing challenge, a string of problems – agency mismanagement and overspending, a proposal to sideline the current training program, and the turning away of potentially prime candidates from the training program – is impeding the effort to get more air traffic controllers into the airport towers where they are needed.

Recent news reports provide a quick reminder of how air traffic controller shortages could create adverse consequences for both travelers and airlines, such as when last year’s sequestration made flight delays and cancellations commonplace, or when a recent fire broke out at an air traffic control facility near Chicago. These examples provide ample evidence of the harms that can occur in the face of a shortage. They also demonstrate how a problem at one airport can produce cascading problems, including delays and cancellations, at airports throughout the nation.

At a time when the FAA needs to ramp up its hiring and training to fill the growing void, its proposal to do away with the current air traffic controller training program defies logic. First, since training can take two or more years to complete, there is an immediate need to keep the process moving along in order to minimize the shortage of trained air traffic controllers. Impending shortages that the FAA should have foreseen would bring stress for existing air traffic controllers and produce flight delays for passengers, which would lead to increased safety risks for passengers and needless costs for airlines. The resulting costs, which could reach billions of dollars, would be passed on to consumers in the form of higher airline prices.

Second, the FAA has overhauled its commonsense practice of recruiting students from flight schools and tapping already-trained veterans leaving the military, who have direct knowledge and experience. Instead, the FAA is now recruiting off-the-street hires with no previous experience, whose training takes twice as long and costs more.

All in all, the timing of the FAA’s decision does not coincide with the needs of the flying public, and it is inconsistent with the agency’s focus on public safety. It amounts to regulatory malpractice, and policymakers will need to act quickly to fix it.

With nearly 90,000 flights in the U.S. each day, having more eyes on the sky seems as important as ever. The FAA’s decision to throw the baby out with the bathwater seems irresponsible, and it could jeopardize public safety. At the very least, its actions would increase flight delays and airline costs, which ultimately would cost consumers more in lost time and higher prices.

Indeed, if flight delays and cancellations increased by a mere 1 percent, the cost to American consumers would, by my estimates, be well over $1 billion in lost time, and it would also mean increased costs for airlines. In short, passengers lose, and all of the impending airline delays, as well as the potential safety risks associated with increased traffic congestion, could be avoided.
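To see how such an estimate might be assembled, here is a back-of-envelope sketch in Python. The passenger count comes from the figures above; the average delay length and the dollar value of a traveler's time are illustrative assumptions of mine, not the author's actual inputs:

    # Back-of-envelope delay-cost estimate (illustrative assumptions).
    passengers_per_year = 753_000_000  # U.S. airline passengers (cited above)
    delay_increase = 0.01              # a 1 percent rise in delays/cancellations
    avg_hours_lost = 3.0               # assumed hours lost per affected passenger
    value_of_time = 47.0               # assumed value of time, dollars per hour

    affected = passengers_per_year * delay_increase
    lost_time_cost = affected * avg_hours_lost * value_of_time
    print(f"Affected passengers: {affected:,.0f}")           # 7,530,000
    print(f"Lost time: ${lost_time_cost / 1e9:.2f} billion") # ~$1.06 billion

Under those assumptions, the lost-time cost alone crosses the $1 billion mark before counting any added costs to the airlines themselves.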

The FAA needs to revisit its proposal to turn off its established air traffic controller training program, and instead, direct its attention to immediately accelerating its training. That effort would avoid stretching air traffic controllers too thin, which would spare the public a lot of misery and costs from delays, as the busy holiday season approaches and, more importantly, for years to come.

Steve Pociask is president of the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization. For more information about the Institute, visit www.theamericanconsumer.org.

Roberts, Obamacare, and Consequences

Joel Scanlon - November 20, 2014

“This bill [Affordable Care Act] was written in a tortured way to make sure the C.B.O. did not score the mandate as taxes. If C.B.O. scored the mandate as taxes, the bill dies.”—Jonathan Gruber

Shockingly, political considerations were in play in the passage of the Affordable Care Act. This was not particularly hard to see at the time. But Jonathan Gruber, he of the stupid American voter and architect of Obamacare, has exploded any remaining pretense.

One man in particular should be paying attention.

In his opinion upholding the Affordable Care Act’s individual mandate, Chief Justice John Roberts noted that it’s not the Court’s “job to protect the people from the consequences of their political choices.”

Clearly, the “political choices” the Chief Justice had in mind were electoral. The people elect their legislative and executive branch representatives, and must live with the consequences of the collective decisions made by those representatives – at least until they have a chance to throw them out of office.

What Roberts failed to appreciate—or willfully ignored—was that his decision did, in fact, offer protection from political choices, though of a different sort. Roberts’s opinion upholding the constitutionality of the mandate hinged on Congress’s authority to tax. What Congress labeled a “penalty” in the language of the law could be construed as a “tax,” according to Roberts, and thus was within Congress’s power. Roberts wrote, “That choice [of label] does not, however, control whether an exaction is within Congress’s constitutional power to tax.” In other words, Congress could do that which it said it was not doing. An interesting interpretation, for sure, but also one that protected Congress “from the consequences of [its] political choices.”

Congressional Democrats certainly could have written a bill expressly levying a new tax to promote the purchase of health insurance, without any question of constitutionality. But they didn’t. They chose to write Obamacare with penalties rather than taxes. The bill’s authors, and Jonathan Gruber, recognized the political cost of increasing taxes—both to the bill (it would not have passed) and to themselves (fear of Democratic electoral losses). 

This is all relevant today because the Supreme Court has agreed to hear King v. Burwell, a case challenging the ability of the federal government to subsidize insurance (“premium assistance”) through exchanges the federal government itself has set up. The plain text of Obamacare provides subsidies to those enrolled through exchanges “established by the State.” Those challenging the implementation of the law argue the IRS is not authorized to issue subsidies through the federally established exchanges.

In recently revealed comments from 2012, Gruber offers Americans insight into the naked political considerations underlying this provision. Gruber argued the law was explicitly designed to “squeeze” states into setting up their own exchanges. The political calculation was that the cost of not expanding coverage through exchanges would be too great for governors and state legislators: “What’s important to remember politically about this is if you're a state and you don’t set up an exchange, that means your citizens don't get their tax credits—but your citizens still pay the taxes that support this bill. …I hope that that's a blatant enough political reality that states will get their act together...” The law could have been written to clarify that the federal government could offer subsidies through its own exchanges – but, by political choice, it wasn’t.

Democrats were wrong in their calculations, however: 36 states have not established exchanges. The federal government established them in those states instead. If the law were applied as written, citizens in those states could not access subsidies through the federal insurance exchanges. Not one to be constrained by the language of a law, however, the Obama administration and the IRS have chosen to provide subsidies for those on federal exchanges, deciding “the state” really means “the federal government too.” 

Again the Court is faced with political decisions and their consequences. Will Chief Justice Roberts note that the decisions of those 36 states not to set up state exchanges are the consequence of the political choices made by the people in those states? 

Or will he and the Court override those political decisions? Will he again protect Congress from the consequences of its own political choices? 

He would be wrong to do so – just as he was wrong in 2012.

Joel Scanlon is the Director of Studies at the Hudson Institute.

Health Reform Without Deception

John Goodman - November 19, 2014

MIT professor Jon Gruber is getting a lot of flak lately. As the intellectual architect of ObamaCare, he has shocked a lot of people with his video confessions that passing health reform required “deception” because the public is too “stupid” to understand what needs to be done.

I believe voters are smart, and that with three simple (and very transparent) reforms we could replace the mess that is ObamaCare with a health system the public would readily accept:

1. Replace all the ObamaCare mandates and subsidies with a universal tax credit that is the same for everyone.

2. Allow Medicaid (or private insurance that looks very much like Medicaid) to compete with other insurance, with everyone having the right to buy in or get out.

3. Denationalize and deregulate the exchanges.

You could have a very workable health care system by making these changes and these changes alone.

Technical problems with the online exchanges would be gone. Virtually every problem with the online exchanges has one and only one cause: People at different income levels and in different insurance pools get different subsidies from the federal government.

In theory, when you apply for insurance on an exchange, the exchange needs to check with the IRS to verify your income; it needs to check with Social Security to see how many different employers you work for; it needs to check with the Department of Labor to see if those employers are offering affordable, qualified insurance; and it has to check with your state Medicaid program to see if you are eligible for that.

To make matters worse, everyone’s subsidy is almost certain to be wrong – leading to refunds or extra taxes next April 15th.

With a universal tax credit, it wouldn’t matter where you work or what your employer offers you. It wouldn’t matter what your income is. It wouldn’t matter if you qualify for Medicaid.
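The contrast is easy to make concrete. The sketch below is hypothetical logic of my own, not actual exchange software: the ObamaCare-style subsidy depends on facts that several agencies must verify, while a universal credit depends on none of them (the sliding scale is a placeholder; the $2,500 figure is the adult credit level suggested later in this piece):

    # Hypothetical sketch, not actual exchange software: an income-based
    # subsidy depends on facts several agencies must verify; a flat
    # credit depends on none of them.

    def exchange_subsidy(income, employer_offers_coverage, medicaid_eligible):
        """Stylized ObamaCare-style subsidy. The inputs stand in for checks
        with the IRS (income), the Department of Labor (employer offer),
        and the state Medicaid program (eligibility)."""
        if employer_offers_coverage or medicaid_eligible:
            return 0.0
        return max(0.0, 10_000.0 - 0.15 * income)  # placeholder sliding scale

    def universal_credit():
        """The same for everyone; nothing to verify or reconcile."""
        return 2_500.0

    print(exchange_subsidy(40_000, False, False))  # changes with circumstances
    print(universal_credit())                      # never changes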

All the perverse outcomes in the labor market would be gone. As is well known, employers have perverse incentives to keep the number of employees small, to reduce their hours of work, to use independent contractors and temp labor instead of full-time employees, to end insurance for below-average-wage employees, and to self-insure while the workforce is healthy and pay fines instead of providing the insurance the law requires.

With a universal tax credit and no mandate, all of these perversions would be gone. The subsidy for private health insurance would be the same for all: whether they work less than 30 hours a week or more; whether their workplace has fewer than 50 employees or more; and whether they obtain insurance at work or obtain it on their own.

The “race to the bottom” in the health insurance exchanges would end. Health insurers are choosing narrow networks in order to keep costs down and premiums low. They are doing that on the theory that only sick people pay attention to networks and the healthy buy on price; and they are clearly trying to attract the healthy and avoid the sick.

The perverse incentives that are causing these results have one and only one cause: when individuals enter a health plan, the premium the insurer receives is different from the enrollee’s expected medical costs.

Precisely the opposite happens in the Medicare Advantage program, where Medicare makes a significant effort to pay insurers an actuarially fair premium. The enrollees themselves all pay the same premium, but Medicare adds an additional sum, depending on the enrollee’s expected costs.

What I call “change of health status insurance” would accomplish the same result. The only difference is that the extra premium adjustments would be paid by one insurer to another and the amount paid would be determined in the marketplace — not by Medicare.
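A stylized example of that risk-adjustment idea, with made-up numbers of mine rather than anything from Medicare's actual formulas: every enrollee pays the same community premium, and a transfer equal to the enrollee's expected cost minus the average tops the insurer up to an actuarially fair total:

    # Stylized risk adjustment (made-up numbers, not Medicare's formulas).
    COMMUNITY_PREMIUM = 4_000.0      # what every enrollee pays
    AVERAGE_EXPECTED_COST = 4_000.0  # average expected medical cost

    def insurer_receives(expected_cost):
        """Premium plus a transfer tied to expected cost. Under Medicare
        Advantage the transfer comes from Medicare; under "change of
        health status insurance" it would be paid by the enrollee's
        prior insurer at a market-determined amount."""
        transfer = expected_cost - AVERAGE_EXPECTED_COST
        return COMMUNITY_PREMIUM + transfer

    for cost in (2_000.0, 4_000.0, 12_000.0):
        print(f"expected cost ${cost:9,.0f} -> plan receives ${insurer_receives(cost):9,.0f}")

Because the plan's revenue then tracks each enrollee's expected cost, the incentive to court the healthy and dodge the sick disappears.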

People would no longer be trapped in one insurance system rather than another. If you are offered affordable coverage by an employer, you cannot get subsidized insurance in the exchange. If you are eligible for Medicaid, you are not allowed into the exchange. If your income is below 100% of the poverty level, you are not allowed into the exchange -- even if you aren’t eligible for Medicaid.

To make matters worse, eligibility for one system versus another will change frequently for millions of people because of fluctuations in their incomes.

With a universal tax credit that is independent of income, it would not matter where people get their insurance. People could join a plan and stay there.

Note: This change would work best if the universal tax credit is set at the level the CBO estimates a new enrollee in Medicaid will cost. Currently, that’s about $2,500 for an adult and $8,000 for a family of four.

There you have it: Three easy-to-understand, not very difficult changes, and millions of problems vanish in a heartbeat.

John C. Goodman is Senior Fellow at the Independent Institute and author of Priceless: Curing the Healthcare Crisis (Independent Institute).

Globalization Helps More Than Redistribution

Carrie Sheffield - November 19, 2014

Over the weekend I spoke on a panel at the Millennial Success Conference hosted by GenFKD. FKD stands for “Financial Knowledge Development”; the organization is funded at least in part by The Home Depot founder Bernie Marcus, who beamed in a video message about entrepreneurship.

The panel was on “Millennial Identity,” and under the tutelage of RCP’s David DesRosiers, I sat alongside fellow Millennials Elizabeth Plank, Spencer Carnes, and Gabrielle Jackson. We generally agreed that the defining historical moments for Millennials were 9/11 and the financial crisis of 2008, events that created profound turbulence for our generation. During the Q&A portion, a member of the audience asked how we balance the notion of corporate responsibility--specifically mentioning Apple’s labor practices in China--while in pursuit of success.

Jackson, a thoughtful rising star in DC, mentioned how she was faced with this quandary while working at a PR firm whose clients included Wal-Mart. This flummoxed her a bit, since she’d previously spoken out about the corporate behemoth’s labor practices. Wal-Mart is a frequent target for critics who question the fairness of its wages and health care offerings and its unparalleled ability to drive mom-and-pop stores out of business.

While there wasn’t time, I wanted to expound on that scenario a bit to bring in another economic consideration: the significant "consumer surplus" wrought by Wally World driving prices down to rock bottom. Sure, wages are low, but on balance the economic gains are wonderful for consumers. And since consumers of Wal-Mart goods tend to be the poorest among us, that’s an added net benefit to society.

As I’ve written elsewhere, as one of eight children in a low-income family, money was tight during my early childhood, and Wal-Mart greatly enhanced the quality of our lives. Yes, people complain about the quality of the products vs. traditional mom-and-pop shops, but if those products were above our price point, they were totally irrelevant for us. And that meant higher-paying jobs at those mom-and-pop shops were out of reach for many workers, too.

Quantitatively, my story is one of millions that aggregate to some $50 billion in savings for American consumers each year, according to a study highlighted by Gregory Mankiw, chairman of Harvard University’s Department of Economics. That means $50 billion more in Americans’ pockets to be used for many other purposes, whether education, travel, business creation, you name it.

The study’s authors, economists with Massachusetts Institute of Technology and the United States Department of Agriculture, write that “while we do not estimate the costs to workers who may receive lower wages and benefits, we find the effects of supercenter entry and expansion to be sufficiently large so that overall we find it to be extremely unlikely that the expansion of supercenters does not confer a significant overall benefit to consumers.”

They break out consumers by income bracket and show that supercenters yield, in economic-speak, increasing “compensating variation” as a shopper’s income declines. In plain English, that means the personal economic benefit grows in a powerful way--nearly 50 percent from the highest to the lowest brackets.

Like any firm believer in free markets, I despise crony capitalism and unsafe, exploitative labor and environmental practices around the world. We live in an imperfect world, though to cite Rev. Martin Luther King, Jr., “The arc of the moral universe is long, but it bends towards justice.” The Millennial generation is infused with a profound reverence for social justice; I would argue that globalization enhances social justice around the world. For every cherry-picked, soulless mogul, there's also a Bill Gates curing diseases and alleviating poverty. 

If we take the world’s current GDP of roughly $85 trillion and divide that by 7 billion people, that’s roughly $12,000 per person per year--not enough to live well by developed-world standards. We need to increase GDP rather than enacting redistributionist, sclerotic policies in the utopian hope of creating social justice. Global poverty can be slain through government reforms that allow free markets to flourish and increase global GDP. Yet some 1.5 billion people still live under communism, and India’s a massive democracy plagued by crony capitalism and red tape. While we abhor nightmare scenes such as collapsing factories in Bangladesh, the developed world shows us the exciting possibilities. And Wal-Mart is certainly one of them.
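For the record, here is the division behind that figure, using the round numbers above:

    world_gdp = 85e12        # roughly $85 trillion, as cited above
    population = 7e9         # roughly 7 billion people
    per_person = world_gdp / population
    print(f"${per_person:,.0f} per person per year")      # ~$12,143
    print(f"${per_person / 365:,.2f} per person per day") # ~$33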

On the Latest Trends in College Pricing

Avi Snyder & Michael Poliakoff - November 19, 2014

Every year, the College Board publishes its Trends in College Pricing and Trends in Student Aid reports. And every year, the news is the same: the price of college is up; debt is up; and the benefits of a college education are moving farther and farther away from the average family.

On one level, this year’s reports are no different. Average tuition and fees for in-state students at public four-year colleges increased 2.9 percent over the past year. The average tuition at two-year colleges increased 3.3 percent, while tuition at private nonprofit colleges increased 3.7 percent. And roughly 60 percent of students who earned bachelor’s degrees in 2012-13 from the institutions at which they began their studies graduated with debt. They borrowed an average of $27,300—an increase of 13 percent over five years. And the grand total of student debt is up too.

But there is also what appears to be a bit of good news. The rate of price increase is actually down, and the sum of what students borrowed this year was 13 percent lower than their borrowing in 2010-11. 

Some are greeting these reports with optimism, taking them as evidence that calls for dramatic higher ed reforms are premature. According to Inside Higher Ed, “Justin Draeger, president of the National Association of Student Financial Aid Administrators, said this year’s reports are good news over all. They’re also a good reminder of why permanent changes, such as cuts to Pell Grants, shouldn’t be made in response to acute budgetary problems.”

But when one digs into the data, even the “good” news reveals itself to be superficial.

First, it is important to remember why tuition and fees had been growing at such an alarming rate over the past several years, especially at public institutions. The Great Recession put a tremendous amount of strain on college and university budgets, as state appropriations evaporated and families’ finances suffered. Instead of making the tough decisions that would allow them to maintain academic quality while cutting back, many schools made up their financial shortfalls by passing the costs on to students in the form of higher tuition and fees. The decline in the rate of price increase this year doesn’t reflect institutions’ learning to control costs; it is simply the process of returning toward the pre-recession status quo.

Furthermore, much of the fall in total student borrowing is the result of sharp declines in enrollment. As the College Board notes, “Growth in full-time equivalent (FTE) postsecondary enrollment of 16% over the first three years, followed by a decline of 4% over the next three years, contributed to this pattern [of declining student borrowing].” Far from representing a success, this fact illustrates that more and more families feel higher education is out of reach. Even when it comes to the per-student decrease in borrowing, which is unaffected by enrollment, grant aid has simply taken up much of the slack. Less borrowing has little to do with greater cost effectiveness.

And all of this comes at a time when the average American’s income remains stagnant. So, even as growth in tuition and fees slows, a college education continues to become increasingly less affordable for most Americans.

Finally, it is also important to remember what the College Board’s reports don’t measure: what students are getting for all of this money. The publication of these reports comes not long after Richard Arum and Josipa Roksa released Aspiring Adults Adrift, the follow-up book to their 2011 study on the limited learning that occurs on college campuses. What they found is that today’s graduates are entering the world less prepared than ever. They lack the skills to be useful employees and the knowledge to be informed citizens. College price growth isn’t just outpacing inflation; it’s outpacing student learning by light-years.

A slower increase in tuition and fees and less student borrowing are surely good news. But the fundamental problems plaguing higher education remain as acute as ever. Far from dulling our desire for higher ed reform, a deeper look into the data should spur the country to stop focusing on the symptoms and begin tackling the root causes of our higher ed crisis.

Together They Bargain?

Scott Beaulier & George Crowley - November 18, 2014

Last Friday, America’s four postal employee unions organized a mass protest against Postmaster General Patrick Donahoe’s plan to shut down 80 distribution centers in January 2015. The postal workers, quite understandably, see their livelihoods at stake. Many reformers, however, see the rising share of public sector unionization as a drain on our tax dollars and a likely source of government growth—which, as new research reveals, may not be the case.

Regardless of where one falls on controversies like the postal worker strike or the attempted recall of Wisconsin Governor Scott Walker in 2012, most of us recognize the need for states to keep their promises to government workers, retirees, and citizens who rely on essential state services like education, Medicaid and public safety. In a study published today by the Mercatus Center at George Mason University, we outline just how challenging this can be for policymakers. Public sector unions are highly effective at securing pay and benefits for their members, but appear to have no effect on overall government spending. This leaves an obvious question: How are we paying for everything?

In our new research, we examine public sector union lobbying and collective bargaining activity. Because unions have several tools at their disposal to influence policy, it is difficult to gauge each tool’s effect on workers and taxpayers. To address this, we measured the impact of unions’ collective bargaining rights and political contributions on state budgets and employee compensation. After controlling for a number of factors, we made two important findings:

First, political activity by public sector unions works. Specifically, more collective bargaining tends to mean more government jobs, and more union political spending tends to mean higher growth in employees’ incomes. Rather than demonize unions, we should recognize that they are responding to strong political incentives. Their job is to take care of their members, and they do this extremely well. In economic terms, public sector unionization functions as a “club good” where members pay dues and, in return, receive higher salaries.

Second, while many critics believe public sector unions are a driving force behind government growth, the numbers we examined suggest that union political activity does not lead to higher state government spending. Instead, our findings suggest that it is geared toward securing a larger share of an existing pie, rather than growing the government pie. There appears to be a tradeoff between spending on public services and spending on employees.

We also find similar results for teachers’ unions: They take care of their members, and the data clearly indicate that stronger unions and more activity mean higher salaries for teachers. But, again, the data do not show that increased teachers’ union activity leads to increases in overall state spending. So it’s reasonable to wonder if in-classroom funding is suffering.

In our current economic environment—where wages are stagnant and state budgets are already being squeezed by less revenue—these findings are doubly important for policymakers. Budgets are unlikely to rise, so increased public sector union activity seems likely to come at the cost of other services. As a result, we can expect to hear more stories like those coming from Detroit, San Bernardino, and Stockton, California—municipal bankruptcies driven in large part by policymakers’ inability to balance union priorities with financial commitments to the general public.

While our data indicate that the unions may not drive much new spending growth, they carve out such a large share of budgets for their members that municipal governments seem destined to fail. If the nationwide pension crisis—which could very well be related to the dynamic we’ve uncovered—is any indication, the longer politicians wait to address the problem, the more painful the fix will be for public workers and retirees.

The scene from failing cities is not all that different from what we’re seeing this week with postal employees: Their unions are fighting hard to protect their members and are willing to go down swinging to get the job done. But with states either unable or unwilling to increase the overall size of government, the result for American taxpayers is an increasingly squeezed public sector that is being asked again and again to do more with less.

Policymakers have a different job: to balance the priorities of different interest groups and the general public. Let’s hope they’re up to the challenge.

Scott Beaulier is chair of the economics and finance division and director of the Johnson Center at Troy University. George Crowley is an assistant professor of economics in the Johnson Center at Troy University. They are the authors of a new working paper published by the Mercatus Center at George Mason University on “Public-Sector Unions and Government Policy: The Effects of Political Contributions and Collective Bargaining Rights Reexamined.”

Diversity is the Key to Efficient, Affordable Energy

Pınar Çebi Wilber - November 14, 2014

The reaction to this week’s joint announcement by the U.S. and China on plans to drastically cut emissions has been mixed. According to the fact sheet released by the White House, under the agreement the U.S. agrees to cut net greenhouse gas emissions to 26-28 percent below 2005 levels by 2025. President Xi Jinping of China announced his intention to halt the increase in China's CO2 emissions by 2030, with an attempt to peak earlier, and to increase the non-fossil fuel share of China's energy usage to around 20 percent by 2030. 

One major step for the U.S. is the EPA’s recently released Clean Power Plan, with the goal of reducing power sector emissions from existing power plants to 30% below 2005 levels by 2030. However, critics of the proposal have already voiced numerous concerns about the legality and feasibility of the plan, as well as concerns about the plan's impact on the reliability of the power grid. Reliability will be a key issue, since the plan intends to dramatically decrease the share of coal-fired generation in the nationwide electric power generation mix in favor of renewables.

The shale renaissance that created an abundant supply of natural gas in the U.S. has been one of the key factors in the switch away from coal in electricity generation. In fact, over the last decade, the increase in electricity generated by natural gas reduced the share of coal in electricity generation by 10 percentage points. A recent estimate by the Government Accountability Office states that, since 2012, 13 percent of the country’s coal capacity has either been retired or is planned to be retired by 2025.

At the same time, the country’s nuclear generation capacity is also under threat. In addition to low natural gas prices, subsidies for renewable energy undermine the value of nuclear plants, causing premature retirement of these plants.   

Diversity of supply (or integration of different fuels and technologies) plays a key role in lowering the cost of electricity generation, as well as maintaining reliability, and it also reduces the variability in monthly power bills. This past winter’s polar vortex was a perfect case study in how delivery and price issues in one fuel source can impact electricity consumers. The situation could have been worse if the system had not had other fuel sources, mainly coal, to provide a relief valve for power generation in the Midwest and East.

In fact, a recent study conducted by IHS Energy shows how valuable a diverse power supply is for U.S. electricity generation and, consequently, the U.S. economy. Comparing the current mix of supply with a hypothetical case in which there is no meaningful contribution from coal and nuclear, the study found that the cost of generating electricity would be $93 billion higher per year without coal and nuclear. The study also calculates the macroeconomic impacts of a less diverse energy supply. The increase in the cost of electricity would reduce real U.S. GDP by nearly $200 billion, lead to roughly 1 million fewer jobs, and reduce the typical household’s annual disposable income by around $2,100 within the three years after the power price changes. 

Then there is the issue of efficiently integrating renewable power sources into the nation’s power grid. The aging of the nation's infrastructure has been a concern for the last decade, without any major action to address the issue. In fact, according to the 2013 Report Card for America’s Infrastructure, conducted every four years by the American Society of Civil Engineers, the country’s grade for energy and the national power grid is a D+, which means poor and at risk. Similarly, a new assessment by the grid overseer North American Electric Reliability Corp. argues that the surge toward natural gas and renewable energy, driven by cheap gas and new government rules and policies, is creating reliability concerns -- especially in the Midwest, New York, and Texas -- and weakening buffers against blackouts. Furthermore, this analysis did not include the impact of the EPA's Clean Power Plan, which can only exacerbate reliability concerns.

As any smart investor knows, it is not wise to put all your eggs in one basket. Unfortunately, the current regulatory climate at both the state and federal levels is encouraging a decrease in the diversity of our power supply in electricity generation. A closer look at policies that encourage the phase-out of certain fuels, like the Clean Power Plan and state-level renewable portfolio standards, is warranted. While fighting climate change is a noble goal, there need to be smart, cost-effective ways of dealing with the problem. As a new paper by Hugh Byrd and Steve Matthewman concluded: “no matter how smart a city may be, it becomes dumb when the power goes out.”

Dr. Pınar Çebi Wilber is a senior economist for the American Council for Capital Formation, a nonprofit, nonpartisan organization promoting pro-capital formation policies and cost-effective regulatory policies.

Mike Cassidy - November 11, 2014

Vacations are different from weekends.

Two days off at the end of the week is nice, but anyone who's ever felt Sunday-afternoon angst knows you need a lot more than that to get a true respite from the working world. Three or four days in, you finally start relaxing. After a week, you stop compulsively checking your e-mail. Another week and PowerPoints and conference calls begin to seem a fanciful memory, not unlike payphones and smoking in restaurants.

Unfortunately, the depth of the relaxation is matched only by the harshness of the wake-up call. Upon returning to the office, you are at once inundated and overwhelmed, unable to remember what you were working on or, worse still, why it was important.

For most Americans, a really long vacation might last for a couple of weeks. Now imagine how you'd feel if you were out of work for a year. You'd notice your skills depreciating. Your self-esteem might take a hit. And if you didn't have a set job to return to, you might start to doubt if you'd be able to find a job at all.

If you can appreciate these feelings, you can begin to get a sense of why long-term unemployment is important.


Long-Term Unemployment at Historic Highs
You may have heard that unemployment has been dropping recently. It's true. At 5.8 percent, it's only a quarter above its 2007 average.

But seven years after the onset of the Great Recession, long-term unemployment remains at 1.9 percent. Although it has steadily fallen from its record of 4.5 percent in 2010, it's still higher than it was at any time between 1983 and the Great Recession.

In normal times, most unemployment is of the short-term variety, which is defined as spells lasting 26 weeks or less. Take a look at the figure below, which comes from my new report, "Uncovering the Labor Market Recovery," published last week by the Century Foundation. From 2000 to 2007, long-term unemployment constituted less than a fifth of total unemployment, on average.

But in the aftermath of the recession it spiked, grabbing a 45 percent share in 2010. Four years later, the long-term unemployed are still a third of all unemployed. What that means is that one in every three unemployed workers has been without a job for 27 weeks or more. That's 2.9 million people.

So while the short-term unemployment rate is just 4.1 percent above its pre-recession average, the long-term unemployment rate is still elevated by a staggering 130 percent. As of October, the average unemployed worker had been out of work for 32.7 weeks -- nearly double the average duration of unemployment in 2007.
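Those figures hang together arithmetically, as a quick check confirms; note that the labor-force size below is backed out from the article's own numbers, not taken from BLS tables:

    # Consistency check on the figures cited above.
    total_rate = 0.058       # overall unemployment rate
    long_term_rate = 0.019   # long-term unemployment rate
    long_term_count = 2.9e6  # long-term unemployed workers

    print(f"Long-term share of all unemployed: {long_term_rate / total_rate:.0%}")
    # -> 33%, the "one in every three" above

    implied_labor_force = long_term_count / long_term_rate
    print(f"Implied labor force: {implied_labor_force / 1e6:.0f} million")
    # -> ~153 million (derived, not an official BLS figure)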

Duration matters. Like a summer vacation on steroids, time spent out of work causes productive potential to erode. What's more, the long-term unemployed often must switch industries or change occupations, which requires not just holding on to old skills but acquiring new ones.

Employers know this. So the longer someone sits on the sidelines, the larger the stigma they must overcome in making their case to recruiters. And the longer the drought, the more skills dry up, which makes finding a job harder still -- a vicious cycle.


Effects of Long-Term Unemployment Can Be Permanent
The unemployed themselves aren't the only ones hurt by this unforgiving pattern. At the macro level, sustained labor underutilization can permanently harm an economy's productive capacity. Economists refer to this as hysteresis, a term borrowed from physics that describes situations in which temporary perturbations have permanent consequences. In our case, persistence in long-term unemployment may mean strained safety nets and diminished living standards for years to come.

It's too early to tell whether the Great Recession's long-term unemployment legacy will linger. But the next figure suggests just how unusual our present situation is. It shows the evolution of the long-term unemployment rate in the five years following the month in which the overall unemployment rate peaked during the three most recent recessions (excluding the minor recession of 2001).

In each case, long-term unemployment considerably exceeded pre-recession levels. But in the 1981-82 and 1990-91 recessions, it declined to normal levels within about two and a half years. (From 1980 to 2007, the long-term unemployment rate averaged 1.0 percent.)

But the Great Recession was different. Not only did the long-term unemployment rate reach nearly twice the level it did during the two previous recessions, but now, five years out, it's still double its pre-recession norm. That's not a good sign.


What Makes the Great Recession Different?
So why has the Great Recession been different for long-term unemployment -- and what does this imply for policy? These questions are not easy, and they remain largely unresolved. However, a recent Brookings paper by Princeton economists Alan Krueger, Judd Cramer, and David Cho offers a few important insights.

In one sense, the long-term unemployed are a different species from those unemployed only for short periods. They have much greater difficulty finding jobs; indeed, the authors find that, from 2008 to 2012, only about one in ten long-term unemployed returned to full-time work within a year. Not surprisingly, the long-term unemployed are also more prone to stop looking for work -- that is, to withdraw from the labor market.

Consequently, the long-term unemployed exert little pressure on hiring or wages, which may help explain why we have not experienced deflation even as the unemployment rate remained high. For purposes of prices and wages, the long-term unemployed have seemingly had the effect of artificially inflating the unemployment rate.

But in another important sense, the long-term unemployed are just like the rest of us. The study finds that, contrary to popular conception, the long-term unemployed are spread widely across demographic and occupational groups. In other words, the long-term unemployed are unlucky.

When consumer demand collapsed during the Great Recession, unemployment spiked. When it did, some laid-off workers got trapped in the long-term unemployment vortex, often through no fault of their own. Many remain mired there today.

Employment policies should serve the needs of these hard-luck workers. Education and training programs that equip displaced workers with new skills are a good place to start, as are incentives that militate against employer biases. But as the Brookings study suggests, the diversity of the long-term unemployed necessitates a potluck of solutions, carefully calibrated to individual circumstances.

Most importantly, we must act now: Getting the involuntarily idle back to work is not just for their sake; it's for ours too. Let's hope it doesn't take much longer.


Mike Cassidy is a policy associate at the Century Foundation. 

The 'Doc Fix' Disaster

Craig H. Kliger - November 10, 2014

J. Wellington Wimpy, the glutton from the comic strip Popeye, is famous for saying "I'd gladly pay you Tuesday for a hamburger today."

As part of the Balanced Budget Act of 1997, Congress based the Medicare "Sustainable Growth Rate" (SGR) on economist Robert C. Higgins's formula for calculating how quickly a corporation's sales can grow. It hoped to prevent Medicare payments to health providers from rising faster than the rate of growth (or contraction) of the Gross Domestic Product, weighted for the number of Medicare beneficiaries. Were spending to rise or fall faster than that in a given year, payment rates for the following year would be adjusted to maintain budget neutrality, paralleling Wimpy's offer, albeit in reverse.
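Stripped of its statutory detail, the budget-neutrality mechanism works roughly like the sketch below. This is my stylized rendering of the adjustment logic just described, not the actual SGR formula, which operates on cumulative spending targets and several other factors:

    # Stylized SGR-like update (illustrative; the real formula uses
    # cumulative targets, GDP per capita, and other statutory factors).
    def next_year_rate(current_rate, target_spending, actual_spending):
        """If physician spending overshoots the GDP-linked target, next
        year's payment rates are cut proportionally to claw back the
        overage; undershooting produces a raise instead."""
        return current_rate * (target_spending / actual_spending)

    rate_index = 100.0  # index of this year's payment rates
    print(next_year_rate(rate_index, target_spending=95e9, actual_spending=100e9))
    # -> 95.0: a 5 percent overshoot today becomes a ~5 percent cut tomorrow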

But like a "Guess What Happens Next?" segment from America's Funniest Home Videos, you already know this won't turn out pretty. The idea that a future penalty, imposed on all providers, could somehow limit current services provided by individuals defies common sense. Higgins, for example, did not see his formula as a reason for corporations to raise their prices whenever sales didn't meet targets.

"You mean, if I have a hamburger now, the ones sold next Tuesday might be an ounce smaller? Who cares? I'm hungry today, and there might not even be a next Tuesday!"

So the Medicare SGR may have actually (at least initially) encouraged overutilization by creating the expectation that future reimbursements could be lower for the same work. Indeed, with the exceptions of 2000 and 2001, each year has been slated for a reimbursement cut (see chart) -- but Congress hasn't had the stomach to enforce it since 2002, recognizing the threat posed to health-care access for seniors, a key constituency.

The nearly annual financial patch is now lovingly called the "Doc Fix." However, since Congress has only rarely allocated money to offset these costs, the SGR formula does not incorporate Doc Fixes into its baseline and now produces yearly cuts of 20 to 35 percent, which would bankrupt most practices if they ever occurred. What could actually have been a real incentive to control costs has, for nearly twelve years, become nothing more than an expensive game of crying wolf.

Building on its "success" with the SGR, Congress has since, through various laws (the 2006 Tax Relief and Health Care Act, the 2009 Health Information Technology for Economic and Clinical Health Act, and the 2010 Affordable Care Act), overlaid it with similarly well-intentioned but dubious mechanisms attempting to encourage cost containment -- but now also "quality" care -- by tying Medicare payments to "performance." These include the Physician Quality Reporting System, the "Value-Based Modifier," and standards for demonstrating "Meaningful Use" of electronic health records.

Were it only as simple as Dana Carvey might have put it, impersonating President George H.W. Bush: "Quality gooood ... Gooood, spending too much baaaaad." Unfortunately, the overly complex and time-consuming processes that have since taken shape -- with looming potential combined penalties of around 10 percent for failure to achieve statutory goals -- will almost certainly give Congress yet another chance to blink.

Measuring a provider's quality in a credible way would require actual chart review by qualified peers of a sizable number of different types of patient encounters over time. Such a comprehensive assessment could determine whether reasonable care had been given over a reasonable timeframe in a reasonably cost-effective way based on individual patients' circumstances. Unfortunately, this is an extremely labor-intensive and thus expensive process.

So we instead are asking computers to divine "quality" from claims or registry data using unproven surrogate measures. For example, one measure uses the glycosylated hemoglobin (A1c) blood test as a surrogate for good control of diabetes. This unfairly penalizes providers who have higher-than-average proportions of Medicare patients who either are non-adherent or do not respond to accepted treatment regimens -- something not under the provider's control. More concerning, the mere existence of such a measure may cause providers to focus solely on the test at the expense of other non-measured yet vital aspects of diabetic treatment -- the "treating to the test" phenomenon (similar to "teaching to the test" in education).

And providers aren't the only ones concerned. The well-respected Robert Wood Johnson Foundation strongly cautions, "The adoption of flawed measurement approaches that do not accurately discriminate between providers can undermine professional and public support for provider accountability, reward indiscriminately, and divert attention from more appropriate and productive quality improvement efforts."

In addition, data just disclosed by the Centers for Medicare and Medicaid Services (CMS) at a November 4 meeting indicate that only about 2 percent of 500,000 participating providers have to date attested to meeting the most current stage (Stage 2) of the standards related to "Meaningful Use" of electronic health records, raising concerns about the complexity of this program as well.


One might conclude the government is actually relying on the inability of providers to navigate such complexity to generate the maximum penalties, regardless of quality or cost containment, as a surefire means of stabilizing the Medicare Trust Fund. After all, rather than addressing the standards themselves, CMS still recoups through audits vast sums from providers who fail to understand or consistently implement complex Medicare documentation requirements that have been in place since the late 1990s -- a period notable for Susan Powter's "Stop the Insanity" weight-loss craze.

We need to follow her advice.

Yes, it is very important to get quality and value for the money we spend on health care. And it's not as if the programs put in place had no potential at all. But any surrogate quality measures need to be validated -- perhaps through small regional pilots -- by comparing their results with those of more accepted means (such as chart review) before they are applied to an industry that constitutes one-sixth of the U.S. economy. Furthermore, rather than passing hard-and-fast laws with unachievable deadlines on these issues and treating providers as the enemy, Congress could increase its odds of success by giving CMS more general directives to innovate to restrain costs (perhaps with flexible targets) in actual partnership with those rendering care.

The Medicare SGR Repeal and Beneficiary Access Improvement Act of 2014, sponsored by Senate Finance Committee chairman Ron Wyden (D., Ore.), would have eliminated the SGR, limited many of the above-referenced draconian penalties, and -- at least according to early estimates that included other fixes -- cost a relatively small $131 billion to $180 billion over ten years. The bill almost passed with bipartisan support, but was scuttled at the last minute in favor of another temporary "fix," with Congress unable to reach agreement on a funding mechanism during an election year.

Interestingly, based on the Congressional Budget Office’s own estimates (see page 2) for the next decade, the Wyden bill's cost to taxpayers represents just shy of 2 percent of total Medicare outlays. I may be going out on a limb here, but if that money can't be found in the budget, providers might be willing to accept the cut (or some portion) instead if -- in fairness -- other sectors of Medicare (hospitals, etc., that have been spared the SGR over the same twelve years in favor of modest annual increases) did so as well. However, this would have to be in exchange for sidelining or scaling back the above-discussed programs and similar onerous and costly ones affecting the other sectors, possibly including abandoning the counterproductive "ICD-10" disease-classification system as I have previously advocated, making all of this potentially a financial wash.

Of course, the next cycle of crying wolf has already begun. CMS has just announced the SGR will yield a cut of 21.2 percent when the most recent patch expires in April 2015, which almost no one believes will happen. Given its track record, can a similar "fix" for program penalties be far behind?

It’s time to say enough is enough. Now that the election is over, the lame-duck Congress has one final chance to address these issues, and the outcome of the election shouldn't change the bipartisan resolve to get this done. However they fund it, legislators need to swap their hamburgers for some spinach so they can be strong to the finish like Popeye.

Craig H. Kliger is an ophthalmologist and executive vice president of the California Academy of Eye Physicians and Surgeons.

How Much Did Your Vote Cost?

Grace Wallack & John Hudak, Brookings Institution - November 7, 2014

Totaling more than $111,000,000.00, the 2014 North Carolina Senate contest between Kay Hagan and Thom Tillis is the most expensive Senate election in the nation's history (not adjusted for inflation). As we investigated earlier this week, outside money has been flowing into American politics in the wake of the Supreme Court's Citizens United decision in 2010.

When candidate and independent spending are combined, 2014 ranks among the most expensive election cycles, if not the most expensive, in history. However, understanding campaign spending takes more than a simple examination of total dollars. Spending differences across states can occur for a variety of reasons, including geographic size, population size, and the expense of media markets.

As a result, a more useful metric for understanding the magnitude of campaign activity is spending per voter, and 2014 offers an interesting case: Alaska. This year, Alaska saw a highly competitive Senate race in which both outside groups and candidates spent substantial amounts of money. Alaska ranks 47th in population with just over 700,000 residents and an estimated 503,000 eligible voters. After adjusting spending (both candidate and independent expenditures) for each state's estimated voting-eligible population, Alaska's 2014 Senate race, unsurprisingly, ranks as the most expensive in US history.

Alaska originally ranked as the 6th most expensive race of 2014, with about $60 million spent in total. But it jumps to first place in dollars spent per voter. Candidates and outside groups spent roughly $120 per voter in Alaska this year, about double the next most expensive race, Montana 2012, where candidates and outside groups spent $66.50 per voter. By comparison, the $111 million Senate race in North Carolina -- with a voting-eligible population of about 6,826,610 -- equaled only $16.25 per voter. That's still far above the median spending per race for all three cycles ($7.30 per voter) but certainly serves to put the spending in context.
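The per-voter numbers are simple divisions of total spending by each state's voting-eligible population. Using the figures above:

    # Spending per eligible voter, from the figures cited above.
    def per_voter(total_spending, eligible_voters):
        return total_spending / eligible_voters

    print(f"Alaska 2014:         ${per_voter(60e6, 503_000):.2f}")    # ~$119.28
    print(f"North Carolina 2014: ${per_voter(111e6, 6_826_610):.2f}") # ~$16.26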

Relative to 2012 and 2014, in terms of both combined and per-voter spending, 2010 could be considered one of the cheaper cycles for Senate races thus far.

These data lend some support to the observation that, since Citizens United (and more recently McCutcheon v. FEC), independent expenditures are quickly outpacing contributions to candidates. But given changes in reporting requirements and limited data, there is still a lot we don't know about outside spending.

All in all, candidate and outside group spending totaled just over a billion dollars in Senate races in 2014. The fact that North Carolina alone accounted for more than ten percent of that spending is astonishing, but no less remarkable is the intensity of spending per voter in Alaska. But if spending continues to grow as it has the last three election cycles, both of those records will likely be shattered in 2016.

Grace Wallack is a research and editorial assistant in governance studies at the Brookings Institution's Center for Effective Public Management. John Hudak is a fellow in governance studies and managing editor of the FixGov blog, where this piece originally appeared.

The IMF's Rapid Ebola Response

Gary Litman - November 5, 2014

The U.S. is preparing to deploy another 3,000 troops to the Ebola-stricken nations of West Africa to join the 700 already there -- but the International Monetary Fund has beaten the Pentagon to the punch, sidestepping its own byzantine rules to push $130 million to Guinea, Liberia, and Sierra Leone.

Already in early July, the IMF's staff had completed a thorough review of the fiscal impact of the Ebola outbreak. The decline in receipts from trade, income and other taxes, and mining royalties was estimated at $46 million in Liberia alone. The additional spending on emergency health, security, and food imports added another $20 million. The damage to the other two nations was equally devastating.

Realizing the scope of the problem, on October 9 the IMF changed its rules and allowed the three most threatened countries to receive the money almost instantly under the aptly named Rapid Credit Facility arrangement. So today, as help reaches West Africa from all around the world, the governments of Liberia, Guinea, and Sierra Leone can continue to function and avoid social unrest that would have compounded the health-care crisis.

U.S. ambassador to the United Nations Samantha Power recently traveled to the region and noted "positive signs" in West Africa. What American officials are reluctant to mention is that the money behind much of the progress has come through the one international institution that works and yet has been denied the full support of the U.S. Later this month at the G20 Summit in Brisbane, Australia, the president will likely again hear from all global leaders that U.S. commitment to the IMF remains unfulfilled.

Congress has so far refused to ratify an IMF reform that was due to take place in 2012. Instead of a slight increase in the U.S. stake in the IMF, the Treasury has been compelled to provide a temporary loan, in effect blocking the reform that the U.S. itself pushed through in the wake of the Lehman collapse. Little wonder that other members, who have all held up their end of the bargain, are now busy developing alternative financial institutions.

If we allow our leadership in this essential global institution to lapse, we will be left with just the Army, Navy, and Air Force to meet every major global challenge. We have to recognize that despite their many drawbacks, the post-World War II international financial institutions have worked, have delivered, and will be needed again in the future. The deadline for Congress to act on our commitment is December 31. It will be a tall order for the lame-duck legislature. Will the world keep waiting on us?

Gary Litman is vice president of international strategic initiatives at the U.S. Chamber of Commerce.

Why the GSEs' Support of Low Down Payment Loans Again Is No Big Deal

Taz George, Laurie Goodman & Jun Zhu, Urban Institute - November 4, 2014

Will allowing the government-sponsored enterprises (GSEs) to guarantee smaller down payment loans in an effort to increase mortgage availability lead to more defaults? Some skeptics have raised this concern in response to the Federal Housing Finance Agency's recent move to encourage lenders to issue mortgages with down payments as low as 3 percent. A review of the performance of low-down-payment GSE mortgages in recent years, however, suggests these fears are not well founded.

Fannie Mae and Freddie Mac (the GSEs), the guarantors of most of the nation's mortgage debt, currently only purchase loans that have at least a 5 percent down payment. Prior to late 2013, however, Fannie Mae guaranteed loans with down payments between 3 and 5 percent. By examining the performance of these pre-2013 loans, we can get a sense of how likely it is that borrowers with similar loans will default going forward.


The default rates of 3-5 percent and 5-10 percent down-payment GSE loans are similar.

Loans originated in recent years with down payments between 3 and 5 percent (the 95-97 LTV category) exhibit default rates similar to those of loans with slightly larger down payments, in the 90-95 LTV category.

Of loans originated in 2011 with a down payment between 3 and 5 percent, only 0.4 percent of borrowers have defaulted. For loans with slightly larger down payments, between 5 and 10 percent, the default rate was exactly the same. The story is similar for loans made in 2012: 0.2 percent of the 3-5 percent down-payment group defaulted, versus 0.1 percent of the 5-10 percent group.

While this database is limited to 30-year, fixed-rate, amortizing mortgages (interest-only mortgages, 40-year mortgages, and negative-amortization loans are excluded), it is representative of GSE loans made in the post-crisis period.


Borrowers' credit is a stronger indicator of default risk than down-payment size with these loans.

The pattern is consistent even in the years leading up to the crisis, when overall default rates were much higher. In 2007, the worst issue year, 95-97 LTV loans in any given FICO bucket performed only marginally worse than the 90-95 LTV loans, and FICO score was a larger determinant of performance. For example, 95-97 LTV loans with a 700-750 FICO score had a default rate of 21.3 percent, versus 18.2 percent for 90-95 LTV loans. However, 95-97 LTV loans with a FICO score above 750 had a 13.5 percent default rate, much lower than that of 90-95 LTV loans with a 700-750 FICO score.


The GSEs' risk-based pricing means only a small group of lower-risk borrowers will end up with these loans.

This analysis tells us that there is likely to be minimal impact on default rates, as low-down-payment GSE lending gravitates toward borrowers with otherwise strong credit profiles. And this makes sense, because GSE loans are priced on the basis of risk (including loan-level pricing adjustments and mortgage insurance costs), while Federal Housing Administration (FHA) loans are not. Thus, borrowers with high LTVs and low FICO scores will find it more economically favorable to obtain an FHA loan.

Furthermore, in recent years only a minuscule share of these loans were put back by Fannie Mae following a default, an action taken when Fannie determines that a delinquent loan was irresponsibly underwritten. The putback rate on 95-97 LTV loans over the entire 1999-2013 period was 0.5 percent, little different from the 0.4 percent rate for the 90-95 LTV bucket.

Those who have criticized low-down-payment lending as excessively risky should know that, if the past is a guide, only a narrow group of borrowers will receive these loans, and the overall impact on default rates is likely to be negligible. This low-down-payment lending was never more than 3.5 percent of the Fannie Mae book of business, and in recent years it has been even less. If executed carefully, this constitutes a small step forward in opening the credit box -- one that safely, but only incrementally, expands the pool of who can qualify for a mortgage.


Taz George, Laurie Goodman, and Jun Zhu are researchers in the Urban Institute's Housing Finance Policy Center. This piece originally appeared on the Urban Institute's MetroTrends blog.

Tie the 401(k) Limit to the Minimum Wage

Elliot Schreur - November 4, 2014

The IRS just raised the maximum annual 401(k) contribution from $17,500 to $18,000 to keep pace with inflation. Saving is a good thing, and Americans plainly don't do enough of it, so maintaining a high contribution limit is good news, right?

Wrong. The limit is actually a part of our wealth-inequality problem. The government goes to great lengths to help the very wealthy save more money while neglecting everybody else. Rather than making sure the maximum contribution keeps pace with inflation, as the IRS is currently required to do by statute, we should lower the limit significantly.

How, exactly, does a generous 401(k) maximum give the wealthy a leg up? Because when it comes down to it, almost no one actually maxes out their contributions. The few who do tend to have very high incomes and are more likely to save at higher rates anyway.

According to the Urban Institute, just 6 percent of 401(k) participants are affected by the contribution cap. Another study found that only 8 out of 645 employees surveyed (or 1 percent) were affected. Automatically raising the contribution cap provides benefits only to a small minority of the very highest earners and does nothing for more than 90 percent of American families.

The households we're talking about simply cannot be described as middle-class. Consider that the median retirement-account balance for all working-age households is $3,000. That means your average U.S. household has the same amount saved in total as a maximum contributor saves in two months. And if middle-class means middle-income, it's worth noting that over 60 percent of working-age households making below the median income don't even have a retirement account, let alone save $18,000 each and every year.

It's no wonder high-income households contribute more to their retirement accounts: They get far more benefit from saving in them. Every $100 saved in a 401(k) by an earner in the top tax bracket -- 39.6 percent -- provides an immediate benefit of $39.60. (Those who contribute the maximum can pocket up to $7,128.) By contrast, a single mother making minimum wage will earn $15,080 a year, which is less than the contribution cap. If that struggling mother is resourceful and fortunate enough to save $100 in a 401(k), the tax code will provide an immediate tax benefit of exactly $0 because she has no income-tax liability.
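The arithmetic behind these figures is a direct application of the marginal tax rate to the amount deferred. Here is a minimal sketch reproducing the numbers in this paragraph, along with the $3,000 median-balance comparison above; it is a simplification that ignores payroll taxes, state taxes, and the nonrefundable Saver's Credit:

```python
def immediate_tax_benefit(contribution: float, marginal_rate: float) -> float:
    """Income tax deferred in the year of a traditional 401(k) contribution:
    the contribution times the saver's marginal rate. Simplified: ignores
    payroll taxes, state taxes, and the nonrefundable Saver's Credit."""
    return contribution * marginal_rate

print(immediate_tax_benefit(100, 0.396))     # $39.60 for a top-bracket earner
print(immediate_tax_benefit(18_000, 0.396))  # $7,128.00 at the 2015 cap
print(immediate_tax_benefit(100, 0.0))       # $0.00 with no income-tax liability

print(7.25 * 40 * 52)   # $15,080: a year of full-time, minimum-wage earnings
print(18_000 / 12 * 2)  # $3,000: two months of maximum contributions, equal
                        # to the median household's total retirement savings
```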

All told, the top 5 percent of earners receive 40 percent of all the federal government's retirement subsidies. The bottom 60 percent get 7 percent.

In 2015, the overall yearly cap on employer and employee contributions to retirement accounts, which includes the $18,000 employee contribution limit, will be $53,000. To get a sense of how few taxpayers would be affected by lowering this amount, consider a proposal from the Urban Institute to reduce the overall cap by about 62 percent, to $20,000. It's estimated that reducing the cap by this much would increase taxes on just 3 percent of U.S. taxpayers in 2015. Sure, this is likely to be an unpopular proposition with that 3 percent, but the tax dollars recovered by eliminating this regressive subsidy could be redirected to provide more effective saving supports to the workers who most need help saving for retirement.

$20,000 isn't much more than $15,080 -- so why not link tax-preferred savings for the wealthy to our minimum-wage-earning single mother's yearly income? This would have the dual benefit of cutting down government inefficiency and forcing policymakers to give some help to working families by raising the minimum wage if they really want to provide more subsidies to the rich.

401(k) contribution limits affect a small minority of American workers. The government should stop focusing on maintaining the value of this wasteful subsidy and instead help the majority of American workers build sufficient assets to retire with dignity.

Elliot Schreur is a policy analyst with the Asset Building Program at New America.

W. Bradford Wilcox & Robert I. Lerman - November 3, 2014

When it comes to inequality and mobility, there is a new smell of defeat in America. Usually, the counsel of despair highlights two market forces: the growing polarization of the labor market, with middle-class jobs falling by the wayside, and the increasing share of income going to the rich. For some, more government spending and more taxes are clearly the solution. Others see less government spending and regulation as the best way to revive the economic fortunes of the country.

But few are talking about the erosion of a fundamental building block for economic opportunity in America: the intact, married-couple family.

The most recent example is last month's speech by Federal Reserve chair Janet Yellen, who suggested that rising income inequality and stagnant wages among American families threaten fundamental American values. Her sensible prescription focused on four "significant sources of opportunity for individuals and their families" -- family financial resources, affordable higher education, business ownership, and inheritances. Yet Yellen said virtually nothing about the breakdown of the American family.

Unfortunately, this oversight is all too typical of America's public conversation about economics. In our new report, "For Richer, for Poorer," we demonstrate that the nation's half-century retreat from marriage -- marked by increases in family instability and single parenthood -- is clearly linked to growing economic inequality, stagnant family incomes, and the declining economic status of men.

Take growing income inequality among American families with children. We find that about one-third of the growth in family-income inequality since 1979 is connected to the fact that fewer American parents are, and stay, married. Further widening income differentials is the fact that married parenthood is increasingly the preserve of college graduates, as working-class and poor Americans are much less likely to get and stay married.

Or take stagnant incomes. We estimate that the growth in the median income of families with children would be 44 percent higher if the United States enjoyed 1980 levels of married parenthood today.

Marriage and family structure matter a great deal for economic mobility as well. We find that young men and women work significantly more hours and make more money if they were raised in an intact family with both of their biological or adoptive parents (nearly all of whom are married), compared with their peers raised by single parents. Growing up with both parents also increases high-school-graduation and marriage rates and lowers rates of unwed parenthood. Regardless of the family situation in which men were raised, getting married themselves sharply increases their hours of work and their individual earnings. And middle-aged men and women enjoy markedly higher family incomes -- a gain of at least $44,000 -- if they are married. We also find that these economic benefits of marriage extend to black, Hispanic, and less-educated Americans. For instance, black men enjoy a marriage premium in their personal income of at least $12,500 compared with their single peers.

Interestingly, we found that family structure is often about as predictive of economic outcomes as other factors that attract more attention, including race, education, and gender. The connection between family structure, economic opportunity, and economic success in America is remarkably strong.

But even in the face of findings like these, many have been silent about the family-structure effect. Others seem to despair that anything can be done to revive the fortunes of marriage in 21st-century America. While Isabel Sawhill, the director of the Brookings Center on Children and Families, acknowledges that "marriage has many benefits for both adults and children" and is "one of the best antipoverty policies in existence," she thinks marriage may be beyond saving, arguing that we need to move "beyond marriage" and focus on reducing the prevalence of unplanned parenthood with more contraception, thereby ensuring at least that parents are better educated and better off when children come along.

Such a strategy is problematic, however, because no substitute exists for the intact, married family when it comes to boosting future economic opportunities for children, strengthening the commitment of men to the labor force, and ensuring the economic welfare of families. Indeed, even in Sweden, where the social-welfare state is strong, and where family-structure inequality is also growing, children from single-parent families are significantly less likely to do well in school and more likely to be poor. One study found that child poverty in Sweden was three times higher in single-parent families than in two-parent families.

So, if political, business, and thought leaders, economists, religious leaders, and educators are serious about confronting economic inequality, social immobility, and stagnating wages -- i.e., about reviving the American Dream -- they also need to focus on how to reverse the retreat from marriage in America. The alternative is a future of durable economic inequality, where college-educated Americans and their kids enjoy strong and stable marriages that boost their economic opportunities, and everyone else faces an increasingly unstable family life that puts them at a permanent disadvantage. We cannot think of anything less progressive.

W. Bradford Wilcox, a professor of sociology at the University of Virginia, directs the Home Economics Project at the American Enterprise Institute and the Institute for Family Studies. Robert I. Lerman is an institute fellow at the Urban Institute and a professor of economics at American University. They are co-authors of a new AEI-IFS report, "For Richer, for Poorer: How Family Structures Economic Success in America."

Do Americans Know How to Use Insurance?

Michelle Andrews, Kaiser Health News - November 1, 2014

They know less than they think they know. That's the finding of a recent study that evaluated people's confidence about choosing and using health insurance compared with their actual knowledge and skills.

As people shop for health coverage this fall, the gap between perception and reality could lead them to choose plans that don't meet their needs, the researchers suggest.

"There's a concern that people who don't have much experience with health insurance don't protect themselves financially, and then something happens," says Kathryn Paez, a principal researcher at the American Institutes for Research who co-authored the study. "So they're learning through hard knocks."

The nationally representative survey of 828 people aged 22 to 64 is part of a project to develop a standardized questionnaire that researchers, health plans and providers can use to assess people's health insurance literacy.

The study found, for example, that while three-quarters of Americans say they're confident they know how to use health insurance, only 20 percent could correctly calculate how much they would owe for a routine physician visit. Many people don't understand commonly used terms such as "out-of-pocket costs," "HMO" and "PPO," according to the study.
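What such a calculation involves depends on the plan's cost-sharing design, which is part of why so many people get it wrong. The sketch below is purely illustrative; every plan parameter in it (allowed charge, copay, deductible, coinsurance) is a hypothetical example, not a real plan's terms:

```python
def visit_cost(allowed: float, deductible_left: float,
               coinsurance: float, copay: float | None = None) -> float:
    """Patient's share of an in-network office visit, simplified.

    With a flat copay, the copay is the whole answer. Otherwise the patient
    pays the full allowed charge until the remaining deductible is exhausted,
    then the coinsurance share of the rest. (Ignores out-of-pocket maximums
    and out-of-network billing.)"""
    if copay is not None:
        return copay
    toward_deductible = min(allowed, deductible_left)
    return toward_deductible + (allowed - toward_deductible) * coinsurance

print(visit_cost(allowed=120, deductible_left=0, coinsurance=0.2, copay=25))  # 25.0
print(visit_cost(allowed=120, deductible_left=500, coinsurance=0.2))          # 120.0
print(visit_cost(allowed=120, deductible_left=50, coinsurance=0.2))           # 64.0
```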

The study also found that certain groups of people tended to have a tougher time using health insurance, including young people, minorities, those with lower income or educational levels, and those who used health care services infrequently.

People who visit the doctor occasionally but have never been hospitalized or visited the emergency room may be overconfident they understand how health insurance works, says Paez. Likewise, people who belong to integrated health care systems where providers are generally on staff may not realize the potential complications of in-network and out-of-network coverage, among other things, she says.

More comprehensive education could help close the gap between what people think they know about health insurance and what they actually know. In the meantime, the issue brief accompanying the study includes a checklist to help consumers choose a plan.

This piece originally appeared at Kaiser Health News, a nonprofit national health policy news service, where Michelle Andrews is a reporter.
