Don't Wait for Washington to Fix Health Care

Paul Howard- July 29, 2016

Democratic and Republican governors know that rising health-care costs — for public employees and those with Medicaid — are, increasingly, restricting spending on other state priorities, from education to roads and bridges. They also know a bitterly divided Washington is unlikely to provide any help any time soon. Yet there are plenty of levers governors can pull to help bring costs in line, especially by increasing competition in health-care markets.

The challenge is stark. Medicaid spending — up 14 percent last year, according to a Kaiser Family Foundation report in October — is the second largest item on most state budgets. States that expanded their programs under Obamacare will see a decline in federal matching funds beginning next year, while non-expansion states are still dealing with the influx of millions of new Medicaid enrollees prompted by the law. In addition, state employee health coverage continues to be a significant cost center for many states and cities, with the total unfunded liability for state employees’ health benefits estimated at over $600 billion.

But states are not powerless. They retain core regulatory powers over health-care services, including the licensing of doctors and nurses as well as authorization of infrastructure, including hospitals and ambulatory surgery centers. State governments can also turn some of their liabilities — for instance, large pools of very generously insured public employees — into market leverage. By embracing contracting and benefit design strategies that are common in the private sector, they can get lower prices and better outcomes for every tax dollar spent, while leveling the playing field for the entire payer community.

With these unique opportunities in mind, here are five strategies that innovative governors can use to help transform state health-care markets:

1. Incorporate reference pricing for common procedures and tests into state benefit designs. Reference pricing (RP) allows employers to set a price for a given service and then offer employees a list of providers who accept the price. California’s Public Employees Retirement System (CalPERS) used RP to correct stark price variations for knee and hip replacements, which helped reduce orthopedic surgery prices across the state. In the first two years of the program, the state saved $13.4 million. The Health Care Cost Institute estimates that as much as 43 percent of spending through employer-based coverage is attributable to potentially “shoppable” services that could benefit from tools like reference pricing.  

2. Ban anti-tiering provisions. Insurers can steer patients toward less expensive in-network hospitals by charging lower co-pays there (a design called tiered networks), but large hospital systems can use their leverage to block such arrangements. Massachusetts passed a law in 2010 that required public plans to offer tiered networks. In Blue Cross/Blue Shield of Massachusetts plans, for instance, preferred hospitals carried cost sharing of just $170, middle-tier hospitals $360, and non-preferred hospitals $1,000. Researchers at the Commonwealth Fund found that from 2009–2012, “44 percent of [Massachusetts] hospitals changed tiers, mostly from middle to preferred, and nearly all because they decreased their prices.”

3. Drive price transparency by setting up an all-payer claims database (APCD). APCDs collect information on all health-care claims paid from all payers. Pricing information allows consumers, employers, and providers to shop for the best values. In 2003, New Hampshire became one of the first states to implement an APCD. In 2007, the state launched NH Health Cost, a pricing website with provider- and insurer-specific prices. NH Health Cost spurred competition among hospitals, leading to widespread uptake of tiered benefit plans and sharp growth in high deductible health plans. 

4. Expand Access to Direct Primary Care, including for Medicaid patients. Fourteen states have legalized direct primary-care (DPC) practices, where primary-care physicians set monthly fees for 24-7 physician access, along with other basic services. DPC practices don’t accept insurance, but this allows them to slash overhead costs (in some cases, by as much as 40 percent); many offer care for less than $100 per month. When DPC memberships are wrapped around high deductible health plans (which Obamacare allows) and health savings accounts, consumers can get the best of both worlds: access to routine primary care at an affordable cost, with protection from catastrophic expenses in the event of a true emergency or serious illness.

5. Repeal regulations that hamstring competition. States should repeal certificate of need laws, which protect incumbent hospitals from competition, and prohibitions on the corporate practice of medicine, which prevent for-profit companies from buying and reorganizing health care firms. 

Each of these reforms can help stimulate what health care has long lacked: a more transparent, competitive market for goods and services that attracts the best in new health-care start-ups and investment. State Medicaid programs will also benefit from a more innovative, value-focused health-care system. Best of all, these reforms are eminently non-partisan and don’t require any permission from Washington.

Our national debates about health care tend to focus on getting people coverage. But to make the cost of coverage sustainable in the long run, we need much greater competition in how health care is delivered and priced. Here, states are best positioned to take the lead.

Paul Howard is director of health policy at the Manhattan Institute. 

Give States the Freedom to Regulate Drones

Jason Snead & John-Michael Seibler- July 28, 2016

In late June, a drone nearly collided with an aircraft carrying 500-gallon buckets of water to douse a wildfire in southern Utah. Incidents like this have been used to feed the narrative that drones are so dangerous that they must be regulated by the feds, leaving no room for states to act on their own. 

Utah has disproved that claim.

Within weeks of the near collision, Utah Governor Gary Herbert signed into law a bill dealing with drone interference in emergency firefighting operations. Utah’s prompt action shows that states can and do act to protect the public without prodding from or preemption by Washington. 

The New Law

The new law makes it a crime, punishable by imprisonment or a fine of $2,500, to enter airspace designated as a wildland fire scene by a federal, state, or local government entity. There’s an exception for individuals operating unmanned aircraft systems (UAS) “with the permission of, and in accordance with the restrictions established by, the incident commander.”

The law also imposes a term of imprisonment or up to $5,000 in fines for “recklessly” operating a UAS in “an area under a temporary flight restriction” and interfering with any aircraft carrying a “water or fire retardant.” If the UAS makes physical contact with a manned aircraft, it would constitute a third degree felony, punishable by imprisonment or a fine of up to $10,000. And if the UAS causes a manned aircraft to crash, the charge rises to a second degree felony, punishable by imprisonment or a fine of up to $15,000.

And the cost of drone interference could go far, far higher. The law authorizes judges to require a convicted person to repay “damages to person or property, the costs of a flight, and any loss of fire retardant.” According to Gov. Herbert, the costs of fighting the small wildfire in Utah would have been several million dollars, but wound up going "north of $10 million because we had to ground aircraft all because of a drone." 

Finally, the Utah law authorizes the incident commander to “neutralize” a drone by “disabling or damaging it” or by “interfering with” or “otherwise taking control” of the drone.

Unfortunately, however, according to the FAA, drones of all shapes and sizes are treated no differently under federal law than full-blown aircraft, such as firefighting jets and helicopters, which Utah’s law aims to protect. As a result, a private party who willfully damages a UAS by any of those means may be committing a federal crime under 18 U.S.C. § 32, with penalties of up to 20 years in federal prison and over $250,000 in fines. (In some circumstances a valid claim of defense of self or property may absolve private parties who have damaged a drone from civil or criminal liability, but the rules are far from clear.)

A Novel Harm

Lately, there have been numerous reports of drones interfering with firefighting operations. To the extent that such reports are reliable, they suggest that drones pose a novel harm to pilots engaged in risky, low-altitude firefighting operations. Given the uniqueness of the threat, and the apparent lack of any prior law addressing it, lawmakers are well within their authority to respond by passing new laws to protect public safety, as Utah has done.

It’s worth noting, however, that such interventions differ dramatically from efforts to, for example, impose drone-specific criminal penalties for peeping Toms who use quadcopters to invade the privacy of others. That invasion could just as easily take place in other ways, such as by climbing a tree and snapping a picture on a smartphone. In situations like these — where the underlying harm is technology-agnostic — legislators should refrain from passing duplicative criminal laws and should, instead, punish the conduct regardless of how it is performed.

Federal Preemption

Earlier this year, the U.S. Senate considered an FAA reauthorization bill that would have barred states and localities from “enact[ing] or enforc[ing] any law, regulation, or other provision … relating to the … operation … of an unmanned aircraft system.” Had that provision been enacted, federal preemption in the drone space would have been effectively universal.

Utah demonstrates why such drastic expansion of federal authority is neither warranted nor desirable. Not only are states able to deal with any truly novel dangers posed by drones, they can do so far more quickly — and in a manner more reflective of their particular local concerns and interests — than Congress or federal regulatory agencies.

Last October, Senator Jeanne Shaheen (D-N.H.) proposed a bill making it a federal crime to use a drone to “interfere with fighting fires affecting Federal property or responding to disasters affecting interstate or foreign commerce.” That bill has gone nowhere, so far; if federal lawmakers had passed the aforementioned preemption provisions, Utah would still be waiting for federal action.

Utah is pioneering reasonable laws that criminalize or otherwise sanction dangerous conduct made possible only by the rise of drone technology. In doing so, Utah is proving that the nation does not need a single set of federal rules in the drone space.

Jason Snead is a policy analyst in The Heritage Foundation’s Edwin Meese III Center for Legal and Judicial Studies, where John-Michael Seibler is a legal fellow.

Lindsey Burke, Jamie Bryan Hall & Mary Clare Reim- July 27, 2016

The average college student spends only a small portion of the day focused on education-related activities. As lawmakers talk about forgiving student loan debt, the Heritage Foundation’s new report shows how college students actually spend their time, and therefore what taxpayer dollars would be financing.

Lindsey Burke is the Will Skillman fellow in education policy at The Heritage Foundation. Jamie Bryan Hall is a senior policy analyst in the Center for Data Analysis at The Heritage Foundation. Mary Clare Reim is a research associate in domestic policy studies at The Heritage Foundation.

Sean Kennedy & Parker Abt- July 25, 2016

Although homicide rates are not rising everywhere, they’re up in most major U.S. cities — and dramatically so. Where homicide rates are down, they’re down only slightly. Preliminary FBI data for 2015 show that the aggregate homicide rate across all jurisdictions rose 6.2 percent for the first half of 2015 over the first half of 2014. Other surveys have found that the rate rose approximately 16–17 percent in 2015 over 2014 in major cities. Today, the Major Cities Police Chiefs Association released data for the first half of 2016. It confirms this trend: homicide rates are rising. If the increase holds, 2016 will see the largest jump in murders since 1960, with the exception of 2015, which itself saw the largest increase in decades.

Sean Kennedy is a writer and researcher based in Washington, D.C. Parker Abt is a student at the University of Pennsylvania.

New E-cigarette Regulations Benefit Big Tobacco

Jonathan Nelson- July 22, 2016

The e-cigarette industry has blossomed into a $2 billion business, providing thousands of manufacturing and retail jobs while offering smokers a safer alternative. Unfortunately, this thriving market is under threat due to new rules finalized in May by the U.S. Food and Drug Administration (FDA).

The new rules expand the regulatory authority of the FDA to cover all tobacco or tobacco-related products — including e-cigarettes, many of which contain no tobacco whatsoever.

Like cigarettes, other forms of tobacco consumption can be addictive because of nicotine — which is why the FDA has asserted regulatory authority. While most e-cigarette products do not contain tobacco, they still use flavored juices that contain nicotine. Smokers can use e-cigarettes to supplement or, in some cases, replace their regular tobacco usage. The FDA worries that these products will increase nicotine addiction rates, especially among youth, and thereby lead to increased tobacco usage overall.

While it could be argued that e-cigarettes and other tobacco-related products should be regulated in a way that puts them on an even playing field with cigarettes, the rules do not limit the FDA to making regulations fair. The rules give the FDA broad authority to regulate tobacco products in whatever way the agency wants. For many of these rules, the FDA will issue “guidance” rather than spelling out the restrictions in the finalized regulations. Unlike formal rules, guidance does not need to go through a formal approval process. Thus, there is simply no oversight to ensure equitable regulation.

Most e-cigarette producers and retailers are small businesses, not multibillion-dollar corporations like Philip Morris. Many e-cig and vape shop owners produce their own juices and flavors, which, according to the new rules, must soon be individually tested and approved by the FDA.

Open-ended regulations and big business do not bode well for a competitive market. As in other industries, large and powerful tobacco companies are able to engage in political rent seeking by pushing through policies that benefit them and harm their competition.

The motivation behind Big Tobacco’s e-cig scare is easy to spell out: e-cigarettes pose a threat to its business. According to an EU study, e-cigarettes have led as many as 30 percent of cigarette smokers to reduce their cigarette consumption or quit altogether.

The rules finalized by the FDA will likely decimate much of the e-cigarette industry, from producers to retailers, potentially eliminating tens of thousands of jobs. These small businesses are simply not equipped to absorb the compliance costs the new rules impose; many shops and producers will have to shut down rather than attempt to comply. And less competition is good news for Big Tobacco — especially if vapers transition back to traditional cigarettes.

Some e-cigarette companies and advocacy groups are beginning to fight back, including the vaping industry’s Right to Be Smoke-Free Coalition. Five lawsuits have already been filed against the FDA over the rule. The groups argue that the FDA has no rationale for regulating non-tobacco products in the same way as cigarettes. We can only hope the judges agree.

In an attempt to promote public health, the FDA is limiting consumer choice through unnecessary and prohibitive regulations. Without the option of using safer alternatives, smokers addicted to nicotine may be left with only two choices: quit cold turkey or continue to consume traditional cigarettes — just what Big Tobacco wants.

Jonathan Nelson is a Young Voices Advocate and a graduate of Grove City College.

Does the Democratic Platform Ignore Union Hypocrisy?

Heather Greenaway- July 21, 2016

This week, the Democratic Party released its proposed 2016 platform, which includes the adoption of a $15 minimum wage — despite the fact that Hillary Clinton has admitted concerns that it will cost the economy jobs. Sounds like a big win for unions, who have been pushing for a higher minimum wage for years, right? As Big Labor applauds the inclusion of this platform commitment, however, the win may backfire.

It appears that this platform plank would nullify a number of exemptions and carve-outs unions negotiated for themselves at the state and local levels. Few realize this, but many U.S. cities, such as San Francisco, Oakland, Richmond, Long Beach, San Jose, Milwaukee and Chicago, exempt organized labor from their minimum wage mandates. That’s right — for years, labor leaders have brokered backroom deals to be granted exemptions, undermining the same minimum wage policies that they have spent tens of millions of dollars to publicly support. The U.S. Chamber of Commerce has compiled the full list of exemptions.

The existence of these escape clauses proves what many in the free-market movement have said all along: Big Labor bosses don’t care about American workers; they only care about themselves and their bottom line. In abject hypocrisy, unions’ national push for what they deem a “fair wage” does not include that wage for their own members.

A step ahead of the game, organized labor has used the push for higher wages as a manipulation tool. All around the country, they encourage non-unionized workplaces to agree to union representation by presenting themselves as a lower-cost labor option to hotel owners, fast food chains, and hospitals. They argue that, with their secured exemptions, employers can pay their unionized employees less, making unionization seem more appealing.

For example, in Los Angeles, the Service Employees International Union (SEIU) — which just endorsed Hillary Clinton for president — spent millions campaigning for a $15 wage, and then asked the city council to exempt union shops from the new law. Already in Los Angeles hotels, there remains inequity between the minimum wages of unionized and non-unionized hotel workers — indeed, at the Sheraton Universal hotel, unionized employees make only $10 an hour, far less than the city’s $15.37 minimum wage for hotel workers. They’ve got unions to thank: their counterparts at the Hilton next door earn more than 50 percent more.

It’s evident Big Labor’s push to increase the minimum wage is not about providing workers a so-called “living wage,” but is, rather, a coordinated, disguised effort to boost union membership — putting them in an easier position to organize the workplace with the owners’ blessing. This is one of the last bargaining chips union bosses have to combat their drastically declining membership, as workers realize unions sell an outdated product few want or need, anymore. 

Perhaps the Democratic Party is starting to catch on. While I do believe a $15 minimum wage will be a disaster for our economy — costing nationally between three million and five million jobs, by conservative estimates — I can appreciate that their platform may finally be crying foul on union hypocrisy. So next time you hear union protestors chant: “A fair day’s wage for a fair day’s work,” remind them that they may not be getting “fairness” at all.

Heather Greenaway is a spokesperson for the Workforce Fairness Institute (WFI).

Against Regulatory Complexity

Patrick A. McLaughlin & Chad Reese- July 20, 2016

With the July 21 anniversary of the Dodd-Frank Wall Street Reform and Consumer Protection Act now upon us, it’s a good time to reflect on how this type of Byzantine legislation spawns a convoluted network of tangled regulations. 

When recently unveiling his Financial CHOICE Act, House Financial Services Committee Chairman Jeb Hensarling highlighted a key principle behind his efforts to combat this overgrowth: “Simplicity must replace complexity.” The chairman’s focus on regulatory complexity is appropriate.

In many ways, regulations are like a computer’s operating system, establishing processes and parameters within which programs must operate. But anyone who has undergone the experience of “upgrading” an operating system only to find her computer sluggish and unresponsive knows that complexity is not always a desirable feature. Steven Teles, a political scientist with Johns Hopkins, made a similar comparison when he famously referred to American policy as a “kludgeocracy,” an ever-expanding series of “inelegant patch(es)” meant to solve short-term problems, but which ultimately hinder system performance.

A recent analysis showed that Dodd-Frank accounted for nearly 30,000 new regulatory restrictions — more than all other laws passed during the Obama administration combined. These new regulations, authorized by a Congress in crisis mode, were piled on top of more than one million existing regulatory restrictions. Even former Senator Chris Dodd, one of the bill’s namesakes, admitted just after the bill’s passage that “no one will know until this is actually in place how it works.” Scholars subsequently argued that the regulatory uncertainty exacerbated by Dodd-Frank could explain the slow recovery. At the time, however, some facts were clear: Dodd-Frank would increase regulatory complexity, induce uncertainty, and line the pockets of regulatory compliance experts.

To an unprecedented degree, simply ascertaining the relevance of regulations stemming from an act of Congress now requires regulatory compliance expertise. To illustrate, consider a simple visualization of regulatory restrictions originating from another major financial regulatory law, the Sarbanes-Oxley Act of 2002. Sarbanes-Oxley, which dealt with audits and financial reporting, affected public companies in all sectors of the economy and induced some regulations that specifically targeted a handful of industries. Textual analysis of those regulations shows that five industries were directly targeted by regulations from two federal agencies:

[Chart: regulatory restrictions stemming from Sarbanes-Oxley, by federal agency and targeted industry]

Sarbanes-Oxley was, of course, a significant regulatory overhaul in its own right. In 2012, the Wall Street Journal Editorial Board went so far as to call it one of the reasons for slow economic growth. Furthermore, much of the effect of Sarbanes-Oxley stems from the creation of the Public Company Accounting Oversight Board, a regulatory entity that awkwardly straddles the public-private divide with considerable control over auditing firms and — indirectly — the public companies they audit.

Nonetheless, even allowing for the additional complexity of referencing accounting standards that are not formally published as regulations, Sarbanes-Oxley is a model of simplicity compared to Dodd-Frank. Consider a similar visualization of the agency-industry relationships emerging from Dodd-Frank — one limited, for the sake of legibility, to only 10 agencies and 10 industries. In fact, at least 32 different agencies have promulgated rules under the statutory authority of Dodd-Frank.

In the post-Dodd-Frank world, understanding which regulations are relevant to a business’s activities has become immensely more difficult. Many sectors of the economy were newly exposed to regulations from a multitude of unfamiliar agencies. Duplicative and contradictory rules became a fact of life.

In 1788, James Madison worried that laws may become “so voluminous that they cannot be read, or so incoherent that they cannot be understood.” He was right to worry: current regulatory code is so complex and voluminous that, rather than spend three years reading it, I helped create text analysis software that uses machine learning to assess the probability that a given regulatory restriction targets a specific industry. But even with the insights of machine learning and text analysis software — or regulatory compliance experts who bill by the hour — considerable uncertainty remains. Regulatory agencies, themselves, are, increasingly, unfamiliar with their own regulations.
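To give a flavor of what such software does, here is a minimal sketch of a text classifier that scores the probability that a regulatory restriction's language targets a particular industry. The training snippets and labels below are invented for illustration; the real pipeline is far larger and more carefully validated.

```python
# Minimal sketch: score the probability that a restriction's text targets a
# given industry. Snippets and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

restrictions = [
    "a depository institution shall not engage in proprietary trading",
    "swap dealers must register with the commission",
    "the issuer shall disclose executive compensation in its annual report",
    "broker-dealers may not extend credit beyond the margin requirement",
]
targets_banking = [1, 0, 0, 0]  # hypothetical labels: does the text target banking?

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(restrictions, targets_banking)

# Estimated probability that a new restriction targets the banking industry.
print(model.predict_proba(["each bank shall maintain a minimum leverage ratio"])[0, 1])
```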

When there are more rules in place than anyone can read, and interpretation of those rules and their scope is determined by the regulators themselves, businesses must pay for experts to filter the rules that are truly relevant from the rest. Meanwhile, businesses must also keep an eye on new rules coming down the pipeline and the possibility of reinterpretation of old rules. For both federal regulations and statutes, an irrelevant requirement only remains irrelevant until a bureaucrat, or a federal prosecutor, decides otherwise.

Regulatory complexity engenders uncertainty. That may not be a problem for some politicians; but for anyone who must comply with regulations, complexity and uncertainty can be paralyzing. Simplifying the complex regulatory regime imposed by Dodd-Frank is an application of another lesson from the world of computer programming: iterative design can correct serious errors and reduce unnecessary complexity.

Patrick A. McLaughlin is a senior research fellow with the Mercatus Center at George Mason University. Chad Reese is the assistant director of outreach for financial policy at the Mercatus Center.

Democrats' Climate Agenda Deserves a Conservative Response

Devin Hartman- July 19, 2016

As the Democratic Party looks to advance what has been characterized as the “most progressive platform in the party's history,” there's never been a more urgent time for Republicans to revitalize their energy and climate agendas.

The Democrats’ formal 2016 platform will not be adopted until delegates convene for the Democratic National Convention, scheduled for July 25 to July 28 in Philadelphia. But a leaked version of the platform draft obtained by NBC News shows an intent to double down on “climate justice” and proposals to transform America into a “clean energy superpower,” long-standing priorities of the Democratic Party. Despite Republicans' best efforts over the years, there has been a pileup of regulations and market-stifling subsidies aimed at achieving these goals.

The Democrats’ platform correctly diagnoses the benefits of a renaissance in energy technology, including technologies to combat climate change. But by refusing to bend on their ideological attachment to command-and-control solutions to exaggerated problems, party leaders may impede the very future they long to see. The Republicans, meanwhile, if they hope to resist the problematic elements of the Democratic plan, must counter with their own pro-market energy and climate platform.

There is some good news in the platform. Democrats, fortunately, appear to have resisted disastrous “keep it in the ground” proposals, including proposed bans on hydraulic fracturing and on fossil-fuel leasing on federal lands. Such proposals frame fossil-fuel use as a moral bad, disregarding the enormous economic benefits that fossil fuels provide society.

But the platform makes clear the party's intent to regulate, mandate, and subsidize our way to a clean-energy future. It is heavy on symbolism and short on cost-effective measures to reduce pollution. It reiterates support for the Clean Power Plan and for rejecting the Keystone XL pipeline. Neither policy would make a significant difference in combating climate change, and both set poor policy precedents. The Clean Power Plan sets reduction targets for emissions that contribute to climate change. But these emissions reductions will largely or entirely occur anyway, thanks to prevailing economic forces, such as cheap natural gas replacing coal-fired power generation.

The platform offers little more than green industrial policy. It fails even to discuss a market-based approach to mitigate climate change. The direction is, instead, to ram politically preferred technologies onto the electricity grid, disregarding the economic processes that ensure the grid stays reliable and affordable. It also offers support for policies to extend subsidies that cost taxpayers billions, distort energy markets, and deter innovation. 

The platform altogether neglects innovation, the most vital ingredient to worldwide climate progress. Despite a common belief that renewable-energy technologies already are cost-competitive, markets tell us these technologies still have a ways to go. Clean-energy technologies must become broadly competitive before we will see deep emissions cuts in developing countries, where the rubber hits the road on climate change.

Particularly troubling is the draft platform's addition of a plank focused on investigating those who disagree with the literal party line on climate change. The document couches this as a request that the Justice Department “investigate allegations of corporate fraud on the part of fossil fuel companies accused of misleading shareholders and the public on the scientific reality of climate change.”

Calling out intentional distortions is valid, but legally prosecuting others' legitimate views is a fear tactic that makes a mockery of the First Amendment. It's also prone to backfire, as it's more likely to trigger discord and retaliatory investigations than to foster the civil discussion America needs. Climate skeptics should be engaged with scientific evidence, not scared into submission.

Conservatives know the government has no business dictating what our energy mix should be or curtailing the rights of those who view things differently. The appropriate role of government is to ensure markets perform well. Competitive energy markets do perform well, but we need to ensure that they account for the societal impacts of pollution. Many conservative economists agree that the best remedy is a revenue-neutral carbon tax.

The time has come for Republicans to step into the climate leadership spotlight. A conservative climate-change platform can simultaneously shrink government, grow the economy, enhance choice, and deliver superior environmental results. Innovation should be the Republican energy mantra.

Such an approach begins with freeing — not restricting — the energy sector. States should follow Texas' lead and embrace competitive electricity markets and discard the choice- and innovation-stifling model of monopoly utility regulation. States and Congress should thoughtfully remove mandates and subsidies for government-preferred resources. Congress should ensure that competitive electricity markets under federal oversight encourage innovation and reward unconventional resources fairly. This will remove regulatory barriers to clean technologies and level the playing field for all technologies. 

Putting a price on pollution is central to sensible energy and environmental policy. And it’s an idea that some conservatives, at least, are warming to. As Republicans craft their own platform, they have an opportunity to advance the idea that the market, not the government, should be put to work addressing climate change.

Republicans also should double down on what they do best: promoting economic growth domestically and abroad. Preparing for the inevitable effects of climate change is a piece of climate policy that gets grossly overlooked on both sides. Poverty exacerbates the human impacts of climate change. So the wealthier we are, the better we can adapt.

American capitalism is the greatest wealth and innovation engine the world has seen. Conservatives should set their sights on freeing markets and pricing pollution as a way to tackle climate change — and then tell government to get out of the way.

Devin Hartman is electricity policy manager and senior fellow at the R Street Institute.

I Can't Believe It's Not Science

Michelle Minton- July 16, 2016

Consuming butter does not increase the risk of heart disease, a recent study found. Those who believed in the accuracy of U.S. government dietary guidelines — which for decades have demonized saturated fats — were doubtless taken by surprise. But for those of us who follow nutrition and politics, it’s just another government nutritional “gospel” that science has revealed to be misguided.

Yet, government agencies continue to spend millions to nudge consumers into following guidelines that may do little to improve health for most and may even result in harm.

For nearly half a century, the U.S. Department of Agriculture (USDA) and the U.S. Department of Health and Human Services (HHS) have put out dietary guidelines telling Americans to eat less sodium, cholesterol, and saturated fat — i.e., red meat and full-fat dairy, including butter — and to eat more whole grains, fruits, and vegetables, among other directives. These recommendations emanated from hearings held in the mid-to-late 1970s by the Senate Select Committee on Nutrition and Human Needs, despite a “boisterous mob of critics,” including those within the scientific community who pleaded with the Committee to wait for more research “before we make announcements to the American public.” Committee Chairman Sen. George McGovern responded that “Senators don’t have the luxury that the research scientist does of waiting until every last shred of evidence is in.”

Since the Committee issued its report in 1977, those patient research scientists have repeatedly called into question or undermined many of the Committee’s original recommendations. Increasing the level of dietary salt, for example, appears to lead to hypertension only in a small percentage of the population; and in some, lowering dietary salt can, in fact, result in higher blood pressure. Moderate levels of dietary cholesterol no longer seem to be linked to heart disease. And full-fat dairy has been shown to reduce the risk of obesity and diabetes.

To be fair, the Dietary Guidelines Advisory Committee (DGAC), which is composed of a handful of experts, diligently evaluates the research on what constitutes a healthy diet every five years. And sometimes it alters recommendations to reflect the changing scientific understanding. For example, the most recent guidelines finally did away with limits on dietary cholesterol and backed away — ever so slightly — from previously stringent sodium recommendations. But such changes are rare and often come long after shifts within the scientific community. The real issue is that government agencies pass judgment on developing science in the first place.

Scientific progress is not achieved via committee — whether Congressional or scientific. Rather, science advances toward an understanding of reality through years — often decades — of research, with scientists fighting for their own hypotheses. They present, defend, test, and modify their ideas over time. Whichever side offers the most compelling argument “wins” by gradually becoming the predominant theory. Soon, other researchers gravitate toward that theory, basing their own research on it.

Congress, of course, is an inherently political entity. And so when it — or any other government-appointed body — privileges one theory over another, it creates bias that trickles down to the research community. The problem is not simply that the government makes decisions on the basis of imperfect information, but that government intervention, itself, can distort the development of research.  

For example, the theory that dietary fat plays a large role in cardiovascular disease was controversial in the scientific community, even as the government began relying on it to develop the first federal nutritional guidelines. In fact, a lot of the existing research contradicted it. Nevertheless, the theory flourished. Why? In part, no doubt, because researchers — many of whom rely on government grants — faced risks associated with bucking the new zeitgeist created by the government.

Fortunately, the latest dietary guidelines limit the ultra-low sodium recommendation of 1,500 mg per day to those with hypertension or pre-hypertension. But the committee members still warn that food manufacturers “should reformulate foods to make them lower in overconsumed nutrients,” including salt, to help Americans — who consume an average of 3,400 mg of sodium a day — get to their recommended limit of 2,300 mg a day. Lo and behold, the White House pushed the FDA to create “voluntary” sodium reduction guidelines for food manufacturers — all this despite the tenuous connection between higher sodium and hypertension and a recent study (commissioned by the government) that found no benefit in consuming less than 2,300 mg of sodium a day for most people.

Had the government refrained from issuing these recommendations, experts might have focused, instead, on efforts to encourage increased potassium intake by eating more fruits and vegetables. This has been shown to reduce blood pressure effectively while having fewer unintended side effects and possibly conferring unintended benefits.

There are ways that federal agencies can promote dietary advice that could benefit most of the population (such as recommendations to eat more fruit and vegetables). But, in general, nutrition is far too complex and personal an issue for a one-size-fits-all, top-down approach. It’s time for the government to relinquish its influence over the scientific and medical communities and let individuals (and their doctors) determine their own optimal diets.

Michelle Minton is the Competitive Enterprise Institute's fellow specializing in consumer policy, covering the FDA, alcohol, food, and gambling.

Free-Market Principles Should Guide Road-Funding Solutions

Jesse Hathaway- July 15, 2016

As Americans hit the road during summer driving season, one thing that’s probably not on their minds is how the maintenance of those roads gets funded.

Over the past century, when consumers stopped to fill up at gas stations, they also filled up state and federal government highway funds through excise taxes, consumption taxes (which are included in the price of goods), and motor-fuel taxes. Now, as technological advancements and changing consumer habits work hand-in-hand to reduce the volume of motor fuel purchased, government infrastructure budgets have become increasingly strained, prompting lawmakers to increase tax rates.

Instead of trying to retain the status quo by increasing taxes on declining motor-fuel sales, now is the perfect time for legislators to experiment with fairer funding ideas — using common-sense, free-market principles as a guide to road-funding success.

Step one — or, rather, step zero — is to make sure road money is actually spent on roads. Currently, 15 percent of all federal gas tax revenue — about 3 cents for every gallon of gas purchased — is diverted away from funding road construction and toward subsidizing passenger trains and other forms of government-provided transportation. That may not sound like much, but it adds up to roughly $5.6 billion in inefficient spending.
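The arithmetic behind those figures can be checked on the back of an envelope, as in the sketch below. The 18.4-cent rate is the statutory federal gasoline excise; the annual fuel volume is an assumed round number, not an official statistic.

```python
# Rough check of the diversion figures cited above.
FEDERAL_GAS_TAX = 0.184   # statutory federal gasoline excise, dollars per gallon
DIVERTED_SHARE = 0.15     # share of revenue diverted away from roads

diverted_per_gallon = FEDERAL_GAS_TAX * DIVERTED_SHARE
print(f"{diverted_per_gallon * 100:.1f} cents per gallon diverted")  # ~2.8 cents

ANNUAL_GALLONS = 200e9    # assumed annual taxable fuel volume, in gallons
print(f"${diverted_per_gallon * ANNUAL_GALLONS / 1e9:.1f} billion per year")  # ~$5.5B
```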

After patching the leaks in the pipeline between the taxes consumers pay and the benefits consumers receive, the next step is to simplify the pipeline itself. Taxes are payments, and the people paying should be the ones using the things that are being paid for. Unfortunately, that’s not the case when it comes to today’s government highway funding laws. Gas taxes are paid by everyone who purchases gasoline — not by everyone who uses the government roads. 

One very direct way to uphold this user-benefit principle — a key free-market idea — is to get rid of excise taxes and replace them with a mileage-based user fee (MBUF). The number of miles an individual travels is much more directly connected to the miles of roads “consumed.”

With MBUFs, the fee can vary with the congestion rate of particular highways — just as the price of a good in a free market increases as demand spikes — without violating consumers’ privacy.

In Oregon, the state government has been test-driving such a program, alleviating potential privacy concerns by simply keeping track of how much is owed, rather than when or where people drive. Marc Scribner, a research fellow with the Competitive Enterprise Institute, argues that Oregon’s program proves privacy and user-fee funding are not mutually exclusive:

An on-board computer … assigned miles driven to various categories: public roads or private property, in-state or out-of-state roads. That mileage was then tallied and processed by a trusted third party, without ODOT [Oregon Department of Transportation] receiving any location data. Fuel tax rebates based on mileage data were then applied and charges were assessed — again, without the government obtaining individualized location data. 
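A minimal sketch of that accounting flow, as we read Scribner's description, might look like the following. The per-mile fee, gas-tax rate, field names, and example numbers are all illustrative assumptions, not details of Oregon's actual program.

```python
# Illustrative MBUF assessment: the trusted third party sees only mileage
# totals and fuel purchases, never where or when the vehicle was driven.
from dataclasses import dataclass

PER_MILE_FEE = 0.015  # assumed charge per in-state public-road mile, dollars
GAS_TAX_RATE = 0.30   # assumed state gas tax per gallon, rebated under MBUF

@dataclass
class MileageReport:
    chargeable_miles: float   # in-state, public-road miles
    exempt_miles: float       # private-property or out-of-state miles (no charge)
    gallons_purchased: float  # basis for the fuel-tax rebate

def assess_charge(report: MileageReport) -> float:
    """Net amount owed: per-mile fee minus rebate of gas taxes already paid."""
    fee = report.chargeable_miles * PER_MILE_FEE
    rebate = report.gallons_purchased * GAS_TAX_RATE
    return fee - rebate

# Example: 1,000 chargeable miles, 120 exempt miles, 40 gallons purchased.
print(assess_charge(MileageReport(1000, 120, 40)))  # 15.00 - 12.00 = 3.0
```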

Oregon provides just one model. The current patchwork of local, state, and federal gas excise taxes is so inefficient and wasteful that almost any alternative funding framework lawmakers rally around would be superior. 

The time for experimenting is now. Lawmakers should seize the opportunity by thinking outside the gas-tax box and devising more consumer-friendly and cost-effective ways to fund the government roads on which we drive.

Jesse Hathaway (jhathaway@heartland.org) is a research fellow with The Heartland Institute.

Smart Grids: Greener & Easier to Hack

Mark P. Mills- July 14, 2016

The dog days of summer began with a sobering warning about “cyber-jihadists” in a new analysis from the Institute for Critical Infrastructure Technology. Policymakers should anticipate sophisticated anti-American groups developing world-class hacking capabilities. Doubtless old news at the Pentagon’s Cyber Command.

Meanwhile, in a parallel universe, energy policymakers are accelerating green initiatives that will make America’s electrical grids more vulnerable to cyber-attacks.

The problem? “Smarter” and “greener” requires that the grid be more fully connected with the Internet. “Smart” grids depend on Internet “smarts.” And solar and wind energy both require Internet-centric mechanisms to meet the challenge of using episodic supplies to fuel society’s always-on power demand. 

Thus, policies from California to New York, as well as the EPA’s Clean Power Plan, envision adding millions of Internet-connected devices to electric grids, hospitals, and cities. For hackers, this is called vastly expanding the attack surface. In that ‘smarter’ future, the cyber-hacking skills bad actors have honed to break into private and financial data can be directed at breaking into and controlling critical physical infrastructures.

Experts have demonstrated hacks into the entire panoply of devices associated with smart and green power, from smart lights and power meters to the power electronics on solar panels. Cybersecurity has simply not been the priority in green policy domains — even though technical and engineering message boards and publications are filled with examples of cyber-vulnerabilities or weak or non-existent cybersecurity features. With the full flowering of smarter infrastructures, just what are we likely to face?

Imagine it’s a scorching-hot summer day in Los Angeles sometime in the near future and the power in one wing of a hospital goes down, taking with it the air conditioning and all the critical hospital equipment from MRIs to life-support. The CEO gets a text from her facilities manager a few minutes before another wing in a different, larger hospital in the network goes black, too, as the back-up generator fails to start. This is followed by an email from the hacker stating that the power at all the hospitals will be shut down within an hour. The ransom is, say, $10 million in Bitcoins.

Now imagine a different scenario, this time a hot Manhattan evening when several blocks go dark. It’s not a ransom this time but a threat: more is coming. The mayor gets an image on his smartphone of the July 25, 1977, cover of Time Magazine with its headline “Night of Terror.” That 1977 New York City blackout lasted 25 hours and involved thousands of ransacked stores and fires, 4,000 arrests, and $300 million in damages. This time, the mayor also worries that the attacker could be coordinating an array of Orlando-type physical assaults to fuel the chaos.

In the first case, the ransom gets paid and power comes back. In the second scenario, no physical attacks happen, but it takes two days and heroic efforts from ConEd’s crews to restore power by reverting to older manual systems that bypass the ‘smart’ stuff. But the terrorists made their point. And in both cases forensic teams from the Department of Homeland Security, the FBI, and DOD’s Cyber Command descend.

They learn that a sophisticated phishing scam inserted a computer worm, combined with malware loaded earlier in a backdoor hack into a power monitoring device, enabling the remote seizure of local power network controls. The NSA traces the cyber breadcrumbs to anonymous servers in Georgia (the country not the state) or Iran, or China, and … a dead end.

Sound far-fetched? Consider where we are today: ransomware attacks are already a scourge. The American Hospital Association reported that several health care companies and hospitals were hit earlier this year with ransomware (most paid). But, so far, hackers can only shut down a target organization’s access to its own computer system or e-commerce Web site. As for the future, consider that for hackers, today’s Internet-connected cars look just like tomorrow’s connected grids. Researchers have hacked the Ford Escape, Toyota Prius, Nissan Leaf, and — to great fanfare — a Jeep Grand Cherokee. 

Last year’s “cyber-jacking” of a Jeep took full control from ten miles away by exploiting vulnerabilities in the Internet-connected infotainment system to backdoor into the car’s microcomputers that operate the steering and brakes. In the wake of that stunt, Chrysler recalled over a million cars and corrected those particular vulnerabilities. Earlier this year, the FBI and NHTSA issued a general alert regarding vehicle cyber vulnerabilities. Everyone on both sides knows it’s only the tip of the cyber-berg.

In fact, there have already been cases of grid-like cyber-jacking. In 2008, a Polish teenager hacked a city’s light-rail controls and caused a derailment. In 2010 the world learned of a clandestine hack — ostensibly U.S.-Israeli — that inserted the Stuxnet computer worm to damage the electrical infrastructure of Iran’s nuclear facilities. In 2015, hackers breached the operating system of a German steel mill, causing enormous physical damage. And this past December, hackers blacked out Ukraine’s electric grid.

So far there have been no such hacks on U.S. power grids that we know about. And experts testifying before Congress about the Ukraine event credibly asserted that America’s long-haul grids are better protected — at least for now. But that’s not the issue.

Exposure is a problem not so much with long-haul grids but with local grids in cities and communities where all the Internet ‘smarts’ are planned. As green connectivity is accelerated onto those grids, the attack surface expands. Today’s grids are, by Silicon Valley standards, dumb — even if deliberately so. But we already know what adding more Internet connectivity enables. 

The Department of Homeland Security asserts that America’s manufacturing and energy sectors are the top two targets for attacks on cyber-physical systems. And Cisco reports that 70 percent of utility IT security professionals discovered a breach last year, compared with 55 percent in other industries.

Here’s the rub: green grid advocates are pushing policies that will create more Internet-exposure precisely when bad actors and hostile nation states are rapidly escalating their hacking skills.

Policymakers genuflect to the importance of electric security and reliability. But actions speak louder than words. Over the past eight years, federal and state green and smart tech funding totaled $175 billion, one thousand times more than what DOE reports spending on cyber-physical security research.

Does this mean we should avoid bringing Internet-class controls to grids and infrastructures? Hardly. Engineers and entrepreneurs — not bureaucrats — will, ultimately, develop smart and secure systems. But security must be the priority. In every infrastructure throughout our history — from power and water to hospitals, cars and aircraft — policy has, rightly, put safety and security first. With society more dependent on electricity than ever, it’s no time to reverse priorities.

The cyber-jihad report concludes: “Thankfully, even successful [cyber] attacks on the United States Energy sector would not have the same impact as those against Ukraine in 2015, because the grid is much larger and minutely segmented.” That’s true — for now. But in a world where terrorist attacks are all too common, prematurely pushing “green” or “smart” tech onto the grid — leaving cybersecurity on the back burner — will set the conditions for a perfect cyber-storm. 

Mark P. Mills is Senior Fellow at the Manhattan Institute and author of Exposed: How America’s Electric Grids Are Becoming Greener, Smarter—and More Vulnerable.

Social Security's Intergenerational Conundrum

Tejesh Pradhan & James C. Capretta- July 12, 2016

Since its enactment in 1935, Social Security has become one of the most popular and effective federal programs. At the end of 2015, according to the recently published trustees’ report, 60 million Americans received retirement, disability, or survivors’ benefits from the system into which 169 million paid payroll taxes. Social Security provides the majority of cash income for 65 percent of elderly beneficiaries, makes up 90 percent or more of incomes for 36 percent of them, and offers the sole source of retirement income for 24 percent. The poverty rate among senior citizens is less than the poverty rate among working adults. 

Yet, even as Social Security has become an indispensable source of financial wellbeing and security for the retired population, it has also become financially unsustainable in its current form.

Due to increasing lifespans and declining birth rates, there has been an apparently permanent shift in the ratio of working individuals to retiring baby boomers. Further, the program earns limited interest on its investment holdings. The Social Security trust funds have been running deficits since 2010 and are predicted to run out of cash reserves, which stood at $2.8 trillion at the end of April 2016, by 2034. Thereafter, only three-quarters of scheduled benefits can be paid for by the expected tax income.

Restoring solvency to the program will require either higher tax payments from current and future workers, or lower benefit payments — or, more likely, both. However it’s done, those receiving benefits in the future will pay more and get less than beneficiaries do today or did in years past.

The unfairness for future generations can be seen in a simple, stylized example of the average wage earner. We calculated the net real rate of return (NRR) on lifetime payroll tax contributions for an average working male under the Old Age, Survivors and Disability Insurance (OASDI) program. The NRR is the real, average annual rate of return that tax contributions would have to earn to grow to a level sufficient at retirement to finance Social Security benefits for that worker until death.

In our calculations, we use the demographic and economic assumptions from the historical and long-term projections of the 2016 Social Security Trustees Report and supplementary data published by the Social Security Administration. Lifetime contributions and benefits for the retired, working, and future generations are computed using actual statutory tax rates and benefit formulas. The 2016 report shows an actuarial deficit of 2.66 percent of taxable payroll. For simplicity, we assume that solvency is restored to the program by raising the payroll tax rate by that amount beginning in 2018. All amounts are expressed in constant 2016 dollars; amounts in the future were discounted at realized or expected real interest rates.

We calculated the NRR for a typical working male in five different generations, with birth years of 1925, 1950, 1975, 2000, and 2025. Workers are assumed to work every year starting at age 23 until the normal retirement age, to be employed and earn the average wage in the economy each year, and to collect full scheduled benefits for the number of years equal to their period life expectancy at the age they retire. (The same methods and assumptions can be extended to study the effects of Social Security policy on people with various demographic and socioeconomic characteristics.)
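A minimal sketch of this calculation, for a stylized worker with constant real contributions and benefits, is shown below. The dollar amounts, ages, and the bisection solver are illustrative assumptions, not the actual inputs behind the figures that follow.

```python
# Sketch of the NRR: the real rate r at which lifetime contributions,
# compounded to the retirement date, exactly fund the benefit stream
# discounted at that same rate. All inputs are illustrative.

def net_real_return(taxes, benefits, lo=-0.05, hi=0.10, tol=1e-8):
    def fund_minus_need(r):
        n = len(taxes)
        # contributions compounded forward to the retirement date
        fund = sum(t * (1 + r) ** (n - i) for i, t in enumerate(taxes))
        # benefits discounted back to the retirement date
        need = sum(b / (1 + r) ** (j + 1) for j, b in enumerate(benefits))
        return fund - need

    # fund_minus_need is increasing in r, so bisect for its root
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fund_minus_need(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Stylized worker: 44 working years (ages 23-66), 17 years of benefits,
# constant real amounts in 2016 dollars (assumed figures).
taxes = [6_000] * 44
benefits = [24_000] * 17
print(f"NRR = {net_real_return(taxes, benefits):.2%}")
```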

The following figure shows the expected present value (at the normal retirement age) of payroll taxes paid over employed years and benefits received in retirement by average hypothetical workers.                       

Figure 1: Net Real Returns to Lifetime Contributions to Social Security (in 2016 dollars)

The earliest generations of Social Security beneficiaries enjoyed the highest rate of return. A worker born in 1925 who earned the average wage would have gotten a 4.8 percent return on their payroll taxes. The rate of return dropped to 2.7 percent for those born in 1950, and to 1.7 percent for those born in 1975. For those born in 2000 or 2025, the program is expected to provide a very low rate of return — less than 0.25 percent in both cases.

The trend is clear: over several decades, earlier cohorts of beneficiaries received benefits with an implied rate of return that would be viewed as sufficient for most retirement investments. But as changes have been made to the program, and the ratio of workers to retirees has declined, the trend has been toward higher taxes and benefits but a lower implied rate of return on lifetime payroll contributions.

The falling net rate of return for those entering the program in this century is a function of Social Security’s pay-as-you-go structure. Current workers pay the benefits for current retirees. That works well as long as the ratio of workers to retirees remains constant or rises. But when it falls — with declining birth rates and longer lives for retirees — it’s not politically feasible to take back benefits from those already in retirement. The only solution is to impose higher taxes and lower benefits on current and future workers. 

This is Social Security’s intergenerational conundrum: the program is unsustainable in its current form, but it is too late to implement changes that will be fair across generations.  The only way to restore the program to solvency is to impose changes that will lower the program’s returns to current and future workers who already get less relative to what they pay in compared to the program’s early entrants. 

Of course, the rate of return a worker gets from Social Security can vary substantially within the same age cohort, due to various factors. The program also provides substantial redistribution from high to lower earners through its progressive benefit formula and important protections, e.g., for disability, which could not be replaced easily by private savings and insurance. Still, our simulations of the rate of return earned by the average worker are indicative of the overall trend across generations.

While insolvency is not imminent, we must act sooner rather than later to give these workers time to make the necessary adjustments in their retirement plans.   

The goal of Social Security reform should be to ensure that the program can be sustained financially without undermining — and perhaps even improving — the program’s role as a floor of protection for older Americans. That can be done by reducing future benefits most for those with the highest wages; these workers can save more for their own retirement. Social Security’s limited resources could then be focused more on improving the standard of living for older Americans with limited lifetime earnings and thus also limited private savings.

The United States decided many years ago to run Social Security on a pay-as-you-go basis. For most of the program’s history, that decision posed no problems and, in fact, allowed for numerous benefit expansions as the workforce grew more rapidly than the retired population. But demographic shifts over the past half century have opened up a large projected funding gap that cannot be ignored. The gap is sizable, though it can be closed with reasonable program adjustments. The sooner Congress gets serious about taking the necessary steps to solve the problem, the better.

Tejesh Pradhan is a Ph.D. candidate in economics at American University and a Peterson Fiscal Intern at the Ethics and Public Policy Center.  James C. Capretta is a resident fellow at the American Enterprise Institute.

In Tracking Home Ownership, Marriage Matters

Alex J. Pollock & Jay Brinkmann- July 11, 2016

Home ownership has long been considered a key metric of economic well-being in the United States. Thus many are dismayed by the fact that, at 63.5 percent, the 2015 overall home-ownership rate appears to be lower than the 64.3 percent of 1985, a generation ago. But viewed in another, arguably more relevant way, the underlying trend shows that the home-ownership rate is, in fact, increasing, not decreasing.

How so? Key to the trend is the extremely strong relationship between marriage and home ownership — a relationship seldom, if ever, addressed in housing finance discussions. But if you think about it, it’s obvious that home ownership should be higher among married couples than among other households; in fact, it’s remarkably higher.

This relationship holds across all demographic groups. Importantly, it means that changes in the proportion of married vs. not-married households are a major driver of changes in the overall home-ownership rate over time. Home-ownership comparisons among demographic groups are similarly influenced by differences in their respective proportions of married vs. not-married households.

Policy discussions over falling home-ownership rates frequently ignore some critical underlying demographic facts.

The current 63.5 percent American home-ownership rate combines two very different components: married households, with about 78 percent home ownership, and not-married households, with only 43 percent. The married rate is thus 1.8 times the not-married rate — obviously a big difference. (As we have organized the data, these two categories comprise all households: “married” means married with spouse present or widowed; “not-married” means never married, divorced, separated, or spouse absent.)

Table 1 contrasts home ownership by married vs. not-married households, showing how these home-ownership rates have changed since 1985.

One is immediately struck by a seeming paradox:

     - The home-ownership rate for married households has gone up by 2.3 percentage points.

     - The home-ownership rate for not-married households has gone up even more, by 7.4 percentage points.

     - But the overall home-ownership rate has gone down.

How is this possible? The answer is that the overall home-ownership rate has fallen because the percentage of not-married households has dramatically increased over these three decades. Correspondingly, married households (which have a higher home-ownership rate) are now a smaller proportion of the total. Still, home ownership rose for both components, so analyzing the two parts separately gives a truer picture of the underlying rising trend.

The dramatic shift in household mix is shown in Table 2.

Table 3 shows that the strong contrast between married and not-married home-ownership rates, and the related changes from 1985 to 2015, hold for each demographic group we examined.

That is, home ownership for both married and not-married households went up significantly for all four demographic groups from 1985 to 2015.

Moreover, overall home ownership also increased for three of these four groups. Home ownership for black households, meanwhile, fell by 1.5 percentage points, though home ownership for both married and not-married components rose for this demographic as well. (This is consistent with that group’s showing the biggest shift from married to not-married households.)  

In another seeming paradox, Hispanic home-ownership rates rose, while still contributing to a reduction in the overall U.S. rate. The reason for this is that their share of the population more than doubled, increasing the weight of their relatively high share of not-married households.

The trends by group in the mix of married vs. not-married households are shown in Table 4.

What would the U.S. home-ownership rate be if the proportions of married and not-married households were the same as in 1985? Applying the 2015 home-ownership rates for married and not-married households to the mix that existed in 1985 results in a pro forma U.S. home-ownership rate of 68.1 percent. This would be significantly greater than both the 1985 level of 64.3 percent and the 2015 measured level of 63.5 percent.
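
As a rough check on that calculation: the overall rate is a mix-weighted average of the two component rates, so re-weighting is one line of arithmetic. The sketch below uses the component rates quoted above; the 1985 married share is not quoted in the text, so the roughly 72 percent used here is simply the value implied by the reported 68.1 percent result:

```python
# Overall home-ownership rate as a weighted average of component rates.

def overall_rate(married_share, married_rate, not_married_rate):
    """Mix-weighted overall home-ownership rate."""
    return married_share * married_rate + (1 - married_share) * not_married_rate

married_rate_2015 = 0.78      # reported 2015 rate, married households
not_married_rate_2015 = 0.43  # reported 2015 rate, not-married households
married_share_1985 = 0.717    # inferred from the reported 68.1% pro forma rate

pro_forma = overall_rate(married_share_1985, married_rate_2015, not_married_rate_2015)
print(f"pro forma rate with 1985 mix: {pro_forma:.1%}")  # ~68.1%, vs. 63.5% measured in 2015
```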

Adjusting for the changing mix of married vs. not-married households gives policymakers a better understanding of the underlying trends. This improved understanding is particularly important when weaker credit standards are being proposed as a government solution to the lower overall home-ownership rate.

To make sense of home-ownership rates, we have to consider changes in the mix of married vs. not-married households. And these changes have been dramatic over the last 30 years.

Alex J. Pollock is a distinguished senior fellow at the R Street Institute in Washington, DC. Jay Brinkmann is the retired Chief Economist of the Mortgage Bankers Association.

San Francisco's Tech Tax Is Not a Solution to Homelessness

Tom Giovanetti- July 8, 2016

If higher taxes were the cure for homelessness, California — and San Francisco, in particular — would have solved its homeless problem years ago. 

In reality, San Francisco — one of the highest tax jurisdictions in the country and a poster child for progressive governance — is overrun with homelessness.

It’s not a new problem; anyone who has been to San Francisco in the last couple of decades has seen it. And while the city has taken steps to move the homeless into alternate facilities, it appears that more continually arrive to take their place, keeping the homeless population unacceptably high.

What can be done? Well, a couple of members of San Francisco’s Board of Supervisors have decided that higher taxes are the solution to the problem of homelessness.

The “tech tax,” as it’s being called, would be an additional 1.5 percent payroll tax assessed against tech companies. Because economists generally agree that payroll taxes come out of compensation, this is really a tax on tech workers, not tech companies. 

That’s right. Those who govern the City of High Taxes by the Bay have decided that even higher taxes — targeted at their most productive residents, the tech community — will solve the problem of homelessness. 

This proposal tells us several things.

First, California is still in denial on taxes. Businesses and residents are fleeing the state’s high-tax, high-regulation climate, and yet those who govern continue to pile on more of both.

Second, San Francisco continues to misdiagnose the causes of its homeless problem. Homelessness is not caused by other people’s wealth, and it is not caused by low taxes. All major cities — in a variety of climates and economies — have to manage homeless populations, and nearly all seem to do a better job of it than San Francisco.

Third, if the tax is enacted, the revenue will likely end up paying for city-employee pensions and other general obligations rather than programs that specifically address homelessness.

The tech tax is a revenue grab against an easy target — not a real solution. 

And it won’t work. Here’s a prediction: absent more substantive policy changes, the homeless problem in San Francisco will be roughly the same five years from now, with or without a tech tax. 

Businesses generally make rational decisions, based on empirical data, about where to invest and where to locate their enterprises. Governments generally make irrational decisions, based on wrong diagnoses and false assumptions. San Francisco’s tech tax is just the latest example.

Tom Giovanetti is president of the Institute for Policy Innovation (IPI) in Dallas.

Mired by Cost, Obamacare Fails to Deliver

Alan Daley- July 8, 2016

Affordable Care Act (ACA) enrollment has plateaued at about 11 million. While the Congressional Budget Office (CBO) foresaw this, non-payment swings are proving more severe than expected. The result? The ACA is unable to deliver on its promise of full coverage at affordable prices.

ACA enrollment fluctuates by almost 20 percent during the year, and every year the process is the same. Early in the year, a lot of people sign up (12.7 million in 2016), but many of them never pay the first premium (1.6 million failed to pay in 2016) and are removed from enrollment. Later in the year, enrollees gradually drop out (another 1.1 million are expected to do so in 2016).
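
Those figures are enough to reconstruct the size of the swing, as the quick sketch below shows:

```python
# Reconstructing the "almost 20 percent" fluctuation from the 2016
# figures cited above (all numbers in millions of enrollees).
signed_up = 12.7         # initial 2016 sign-ups
never_paid = 1.6         # removed for non-payment of the first premium
expected_dropouts = 1.1  # projected attrition later in the year

year_end = signed_up - never_paid - expected_dropouts
swing = (signed_up - year_end) / signed_up
print(f"year-end enrollment: {year_end:.1f}M; peak-to-trough swing: {swing:.0%}")
# -> about 10.0M enrollees and a swing of roughly 21 percent
```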

In 2014, the CBO estimated that subsidized enrollment would plateau at 19 million persons in 2016 and that unsubsidized enrollment would plateau at 6 million in 2017. As for this year’s enrollment: “Only about 40 percent of the eligible have so far signed up and the take-up rate is far worse the higher the incomes are.”

Enrollment fell short of expectations for several reasons, chief of which is cost. The young and healthy often judge the cost to be higher than the benefit, which leaves the risk pool sicker and older than insurers need it to be in order to lower premiums. For everyone else, facing premium and out-of-pocket costs without a subsidy deters enrollment, especially for those whose income exceeds four times the poverty level (i.e., those who make $80,000 a year or more). For people with lower incomes, subsidies make health coverage somewhat more affordable.

The subsidies are intended to leave families paying no more than a “cap” percentage of income for premium and out-of-pocket costs. For the lowest income tier, the cap is 2 percent of income; for the highest subsidy tier, the cap is 9.5 percent. As a result, the subsidies can be large: “CBO and JCT [Joint Committee on Taxation] project that the average subsidy will be $4,410 in 2014, that it will decline to $4,250 in 2015, and that it will then rise each year to reach $7,170 in 2024.” Assuming linear growth, that means the average subsidy will be $4,900 per enrollee in 2017.
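
The $4,900 figure follows from straight-line interpolation between the quoted 2015 and 2024 projections:

```python
# Linear interpolation of the CBO/JCT average-subsidy projections.
subsidy_2015 = 4250.0  # projected average subsidy in 2015
subsidy_2024 = 7170.0  # projected average subsidy in 2024
annual_step = (subsidy_2024 - subsidy_2015) / (2024 - 2015)  # ~$324 per year

subsidy_2017 = subsidy_2015 + annual_step * (2017 - 2015)
print(f"projected 2017 average subsidy: ${subsidy_2017:,.0f}")  # ~$4,899
```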

Many insurers have been leaving the ACA marketplaces due to intolerable financial performance. In 2015, some insurers lost as much as 11 percent of revenues. UnitedHealth, Premera, Aetna, Humana, and 13 co-ops have all withdrawn from at least some state markets.

Other insurers are suing the federal government to make good on its promise of so-called risk corridor payments, which were intended to offset higher-than-anticipated losses. The federal government refuses to make the payments because it has not collected a matching amount of excess profit — the only source for risk corridor payments. As that stalemate persists, more insurers will drop out of marketplaces, leaving fewer choices and higher prices for consumers.

Next year’s anticipated premium increases will be as much as 26 percent, bearing heavily on everyone, but especially on those who don’t qualify for subsidies. And, as it stands, 32 million people still lack health insurance.

The situation is clear: the ACA is stuck on an enrollment plateau because of cost. What’s not clear is what can be done about it. The cost of health care is a critical problem that, ironically enough, the Affordable Care Act is ill-equipped to address. As a result, the impetus behind the law — full health-care coverage — is not something it can achieve any time soon.

Alan Daley writes for The American Consumer Institute, a nonprofit educational and research organization. For more information about the Institute, visit www.theamericanconsumer.org.
