There's an old parlor game called "telephone" in which someone says something to one person who then repeats it to the next person and so on. By the time the message gets back to the originator, it is garbled beyond all comprehension.
Robert Reich's new book Saving Capitalism reads like the last message in a game of telephone on the topic of broadband policy. He starts with distorted perceptions of the U.S. broadband market — the endnotes confirm the role of Harvard professor Susan Crawford, who stridently opposes for-profit broadband, in Reich's thinking — and then twists them even more to promote his campaign for a Bernie Sanders-esque "democratic socialism."
Let's start with broadband speed and prices. Crawford argued in Captive Audience — a tract advocating government-owned broadband networks — that U.S. speeds ranked only 22nd-fastest in the world and that our prices were among the highest. In fact, we were in the top 10 for speed, and our lower-end services were among the world's most affordable. But when Reich takes his turn in the game of telephone, he goes even further, lamenting that "the United States ha[s] some of the highest broadband prices among advanced nations, and the slowest speeds."
This simply isn't true.
Reich complains about the price of "high-speed" broadband, ignoring the fact that the United States has remarkably low entry-level prices; the International Telecommunication Union, a United Nations agency, consistently ranks the United States among the top three countries for entry-level broadband prices. This progressive pricing — whereby wealthier Americans pay more for the fastest speeds, essentially subsidizing cheaper, entry-level broadband — is something you would expect a progressive like Reich to embrace.
Reich is even more wrong on speed. His claim that we have "the slowest speeds" has no basis in fact, and the sources he cites don't support it. Even the more modest claim that the United States has relatively slow speeds compared with other advanced nations isn't true. Research that my colleagues and I have conducted (along with many other studies) shows that the United States does remarkably well on broadband speed, especially considering its large size and low population density (which greatly increase the cost of building networks). If individual states were ranked, instead of the U.S. as a whole, they would take six out of the top ten spots.
Overall, we are in the middle of the pack among advanced nations. There's no reason to rest on our laurels, but the notion that our broadband policy is off track simply is not borne out by the facts.
In his campaign to expand government and shrink business, Reich also asserts that we are facing a "cable monopoly" that justifies government entering the broadband business. Reich appears to have bought Crawford's warning of a "looming cable monopoly" hook, line, and sinker, going so far as to drop the "looming." He asserts that cable operators "exemplify the new monopolists" of the capitalism he seeks to save.
Crawford's "looming monopoly" proclamation was a prediction, and one that turned out to be wrong. Between Google Fiber looking more and more like a serious business every day and the aggressive response from telephone companies investing in their own fiber networks, the last five years of broadband buildout have been a success story for competition.
Reich is also incorrect that broadband profits are high. Back in 2013, investment analyst Craig Moffett pointed out that the gross margins on cable broadband are considerably higher (around 90 percent) than on the traditional cable video offering (where they are more like 60 percent). Crawford and her acolytes have had a field day with this "90 percent profit margin," willfully ignoring that gross margin is a narrow financial metric that disregards all the investment and expenditures necessary before a company is able to offer a product. Moffett's point was a limited one: that cable TV is costlier to offer than broadband when operators have to purchase video rights. Looking at more appropriate metrics, return on cable investment is more like 4 percent to 8 percent, about the same as the average profit margin of all U.S. companies. That is hardly evidence of unconstrained monopoly power in action.
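To see how the two metrics can diverge so sharply, consider a purely hypothetical sketch in Python. The figures below are invented for illustration and are not actual cable-industry numbers:

```python
# Hypothetical figures for illustration only -- not actual cable-industry data.
revenue = 100.0            # annual broadband revenue
direct_costs = 10.0        # direct cost of delivering the service
other_expenses = 82.0      # depreciation, marketing, support, administration
invested_capital = 160.0   # capital sunk into building the network

# Gross margin ignores everything except the direct cost of the service...
gross_margin = (revenue - direct_costs) / revenue

# ...while return on investment accounts for all expenses and the capital base.
operating_profit = revenue - direct_costs - other_expenses
return_on_investment = operating_profit / invested_capital

print(f"Gross margin: {gross_margin:.0%}")                  # 90%
print(f"Return on investment: {return_on_investment:.1%}")  # 5.0%
```

The same business can thus show a 90 percent gross margin and a mid-single-digit return on investment at the same time; the two numbers simply answer different questions.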
But by the time Reich gets hold of this picture, the industry is portrayed as an encrusted monopoly sitting back and harvesting profits from Americans' wallets while doing nothing to improve the country's networks. In Reich's words, cable companies have "tubes in the ground" that are "slower" than they should be. This is a strange stance, especially when, by the FCC's measure, broadband speeds more than tripled from 2011 to 2014.
Nit-picking over things like gross margins may seem pedantic, but for those of us trying to figure out good policy to promote the growth of efficient, high-speed broadband networks, it is incredibly frustrating to watch these issues get systematically distorted. In a time when everyone loves to hate their cable company and selling stories of inequality is big business, otherwise legitimate debates quickly spin out of control. Reich's faulty analysis of broadband policy is the latest offender, where facts simply no longer seem to matter.
Doug Brake is a telecommunications policy analyst at the Information Technology and Innovation Foundation, a think tank focusing on the intersection of technological innovation and public policy. Follow him on Twitter: @DBrakeITIF.
In a recent Gallup poll, Americans named the government as the top problem facing our nation for the second year in a row — government beat out the economy, immigration, unemployment, and even terrorism. The public is frustrated with everyone from President Obama to members of Congress to politicians in general.
This isn't surprising, considering that 2015 was a banner year for government interference in Americans' lives without their assent: the Obama administration issued 39 regulations for every law approved by Congress. And that ratio wasn't a fluke. The average ratio of regulations to laws under George W. Bush was 17:1, less than half of the Obama administration's 35:1 average. Federal regulations now cost the economy about $1.9 trillion annually.
The president hasn't been shy about his willingness to use his "pen and phone" when Congress won't pass the laws he wants. But formal regulations and executive orders are just part of his strategy. In addition, as I detail in a new report for the Competitive Enterprise Institute, his administration has often worked through what I call "dark matter": proclamations in the form of guidance documents, memoranda, bulletins, manuals, circulars, and even blog posts.
Under the Administrative Procedure Act, most federal regulatory agencies must follow guidelines for proposing and establishing regulations. This includes public notice of a proposed rulemaking and a period for the public to submit comments. The APA also created a process for federal courts to review these agencies' actions and decisions.
But when the executive branch uses dark matter, federal agencies circumvent Congress, the American people, the courts, and essentially all oversight. Such proclamations are not supposed to be legally binding, but if you're a small-business person awaiting a permit or approval, they're hard to ignore — assuming you can find where they're published.
According to White House data, there have been 222 executive orders since 2008, but 472 executive memoranda ranging from the mundane to the weighty. Other dark-matter items, including guidance documents and other notices from the hundreds of federal agencies, are much more slippery. No one even knows how many federal regulatory agencies there are, who works for them, or what they cost American taxpayers. Agencies' public presentations of guidance and their effects are all over the place.
Agencies have voluntarily acknowledged at least 580 "significant" guidance memos — those with impacts of at least $100 million annually. Perhaps the most shocking volume of dark matter comes in the form of "public notices" that have appeared in the Federal Register: a whopping 524,251 since 1994, dwarfing the 777 executive orders published during that time. Most public notices may be trivial, but policymakers really don't know everything that is in them. And not all guidance documents or other dark matter even appear in the Federal Register.
Congress needs to get a handle on the extent of regulatory dark matter and what it costs us. At the root of the problem of regulatory overreach is Congress' dereliction of its constitutional legislative power. To address this, Congress should require a vote on all costly and controversial agency rules — including dark matter — before they become binding on you and me.
At the very least, Congress, and future presidents, need to assert that all decrees by federal agencies matter, not just those acknowledged and published as real "rules." Dark matter needs to receive at least the same administrative scrutiny as ordinary rules — which themselves could use greater oversight.
The Constitution isn't perfect, but it's better than a pen and phone.
Wayne Crews is vice president for policy and director of technology studies at the Competitive Enterprise Institute, and author of the new report "Mapping Washington's Lawlessness: A Preliminary Inventory of ‘Regulatory Dark Matter.'"
According to the "Ferguson Effect" theory, police became demoralized and timid following the death of Michael Brown — and the subsequent protests — in August of 2014. Other incidents, such as that involving Freddie Gray in Baltimore in April of 2015, only worsened matters. Crime rose.
As I pointed out last September, this theory makes a prediction: The protests were heavily focused on race, so cities with higher black populations should have had greater increases in crime. Presumably, police worry less about setting off a public-relations nightmare when they interact with white civilians.
I tentatively confirmed this using data FiveThirtyEight had collected from 60 cities. Those numbers covered most of 2015 and the same period in 2014, and they suggested a 16 percent increase in homicides — an increase that was indeed somewhat concentrated in cities with high black populations. The effect wasn't dramatic, but it was there.
A new academic study, using data from 81 cities and focusing on the year before and after Ferguson, argues against the existence of a widespread Ferguson Effect — but it also confirms my findings on race and homicide. From the abstract (emphasis mine):
No evidence was found to support a systematic post-Ferguson change in overall, violent, and property crime trends; however, the disaggregated analyses revealed that robbery rates, declining before Ferguson, increased in the months after Ferguson. Also, there was much greater variation in crime trends in the post-Ferguson era, and select cities did experience increases in homicide. Overall, any Ferguson Effect is constrained largely to cities with historically high levels of violence, a large composition of black residents, and socioeconomic disadvantages.
Cities with declining homicide trends were, on average, about 12 percent black, just below the national figure. Those that experienced flat trends or modest increases were 18 percent black. And the cities with the biggest increases were 35 percent black.
The authors themselves are hesitant to tie this to changes in policing, though, offering a different way of viewing the correlation:
What is important about these cities is that they had much higher crime rates before Ferguson, which in turn may have primed them for increases in crime. Cities with post-Ferguson increases in crime tended to have a higher proportion of black residents, lower socioeconomic status, and more police per capita—important macro-level correlates of crime rates (Pratt & Cullen, 2005; Sampson, 2012). Simply put, these other predictors of crime rates lead to questions that may inhibit any ability to attribute crime increases specifically to the Ferguson Effect in these cities.
The biggest question is this: If these cities were "primed ... for increases in crime," what besides the Ferguson Effect actually set off the increases? At the very least, it's an odd coincidence that homicide rose specifically in heavily black cities after Ferguson.
I should also note that some are troubled by the implications of this theory. Indeed, you could use it to argue for censorship, or for ignoring police abuses. I'm not particularly tempted by this line of reasoning — the right to protest is sacrosanct, and we should always try to find the truth, even when it makes us uncomfortable.
That includes the truth about cops. It also includes the truth about anti-police sentiment.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The State Department is reluctant to share information. Over the last few years, we’ve seen incomplete and painfully slow compliance with congressional subpoenas and stubborn refusals to answer questions from journalists. The department has even blocked the release of documents sought under the Freedom of Information Act (FOIA).
But State’s unreliable commitment to transparency extends beyond protecting its reputation or playing partisan games. I reached this conclusion during a recent effort to research historical employment trends at the department.
On the surface, State appears very transparent. In the appendix to its annual Congressional Budget Justification, it publishes lots of verbiage, numbers, and details on its resources, including Civil Service and Foreign Service employees. However, these details are spread throughout the document, with information on different bureaus and offices contained in different sections, making it difficult to tell whether the data are complete. This is key, because in recent years, the State Department has not publicly provided details on the exact total number of its employees.
The closest tally is provided in Appendix 1:
The Department operates more than 275 embassies, consulates, and other posts worldwide staffed by nearly 46,000 Foreign Service Nationals and almost 13,700 Foreign Service employees. ... A Civil Service corps of roughly 11,000 employees provides continuity and expertise in performing all aspects of the Department’s mission.
While informative, the data are rounded and presented in general terms. This also occurs in the State Department’s FY 2015 Agency Financial Report, which provides only a bar graph of State Department employees in the thousands with no specific totals.
Seeking an alternative source, I went to the U.S. Office of Personnel Management, which maintains the “FedScope” database covering all civilian employment in the federal government. But the FedScope data for the Department of State only raised more questions. For instance, although employment data for other cabinet-level agencies were available, FedScope reported no State Department employment data for September 2015. For September 2014, FedScope reported that State had only 23 employees in foreign countries, which is obviously incorrect. Strangely, this inaccuracy recurred in every year back to 2006.
When asked about these discrepancies, OPM responded: “The reason for the drop in the numbers is because the State Department stopped providing data on Foreign Service Personnel in March 2006. The State Department has all together stopped providing us with any data since June 2015.”
It was not always this way. The State Department’s FY 2005 Performance and Accountability Report provides detailed data on the total employment of Foreign Service Nationals, Foreign Service, and Civil Service. Then, State reported that it had 28,294 full-time permanent employees, including 8,964 Foreign Service Nationals, 11,238 Foreign Service, and 8,092 Civil Service employees.
An inquiry to the State Department asking for similarly specific data for more recent years yielded a puzzling and frustrating response. The department provided more information for 2015 when I asked, though not at the level of detail in the 2005 report. The same data for other years were available only through a FOIA request.
Capitol Hill staffers report that the State Department has been similarly reluctant to provide them with this type of employment and personnel data. To my knowledge, State has offered no reason for its decision to stop publishing overall employment data in its public documents or, shockingly, to stop supplying it to other parts of the federal government whose mission and responsibilities involve federal employment matters.
Privacy and security concerns are not plausible. After all, the data are simply aggregates. Individual information is not provided. Moreover, other parts of the federal government involved in national security, including the Department of Defense, provide employment data to OPM.
It is possible, although hard to believe, that the State Department actually does not possess this information. Any institution with a payroll should be able to provide aggregate employment numbers for various broad employment categories. If the State Department does not possess or cannot provide this basic information, it would indicate disturbing managerial incompetence.
Most likely the State Department simply does not want to share this information. This secretiveness seems to be a cultural problem at the State Department.
If this is true, it raises troubling concerns. First, this opacity impedes the efforts of Congress to fulfill its oversight responsibilities, particularly efforts to reform, restructure, or modernize the State Department.
More fundamentally, however, it undermines trust. Why should Congress or the American public trust that the State Department is being forthright on controversial or politically charged matters like the Iran deal, Benghazi, or climate negotiations when it won’t even share something as common, basic, and non-political as employment data?
Brett Schaefer is the Heritage Foundation’s Jay Kingham Senior Research Fellow in International Regulatory Affairs.
Presidential candidates have talked up college as an important pathway to the middle class. Sen. Bernie Sanders has even called for making public colleges and universities tuition-free. But these ideas ignore a harsh reality in the American higher-education system: Only one-third of college enrollees graduate within six years and then get jobs requiring college degrees.
That is the conclusion of my new report in the Manhattan Institute's Issues 2016 series. Only 59 percent of four-year college students graduate within six years. Those who graduate face an additional hurdle — only 56 percent of recent college graduates work in a job that requires a college degree (though the figure for all college graduates is 67 percent, suggesting some underemployed graduates move up later in their careers).
Multiplied together, these numbers suggest that only 33 percent of students who enter college emerge with both a degree within six years and a relevant job soon after graduation. This is the true crisis in higher education, and one policymakers must address before they offer up more taxpayer money to colleges.
Encouraging more students to attend college may worsen the already-poor graduation rate. The new students attracted by free college are likely to attend institutions where graduation rates are lower, such as community colleges (where 40 percent graduate) or four-year colleges with open enrollment (where just 34 percent do).
Failing to graduate is a major cause of financial hardship — student-loan delinquency rates among dropouts are four times higher than among graduates, despite dropouts' having less debt. Additionally, the years that dropouts spend in college are years in which they cannot pursue other career paths, such as apprenticeships.
Those who do graduate face a weak labor market. Recent college graduates have no guarantee of landing a job that requires a college degree. This is not a death sentence — some of the jobs that do not require degrees pay quite well. But if students do not need college degrees to obtain the jobs they'll end up in, why invest in expensive diplomas in the first place?
Field of study is strongly associated with a graduate's likelihood of underemployment. Only 20 percent of engineering students are underemployed, compared with 63 percent of leisure and hospitality students. Currently, colleges have no incentive to guide students toward the career paths with the greatest chances of success. They get their tuition dollars regardless of a degree's economic value.
Before shuffling more students through the higher-education system, policymakers should consider some basic economics. When the supply of college graduates outpaces the number of jobs requiring a degree, the investment value of college falls. This is what has happened in European countries with free college systems. A recent college graduate in the United States can expect to earn 65 percent more than someone with only a high-school degree. In Denmark, which has free college, the comparable earnings premium is only 12 percent.
Reforming higher education in America must start with cleaning up the messes that already exist. Colleges must face accountability measures to incentivize them to raise graduation rates and improve career outcomes for students. Lowering barriers to college enrollment without addressing these other issues would fail the students who need their colleges to do better, and fail the taxpayers who would be left holding the bill.
Preston Cooper is a policy analyst at the Manhattan Institute. You can follow him on Twitter here.
The Congressional Budget Office's recent budget update revealed a dramatic deterioration in the federal government's finances. The cumulative deficit over the next ten years, through 2025, is now estimated to add up to $8.5 trillion. Just last August, the number was $7 trillion.
The CBO itself notes that "about half of the $1.5 trillion increase stems from the effects of laws enacted since August." In other words, this is the work of the 114th Congress, in which Republicans hold the majority in both chambers for the first time in the Obama presidency.
Republican apologists assert that Congress' powers to shrink the government are limited as long as President Obama is in office. This is true. So, let's see where Congress can go from here.
First, the increase in the deficit is almost entirely due to lower tax revenues, not increased spending. In this respect, the Republican-majority Congress has held the line. The federal government will spend about $48.9 trillion over the next ten years and take in about $40.4 trillion in revenues.
However, there is a lot of gimmickry written into the recent tax cuts — revenue will probably be even lower than the CBO's baseline scenario suggests. (Fortunately, the CBO also estimates alternative scenarios that include these more realistic prospects.) Many of the tax breaks last only one or two years, and as a result, they have little impact on the ten-year period. However, in reality, these tax breaks tend to be repeatedly extended.
For example, Congress imposed moratoria on three Obamacare taxes: an excise fee on health insurance, an excise tax on medical devices, and the so-called "Cadillac" tax on expensive employer-based health plans. According to the letter of the law, these three moratoria will add about $40 billion to the deficit. However, if these taxes are deferred through 2025, the cumulative deficit will grow another $239 billion.
So, the question is: Will the Republican-majority Congress follow its tax cuts with spending cuts? Last April, it signaled it would, via a budget resolution passed by both the House and the Senate. Congress had not passed a budget resolution in many years, so this was an important step.
The budget resolution does not actually control any spending. But the CBO has concluded that if the resolution's provisions were enacted as legislation, they would reduce the deficit by roughly $5 trillion through 2025 relative to the current-law baseline. Importantly, the fiscal improvement would come almost entirely from cutting spending.
The budget resolution itself was overshadowed by the attention paid to its so-called "reconciliation" instructions for the repeal of Obamacare.
The result of that language was a bill passed in December that would have repealed Obamacare if the president had signed it. Republican leadership signaled this was a big win, even holding an "enrollment ceremony" at which Speaker Ryan signed the bill in quasi-presidential fashion.
CBO estimated that Obamacare repeal would reduce the deficit by roughly half a trillion dollars over the next ten years. Almost half that improvement is due to economic growth that would increase tax revenues. This would certainly be a positive development. However, even if it won the president's signature, the repeal of Obamacare would achieve less than 10 percent of the cumulative deficit reduction targeted in the budget resolution.
The Republican majority in Congress has shown it can cut taxes. Whether it can cut spending in line with its promises will have to wait until the next president takes office.
John R. Graham is a senior fellow at Independent Institute and a senior fellow at the National Center for Policy Analysis.
It's fashionable to slam 401(k)s — and similar tax breaks for retirement savings — as a handout to the rich. People with higher incomes pay higher tax rates, and therefore benefit more from such perks. If you're in the 35 percent tax bracket, for example, you save 35 cents for every dollar you shelter from the IRS; if you're in the 10 percent bracket, you save just 10 cents.
Not so fast, says Peter J. Brady, an economist for the Investment Company Institute (the trade association for American investment funds) who has published a free e-book on the retirement system. The critics' argument misses two key features of 401(k)s, Brady writes: The retirement plans merely defer taxation, rather than eliminating it entirely; and they're part of a much bigger retirement system that includes Social Security. On the whole, Brady finds, the retirement system is progressive.
Brady analyzes retirement subsidies by simulating the lives of six imaginary Americans. Each earns a different annual salary, ranging from $21,000 to $243,000. All of them, however, aim to replace 94 percent of their earnings in retirement, so they save money in a 401(k) as needed to supplement their Social Security. (The richest investor manages to replace just 85 percent, owing to the contribution limit.) To see how these workers would have fared without government retirement benefits, Brady runs alternative scenarios where retirement savings are placed in taxable investment accounts instead.
401(k)s actually have three different effects. First, contributions reduce the saver's taxable income in the year they're made — the effect that critics fixate on. But second, interest that accumulates is not immediately subject to the income tax. (Brady argues that this actually equalizes the incentive to save between rich and poor: Otherwise, the income tax provides a disincentive for the rich to save. With a 25 percent income tax, a 6 percent rate of return becomes a 4.5 percent rate of return.) And third, distributions during retirement are taxed, which disproportionately affects the rich. Brady's poorest simulated investor pays no income tax at all in retirement.
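Brady's deferral argument can be illustrated with a toy simulation. This is a simplified sketch under assumptions of my own (a single contribution, a flat 25 percent tax rate at all times, a steady 6 percent annual return, a 30-year horizon), not Brady's actual model:

```python
def taxable_account(contribution, rate, tax, years):
    # The contribution comes from after-tax income, and each year's
    # return is taxed as it accrues.
    balance = contribution * (1 - tax)
    for _ in range(years):
        balance *= 1 + rate * (1 - tax)  # a 6% return becomes 4.5% after tax
    return balance

def deferred_account(contribution, rate, tax, years):
    # The pre-tax contribution compounds untaxed; the entire
    # withdrawal is taxed once, at the end.
    balance = contribution
    for _ in range(years):
        balance *= 1 + rate
    return balance * (1 - tax)

print(round(taxable_account(1000, 0.06, 0.25, 30)))   # ~2809
print(round(deferred_account(1000, 0.06, 0.25, 30)))  # ~4308
```

Under these assumptions the deferred account ends up roughly 50 percent larger, even though the saver still pays tax on every dollar withdrawn: deferral changes when taxation happens, not whether it happens.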
When all the effects are considered, the apparent unfairness of 401(k)s is diminished, though hardly eliminated.
Interestingly, in Brady's simulations, the rich benefit more because they save a lot more money, not because of their higher tax rates. Social Security replaces 67 percent of income for the poorest worker, but only 17 percent for the richest, so tax deferral plays a much larger role in the retirement plans of wealthier workers.
Which brings us to Brady's other key point: Tax deferral is just one part of the retirement system. When Social Security is brought into the analysis, it's clear that the poor actually gain the most, at least relative to their incomes.
Building on this observation, Brady cautions against worrying too much about the progressivity of any one component of our tax-and-transfer system. It's fine for a program to be regressive if that's what's necessary to achieve an important goal, so long as the system as a whole is satisfactorily progressive. (This is reminiscent of a point often made in an even broader context by the liberal Citizens for Tax Justice: While federal income taxes are quite progressive, state tax systems are often regressive. The two act in tandem to create a system that's not as progressive as you might think.)
At an event celebrating the book's debut, William Gale of the Brookings Institution laid out several criticisms of Brady's work, two of which are especially important. First, Brady's simulations overstate the system's progressivity in some ways — for example, they assume all workers save as much as they need to (up to the contribution limit) to replace 94 percent of their income, but in reality the poor and middle class are more likely to undersave or to lack benefits like 401(k)s. And second, progressivity is a rather weak demand to place on a retirement system; the true question might be whether the system is progressive enough.
Regardless, Brady lands some heavy blows upon critics of the retirement system. Apparently, it's not as bad as some thought. His book — which covers a lot of ground not explored above — is worth a close look.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
Editor's note: This article is adapted from one that ran in Ag Innovation Magazine, a publication of the Farm Equipment Manufacturers Association.
There is no debating that Obamacare contains benefits and costs and that the law continues to sharply divide the country. Its supporters say that it is working, while its opponents argue otherwise and continue pressing to undo the law. In January, Congress sent a bill to the president's desk that would have repealed large swaths of the law, including most of its taxes and much of its spending; as expected, the president vetoed the bill. Let's take a closer look at how Obamacare is working so far.
Obamacare's Impact on Coverage
The numbers have increased: Obamacare supporters primarily and repeatedly cite the decrease in the number of people without health insurance. The Obama administration estimates that 12.6 million people gained insurance coverage from 2010 to 2014. However, several million people became uninsured during the financial crash of 2008-2009. Part of the post-2010 increase simply reflects a return to pre-recession coverage levels as the economy has slowly improved. Using a 2008 starting point reveals that the number of uninsured fell by only 6.7 million people through 2014.
Most of the increase is Medicaid: The net gains in health insurance have come mainly through Medicaid and not private insurance, since the number of people covered by employer-sponsored insurance has somewhat declined. Medicaid is a program for lower-income people and is plagued with problems, including poor access to care for enrollees. A recent study found that enrollees receive only 20 to 40 cents of benefit for each dollar that Medicaid spends on their behalf.
Exchanges are doing poorly: Overall enrollments on the Obamacare exchanges are far lower than the government had previously forecast. The only people signing up in large numbers are those who receive large subsidies to reduce their premiums and deductibles.
Importantly, increasing the number of people with insurance cards does not guarantee that those people gain anything in terms of health, as numerous studies have indicated a loose connection between health insurance and health.
Obamacare's Impact on Insurance and Premiums
Premiums soar: President Obama told Americans that Obamacare would reduce family premiums by $2,500. However, since Obamacare was signed into law, family premiums for employer plans have soared — increasing by more than $4,000 since 2009.
Young and healthy decline to subsidize old and sick: Obamacare requires that health insurers offer a standardized health insurance product to all applicants, and that they charge the same premiums regardless of health status. Insurers are also prohibited from charging near-retirees more than three times the amount charged to twenty-somethings. As a result, Obamacare increased individual-market premiums, with younger and healthier people bearing the largest increases.
In 2014 and 2015, insurers selling exchange plans lost money as the plans attracted older and sicker enrollees than expected. In 2016, most exchange-plan premiums are increasing by double digits. UnitedHealth, the country's largest insurer, has announced that it may stop offering exchange plans altogether after 2016 because of market instability.
Plans are becoming stingier: Premiums would be even higher had insurers not designed exchange plans with very narrow provider networks and high deductibles and cost-sharing amounts. Many enrollees are discovering that their treatments are "covered" under their plans but not actually paid for by them.
Obamacare's Impact on Businesses and Workers
Employer mandate arrives in full force: After two years of delay, the employer mandate takes full effect in 2016. The employer mandate requires that employers with at least 50 full-time workers offer acceptable coverage to their workers or pay tax penalties. These penalties can equal $2,000 per worker or $3,000 per worker receiving insurance subsidies on the exchanges. The mandate gives employers an incentive to trim hours below 30 per week, so workers are not considered full-time, and to keep their count of full-time workers (plus full-time equivalents) below 50.
Paperwork will become heavy: The IRS has created seven new forms for Obamacare. In particular, complying with the employer mandate will be a major paperwork burden for businesses. Businesses will have to report on their health-insurance offering as well as the monthly take-up rate for their workforce.
SHOP exchanges are mostly failing: The Small Business Health Option Program exchanges were designed to provide small businesses the ability to offer their workers more health-insurance options and lower overall premiums. Thus far, the SHOP exchanges have been a failure, enrolling only a small fraction of the number of people projected.
Co-ops have failed: All of the co-ops — state-based insurers established by the law through large federal startup loans — are underwater. More than half have already closed.
Fewer jobs: The Congressional Budget Office estimates that Obamacare will reduce the amount of full-time work in the economy by about 2 million jobs, decreasing American economic output by about half of 1 percent. This is largely the result of lower-wage workers' working less because additional work would reduce or eliminate subsidies.
Insurance-company subsidies were reduced and are ending: Obamacare contained two back-end subsidy programs to assist insurers offering Obamacare plans. The first, called reinsurance, pays the majority of the costs of insurers' most expensive enrollees. The second, called risk corridors, collects money from insurers with profits and pays insurers with losses. Congressional action required the risk-corridor program to be budget-neutral so taxpayers would not be on the hook for insurers' losses. That restriction had a big impact, since far more insurers lost money than made money on Obamacare plans in 2014 and 2015. Insurers were upset by the change and are lobbying for taxpayers to finance the risk-corridor deficit. Both reinsurance and risk corridors end after 2016, so Obamacare plan premiums in 2017 will have to rise, perhaps substantially, to account for the loss of these back-end subsidies.
Individual mandate reaches full strength: Obamacare supporters hope that the increase in the individual-mandate penalty, which will now equal the greater of $695 per person or 2.5 percent of household income above the tax-filing threshold, will give more young and healthy people an incentive to purchase coverage and stabilize the risk pools. To date, the individual mandate has not been nearly as effective as many experts predicted it would be.
Cadillac tax faces an uncertain future: The Cadillac tax was placed in the law primarily as a way to deal with the tax exclusion for employer-provided insurance, under which employer plans' premiums are not subject to federal income or payroll taxes. Economists generally agree that the exclusion causes numerous problems and contributes to the high cost of American health care. The Cadillac tax was slated to begin in 2018 as a 40 percent excise tax on plans valued at more than $10,200 for single coverage and $27,500 for family coverage. After aggressive lobbying by business and labor groups, Congress delayed the tax until 2020. Among people who believe the exclusion needs to be capped or limited, the delay raises concern that a cap will never happen.
Six years after enactment, Obamacare remains a law very much in turmoil. The law has decreased the number of people without health insurance, and its regulations and subsidies have benefited some lower-income people and many with preexisting health conditions. Many more, however, find they have less freedom and that their coverage is deteriorating: higher premiums, fewer providers, higher out-of-pocket costs. Beyond the widely reported website problems, many of the law's fundamental institutions — individual exchanges, SHOP exchanges, co-ops, mandates — are failing to perform as expected.
The law will certainly be one factor in this year's elections. Obviously, its long-term future depends heavily upon who is elected president. But perhaps more important is whether public pressure from higher premiums, higher taxes, and reduced choices will boil over and force Washington to revisit the law regardless of who is elected president.
In response to recent infections and deaths from tainted medical scopes, U.S. lawmakers are wrestling with how to keep other dangerous devices from harming patients.
Members of Congress, federal officials and health-policy experts agree that the Food and Drug Administration's surveillance system for devices is inadequate and relies too heavily on manufacturers to report problems with their own products.
But fixing the federal warning system to enable more timely identification of risky scopes, implants and surgical tools means overcoming significant challenges in Congress, from partisan divisions to the need for more government funding. Even then, it could take years for a new system to be up and running.
Patient advocates are skeptical of the FDA's commitment to reform. Federal auditors have criticized the agency's oversight of devices since the 1990s.
The latest push for changes came from Sen. Patty Murray (D-Wash.), who issued a report Jan. 13 exposing failures by the FDA, device makers and hospitals that contributed to the nationwide spread of antibiotic-resistant infections from a gastrointestinal scope. Senate investigators cited 19 superbug outbreaks in the U.S. that had sickened nearly 200 patients from 2012 to 2015. Two were in Los Angeles, at UCLA's Ronald Reagan Medical Center and Cedars-Sinai Medical Center.
The Senate report faulted the FDA for taking 17 months to investigate before issuing its own warning in February 2015. In the meantime, seven more hospitals suffered outbreaks and 68 patients developed dangerous infections.
Murray has called for an entirely new medical device tracking system, akin to the way prescription drugs are monitored. It would draw primarily on insurance claims data to supplement the industry's injury reports, which are often cursory and filed months late, if at all. President Barack Obama's nominee for FDA commissioner has endorsed that data-driven approach.
Part of that proposal, putting bar codes on every instrument for the first time, is already being phased in over the next few years. But experts say those unique identifiers will be of little use unless Congress requires hospitals and doctors to include them on insurance claim forms.
Researchers say claims filed with private insurers and Medicare are useful because they give close to real-time data on a large population. By tracking device IDs, like the VIN on a car, regulators could spot patients across the country coming into an emergency room or developing infections after a procedure.
Those red flags would trigger further investigation and possibly a safety alert — without the need to wait on incident reports from manufacturers or hospitals. The data also would help find patients who have implants that were recalled, or assist hospitals in pulling defective equipment out of service quickly.
Sen. Lamar Alexander (R-Tenn.), chairman of the Senate health committee, said the investigation into superbug outbreaks uncovered "disturbing facts" about the FDA's response. But he and other Republicans appear intent on making sure regulators are using the powers they already hold before embarking on a new government program.
Republican lawmakers have pointed out that the FDA can impose civil and criminal penalties against manufacturers for failing to report injuries or deaths, but it rarely uses those powers.
Budget hawks are likely to resist funding a new medical device monitoring system. It could cost up to $250 million to implement and maintain a new system over the first five years, drawing on government or private-sector funding, according to a Brookings Institution report last year.
Greg Daniel, one of the report's authors and now deputy director of the Center for Health Policy at Duke University, has been working with the FDA, hospitals and device makers, planning and designing a new tracking system.
"It will take a financial contribution to get this off the ground. It's expensive and complicated," Daniel said. "Most people think this is already being done, but we don't have the fundamental ability to link devices to their outcomes, like on the drug side."
Representatives of the device industry said they welcome the debate, but they too emphasize that regulators have plenty of authority already. Device makers wield considerable influence with Congress, contributing to lawmakers of both parties.
"The FDA has extensive post-market authorities — including requirements for quality systems, adverse-event reporting, mandatory recalls, corrections and removals — to help ensure the safety and effectiveness of medical technologies once they are on the market," said JC Scott, senior executive vice president for government affairs at AdvaMed, an industry trade group. He didn't address the Senate report's recommendations directly.
After safety problems with certain drugs a decade ago, Congress helped create the Sentinel program to better track medications. The program analyzes claims data on more than 170 million Americans from several large health insurers, dozens of hospitals and disease registries.
Dr. Robert Califf, the FDA's deputy commissioner and the president's nominee to lead the agency, said during his confirmation hearing that regulators need a Sentinel-like system for devices, too.
"We have plans to do that, but we are going to have to work with you on how to fund it," Califf told senators at the Nov. 17 hearing. "Imagine with these duodenoscopes, if there had been such a system, we would have seen the problem very early. We could see it independently of industry and act on it much more rapidly."
An FDA spokeswoman said the agency is carefully considering the recommendations in the Senate report, and is already taking steps to address some of the issues raised, such as notifying the public sooner about suspected problems before an investigation is finished. By the end of this year, the agency said it hopes to gain access to 25 million electronic patient records containing bar codes on the devices used.
Families affected by the recent superbug outbreaks support such changes but are skeptical about the FDA's role, given its past missteps.
For months, Glenn Smith of Woodland Hills, Calif., saw his 19-year-old son, Aaron Young, battle a superbug infection. He got it from a contaminated scope at UCLA's Ronald Reagan Medical Center in 2014.
"Something needs to be done," the father said, "but I'm wary of the FDA being in charge of anything until it gets its own house in order. They were very slow to react."
This story was produced by Kaiser Health News, which publishes California Healthline, a service of the California Health Care Foundation. Kaiser Health News is a national health policy news service that is part of the nonpartisan Henry J. Kaiser Family Foundation.
If the first step to solving a problem is admitting you have one, then Congress, we need to talk. As we mark yet another anniversary of the Citizens United decision, and with the most expensive election in history looming, it's becoming clearer and clearer that solutions are needed. Those solutions are in reach, yet congressional leadership refuses to act. This Town is in denial.
There can be no question that the flood of spending unleashed by Citizens United has created a crisis in our democracy. By almost any measure, the problem is getting worse:
• The 2000 campaign cycle cost $3 billion, but experts predict campaign spending will hit up to $10 billion this time. This means campaign costs are increasing faster than inflation, the cost of college, or health-care spending.
• In the last 25 years, the average cost of a seat in Congress has risen 64 percent for the Senate and 344 percent for the House.
• Between the two most recent midterms alone, the amount of undisclosed "dark money" spent in our elections skyrocketed 200 percent.
• Outside spending from politically active nonprofits and super PACs has increased 500 percent just since 2012.
• In 2014, for the first time ever, the total amount of money raised increased but the total number of donors decreased.
• The number of lobbyists has increased 245 percent since John F. Kennedy was in office, and that small army now spends more money lobbying Congress than taxpayers do to keep Congress running.
One of the worst side effects of the exploding cost of campaigns is how much time lawmakers have to spend dialing for dollars to raise that cash. And boy do they hate it. Don't take it from us: So says retiring Republican Sen. Dan Coats ("It's such a time-consuming distraction"); former Democratic Senate majority leader Tom Daschle ("We can't run a country this way"); the ever-colorful former RNC chairman Bill Brock ("If [the system keeps getting worse] … just shoot me"); and, most recently, Rep. Steve Israel, a former head honcho for Democratic fundraising ("I don't think I can spend another day in another call room making another call begging for money"). The situation has become so dire that President Obama highlighted the woes of fundraising fatigue in his State of the Union address.
So if members of Congress would rather quit than deal with this hideous system, why isn't their leadership attempting to do something to fix it? It's certainly not because of voter apathy. As the Pew Research Center found in November, 76 percent of both liberals and conservatives agree that money's influence in politics has grown, and similar majorities support commonsense solutions to change that. And poll after poll shows growing bipartisan disgust with Citizens United in particular. Even Justice Kennedy, the swing vote in Citizens United, has recently spoken out, stating that elements of the decision are not working as they should.
The solutions are at hand, if only Congress would grasp them. Compared with massive challenges like tackling climate change or fighting the obesity epidemic, reorienting democracy back toward average citizens is fairly simple. The DISCLOSE Act, which would expose dark-money flows to full sunlight, passed the House only to fall one vote short in the Senate in 2010. A constitutional amendment to reverse Citizens United garnered 54 votes when last considered by the Senate, and has been endorsed by over 600 state and local governments in 41 states.
Tellingly, the vast majority of Americans are not waiting for Washington to play catch-up. From new clean-elections laws in Maine, Seattle, and Tallahassee, to crackdowns on dark money in Montana and California, the American people are speaking out irrespective of party. They are using their voices and their votes to correct what is so clearly broken. Solutions already exist and are working to free elected officials from the burdens of fundraising, reduce conflicts of interest, increase transparency, and hold bad apples accountable for breaking the rules.
This explosion of state and local energy — often overlooked in national political coverage — is a sign of hope that Beltway denial will soon end. With the continued rise of outsider candidates who claim independence from this campaign-finance system, and with the public growing more outraged, congressional leadership can't keep refusing to solve the problem forever. History teaches us that big reform often follows big improprieties, from the post-Watergate campaign-finance system to the post-Abramoff changes in congressional ethics rules. Today we are one scandal away from a stampede to pass the DISCLOSE Act, adopt a constitutional amendment, implement the innovative solutions that are working in the states, or even persuade Justice Kennedy that he erred in Citizens United, securing five votes for sanity.
But it shouldn't have to come to that. Congress, let's call it a crisis now. Admit what is so obvious to everyone outside the Beltway. The sooner we do, the sooner elected officials can hang up the phone on dialing for dollars and start down the path to restoring our democracy.
Norman Eisen is a visiting fellow at the Brookings Institution. He previously served in the Obama administration, most recently as U.S. ambassador to the Czech Republic, and before that as President Obama's ethics czar. Nick Penniman is executive director of Issue One, a nonprofit organization dedicated to reducing the influence of money in politics and putting Americans of all stripes back in control of our democracy.
The BTT is back in the news.
A BTT, or business transfer tax, is a flat-rate consumption tax collected at the business level. Both Sen. Ted Cruz and Sen. Rand Paul have included one in their presidential platforms.
Supporters call it a business activity tax or business flat tax. Critics call it a value-added tax (VAT), which they consider undesirable because they believe a VAT would make future tax hikes easier to implement. Unlike income taxes or sales taxes, consumption taxes collected at the business level, like all business-level taxes, are not immediately obvious to the people who ultimately pay them, in this case primarily the businesses' customers and owners. Both the Cruz and Paul plans repeal employer payroll taxes and corporate taxes, which raise the same concerns.
The BTT is, indeed, economically equivalent both to a VAT and to other consumption taxes such as the flat tax and the national sales tax (often called the FairTax). They all tax the same tax base, and they all tax capital and labor equally. But if the BTT is a VAT, it's unlike any VAT ever seen. Administratively, it functions much more like the corporate income tax or the flat tax. And in general, flat-rate consumption taxes — whether they're called “VATs” or not — are dramatically superior to our current system of taxation.
European VATs (and the Canadian and Australian versions) are administered somewhat like a sales tax: on a transaction-by-transaction basis. Businesses must pay the VAT on all of their sales, but they get a credit for the value-added tax paid on the inputs used to produce the goods and services they sell. The credit avoids cascading taxation, where a tax is imposed on a tax again and again before the product reaches the consumer. By contrast, under the BTT, businesses would simply pay a tax on their gross revenues, minus what they paid to other businesses for their inputs, when they filed their quarterly and annual tax returns.
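To see the equivalence concretely, consider a simple two-stage supply chain. The sketch below is purely illustrative: the 10 percent rate and the dollar amounts are invented, not drawn from either campaign's plan.

```python
# Illustrative two-stage supply chain: a manufacturer sells inputs to a
# retailer for $400; the retailer sells to consumers for $1,000.
RATE = 0.10  # hypothetical 10 percent rate, for illustration only
manufacturer_sales, retailer_inputs, retailer_sales = 400, 400, 1000

# Credit-invoice VAT (European style): tax each sale, credit tax paid on inputs.
vat = RATE * manufacturer_sales + (RATE * retailer_sales - RATE * retailer_inputs)

# Subtraction-method BTT: each business pays tax on revenues minus purchased inputs.
btt = RATE * (manufacturer_sales - 0) + RATE * (retailer_sales - retailer_inputs)

print(vat, btt)  # both equal 100.0: 10 percent of the $1,000 of final consumption
```

Either way, the total collected is the rate times final consumption; only the paperwork differs.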
The BTT is different in one important respect from the flat consumption tax that free-market advocates have championed for decades. In 1983, economists Alvin Rabushka and Robert Hall developed the Hall-Rabushka flat tax, later promoted by then majority leader Dick Armey and by Steve Forbes. Under Hall-Rabushka, businesses deduct wages on their tax returns; then wages, and only wages, are taxed on individuals' personal tax returns. In other words, the old flat tax taxes capital at the business level and labor income at the individual level. Under a BTT, by contrast, wages are not a deductible business expense. A BTT therefore taxes both wages and capital at the business level.
Replacing the current income tax with a flat-rate consumption tax would simplify the tax system considerably. It could also increase the size of the national economy by about 15 percent over a decade. That growth results from eliminating the double taxation of savings and investment, dramatically reducing marginal tax rates, and dropping the plethora of existing special preferences, deductions, credits, and exclusions. These reforms would boost employment, real incomes, and tax receipts. Higher employment and higher incomes will substantially reduce state and federal spending on income-maintenance programs.
These positive economic effects have inspired a number of BTT proposals over the last two decades. They include the 1995 USA Tax plan of former Sen. Pete Domenici, R-N.M.; the 2005 BEST Tax proposal of former Sen. Jim DeMint, R-S.C.; and the plan in House Speaker Paul Ryan's 2010 Roadmap for America's Future.
Ultimately, it's largely unimportant whether you call the BTT a VAT or some other name. At the end of the day, a BTT, a flat tax, and a sales tax are all flat-rate consumption taxes. As a replacement for the income tax, they would all have a dramatic, positive impact on economic growth and the real incomes of the American people.
Any of them would constitute a huge improvement over the current tax system. And any one of them deserves very serious consideration.
David R. Burton is senior fellow in the Heritage Foundation's Thomas A. Roe Institute for Economic Policy Studies.
There's new research (paywall) from Boston University's Michael B. Siegel and Emily F. Rothman analyzing the relationship between gun ownership and various types of homicide in U.S. states. Covering the period 1981-2013, the study argues that higher gun ownership goes with higher rates of lethal violence among non-strangers, but not among strangers, and the results hold for male and female victims alike. It's the first study to look at gender and the stranger/non-stranger distinction simultaneously.
I can't pretend to be unbiased; I'm skeptical of this line of research and have not been the least bit quiet about it. But this isn't one of those simple studies that a journalism major like me can effectively pick apart. The true test will come when more qualified researchers dig in. Once that happens, the rest of us can make an informed decision about how this work should change our views.
In the meantime, I’ll stick to laying out how the study worked, detailing what it found, and pointing out a few potential pitfalls and policy implications. Fair warning: The following goes into a lot of detail.
The outcome of interest here is homicide, broken out according to the victim's gender, whether or not the killing was accomplished with a gun, and the relationship between the parties. That last part is the tricky one: The FBI does classify offenders as strangers and non-strangers to the deceased — with non-strangers defined to include those "casually acquainted," as well as groups of attackers where even one person knows the victim — but a lot of the data are missing. In no small part, this is because more than 30 percent of murders go unsolved. So the authors use a version of the FBI data where the missing information has been statistically "imputed." Across the entire time period studied, about 78 percent of homicides were committed by non-strangers.
The other key variable is gun ownership, which is also tricky. Most surveys are too small to provide state-level data; the main exception has numbers only for 2001, 2002, and 2004. To address this problem, the authors instead use a "proxy" for gun ownership, a combination of two other variables — the percentage of suicides committed with a gun, and the number of hunting licenses issued in each state.
This is a dramatic improvement over the percentage-of-suicides variable by itself, which has been popular as a proxy for decades. That measure has a nasty habit of correlating with homicide rates even in situations where the actual survey data do not. The new measure has a much tighter correlation with the survey data (technically speaking, r=0.95 vs. 0.80), suggesting it's a closer fit to the real numbers.
There's no denying that hunting licenses represent only a particular type of gun ownership, though. Handguns, as opposed to the long guns favored for hunting, make up an increasing proportion of gun sales — in the late 1990s and early 2000s, the FBI conducted more than twice as many background checks for long-gun sales as for handgun sales, but the gap has disappeared since. Since the proxy was tested against older survey data, it may be a worse fit in more recent years. One gun study incorporated the FBI's background-check data into its gun-ownership proxy, but these data begin in the late 1990s.
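For readers curious what a two-part ownership proxy looks like in practice, here is a hypothetical sketch. The `ownership_proxy` function and its 0.6/0.4 weights are my own invention for illustration; the study's actual formula and weights are not reproduced here.

```python
# Hypothetical sketch of a two-part gun-ownership proxy combining the firearm
# share of suicides with per-capita hunting licenses. The weights are invented
# for illustration and are NOT the study's actual formula.
def ownership_proxy(gun_suicides, total_suicides, hunting_licenses, population,
                    w_fs=0.6, w_hunt=0.4):
    """Weighted combination of firearm-suicide share and licenses per capita."""
    fs_share = gun_suicides / total_suicides      # fraction of suicides by gun
    licenses = hunting_licenses / population      # hunting licenses per capita
    return w_fs * fs_share + w_hunt * licenses

# Example: a state where half of suicides involve guns and 5 percent of
# residents hold hunting licenses.
print(ownership_proxy(300, 600, 50_000, 1_000_000))  # 0.32
```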
Armed with their best approximations of homicide rates and gun ownership, the authors take a whole bunch of other variables into account: "age, gender, race/ethnicity, region, urbanization, poverty, unemployment, income, education, income inequality, divorce rate, alcohol use, nonviolent crime rate, hate crime rate, suicide rate, and incarceration rate." (Some previous studies, including this one that Siegel coauthored, have controlled for broader measures of violent crime, such as the overall violent-crime rate or the robbery rate, instead of including just nonviolent crime and hate crimes.) They also adjust for national trends in homicide rates and use a statistical technique that "accounts for the correlation of data within the same state across time."
Here are their results:
According to this model, when you increase the gun-ownership rate by 10 percentage points (e.g., from 30 percent to 40 percent), you increase total firearm homicides by about 10 percent (e.g., from 4 to 4.4 per 100,000). The results for strangers and non-gun homicides are all statistically insignificant. This jibes with some previous studies, though others disagree. (Here is a very critical literature review from the criminologist Gary Kleck, including a whole section dissecting a previous study coauthored by Siegel.)
A qualm: It's odd to me that, while statistically insignificant individually, all nine of the non-gun results are positive. One might expect higher gun ownership to go with lower non-gun homicide: Guns can't cause non-gun homicides, but guns can deter attackers with inferior weapons, and as guns become more available, murderers may substitute guns for other weapons. There are, no doubt, also cases where the presence of a gun causes a homicide to occur that wouldn't have otherwise — raising the key question of whether the good outweighs the bad. But I tend to think these other effects exist, and this study fails to pick up on them.
Importantly, as the authors note, these results are fairly difficult to reconcile with a theory of "reverse causality" — the idea that homicides in a community cause people to arm themselves, rather than the reverse. (See the above-linked Kleck review for a discussion.) It's certainly true that this happens; the most recent example is the explosion of concealed-carry applications and gun purchases in the San Bernardino area. But one would expect this effect to be strongest with stranger homicides, while the new data show the strongest correlation between gun ownership and non-stranger homicides. Even allowing for the facts that "non-stranger" is defined broadly and there are relatively few stranger homicides in the data, that's a challenge to researchers on the other side of the issue.
The authors also provide a simpler analysis with an interesting chart to go with it. As numerous critics (including David Freddoso and yours truly) have pointed out, if you look at the raw state-level data, you see there's actually no correlation between homicide rates and gun ownership. This isn't the end of the conversation, because a relationship can pop out once you account for other important variables, as the authors do here. (And no, Vox, I never pretended otherwise.) But it is a pretty striking illustration of how complicated the topic is: Whatever effect guns have, it's subtle enough that it doesn't show up as a recognizable pattern in the overall data.
The new study confirms this result for men. Setting aside all the aforementioned statistical wizardry, gun-ownership rates explain a paltry 1.5 percent of the variance in male firearm-homicide rates. But for women it's a different story, with 41 percent of the variance explained:
The obvious criticism is that, even in this simple analysis, they shouldn't just be looking at firearm-homicide rates; they should be looking at total homicide rates. Again: Guns can substitute for other weapons, and they can deter gunless attackers, so gun homicides and non-gun homicides are not independent of each other.
For what it's worth, though, I did a poor man's version of the above analysis and still found a significant correlation between gun ownership and total female homicide rates. (I averaged the 2001-2004 surveys for gun ownership, and matched them up with CDC homicide data from the period 1999 to 2006 — because some states are small and female homicide rates are low, I had to cast a wide net to get usable numbers from the CDC.) In my data, gun ownership explains 15 percent of the variation in female gun homicides and 9 percent of the variation in total female homicides. There was no correlation between gun ownership and female non-gun homicide.
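For the curious, a "poor man's" variance-explained calculation amounts to squaring the Pearson correlation between the two series. The sketch below uses fabricated state-level numbers purely for illustration; the real analysis used averaged 2001-2004 survey data matched to CDC homicide figures.

```python
# Share of variance in homicide rates explained by gun ownership (R-squared),
# computed from scratch. The data below are fabricated for illustration.
def r_squared(x, y):
    """Square of Pearson's r: fraction of variance in y explained by x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

ownership = [0.15, 0.25, 0.35, 0.45, 0.55]       # hypothetical ownership shares
female_homicide = [1.1, 1.4, 1.3, 1.9, 2.1]      # hypothetical deaths per 100,000
print(round(r_squared(ownership, female_homicide), 2))
```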
So, those are the results. What are the policy implications? If you believe all of the above claims and see no other reason to value the right to bear arms, a possible conclusion is that the government should deliberately reduce gun ownership somehow. Good luck with that. A milder suggestion would be to require background checks on all gun transfers — including between private parties — though, despite popular support, this has proven difficult to enact at the national level.
If the results for women specifically stand out to you, or if you'd like a politically modest agenda, a more focused approach recommends itself. Men who kill intimate partners, for example, usually don't just do it out of the blue; they have a history of criminal behavior. There are ideas for reform that would target these men specifically. As I wrote in National Review last month:
Hillary Clinton ... would expand the list of misdemeanors that legally disqualify people from gun ownership to include domestic violence against a non-cohabiting partner, as well as stalking. A related proposal, heavily promoted by the Center for American Progress and enacted in some states, would strip gun rights from those under a temporary restraining order, which raises some due-process concerns because such orders are issued without a hearing.
Still another reform is to make sure those convicted of domestic violence actually give up their guns. You can read CAP's 2013 report on these proposals here.
Only about 20 percent of American homicide victims are women — and among female homicide victims, only 40 percent are killed by intimate partners, and only about half are killed with guns. So this won't make a huge dent in the total homicide rate. But every little bit helps, and, importantly, it's hard to argue against disarming domestic abusers.
No one study will end the gun debate, but this latest contribution is an interesting and challenging one. It deserves careful consideration by those in both ideological camps.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
In a working paper released last week by the University of Michigan Transportation Research Institute, researchers Michael Sivak and Brandon Schoettle examined changes in the percentage of Americans with a driver's license. Younger age groups have seen their rate decline continuously since 1983; the number for 20- to 24-year-olds, for example, fell from 92 percent in 1983 to 77 percent in 2014.
Other groups actually saw an increase between 1983 and 2008, the most dramatic (from 55 percent to 78 percent) being among those over 70. But all age groups have seen a decline since at least 2011.
Why are so many Americans not getting licensed? In a 2013 study, Schoettle and Sivak conducted an online survey asking unlicensed adults age 18 to 39 that very question. The top five reasons were: too busy or not enough time to get a driver's license (37 percent); owning and maintaining a vehicle is too expensive (32 percent); able to get transportation from others (31 percent); prefer to bike or walk (22 percent); and prefer to use public transportation (17 percent). All other survey responses registered in the single digits, including "concerned about how driving impacts the environment" (9 percent). (The numbers add to more than 100 percent because secondary responses are included.)
An important caveat, however, was that 69 percent of those surveyed planned to get a driver's license within the next five years, while only 22 percent planned never to get one. So it's not clear whether this generation of younger Americans will continue to drive less than previous generations as they age.
The economy will be a factor, seeing as the expense of owning a vehicle is one of the most commonly cited reasons for being unlicensed. Labor-force participation has been inching downward in recent decades, even among those in their late 20s and early 30s. If this unsettling trend continues, many Americans in this age group will likely continue to depend on non-automotive modes of transportation.
Among Americans over 70, nearly four out of five are now licensed, and that figure appears to be relatively stable. With this age group, driver safety is the elephant in the room, and these Americans strongly view their freedom as tied to their driving rights. Fortunately, a technological solution, in the form of self-driving or "autonomous" vehicles, is in the offing. Automobiles already incorporate such safety features as lane-departure warnings, speed-adjusting cruise control, rear-view cameras, and automatic brakes. This trend will only accelerate, with some predicting that fully self-driving cars will dominate as early as 2030.
The looming public-policy challenge will be adopting an acceptable regulatory framework to govern self-driving vehicles. In an October press release, Volvo president and chief executive Hakan Samuelsson said that "the US risks losing its leading position [on self-driving cars] due to the lack of federal guidelines for the testing and certification of autonomous vehicles. Europe has suffered to some extent by having a patchwork of rules and regulations." Samuelsson was referring to the prospect of each state government instituting its own rules.
The issue of state versus federal laws is always contentious, and the evolution of this technology requires that the public-policy discussion begin in earnest. One option is a model statute that each state government could adopt, with minor variations allowed.
If such a statute can be developed over the next few years, it will give ample time to prepare for the coming revolution in how Americans travel.
Thomas A. Hemphill is a professor of strategy, innovation and public policy in the School of Management, University of Michigan-Flint and a senior fellow at the National Center for Policy Analysis.
As poll after poll has demonstrated, consumers want to buy "Made in the USA." In fact, it's a fairly obvious proposition. Americans overwhelmingly recognize that unless the United States maintains a strong and diverse manufacturing base, the nation's overall economic standing will suffer.
While Americans have become more concerned in recent years with restoring the nation's manufacturing sector, one aspect of industrial self-sufficiency remains problematic. And of late, it has become a worrisome problem.
America's manufacturers are growing more and more dependent on imported metals and minerals to make many of the products used in everyday life. Specifically, the United States is now completely import-dependent for 19 key minerals, and more than 50 percent dependent for another 24 important minerals.
It wasn't always this way. As recently as 1990, the United States led the world in metals and mineral production. Now, America ranks seventh, and relies on roughly $27 billion worth of imported minerals each year.
Mineral production may seem an obscure topic to some. But consider this: The average smartphone contains dozens of different metals and minerals, including copper, gold, platinum, and silver.
While consumers may be unaware of America's growing metals shortfall, the nation's manufacturers are becoming ever more worried. In a 2014 survey, more than 90 percent of U.S. manufacturing executives said they're concerned about obtaining the minerals they need, when they need them.
The real irony of this growing import dependence is that, as a nation, America possesses great mineral wealth. Indeed, the United States holds some of the greatest mineral reserves on the planet — worth an estimated $6.2 trillion.
Despite this natural abundance, less than half of the minerals consumed by America's manufacturers are actually sourced domestically. And the problem, when it comes to extracting new minerals, is mostly a bureaucratic one. It now takes as long as seven to ten years for a U.S. mining operation to navigate the permitting process needed to launch new operations.
A decade of permitting delays can eliminate half of a mining project's value before production even begins. In contrast, mine permitting in countries like Australia and Canada, which maintain environmental standards comparable to those of the United States, takes only two to three years.
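The claim that a decade of delay can erase half a project's value is consistent with simple present-value arithmetic. A quick sketch, assuming a 7 percent discount rate (an illustrative figure, not one taken from the permitting studies):

```python
# Illustrative present-value arithmetic: cash flows pushed back by a
# ten-year permitting process are discounted by 1/(1+r)^10.
# The 7% discount rate is an assumption for illustration only.

discount_rate = 0.07
years_delayed = 10

discount_factor = 1 / (1 + discount_rate) ** years_delayed
print(f"Value remaining after {years_delayed} years of delay: {discount_factor:.1%}")
# Roughly 51%, i.e., about half the project's value is lost to delay.
```

Under these assumptions, revenue deferred ten years is worth only about half as much today, which is why permitting timelines translate so directly into lost project value.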
Such uncertainty and delay keep America's mine operators from extracting some of the world's most crucial mineral supplies.
If America's manufacturers are to remain competitive, they'll need more timely access to reliable mineral sources. This is particularly apparent when one considers that many of the nation's existing mines are reaching the end of their useful lifespan. And it's why Washington urgently needs to overhaul the outdated permitting process that continues to hamper domestic mining operations.
Thankfully, Congress is catching on. The House of Representatives has repeatedly passed legislation to make the permitting process smarter and more efficient. But the Senate needs to act as well.
An engaged Washington, and one that truly values domestic manufacturing, needs to give America's mineral miners a fair shake. Otherwise, the cost of inaction will lead to a greater dependence on imported metals. And that's simply bad business for American manufacturing.
Kevin L. Kearns is president of the U.S. Business & Industry Council (USBIC), a national business organization advocating for domestic U.S. manufacturers since 1933.
The current election campaign has heightened sensitivity to racial concerns. Among liberal Democrats this has led to a focus on the impact of President Bill Clinton's cash-welfare policies. Last week, for example, in an attempt to undercut Hillary Clinton's support in the black community, Jamelle Bouie of Slate claimed that "welfare reform couldn't protect poor women in the recession that followed."
In the same vein, but with much more detail, Eduardo Porter vilified the 1996 welfare-reform law in an October New York Times column. He contended that welfare does not encourage dependency, citing a finding by Mary Jo Bane and David Ellwood that over 40 percent of pre-reform welfare recipients were short-term users. Porter then claimed it was the strong economy, not Clinton's welfare policies, that explained the substantial drop in poverty rates during the late 1990s. Porter also condemned the mean-spiritedness of the current state-run cash-welfare system, citing the fact that "today only 26 percent of families with children in poverty receive welfare cash assistance ... down from 68 percent two decades ago."
A month later, the grande dame of entitlements, Frances Fox Piven, made similar claims in a Pacific Standard essay, but added that since employment policies won't work, only increased cash payments will reduce poverty.
Every one of these claims is misleading or inaccurate. The Bane-Ellwood study, for example, covered the period 1968 through 1989, but the welfare population ballooned from a steady 3.8 million during the 1980s to 5.1 million by 1992; a study of the earlier, smaller caseload says little about the dependency of the recipients added during that expansion.
Porter misrepresented what happened after welfare reform, too. The poverty rate of children living in single-mother households declined from 50 percent in 1996 to below 40 percent by 2000. To support his claim that a strong economy, not changing welfare policies, was responsible for this improvement, Porter referenced a study that focused on one aspect of welfare reform: time limits. By contrast, in a paper commissioned by the American Economic Association, Rebecca Blank surveyed more than 20 studies and concluded, "Economic factors had smaller effects post-TANF, suggesting that the policy changes of the mid-1990s were a major cause of declining [welfare] caseloads." In particular, President Clinton's Council of Economic Advisers, headed by Joseph Stiglitz, concluded that after the 1996 reform, the strong economy was responsible for only 9 percent of the welfare caseload decline while the reforms were responsible for 35 percent.
There are a number of reasons why a strong economy alone was insufficient to draw many welfare mothers into the labor market. In 2005, I interviewed staff in welfare-to-work transition programs. They consistently indicated that welfare mothers had a totally unrealistic set of wage expectations. Nydia Hernandez, administrator of the STRIVE program in Chicago, told me, "They would say that they had to make at least $10 per hour, but when you looked at their lack of credentials — hadn't worked in six years and didn't even have a GED — you could see how unrealistic they were." Only after being convinced of the long-term benefits did many women accept paid employment. In addition, the requirement that cash-welfare recipients spend 20 hours a week on work-related activities (such as applying and interviewing for jobs) prodded many to shift to paid labor, including one of the three women New York Times poverty reporter Jason DeParle followed in his book on welfare reform.
Bouie and Piven are also wrong in their claim that the shift from welfare to work did not insulate single mothers from the early 2000s recession. The poverty rate among children living in single-mother-headed families remained at 40 percent from 2000 through 2002 and rose only to 42.5 percent by 2005. These rates were well below those seen prior to welfare reform, and the increase was only one-half the increase observed during the previous 1990-1992 recession. Indeed, a Federal Reserve Bank study found that, compared with the bottoms of the two previous recessions, in 2004 young single mothers were much less likely to have incomes below the poverty line.
Porter also misleads when he bemoans the dramatic decline in the share of poor families on cash welfare. The decline is happening because work is now more advantageous for single mothers: A job enables them to gain substantial safety-net benefits, including the Earned Income Tax Credit (EITC) and the child tax credit. Even if they don't quite escape official poverty, these mothers are substantially better off working than they would be if they went on cash welfare.
Most important is the 20-hour work-related-activity requirement. In 2014, the median annual cash welfare payment across states was $5,148. If mothers instead worked those 20 hours at a job that paid $8 per hour, their wage income would be $8,320 plus (if they had two children) an EITC of $3,328 and $800 in child credits. While part-time employment does not allow these families to escape poverty, it would yield $12,448, and more in the 20 states that have their own EITC programs.
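The arithmetic in the paragraph above can be checked directly, using only the figures it cites:

```python
# Verifying the paragraph's arithmetic using its own figures:
# 20 hours/week at $8/hour over 52 weeks, plus the cited EITC
# and child-credit amounts for a mother with two children.

hours_per_week = 20
wage = 8.00    # dollars per hour
weeks = 52

wage_income = hours_per_week * wage * weeks  # $8,320
eitc = 3328                                  # cited EITC amount
child_credit = 800                           # cited child-credit amount

total = wage_income + eitc + child_credit
print(f"Wage income: ${wage_income:,.0f}; total with credits: ${total:,.0f}")
# -> Wage income: $8,320; total with credits: $12,448
```

The $12,448 total, against a median cash-welfare payment of $5,148, is the gap the paragraph is pointing to.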
Certainly a case could be made for states to modestly increase cash payments and make their work requirements more appealing. But it is the labor market that offers single mothers the best hope of escaping poverty. Indeed, those at work are more likely to socially interact with others who work, improving the likelihood of forming lasting partnerships that result in middle-class status. The employment focus of social policy should be strengthened, not weakened, so that more single mothers can move forward.
Robert Cherry is a professor at Brooklyn College.