The e-cigarette industry has blossomed into a $2 billion business, providing thousands of manufacturing and retail jobs while offering smokers a safer alternative. Unfortunately, this thriving market is under threat due to new rules finalized in May by the U.S. Food and Drug Administration (FDA).
The new rules expand the regulatory authority of the FDA to cover all tobacco or tobacco-related products — including e-cigarettes, many of which contain no tobacco whatsoever.
Like cigarettes, other forms of tobacco consumption can be addictive because of nicotine — which is why the FDA has asserted regulatory authority. While most e-cigarette products do not contain tobacco, they still use flavored juices that contain nicotine. Smokers can use e-cigarettes to supplement or, in some cases, replace their regular tobacco usage. The FDA worries that these products will increase nicotine addiction rates, especially among youth, and thereby lead to increased tobacco usage overall.
While it could be argued that e-cigarettes and other tobacco-related products should be regulated in a way that puts them on a level playing field with cigarettes, nothing in the rules requires the FDA to regulate evenhandedly. The rules give the agency broad authority to regulate tobacco products in whatever way it wants. For many of these rules, the FDA will issue “guidance” rather than spelling out the restrictions in the finalized regulations. Unlike formal rules, guidance does not need to go through a formal approval process. Thus, there is simply no oversight to ensure equitable regulation.
Most e-cigarette producers and retailers are small businesses, not multibillion-dollar corporations like Philip Morris. Many e-cig and vape shop owners produce their own juices and flavors which, according to the new rules, must soon be individually tested and approved by the FDA.
Open-ended regulations and big business do not bode well for a competitive market. As in other industries, large and powerful tobacco companies are able to engage in political rent seeking by pushing through policies that benefit them and harm their competition.
The motivation behind Big Tobacco’s e-cig scare is easy to identify: e-cigarettes threaten its business. According to a European Union study, e-cigarettes are responsible for up to 30 percent of cigarette smokers reducing their cigarette consumption or quitting altogether.
The rules finalized by the FDA will likely decimate much of the e-cigarette industry, from producers to retailers, potentially eliminating tens of thousands of jobs. Their business models are simply not equipped to absorb the regulatory costs imposed by these regulations; many shops and producers will have to shut down rather than attempt to comply with them. And less competition is good news for Big Tobacco — especially if vapers transition back to traditional cigarettes.
Some e-cigarette companies and advocacy groups are beginning to fight back, including the vaping industry’s Right to Be Smoke-Free Coalition. Five lawsuits have already been filed against the FDA over the rule. The groups argue that the FDA has no rationale for regulating non-tobacco products in the same way as cigarettes. We can only hope the judges agree.
In an attempt to promote public health, the FDA is limiting consumer choice through unnecessary and prohibitive regulations. Without the option of using safer alternatives, smokers addicted to nicotine may be left with only two choices: quit cold turkey or continue to consume traditional cigarettes — just what Big Tobacco wants.
Jonathan Nelson is a Young Voices Advocate and a graduate of Grove City College.
This week, the Democratic Party released its proposed 2016 platform, which includes the adoption of a $15 minimum wage — despite Hillary Clinton’s own admission that she worries the policy will cost the economy jobs. Sounds like a big win for unions, who have been pushing for a higher minimum wage for years, right? Yet even as labor leaders applaud the commitment, the platform may backfire on Big Labor.
It appears that this platform plank would nullify a number of exemptions and carve-outs unions have negotiated for themselves at the state and local levels. Few realize this, but many U.S. cities, such as San Francisco, Oakland, Richmond, Long Beach, San Jose, Milwaukee and Chicago, exempt organized labor from their minimum wage mandates. That’s right: for years, labor leaders have brokered backroom deals to win exemptions from the very minimum wage policies they have spent tens of millions of dollars to publicly support. The U.S. Chamber of Commerce has compiled the full list of exemptions.
The existence of these escape clauses proves what many in the free-market movement have said all along: Big Labor bosses don’t care about American workers; they care only about themselves and their bottom line. In abject hypocrisy, unions’ national push for what they deem a “fair wage” does not extend that wage to their own members.
A step ahead of the game, organized labor has used the push for higher wages as a manipulation tool. All around the country, they encourage non-unionized workplaces to agree to union representation by presenting themselves as a lower cost labor option to hotel-owners, fast food chains, and hospitals. They argue that, with their secured exemptions, employers can pay their unionized employees less, making unionization seem more appealing.
For example, in Los Angeles, the Service Employees International Union (SEIU) — which just endorsed Hillary Clinton for president — spent millions campaigning for a $15 wage, then asked the city council to exempt union shops from the new law. The inequity is already visible in Los Angeles hotels: at the Sheraton Universal, unionized employees make only $10 an hour, far less than the city’s $15.37 minimum wage for hotel workers. Their counterparts at the Hilton next door earn more than 50 percent more, and they have their union to thank for the gap.
It’s evident Big Labor’s push to increase the minimum wage is not about providing workers a so-called “living wage.” It is, rather, a coordinated, disguised effort to boost union membership — putting unions in an easier position to organize workplaces with owners’ blessing. This is one of the last bargaining chips union bosses have to combat their drastically declining membership, as workers realize unions sell an outdated product few want or need anymore.
Perhaps the Democratic Party is starting to catch on. While I do believe a $15 minimum wage will be a disaster for our economy — costing nationally between three million and five million jobs, by conservative estimates — I can appreciate that their platform may finally be calling foul on union hypocrisy. So next time you hear union protestors chant: “A fair day’s wage for a fair day’s work,” remind them that they may not be getting “fairness” at all.
Heather Greenaway is a spokesperson for the Workforce Fairness Institute (WFI).
With the July 21 anniversary of the Dodd-Frank Wall Street Reform and Consumer Protection Act now upon us, it’s a good time to reflect on how this type of Byzantine legislation spawns a convoluted network of tangled regulations.
When recently unveiling his Financial CHOICE Act, House Financial Services Committee Chairman Jeb Hensarling highlighted a key principle behind his efforts to combat this overgrowth: “Simplicity must replace complexity.” The chairman’s focus on regulatory complexity is appropriate.
In many ways, regulations are like a computer’s operating system, establishing processes and parameters within which programs must operate. But anyone who has undergone the experience of “upgrading” an operating system only to find her computer sluggish and unresponsive knows that complexity is not always a desirable feature. Steven Teles, a political scientist with Johns Hopkins, made a similar comparison when he famously referred to American policy as a “kludgeocracy,” an ever-expanding series of “inelegant patch(es)” meant to solve short-term problems, but which ultimately hinder system performance.
A recent analysis showed that Dodd-Frank accounted for nearly 30,000 new regulatory restrictions — more than all other laws passed during the Obama administration combined. These new regulations, authorized by a Congress in crisis mode, were piled on top of more than one million existing regulatory restrictions. Even former Senator Chris Dodd, one of the bill’s namesakes, admitted just after the bill’s passage that “no one will know until this is actually in place how it works.” Scholars subsequently argued that the regulatory uncertainty exacerbated by Dodd-Frank could explain the slow recovery. At the time, however, some facts were clear: Dodd-Frank would increase regulatory complexity, induce uncertainty, and line the pockets of regulatory compliance experts.
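Analyses of this kind typically count “regulatory restrictions” by tallying binding-constraint terms in regulatory text. A minimal sketch of that approach in Python follows; the term list and sample passage are illustrative assumptions, not the cited study’s actual methodology:

```python
import re

# Illustrative restriction terms; real analyses use a similar,
# carefully validated list of binding-constraint words.
RESTRICTION_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text):
    """Count occurrences of restriction terms, case-insensitively,
    matching whole words only."""
    lowered = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
               for term in RESTRICTION_TERMS)

# Invented sample passage for demonstration.
sample = ("A swap dealer shall register with the Commission. "
          "Registrants must maintain records and may not commingle funds.")
print(count_restrictions(sample))  # -> 3
```

Run across the tens of thousands of pages of rules implementing Dodd-Frank, this sort of tally is how a figure like 30,000 restrictions is produced.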
To an unprecedented degree, simply ascertaining the relevance of regulations stemming from an act of Congress now requires regulatory compliance expertise. To illustrate, consider a simple visualization of regulatory restrictions originating from another major financial regulatory law, the Sarbanes-Oxley Act of 2002. Sarbanes-Oxley, which dealt with audits and financial reporting, affected public companies in all sectors of the economy and induced some regulations that specifically targeted a handful of industries. Textual analysis of those regulations shows that five industries were directly targeted by regulations from two federal agencies.
Sarbanes-Oxley was, of course, a significant regulatory overhaul in its own right. In 2012, the Wall Street Journal Editorial Board went so far as to call it one of the reasons for slow economic growth. Furthermore, much of the effect of Sarbanes-Oxley stems from the creation of the Public Company Accounting Oversight Board, a regulatory entity that awkwardly straddles the public-private divide with considerable control over auditing firms and — indirectly — the public companies they audit.
Nonetheless, even allowing for the additional complexity of referencing accounting standards that are not formally published as regulations, Sarbanes-Oxley is a model of simplicity compared to Dodd-Frank. Consider a similar visualization of the agency-industry relationships emerging from Dodd-Frank — which, for the sake of visualization, is limited to only 10 agencies and 10 industries. In fact, at least 32 different agencies have promulgated rules under the statutory authority of Dodd-Frank.
In the post-Dodd-Frank world, understanding which regulations are relevant to a business’s activities has become immensely more difficult. Many sectors of the economy were newly exposed to regulations from a multitude of unfamiliar agencies. Duplicative and contradictory rules became a fact of life.
In 1788, James Madison worried that laws may become “so voluminous that they cannot be read, or so incoherent that they cannot be understood.” He was right to worry: the current regulatory code is so complex and voluminous that, rather than spend three years reading it, I helped create text analysis software that uses machine learning to assess the probability that a given regulatory restriction targets a specific industry. But even with the insights of machine learning and text analysis software — or regulatory compliance experts who bill by the hour — considerable uncertainty remains. Regulatory agencies themselves are increasingly unfamiliar with their own regulations.
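As a hedged illustration of the kind of classifier just described — not the actual software — here is a tiny naive Bayes model that scores the probability a restriction’s text targets a particular industry. The corpus, labels, and target industry are all invented for the example:

```python
from collections import Counter
import math

# Hypothetical toy corpus: each restriction is labeled 1 if it targets
# depository institutions, 0 otherwise. A real model would train on
# thousands of restrictions drawn from the Code of Federal Regulations.
docs = [
    "a bank holding company shall not engage in proprietary trading",
    "insured depository institutions must maintain capital ratios",
    "swap dealers shall register with the commission",
    "each issuer must disclose executive compensation annually",
]
labels = [1, 1, 0, 0]

def train(docs, labels):
    """Fit a tiny multinomial naive Bayes model: word counts per class."""
    counts = {0: Counter(), 1: Counter()}
    for text, y in zip(docs, labels):
        counts[y].update(text.split())
    return counts, Counter(labels)

def prob_targets_industry(model, text):
    """Estimate P(restriction targets the industry | its text),
    with Laplace smoothing so unseen words do not zero out the score."""
    counts, class_totals = model
    vocab = set(counts[0]) | set(counts[1])
    log_odds = math.log(class_totals[1] / class_totals[0])
    for w in text.lower().split():
        p1 = (counts[1][w] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][w] + 1) / (sum(counts[0].values()) + len(vocab))
        log_odds += math.log(p1 / p0)
    return 1 / (1 + math.exp(-log_odds))

model = train(docs, labels)
p_banking = prob_targets_industry(model, "depository institutions report capital")
p_disclosure = prob_targets_industry(model, "issuers disclose compensation")
```

Even on this toy corpus, a banking-flavored restriction scores well above one-half and a disclosure-flavored one well below it — but, as the article notes, probabilistic filtering narrows the search without eliminating the residual uncertainty.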
When there are more rules in place than anyone can read, and interpretation of those rules and their scope is determined by the regulators themselves, businesses must pay for experts to filter the rules that are truly relevant from the rest. Meanwhile, businesses must also keep an eye on new rules coming down the pipeline and the possibility of reinterpretation of old rules. For both federal regulations and statutes, an irrelevant requirement only remains irrelevant until a bureaucrat, or a federal prosecutor, decides otherwise.
Regulatory complexity engenders uncertainty. That may not be a problem for some politicians; but for anyone who must comply with regulations, complexity and uncertainty can be paralyzing. Simplifying the complex regulatory regime imposed by Dodd-Frank is an application of another lesson from the world of computer programming: iterative design can correct serious errors and reduce unnecessary complexity.
As the Democratic Party looks to advance what has been characterized as the “most progressive platform in the party's history,” there's never been a more urgent time for Republicans to revitalize their energy and climate agendas.
The Democrats’ formal 2016 platform will not be adopted until delegates convene for the Democratic National Convention, scheduled for July 25 to July 28 in Philadelphia. But a leaked version of the platform draft obtained by NBC News shows an intent to double down on “climate justice” and proposals to transform America into a “clean energy superpower,” long-standing priorities of the Democratic Party. Despite Republicans' best efforts over the years, there has been a pileup of regulations and market-stifling subsidies aimed at achieving these goals.
The Democrats’ platform correctly diagnoses the benefits of a renaissance in energy technology, including technologies to combat climate change. But by refusing to bend on their ideological attachment to command-and-control solutions to exaggerated problems, party leaders may impede the very future they long to see. The Republicans, meanwhile, if they hope to resist the problematic elements of the Democratic plan, must counter with their own pro-market energy and climate platform.
There is some good news in the platform. Democrats fortunately appear to have resisted disastrous “keep it in the ground” proposals, including proposed bans on hydraulic fracturing and on fossil-fuel leasing on federal lands. Such proposals frame fossil-fuel use as a moral bad, disregarding the enormous economic benefits that fossil fuels provide society.
But the platform makes clear the party’s intent to regulate, mandate, and subsidize our way to a clean-energy future. It is heavy on symbolism and short on cost-effective measures to reduce pollution. It reiterates support for the Clean Power Plan and for rejecting the Keystone XL pipeline, though neither policy would make a significant difference in combating climate change, and both set poor policy precedents. The Clean Power Plan sets reduction targets for emissions that contribute to climate change, but those reductions will largely or entirely occur anyway, thanks to prevailing economic forces such as cheap natural gas replacing coal-fired power generation.
The platform offers little more than green industrial policy. It fails even to discuss a market-based approach to mitigating climate change. Instead, its direction is to ram politically preferred technologies onto the electricity grid, disregarding the economic processes that keep the grid reliable and affordable. It also supports extending subsidies that cost taxpayers billions, distort energy markets, and deter innovation.
The platform altogether neglects innovation, the most vital ingredient to worldwide climate progress. Despite a common belief that renewable-energy technologies already are cost-competitive, markets tell us these technologies still have a ways to go. Clean-energy technologies must become broadly competitive before we will see deep emissions cuts in developing countries, where the rubber hits the road on climate change.
Particularly troubling is the draft platform’s addition of a plank focused on investigating those who disagree with the literal party line on climate change. The document couches this as a request that the Justice Department “investigate allegations of corporate fraud on the part of fossil fuel companies accused of misleading shareholders and the public on the scientific reality of climate change.”
Calling out intentional distortions is valid, but legally prosecuting others’ legitimate views is a fear tactic that makes a mockery of the First Amendment. It’s also prone to backfire: it is more likely to trigger discord and retaliatory investigations than to foster the civil discussion America needs. Climate skeptics should be engaged with scientific evidence, not scared into submission.
Conservatives know the government has no business dictating what our energy mix should be or curtailing the rights of those who view things differently. The appropriate role of government is to ensure markets perform well. Competitive energy markets do perform well, but we need to ensure that they account for the societal impacts of pollution. Many conservative economists agree that the best remedy is a revenue-neutral carbon tax.
The time has come for Republicans to step into the climate leadership spotlight. A conservative climate-change platform can simultaneously shrink government, grow the economy, enhance choice, and deliver superior environmental results. Innovation should be the Republican energy mantra.
Such an approach begins with freeing — not restricting — the energy sector. States should follow Texas' lead and embrace competitive electricity markets and discard the choice- and innovation-stifling model of monopoly utility regulation. States and Congress should thoughtfully remove mandates and subsidies for government-preferred resources. Congress should ensure that competitive electricity markets under federal oversight encourage innovation and reward unconventional resources fairly. This will remove regulatory barriers to clean technologies and level the playing field for all technologies.
Putting a price on pollution is central to sensible energy and environmental policy. And it’s an idea that some conservatives, at least, are warming to. As Republicans craft their own platform, they have an opportunity to champion the idea that the market, not the government, should be put to work addressing climate change.
Republicans also should double down on what they do best: promoting economic growth at home and abroad. Preparing for the inevitable effects of climate change is a piece of climate policy that gets grossly overlooked on both sides. Poverty exacerbates the human impacts of climate change, so the wealthier we are, the better we can adapt.
American capitalism is the greatest wealth and innovation engine the world has seen. Conservatives should set their sights on freeing markets and pricing pollution as a way to tackle climate change — and then tell government to get out of the way.
Devin Hartman is electricity policy manager and senior fellow at the R Street Institute.
Consuming butter does not increase the risk of heart disease, a recent study found. Those who believed in the accuracy of U.S. government dietary guidelines — which for decades have demonized saturated fats — were doubtless taken by surprise. But for those of us who follow nutrition and politics, it’s just another government nutritional “gospel” that science has revealed to be misguided.
Yet, government agencies continue to spend millions to nudge consumers into following guidelines that may do little to improve health for most and may even result in harm.
For nearly half a century, the U.S. Department of Agriculture (USDA) and the U.S. Department of Health and Human Services (HHS) have put out dietary guidelines telling Americans to eat less sodium, cholesterol, and saturated fat — i.e., red meat and full-fat dairy, including butter — and to eat more whole grains, fruits, and vegetables, among other directives. These recommendations emanated from hearings held in the mid-to-late 1970s by the Senate Select Committee on Nutrition and Human Needs, despite a “boisterous mob of critics,” including scientists who pleaded with the Committee to wait for more research “before we make announcements to the American public.” Committee Chairman Sen. George McGovern responded that “Senators don’t have the luxury that the research scientist does of waiting until every last shred of evidence is in.”
Since the Committee issued its report in 1977, those patient research scientists have repeatedly called into question or undermined many of the Committee’s original recommendations. Increased dietary salt, for example, appears to lead to hypertension only in a small percentage of the population — and in some people, lowering dietary salt can actually raise blood pressure. Moderate levels of dietary cholesterol no longer seem to be linked to heart disease. And full-fat dairy has been shown to reduce the risk of obesity and diabetes.
To be fair, the Dietary Guidelines Advisory Committee (DGAC), which comprises a handful of experts, diligently evaluates the research on what constitutes a healthy diet every five years, and it sometimes alters recommendations to reflect the changing scientific understanding. For example, the most recent guidelines finally did away with limits on dietary cholesterol and backed away — ever so slightly — from previously stringent sodium recommendations. But such changes are rare and often come long after shifts within the scientific community. The real issue is that government agencies pass judgment on developing science in the first place.
Scientific progress is not achieved via committee — whether Congressional or scientific. Rather, science advances toward an understanding of reality through years — often decades — of research, with scientists fighting for their own hypotheses. They present, defend, test, and modify their ideas over time. Whichever side offers the most compelling argument “wins” by gradually becoming the predominant theory. Soon, other researchers gravitate toward that theory, basing their own research on it.
Congress, of course, is an inherently political entity. And so when it — or any other government-appointed body — privileges one theory over another, it creates bias that trickles down to the research community. The problem is not simply that the government makes decisions on the basis of imperfect information, but that government intervention, itself, can distort the development of research.
For example, the theory that dietary fat plays a large role in cardiovascular disease was controversial in the scientific community, even as the government began relying on it to develop the first federal nutritional guidelines. In fact, a lot of the existing research contradicted it. Nevertheless, the theory flourished. Why? In part, no doubt, because researchers — many of whom rely on government grants — faced risks associated with bucking the new zeitgeist created by the government.
Fortunately, the latest dietary guidelines limit the ultra-low sodium recommendation of 1,500 mg per day to those with hypertension or pre-hypertension. But the committee members still warn that food manufacturers “should reformulate foods to make them lower in overconsumed nutrients,” including salt, to help Americans — who consume an average of 3,400 mg of sodium a day — get to their recommended limit of 2,300 mg a day. Lo and behold, the White House pushed the FDA to create “voluntary” sodium reduction guidelines for food manufacturers — all this despite the tenuous connection between higher sodium and hypertension and a recent study (commissioned by the government) that found no benefit in consuming less than 2,300 mg of sodium a day for most people.
Had the government refrained from issuing these recommendations, experts might have focused, instead, on efforts to encourage increased potassium intake by eating more fruits and vegetables. This approach has been shown to reduce blood pressure effectively while having fewer unintended side effects and possibly conferring additional benefits.
There are ways that federal agencies can promote dietary advice that could benefit most of the population (such as recommendations to eat more fruits and vegetables). But, in general, nutrition is far too complex and personal an issue for a one-size-fits-all, top-down approach. It’s time for the government to relinquish its influence over the scientific and medical communities and let individuals (and their doctors) determine their own optimal diets.
Michelle Minton is the Competitive Enterprise Institute's fellow specializing in consumer policy, covering the FDA, alcohol, food, and gambling.
As Americans hit the road during summer driving season, one thing that’s probably not on their minds is how the maintenance of those roads gets funded.
Over the past century, when consumers stopped to fill up at gas stations, they also filled up state and federal government highway funds through excise taxes, consumption taxes (which are included in the price of goods), and motor-fuel taxes. Now, as technological advancements and changing consumer habits work hand-in-hand to reduce the volume of motor fuel purchased, government infrastructure budgets have become increasingly strained, prompting lawmakers to increase tax rates.
Instead of trying to retain the status quo by increasing taxes on declining motor-fuel sales, now is the perfect time for legislators to experiment with fairer funding ideas — using common-sense, free-market principles as a guide to road-funding success.
Step one — or, rather, step zero — is to make sure road money is actually spent on roads. Currently, 15 percent of all federal gas tax revenue — about 3 cents for every gallon of gas purchased — is diverted away from funding road construction and toward subsidizing passenger trains and other forms of government-provided transportation. That may not sound like much, but it adds up to roughly $5.6 billion in inefficient spending.
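The per-gallon figure checks out with quick arithmetic, assuming the current 18.4-cent federal gasoline excise tax; the dollar total comes from revenue data, not from this calculation:

```python
# Back-of-the-envelope check of the diversion figure cited above.
# Assumes the 18.4-cent-per-gallon federal gasoline excise tax.
FEDERAL_GAS_TAX = 0.184   # dollars per gallon
DIVERTED_SHARE = 0.15     # share of revenue diverted away from roads

diverted_per_gallon = FEDERAL_GAS_TAX * DIVERTED_SHARE
print(f"{diverted_per_gallon * 100:.1f} cents per gallon diverted")
# -> 2.8 cents per gallon, consistent with the "about 3 cents" above
```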
After patching the leaks in the pipeline between the taxes consumers pay and the benefits consumers receive, the next step is to simplify the pipeline itself. Taxes are payments, and the people paying should be the ones using the things that are being paid for. Unfortunately, that’s not the case when it comes to today’s government highway funding laws. Gas taxes are paid by everyone who purchases gasoline — not by everyone who uses the government roads.
One very direct way to uphold this user-benefit principle — a key free-market idea — is to get rid of excise taxes and replace them with a mileage-based user fee (MBUF). The number of miles an individual travels is much more directly connected to the miles of road “consumed.”
With MBUFs, the fee can vary with the congestion rate of particular highways — just as the price of a good in a free market increases as demand spikes — without violating consumers’ privacy.
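As a sketch of how such a congestion-sensitive fee might be computed — with entirely hypothetical rates and multipliers, not any state’s actual schedule:

```python
# Hypothetical mileage-based user fee that scales with congestion,
# mirroring how free-market prices rise when demand spikes.
BASE_RATE = 0.015  # dollars per mile; illustrative, not an actual rate

def mbuf_charge(miles_by_period):
    """Total fee for miles driven in each demand period."""
    multipliers = {"off_peak": 1.0, "shoulder": 1.5, "peak": 2.5}
    return sum(BASE_RATE * multipliers[period] * miles
               for period, miles in miles_by_period.items())

# A month of driving: mostly off-peak, with some rush-hour miles.
fee = mbuf_charge({"off_peak": 600, "shoulder": 150, "peak": 250})
print(f"${fee:.2f}")  # -> $21.75
```

Note that nothing in the charge depends on where the miles were driven — only on how many, and when.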
In Oregon, the state government has been test-driving such a program, alleviating potential privacy concerns by simply keeping track of how much is owed, rather than when or where people drive. Marc Scribner, a research fellow with the Competitive Enterprise Institute, argues that Oregon’s program proves privacy and user-fee funding are not mutually exclusive:
An on-board computer … assigned miles driven to various categories: public roads or private property, in-state or out-of-state roads. That mileage was then tallied and processed by a trusted third party, without ODOT [Oregon Department of Transportation] receiving any location data. Fuel tax rebates based on mileage data were then applied and charges were assessed — again, without the government obtaining individualized location data.
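The flow Scribner describes — categorize miles on the device, tally with a third party, send only totals to the state — can be sketched as follows; the rates and trip records here are hypothetical, not Oregon’s actual figures:

```python
# Minimal sketch of the privacy-preserving aggregation scheme described
# above. All rates and trip data are invented for illustration.
PER_MILE_FEE = 0.015      # hypothetical road-usage charge per mile
GAS_TAX_REBATE = 0.012    # hypothetical per-mile fuel-tax credit

def tally(trips):
    """Third-party processor: reduce per-trip records to category totals.
    Location details never leave the processor; only sums move on."""
    totals = {"public_in_state": 0.0, "public_out_of_state": 0.0,
              "private_property": 0.0}
    for category, miles in trips:
        totals[category] += miles
    return totals

def net_charge(totals):
    """State side: charge only in-state public-road miles, net of the
    fuel tax already paid at the pump, without seeing any locations."""
    return totals["public_in_state"] * (PER_MILE_FEE - GAS_TAX_REBATE)

trips = [("public_in_state", 320.0), ("private_property", 12.0),
         ("public_out_of_state", 80.0), ("public_in_state", 150.0)]
print(f"${net_charge(tally(trips)):.2f}")  # -> $1.41
```

The design choice is the point: the state receives three numbers, not a travel log.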
Oregon provides just one model. The current patchwork of local, state, and federal gas excise taxes is so inefficient and wasteful that almost any alternative funding framework lawmakers rally around would be superior.
The time for experimenting is now. Lawmakers should seize the opportunity by thinking outside the gas-tax box and devising more consumer-friendly and cost-effective ways to fund the government roads on which we drive.
Jesse Hathaway (email@example.com) is a research fellow with The Heartland Institute.
The dog days of summer began with a sobering warning about “cyber-jihadists” in a new analysis from the Institute for Critical Infrastructure Technology. Policymakers should anticipate sophisticated anti-American groups developing world-class hacking capabilities. That is doubtless old news at the Pentagon’s Cyber Command.
Meanwhile, in a parallel universe, energy policymakers are accelerating green initiatives that will make America’s electrical grids more vulnerable to cyber-attacks.
The problem? “Smarter” and “greener” requires that the grid be more fully connected with the Internet. “Smart” grids depend on Internet “smarts.” And solar and wind energy both require Internet-centric mechanisms to meet the challenge of using episodic supplies to fuel society’s always-on power demand.
Thus, policies from California to New York, as well as the EPA’s Clean Power Plan, envision adding millions of Internet-connected devices to electric grids, hospitals, and cities. To a hacker, this is called vastly expanding the attack surface. In that “smarter” future, the cyber-hacking skills bad actors have honed breaking into private and financial data can be turned to breaking into and controlling critical physical infrastructure.
Experts have demonstrated hacks into the entire panoply of devices associated with smart and green power, from smart lights and power meters to the power electronics on solar panels. Cybersecurity has simply not been the priority in green policy domains — even though technical and engineering message boards and publications are filled with examples of cyber-vulnerabilities or weak or non-existent cybersecurity features. With the full flowering of smarter infrastructures, just what are we likely to face?
Imagine it’s a scorching-hot summer day in Los Angeles sometime in the near future, and the power in one wing of a hospital goes down, taking with it the air conditioning and all the critical hospital equipment, from MRIs to life support. The CEO gets a text from her facilities manager a few minutes before another wing in a different, larger hospital in the network goes black, too, as the backup generator fails to start. Then comes an email from the hacker stating that the power at all the hospitals will be shut down within an hour. The ransom is, say, $10 million in Bitcoin.
Now imagine a different scenario: a hot Manhattan evening when several blocks go dark. It’s not a ransom demand this time but a threat: more is coming. The mayor gets an image on his smartphone of the July 25, 1977, cover of Time magazine, with its headline “Night of Terror.” That 1977 New York City blackout lasted 25 hours and involved thousands of ransacked stores, fires, 4,000 arrests, and $300 million in damages. This time, the mayor also worries that the attacker could be coordinating an array of Orlando-type physical assaults to fuel the chaos.
In the first case, the ransom gets paid and power comes back. In the second scenario, no physical attacks happen, but it takes two days and heroic efforts from ConEd’s crews to restore power by reverting to older manual systems that bypass the ‘smart’ stuff. But the terrorists made their point. And in both cases forensic teams from the Department of Homeland Security, the FBI, and DOD’s Cyber Command descend.
They learn that a sophisticated phishing scam planted a computer worm that, combined with malware loaded earlier through a backdoor hack of a power-monitoring device, enabled the remote seizure of local power-network controls. The NSA traces the cyber breadcrumbs to anonymous servers in Georgia (the country, not the state), or Iran, or China, and … a dead end.
Sound far-fetched? Consider where we are today: ransomware attacks are already a scourge. The American Hospital Association reported that several health care companies and hospitals were hit earlier this year with ransomware (most paid). But, so far, hackers can only shut down a target organization’s access to its own computer system or e-commerce Web site. As for the future, consider that for hackers, today’s Internet-connected cars look just like tomorrow’s connected grids. Researchers have hacked the Ford Escape, Toyota Prius, Nissan Leaf, and — to great fanfare — a Jeep Grand Cherokee.
Last year’s “cyber-jacking” of a Jeep took full control from ten miles away by exploiting vulnerabilities in the Internet-connected infotainment system to backdoor into the car’s microcomputers that operate the steering and brakes. In the wake of that stunt, Chrysler recalled over a million cars and corrected those particular vulnerabilities. Earlier this year, the FBI and NHTSA issued a general alert regarding vehicle cyber vulnerabilities. Everyone on both sides knows it’s only the tip of the cyber-berg.
In fact, there have already been cases of grid-like cyber-jacking. In 2008, a Polish teenager hacked a city’s light-rail controls and caused a derailment. In 2010 the world learned of a clandestine hack — ostensibly U.S.-Israeli — that used the Stuxnet computer worm to damage centrifuges at Iran’s nuclear facilities. In 2014, hackers breached the control systems of a German steel mill, causing enormous physical damage. And this past December, hackers blacked out Ukraine’s electric grid.
So far there have been no such hacks on U.S. power grids that we know about. And experts testifying before Congress about the Ukraine event credibly asserted that America’s long-haul grids are better protected — at least for now. But that’s not the issue.
Exposure is a problem not so much with long-haul grids as with local grids in cities and communities, where all the Internet ‘smarts’ are planned. As green connectivity is accelerated onto those grids, the attack surface expands. Today’s grids are, by Silicon Valley standards, dumb — even if deliberately so. But we already know what adding more Internet connectivity enables.
The Department of Homeland Security asserts that America’s manufacturing and energy sectors are the top two targets for attacks on cyber-physical systems. And Cisco reports that 70 percent of utility IT security professionals discovered a breach last year, compared with 55 percent in other industries.
Here’s the rub: green grid advocates are pushing policies that will create more Internet-exposure precisely when bad actors and hostile nation states are rapidly escalating their hacking skills.
Policymakers genuflect to the importance of electric security and reliability. But actions speak louder than words. Over the past eight years, federal and state green and smart tech funding totaled $175 billion — a thousand times more than DOE reports spending on cyber-physical security research.
Does this mean we should avoid bringing Internet-class controls to grids and infrastructures? Hardly. Engineers and entrepreneurs — not bureaucrats — will, ultimately, develop smart and secure systems. But security must be the priority. In every infrastructure throughout our history — from power and water to hospitals, cars and aircraft — policy has, rightly, put safety and security first. With society more dependent on electricity than ever, it’s no time to reverse priorities.
The cyber-jihad report concludes: “Thankfully, even successful [cyber] attacks on the United States Energy sector would not have the same impact as those against Ukraine in 2015, because the grid is much larger and minutely segmented.” That’s true — for now. But in a world where terrorist attacks are all too common, prematurely pushing “green” or “smart” tech onto the grid — leaving cybersecurity on the back burner — will set the conditions for a perfect cyber-storm.
Mark P. Mills is Senior Fellow at the Manhattan Institute and author of Exposed: How America’s Electric Grids Are Becoming Greener, Smarter—and More Vulnerable.
Since its enactment in 1935, Social Security has become one of the most popular and effective federal programs. At the end of 2015, according to the recently published trustees’ report, 60 million Americans received retirement, disability, or survivors’ benefits from the system into which 169 million paid payroll taxes. Social Security provides the majority of cash income for 65 percent of elderly beneficiaries, makes up 90 percent or more of incomes for 36 percent of them, and offers the sole source of retirement income for 24 percent. The poverty rate among senior citizens is less than the poverty rate among working adults.
Yet, even as Social Security has become an indispensable source of financial wellbeing and security for the retired population, it has also become financially unsustainable in its current form.
Due to increasing lifespans and declining birth rates, there has been an apparently permanent shift in the ratio of working individuals to retiring baby boomers. Further, the program earns limited interest on its investment holdings. The Social Security trust funds have been running deficits since 2010 and are predicted to run out of cash reserves, which stood at $2.8 trillion at the end of April 2016, by 2034. Thereafter, only three-quarters of scheduled benefits can be paid for by the expected tax income.
Restoring solvency to the program will require either higher tax payments from current and future workers, or lower benefit payments — or, more likely, both. However it’s done, those receiving benefits in the future will pay more and get less compared to the beneficiaries now or in previous years.
The unfairness for future generations can be seen in a simple, stylized example of the average wage earner. We calculated the net real rate of return (NRR) on lifetime payroll tax contributions for an average working male under the Old Age, Survivors and Disability Insurance (OASDI) program. The NRR is the real, average annual rate of return that tax contributions would have to earn to grow to a level sufficient at retirement to finance Social Security benefits for that worker until death.
In our calculations, we use the demographic and economic assumptions from the historical and long-term projections of the 2016 Social Security Trustees Report and supplementary data published by the Social Security Administration. Lifetime contributions and benefits for the retired, working, and future generations are computed using actual statutory tax rates and benefit formulas. The 2016 report shows an actuarial deficit of 2.66 percent of taxable payroll. For simplicity, we assume that solvency is restored to the program by raising the payroll tax rate by that amount beginning in 2018. All amounts are expressed in constant 2016 dollars; future amounts are discounted at realized or expected real interest rates.
We calculated the NRR for a typical working male in five different generations, with birth years of 1925, 1950, 1975, 2000, and 2025. Workers are assumed to work every year starting at age 23 until the normal retirement age, to be employed and earn the average wage in the economy each year, and to collect full scheduled benefits for the number of years equal to their period life expectancy at the age they retire. (The same methods and assumptions can be extended to study the effects of Social Security policy on people with various demographic and socioeconomic characteristics.)
The following figure shows the expected present value (at the normal retirement age) of payroll taxes paid over employed years and benefits received in retirement by average hypothetical workers.
Figure 1: Net Real Returns to Lifetime Contributions to Social Security (in 2016 dollars)
The earliest generations of Social Security beneficiaries enjoyed the highest rate of return. A worker born in 1925 who earned the average wage would have gotten a 4.8 percent return on their payroll taxes. The rate of return dropped to 2.7 percent for those born in 1950, and to 1.7 percent for those born in 1975. For those born in 2000 or 2025, the program is expected to provide a very low rate of return — less than 0.25 percent in both cases.
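The NRR calculation described above amounts to a root-finding problem: find the single constant real rate at which lifetime payroll taxes, compounded to the end of life, exactly fund the benefit stream. A minimal sketch, using made-up round-number cash flows rather than the actual statutory tax and benefit schedules:

```python
from typing import List

def net_real_return(contributions: List[float], benefits: List[float]) -> float:
    """Solve for the constant real annual rate r at which compounded
    lifetime contributions exactly cover the retirement benefit stream.
    All cash flows are in constant dollars; figures are illustrative."""
    def balance_at_death(r: float) -> float:
        bal = 0.0
        for c in contributions:   # accumulate payroll taxes with interest
            bal = bal * (1 + r) + c
        for b in benefits:        # draw down benefits with interest
            bal = bal * (1 + r) - b
        return bal

    lo, hi = -0.5, 0.5            # bracket the break-even rate, then bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if balance_at_death(mid) > 0:
            hi = mid              # surplus left at death: rate is too high
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical worker: 44 years paying $5,000, then 18 years drawing $20,000.
rate = net_real_return([5000.0] * 44, [20000.0] * 18)
print(f"{rate:.2%}")              # a low single-digit real return
```

Raising the assumed benefits (or shortening the contribution period) raises the implied return, which is exactly the generational effect the figure illustrates.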
The trend is clear: over several decades, earlier cohorts of beneficiaries received benefits with an implied rate of return that would be viewed as sufficient for most retirement investments. But as changes have been made to the program, and the ratio of workers to retirees has declined, the trend has been toward higher taxes and benefits but a lower implied rate of return on lifetime payroll contributions.
The falling net rate of return for those entering the program in this century is a function of Social Security’s pay-as-you-go structure. Current workers pay the benefits for current retirees. That works well as long as the ratio of workers to retirees remains constant or rises. But when it falls — with declining birth rates and longer lives for retirees — it’s not politically feasible to take back benefits from those already in retirement. The only solution is to impose higher taxes and lower benefits on current and future workers.
This is Social Security’s intergenerational conundrum: the program is unsustainable in its current form, but it is too late to implement changes that will be fair across generations. The only way to restore the program to solvency is to impose changes that will lower the program’s returns to current and future workers who already get less relative to what they pay in compared to the program’s early entrants.
Of course, the rate of return a worker gets from Social Security can vary substantially within the same age cohort, due to various factors. The program also provides substantial redistribution from high to lower earners through its progressive benefit formula and important protections, e.g., for disability, which could not be replaced easily by private savings and insurance. Still, our simulations of the rate of return earned by the average worker are indicative of the overall trend across generations.
While insolvency is not imminent, we must act sooner rather than later to give these workers time to make the necessary adjustments in their retirement plans.
The goal of Social Security reform should be to ensure that the program can be sustained financially without undermining — and perhaps even improving — the program’s role as a floor of protection for older Americans. That can be done by reducing future benefits most for those with the highest wages; these workers can save more for their own retirement. Social Security’s limited resources could then be focused more on improving the standard of living for older Americans with limited lifetime earnings and thus also limited private savings.
The United States decided many years ago to run Social Security on a pay-as-you-go basis. For most of the program’s history, that decision posed no problems and, in fact, allowed for numerous benefit expansions as the workforce grew more rapidly than the retired population. But demographic shifts over the past half century have opened up a large projected funding gap that cannot be ignored. The gap is sizable, though it can be closed with reasonable program adjustments. The sooner Congress gets serious about taking the necessary steps to solve the problem, the better.
Tejesh Pradhan is a Ph.D. candidate in economics at American University and a Peterson Fiscal Intern at the Ethics and Public Policy Center. James C. Capretta is a resident fellow at the American Enterprise Institute.
Home ownership has long been considered a key metric for economic well-being in the United States. Thus many are dismayed by the fact that at 63.5 percent, the 2015 overall home-ownership rate appears to be lower than the 64.3 percent of 1985, a generation ago. But viewed in another, arguably more relevant way, the underlying trend shows that the home-ownership rate is, in fact, increasing, not decreasing.
How so? Key to the trend is the extremely strong relationship between marriage and home ownership — a relationship seldom, if ever, addressed in housing finance discussions. But if you think about it, it’s obvious that home ownership should be higher among married couples than among other households; in fact, it’s remarkably higher.
This relationship holds across all demographic groups. Importantly, it means that changes in the proportion of married vs. not-married households are a major driver of changes in the overall home-ownership rate over time. Home ownership comparisons among demographic groups are similarly influenced by differences in their respective proportions of married vs. not-married households.
Policy discussions over falling home-ownership rates frequently ignore some critical underlying demographic facts.
The current 63.5 percent American home-ownership rate combines two very different components: married households with about 78 percent home ownership, and not-married households with only 43 percent home ownership. Married households have a home-ownership rate 1.8 times higher — obviously a big difference. (As we have organized the data, these two categories comprise all households: “married” means married with spouses present or widowed; “not-married” means never married, divorced, separated, or spouse absent.)
Table 1 contrasts home ownership by married vs. not-married households, showing how these home-ownership rates have changed since 1985.
One is immediately struck by a seeming paradox:
- The home-ownership rate for married households has gone up by 2.3 percentage points.
- The home-ownership rate for not-married households has gone up even more, by 7.4 percentage points.
- But the overall home-ownership rate has gone down.
How is this possible? The answer is that the overall home-ownership rate has fallen because the percentage of not-married households has dramatically increased over these three decades. Correspondingly, married households (which have a higher home-ownership rate) are now a smaller proportion of the total. Still, home ownership rose for both component parts. So the analysis of the two parts gives a truer picture of the underlying rising trend.
The dramatic shift in household mix is shown in Table 2.
Table 3 shows that the strong contrast between married and not-married home-ownership rates and related changes from 1985-2015 hold for each demographic group we examined.
That is, home ownership for both married and not-married households went up significantly for all four demographic groups from 1985 to 2015.
Moreover, overall home ownership also increased for three of these four groups. Home ownership for black households, meanwhile, fell by 1.5 percentage points, though home ownership for both married and not-married components rose for this demographic as well. (This is consistent with that group’s showing the biggest shift from married to not-married households.)
In another seeming paradox, Hispanic home-ownership rates rose, while still contributing to a reduction in the overall U.S. rate. The reason for this is that their share of the population more than doubled, increasing the weight of their relatively high share of not-married households.
The trends by group in the mix of married vs. not-married households are shown in Table 4.
What would the U.S. home-ownership rate be if the proportions of married and not-married households were the same as in 1985? Applying the 2015 home-ownership rates for married and not-married households to the mix that existed in 1985 results in a pro forma U.S. home-ownership rate of 68.1 percent. This would be significantly greater than both the 1985 level of 64.3 percent and the 2015 measured level of 63.5 percent.
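The pro forma figure is just a mix-weighted average of the two group rates. In the sketch below, the married-household shares are rough values back-solved from the rates quoted in the text, not numbers taken from the article’s tables:

```python
def overall_rate(married_share: float, married_rate: float,
                 not_married_rate: float) -> float:
    """Overall home-ownership rate as a weighted average of the two groups."""
    return married_share * married_rate + (1 - married_share) * not_married_rate

# 2015 group rates from the text; the household-mix shares are approximations.
MARRIED_RATE, NOT_MARRIED_RATE = 0.78, 0.43
MIX_2015, MIX_1985 = 0.59, 0.72   # assumed married shares of all households

actual = overall_rate(MIX_2015, MARRIED_RATE, NOT_MARRIED_RATE)
pro_forma = overall_rate(MIX_1985, MARRIED_RATE, NOT_MARRIED_RATE)
print(f"measured ≈ {actual:.1%}, pro forma ≈ {pro_forma:.1%}")
```

With these shares the measured rate comes out near 63.5 percent and the 1985-mix rate near 68 percent, reproducing a gap driven purely by the change in household mix while both group rates stay fixed.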
Adjusting for the changing mix of married vs. not-married households gives policymakers a better understanding of the underlying trends. This improved understanding is particularly important when weaker credit standards are being proposed as a government solution to the lower overall home-ownership rate.
To make sense of home-ownership rates, we have to consider changes in the mix of married vs. not-married households. And these changes have been dramatic over the last 30 years.
Alex J. Pollock is a distinguished senior fellow at the R Street Institute in Washington, DC. Jay Brinkmann is the retired Chief Economist of the Mortgage Bankers Association.
If higher taxes were the cure for homelessness, California — and San Francisco, in particular — would have solved its homeless problem years ago.
In reality, San Francisco — one of the highest tax jurisdictions in the country and a poster child for progressive governance — is overrun with homelessness.
It’s not a new problem; anyone who has been to San Francisco in the last couple of decades has seen it. And while the city has taken steps to move the homeless into alternate facilities, it appears that more continually arrive to take their place, keeping the homeless population unacceptably high.
What can be done? Well, a couple of members of San Francisco’s Board of Supervisors have decided that higher taxes are the solution to the problem of homelessness.
The “tech tax,” as it’s being called, would be an additional 1.5 percent payroll tax assessed against tech companies. Because economists generally agree that payroll taxes come out of compensation, this is really a tax on tech workers, not tech companies.
That’s right. Those who govern the City of High Taxes by the Bay have decided that even higher taxes — targeted at their most productive residents, the tech community — will solve the problem of homelessness.
This proposal tells us several things.
First, California is still in denial on taxes. Businesses and residents are fleeing the state’s high-tax, high-regulation climate, and yet those who govern continue to pile them on.
Second, San Francisco continues to misdiagnose the causes of its homeless problem. Homelessness is not caused by other people’s wealth; it’s not caused by low taxes. All major cities — in a variety of climates and economies — have to manage homeless populations; they all seem to do a better job than San Francisco.
Third, if the tax is enacted, the revenue will likely end up paying for city employee pensions and other general obligations rather than specific programs to reduce homelessness.
The tech tax is a revenue grab against an easy target — not a real solution.
And it won’t work. Here’s a prediction: absent more substantive policy changes, the homeless problem in San Francisco will be roughly the same five years from now, with or without a tech tax.
Businesses generally make rational decisions, based on empirical data, about where to invest and where to locate their enterprises. Governments generally make irrational decisions, based on wrong diagnoses and false assumptions. San Francisco’s tech tax is just the latest example.
Tom Giovanetti is president of the Institute for Policy Innovation (IPI) in Dallas.
The Affordable Care Act (ACA) enrollee counts have plateaued at about 11 million. While the Congressional Budget Office (CBO) foresaw this, non-payment swings are proving more severe than expected. The result? The ACA is unable to deliver on its promise of full coverage at affordable prices.
ACA enrollment fluctuates by almost 20 percent during the year, and every year the process is the same. Early in the year, a lot of people sign up (12.7 million in 2016), but many of them never pay the first premium (1.6 million failed to pay in 2016) and are removed from enrollment. Later in the year, enrollees gradually drop out (another 1.1 million are expected to do so in 2016).
In 2014, the CBO estimated that subsidized enrollment would plateau at 19 million persons in 2016 and unsubsidized enrollment at 6 million in 2017. As for this year’s enrollment: “Only about 40 percent of the eligible have so far signed up and the take-up rate is far worse the higher the incomes are.”
Enrollment fell short of expectations for several reasons, chief of which is cost. The young and healthy often judge the cost to be higher than the benefit. That leaves the risk pool sicker and older than insurers need it to be in order to lower premiums. For everyone else, facing the premium and out-of-pocket costs without a subsidy deters enrollment, especially for those whose income exceeds four times the poverty level (i.e., those who make $80,000 a year or more). For people with lower incomes, subsidies make health coverage somewhat more affordable.
The subsidies are intended to leave families paying no more than a “cap” percentage of income for premium and out-of-pocket costs. For the lowest income tier, the cap is 2 percent of income; for the highest subsidy tier, the cap is 9.5 percent. As a result, the subsidies can be large: “CBO and JCT [Joint Committee on Taxation] project that the average subsidy will be $4,410 in 2014, that it will decline to $4,250 in 2015, and that it will then rise each year to reach $7,170 in 2024.” Assuming linear growth, that means the average subsidy will be $4,900 per enrollee in 2017.
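The $4,900 figure for 2017 follows from straight-line interpolation between CBO’s 2015 and 2024 projections; a quick check of the arithmetic:

```python
def projected_subsidy(year: int) -> float:
    """Average per-enrollee subsidy, linearly interpolated between CBO's
    $4,250 (2015) and $7,170 (2024) projections."""
    y0, s0 = 2015, 4250.0
    y1, s1 = 2024, 7170.0
    return s0 + (s1 - s0) * (year - y0) / (y1 - y0)

print(round(projected_subsidy(2017)))   # close to the ~$4,900 cited above
```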
Many insurers have been leaving the ACA marketplace due to intolerable financial performance. In 2015, many insurers lost as much as 11 percent of revenues. UnitedHealth, Premera, Aetna, Humana, and 13 co-ops have all withdrawn from at least some of the state markets.
Other insurers are suing the federal government to make good on its promise of so-called risk corridor payments, which were intended to offset higher-than-anticipated losses. The federal government refuses to make the payments because it has not collected a matching amount of excess profit — the only source for risk corridor payments. As that stalemate persists, more insurers will drop out of marketplaces, leaving fewer choices and higher prices for consumers.
Next year’s anticipated premium increases will be as much as 26 percent, bearing heavily on all people, but especially those who don’t qualify for subsidies. And, as it stands, 32 million people still lack health insurance.
The situation is clear: ACA is stuck on a plateau of enrollment due to cost. What’s not clear is what can be done about it. The cost of health care is a critical problem that, ironically enough, the Affordable Care Act is ill-equipped to address. As a result, the impetus behind the law — full health-care coverage — is not something that it can achieve any time soon.
Alan Daley writes for The American Consumer Institute, a nonprofit educational and research organization. For more information about the Institute, visit www.theamericanconsumer.org.
Our national debt is at post-war record-high levels and projected to grow unsustainably. And neither former Secretary of State Hillary Clinton nor businessman Donald Trump would reverse course — Trump, in fact, would make our debt dramatically worse. That’s what we at the nonpartisan Committee for a Responsible Federal Budget found after a comprehensive fiscal analysis of the presidential campaign platforms of both candidates.
It’s encouraging that both Clinton and Trump have acknowledged the seriousness of our national debt. Trump has spoken repeatedly about the dangers of high debt and has called for a balanced budget. Clinton has emphasized the importance of paying for policy initiatives so as not to further increase the debt. But our new report, “Promises and Price Tags: A Fiscal Guide to the 2016 Election,” shows that both candidates would, in fact, add to the debt beyond what is currently projected under current law — Donald Trump, significantly so.
Specifically, we estimate that Clinton’s policies would increase the debt by $250 billion by 2026, while Trump would increase it by $11.5 trillion.
Clearly, Donald Trump’s policies would do far more damage. But it’s important to remember that debt is already scheduled to grow from today’s post-war record-high levels and neither candidate would stop it.
The result is that under Clinton’s plans, debt would grow from 75 percent of GDP today — it was half that before the Great Recession! — to 87 percent of GDP by 2026. Under Trump’s plans, debt would grow to a whopping 127 percent of GDP by 2026.
The reason that Clinton’s increase to the debt is relatively small is that her $1.45 trillion in new spending (on infrastructure, paid leave, and education) is mostly offset by $1.2 trillion in new revenue. Trump’s massive increase, on the other hand, is the net result of $10.5 trillion less in taxes, $650 billion less in primary spending, and $1.7 trillion in higher interest costs.
Interest on the debt is already the fastest-growing part of the budget, and neither candidate would slow it down. Under Clinton’s policies, interest costs would rise 280 percent between 2015 and 2026; under Trump’s, interest would grow nearly 450 percent.
With rising payments on interest costs, an increasing share of our tax dollars will go toward financing the tax cuts and spending programs of the past, rather than the investments necessary to secure our future. This is true under current law, but it is especially true under the candidates’ plans.
Our estimates are, of course, subject to uncertainty. But there does not seem to be a plausible path for either candidate to put the debt on a sustainable path without modifying or adding to their plans.
With Clinton’s plans enacted, it would take $3.2 trillion of savings over 10 years to stabilize the debt-to-GDP ratio at today’s record-high levels. And more than $8 trillion would be needed to balance the budget. Trump, meanwhile, would need $14.4 trillion of savings to stabilize the debt and more than $19.3 trillion to balance the budget.
Neither candidate can reach those targets simply by growing the economy. For Clinton, 3 to 5 percent annual growth would be needed to put debt on a sustainable path. For Trump, 5 to 10 percent growth would be necessary. For reference, most forecasters project closer to 2 percent growth — and the last time the country had a decade of 4 percent growth was almost half a century ago, when we benefited from much more favorable demographics.
Rather than hoping for magic levels of growth, the candidates — and especially Donald Trump — will need to propose a combination of spending cuts and new revenue to put the debt on a sustainable path. And with debt levels so high, they can’t afford to take options off the table.
Of course, no one wants to see one’s own taxes rise, benefits fall, or government services reduced. But no one wants to be drowning in debt either. It’s time the presidential candidates leveled with the American people about what it will take to keep us above water.
Marc Goldwein is the senior vice president of the Committee for a Responsible Federal Budget, a nonpartisan organization committed to educating the public about issues that have significant fiscal-policy impact. Click here to read CRFB’s “Promises and Price tags” paper.
The Federal Aviation Administration (FAA) just released its final administrative rules on “routine” commercial use of small unmanned aircraft systems (UAS). Effective August 29, 2016, this federal regulatory edict opens the door to the process of integrating UAS — or “drones” — into the nation’s commercial airspace. Aviation industry sources tout the move as capable of generating over $82 billion and creating more than 100,000 new jobs for the U.S. economy over the next decade.
Though a step in the right direction, this regulatory change falls somewhat short.
The new regulation eliminates many costly requirements currently imposed on commercial drone operators, such as the need to notify aviation operators before each flight and the need to acquire a manned aircraft pilot’s license and certification, among others. Important regulatory safety requirements include a maximum weight of 55 pounds for a drone; a minimum age of 16 to qualify for a remote pilot certificate; and flight restrictions limiting drones to a maximum altitude of 400 feet (higher if the drone remains within 400 feet of a structure), with speed not to exceed 100 miles per hour. Under the new rules, the Transportation Security Administration (TSA) will conduct a security background check of all remote pilot applicants before the FAA issues a certificate of authorization allowing for the piloting of a drone.
The new limitations on usage are designed to minimize risks to other aircraft, as well as to people and property at ground level. For example, the pilot must keep the unmanned aircraft within his or her visual line of sight, and drone operation is limited to daylight hours unless the drone is equipped with anti-collision lights. There are also restrictions on where drones can fly and the type of external loads they can carry.
Yet, while this path-breaking administrative rule establishes a basic regulatory foundation for commercial regulation of drone aircraft, it has surprisingly limited commercial application.
Hollywood film companies will be pleased with the new rule. But the requirement that an operator must keep his or her drone within unaided sight, for example, effectively precludes major retailers from utilizing drones for air delivery service of their products.
The FAA rule does provide an option for a waiver of most operational restrictions if a UAS commercial entity can prove its proposed operation will be conducted safely. But this would be a costly process, as specific requests would be evaluated individually by the FAA. And the line-of-sight restriction is unlikely to be waived under any circumstances, especially not if commercial drones would be landing in populated areas.
There is some cause for optimism, though. The waiver option on operational safety is regarded by industry insiders as unprecedented since it does not apply the traditional cost-benefit analysis criteria currently used in regulations of commercial airlines. Instead, the revised safety criteria consider “social benefits” resulting from anticipated growth in the infant unmanned aircraft industry, providing a more flexible regulatory approach. Agency officials must have concluded — rightly — that the fledgling commercial-drone industry recognizes market opportunities that will incentivize future safety-based industry investments.
A major area not directly addressed by the new rule is privacy. The FAA strongly encourages UAS pilots to review local and state laws before actively engaging in information gathering activities that employ sensing technology or airborne photography. But in lieu of administrative rules about privacy, the FAA is providing all drone users (commercial and non-commercial) with recommended privacy guidelines, developed in consultation with privacy advocates and industry representatives. These voluntary guidelines — part of the FAA’s privacy “education campaign” — can, in some cases, exceed existing legal requirements. But they’re specifically designed not to create legally enforceable standards or serve as a template for future regulations.
It remains to be seen whether TSA security screenings and voluntary privacy guidelines are sufficient to address public safety and privacy concerns in the circumscribed area of UAS operations allowed by the FAA’s new rule.
Hopefully, a regulatory “learning curve” will emerge, providing valuable operational insights and innovations transferable to a future FAA regime better tailored to the commercial operating environment. Retail businesses such as Amazon, Wal-Mart, and eBay will be waiting.
Thomas A. Hemphill is a professor of strategy, innovation and public policy at the University of Michigan-Flint and senior fellow at the National Center for Policy Analysis.
Source: Bureau of Justice Statistics, Government Accountability Office, ProCon.org
OpenTheBooks.com estimates that the number of border patrol, customs, and immigration officers with arrest and firearm power has tripled from 20,000 in 1993 to 60,000 in 2014. During roughly the same period, the number of undocumented entrants to the United States also tripled. So is border security a question of resources — or political will?
If you thought self-driving vehicles were still on the distant horizon, think again.
Last week, Local Motors, in partnership with IBM, announced Olli, a self-driving transportation system that will soon be operational in Washington, D.C. The announcement comes on the heels of a similar initiative from Lyft and General Motors, which recently declared a joint venture to bring new autonomous taxi services to an as-yet-unnamed city sometime in 2017. A mere three years ago, such technology was thought to be decades away. Some even predicted that we wouldn’t see autonomous vehicles in our lifetime.
The pessimists got it wrong.
Seemingly every week, attention-grabbing headlines announce new developments in vehicle automation. From Tesla and Volkswagen to Google and Uber, companies in Silicon Valley and Detroit are investing more heavily in this technology. Everyone wants to be the first to market with their vision of what the autonomous future will look like. And more and more, it’s looking like we’ll be living in an Uber-style, shared autonomous transportation system.
Undoubtedly, shared autonomous vehicles (SAVs) like Olli will be the initial entrants in the transportation market. That should come as no surprise. Many of the near-term benefits will initially accrue to urban residents reliant on taxi services. Whereas human-operated taxi rides in New York City average about $7.80 per trip, an SAV-style system — think Uber but without the driver — could reduce the total per-trip cost to around $1. Those are massive savings that could have a profound impact on how we think about and use urban transportation.
First, however, regulators need to get the rules of the road right.
There’s hope that the balkanized regulatory regime surrounding driverless cars could start adapting as early as this summer. Last week, the National Highway Traffic Safety Administration (NHTSA) announced that it would be releasing “deployment guidance and state model policy on autonomous driving technology” this July. In an all-too-rare nod to intellectual humility, NHTSA has recognized that “its existing legal authority is likely insufficient to support mass deployment of autonomous vehicles.” Initiatives like these will be increasingly important if the United States is to catch up to countries like New Zealand and the United Kingdom, where regulatory impediments to testing autonomous vehicles are minimal.
Over the past two years, there’s been significant progress in addressing many of the concerns related to autonomous vehicles. But barriers still remain. Cybersecurity, privacy, and liability are leading concerns among regulators, to say nothing of the public’s persistent worries about the safety of these vehicles. Surmounting such obstacles will be no small task, but, eventually, the value of driverless vehicles will become evident to all.
As Mercatus Senior Fellow Adam Thierer and I noted in a 2015 article, many of the potential problems with this technology will be settled in due course. Although NHTSA and other government agencies have roles to play, much of the onus remains on private actors. Resolving these issues lies, ultimately, with the people “developing and testing the operational systems,” rather than “in endless bureaucratic proceedings and labyrinthine layers of regulatory red tape,” we wrote. “The tort system will simultaneously evolve to help remedy harms that develop. Lawmakers should not interfere with that evolutionary process.”
Nor should lawmakers interfere with data collection practices — such as those currently being conducted by Tesla — that allow auto manufacturers and software developers to gain greater insight into how best to optimize self-driving algorithms. Government can be a valuable partner in promoting best practices and standardizing regulations, but it can also be a roadblock to this all-too-important technological development. The costs associated with self-driving cars will continue to diminish as the technology matures, customers become more intimately acquainted with these vehicles, and big data helps better inform operational roll-out.
Just as both pecuniary and social costs will dwindle, the benefits of adoption will continue to swell. Annually, tens of thousands of lives will be saved, and injuries associated with automobile accidents will plummet. We will see hundreds of billions of dollars in fuel savings, billions of travel time hours reoriented to non-motor vehicle operating tasks, and massive reductions in the total number of vehicles on the roadways. The total comprehensive savings in congestion, fuel, and health care expenditure could easily amount to over half a trillion dollars annually, possibly more.
Driverless cars hold the potential to massively disrupt our current way of life — and for the better. The question is not whether we’ll see these changes in our lifetime but when. I’d bet on sooner, not later.
Ryan Hagemann is the technology and civil liberties policy analyst at the Niskanen Center.