
After Baton Rouge: Recovery, Relief, Return

Danielle Baussan- August 27, 2016

Some are already calling it the “forgotten flood.” Baton Rouge’s unprecedented flood damage, which began in earnest on August 12, has been referred to as a once-in-a-thousand-year rain. Yet it failed to capture public attention for days, nearly costing federal and private aid to the 60,000 homes damaged in disaster-declared areas spanning almost one third of Louisiana’s parishes.

Fortunately, President Obama issued a major disaster declaration on August 14 and broadened the areas eligible for federal recovery assistance two days later. On August 24, Louisiana Governor John Bel Edwards announced a state housing plan that would address temporary housing needs as well as long- and short-term repairs. But as federal and public attention increases, one factor must not be overlooked: pathways for displaced residents — especially low-income residents — to return home.

Climate displacement, in which people are displaced by climate change or natural disasters, is a terribly familiar subject to Louisianans. After Hurricane Katrina, an estimated 1.5 million people — sometimes referred to as “climate refugees” — were displaced, traveling as far as Washington State to find refuge from their underwater homes. Approximately 53 percent of those who left New Orleans returned to the area one year after the hurricane. And while some chose to stay in their new communities, many — particularly the economically disadvantaged — could not afford to return. 

The burden of Hurricane Katrina was disproportionately skewed against low-income New Orleans-area residents. As one study noted, “Those with the strongest desire to return [to New Orleans after Katrina] may be the least able to do so because they lack housing, family, and perhaps other necessary resources.”

What’s more, this inability to return had long-term physical and economic repercussions: Those who did not return to the region were two times more likely to be underemployed or unemployed and were more likely to lack higher education than those who returned. The inability to return home was also linked to decreased wage earnings, higher depression rates, and an increased need for health and other social services.

Low-income, climate-displaced households can become overlooked for multiple reasons. The data on where the climate displaced go are often scarce, and recovery policies in the past have focused on homeowners, salary-based earners, and residents who can be traced through federal documentation, such as tax returns or change-of-address notifications. But it is the harder-to-trace households — such as the elderly in multigenerational owned housing, the undocumented, unemployed, homeless, or wage-based earners in transitional or rental housing — who suffer the worst impacts of climate displacement, including lack of affordable housing, a dearth of social services, and recovery funding policies that tend to favor rebuilding damaged structures above rehoming climate refugees.

For these reasons, federal and state policies must focus on returning people from all socio-economic demographics, so recovery efforts can assist everyone and restore communities as quickly as possible. For the 20 percent of Baton Rouge residents living in poverty before the flood — a community with a greater percentage of poor residents than the national average — the ability to return home could provide an economic lifeline.

But increasing attention to the return of low-income community members after an extreme weather event offers several other benefits as well. These include: reducing federal funding on temporary housing; reducing unexpected fiscal, housing, and social services strains on host communities; and increasing resilience for future climate events by maintaining community ties and social cohesion.

Past extreme weather disasters have led to promising policies for mitigating climate displacement. Following Hurricane Katrina, for instance, Louisiana’s “Road Home” program provided a path, albeit long and bumpy, for 130,000 homeowners to rebuild as long as they reoccupied their homes within three years. Similarly, after Hurricane Sandy, New York and New Jersey developed several programs aimed at returning whole communities, including low-income residents. In New York City, the Rapid Repairs Program helped approximately 20,000 households avoid long-term displacement by restoring basic home services through city-sponsored contractors, while the “Build it Back” program prioritized rebuilding aid to low-income households. Furthermore, New York City and New Jersey both offered programs for emergency rental assistance.

While these programs have been criticized for delays in funding and availability, each iteration of post-disaster recovery funding has offered new insights into how states and localities can retain their populations, reduce federal spending on temporary housing, and strengthen the social and economic fabric of communities. These lessons are ones that should not be forgotten as Baton Rouge communities and state and federal policymakers determine how to develop and fund programs to recover, rebuild, and return residents. Given the changing climate and associated frequency of extreme weather events, this will not be the last time Louisiana — or any other state — will have to fight climate displacement.

Danielle Baussan is the Managing Director of Energy Policy at the Center for American Progress.

The Wrong Way to Stop Corporate Inversions

Eric Peterson- August 25, 2016

The federal government has a bad habit of conjuring new regulatory powers seemingly out of nothing. Especially troubling is a new regulation proposed by the Treasury Department that gives the IRS the power to determine unilaterally whether parts of a company’s financial structure are considered equity or debt. The prospective regulation, known as the Section 385 debt/equity rules, could be finalized as early as Labor Day.

The regulation is meant to stop the practice of “corporate inversions,” whereby an American company merges with a foreign company and moves its headquarters overseas in order to escape American tax burdens. In fact, however, the regulation dramatically expands IRS control over businesses — even those that have no intention of engaging in inversions.

Inversions have become commonplace in recent years for a simple reason: While the rest of the world has dramatically cut corporate tax rates — currently averaging 25 percent in developed countries — the average American business pays 39.1 percent. Additionally, the U.S. uses a “worldwide” system of corporate taxation, meaning profits are taxed both in the country of origin and again when they are brought back to America.

It’s no wonder businesses are looking for the optimum corporate structure to reduce their tax bills. There has been an uptick in inversions, with 20 taking place since 2012 alone. And some $2.1 trillion is now kept overseas to avoid being sliced into ribbons by the American tax code.

But rather than reforming high corporate tax rates and encouraging businesses to return to the U.S., the Obama administration is proposing yet another complicated and convoluted rule. For instance, under the new Section 385 rules, the IRS could declare that a business’s debt — which usually carries tax-deductible interest payments — is equity. The result? What was once tax-deductible interest becomes taxable dividends, increasing the company’s overall tax liability.
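To see the mechanics, here is a purely illustrative sketch of that reclassification effect. The tax rate, income, and payment figures are hypothetical placeholders, not figures from the rule itself; the point is simply that the same payment to investors produces a larger tax bill once it can no longer be deducted.

```python
# Illustrative (hypothetical) numbers: how recasting deductible interest
# as a non-deductible dividend raises a company's tax bill.
CORPORATE_RATE = 0.35  # assumed statutory rate, for illustration only

def tax_owed(operating_income: float, payment_to_investors: float,
             treated_as_deductible_interest: bool) -> float:
    """Tax due when a payment to investors is treated as interest vs. a dividend."""
    taxable = operating_income - (payment_to_investors if treated_as_deductible_interest else 0.0)
    return taxable * CORPORATE_RATE

income, payment = 10_000_000, 2_000_000
as_debt = tax_owed(income, payment, treated_as_deductible_interest=True)    # payment is interest
as_equity = tax_owed(income, payment, treated_as_deductible_interest=False) # payment recast as dividend

print(f"Tax if treated as debt:   ${as_debt:,.0f}")              # $2,800,000
print(f"Tax if treated as equity: ${as_equity:,.0f}")            # $3,500,000
print(f"Added liability:          ${as_equity - as_debt:,.0f}")  # $700,000
```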

Moreover, the proposed rule is overly broad and will surely have unintended consequences for businesses. In the first study of these potential regulations, the accounting firm PricewaterhouseCoopers found that the 385 rules could reduce the amount of investment foreign businesses make in the U.S., while simultaneously making it harder for U.S. businesses to invest abroad.

Multinational long-term cash flow management is already complex. Businesses shift money abroad in order to fund capital expenditures, invest in profitable foreign markets, or just to cover day-to-day operational expenses. The IRS’s new authority threatens to add a new level of complexity to all these processes, thanks to an added risk of increased taxes.

Businesses without overseas investments would be impacted as well. The IRS’s new authority would extend to all domestic businesses, allowing the agency to enlarge tax burdens at will across the nation. It would also impose costly and time-consuming new compliance requirements, further adding to the current burden of corporate tax compliance.

In its war against inversions, the Obama administration has chosen the least effective method of rooting them out. Discouraging tax avoidance by raising compliance costs and cutting into the profitability of businesses is like trying to cure a headache with a hammer to the head.

The best solution for the Obama administration — or the next administration — is not to make the tax code more arbitrary and punitive, but, rather, to simplify and reduce it. Our corporate taxes need to become more competitive with those in the rest of the developed world. Lower rates and a territorial system in which businesses are taxed only on income earned within a country’s borders would make it more attractive for them to remain in the United States.

In the meantime, the Treasury Department should scrap the 385 rules before the IRS uses them to target more businesses. Treasury has a valuable opportunity to take a step back and rethink whether it’s worth hampering the American economy to collect a few more tax dollars.

Eric Peterson is a senior policy analyst at Americans for Prosperity.

Sean Kennedy- August 23, 2016

Those who advocate against “tough on crime” policies often invoke Texas as the prime example of a place where sentencing regimes are more lenient for so-called “non-violent” offenders, mostly in the drug trade. Chuck DeVore and Randy Petersen of the Texas Public Policy Foundation, for instance, maintain that the results of criminal justice reforms have been “reduced recidivism, lower costs, and a state-wide crime rate reduced to levels not seen since 1968.” But Texas crime statistics don’t paint such a rosy picture of the state’s experiment in reduced sentencing.

Start with crime. In Texas’s largest cities, data from the police departments show a significant rise in the past year — much more than can be accounted for by population growth in the Lone Star State.

Specifically, violent crime has risen by 14 percent in Texas’s five largest cities (Houston, Dallas, San Antonio, Austin, and Fort Worth), comparing the first half of 2015 to the first half of 2016, according to data from the Major Cities Police Chiefs Association. And, in all of these cities, most violent crime categories are up, too.

If we now project out homicide — the most heinous and worrisome type of crime — based on first-half 2016 figures, this year represents another substantial increase in murder for the Lone Star State.

The raw data from 2014 to 2015 demonstrate the trend line clearly: Murder is up from 2014, especially in the biggest cities in Texas. (In their response to my last piece on this topic, Chuck DeVore and Randy Petersen correct an error in my previous calculation of 2014 violent crime for Houston.)

Recent data suggest that the previous two decades of crime declines are now stopping and even reversing. And Texas’ “smart on crime” approach has not changed that trajectory. Claims that criminal justice reforms reduce crime and increase public safety are talking points belied by recent trend lines.

As the above chart shows, through 2014 homicide fell dramatically in large U.S. states, including Texas, and most dramatically in New York, where the Giuliani-Bratton model saved thousands of lives.

As for violent crime, comparing U.S. levels to Texas reveals a similar pattern: Rates fell through 2014, but most dramatically during the much-maligned “tough on crime” era of the 1990s. Texas actually fell below national averages during that period, but has since risen above them. “Smart on Crime” policies have little to no effect on such trends.

Reformers in Texas also claim that their approach reduces prison populations and recidivism through their lenient sentencing and diversion programs. But the data suggest otherwise.

In fact, re-arrest rates are not declining, but have recently ticked upwards or remained flat. One might point to a decline in “re-incarceration” as evidence of the effectiveness of these reforms on recidivism. However, it’s unsurprising that re-incarceration rates are reduced when penalties for offenders are reduced; that doesn’t necessarily mean more criminals have been “cured,” only that they’re receiving different — more lenient — punishments. Hence re-arrest rates are a better metric by which to measure recidivism.

Furthermore, reductions in spending on corrections — when the crime rate is up and the prison population is expected to crest again — have put strain on Texas’ ability to hold the individuals who do deserve to remain behind bars and to monitor those who have been released. The Texas Department of Criminal Justice believes that this will increase recidivism.

Whatever Texas’ criminal justice reforms are doing, they aren't reducing crime or recidivism in any meaningful way.

Sean Kennedy is a writer and researcher based in Washington, D.C. He previously served as a senate aide, television producer, and fellow at public policy think tanks.

Donna Pavetti- August 22, 2016

The Temporary Assistance for Needy Families (TANF) block grant, established 20 years ago today, is overdue for reform. TANF currently provides a safety net for very few families in need and does little to prepare low-income parents for success in today’s job market.  

A Weak Safety Net

TANF provides cash assistance to a shrinking number of poor families, even though the need remains high. Nationwide, for every 100 poor families, just 23 received TANF cash assistance in 2014. That’s down from 68 such families that received cash assistance in 1996 under TANF’s predecessor, Aid to Families with Dependent Children. 

The TANF-to-poverty ratio — the number of families receiving TANF for every 100 poor families with children — varies widely among states, ranging from four in Louisiana to 78 in Vermont. In 12 states, the ratio is 10 or less. In 1996, no state had a ratio that low. 

The 1996 welfare law greatly expanded states’ control over welfare policy on the theory that they could better address the needs of poor families. Instead, state policy choices have helped fuel an increase in the number of children in deep poverty, with incomes below half of the poverty line, and left the vast majority of the poorest families to fend for themselves.

Work Requirements: Limited Investments and Results

A key reason for creating TANF was to give states more flexibility to help cash assistance recipients find and maintain work so they’d no longer need assistance. If states had more flexibility, welfare reform proponents argued, they could take funds previously used for cash grants and use them to help recipients find jobs and to cover the costs of work supports such as child care and transportation assistance. But states haven’t lived up to this expectation.

First, states devote very few TANF dollars to work. States spent only 7 percent of their state and federal TANF funds on work activities in 2015 and only 17 percent on child care assistance to enable parents to work. Some spent even less: Eighteen states spent less than 5 percent of their TANF funds on work activities, and 14 spent less than 5 percent on child care assistance.

Second, TANF reaches few nonworking families. Some 3.95 million single mothers were unemployed at some point during 2014, yet only 1.63 million families received TANF in an average month (see graph). This means that most single mothers who needed help finding jobs didn’t have access to the employment opportunities and work supports that TANF is supposed to provide.

Finally, parents who leave TANF generally don’t fare well in the labor market over the long term. Welfare reform proponents claim that TANF has a strong track record in moving families to work, but the little recent data available show that most former TANF recipients don’t get stable employment or raise their earnings.

A recent study of almost 5,000 Maryland families found that in the fifth year after leaving TANF, almost half didn’t work at all or worked in just one quarter of the year, up from 39 percent with little or no work in the first year after leaving TANF. The share of former recipients with no job rose from 27 percent in the first year to 37 percent in the fifth year. And only 8 percent of former recipients had earnings above the poverty line for a family of three in all five years.

Despite the rhetoric about the success of the 1996 welfare law that created TANF, the facts show otherwise. We should no longer accept a situation where some 2 million children live in deep poverty, largely due to TANF’s failure to provide assistance to the families most in need of assistance.

The good news is that TANF can improve with changes to the law. Policymakers should focus on strengthening TANF as a safety net, making it a more effective work program, and ensuring that money is directed to TANF’s core activities — work, work supports, and basic cash assistance.

Donna Pavetti is Vice President for Family Income Support Policy at the Center on Budget and Policy Priorities.

Chuck DeVore & Randy Petersen- August 17, 2016

In a recent Real Clear Policy article, Sean Kennedy examines Texas’ violent crime rate, questioning the Lone Star State’s policy of improving public safety while reducing incarceration. Unfortunately, by cherry-picking data of questionable quality, Mr. Kennedy undermines his central claims.

Start with the facts. Over the past decade, Texas closed three prisons while cutting its juvenile detainee population from about 4,000 in 2006 to 1,331 earlier this year. These reforms focused on keeping non-violent offenders out of costly lock-ups and redirected a portion of the dollars saved toward proven treatment, rehabilitation, and reentry programs. Texas did not reduce penalties for violent offenders, let alone murderers. The result: Reduced recidivism, lower costs, and a state-wide crime rate reduced to levels not seen since 1968.

This is the Texas model of reducing both crime and incarceration rates, and it has been implemented successfully, in varying degrees, in some 40 states and has informed pending federal criminal justice legislation. So impugning Texas’ criminal justice reform efforts isn’t just messing with Texas; it’s questioning the basis of criminal justice reform work across the nation.

The idea that crime rates and incarceration are joined at the hip has been thoroughly discredited. Incapacitation of criminals via incarceration is a factor in crime rates, to be sure. But most criminals eventually get out of prison, and, when they do, helping them stay out is vitally important.

Moreover, there’s no evidence that recent criminal justice reforms pioneered in Texas have any connection to the increases in violent crime seen in some U.S. cities. To the contrary, over the past several years, crime rates have fallen faster in states that have reduced imprisonment rates than in those where prisons have continued to grow. Many of the cities now experiencing violent crime increases, such as Chicago, are in states that have yet to implement comprehensive sentencing and corrections reforms.

This isn’t to say that current criminal activity, largely concentrated in major urban centers, has nothing to do with the so-called Ferguson Effect or the Mexican drug cartels’ replacement of marijuana smuggling with heroin. Criminal behavior and crime rates are the result of a complex interplay of demographics, policing, sentencing, incarceration, rehabilitation, reentry, and other factors. For that reason, parts of the system can be improved while others fail, resulting in increased overall crime.

So, what, exactly, is the problem with Texas? 

Mr. Kennedy admits that crime plummeted in Texas as prison populations were reduced right through 2014. But he goes on to say, “Now violent crimes — especially homicide — have spiked again in Texas’ biggest cities,” suggesting that Texas’ criminal justice reforms are to blame. But there is no correlation between increased violent crime in Texas’ largest cities and criminal justice reforms largely aimed at nonviolent offenders. In fact, there’s a long history in Texas and a large body of research that shows these well-implemented alternatives not only cost less but work better than the prison cell. Such reforms, when implemented correctly, reduce recidivism and crime rates.  

If we misdiagnose the problem, we won’t find the solution. Kennedy points to two charts showing the number of homicides and violent crimes committed in Texas’ five largest cities for the first six months of 2014, 2015, and 2016. Because FBI statistics are only available through the first half of 2015, Kennedy combines raw data from the police departments themselves with data from the Major Cities Police Chiefs Association, a lobbying and advocacy group.

The problem? A staffer at the Major Cities Police Chiefs Association warned us that their data are “not scientific” and used only “as a benchmark for the agencies to see where they stand in relation to one another.” Further, the data are not checked for accuracy but simply self-reported by the member agencies, who return a survey sent out by the association. This ought to raise red flags.

How do Kennedy’s data compare to the official FBI data available for two of the three periods on which he reports? Not well.

For instance, Kennedy asserts that the number of homicides in Houston went up 57.8 percent from the first six months of 2014 to the first six months of 2015; the FBI data show 44 percent. Both indicate a large jump, but the gap between 57.8 percent and 44 percent is substantial.

Kennedy’s overall violent crime number for Houston is even more at odds with the available data. He claims that the number spiked by almost 25 percent from the first half of 2014 to the first half of 2015. The FBI data for the same period don’t show such an increase but, rather, a decrease of 1.8 percent. Factoring in population growth, the decline in the violent crime rate in Houston over that period is closer to 3 percent — a far cry from Kennedy’s 25 percent jump.

Since the Major Cities Police Chiefs Association didn’t report Houston’s statistics in its compilation of data, Kennedy likely derived the figure (7,957 violent crimes from January 2014 to June 2014) from the Houston Police Department. But that department reported 10,000 violent crimes for the period, while the FBI lists 10,401. Thus Kennedy’s base year for Houston is more than 20 percent lower than it should be, undermining his subsequent calculations.

Now, let’s take a look at 2013 data. For the first half of that year, Houston reported 10,106 violent crimes. Comparing the first six months of 2013 to the first six months of 2016, the overall number of violent crimes rose by 14.7 percent in three years. Factoring in population growth and using the proper baseline for 2013, we see that the violent crime rate is up about 9 percent in three years and about 13 percent over the last two — more than a third less than Kennedy’s 36 percent jump.

Looking at the 29 major Texas cities that reported 2014-2015 data to the FBI (Austin didn’t report), which together comprise 39 percent of the state’s population, we see violent crime going up in 15 cities and down in 14 — an average increase of 4 percent, not adjusted for population growth. For urban centers statewide, violent crime was up 2.4 percent. Factoring in population growth, the violent crime rate in these cities over the first six months of 2015, compared to 2014, increased about 0.6 percent.
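The adjustment described above is simple arithmetic: convert raw counts into crimes per 100,000 residents before comparing years. A minimal sketch follows; the Houston crime counts come from the article (the 2016 count is implied by the stated 14.7 percent rise), while the population figures are hypothetical placeholders used only to show how a growing denominator shrinks the change in the rate.

```python
# Converting a change in raw crime counts into a change in the crime *rate*
# (crimes per 100,000 residents). Populations below are assumed, for illustration.
def rate_per_100k(crimes: int, population: int) -> float:
    return crimes / population * 100_000

def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

# Houston, first half of 2013 vs. first half of 2016.
crimes_2013, crimes_2016 = 10_106, 11_592      # 11,592 implied by the 14.7% rise cited above
pop_2013, pop_2016 = 2_160_000, 2_280_000      # hypothetical populations, not from the article

raw_change = pct_change(crimes_2013, crimes_2016)                 # ~14.7%
rate_change = pct_change(rate_per_100k(crimes_2013, pop_2013),
                         rate_per_100k(crimes_2016, pop_2016))    # smaller once growth is factored in

print(f"Raw count change: {raw_change:.1f}%   Rate change: {rate_change:.1f}%")
```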

Murder spikes are concerning, but the violent crime rate is more telling. Murders are, thankfully, a small proportion of the aggregate of violent crimes in any city, so a movement up or down is a large percentage of that small number. Keep in mind that the Uniform Crime Report, a database of national crime statistics maintained by the Federal Bureau of Investigation and used in innumerable research efforts, only captures the most serious offense from any single incident. For instance, if a victim is killed as a result of a rape, robbery, or aggravated assault, only the murder is counted. So it would be more alarming if the murder rate and the violent crime rate were rising in tandem (they’re not).

Since violent crime, especially homicide, is relatively rare, property crime rates can provide a better barometer of trends and the effectiveness of criminal justice policies. What happened to the number of property crimes in Texas’ major cities (excluding Austin) reported to the FBI? Down by 5.9 percent from the first half of 2014 to the first half of last year. Converting to a crime rate, property crime in these cities is down almost 8 percent.

The Major Cities Police Chiefs Association report, which covers 61 urban law enforcement agencies (and does not take population growth into account), indicates that violent crime, including homicide and non-fatal shootings, is up nationally by 2.3 percent from the first half of 2015. This means that the violent crime rate in this subset of cities is up on the order of just over 1 percent — not good, but certainly not a massive crime spike. For the full year, 2014 to 2015, the violent crime rate is up a similar 2.2 percent.

So it’s simply not the case that, in aggregate, violent crime in Texas’ major cities “has risen year-on-year for the first time in a generation,” as Kennedy asserts. On the contrary, Texas crime rates are at historic lows and its incarceration rates are heading lower, mainly due to a shift in treatment of non-violent offenders, which leaves more room in state lockups for the violent. These successful criminal justice reforms have resulted in improved public safety and a lower cost to taxpayers. 

The evidence from Texas is clear: Criminal justice reforms have improved public safety, not imperiled it. The Lone Star State is a model for the rest of the nation.

Chuck DeVore is a vice president with the Texas Public Policy Foundation and a former California lawmaker. Randy Petersen is a senior researcher with the Texas Public Policy Foundation’s Right on Crime initiative and a veteran of 21 years of law enforcement as a sworn officer.  

An Effective Plan for Regulatory Reform

Jerry Ellig- August 17, 2016

In his recent Detroit Economic Club speech, Donald Trump labeled federal regulation “the anchor that is dragging us down.” He promised to remove this burden by adopting a temporary regulatory moratorium and asking every federal agency to identify and eliminate regulations that “are not necessary, do not improve public safety, and which needlessly kill jobs.”

It will take a lot more than that to ensure that regulations solve real problems at a reasonable cost. A targeted and effective regulatory reform program would consist of at least three elements:

1. Agencies should be required to show evidence that a significant problem exists, that they understand its root cause, and that they have considered alternative solutions to address it — all before proposing any new regulations.

Data from the Mercatus Center’s Regulatory Report Card project show that only one in eight of the 130 major prescriptive regulations proposed by executive branch agencies between 2008 and 2013 was accompanied by substantial evidence demonstrating the existence, size, or cause of the problem the agency sought to solve. And for about one-quarter of the regulations, the agencies considered no significant alternatives to the regulations they proposed.

Moreover, agencies often produce economic analyses that seek to justify regulatory decisions that have already been made, instead of informing the decision-making process. A colleague of mine who had a long career at a regulatory agency reports that he was often told on a Friday that if he couldn’t find enough benefits to justify the costs of a proposed regulation over the weekend, he shouldn’t bother coming back to work Monday. 

This is backwards. The assessment of a regulation’s benefits, costs, and alternatives should be completed before the agency decides on it. One solution is to require agencies to publish their analyses for public comment before they propose regulations, encouraging them to look before they leap.

2. The regulatory system should have an external check on the accuracy of agencies’ analyses. Judicial review could provide this check.

The Securities and Exchange Commission’s (SEC) experience illustrates the salutary effects of judicial review. Unlike many agencies, the commission is required by statute to conduct economic analyses for many of its regulations. After having several important regulations overturned in court due to shoddy economic analyses, the SEC’s staff issued new guidance in 2012 that lays out analytical standards and involves economists in rulemaking from the beginning. By some accounts, the SEC’s methodology has since improved. 

3. Review of existing regulations will not be fully effective unless conducted by an expert entity that is independent of the agencies issuing the regulations.

Having agencies review their own regulations is like asking students to grade their own homework. An independent commission could be patterned after the Base Realignment and Closure Commission that recommends military bases for closure. It would assess the benefits associated with a defined group of regulations — such as all regulations with the same intended outcome — and identify a package of regulations that should be modified or eliminated if they are not effective or produce only small benefits at high costs. And the changes would take effect unless Congress voted to disapprove the entire package.

Rather than slashing regulations willy-nilly, these three reforms would help protect us from those regulatory burdens that do more harm than good.

Jerry Ellig is a senior research fellow with the Mercatus Center at George Mason University.

Sean Kennedy- August 15, 2016

Since 2007, when the Lone Star State began to reform its sentencing laws and criminal justice system, advocates for broad criminal justice changes have pointed to Texas as the best example of how both crime and incarceration rates can be reduced.

For several years after the reforms — right through 2014 — crime plummeted as prison populations were reduced. But that trend has been arrested and reversed. Now violent crimes — especially homicide — have spiked again in Texas’ biggest cities.

From the first half of 2014 to the first half of 2016, with the exception of Fort Worth, violent crime in aggregate (murder, rape, robbery, and aggravated assault combined) has risen year-on-year for the first time in a generation.

Last week, Mark Holden of Koch Industries and Ronal Serpas, previously of the Nashville and New Orleans police departments, argued in The Washington Post that pointing to such data, which show downward crime trends reversing, is merely “twist[ing] data and dangerous rhetoric.”

But the data are right there: Crime, particularly murder, is rising faster than population growth in major Texas cities.

To be sure — as I’ve pointed out time and time again — the trend is not uniform (some cities are up a little, others down a lot, and vice versa). But the trend, based on FBI statistics from early 2015, is very real: The crime decline has stalled and is reversing.

Holden and Serpas argue that this is only “localized” violence. The problem is that it appears to be localized in America’s biggest cities, including those in Texas.

And New Orleans — where Mr. Serpas once led the police — has recently joined the nationwide spike in homicides, with a 9 percent jump in 2015 from 2014 lows. The city now has a murder rate that rivals those of some developing nations.

Nashville, meanwhile, saw murder rates jump 83 percent from 2014 to 2015 — the second biggest increase of the 50 largest cities in the United States — from 41 homicides in 2014 to 75 in 2015. The trend is projected to continue this year.

These are the facts.

It’s true that crime and murder are, thankfully, lower than they were at the 1991 peak. But that’s no excuse for complacency or ignoring facts, just so a political agenda — criminal justice reform — can move forward unimpeded.

Sean Kennedy is a writer and researcher based in Washington, D.C. He previously served as a senate aide, television producer, and fellow at public policy think tanks.

Data are drawn directly from police agencies and Major Cities Police Chiefs Association reports.

CORRECTION: An earlier version of this article incorrectly calculated 2014 violent crime and homicide figures due to a tabulation error by the author, who deeply regrets the error.

The Benefits & Costs of Net Metering: Brookings Gets It Wrong

Tom Tanton- August 13, 2016

In the ever-evolving yet mostly hidden world of electricity policy, the hot topic is “net metering.” Is it a policy whose time has come? Or is it an idea with deficiencies that need to be addressed?

The Brookings Institution recently published a report concluding that net metering provides a net social benefit. Unfortunately, however, the report’s analysis ultimately fails, botching what could have been a thoughtful look at this important issue. Perhaps worse, various publications, including the Las Vegas Review Journal, have unquestioningly accepted Brookings’ erroneous conclusions. But the report, Rooftop Solar: Net Metering Is A Net Benefit, suffers from four fatal flaws and should not be used to guide public policy on net metering.

What is net metering? It's a way for households and businesses to generate their own electricity, usually with rooftop solar photovoltaic (PV) systems, which convert sunshine directly into electricity. Net metering comes into play when the system puts out more electricity than the household or business is using, enabling the PV owner to sell the excess back to the local electric utility. Net metering in one form or another is mandated in 41 states, the District of Columbia, and four territories, though the prices paid for the excess power, eligibility, and limits vary.

The first problem with the Brookings report is that it contains no original analysis. It’s a simple compilation of reports done by others, including state public utility commissions. And it’s a selective compilation at that, excluding myriad other reports critical of net metering on the basis of excess costs. No surprise that Brookings finds a “net benefit.”  

This is related to the second flaw: The issue of net societal benefits is a red herring. The debate is actually about what price to pay PV owners for excess power. Pay too little, and the economics don’t pencil out for PV customers; pay too much, and non-PV customers are hit with unfair additional costs. In many states the price is set at the full retail price of electricity, not the price the utility pays wholesale. The retail price includes the cost of transmission and distribution infrastructure; crediting PV owners at that rate lets them avoid paying for infrastructure they still rely on, shifting those costs onto other customers. The debates and regulatory proceedings taking place in numerous states should focus on setting the right price and ensuring that infrastructure is paid for by all those who use it, both PV owners and non-PV owners.
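A simplified sketch of that pricing question follows. All of the numbers are hypothetical — the retail rate, wholesale rate, and exported volume are illustrative assumptions, not figures from any particular state program — but they show how the gap between the two credit levels becomes a cost borne by other customers.

```python
# Comparing credit for a PV owner's excess generation at the full retail price
# versus the utility's wholesale (avoided-energy) cost. All figures are assumed.
RETAIL_RATE = 0.12     # $/kWh, assumed all-in retail price (energy + transmission + distribution)
WHOLESALE_RATE = 0.04  # $/kWh, assumed wholesale energy cost

excess_kwh = 3_000     # hypothetical excess generation a household exports in a year

credit_at_retail = excess_kwh * RETAIL_RATE        # $360 credited to the PV owner
credit_at_wholesale = excess_kwh * WHOLESALE_RATE  # $120 value of the avoided energy
cost_shift = credit_at_retail - credit_at_wholesale

print(f"Credit at retail:           ${credit_at_retail:,.0f}")
print(f"Credit at wholesale:        ${credit_at_wholesale:,.0f}")
print(f"Shifted to other customers: ${cost_shift:,.0f} per household per year")
```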

Third, comparing societal benefits to costs is an overly simplistic and, ultimately, useless approach. There may be more cost-effective means of achieving the same benefits. Given that rooftop solar PV is the single most expensive means of generating electricity, such programs should be weighed against lower-cost alternatives for achieving the same benefits. Since the reduction of greenhouse gas emissions is often cited as the primary benefit of net metering, why favor a policy that encourages a technology costing over ten times more than other available means? The alternatives are just as clean and cost a fraction of what rooftop solar does.

Fourth — and most importantly — estimating net benefits as the Brookings report does ignores the asymmetric distribution of costs and benefits. Net metering programs have a reputation, confirmed by numerous impartial evaluators, of subsidizing the rich at the expense of the poor and middle class. The reason is that, in general, only the more well off can afford rooftop solar PV systems. Such considerations are at the heart of public utility commission proceedings on net metering. For instance, even the California Public Utilities Commission found that these subsidies hurt lower-income customers the most and moved to correct the price for net metering.

Thus Brookings perpetuates bad policy when it asserts that “Net metering … frequently benefits all ratepayers when all costs and benefits are accounted for.” This conclusion does not answer the question of whether each individual’s costs are in reasonable proportion to his or her benefits.

Numerous states are seriously and methodically reevaluating their net metering programs to limit or eliminate the cost shift inherent in retail-price net metering. The Brookings report is disingenuous, cherry-picking only the studies that show a net benefit and thus painting a rosy picture of a policy that is outdated, inequitable, and in dire need of reform.

Tom Tanton is Director of Science and Technology at Energy and Environment Legal Institute and former Principal Policy Advisor at the California Energy Commission.

Stephen DeMaura- August 12, 2016

Over the past several months, debates about our nation’s health-care system have reentered the public arena, as the many flaws of the Affordable Care Act (ACA) — including failing co-ops and narrowing coverage — have come to light. Now, as Election Day edges closer, consumers find themselves facing another battle: proposed increases in health-care premiums for 2017.

Not only are these premium increases unaffordable; there is also no confirmed release date for the final numbers in all states. In 2015, some states revealed insurance premium increases for 2016 throughout the summer and early fall, but the Department of Health and Human Services (HHS) failed to release the final rates until October 25 — just days before open enrollment began. If this pattern repeats for 2017, businesses and taxpayers will be left with no time to research or prepare for choosing the best coverage option.

The lack of transparency surrounding the premium increases creates a slew of challenges for Americans who simply want accessible and affordable health-care coverage.

As of June 2016, only 33 states have released premium increase requests. The vast majority of those numbers are in the double digits, with states such as Arizona and New Hampshire facing potential increases of up to 60 percent. Both insurers and the Obama administration are attempting to calm nerves by suggesting that these increases are only requests and may not be realized. They should skip the spin and instead offer open, transparent, and consistent information — and sooner rather than later.

Businesses and taxpayers deserve to know how much their insurance premiums will increase, and they need certainty rather than speculation. HHS should provide answers with ample time before the election — not the week prior. Consumers, in particular, need this information so they can see where their taxes are going and make educated decisions about their health-care plans before casting their ballots on November 8.

As providers such as UnitedHealth plan to exit a majority of the insurance exchanges, the ACA’s implementation has gone from bad to worse. Consumers are now presented with a blatant, though disheartening, fact: The ACA has failed to do the one thing it promised, namely, hold insurers accountable. By not releasing the final 2017 premium increases as soon as they’re available, insurers and the administration are taking a gamble not only on the affordability and quality of care, but also on competition and consumer choice.

If our nation’s leaders want to keep Americans insured — and, more importantly, healthy — two things are desperately needed: transparency and time for consumers to decide which coverage is best for them. If last year’s trends continue for the 2017 premium rates, insurers and the government will, once again, leave consumers and businesses frustrated and out of the loop and many taxpayers without affordable, quality coverage.

Stephen DeMaura is the President of Americans for Job Security.

FDA's E-cigarette Rules Are a Public-Health Hazard

Joel L. Nitzkin- August 11, 2016

Since they first were introduced in the United States in 2006, electronic cigarettes have helped millions of U.S. smokers to cut down or quit and diverted teens from smoking. But recently announced Food and Drug Administration (FDA) rules on e-cigarettes, which start taking effect this week, could undo that progress, damaging public health and creating a political and administrative quagmire. 

If unchanged, the FDA rules will eliminate from the market more than 99 percent of e-cigarettes and related nicotine vapor products. And millions of vapers who now use these products as a substitute for cigarettes will be forced to return to smoking or to find black-market sources. This is especially troublesome for teens who use e-cigarettes to help quit smoking combustible cigarettes or who never smoked “real cigarettes” in the first place.

The problem stems largely from requirements spelled out in the Family Smoking Prevention and Tobacco Control Act, passed by Congress in 2009. The law requires any tobacco-related product that was not on the market as of February 2007 — a category that includes nearly all e-cigarettes — to submit to a “pre-market tobacco product application” (PMTA).

The FDA estimates the cost of these PMTAs at $334,000 for each combination of device, flavor, and nicotine strength, and for each separate component of an e-cig product. Industry sources put the figure higher still, in the range of $3 million to $5 million per application. At that price, only the largest corporations could afford to apply. Meanwhile, all of the major Big Tobacco companies’ cigarette products are exempt from the PMTA requirement.
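A back-of-the-envelope illustration shows why per-combination costs add up so quickly. The $334,000-per-application estimate is the FDA figure cited above; the product counts are hypothetical, standing in for a mid-sized vendor rather than describing any actual company.

```python
# Why per-combination application costs multiply: each device/flavor/strength
# combination needs its own PMTA. Product counts below are assumed.
COST_PER_APPLICATION = 334_000  # FDA estimate cited in the article

devices, flavors, strengths = 3, 20, 5      # hypothetical product line
combinations = devices * flavors * strengths
total_cost = combinations * COST_PER_APPLICATION

print(f"{combinations} separate PMTAs, roughly ${total_cost:,.0f} in application costs")
# 300 separate PMTAs, roughly $100,200,000 in application costs
```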

The Tobacco Control Act requires manufacturer-sponsored research to prove for each individual vaping product not one but two negatives: First, that the product will not recruit nonusers to nicotine use; and second, that it will not inhibit smoking cessation. At a minimum, five to six years of study would be required to answer these questions. But the FDA is giving manufacturers only two years to submit their completed applications or else remove their products from the market.

The FDA is also imposing costly requirements for detailed chemical analyses, but no guidance is offered as to what results would be considered good enough for approval. What’s more, it’s not even clear how much illness and death would be prevented by reducing or eliminating the specified toxic chemicals. 

Multiple lawsuits have been filed to block implementation of the FDA rules. But whichever way the lawsuits go, the American public is likely to lose. Why? If the FDA prevails, a large and relatively mature industry will be destroyed, and recent substantial gains in smoking reduction may be reversed. But if any of the lawsuits prevail — as seems likely — the FDA would still be empowered to regulate the new tobacco products via other means.

As currently formulated, the PMTA requirements do not protect public health. Instead, they protect the sales and profits of the Big Tobacco cigarette companies and the drug companies that make nicotine-replacement gums, patches, lozenges, and other smoking-cessation treatments.

It doesn't have to be this way. Congress could change the grandfather date for PMTA applications from February 2007 to August 2018 (the date the new regulations come fully into effect). This would buy time to fix those provisions of the Tobacco Control Act that are, as written, technically infeasible and will only increase tobacco-related addiction, illness, and death.

For instance, by focusing on the Tobacco Control Act's provisions on “filth and adulteration,” the cost of mandated laboratory studies could be substantially reduced and the public-health benefits expanded. Dropping requirements for tests whose results can't be interpreted in terms of the risk of addiction, illness, and death would also be sensible. Predatory marketing could be addressed by requiring strict controls for advertising, packaging, and labeling. Such an improved regulatory process could rapidly secure public-health benefits not likely achievable by any other means.

Reforming the PMTA process in this way is preferable to eliminating it entirely, though even elimination would not weaken FDA regulation of tobacco products. On the contrary, eliminating the PMTA would only empower the agency to regulate e-cigarette manufacturing and marketing directly, as well as to banish rogue operators from the market.

The lawsuits should serve as a wake-up call to the FDA and others in the tobacco-control movement to reconsider how best to protect and enhance the health of the public. Lawmakers and regulators should act now before permanent damage is done — and before they destroy a major new industry that’s saving lives.

Dr. Joel L. Nitzkin is senior fellow in tobacco policy for the R Street Institute. He is former co-chair of the Tobacco Control Task Force of the American Association of Public Health Physicians. References to back up the allegations made in this essay are available on request from Dr. Nitzkin at jlnitzkin@gmail.com.

Trade With China Is a Net Plus for Americans

Bryan Riley- August 10, 2016

“The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade,” a scholarly paper that examines the impact of U.S. trade with China, has made quite a splash in policy circles. Media outlets hail it as “influential,” “famous,” “excellent,” and “a real bombshell.” And the Paulson Institute’s Damien Ma is right when he says that “‘China Shock’ has driven a lot of the trade debate in this [election] cycle.”

The paper spotlights the alleged negative effects of trade with China. But are its findings accurate? Given the stakes of this year’s election — and the centrality of debates over free trade — that question is especially relevant. As it turns out, “China Shock” doesn’t prove that trade with China has made Americans worse off, nor does it make a compelling case for locking American workers and their offspring into low-wage manufacturing jobs in perpetuity.

If we compare “China Shock’s” central claims with undisputed facts from other sources, we can see that trade with China is, on the contrary, a net plus for Americans.

China Shock claim: “Views on how trade affects wages and employment turned less sanguine in the 1990s. As wage inequality rose, low-skill wages and employment fell, and manufacturing employment contracted in the U.S., globalization was seen initially as a prime suspect.”

The facts: Low-income U.S. households have been getting richer. In the 1990s, according to the Congressional Budget Office, real income increased by 17.9 percent for the lowest quintile of U.S. households. By 2013 real household income for the lowest quintile was 30.3 percent higher than in 1990.

China Shock claim: “At the national level, employment has fallen in U.S. industries more exposed to import competition, as expected, but offsetting employment gains in other industries have yet to materialize.”

The facts: Offsetting employment gains have definitely materialized. From 1991 to 2007, the period analyzed in “China Shock,” the economy added 29 million net new jobs — an employment increase of 27 percent.

China Shock claim: “China's rise has provided a rare opportunity for studying the impact of a large trade shock on labor markets in developed economies.”

The facts: The sharp rise in imports from China was accompanied by a big drop in the share of imports coming from other Pacific Rim countries. In 1991, 40 percent of U.S. imports came from Pacific Rim countries, including China. In 2007, just 35 percent of U.S. imports came from the Pacific Rim. Not much of a shock there.

China Shock claim: “In trade theory, it is standard to assume that trade is balanced.”

The facts: When all financial transactions with China are accounted for, trade is balanced. If a U.S. family spends $100 on shoes made in China and those dollars are used to invest in U.S. companies or to buy government treasury bonds, the result is recorded as a $100 trade deficit. But Americans still benefit, and U.S.-China financial flows balance.

China Shock claim: “Suppose that policy distortions in China — such as the excess absorption of credit by state-owned enterprises — induce the country to run a trade surplus and the U.S. to run a trade deficit.”

The facts: Suppose that policy distortions in the United States induce the U.S. to run a trade deficit. From 1991 to 2007 the U.S. government ran cumulative budget deficits totaling $2.4 trillion. These deficits were partially financed by investment from China. Hundreds of billions of dollars from China were used to purchase “exports” of U.S. treasury bonds instead of privately produced goods and services.

This was an entirely predictable result of deficit spending by the U.S. government. As the Congressional Budget Office (CBO) explained at the time: “[Trade] deficits are not caused by either U.S. or foreign trade policies. Rather, they are determined by the balances between saving and investment in the United States and in other countries and the effects of those balances on international flows of capital.”

China Shock claim: “When looking within manufacturing, Tennessee, owing largely to its concentration of furniture producers, is far more exposed to trade with China than is Alabama, which has agglomerations of relatively insulated heavy industry…. [Regions] that were more exposed to increased import competition from China experienced substantially larger reductions in manufacturing employment.”

The facts: Tennessee displays no sign of being harmed by imports from China. During the period measured in “China Shock,” the state’s real manufacturing GDP increased 77 percent — even more than the concurrent 65 percent increase in Alabama. Looking at the big picture, total real GDP increased even more in Tennessee (131 percent) than in Alabama (110 percent) or in the U.S. as a whole (123 percent).

Although both states lost manufacturing and farm jobs as workers and farmers became increasingly productive, overall employment increased. Job growth was higher in Tennessee (34 percent) than in Alabama (27 percent).

China Shock claim: “Applying the direct plus the indirect input-output measure of exposure increases estimates of trade-induced job losses for 1999 to 2011 to 985 thousand workers in manufacturing, and to 2.0 million workers in the entire economy.”

The facts: Did trade with China result in any net job loss in the U.S.? No. From 1999 to 2011, the U.S. economy added over 3 million net new jobs.

Trade destroys some jobs and creates others, just as technology does. As economist Scott Sumner put it: “why focus on jobs lost by the China shock, but not German exports or robots replacing workers?”

Money saved buying a made-in-China product is money that can be spent or invested in other parts of the economy, creating U.S. jobs. Moreover, the Chinese can use the money they earn from exports to import U.S.-made products or invest in the U.S. economy, also creating U.S. jobs. According to Nobel economist Paul Krugman, this process “should be seen as jobs shifted out of manufacturing to other sectors, not total job loss.”

China Shock claim: “It is incumbent on the literature to more convincingly estimate the gains from trade, such that the case for free trade is not based on the sway of theory alone, but on a foundation of evidence that illuminates who gains, who loses, by how much, and under what conditions.”

The facts: “China Shock” attempts to illuminate the impact of trade on those who lose, but it also obscures the impact of trade on those who gain. Since China joined the WTO, the United States undoubtedly lost some low-wage, low-skill jobs. But that’s not the whole story. Since that time:

· Real U.S. GDP has increased by 27 percent

· Real U.S. manufacturing GDP has increased by 23 percent

· Real income for the lowest quintile of U.S. households has increased by 12 percent

· Employment has increased by 10 percent

The facts suggest not that trade with China costs jobs but that we need a more dynamic and growing economy so that Americans who lose their jobs — for whatever reason — have ample opportunities to find new work and continue their pursuit of the American Dream.

Bryan Riley is the Jay Van Andel Senior Trade Policy Analyst in The Heritage Foundation’s Center for Trade and Economics.

Shining a Light on the Public Pension Crisis

Robert Fellner- August 9, 2016

Taxpayer costs for U.S. public pension plans, already up threefold since 2001, are going up yet again — the necessary consequence of long-term investment returns plummeting to record-low levels.

While underperforming investments receive the most attention, they aren’t the real reason for the tax hikes and cuts in government services needed to bail out public pensions. In reality, the culprit is the extraordinarily generous nature of the benefits themselves, whose costs are only now coming to the surface.

Take my home state of Nevada, for example. Like most U.S. plans, the Public Employees’ Retirement System of Nevada (NVPERS) outperformed its investment target over the past 30 years, yet costs soared anyway — totaling 12 percent of all state and local tax revenue in 2013, the second highest rate nationwide.

A pair of studies by the American Enterprise Institute’s Andrew Biggs reveals why. The average benefit for full-career state workers is $1.325 million, at least 55 percent greater than that of their private-sector counterparts.

Those figures exclude NVPERS’ top earners — police and fire officers — whose benefits are so rich and available at such a young age that it’s not uncommon to see retirees collecting six-figure pensions while still working full-time elsewhere. Former Las Vegas police officer Dan Coe tops that list: His starting $110,804 pension at age 38 is projected to total $13,216,000 in combined lifetime payouts.
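A rough sketch shows how a six-figure pension that starts at 38 compounds into an eight-figure lifetime total. The starting benefit and age come from the article; the 4 percent annual cost-of-living adjustment and payout through age 82 are illustrative assumptions, not NVPERS’ actual projection method, but they land in the same ballpark as the figure cited above.

```python
# Lifetime payout of a pension that grows each year with a cost-of-living adjustment.
# Starting benefit and age are from the article; COLA and end age are assumed.
def lifetime_payout(starting_benefit: float, start_age: int,
                    end_age: int, cola: float) -> float:
    total, benefit = 0.0, starting_benefit
    for _ in range(start_age, end_age + 1):  # one payment per year of retirement
        total += benefit
        benefit *= 1 + cola                  # benefit rises with the assumed COLA
    return total

print(f"${lifetime_payout(110_804, 38, 82, 0.04):,.0f}")  # roughly $13 million
```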

Nevada isn’t the only place where this is happening. California’s multiple independent pension plans allow government workers to double dip without even leaving the state.

Marin County counsel Steven Woodside, for example, added two government pensions on top of his $258,000 salary last year: an $82,606 payout from his 12 years with Sonoma County plus a $97,206 allowance from his 29 years at Santa Clara County, according to the Transparent California website.

Unfortunately, the cost to sustain such generosity has grown so dramatically that even those who benefit from the system consider it “outrageous.” 

Just across the bay from Woodside sits the Rodeo-Hercules fire district, where, in 2013, chief Charles Hanley cleared roughly $540,000 in total pay and benefits from California governments — $395,000 for his services as chief plus a pension of nearly $145,000 from the City of Santa Rosa.

When CBS San Francisco asked Hanley why the chief of such a tiny fire department — the district serves roughly 33,000 people over 25 square miles — costs so much, he was refreshingly candid. “People should be upset,” he responded, “and they should ask questions” regarding his “way too expensive” retirement costs.

To be clear, these employees did nothing wrong. They merely took advantage of the system offered to them, as anyone would. But as Hanley suggests, taxpayers should be asking questions. In particular, why are government retirement systems paying out six-figure pensions to those still in the prime of their working careers?

Harvard economist Edward Glaeser considers public pensions a “shrouded cost of government” because of their inherent complexity. This shroud enabled Nevada’s public unions to lobby successfully for numerous pension enhancements, with legislators either not knowing (or caring) about their long-term costs.

Ultimately, a recurrent pattern emerged. Rather than using excess investment returns to pay down NVPERS’ billion-dollar deficit — as fiduciary guidelines dictate — lawmakers used them to pay for the additional enhancements.

This scheme extends far beyond Nevada. The most infamous example is the California Public Employees’ Retirement System, where the system’s $139 billion shortfall has been largely blamed on similarly irresponsible benefit enhancements in 1999 and 2001.

It’s easy to see why this legislative practice is so appealing: it allows lawmakers to get the best of both worlds. They can curry favor with government unions by enriching their benefits while keeping the costs of these increases invisible to most voters. And if costs skyrocket later on, that will be someone else’s problem.

This feature of defined benefit (DB) plans was highlighted as a fundamental “disadvantage” in a 2010 study commissioned by NVPERS and conducted by the Segal Group, an actuarial firm that receives millions of dollars from numerous U.S. public pension plans.

The failings of DB plans extend beyond their predisposition to financial mismanagement, however. They’re also an inefficient way for employers to compensate their employees. 

If DB plans were superior in this regard — as is often claimed by their defenders — one would expect them to be embraced by private firms, which prize efficiency. But just the opposite has happened: the percentage of private workers enrolled exclusively in DB plans fell from 28 percent in 1979 to just 2 percent in 2013, according to the Employee Benefit Research Institute.

In fact, Cornell Professor Maria Fitzpatrick found that Illinois public school employees “value their pension benefits at about 19 cents on the dollar,” suggesting that governments are dramatically overpaying for their employees’ retirement benefits. Like Glaeser, Fitzpatrick attributes this inefficiency to the political nature of DB plans, which “drive[s] a wedge” between actual and perceived costs.

In other words, unions favor exorbitant pensions for government workers because they see them as an easy way to covertly increase compensation — not because they have some uniquely strong preference for a retirement-heavy pay package.

The DB model is failing government workers, too. By design, DB plans push costs onto future generations. Consequently, future workers will have more taken out of their paychecks to help pay for the benefits of those already retired, while at the same time often receiving less generous benefits themselves.

Finally, U.S. public pension plans’ flawed accounting methods and over-reliance on investment returns — rejected by private U.S. pension plans and both public and private plans in Canada and Europe — put retirees at risk. Should these practices lead to more defaults, as has already happened in Puerto Rico and several U.S. cities, retirees may face benefit cuts. 
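
The accounting criticism is easiest to see in how a single promised benefit is valued today. The sketch below is purely illustrative: it assumes a 7.5 percent expected-return discount rate of the kind common among U.S. public plans and a 3.5 percent low-risk rate of the kind used elsewhere; neither figure is taken from any particular plan.

```python
# Illustrative sketch of the discount-rate criticism: valuing the same promise
# at an assumed investment return versus a low-risk rate. The 7.5% and 3.5%
# figures below are assumptions for demonstration, not any plan's actuals.

def present_value(benefit, rate, years):
    """Discount a single future benefit payment back to today's dollars."""
    return benefit / (1 + rate) ** years

benefit = 100_000  # a benefit promised 30 years from now

print(f"Booked at 7.5%: ${present_value(benefit, 0.075, 30):,.0f}")  # ~$11,400
print(f"Booked at 3.5%: ${present_value(benefit, 0.035, 30):,.0f}")  # ~$35,600
# The same promise looks roughly three times cheaper under the higher rate,
# which is why shortfalls surface only when returns disappoint.
```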

Transparency and equity are uncontroversial goals of government. The public should be able to ascertain the true costs of government programs, and those costs should be borne by those who benefit from them. Defined benefit plans fail spectacularly on both counts.

What’s the solution? Shifting to a defined contribution plan would provide superior transparency, cost stability, and reliability for both employees and employers. Lawmakers can learn from successful past reforms enacted by the federal government as well as more recent examples in Arizona and Utah.

Robert Fellner is the director of transparency research at the Nevada Policy Research Institute and author of “Footprints: How NVPERS, step by step, made Nevada government employees some of the nation’s richest.”

Higher Minimum Wages, But Not for the Young

Preston Cooper- August 8, 2016

This fall, several states, including Washington and Maine, will vote on ballot initiatives to raise their local minimum wages. Despite evidence showing that higher minimum wages have the perverse effect of lowering employment among low-skilled workers, most of these initiatives will likely succeed. After all, most Americans are kind, and few want to vote against giving their fellow workers a raise. 

The biggest victims will be the young. While some economic studies find mixed effects on the employment of adults, most agree that higher minimum wages reduce the job opportunities available to young people. One recent analysis by economists Jonathan Meer and Jeremy West of Texas A&M University found that a higher minimum wage lowers youth employment 11 times as much as it lowers the employment of middle-aged adults.

The reason is that young jobseekers usually don’t have the long résumés, acquired skills, and glowing references that enable them to land well-paying work. Instead, they must accumulate these assets through entry-level jobs, which usually means working — temporarily — for a low wage. If the minimum wage is set above what employers are willing to pay for unskilled, inexperienced labor, many young people will find themselves out of work. 

As the minimum wage has risen, this is precisely what has happened. Teenage labor force participation, which was 52 percent in 1996, has fallen to just 35 percent today. For political reasons, it’s unlikely that minimum wages will be lowered — or even frozen. Instead, the best way forward is to allow young people to work for a special youth minimum wage below the standard rate. 

The United States already has such a program. Since 1997, employers have been allowed to pay workers under 20 a wage of $4.25 per hour for their first 90 days on the job. The trouble is that more restrictive state laws supersede the relaxed federal standard. Unless states include a similar provision in their labor codes, the federal youth minimum wage is useless. Perhaps unsurprisingly, many states have not played along.

Thirty-five states plus the District of Columbia either have no youth minimum wage exemption or have a more limited one than what the federal government allows. In an era of slow wage growth, many of these states have opted to raise their minimum wages, and it’s likely many more will. In order to mitigate the worst effects of these government-imposed wage floors, states should follow the federal government’s lead and allow young people to work for a lower wage.

Congress should also consider expanding the youth minimum wage program, since its usage has been quite limited. More specifically, the 90-day limit on employment should be repealed. Employers may be reluctant to take advantage of the youth minimum wage if they must give all workers a 71 percent pay bump after just three months on the job.
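
The 71 percent figure follows from the gap between the two federal rates, assuming a worker jumps from the $4.25 youth rate to the standard $7.25 federal minimum once the 90 days are up:

```python
# Quick check of the 71 percent figure, assuming a move from the $4.25 federal
# youth rate to the standard $7.25 federal minimum after the 90-day window.
youth_rate = 4.25
standard_rate = 7.25

raise_pct = (standard_rate - youth_rate) / youth_rate
print(f"{raise_pct:.1%}")  # ~70.6%, i.e. roughly a 71 percent pay bump
```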

In a new report for the Manhattan Institute, I estimate the employment effects of expanding the youth minimum wage. If all states adopt a youth minimum wage rate of $4.25 per hour and Congress abolishes the 90-day time limit, the economy could generate a maximum of 450,000 new jobs for young people — increasing their employment by nearly 9 percent. Like all estimates, this is contingent on various assumptions. But in all likelihood the job-creation numbers would at least be in the hundreds of thousands.

The campaigners behind state-level minimum wage increases should consider devoting some of their energies to the inclusion of youth minimum wage provisions in this fall’s ballot initiatives. The youth minimum wage won’t solve the youth employment crisis on its own. But we can start turning the tide of policy towards the interests of young people, who desperately need a lifeline in this slow economic recovery.

Preston Cooper is a policy analyst at the Manhattan Institute and the author of the forthcoming report, Reforming the U.S. Youth Minimum Wage.

Reforming the Digital Millennium Copyright Act

Wayne T. Brough- August 6, 2016

Millennials are a disruptive generation. They are the first generation to abandon landline telephones in favor of wireless smartphones, the first to cut the cord with pay-TV, and the first to turn to the Internet as their go-to source for music and video entertainment. These trends have many old-school industries not just puzzled, but incensed that millennials do not respect the rights of those creating the content they consume.

In fact, the music industry sent a letter to Congress signed by a host of the industry’s biggest stars calling for reform of the Digital Millennium Copyright Act (DMCA), which was enacted to address online copyright issues. Piracy is a real concern, but any reforms need to acknowledge the realities of today’s markets — especially the fact that content creators are now everywhere and unnecessary restrictions favoring one specific business model can hamper creativity elsewhere.

The advent of the digital world clearly posed challenges for Hollywood and the music industry. Once in digital format, music and videos can be reproduced perfectly at virtually no cost. The costs of piracy are substantial and the content industries have been challenged to adapt. But the vertically integrated business models established in the 1950s appear ill-suited for today’s world of instant downloads.

Recognizing the problem, Congress passed the DMCA in 1998 to create a regime of digital rights management that criminalized technologies or services intended to circumvent copyright protections as well as the very act of circumvention, whether or not there was a copyright infringement. Yet the legislation also acknowledged that things work differently online and created a “safe harbor” for Internet service providers who would not be held liable for infringing content uploaded by their customers as long as they quickly identified and removed any copyright violations. A “notice and takedown” process was included in the act to provide copyright holders an avenue for pursuing unlawful use of content.

With over a billion websites online, more than a billion Facebook users, and another billion YouTube users, patrolling the Internet for copyright violations is a daunting task. The larger tech companies and Internet service providers have resorted to computer algorithms that constantly sift through online content in search of copyright violations. Google, for example, developed Content ID to monitor YouTube for violations. Likewise, Facebook created Rights Manager to identify offending materials, and Audible Magic is a third party tool that can be used for content recognition and copyright compliance. When a violation is found, the rights holder is offered various options, including removing the content, tracking the content’s statistics, or monetizing the content through ads.
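
The matching step these tools perform can be sketched in a deliberately simplified form. The example below illustrates the general idea only: real systems such as Content ID rely on perceptual fingerprints that survive re-encoding and editing, whereas the plain chunk hashing, registry, and threshold here are invented for the sketch.

```python
# Simplified illustration of fingerprint-based matching; real systems use
# perceptual audio/video fingerprints, not plain hashes of raw bytes.
import hashlib

def fingerprint(data: bytes, chunk_size: int = 4096) -> set:
    """Fingerprint content as the set of hashes of its fixed-size chunks."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

# Rights holders register reference fingerprints (hypothetical registry).
registry = {"reference_track": fingerprint(b"registered content bytes")}

def check_upload(upload: bytes, threshold: float = 0.5) -> list:
    """Return registered works whose fingerprints substantially overlap the upload."""
    upload_fp = fingerprint(upload)
    return [
        work for work, ref_fp in registry.items()
        if len(ref_fp & upload_fp) / len(ref_fp) >= threshold
    ]

print(check_upload(b"registered content bytes"))  # ['reference_track']
print(check_upload(b"original user content"))     # [] -- no match, nothing flagged
```

In rough terms, the “notice and stay down” idea discussed below would require a check like this to run against every future upload, not just in response to individual notices.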

Such systems are far from perfect, and the notice and takedown process has drawn criticism from both sides. The recording industry, as made clear in its recent letter, considers the notice and takedown process ineffective, with far too much pirated content still available on the Internet. Others claim that notice and takedown bots mistakenly flag uses of copyrighted materials that are perfectly legal — either through the fair use doctrine or licensing permissions.

In any case, the world of content creators has fundamentally changed. The Internet has expanded the reach of smaller creators, allowing performers to find audiences without relying on the global entertainment industrial complex that creates the Taylor Swifts and Beyoncés of the world. To be sure, the creation of superstars is not cheap, and the recording industry evolved into a fine-tuned machine specializing in this task, assuming the risk and making significant investments in advances for artists, recording costs, promotion and distribution, tour support, and video production.

But for many, the Internet has unwound all that, eliminating a swathe of middlemen between the creator and the audience. This matters, because the Internet has introduced a new creative class, free from the need to sign with a major label, but utterly dependent on the Internet to survive. Kickstarter, Indiegogo and other online vehicles offer new ways to raise money, spreading risk across a much wider group of investors. Likewise, sales and promotion can be done through social media sites such as YouTube, Facebook, and Bandcamp, allowing bands and fans to connect with one another at a much lower cost, no matter how niche or obscure their tastes.

These changes have been unsettling for many, and a concerted push is underway to reform the DMCA to strengthen the music industry’s hand. Many are calling for a more draconian “notice and stay down” approach to replace the current process. Indeed, the recent letter claims that “the tech companies who benefit from the DMCA today were not the intended protectorate when it was signed into law nearly two decades ago.” This clearly echoes the concerns of lobbyists, not the general public.

Public laws are intended to enhance the general welfare of society, not to feather the nests of particular special interests. Accordingly, the DMCA must be viewed from a broader perspective, balancing the competing interests of all parties — creators (broadly defined to include the new class of Internet-based creators), consumers, and broadband providers — to maximize social welfare. Reasonable people may disagree in their assessments of how well the law works, but focus should be placed on the overall implications for social welfare, not whether one industry wins or loses.

Wayne T. Brough, Ph.D., is the Chief Economist and Vice President for Research at FreedomWorks.

Sean Kennedy- August 4, 2016

A new study by the Lurie Children's Hospital of Chicago, using data derived from the Illinois Violent Death Reporting System, paints a grim picture for young black men in Chicago: the odds of being killed are rising from already high levels.

And it may get much worse before it gets better. Homicide projections for 2016 suggest that the Windy City may see as many as 650 murders this year — twice that of New York City, a megalopolis three times as big as Chicago.

Given that 80 percent of Chicago homicide victims in 2015 were black, the toll on that community is staggering. The odds that a young black man in Chicago will be killed now rival those found in the world's most dangerous cities.

Sean Kennedy is a writer based in Washington, D.C. Previously, he was a U.S. Senate aide, television producer and a fellow at public policy think tanks.
