Tomorrow at 6:30 p.m. Eastern, the Manhattan Institute will host a discussion on the future of conservatism. You can watch the event live right here:
The panel comprises Josh Barro, Yuval Levin, Megan McArdle, Reihan Salam, Avik Roy, and David Brooks. In my view that's a good mix -- all the participants have a decidedly intellectual bent, but they run the gamut from full conservatives (Levin) to moderate/center-right types (Brooks) to libertarians (McArdle). Each of these factions has a stake in the future of the movement.
Some other conservatives are less happy about it.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The United States, Iran, Venezuela, and Ukraine are very different countries, with very different languages, on very different continents. And yet there’s at least one thing they all have in common: energy subsidies.
In Ukraine, the government sets the price for natural gas well below market rates. This discourages investors from searching for new gas, encourages people to use more than they need, and forces the government to spend billions of dollars to make up the difference.
In Venezuela, the government keeps the price of gasoline so low that it’s virtually free: less than one cent per gallon. So even though Venezuela has the largest known oil reserves in the world, its government is going broke at a remarkable rate. The gasoline subsidies cost an estimated $12 billion each year.
In Iran, drivers pay a stiffer price -- between $1.60 and $2.60 per gallon. But that is far below the going rate for gasoline, even in Iran. Tehran’s Islamic Republic is yet another government spending itself into oblivion to subsidize energy for its citizens.
Clearly, subsidies aren’t the largest problem these countries face. But they are making bad situations worse, by diverting resources away from productive activity, by encouraging people to be wasteful, and by hiding the scope of each country’s problems from voters.
Of course, it’s easy for an American to be smug about energy policy. Today we sell more oil abroad than we import. States that allow fracking, such as Texas and North Dakota, are booming -- while states that ban it, such as California and New York, are struggling. But so be it. Federalism gives states the right to fail as well as the right to succeed.
Even as prices have been on the rise in Europe (strengthening Russia, a natural-gas exporter) they’ve plunged in the U.S. That’s helping to spur an increase in manufacturing jobs in this country as well.
Still, there’s plenty of subsidizing in the American energy market. President Obama’s 2009 “stimulus bill” directed some $40 billion to the Department of Energy. Much of that money turned into subsidies for wind- and solar-generated power. There are also a number of tax breaks and mandates for favored industries. The Renewable Fuel Standard, for example, essentially forces refiners to use ethanol.
Affordable energy leads to economic growth and opportunity. But you can’t generate affordable energy through government subsidies; only a free market can do that. Yet Americans are still teaching the wrong lesson abroad: John Kerry recently flew to Kiev to offer Ukraine $1 billion in energy assistance, most of which will simply end up in Russian hands.
Free-market competition, in energy as in other sectors, is the real answer -- and perhaps one that future governments, in all four of these countries, will turn to.
Rich Tucker is senior writer in the B. Kenneth Simon Center for Principles and Politics at the Heritage Foundation.
To answer the question, I combined a few different data sources -- this list of SAT results by income, this list of ACT results by income, and this list of ACT-SAT concordance scores. I converted the ACT scores to SAT scores and then, because the income groupings are different, I calculated the midpoint of each group and used it as the X variable. (I had to leave off the "and up" groups in both.)
Here are the results:
The ACT had much more fine-grained data at the low end, and the SAT had more data for the rich, but otherwise the two show pretty much the same thing.
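The procedure described above -- converting ACT scores to the SAT scale via a concordance table, then plotting each score against the midpoint of its income bracket -- can be sketched in a few lines of Python. Every number below is a made-up placeholder, not the actual published SAT, ACT, or concordance data:

```python
# Hypothetical mean scores by family-income bracket (in dollars).
sat_by_income = {(0, 20000): 1320, (20000, 40000): 1400, (40000, 60000): 1460}
act_by_income = {(0, 24000): 17.9, (24000, 38000): 19.0, (38000, 52000): 19.9}

# Hypothetical ACT-composite-to-SAT concordance table.
act_to_sat = {17: 1210, 18: 1290, 19: 1380, 20: 1460}

def concord(act_score):
    """Map an ACT composite onto the SAT scale, interpolating
    linearly between the nearest concordance entries."""
    lo = int(act_score)
    frac = act_score - lo
    return act_to_sat[lo] + frac * (act_to_sat[lo + 1] - act_to_sat[lo])

def midpoint(bracket):
    """Collapse an income bracket to its midpoint (the X variable)."""
    return (bracket[0] + bracket[1]) / 2

# One combined series of (income midpoint, SAT-scale score) points,
# ready to plot as a single scatter.
points = sorted(
    [(midpoint(b), score) for b, score in sat_by_income.items()]
    + [(midpoint(b), concord(act)) for b, act in act_by_income.items()]
)
```

As in the text, the open-ended "and up" brackets have no midpoint and are simply dropped before this step.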
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
Since the recent announcement that major changes are coming to the SAT, I've seen about 10,000 articles claiming that (A) the correlation between family income and SAT scores proves that the test is biased against poor kids and (B) the SAT is easy for the rich to "game" through test prep.
There are indeed problems with the SAT, including problems relating to parental income. Here is a persuasive argument by Charles Murray that the SAT should be scrapped entirely and replaced by subject tests like the SAT II. But we shouldn't be shocked by the simple fact that parental income is linked to children's academic achievement, and the effects of "gaming" the SAT have been wildly exaggerated.
The problem with treating the SAT-income correlation as an argument in itself is that richer kids, fairly or unfairly, actually do have higher academic capabilities. As I've written before, and frankly as anyone with eyes in his head ought to know, there are numerous ways that parents can pass advantages on to their kids -- genes, better schools, better neighborhoods, and so on. The simple correlation between income and scores tells us nothing at all about whether the SAT has an income bias, because it's exactly what we would expect from a legitimate measure of academic ability.
Income gaps are evident on basically every academic measure we have. Here is an informative paper bringing together all sorts of achievement-test results on the 90/10 gap -- the gap between kids at the 90th and 10th percentiles of parental income.
It provides two important charts, with Y axes indicating the gap in standard deviations. Here's the gap in reading:
And here's the gap in math:
As you can see, across plenty of different tests, in recent years the 90/10 gap has usually been a full standard deviation or more. Measuring the gap with one test instead of another makes little difference.
[Update: Also, the ACT and SAT show very similar income trends, as I show here.]
And what about "gaming" the SAT? Well, here is a report finding that SAT prep typically raises math scores 10-20 points and reading scores 5-10 points. The standard deviation of each subtest is around 110 points, and the black/white gap is about the same. What's more, free test-prep services are targeted toward poor and minority kids -- surprisingly, black and Hispanic kids are actually more likely to use test prep, and they gain slightly more than whites from doing so (though Asians seem to gain unusually large amounts, as much as 70 points).
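To see just how modest those prep effects are, it helps to express them in standard-deviation units -- the same metric used for the 90/10 income gap above. A quick back-of-the-envelope calculation, using the midpoints of the ranges cited in the text:

```python
SUBTEST_SD = 110  # approximate standard deviation of each SAT subtest, in points

def effect_size(gain_points, sd=SUBTEST_SD):
    """Convert a raw point gain into standard-deviation units."""
    return gain_points / sd

math_effect = effect_size(15)      # midpoint of 10-20 points: about 0.14 SD
reading_effect = effect_size(7.5)  # midpoint of 5-10 points: about 0.07 SD
asian_effect = effect_size(70)     # the unusually large gain noted above

# For comparison, the 90/10 income gap runs a full standard deviation
# or more -- an order of magnitude larger than typical prep effects.
```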
I'm glad to see a discussion about the merits of the SAT. But let's drop these two unhelpful talking points.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
For years, environmentalists and the gas drilling industry have been in a pitched battle over the possible health implications of hydro fracking. But to a great extent, the debate — as well as the emerging lawsuits and the various proposed regulations in numerous states — has been hampered by a shortage of science.
In 2011, when ProPublica first reported on the different health problems afflicting people living near gas drilling operations, only a handful of health studies had been published. Three years later, the science is far from settled, but there is a growing body of research to consider.
Below, ProPublica offers a survey of some of that work. The studies included are by no means a comprehensive review of the scientific literature; there are several others that characterize the chemicals in fracking fluids, air emissions, and waste discharges, and some that present the results of community-level surveys.
A long-term, systematic study of the adverse effects of gas drilling on communities, however, has yet to be undertaken. Researchers have pointed to the scarcity of funding for large-scale studies as a major obstacle to tackling the issue.
A review of health-related studies published last month in Environmental Science & Technology concluded that the current scientific literature puts forward "both substantial concerns and major uncertainties to address."
Still, for some, waiting for additional science to clarify those uncertainties before adopting more serious safeguards is misguided and dangerous. As a result, a number of researchers and local activists have been pushing for more aggressive oversight immediately.
The industry, by and large, has regarded the studies done to date — a number of which claim to have found higher rates of illness among residents living close to drilling wells — as largely anecdotal and less than convincing.
"The public health sector has been absent from this debate," said Nadia Steinzor, a researcher on the Oil and Gas Accountability Project at the environmental nonprofit, Earthworks.
Departments of health have become involved only in states such as New York and Maryland, where regulators responded to the public's insistence on public-health and environmental reviews before signing off on fracking operations. Both states currently have moratoriums on fracking.
New York State Health Commissioner Nirav Shah is in fact conducting a review of health studies to present to Governor Andrew Cuomo before he makes a decision on whether to allow fracking in the state. It is unclear when the results of the review will be publicly available.
Other states such as Pennsylvania and Texas, however, have been much more supportive of the gas industry. For instance, Texas has been granting permits for fracking in ever increasing numbers while at the same time the Texas Commission on Environmental Quality, the agency that monitors air quality, has had its budget cut substantially.
1. An Exploratory Study of Air Quality near Natural Gas Operations. Human and Ecological Risk Assessment, 2012.
The study, performed in Garfield County, Colo., between July 2010 and October 2011, was done by researchers at The Endocrine Disruption Exchange, a non-profit organization that examines the impact of low-level exposure to chemicals on the environment and human health.
In the study, researchers set up a sampling station close to a well and collected air samples every week for 11 months, from when the gas wells were drilled until after they began production. The samples produced evidence of 57 different chemicals, 45 of which the researchers believe have some potential to affect human health.
In almost 75 percent of all samples collected, researchers discovered methylene chloride, a toxic solvent that the industry had not previously disclosed as present in drilling operations. The researchers noted that the greatest number of chemicals were detected during the initial drilling phase.
While this study did catalogue the different chemicals found in air emissions from gas drilling operations, it did not address exposure levels and their potential effects. The levels found did not exceed current safety standards, but there has been much debate about whether the current standards adequately address potential health threats to women, children and the elderly.
The researchers admitted their work was compromised by their lack of full access to the drilling site. The air samples were collected from a station close to what is known as the well pad, but not the pad itself.
The gas drilling industry has sought to limit researchers' access to information about its operations. It has refused to publicly disclose the chemicals used in fracking, won gag orders in legal cases, and restricted the ability of scientists to get close to its work sites. In a highly publicized case last year, a lifelong gag order was imposed on two children who were parties to a legal case accusing one gas company of unsafe fracking operations that made them sick.
In 2009, the Independent Petroleum Association of America started Energy In Depth, a blog that confronts activists who are fighting to ban fracking and challenges research that in any way depicts fracking as unsafe.
Energy In Depth responded to this Garfield County study and criticized its lack of proper methodology. The blog post also questioned the objectivity of the researchers, asserting that their "minds were already made up."
The industry has also been performing its own array of studies.
Last year, for instance, an industry-funded study on the methane emissions from fracking wells was published in the prestigious journal, Proceedings of the National Academy of Sciences. It concluded that only very modest amounts of methane — a known contributor to climate change — were being emitted into the air during fracking operations.
The study came under heavy criticism from Cornell researcher Robert Howarth, who two years prior had published work that claimed methane emissions from shale gas operations were far more significant.
"This study is based only on evaluation of sites and times chosen by industry," he said.
2. Birth Outcomes and Natural Gas Development. Environmental Health Perspectives, 2014.
The study examined babies born from 1996 to 2009 in rural Colorado locations — the state has been a center of fracking for more than a decade. It was done by the Colorado School of Public Health and Brown University.
The study asserted that women who lived close to gas wells were more likely to have children born with a variety of defects, from oral clefts to heart issues. For instance, it claimed that babies born to mothers who lived in areas dense with gas wells were 30 percent more likely to have congenital heart defects.
The researchers, however, were unable to include data on maternal health, prenatal care, genetics and a host of other factors that have been shown to increase the risk of birth defects because that information was not publicly available. A common criticism of many scientific studies is that they do not fully analyze the possibility of other contributing factors.
The study has thus come under attack from both the industry and state public health officials. In a statement, Dr. Larry Wolk, the state's Chief Medical Officer, said "people should not rush to judgment" as "many factors known to contribute to birth defects were ignored" in the study.
But Lisa McKenzie, one of the lead authors of the study, said there was value to the work.
"What I think this is telling us is that we need to do more research to tease out what is happening and to see if these early studies hold up when we do more rigorous research," she said.
In Pennsylvania, Elaine Hill, a graduate student at Cornell University, obtained data on gas wells and births between 2003 and 2010. She then compared the birth weights of babies born in areas of Pennsylvania where a well had been permitted but never drilled with those in areas where wells had been drilled. Hill found that babies born to mothers within 2.5 kilometers (a little over 1.5 miles) of drilled gas sites were 25 percent more likely to have low birth weight than those in non-drilled areas. Babies are considered to have low birth weight if they weigh less than 2,500 grams (5.5 pounds).
Hill's work is currently under peer review at a scientific journal, a process that could take three or four years.
3. Health Risks and Unconventional Natural Gas Resources. Science of the Total Environment, 2012.
Between January 2008 and November 2010, researchers at the Colorado School of Public Health collected air samples in Garfield County, Colo., which has been experiencing intensive drilling operations. Researchers found the presence of a number of hydrocarbons including benzene, trimethylbenzene and xylene, all of which have been shown to pose health dangers at certain levels.
Researchers maintained that those who lived less than half a mile from a gas well had a higher risk of health issues. The study also found a small increase in cancer risk and alleged that exposure to benzene was a major contributor to the risk.
"From the data we had, it looked like the well completion phase was the strongest contributor to these emissions," said Lisa McKenzie, the lead author of the study.
During the completion phase of drilling, a mixture of water, sand and chemicals is forced down the well at high pressure, and is then brought back up. The returning mixture, which contains radioactive materials and some of the natural gas from the geological formation, is supposed to be captured. But at times the mixture comes back up at pressures higher than the system can handle and the excess gas is directly vented into the air.
"I think we ought to be focused on the whole thing from soup to nuts because a lot of the potential hazards aren't around the hydraulic fracturing step itself," said John Adgate, chair of the Department of Environmental and Occupational Health at the Colorado School of Public Health and co-author on the study.
Energy In Depth, the industry blog, responded at length to this study, citing several "bad inputs" that it said had affected the results and criticizing the researchers' assumptions and data. For instance, the researchers had assumed that Garfield residents would remain in the county until the age of 70 in order to estimate the period over which they would be exposed to the emissions.
"Unless the 'town' is actually a prison, this is a fundamentally flawed assumption about the length and extent of exposure," Energy In Depth said.
Naveena Sadasivam is a reporting intern at ProPublica, where this piece originally appeared.
Bloomberg has an interesting article on the effects of Washington State's 1998 decision to dramatically hike its minimum wage and index it to the cost of living. Key details:
In the 15 years that followed, the state’s minimum wage climbed to $9.32 -- the highest in the country. Meanwhile job growth continued at an average 0.8 percent annual pace, 0.3 percentage point[s] above the national rate. Payrolls at Washington’s restaurants and bars, portrayed as particularly vulnerable to higher wage costs, expanded by 21 percent. Poverty has trailed the U.S. level for at least seven years.
"At least seven years" is a cute way to put it -- Washington had a below-average poverty rate long before the minimum-wage hike -- but at any rate I was curious to see if there were different effects on different age groups. Here's the overall unemployment rate, which, relative to national trends, did rise a bit after the wage kicked in but settled back down:
Basically, Washington experienced a sustained youth-unemployment hike after the minimum-wage law went into effect. As the Washington Policy Center has noted, "Since 2002, well before the recent recession, Washington consistently ranked among the top ten states with the highest teen unemployment. The single exception was 2007, when Washington briefly broke out of the top ten to rank 12th."
It's always tough to draw conclusions based on data from a single state, and here the results seem decidedly mixed. Job growth has continued, even in industries where we might have expected otherwise, but the most difficult-to-employ people have suffered.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
If you want America’s broadband providers to put their foot on the gas, you need to have regulators take their foot off the brakes. FCC regulations still require telephone companies to build and operate obsolete copper-based telephone networks, even though the traffic on these networks has been quickly diminishing toward nothing. A recent study covered by the Washington Post explained why this is a problem.
Broadband platforms can do everything that the old telephone networks can do -- plus high-speed Internet, messaging, apps, and video. Duplication of networks means a duplication of costs, which means less investment in broadband and higher costs for consumers.
Anna-Maria Kovacs, the author of the report, found that telephone companies were required to spend more than half of their $154 billion investment in their communications networks between 2006 and 2011 on "maintaining fading legacy networks." Ninety-nine percent (and rising) of all U.S. communications traffic is now carried over Internet-based platforms -- wireline, cable, wireless, and satellite networks.
Until regulators modernize their rules to reflect the realities of the marketplace, outdated regulations will continue to have adverse impacts on telephone consumers. As the study suggests, "Regulation, however well-intended, changes too slowly for the fast moving digital world. It distorts the market and hinders innovation."
A good example of how cutting the regulatory red tape can benefit consumers can be found in Kansas City, where Google has begun to deploy a superfast fiber network. Unlike the telephone companies, Google has no regulatory obligation to maintain a copper-based network or provide voice services, and it can select its own "fiberhoods," allowing it to build out fiber at its own pace. The result is the potential for "gigabit" services for consumers. It would appear that allowing Google to experiment with innovation and investment serves the public much better than the regulatory "father knows best" approach.
To this end, the FCC recently opened a rulemaking proceeding to encourage company trials for cutting over to an all IP-based network. Congress is also considering rewriting the Communications Act to reflect the changing times. These are positive developments that could bring about better, faster, and cheaper services for consumers, but only if the new regulations don’t get in the way themselves.
Perhaps policymakers can learn from the Google example, speed the IP transition, and soften regulatory barriers to investment for all competitors. Indeed, it would be a shame to let another six years pass without action by the FCC -- a delay that could mean another $81 billion dedicated to abandoned copper networks that only a small fraction of U.S. consumers will be using.
If we want to speed up investment and innovation, regulators need to get their foot off the brakes. Speeding the IP transition will greatly benefit all American consumers.
Steve Pociask is president of the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization.
Pensions are in the news these days, as the retirement benefits of our nation’s hard-working public employees are being squeezed by economic realities and mismanagement by politicians. The most notable example of the danger posed by an underfunded pension system is Detroit, where in December a federal judge ruled that pension promises will not be protected during bankruptcy proceedings. Puerto Rico also has serious fiscal problems, and, despite attempts to reform its pension systems, the island’s debt was downgraded to junk status earlier this month by the major rating agencies.
These examples are extreme, but even more typical cases are unsettling given the roughly $2.7 trillion pension hole that states have to dig out of -- a hole that continues to deepen in many places. In the absence of new revenue sources, ballooning pension obligations are likely to crowd out other vital public services, such as education. For example, a recent analysis found that Milwaukee Public Schools face increasing pension costs that, without additional funding, could require the district to fire 24 percent of its teachers or cut their salaries and benefits by the same amount between now and 2020.
After years of failing to make adequate payments to public pension systems, politicians are finally starting to act. The resulting policies, such as reductions in benefits and shifts to the 401(k)-style plans common in the private sector, have been defended as necessary by reform proponents and attacked by public-sector employees and their unions as draconian and unfair.
There is an element of truth in both narratives. In our new report, "Improving Public Pensions: Balancing Competing Priorities," Patten Mahler, Russ Whitehurst, and I propose a middle ground in the pension wars.
It is clear that there is no one specific policy that will solve the problems of every state and local pension plan, as priorities and constraints vary widely, but any well-designed pension plan will strive to meet three goals: to provide an adequate and secure retirement for workers, to be financially sustainable, and to promote a highly effective public-sector workforce.
Existing defined-benefit pension systems promise benefits to employees but entrust politicians with the responsibility of setting aside adequate resources to pay for those benefits when they come due. These plans provide retirement security to some workers, such as those who work in one job for their entire career, but at the expense of many other workers; retirement wealth is significantly redistributed to career employees from more mobile employees -- even those who spend more than a decade serving the public.
This system creates incentives for workers to stay or quit that just don’t make any sense. An unhappy mid-career employee may feel compelled to stay in his current job to avoid losing future pension wealth rather than leaving and finding a more suitable position. On the other hand, a highly productive worker who has become eligible to receive his pension benefit may feel pushed to leave his post rather than forfeit that benefit. On top of it all, politicians have often made pension promises while leaving it to their successors -- and future taxpayers -- to foot the bill.
Further, while defined-benefit pension plans may have made sense in an earlier era, too often they fail to meet the goals of a well-designed retirement system today. A common proposal is to replace these plans with the kinds of defined-contribution plans that are common in the private sector, where employees have individual retirement accounts but no guaranteed yearly payment. These plans cannot be underfunded by definition, so employees are protected from the whims of politicians -- but the plans leave retirement accounts unprotected from market risk and are vehemently opposed by many public-sector workers and their unions.
Our proposal, a collective defined-contribution plan, combines many of the benefits of both defined-benefit and defined-contribution plans. In our plan, workers have individual, portable accounts that are professionally managed and spread risk across participants, so no one suffers from bad investment decisions or the poor condition of the economy at a particular point in time. The version of this plan we propose cannot be underfunded by short-sighted politicians, provides fair benefits to all employees (not just some, as current plans do), and protects employees from the market risks that plague retirement plans in the private sector.
The collective defined-contribution plan meets another important objective of pension reform: it is a policy with the potential to transcend traditional ideological boundaries. In the effort to address under-funded pension systems across the country, it is essential that policymakers of all political stripes find a middle ground that is fair to public employees, while saving all citizens from greater pain down the road. The collective defined-contribution retirement plan is one way to do just that.
Matthew M. Chingos is a fellow in the Brown Center on Education Policy at the Brookings Institution.
The United Automobile Workers spent a reported $5 million trying to organize a Volkswagen plant in Chattanooga, Tenn., but ultimately fell short in a 712-to-626 vote. Now, the organization has lodged a formal appeal with the National Labor Relations Board, requesting that the board permit a second election -- a do-over, if you will.
The UAW claims that state officials deprived workers of their right, under Section 7 of the National Labor Relations Act, "to vote in an atmosphere free of coercion, intimidation and interference." The UAW is primarily alluding to comments made by Tennessee senator Bob Corker, who stated, "I've had conversations today and based on those am assured that should the workers vote against the UAW, Volkswagen will announce in the coming weeks that it will manufacture its new mid-size SUV here in Chattanooga."
But as Fred Feinstein, the NLRB's general counsel from 1994 to 1999, has said, "the board doesn't have any jurisdiction over politicians, or anybody outside the plant. So the board can't order them to not say these things." Because nothing can be done to actually stop the alleged problem, the UAW's logic could easily lead to a situation in which election results are continually discounted, costing companies and the government millions of dollars.
Moreover, there is a precedent that applies with regard to this matter. In 2011, the Communications Workers of America won an election at a company called Affiliated Computer Services, but the employer objected on the basis that a New York state senator and a U.S. congressman had made public statements in support of the union. The board disregarded the objection, ruling that "public officials, even public officials involved in the regulation of the employer's industry, like other third parties, are not required to remain neutral and may properly seek to persuade employees." A ruling to the contrary in this case would reveal a blatant pro-union bias on the part of the board.
Even if the board accepts the idea that third-party comments can constitute illegal coercion, the UAW will have to show that the comments sullied the election results. As the Washington Post reports, "in order to prove that the threat actually moved the dial, the UAW will likely have to find workers who'll say they felt threatened enough to vote against the union on those grounds." However, this will be difficult, because the ballots were cast in secret and workers had many reasons for voting against the union (including, as the Post reports, a "two-tiered wage system for new hires").
And if Corker's comment illegally influenced this election, other developments could come into play next time around: Some Volkswagen workers have claimed that the company and the union are working together to organize the plant, and a board member from Volkswagen has threatened to withhold future investments if workers don't organize.
Despite all of this, there is still a chance that Obama's labor board could rule in favor of the UAW. From a legal perspective, the board is supposed to act as an independent, non-partisan regulatory body, but this NLRB is stacked with Obama appointees who have a track record of making partisan decisions. Some of its members have strong union ties -- the general counsel previously worked for the International Union of Operating Engineers (IUOE) -- and nothing can be certain.
If the NLRB complies with this absurd request, it will be overturning its own precedent purely for the political advantage of Big Labor. It will be making a mockery of the workplace-election process and discounting the votes of Volkswagen employees.
Fred Wszolek is a spokesperson for the Workforce Fairness Institute.
One in three patients in skilled nursing facilities suffered a medication error, an infection, or some other type of harm related to their treatment, according to a government report released today that underscores the widespread nature of the country's patient harm problem.
Doctors who reviewed the patients' records determined that 59 percent of the errors and injuries were preventable. More than half of those harmed had to be readmitted to the hospital, at an estimated cost of $208 million for the month studied — about 2 percent of Medicare's total inpatient spending.
Patient safety experts told ProPublica they were alarmed because the frequency of people harmed under skilled nursing care exceeds that of hospitals, where medical errors receive the most attention.
"(The report) tells us what many of us have suspected -- there are vast areas of health care where the field of patient safety has not matured," said Dr. Marty Makary, a physician at Johns Hopkins Medicine in Baltimore who researches health care quality.
The study by the inspector general of the U.S. Department of Health and Human Services (HHS) focused on skilled nursing care -- treatment in nursing homes for up to 35 days after a patient was discharged from an acute care hospital. Doctors working with the inspector general's office reviewed medical records of 653 randomly selected Medicare patients from more than 600 facilities.
The doctors found that 22 percent of patients suffered events that caused lasting harm, and another 11 percent were temporarily harmed. In 1.5 percent of cases the patient died because of poor care, the report said. Though many who died had multiple illnesses, they had been expected to survive.
The injuries and deaths were caused by substandard treatment, inadequate monitoring, delays or the failure to provide needed care, the study found. The deaths involved problems such as preventable blood clots, fluid imbalances, excessive bleeding from blood-thinning medications and kidney failure.
One patient suffered an undiagnosed lung collapse because caregivers failed to recognize symptoms. The patient later had a reaction to medication and a blood clot and had to be transferred to a hospital.
Projected nationally, the study estimated that 21,777 patients were harmed and 1,538 died due to substandard skilled nursing care during August 2011, the month for which records were sampled.
Medicare patients "deserve better," said Sen. Bill Nelson, D-Fla., chairman of the U.S. Senate Special Committee on Aging. Nelson said he would push for better inspections of the facilities. "This report paints a troubling picture of the care that's being provided in some of our nation's nursing homes," he said.
The report said it is possible to reduce the number of patients being harmed. It calls on the federal Agency for Healthcare Research and Quality and the Centers for Medicare & Medicaid Services (CMS) to promote patient safety efforts in nursing homes as they have done in hospitals.
The authors also suggest that CMS instruct the state agencies that inspect nursing homes to review what they are doing to identify and reduce adverse events.
In its response to the report, CMS agreed with the findings and noted that the Affordable Care Act requires nursing homes to develop Quality Assurance and Performance Improvement programs. The agency's quality improvement work includes a website for nursing homes that was launched in 2013.
A "skilled nursing" facility provides specialized care and rehabilitation services to patients following a hospital stay of three days or more. There are more than 15,000 skilled nursing facilities nationwide, and about 90 percent of them are also certified as nursing homes, which provide longer-term care.
As hospitals have moved to shorten patient stays, skilled nursing care has grown dramatically. Medicare spending on skilled nursing facilities more than doubled to $26 billion between 2000 and 2010. About one in five Medicare patients who were hospitalized in 2011 spent time in a skilled nursing facility.
John Sheridan, a member of the American College of Health Care Administrators, which represents nursing home executives, called the report valuable but noted that it sampled only a small number of patients. He questioned whether the findings apply broadly to skilled nursing facilities.
Sheridan also strongly disagreed with the report's observation that there's less known about patient safety in skilled nursing facilities compared to hospitals. He said Medicare has robust inspections of nursing homes it certifies -- they take place annually or when there are complaints and are usually conducted by state contractors. Medicare also keeps detailed data on the violations, he said. (ProPublica's Nursing Home Inspect makes it easy to search and view Medicare inspection reports.)
Sheridan agreed that skilled nursing facilities could improve, but said the caregivers face a daunting task and work diligently despite low reimbursements Medicare pays to the facilities.
"They don't go to work every day to cause an adverse event," Sheridan said of the providers. "They do it to care for the residents there. They do it with sacrifice and love."
Dr. Jonathan Evans, president of the American Medical Directors Association, a group focused on nursing home care, said that while he doesn't dispute the estimates in the inspector general's report, they reflect problems that exist throughout the health care sector.
Evans said that patients receiving skilled nursing care are leaving hospitals sooner and that many are not medically stable and have more intensive needs. Nursing homes, originally designed for long-term patients who did not need intensive care, have been slow to adapt, Evans added.
"You have a system of long-term care that's trying to retrofit to be a system for post-acute care," he said. "The resources to care for them and commitment from those sending them from one facility to another haven't kept pace."
Evans called the study significant and said he hopes it raises awareness and sparks improvements.
Makary, the Johns Hopkins doctor, said the patient safety movement has been more focused on problems at hospitals than in nursing homes.
A 2010 report by the HHS inspector general estimated that 180,000 patients a year die from bad hospital care, and other estimates have been higher. The patient safety research community has focused on reducing bloodstream infections and surgical errors at hospitals but has done less to address issues specific to nursing homes, Makary said.
Developing metrics to track improvement would be more effective than annual inspections, which don't do a good job of capturing a facility's everyday performance, Makary said.
Patient advocates said the study verifies what they've heard from skilled nursing patients and their families. Richard Mollot, executive director of New York's Long Term Care Community Coalition, said he was "flabbergasted" by medication errors, bedsores and falls that were identified in the report.
They are prominent problems that nursing homes should be "well versed" to address, he said.
Mollot said the report should have more forcefully called for better enforcement of the existing standards in nursing homes.
States inspect nursing homes on behalf of Medicare every year and when there are complaints, he said, but some inspectors are tougher than others. Medicare's current standards of care are good, he said, and "if they were enforced we wouldn't have these widespread problems."
About 40 percent of people over age 65 will spend time in a nursing home at some point, Mollot said. Hopefully, he said, the inspector general's report will help the public see that care needs to improve.
"They are dangerous, dangerous places," he said.
This piece originally appeared at ProPublica, where Marshall Allen is a reporter. ProPublica is investigating health care quality and welcomes your input by filling out its questionnaires for patients who've been harmed and for medical providers.
Jed Graham of Investor's Business Daily has the numbers on workers making $7.25 to $10 an hour from the Current Population Survey. The lines represent the number of people working slightly more and slightly less than 30 hours a week, the Obamacare threshold for "full time," relative to the numbers in late 2011:
The ranks of $7.25-to-$10 hourly wage earners usually working 25- to 29-hour weeks in their primary job surged 17%, or 224,000, from the fourth quarter of 2012 to 2013.
Meanwhile, those very low-wage earners typically clocking 31- to 34-hour weeks in their main job fell 11%, or 84,000. Within this wage range, 35 hours-plus workers declined 7%, or 725,000.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The Brady Campaign has a report today commemorating 20 years of background checks for guns bought at licensed dealers. It says that the law stopped 2.1 million firearm purchases. "Countless lives have been saved, and crimes have been prevented thanks to the Brady law," it claims.
If the law has indeed saved any lives, this is the wrong way to go about proving it. As I wrote previously:
Actually blocking sales is not the point of background checks. In order for a sale to be blocked, a prohibited buyer needs to willingly fill out the paperwork for a gun purchase and have it run through the system. Doing this is incredibly stupid, so it almost never happens. When it does happen, it's usually because the person didn't realize he was prohibited. (For example, his crime was a long time ago, or he was dishonorably discharged from the military.) Prosecutions rarely result.
That's the problem with touting blocked purchases, even if 2.1 million over 20 years sounds impressive: About 94 percent of failed checks are "not referred to field, overturned, or canceled" when law enforcement looks into them, and some more cases fall apart after that, too. These folks aren't necessarily allowed to buy guns, but they also aren't seen as threatening enough to prosecute, and they're left free to pursue guns elsewhere if that's what they want.
As I've also said in this space, it matters greatly what a criminal's second-best option is. If guns are easily available without going through a licensed dealer and passing a background check -- by, for example, buying from a private seller -- illegal sales are rerouted rather than squelched. (Think of it like closing a Burger King to fight obesity when there's a McDonald's next door, as opposed to closing the only fast-food joint in a ten-mile radius.) Indeed, most criminals didn't get their guns from dealers even before the Brady law. Even Philip Cook and Jens Ludwig, two researchers often labeled "anti-gun," failed to find any crime-reduction effect when they compared states affected by the Brady law with those that had previously passed similar measures at the state level.
The report advocates universal background checks, as opposed to checks only at licensed dealers, dredging up the widely cited if highly disputed statistic that 40 percent of gun sales take place between private parties. But whether the correct number is 40 percent or much lower, I think this is a considerably more promising approach.
Well, if it's implemented correctly. If we could require checks to be conducted before guns change hands, and if we could trace crime guns and consult records to prove that people didn't follow the law, we could make it a bit harder for criminals to get their hands on guns. But those are very difficult things to do, both practically and politically.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
Recently a federal court ordered the removal from YouTube of Innocence of Muslims, the controversial video that you may remember from the Benghazi crisis. The problem with the video was not that it is anti-Muslim, but that it violates the copyright of an actress who appeared in it. This is a "preliminary injunction," and the case is still being litigated.
A number of commentators have voiced concerns about censorship and overly strong copyright protections, including R Street's Jeremy Kolassa. I'm a little less concerned. Here are four facts that seem highly relevant to me:
1. The actress is not claiming to have a copyright interest in the movie as a whole. She is merely claiming ownership of her own appearance in it. This type of thing is not litigated often, because people who appear in movies -- or work on them behind the scenes -- typically sign contracts beforehand. (Surprisingly, there doesn't even seem to be much case law about whether a performance is a copyrightable "work" at all.) Further, the problem here can be addressed by removing or re-shooting her segment, which is 5 seconds of the 13-minute film.
2. The actress says she was filmed for a role in a fictional story, not an anti-Islamic "documentary." From the decision:
Cindy Lee Garcia ... agreed to act in a film with the working title "Desert Warrior." ... "Desert Warrior" never materialized. ... Garcia first saw "Innocence of Muslims" after it was uploaded to YouTube.com and she discovered that her brief performance had been partially dubbed over so that she appeared to be asking, "Is your Mohammed a child molester?"
3. Garcia never signed any sort of agreement saying the filmmaker could do whatever he wanted with the footage.
4. The ruling addresses, fairly persuasively in my view, the biggest concern that has been raised about it.
Some have worried that this could invite a rash of lawsuits -- anyone who had any role in a film or newspaper story can now say their copyright has been violated. Signed agreements address this in many situations, but there are indeed questions raised about "implied" agreements, such as when a source talks to a journalist.
I find the court's argument on this point convincing, though it will be hard to draw a precise line:
If the scope of an implied license was exceeded merely because a film didn't meet the ex ante expectation of an actor, that license would be virtually meaningless. A narrow, easily exceeded license could allow an actor to force the film's author to re-edit the film -- in violation of the author's exclusive right to prepare derivative works. Or the actor could prevent the film's author from exercising his exclusive right to show the work to the public. In other words, unless these types of implied licenses are construed very broadly, actors could leverage their individual contributions into de facto authorial control over the film.
Nevertheless, even a broad implied license isn’t unlimited. Garcia was told she’d be acting in an adventure film set in ancient Arabia. Were she now to complain that the film has a different title, that its historical depictions are inaccurate, that her scene is poorly edited or that the quality of the film isn’t as she’d imagined, she wouldn’t have a viable claim that her implied license had been exceeded. But the license Garcia granted Youssef wasn’t so broad as to cover the use of her performance in any project. Here, the problem isn’t that “Innocence of Muslims” is not an Arabian adventure movie: It’s that the film isn’t intended to entertain at all. The film differs so radically from anything Garcia could have imagined when she was cast that it can’t possibly be authorized by any implied license she granted Youssef.
A clear sign that Youssef exceeded the bounds of any license is that he lied to Garcia in order to secure her participation, and she agreed to perform in reliance on that lie. Youssef’s fraud alone is likely enough to void any agreement he had with Garcia. But even if it’s not, it’s clear evidence that his inclusion of her performance in “Innocence of Muslims” exceeded the scope of the implied license and was, therefore, an unauthorized, infringing use.
Essentially, anyone filing a follow-up lawsuit will have to show that they did not grant an implied license for their work to be used, and implied licenses will be construed broadly. Very rarely will a performer be able to make a case as strong as this one.
Remember the video that President Obama cited as the reason behind the attacks in Benghazi? The 9th Circuit Court of Appeals has voted 2-1 to order the video be taken down from YouTube -- though not for the reasons you may expect.
The court ruled that YouTube must take down Innocence of Muslims, the (rather poorly made) "documentary" about the religion of Islam, not on the basis that it was anti-Muslim, but because of a copyright violation. The plaintiff in the case, Cindy Lee Garcia, claims that she was hired to work on a totally different film, and was misled about the nature of the final product. Indeed, her short appearance had her own voice dubbed over by another actor. On top of this, Ms. Garcia claimed she was receiving death threats for her part in the film.
Ms. Garcia is asking to have the video taken down because of her copyright claim, even though she was barely in the video itself and played a very small role in its production, with no control over shooting, writing, or post-production. Are we now going to allow bit players the power to take down an entire production? What about the other actors and personnel involved in the production? What if they disagree with the removal of a production, as it may infringe on their rights to their performances (and potential royalties)?
Marvin Ammori notes that this ruling creates a host of thorny legal questions. Indeed, people won't even need to sue to take things down; DMCA takedown notices will expand greatly. He also notes that uncertainty over who owns copyright will increase, which will make it harder to actually produce anything. He brings up the tragedy of the anti-commons, which really is a tragedy when it involves speech.
Corynne McSherry at the Electronic Frontier Foundation also makes a great point that Ms. Garcia's copyright claim is on very shaky ground:
Second, the merits of this case are indeed doubtful. Very doubtful. Garcia is claiming a copyright interest in her brief performance [5 seconds from a 13 minute production], a novel theory and one that doesn't work well here. After all, Garcia herself admits she had no creative control over the movie, but simply performed the lines given to her. There may be a context where an actor could assert some species of authorship, but this doesn't seem to be one of them. Movie makers of all kinds should be worried indeed.
This is, of course, before we get to the elephant in the room: the First Amendment. The majority opinion asserts there is no problem with this because the First Amendment doesn't protect copyright infringement, but we're talking about a controversial video inextricably linked to an event of great historical significance, as well as one explicitly promoting a particular opinion and viewpoint. Do copyright questions override these? That would seem to put free speech on rocky ground in the United States.
This is certainly not a clear-cut issue by any means. We cannot forget that Ms. Garcia was allegedly misled by the video's producer into working for what was essentially another production, which was recut and redubbed into Innocence of Muslims. But at the same time, couldn't there have been alternative solutions? Did the video really need to be taken down (and necessitate a court order forcing YouTube to "take all reasonable steps to prevent further uploads")? Could Ms. Garcia really claim a copyright when she was just performing a set of instructions for an incredibly tiny part in this production? What sort of Pandora's Box are we opening with this decision?
Read the full court opinion here.
Jeremy Kolassa is an associate policy analyst for the R Street Institute. This piece originally appeared on the R Street blog and is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
One, here are the rate changes. It's kind of odd -- loosely speaking, the people in the 10 percent and 25 percent brackets won't see a rate change, while everyone else will get a cut:
Two, the plan increases the standard deduction so that 95 percent of taxpayers no longer have to itemize. That simplifies things, but it also neuters the incentives created by tax deductions, which can be good or bad.
The case of charity is especially interesting. The blueprint notes that people who donate to charity "will no longer need to keep all the receipts and fill out all the forms" -- but that's because they get the same deduction whether they donate to charity or not. Even for taxpayers who itemize, the plan would limit the charitable deduction to contributions exceeding 2 percent of their total income. (The example they use: If you earn $100,000 and donate $10,000, you can deduct $8,000.) The blueprint claims that the plan would still increase charity, because it would improve the economy and the economy in turn is linked to charitable donations.
Three, the plan keeps the mortgage deduction for the most part. Current mortgages will be unaffected, but in the coming years the cap will be reduced from $1 million to $500,000. Not a bad move, but critics of the deduction wanted to see a lot more.
And four, at a time when even a lot of conservatives support expanding the Earned Income Tax Credit -- which today is fully refundable -- this plan would pare it back, making it, in the blueprint's words, "a credit against actual payroll taxes paid, strengthening the program's integrity and better guarding against mismanagement."
There's lots more, of course: repealing the Alternative Minimum Tax, cutting the corporate rate, condensing the various education deductions, eliminating the "double tax that only applies to the overseas earnings of U.S. companies if those companies want to reinvest those earnings in the United States" ...
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen