Comedian John Oliver's rant against mandatory minimums is making the media rounds. Watching it, I was struck by the story of Kevin Ott (3:32), who says he was given life in prison for three ounces of meth. Oliver scoffs that "we're treating him like he's Season 5 Walter White when he's barely Episode 1 Jesse Pinkman."
Oliver's right: The drug war isn't going well, to say the least, and some aspects of mandatory minimums need reform. But was Ott really put away for life just for having three ounces of meth? And how much meth is three ounces anyway?
A key fact is that Ott had a significant criminal history. He had previously been convicted of battering his wife and of drug and weapon violations. During his final arrest he was found with a loaded handgun in addition to the drugs, despite the fact that he was under court supervision. But it was drugs alone that got him the life sentence in 1997: Oklahoma's "three strikes" law applied whenever a person was convicted of a drug felony and had two previous drug-related convictions on his record. (The law was weakened somewhat just a few months ago.)
As for the amount of meth, in an appeals-court decision the precise amount is reported as 102.8 grams, actually closer to four ounces than three. (There are about 28 grams in an ounce.) The document also says meth is normally sold in 1/16- or 1/8-ounce packages, which amounts to 1.75 to 3.5 grams. Yes, Ott was more akin to early Jesse Pinkman than to late Walter White — and perhaps drugs should be entirely legal — but 102.8 grams is a fair amount of meth to have sitting around at a single point in time: about 30-60 sales, worth thousands of dollars. The court said, quoting from a previous decision: "This is not a minor drug offense but a major crime."
Illegal drugs are typically sold in fractions of an ounce, sometimes even fractions of a gram. (This report about the drug trade in Ohio has some more up-to-date numbers collected during interviews with drug users.) When something is sold in quantities that small, even seemingly tiny amounts can be substantial.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The age of technology may be upon us, but not all are convinced we should cast our votes online. The Heritage Foundation has released a paper, "The Dangers of Internet Voting," chronicling other countries' experiences with online voting and arguing that America is not ready for it.
We talked with Hans von Spakovsky, the paper's author, to learn more. The interview has been edited for length and clarity.
Who is proposing online voting, and how likely is it to become reality?
There was a big push in the country in the late 1990s and early 2000s over this, and then that kind of subsided when a series of reports came out from the National Science Foundation and other task forces that said this is dangerous, we shouldn't do this. But in recent years many states have allowed, not Internet voting, but the return of voted ballots by e-mail, and other states are considering joining them. Other folks are talking about using Internet voting for primaries.
There's also a real push with regard to overseas voters. I'll be the first to say we have a real problem with military voters, but proposals to allow them to vote over the Internet or to allow e-mail return of ballots are not a good idea, because they would make our systems very vulnerable.
What would the process for online voting be like?
The District of Columbia several years ago was going to provide an online voting capability that would allow people to go to a website, set up new registration, check in, and then cast a ballot, and they'd be able to cast it from their home computers or their work computers. They were very confident that they had a very secure system and opened it up in a mock election, and they challenged hackers to try to get into it. It was almost immediately breached, and the election officials didn't even realize that folks had been able to get into the system.
Can you elaborate on the various cybersecurity issues here? Is there an argument that the increase in turnout would be worth the risk?
The problem with Internet voting is kind of inherent in the technology itself. Hardware engineers, software engineers, and computer scientists almost overwhelmingly say there's almost nothing that can be done, given the current state of the technology — the way the Internet is designed — to actually make a safe system. Those risks far outweigh any possibility that it might increase turnout, and actually, there's evidence from some other countries that have actually tried Internet voting that it doesn't really increase voter turnout. It just makes it easier for people who would vote anyway to cast their ballot, but it does it at a much greater risk.
Can you summarize some of the other countries' experiences?
Estonia in 2005 became the first country to offer Internet voting in a national election. They've used it a number of times since then. And they've done that despite the fact that a team of computer scientists at the University of Michigan — who, by the way, were the same people who easily breached the proposed District of Columbia system — went in and identified numerous major security risks and vulnerabilities in the Estonian system and recommended its immediate termination. The biggest problem they saw was that hackers, particularly dedicated, well-organized hackers, such as a foreign agency, perhaps in Russia or China, could not only get into the system and manipulate election results, but likely would be able to do it without detection, and that makes that kind of system even more dangerous.
The same kind of system was proposed in the neighboring country of Latvia, and there, they said we are not doing this, because with the current technology, it's not possible to ensure the security of the Internet voting process.
Some of the problems you highlight happened more than a decade ago. Do you know if these countries have improved their systems since then?
There's no indication that they have. Other countries that have tried it have stopped doing it after having problems. France tried this just two years ago in a mayoral primary in Paris. Again, the backers of the system said it was fraud-proof, that it was ultra-secure; however, reporters were able to breach the security of the system and vote several times using different names, and in fact, one of the reporters was able to vote five times in that primary under the name of the former French president, Nicolas Sarkozy.
Why has there not been another test program since D.C.'s was hacked in 2010?
That one was particularly interesting, because when the Michigan team got into the D.C. system, they found hackers from other parts of the world trying to get into the system, and that exposes one of the great dangers of an Internet voting system.
Everyone knows very well the huge breaches of security we just had with not only the Office of Personnel Management, but now the IRS. It was suspected in the OPM breach that this was part of a special team that the Chinese government set up some years ago. There have been a number of newspaper articles that have talked about this — how professional hackers are being used by the Chinese government. This kind of system in a U.S. election would be a prime target, not just for individual hackers, but for a government trying to get into the system to manipulate elections.
What is your recommendation for those who want to switch to online voting? How can online voting be safe?
Given current technology, online voting cannot be safe. All they have to do is read the various reports that have been done by people who are experts in the field — computer scientists, software engineers — who almost overwhelmingly say that the current system cannot be made secure.
And to those who say we do a lot of e-commerce now over the Internet — that system itself is not very secure. There are billions of dollars of fraud committed with e-commerce, and the requirements for that are quite different. If someone has breached your bank account through the Internet, when you go and check your bank statement, you'll be able to figure that out. If someone intercepts the vote over the Internet that you're trying to cast at a website, there's no way for you to check whether that's happened, or whether your vote has been changed or not. There is just no way to combine security and the anonymity that is required for the secret ballot.
Is there anything I didn't ask but should have?
This is not really a partisan issue. A lot of election issues, particularly regarding the rules, seem to unfortunately devolve down into different party issues. This is not one of those. This is something that people of all political parties ought to realize would be a very dangerous development in America, and it is not one that we should encourage.
Courtney Such is a RealClearPolitics intern.
When he announced his candidacy for president in mid-June, Donald Trump made the provocative assertion that Mexican immigrants are "bringing crime." The comment gained greater resonance when, two weeks after Trump's speech, Kathryn Steinle was shot and killed in San Francisco by an illegal alien from Mexico. The alien, Francisco Sanchez, had been in local police custody back in April, and federal officials intended to deport him. But Sanchez was instead released due to San Francisco's "sanctuary city" policy.
In response to the resulting outcry, some mainstream media outlets correctly noted that, although good data are hard to come by, the overall immigrant crime rate does not appear to be especially high. But then advocates of mass immigration went much further, making wild claims that Mexican immigrants have a minuscule crime rate that somehow even suppresses native crime. Only an uninformed rabble-rouser would worry about criminals crossing our borders, according to immigration enthusiasts.
The truth is more complex. In a detailed report, a colleague and I have explained why it is very difficult to measure immigrant crime. There is research showing that immigrants do commit a disproportionate share of crime, but there is also research showing that the opposite is the case. Census Bureau data collected on the institutional population (such as those in prisons and jails) might be a way to at least measure incarceration rates in an unbiased fashion. But as we explained in the report mentioned above, the Bureau's ability to record whether the institutionalized are immigrant or native broke down in the past, and it is still not clear whether this problem has been entirely corrected.
There is also the issue of what should be the proper benchmark for measuring immigrant crime. As we point out in our crime study:
In social science research, raw numbers need to be placed into some kind of context, often by comparing one population of interest to another. Assuming one can measure immigrant crime, the next question that arises is: To what should it be compared? This is an important question because crime rates among natives differ widely by group. For example, the share of native-born black men arrested or incarcerated is dramatically higher than for all other groups… However, the discrimination and racism black Americans have experienced and the severe social problems that exist in some black communities make this population unique when it comes to the issue of crime. One can reasonably ask whether it makes sense to compare immigrants, who are overwhelmingly not black, to black Americans who have a unique historical experience.
Data collected by the Census Bureau in 2013 show that 23 per 1,000 male Mexican immigrants ages 18 to 40 are institutionalized (mainly in jails or prisons; few people at that age are in nursing homes or similar institutions). This compares to 31 per 1,000 for native-born men in this age group. However, looking at only non-black native men (18-40) shows an incarceration rate of 20 per 1,000. This is somewhat lower than the rate for Mexican-born men and a good deal lower than the 38 per 1,000 for U.S.-born men of Mexican ancestry. It is also worth noting that native-born men of Mexican ancestry are included in the figure for non-black natives; if they are excluded, the rate for the remaining natives would be 18 per 1,000. The rate for native-born whites alone is 16 per 1,000.
All this matters because studies that examine what happens to crime rates in predominantly black areas when immigrants move in are looking at communities with crime rates that reflect the marginalization and unique situation of black Americans. When it comes to crime, these communities are statistical outliers. So even if crime falls as the immigrants arrive, it is somewhat misleading because the baseline rate was unusually high in the first place. Further, the impact of Mexican immigration on other communities, with much lower pre-existing crime rates, could be very different.
Two other points are worth making with regard to immigrant crime. First, the crime rate of immigrants generally, or illegal immigrants in particular, is irrelevant to the issue of sanctuary cities, which as a matter of policy release illegal immigrants from jails even after Immigration and Customs Enforcement asks them to hold these individuals. That policy is directly responsible for Steinle's death and for the deaths of many others over the years — regardless of statistics about overall crime rates. The public is right to be outraged.
Second, immigration is supposed to benefit our country. Therefore the goal of policy is to select immigrants that have much lower crime rates than natives, not rates that are somewhat higher or even somewhat lower than natives'. Given the strong correlation between crime and educational attainment, moving away from our current system that selects immigrants based primarily on whether they have a relative in the U.S. to one that emphasizes education levels would be one way to move toward such a goal.
Steven A. Camarota is director of research at the Center for Immigration Studies. This piece originally appeared on CIS's blog.
OB/GYN Jen Gunter tells us that the Planned Parenthood fetal-tissue donation debate has nothing to do with "baby parts," because medical professionals don't use the word "baby" until the child has been born. The specimens are instead the "products of conception."
This is not how language works. The medical community is perfectly free to restrict words' meanings in its own conversations and publications, but it has no right to impose those restrictions on the wider debate. And even a cursory analysis of common usage reveals there's nothing unusual about referring to an unborn child as a "baby," even in contexts that have nothing to do with the politically charged issue of abortion.
Anyone who's ever known a pregnant woman has heard her talk about how she can "feel the baby kick" in her stomach. Here is an example from 1947, and this construction has only become more popular since then, according to Google Ngram data for American English.
We can trace this type of thing back further — again using nothing but Google's Ngram tool — if we include the etymologically related "babe," which until the mid-1800s was more common than "baby" in the U.S. From 1806: "The uncommon motion of the babe in her womb, was a token of the extraordinary emotion of her spirit under a divine impulse." It's also been common for decades to refer to a miscarriage as "losing the baby."
Again, the medical community can use language however it wants. And none of this speaks to the broader question of whether what Planned Parenthood is doing is immoral or illegal. But linguistic preferences do not magically become facts when the people holding them are doctors.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The Survey on the Future of Government Service, released last week by Vanderbilt University's Center for the Study of Democratic Institutions, reveals significant problems with the federal workforce. According to the data, collected from 3,551 federal executives, the civil service is struggling to recruit and retain America's best and brightest — and agencies are plagued by underperforming employees who are difficult to fire.
We have seen the by-products of this malfunctioning personnel system for years. The Department of Veterans Affairs has lurched from one crisis to another. Government-wide improper payments reached a new height of $124.7 billion in 2014, fueled by mistakes made by the Department of Health and Human Services and the U.S. Treasury. The General Services Administration, for its part, is unable to provide a correct inventory of the number of federal properties, let alone unload the unneeded ones.
The true insight of this survey is that these crises are predictable; our current civil-service system is not structured to be highly productive. Politicians have loaded the system with other objectives, such as job security.
Here are four more notable findings from the study.
1. The federal workforce is inadequately skilled and likely to stay that way.
Recruitment is a problem for the public sector — 42 percent of federal executives believe their agency is unable to recruit the best employees. Troublingly, 39 percent of respondents think inadequately skilled federal workers represent a significant obstacle for agency mission fulfillment.
Recruitment is hindered by a lack of opportunity (cited by 54 percent of respondents), "rigid civil service rules" (54 percent), and salary (53 percent). However, only 32 percent of federal executives report lacking a qualified applicant pool. So not all is lost; high performers are still interested in public-sector jobs despite these negatives. But there are barriers (like the cumbersome USAjobs site and baroque agency hiring practices) that keep employers from effectively landing them.
2. Underperforming federal managers and employees are seldom fired.
Even if agencies streamlined recruitment, they still would be stuck with low-performing employees who are nearly impossible to dismiss. Some 64 percent of respondents said subpar managers are rarely (if ever) dismissed, and 70 percent said the same for non-managers. Private companies face far fewer obstacles, with 52 percent of private-sector executives surveyed saying non-managers could be reassigned or dismissed within six months. Only 4 percent of public-sector executives said the same.
3. Federal executives do not feel that they have been properly trained.
Not only do the executives feel their workforce is inadequate, but many also don't feel up to the task of managing those employees. Fewer than three-quarters of career executives and just 45 percent of appointees felt they had received "sufficient training and guidance on how to manage" federal employees.
The appointee/career split is stark, but unsurprising. Political appointees often come to their positions with little background in the civil service and its maddeningly complex thicket of statutes and rules.
Though career executives felt more confident in their abilities than appointees, the percentages are still distressingly low. It prompts the question: How are people getting to such high levels without sufficient training?
4. Agencies are poaching leaders from one another.
Some agencies are lucky enough to hire the best and brightest. But once they do, the battle to keep them begins. Of the executives polled, 42 percent of appointees and 39 percent of career executives said they've been approached about other positions within the last year. Who was top poacher? Other federal agencies.
Having executives with experience at multiple agencies is a good thing, so long as there are enough of them to go around. As agencies struggle to pull in new talent, they are turning to beggar-thy-neighbor hiring to get their workforces up to par.
This behavior is unsurprising. Since the 1960s, federal spending has quadrupled while federal-employee counts have remained steady. Congress and the federal courts have created a complicated system of hiring and firing that prevents agencies from acquiring and maintaining a skilled workforce.
The new survey reveals a high degree of variability among agencies, demonstrating that the situation is not hopeless. When asked about employee retention, 66 percent of the executives from one agency said they were able to retain top employees, while only 30 percent of executives at another agency reported the same. Some agencies, like the Federal Trade Commission, are doing a particularly good job. In the Best Places to Work Index of 2014, the FTC scored highly, and the survey confirmed the agency's executives felt it could recruit top performers.
This variability was also found in a recent GAO survey that gauged federal employees' level of engagement. The agency breakdowns are similar to those in the new study, with the VA and Department of Defense scoring low while the FTC maintains high engagement.
As the Vanderbilt survey points out, we can easily examine which agencies are succeeding to determine best practices to implement at others. We have the data for this type of reform; we just need to use it.
Chloe Booth is a research assistant, and Kevin R. Kosar is the director of the Governance Project, at the R Street Institute.
Reducing young people's access to marijuana was one rationale offered by the movement to legalize and regulate the drug. Therefore, a crucial question is whether that promise has been delivered — both in states that run "medical marijuana" programs and in states that have legalized recreational marijuana.
Many early studies of medical marijuana found little effect on juveniles. But the most recent, comprehensive, and methodologically careful studies, reported in June, show exactly what opponents feared — an adverse impact on youth from both medical marijuana and outright legalization.
The clearest finding of a negative impact is from the school-based survey "Monitoring the Future," which was used to examine California's decision to decriminalize marijuana in 2010. Youth who were 10th graders when the law changed showed, in comparison with youth in other states, 25 percent higher current use of marijuana by the time they were in 12th grade.
That study was complemented by a sophisticated longitudinal analysis using data from the National Survey on Drug Use and Health, which was able to capture school dropouts — likely heavier users. Medical-marijuana laws, the authors conclude, "amplify" rates of youth marijuana use, arguably because they reduce social stigma and allay fears of negative health outcomes.
These results fit in with previous research better than one might think. Media reporting on these studies is biased; commonly, reports with "good news" for legalization are featured, those finding danger, ignored. The actual academic literature on this subject is highly contested, which isn't surprising because studying the impact of liberalized marijuana laws is not easy.
For example, it's difficult to "bound" the impact of more accessible marijuana, which readily moves across state lines and is used by neighboring youth. In addition, the specifics of the programs (eligibility, penalties, etc.) matter. With medical marijuana in particular, because of different rules of eligibility and distribution, lumping all programs together and looking for their effect turned out to be not particularly revealing.
But that's not to say we knew nothing until last month. Earlier research showed that a generalized decline in perceptions of risk in using marijuana, as well as norms of social disapproval, imply greater marijuana use, and likely follow from official approval. A study of marijuana legalization in Colorado examined these declines, while also presenting some evidence of increased marijuana abuse and dependence. Moreover, some studies found increases in adult use, as well as youth initiation, in association with medical marijuana.
Further, most studies found that states with medical-marijuana programs had significantly higher rates of youth use, and correspondingly lower perceptions of risk in using the drug, though these differences seemed to have pre-dated the programs. There was also evidence that increased childhood exposure to marijuana edibles was associated with medical-marijuana programs — episodes of poisoning increased at four times the rate in states with such programs compared with nationwide increases.
Certainly, it's a broad literature with many conflicting results, and until June the evidence was less than convincing that medical-marijuana programs produced greater youth marijuana use. But now, as we have seen, the research profile has changed. Emerging studies of marijuana commercialization show pronounced negative effects. And more comprehensive studies of even medical marijuana show harm to youth.
The legalization movement must confront this new reality. Expressions of relief that their "reforms" do not actively damage youth must be revisited, as current evidence has disabled that comforting assurance. It remains stunning that media decline to report these troubling findings to the public.
David W. Murray and John P. Walters direct Hudson Institute's Center for Substance Abuse Policy Research. They both served in the Office of National Drug Control Policy during the George W. Bush administration.
The Fourth Amendment protects people from unreasonable searches and seizures, requiring that warrants for these activities be backed up by probable cause. But the proliferation of computers and electronic data has raised new questions. What is an unreasonable search and seizure of computer files?
We recently spoke with Orin Kerr of George Washington University Law School, who argues in a new paper that electronic searches and seizures should be limited by what he calls the "ongoing-seizure approach": Searches and seizures become unreasonable when the government uses data that extends beyond the limits of the warrant. The conversation has been edited for clarity and brevity.
In your paper, you repeatedly mention that Riley v. California was something of a game-changer when it came to electronic seizures. What was the Riley decision? And how did it affect your views on computer searches and the Fourth Amendment?
Riley v. California dealt with how the Fourth Amendment applies to searches incident to arrest. The traditional rule is that when somebody is arrested, the government can search everything on their person for evidence, with no limitations. The question in Riley was whether that rule applies when the item is a cell phone. And the Supreme Court said there's a different rule for cell phones because of the nature of computer searches: Computer and cell-phone searches are so different, so much information is stored there — and such personal information — that if the government wants to search a cell phone incident to arrest, they need a warrant. And the result is a computer-specific rule: one rule for physical searches, another rule for computer searches.
This doesn't really change my view of computer searches, because the Court adopted the approach that I've been saying they should adopt, so I was pleased to see that. It's the first Supreme Court decision on computer search and seizure, and it really points out an important dynamic, which is that computer searches are different in terms of how they're carried out than physical searches. So we need new rules on the traditional limits of the Fourth Amendment in this current environment.
Your paper advocates an "ongoing-seizure approach." Can you tell us about that?
Here's the basic idea: When the government executes a computer search, they usually go into the suspect's home, seize all of their computers, and then take them away for searching later. And they need to do that for practical reasons. It turns out it just takes too long — it can take weeks to search the target's computer — so they usually seize all and search later. And what that means is that the government has access to all of this "nonresponsive" information, information that doesn't relate to the warrant, that they can search at their leisure back in the government's lab.
My argument is that the government is allowed to seize all that nonresponsive information, but they're not allowed to use information that they find that's outside the scope of the warrant when they search through the electronic information.
That means that if the government gets a warrant for fraud records, they can go into a house, seize the computers, and search the computers for fraud records, but they can't use that search for fraud records as an excuse to look for everything else on the computer. They can't turn that into a general search. When they're back in the lab and they're searching the computer for weeks, they might come across information about other crimes or even just information that's embarrassing. I think that when the government tries to use that information in the ongoing seizure of the nonresponsive information, it becomes the kind of unreasonable seizure the Fourth Amendment prohibits.
Near the end of the paper, you mention that you're not suggesting that the data be destroyed afterwards — you're just saying that it shouldn't be used. What is the difference between having it not be used at all in the future and just destroying it?
I'm skeptical that there's a requirement of destruction, although you could have it. Clearly, if the item is destroyed, it can't be used, but it actually is tricky to figure out what it means to destroy data. Does it mean zeroing out the hard drive? What if there are other copies of the file? I think use is a clearer idea. We could say that disclosure is use, or we could say use as evidence is use.
Use is in some ways a simpler concept to follow, and it doesn't have a time element. If there's a Fourth Amendment rule that the government has to destroy the nonresponsive records at some time, when do they have to do that? Is it a week? Is it a month? Is it a year?
What if the government needs the original computer to show that there was not exculpatory evidence on the nonresponsive files? If you're a defendant charged in court, you're going to say, "I want to see the full computer because I think all the evidence is showing that I didn't commit the crime." And so there are reasons, for trial integrity purposes, to keep the full computer, at least while the case is pending. After the case is over, it's a different story.
So my approach isn't necessarily rejecting destruction — I just don't think you need it in order to ensure that computers aren't searched in an unlimited way.
Could you connect this to some of the political debates that we've seen over the past couple of months over topics like the NSA?
The NSA debates are mostly over what a search or seizure is, not so much when a search or seizure is reasonable. For example, the Section 215 debates about collecting metadata are about whether non-content records held by the phone company are protected at all. If they're protected by the Fourth Amendment, then the program is very likely unconstitutional. The real debate is what is a search, not what is a reasonable search.
This paper, in contrast, is about what is a reasonable search. Everybody agrees that the contents of your hard drives or the contents of your cell phone are fully protected by the Fourth Amendment, whether in your home or in your pocket or even in the cloud.
There might be a similarity in that the big question is: What do you do with information that's not actually evidence of a crime or not actually incident to a terrorist attack? In all of these cases you've got so much data out there. Some of the data is responsive to the government's concerns, some of it is not. If the government necessarily gets lots of information in the hunt for the important information — they need to get the whole haystack to find the needle — the broad question is similar: What do you do with the data, once the government has found its needle or it turns out there is no needle?
Matthew Disler is a RealClearPolitics intern.
The Supreme Court's recent decision upholding federal subsidies to help low-income Americans buy health insurance means health reform is here to stay, and states have no reason to delay taking up the option under health reform to expand their Medicaid programs. At the same time, Medicaid continues to face attacks from critics who would cut it deeply or undermine it structurally.
With all this in mind, and with Medicaid turning 50 this month, now's a good time to take stock of the program. One aspect of Medicaid is especially worth considering: According to a significant body of recent research, it has long-term benefits for the millions of children it has served in the past and the 32 million kids it serves today.
For starters, Medicaid provides cost-efficient and effective coverage for all its beneficiaries, including children; the cost of covering a child under Medicaid is 27 percent less than under private insurance. And participation among children is very high: More than 87 percent of eligible kids participate in Medicaid or the Children's Health Insurance Program (CHIP).
By ensuring that families and children can access primary and preventive care, in addition to emergency care like hospital visits, Medicaid helps people of all ages live healthier lives. For children, the benefits begin even before birth. Comprehensive health coverage for a pregnant woman improves her child's cognitive ability and educational outcomes, the research shows.
Largely because they have access to preventive and primary care, children who are eligible for Medicaid are generally healthier, miss fewer days of school due to illness or injury, and perform at a high level in the classroom.
And these benefits extend up the educational ladder. People eligible for Medicaid in childhood are less likely to drop out of high school and likelier to earn a bachelor's degree than those who weren't eligible.
Covering more low-income children on Medicaid between 1980 and 1990 had an impact equivalent to cutting today's high-school dropout rate by 9.7-14 percent and raising the college-completion rate by 5.5-7.2 percent, one recent study demonstrated.
Those results are dramatic. In fact, the scale of gains from Medicaid access is similar to those from educational reforms like reducing class sizes and adopting schoolwide performance standards.
The results also show that, in addition to transforming the lives of individual kids, covering children produces a workforce with higher skill levels, which is important for fueling stronger economic growth.
Medicaid's also a powerful tool for expanding opportunity for low-income kids. Medicaid coverage narrows the gap in college graduation rates between low-income and higher-income children, research shows.
The benefits of Medicaid in childhood also extend to a healthier and more prosperous adulthood.
Children eligible for Medicaid for more of their childhood were hospitalized 8 to 13 percent less and visited the emergency room 3 to 4 percent less at age 25, a recent study reported. Along with improving overall health and quality of life, this drop in hospitalizations and trips to the emergency room generated considerable savings for the government.
Finally, children eligible for Medicaid have higher earnings as adults, according to a May 2015 study. Like the drop in hospitalizations, the higher incomes of these adults help pay for the program: Each of them contributed $186 more in taxes through age 28 for each additional year they benefited from Medicaid.
Research on Medicaid's long-term benefits is part of a growing body of work showing that safety-net programs promote opportunity for their beneficiaries. In recent months, our research has highlighted the long-term benefits of other safety-net programs.
As Medicaid approaches its 50th birthday, the program clearly has wide-ranging benefits for kids — just one part of an impressive legacy of providing access to health care for millions of Americans while cutting the number of uninsured Americans.
Judith Solomon is vice president for health policy at the Center on Budget and Policy Priorities.
Thirty-five years ago, President Jimmy Carter signed the Staggers Rail Act, which largely deregulated freight railroads. Deregulation reduced rail rates for most shippers, restored railroads to profitability, and eliminated the risk that taxpayers would be on the hook for future railroad bailouts. But unfortunately, several contentious issues perpetually threaten to prompt ill-considered legislation or renewed regulation. A recent report from a Transportation Research Board committee, on which I served, proposes targeted solutions to these problems.
Railroad deregulation was a response to a well-known crisis. By the late 1970s, one-fifth of the nation's track was operated by bankrupt railroads. One-third of the largest railroads were losing money. The federal government spent $7 billion to bail out several Northeastern railroads and combine them to form Conrail. Railroads faced a sea of red ink in spite of the fact that rail rates were rising faster than inflation. The industry's woes even pervaded popular culture as well-known singers like Jimmy Buffett and Arlo Guthrie crooned matter-of-factly about dying railroads.
Bipartisan majorities in Congress chose deregulation to prevent future bailouts. Deregulation generated large productivity increases that allowed railroads to reduce rates substantially for most shippers — and freight railroads became profitable, eliminating the danger that they would require ongoing taxpayer subsidies. Their improved ability to attract capital allowed railroads to invest in maintaining and upgrading the rail system, improving service and safety.
In 2012, Congress appropriated funds for the Transportation Research Board (part of the National Academy of Sciences) to convene the committee I served on. The report addresses the topics that have created the most acrimonious debate since deregulation, including maximum rate protections, mandated switching, shipper service complaints, railroad merger approvals, and annual calculations of railroad "revenue adequacy."
Maximum rate protections. The Staggers Act eliminated rate regulation for the majority of rail shipments. But shippers who lack good transportation alternatives to a single railroad can have their rates reviewed by the Surface Transportation Board. Because railroads incur very high fixed costs to build and maintain the network, they will inevitably have to charge different shippers different markups beyond the marginal cost of serving each shipper. Rate regulation is supposed to ensure that these markups are not "too high" — clearly a distributional issue that requires policymakers to make subjective value judgments.
To determine whether a rate is eligible for review, and then to judge whether it is reasonable, regulators compare the rate to a "cost" figure that pretends many costs of providing the rail network can be allocated to individual shippers or shipments, even though those costs are not caused by an individual shipper or shipment. These cost figures are inherently arbitrary.
For example, the regulators' "cost" calculations imply that railroads lose money on about 20 percent of traffic because it is priced below the cost of providing the service. Railroads have been accused of a lot of evil things since deregulation, but intentionally losing money is not one of them! The nonsensical numbers clearly suggest that the system overestimates the cost of many shipments.
The report recommends that regulators use the rates charged for similar shipments in markets where the railroad faces competition as a benchmark for determining whether a rate is eligible for challenge, instead of comparing rates to arbitrary and misleading cost figures. Rate challenges would go to an arbitrator instead of regulatory hearings. This change would provide a transparent mechanism for determining whether a rate can be challenged, and it would get regulators out of the business of conducting individual rate cases.
Mandated switching. When a shipper is served by only one railroad, shipper groups want regulators to increase its competitive options by ordering that railroad to physically transfer cars to a nearby competing railroad. That way, the customer can access the other railroad's network and prices despite not being located close enough to contract with that railroad for the entire length of the shipment. Regulators have usually declined. The report recommends that shippers should be allowed to propose switching as a remedy in arbitration.
This proposal could allow some increase in the use of mandatory switching, but only in individual cases where a clear problem has been demonstrated to exist — a shipper is "captive" to one railroad and the rate has been judged unreasonable.
Shipper service complaints. Shipper complaints about the responsiveness and timeliness of rail service ebb and flow. Unfortunately, the evidence about alleged service problems is anecdotal, because regulators do not collect shipment-level data on the timeliness of service, like the on-time data collected for airline flights.
The report recommends that regulators should collect these data to help determine whether there is a significant problem. It also recommends a top-to-bottom review of all rail industry data collection to eliminate data reporting requirements that no longer serve a useful purpose.
Railroad merger approvals. The Surface Transportation Board reviews proposed railroad mergers under a vague "public interest" standard that lets regulators consider virtually any factors they believe might be relevant. The report recommends that merger-review authority should be transferred to the Department of Justice's Antitrust Division, which reviews mergers in other transportation industries solely for their effects on competition.
Railroad revenue adequacy. The Surface Transportation Board annually calculates whether individual major railroads are earning revenues adequate to let them attract capital to maintain and improve the rail network. This calculation was important information to have when railroads were going bankrupt and the government wanted to see if deregulation would improve their financial health. Now, the annual calculation has turned into a highly contentious event, because regulators hinted in the past that they might regulate rates more strictly once railroads became "revenue adequate."
The report recommends that this annual ritual should be eliminated, thus eliminating the danger that it could be used as a vehicle to impose public-utility style rate of return regulation on railroads. Instead, the Department of Transportation should undertake a broader assessment of the industry's financial health over a number of years.
Most of these proposals would require legislative changes, and most would require some new (one-time) regulatory proceedings. All of them would help preserve the benefits of railroad deregulation by laying to rest the persistent problems that threaten to derail it.
Jerry Ellig is a senior research fellow with the Mercatus Center at George Mason University and a member of the committee that produced the Transportation Research Board report, "Modernizing Freight Rail Regulation," released in June.
In May, Republicans in Congress announced a joint budget resolution that, if enacted, would repeal Obamacare and balance the federal books in ten years. That is all well and good. Unfortunately, when they pass health-care legislation that actually has a chance of becoming law, they fail to pay for their promises. Can they be trusted to repeal and replace Obamacare with fiscally responsible, patient-centered health reform?
Last month, the Congressional Budget Office estimated that repealing Obamacare would increase the federal deficit by $353 billion over ten years, not counting the economic growth that would result from repeal. Factoring in such growth, the deficit would still rise by $137 billion. So, if Republicans actually repeal Obamacare, they would still have to cut $137 billion of spending from elsewhere in the budget.
Yet the Republicans have not even proposed minuscule spending cuts to pay for their current health-related bills. Their latest lapse involves the medical-device excise tax. This is a 2.3 percent tax on medical devices — from pacemakers to MRI scanners — to help pay for Obamacare.
On June 18, every Republican in the House of Representatives who was present voted to kill the tax, as did about one-fifth of Democrats. With those 46 Democrats joining the majority, the votes in favor added up to 280, just eight short of the number needed to override the promised presidential veto. The bill awaits a vote in the Senate.
President Obama has promised to veto the bill because it is fiscally irresponsible. The CBO estimates that device-tax repeal will increase the deficit by $24 billion in the next ten years. Spending offsets? Zero. Nada. Zilch.
If there is a chance to get rid of any part of Obamacare, it should be taken at the earliest opportunity. So, by all means, Congress should eliminate the medical-device tax. And if a repeal bill can get enough Democrats to override the president’s veto, better yet.
However, Congress has no excuse for avoiding the spending offsets necessary to prevent the deficit from rising. Indeed, finding spending cuts is easier now than it was a few years ago, when the device tax was expected to generate much more revenue than it has. Repealing the medical-device tax without enacting spending offsets does nothing to repeal Obamacare; it just gives us a deficit-financed Obamacare.
This episode is the second time in 2015 that the Republican-majority Congress has voted to increase deficit spending on health care. In April, they jacked up Medicare spending on physicians’ fees — winning the praise of physician lobbyists. At least that time around, they found a few pennies on the dollar to pay for the increase. Still, the CBO estimates the so-called Medicare “doc fix” will add $141 billion to the cumulative ten-year deficit.
In the grand scheme of government expenditures, or even just health spending, these are small sums. For anyone earnestly looking for spending offsets, they are not hard to find. For 2016, the medical-device tax repeal will cost the federal government just $1.8 billion of revenue, while it will spend over $1 trillion on Medicare and Medicaid.
President Obama himself has proposed a way to cut Medicaid spending that should appeal to conservatives. In his February 2012 budget, he proposed reforms to "provider taxes." Because the federal government automatically matches (or, in most states, more than matches) each dollar the state pays for Medicaid, hospitals and state politicians have figured out a neat trick to maximize federal payments. Hospitals agree to a special state "tax," and the money flows into the state Medicaid program — and thereby attracts more federal dollars. Most of that money becomes hospital revenue, so hospitals actually earn more than they are "taxed."
Congress could stop this abuse and thereby save $22 billion over ten years. All it has to do is steal the Medicaid proposal from President Obama’s 2012 budget, and it would pay for almost all of the revenue lost from repealing the medical device tax.
The Obama administration is not known for fiscal discipline, but even the president has had enough of Republicans’ fiscally reckless approach to health spending. It is long past time for congressional Republicans to walk the talk on balancing the budget.
John R. Graham is an Independent Institute senior fellow and a senior fellow at the National Center for Policy Analysis.
Ann Coulter makes important points in her new book about our chaotic, overloaded immigration system. She is right that we have too much immigration, especially of the poor and poorly educated. That immigration is widening the gap between rich and poor and accelerating our decline into a country of haves and have-nots. That the job prospects of young blacks and many others are damaged by the influx of workers willing to work for whatever they can get. That poor enforcement of the law undermines public confidence in our government. That political correctness has muzzled liberals who were once committed to limited population growth in the name of environmental conservation. That immigration is transforming the electorate by expanding the numbers of people who depend on government programs and therefore are likely to vote for Democrats.
These are big issues that need far more examination than they receive from our news media. But the problem is that Coulter writes with such venomous hostility toward immigrants and their liberal enablers that most people will turn away from her screed as they would from a street-corner rant. Instead of creating space for the national discussion we badly need to have, she will once again stake her claim to the true-believing and legitimately frustrated Americans who make all her books bestsellers.
Coulter is the shock jock of the printed page. She writes with wit, hyperbole, and Cassandra's fascination with impending doom. Liberals think global warming is cooking our goose, but Coulter is convinced that immigration will get us first. She thinks it has become a sort of national self-immolation, brought to us courtesy of the soft-headed advocates of open borders and immigration unconstrained by law.
The title of Coulter's new book introduces her gloomy thesis: "¡Adios, America! The Left's Plan to Turn our Country into a Third World Hellhole."
Coulter doesn't seem to like any immigrants except those like her Northern European ancestors. But she is particularly nasty to those from Mexico, the largest immigrant group by far, whose numbers have grown from about 700,000 in 1970 to more than 10 million today.
Coulter takes credit for Donald Trump's Mexican-immigrants-are-rapists rant. "Where do you think all that spicy stuff about Mexican rape culture came from?" she tweeted. Sure enough, Trump called the book "a great read."
Trump's obnoxious denunciation of Mexican immigrants at least included the caveat that "some, I assume, are good people." It was a concession not supported by Coulter, who prefers this description of our southern neighbors: "Mexicans specialize in corpse desecration, burning people alive, rolling human heads onto packed nightclub dance floors, dissolving bodies in acid, and hanging mutilated bodies from bridges."
This is nasty stuff. It's malicious hysteria. Coulter's reporting would benefit from a trip to the Mexican state of Jalisco, where tens of thousands of Americans have flocked to a retirement community that was featured on last night's PBS NewsHour. One of the retirees, who happened to be a native of Great Britain, summarized the contentment of her contemporaries when she said the Mexicans who work there "have compassion written into their DNA."
But Coulter sees immigrants from many lands as genetically or culturally predisposed to rape. She tells gruesome stories of brutal sexual attacks committed by the Hmong, tribal people admitted to the United States in order to shield them from retaliation for helping American forces in Southeast Asia. One of their most important advocates was Michael Johns, a former aide to President Reagan who said that to deny them asylum would be "a betrayal." That came in a 1995 article in William F. Buckley's National Review.
Coulter missed all that. Her story is that the Hmong were admitted under the 1965 immigration legislation that knocked out the old system that had favored Northern Europeans. That bill's principal Senate sponsor was Ted Kennedy. Therefore, Coulter erroneously concludes, Kennedy is the sponsor of the Hmong and is responsible for the rapes any of them committed. "Thank you, Teddy Kennedy," she writes sarcastically.
Kennedy is Coulter's public enemy No. 1. No. 2 is the New York Times, which she accuses of tailoring its immigration coverage to the open-border specifications of controversial Mexican billionaire businessman Carlos Slim. In 2009, when the Times was in financial peril, Slim lent the paper $250 million.
Coulter's conspiratorial theory is that in return for the money, the Times sold its journalistic soul. She spins a post-hoc-ergo-propter-hoc fantasy in which the Times was vigilant against illegal immigration until it cashed Slim's check. Her conclusion: "What a difference one thieving Mexican billionaire makes!"
As someone who has reported on Latin American immigration and politics for years, I am familiar with the style and tenor of Coulter's book. Ironically, her views from the strident right are reminiscent of a leftist tract that has long been a bestselling denunciation of the United States and Europe and all their imperialist works. The Open Veins of Latin America, by Eduardo Galeano, was aptly described in The Economist as "written in powerful prose, with intoxicating passion. But it is also a work of crude propaganda, a mix of selective truths, exaggeration and falsehood, caricature and conspiracy theory."
The same can be said of ¡Adios, America!
Jerry Kammer is a senior research fellow for the Center for Immigration Studies.
The unemployment rate has dropped to 5.3 percent, which is near the level some economists consider "full employment" and is substantially lower than the 10 percent peak in October 2009. Total nonfarm job creation has been 11.9 million since then, for an average annual job creation of almost 2 million.
In the Obama administration's telling, these numbers prove that liberal policies of spending, taxing, and regulating create jobs. Unfortunately, a closer look at the data shows otherwise.
Many Americans have dropped out of the labor force, bringing the labor-force participation rate to a 40-year low of 62.6 percent after declining by 2.4 percentage points since October 2009. The broader "U-6" unemployment rate — which includes involuntary part-timers and those "marginally attached to the labor force" — now stands at 10.5 percent. It was below 9 percent for all of 2006 and 2007. These weak labor-market signals more accurately reflect Americans’ attitudes: A majority believe the economy is "getting worse," according to a recent Gallup poll.
Historical context is also important. Consider that after a severe recession in the early 1980s, the unemployment rate peaked at 10.8 percent in December 1982. But during the next six years, average annual job creation was 2.8 million, for a total of about 17 million — 5 million more than during the last six years, despite the fact that the U.S. population was only 80 percent of what it is now — while the participation rate increased by 2.4 percentage points, to 69 percent.
Austan Goolsbee, former chairman of the Council of Economic Advisers, and other liberal policy wonks claim that today's declining labor-force participation rate is simply a natural demographic phenomenon from an aging population. However, the share of the labor force that is at least 55 years old has not changed, and the share of the total civilian non-institutional population in that demographic has actually increased 1.5 percentage points since October 2009.
Further, the participation decline since October 2009 is not limited to the aging. The 16-19 age group's participation rate has fallen by 1.6 percentage points, and the 20-24 age group's by 0.7 percentage points. Declining labor participation reduces the on-the-job training that is vital to increasing these groups' lifetime earning potential.
The federal minimum-wage increase in 2009 and a host of other liberal policies arbitrarily increased the cost of employing the least educated and least skilled, and pushed many of them out of the labor market. This in turn forced many of them into government assistance, which starts a downward spiral of dependency that’s difficult to escape.
Perhaps an even greater threat to the nation's future prosperity is seen in those in their prime earning (and childrearing) years: 25 to 54. Their labor-force participation rate declined 1.7 percentage points. While some rationally choose to stay home or go to college after unsuccessful job searches, the loss of lifetime earnings and student loans (which are made artificially attractive by federal assistance) could have long-term consequences for many.
The increased cost of doing business from higher income-tax rates, Obamacare, stifling banking and environmental regulations, and other big-government policies has contributed to many Americans living in their parents' garage. This is in stark contrast to the Reagan administration's pro-growth policies of lowering taxes and lessening regulation, when people started businesses in their garage.
Variations of these policies, along with a sensible lawsuit climate, have led to the successful model in Texas that has created 40 percent of all U.S. net nonfarm jobs since the start of the Great Recession, with a 64.4 percent participation rate.
It’s time to implement time-tested, pro-growth policies that will invigorate the economy by getting government out of the way so Americans have the opportunity to fulfill their hopes and dreams.
Vance Ginn is an economist in the Center for Fiscal Policy at the Texas Public Policy Foundation, a non-profit, free-market research institute based in Austin. He may be reached at firstname.lastname@example.org.
Have you ever wondered where your state ranks in terms of its fiscal outlook? Eileen Norcross, a senior research fellow at the Mercatus Center at George Mason University, recently compiled just such a ranking.
We spoke with Norcross to learn more. The conversation has been edited for clarity and brevity.
In general, what gets states into financial trouble?
The states that were at the bottom of the ranking are generally states that have a high amount of debt or unfunded pension obligations, and OPEB [other post-employment benefits] liabilities, relative to state personal income. Also, their funding ratios are pretty weak, and they have a high level of long-term liabilities as a percent of total assets. There are flags in there that in the long term, there's a lot of debt on the books.
And in the short term, some of those states at the bottom also had a weak cash position or a weak budgetary position in 2013, so they had insufficient assets to cover short-term liabilities, or revenues fell short of expenses by a small margin. I do stress, though, that those short-term figures really reflect just one year. Going forward, you'd expect that to change. They might still be recovering from the recession as well.
So, for the states in poor fiscal health, it's mainly debt and unfunded liabilities that are weighing them down?
Absolutely. All states, I argue, use accounting measures that have underestimated their full pension liability, and that has caused a systematic underfunding — but some states went beyond that: They decided to skip payments because they thought they were overfunded, or they issued a bond to cover a pension payment. States that did that habitually over a period of decades are the ones with bigger holes.
Why is Alaska such an outlier in terms of its fiscal health?
Alaska just had a very large amount of cash coming in that year. If you go back and look at the numbers, they're just an order of magnitude away from everyone else in terms of their cash and in terms of their revenues in that year. Going forward, as oil prices drop, so do those numbers. When we update the study you're going to see those numbers change, because oil prices affected their revenues and have put some stress on the budget.
But that's why. It was just a windfall year for them, and that's reflected in the rankings. The short term is given more weight than the long term. And it's a relative ranking, so North Dakota is second relative to Alaska.
What effect did the 2008 recession have on states' fiscal health?
The 2008 recession forced states to take all kinds of actions to balance the books — drawing down rainy-day funds, moving assets around to cover liabilities, issuing bonds to cover short-term spending or longer-term commitments. A lot of that factored into the results you see in 2013. These kinds of measures to cover holes and get through the short run can have a medium-term impact on the finances.
Did you notice any correlation between red or blue states and poor or healthy fiscal strength?
I didn't look at that as a factor, though there's literature out there that looks at the relationship between what party is in control of what branch of government over time and what's the impact. But I think you can see regional differences. And I would stress that states with a long-term position that's weak, that position was built over a period of years. That can cover many administrations or many legislatures' decisions over time.
Do you have any policy recommendations for states that are in financial trouble?
I'd say this for any of the states — you have to look at the long-term liabilities and assess how resources need to be applied to those debts. Pensions are considered guaranteed in many states, by either statute or constitution. And they know this. It's a big fiscal obligation. It's starting to consume more resources going forward, so I'm just hoping this draws attention to the difficulty of keeping that all in balance in the short term and the long term when your pensions or your OPEB get to that point — that sort of crisis point.
It becomes very difficult to make policy changes. And I'd say to the other states, don't let the pension or the OPEB get to that point. If you've got these liabilities growing on the books and it's still manageable, then it's important to make sure, if you're getting windfall revenues, that they're being put toward the long term, not the short term.
So you think it's beyond politics?
I think a lot of it is. Certainly politics figures into the whole thing — how legislatures and governors make decisions over time, how they prioritize. Politicians tend to focus on the short term, and I think that's the story here. But it's a 30-year story: We've got these pension benefits we can promise, we can issue debt, we don't have to think about that until tomorrow. Tomorrow is now in these states. And you see that in cities, too — Chicago — or you see that in Puerto Rico. That's the past coming to you today, past decisions coming to you now.
Andrew Desiderio is a RealClearPolitics intern.
By this point there have been several attempts to fact-check Donald Trump's comment (and Amy Schumer's joke) about Hispanic rapists. But to my knowledge, none have presented some of the best data we have. The Justice Department's National Crime Victimization Survey asks Americans about their experiences with crime and collects data about the characteristics of violent offenders, including race and ethnicity. Unlike arrest or conviction data, it picks up many unreported crimes.
Unfortunately, these breakdowns are not published routinely, so they must be requested from the Bureau of Justice Statistics. I just received them:
For comparison, Hispanics are about 17 percent of the population, non-Hispanic blacks 12 percent, and non-Hispanic whites 63 percent. It seems that Donald Trump was wrong. In general, at least from these numbers, sexual-assault offending doesn't have a strong racial skew at all, and if anything Hispanic offenders are underrepresented.
Some assorted notes below my signature.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
Notes: A Politico story claimed to have this data, but the author mistakenly used numbers on victims instead of offenders. That piece has been corrected; a New York Times piece with the same numbers has not.
Victims who couldn't identify their attackers by race are excluded. 2012 and 2013 were averaged to give higher sample sizes, though BJS also gave me the years separately. (The survey reaches 160,000 people every year, but for rare crimes like sexual assault, broken down by race, this isn't always enough. In fact, even two years of data isn't enough to make the estimate for "other race, non-Hispanic" reliable, so I left it off.)
Further, BJS's cutoff for reliability is just ten cases, so even these estimates can vary a lot from year to year. From 2012 to 2013, the share of sex offenders who are white rose from 52 to 62 percent, while the black share rose from 13 to 17 -- and the unreliable "other" share fell all the way from 22 to 10. The survey didn't allow interviewees to identify attackers as Hispanic until 2012, so it's not possible to combine data for more years yet. (You can see 1995-2013 numbers for college-age victims, with more limited categories for offender races, here.)
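The arithmetic behind pooling survey years can be illustrated with a rough sketch. The percentages (52 and 62 percent white across the two years) come from the text above, but the underlying case counts here are hypothetical, chosen only to show why a single year's estimate swings while a pooled estimate is steadier:

```python
# Sketch of why pooling survey years stabilizes small-sample estimates.
# The share percentages match those quoted above; the raw case counts
# are hypothetical, for illustration only.

def share(cases_of_interest, total_cases):
    """Return (percentage share, reliability flag). The estimate is flagged
    unreliable if it rests on fewer than 10 cases, mirroring the BJS
    reliability cutoff mentioned above."""
    pct = 100.0 * cases_of_interest / total_cases
    reliable = cases_of_interest >= 10
    return pct, reliable

# Hypothetical raw counts consistent with the reported shares:
white_2012, total_2012 = 26, 50   # 52 percent of identified offenders
white_2013, total_2013 = 31, 50   # 62 percent

# Single-year estimates swing by 10 points...
p12, _ = share(white_2012, total_2012)
p13, _ = share(white_2013, total_2013)

# ...while pooling two years yields one estimate on twice the sample:
pooled_pct, pooled_ok = share(white_2012 + white_2013,
                              total_2012 + total_2013)

print(p12, p13)              # 52.0 62.0
print(pooled_pct, pooled_ok)  # 57.0 True
```

With real NCVS data the weighting is more involved, but the basic point holds: doubling the number of cases behind an estimate shrinks its year-to-year volatility, which is why the two years were averaged.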
Finally, as I also noted here, Hispanic is an ethnic rather than a racial category, so it overlaps with racial groupings — e.g., many Hispanics have a lot of Spanish or African ancestry, and thus might be perceived as white or black.
The Supreme Court's recent decision that gay couples have a right to marry has reinvigorated a nationwide conversation about the effects same-sex parents have on children. Shortly before the decision was handed down, Jimi Adams, an associate professor at the University of Colorado Denver, released a paper that rounded up the research on this question.
We took a few minutes to discuss his findings. The interview has been edited for length and clarity.
What is the consensus today about the effect of same-sex parents on children?
Well, I'd like to back up and say our work is actually two-fold. One part is asking whether there is a consensus, and our primary finding is that there is. That consensus, broadly, is that there really are no disadvantages to kids with same-sex parents compared to kids who come from other parental configurations.
Can you tell me a little bit about any studies that reach different results?
I wouldn't so much say that there are studies that do, but I would say there are scholars who do, which is a fine distinction I'm making. The main reason I would say this is that most of the studies that have tried to report differences have been marred by methodological flaws or based on samples that really don't sustain the claims they're making. The kinds of claims that have been popping up in the last handful of years have been relatively easily resolved.
What role does scientific consensus play in the broader political debate?
This is a question I leave to others to evaluate to some degree, but I would point out that for our study in particular, we were motivated by a claim that was made in the 2013 cases, where Justice Scalia stated that there was considerable disagreement among sociologists about the outcomes of same-sex parents, and I just frankly didn't believe that statement — I didn't believe there was disagreement. I knew there was a method out there that would allow us to ask that empirical question, so we went to ask it.
This ties in to my next question. How does your research influence the courts?
Well, for this particular case, I don't think we had much influence, to be frank. But if we are going to make policy, if we are going to make decisions based on empirical evidence, this approach provides a means to do so. The consensus can be used to inform the development of policies. With this paper, though, we weren't going to have much influence over this particular case, because it came out too late for that.
I do think that the fact we found consensus is reflective of the way research was used in the development of the court decision. Not our particular piece, but the research that we were basing the paper on.
What else remains to be studied, despite the consensus?
This is one of the things that I think remains open. One thing we know about same-sex parenting — outcomes of same-sex parenting in particular — is that most of the kids who have been studied have same-sex parents who are not married. So as policies have changed over the last handful of years, and more explicitly over the last couple of weeks in some places, we are going to see a shift in the background against which these kids are studied.
More of the kids being studied will come from families where the parents are married. By all the evidence we have on outcomes for kids, this should lead to beneficial outcomes. That would increase the robustness of the findings we already have; it's not something that would subsequently change them.
Could further information change the consensus?
That is always possible. You never say never in science. However, I would say that the evidence right now is pretty strong. There is pretty strong consensus. It would take some pretty dramatic changes in the existing evidence before it would really change what that consensus looks like. I don't suspect it would change anytime soon, but it could.
Courtney Such is a RealClearPolitics intern.