Protecting Privacy, Preserving Innovation

To protect consumer privacy online, we should try bottom-up approaches before resorting to government regulation, writes Adam Thierer, a senior research fellow with the Technology Policy Program at George Mason University's Mercatus Center, in a new paper.

How would this work? We recently spoke with Thierer to learn more. The conversation has been lightly edited.


What is the precautionary principle, and how would it be used to protect consumer privacy in the age of "big data"?

Generally, the precautionary principle refers to the idea that new forms of innovation should be curtailed, or even disallowed, until those innovations are proven to be safe, secure, or privacy-protecting. Many policymakers and regulatory advocates would like to see precautionary-principle-based regulation, on the theory that "it's better to be safe than sorry later on down the road."


You write that there are significant repercussions to the adoption of precautionary privacy laws -- what are the main problems in adopting the precautionary principle as a means of protecting consumer privacy?

The problem with precautionary-principle-based regulation is that if we spend all our time living in fear of hypothetical worst-case scenarios, and basing public policy upon those scenarios, then best-case scenarios will never come about. That is, innovation and human progress only happen through trial-and-error experimentation and risk-taking. That's the danger of instituting a precautionary-principle-based mindset -- it disallows the beneficial forms of experimentation that yield greater innovation and progress.


In the simplest terms, how can we protect consumer privacy without restricting digital innovation? Essentially, if the precautionary principle can't work, what principle can?

We need a bottom-up, flexible approach to privacy, safety, and security concerns in the information age. That is, we need to be able to protect the types of innovations that enhance consumer welfare, while also finding constructive solutions to our privacy- and security-related concerns. So, in the report, I discuss several of these so-called "bottom-up," organic, evolutionary-type solutions, and specifically I borrow a page from the ongoing battle of the last 15 years over how best to protect children's safety online.

I basically say it comes down to a few key things: better education and so-called digital citizenship efforts; empowerment solutions, such as new technologies -- better encryption, for example -- that can help us protect our privacy; greater reliance upon privacy professionals within the corporations that develop these technologies, to essentially bake in privacy and security by design from the start; new social norms that allow us to pressure developers, or people who use technologies in inappropriate ways; and finally, at the margin, some law -- laws or regulations, that is -- targeted to address the very specific concerns or harms not covered by the rest of this toolbox. But to be clear, we should exhaust these alternative solutions before we resort to heavy-handed regulatory control.


How can we hold online innovators and producers accountable for protecting their consumers' privacy?

The other part of it goes back to making sure we hold companies to the promises they make to their own consumers, whether through market-based mechanisms and norms or through legal mechanisms. Specifically, the Federal Trade Commission already has the power to fine and punish companies for engaging in unfair or deceptive practices that potentially harm consumer privacy or security. It has already exercised that authority extensively, with dozens of actions against high-tech companies.

So there isn't a lack of authority; it's a question of how that authority is used and whether it's enough. I argue that, in conjunction with these other types of solutions, we can get by. Other people who take a more precautionary approach think we need a more heavy-handed, sweeping regulatory solution to these problems.


What measures, specifically, is the government taking to create "educational and empowerment-based" solutions? Which of these are the most effective and why?

I believe that educational efforts, including media literacy and digital citizenship lessons, are absolutely essential from the early years on, because the fundamental, most serious problem I think we face is a lack of respect for each other's privacy, safety, and security. It's not just corporations that are the problem -- it's a problem among ourselves and our peers: we all could do more to understand the ramifications of our actions for ourselves and for those around us.

This means we need a societal and cultural type of discussion about proper online ethics and etiquette, or what some experts call "netiquette." Netiquette basically refers to the idea of getting people, first youngsters and then adults as well, to think before they click -- to think about the ramifications of their actions, about how much they share about themselves or about others, and how it could result in unintended consequences.

That is probably the most important thing that could be done to address our modern concerns about privacy and security, and yet it's probably the issue that receives the least amount of attention and resources. Instead, we're putting most of our eggs in this legalistic, regulatory basket; we're focusing on top-down controls as opposed to bottom-up solutions. Ultimately, I suggest that's not going to solve our problems; we have to change the cultural norms and attitudes.


In cases in which privacy is violated and worst-case scenarios become reality, how would the government handle the perpetrators and the victims?

To reiterate, the Federal Trade Commission already possesses broad authority to deal with unfair and deceptive practices, and it has engaged in several actions against companies that fail to live up to the promises they make to the public or their consumers with regard to their privacy and security practices.

That's not the end of it -- the other part could be class-action activity outside the federal regulatory world. Anytime there's any sort of screw-up by a major tech company, there inevitably is a lawsuit, or several, with trial lawyers lining up to go after some of these corporations.

Sometimes, there are also individual lawsuits, or individual torts. That's another important part of this story: the tort system, the common law. There basically are actions that can be taken under existing privacy torts to address things like intrusion upon seclusion, or other types of privacy harms that have been enforced by the courts over the past century.

Finally, there may be targeted legislation -- targeted statutes that deal with specific types of user privacy or security issues. For example, we already have an extensive body of law dealing with health care privacy and financial privacy, because those areas tend to be more sensitive and the potential privacy harms more serious.

I've just named three or four layers of things the government already does or could do, and this doesn't even begin to scratch the surface of what they could do if we shift the focus and the resources toward more of an educational or empowerment-based approach to solving these problems, as opposed to an enforcement-based approach. That's obviously what I prefer as the government's approach to these problems.


Christina Breitbeil is a RealClearPolitics editorial intern.
