AI Is a Tool, Not a Villain


Both local policymakers like the DC city council and federal policymakers like the Federal Trade Commission are proposing to turn their concerns into policies that could severely limit the use of a newer technology known as artificial intelligence (AI). This pattern of fear-driven restriction has played out time and again, with everything from the telephone to the internet. However, our fears could hold us back from the very real benefits of AI as a tool.

Some policymakers are quick to react to new technologies with proposed regulations to “protect” their fellow citizens, typically on the assumption that more regulation is always the answer to these fears. And the concept of AI in particular may conjure up dystopian ideas like the pre-crimes in Minority Report or Skynet in the Terminator movies. In reality, however, AI is not only a beneficial, largely benign technology that already improves our lives; it is also far easier to audit and reprogram than it is to reform the implicit bias that may exist in a human being.

There is a lot of debate about how to define what artificial intelligence actually is, largely because there isn’t a good definition. We might say it’s “the use of technology to perform ‘human’ tasks such as translation, decision making, or creating a picture.” But then a calculator qualifies as AI, since it performs the “human task” of addition and subtraction. If we try a narrower definition by labeling AI a “learning algorithm,” then AI requires machine learning, which itself is not clearly defined, and such a definition could discourage technological advances that improve existing systems. In fact, simply defining AI is so difficult that when the DC Government tried to regulate it, it didn’t even bother to define key terms like “AI” or even “algorithm.”

Before making laws around a new concept, tool or practice, we should probably at least agree on how to define what policymakers are trying to regulate.

When we look at the bigger picture, AI is already present in many aspects of our lives, and it already helps us daily. AI helps us via Google Translate when we visit a foreign country where we don’t speak the language, and via Waze when it finds the fastest routes in traffic. It also helps when Siri or Alexa answers our spoken questions about everything from the weather to how many tablespoons are in a cup. This technology so many claim to fear can even call for help when we fall or suffer a heart attack.

AI literally saves lives, helping us develop new medicines and better identify cancers and the appropriate treatments.

This technology makes so much good possible. While critics express concerns about how it could be misused and abused, many of those fears are already addressed by existing laws and don’t require new regulations.

When policymakers focus only on potential harms, they miss the broader point: AI is just a tool, and we should treat it as such.

With any tool, whether AI or a hammer, there is always a risk that individuals will abuse it. But we don’t outlaw hammers because of potential misuse. Instead, we make specific actions illegal, like intentionally hurting another person with the hammer, rather than criminalizing the tool itself.

Criminalizing tools like AI targets the method, not the misuse. It would be absurd to suggest that an act stops being criminal just because the perpetrator used a computer; theft is still illegal even when someone commits it with a computer.

Perhaps lawmakers can step back from the knee-jerk reaction to the existence of AI and see it for what it is: a tool. The concerns raised around issues like racial discrimination, harassment and credit limits may be real, but existing laws already address those bad actions. Discrimination laws around housing and hiring still apply even when algorithms are used. In many of these cases, AI may even have an advantage. After all, it would be far easier to correct a discriminatory outcome in a program than to reprogram the implicit (or even explicit) bias that may exist in a human decision maker.

Policymakers should remember that like other technologies, AI is merely a tool for humans to use. We must not let our fears make it the scapegoat for underlying societal concerns. 

Jennifer Huddleston is an adjunct professor at George Mason University’s Antonin Scalia Law School and policy counsel with NetChoice, an industry group that includes members such as Amazon, Google, Meta and Twitter. 


