When the Supreme Court overturned the Chevron doctrine in its 2024 Loper Bright decision, it fundamentally realigned the relationship between agencies, regulated entities, and the courts. The era in which judges deferred to an agency's reasonable interpretation of an ambiguous statute is over. While Loper Bright restores judicial oversight of administrative actions and curtails arbitrary abuses of agency discretion, it may have the unintended consequence of imposing new burdens on an already overextended judicial system.
In the Loper Bright ruling, the Supreme Court concluded that lower courts must exercise their independent judgment to determine whether an agency has acted within the scope of its delegated authority, and that courts cannot defer to an agency's interpretation of the law simply because a statute is ambiguous. As a result, courts are now expected to independently interpret complex regulatory frameworks, such as tax codes, healthcare reimbursement rules, environmental standards, and telecommunications policy, at times without the benefit of time or subject-matter expertise.
Human institutions, even the best-intentioned ones, struggle to deliver that ideal at scale and at speed. Fortunately, we now have a tool that can assist with some of the more time-intensive, preliminary processes. Artificial intelligence (AI), when properly deployed, can read the text of a statute, compare it against an agency rule or a regulated entity's conduct, and develop a comprehensive assessment of whether the action is permissible under the law. But the prompts for AI review must be carefully evaluated to ensure they are neutral and not designed to produce a particular outcome.
AI will never replace the courts, but AI systems today can arguably be trained on the full body of federal and state law, agency guidance, regulatory history, and relevant precedent. They can evaluate whether a proposed rule exceeds the statutory authority it claims. They can analyze a contract or Memorandum of Understanding against a baseline of comparable agreements to flag terms that are out of range or legally dubious. They can assess, in real time, whether an agency's interpretation of a regulation in an enforcement action is textually defensible — or whether it represents exactly the kind of overreach that Loper Bright was meant to check. And they can streamline dockets and evaluate outcomes from Administrative Law Judges (ALJs) and administrative appeals boards within state and federal agencies.
For regulated entities navigating administrative appeals, contract negotiations, or regulatory comment periods, this creates an entirely new kind of resource. Today, building a credible legal and factual baseline for a negotiation or challenge requires significant time and expense. AI can compress that process dramatically, surfacing relevant precedents, identifying comparable agreements, and flagging statutory tensions before a single brief is filed or a single hearing is scheduled.
Fraud detection is one vivid illustration of AI's broader potential in the regulated arena. The Trump administration's newly established Task Force to Eliminate Fraud, chaired by Vice President Vance, reflects a genuine and legitimate concern: federal benefit programs have suffered real, documented abuse, and the mechanisms to catch it have lagged badly behind the scale of the problem. AI-driven pattern recognition can identify billing anomalies, flag ineligible providers, and distinguish good actors from bad ones faster and more consistently than any human audit team. Dr. Oz's efforts to eliminate fraud and abuse by using AI are just the first step in how agencies can improve the regulatory process. The next step, however, is assessing how agencies can incorporate AI into their own management and regulatory enforcement processes.
But the deeper challenge is political as much as technical. Both parties have a long habit of labeling spending they dislike as “fraud.” The word has become as much a rhetorical weapon as a legal standard. AI analysis helps cut through that noise. If a billing pattern is consistent with applicable rules, the analysis will say so. If an agency's enforcement theory strains the statutory text, that too will become apparent. Objective analysis is a check on everyone, including agencies and regulated entities alike.
AI will not replace courts, and it will not eliminate the need for experienced legal judgment. But in a post-Loper Bright world—where courts are expected to shoulder greater interpretive responsibility across increasingly complex regulatory regimes—it is no longer sufficient to rely on traditional processes alone. The question is not whether AI should play a role, but whether our institutions can afford to ignore a tool capable of bringing speed, consistency, and analytical rigor to an overburdened system. If deployed thoughtfully, AI can help restore balance, checking agency overreach, reducing unnecessary litigation, and equipping courts with clearer, more comprehensive records. The opportunity is immediate, and the cost of inaction is measurable. The legal system has been handed a powerful new instrument; the only remaining question is whether it has the will to use it.