Who Should Be Responsible for Election Content Authentication?
As forthcoming elections loom large, the question of artificial intelligence (AI)-generated deepfakes disseminating misleading messages purportedly from or about political candidates has become pressing. As a recently published academic study of election deepfakes in eleven countries in 2023 illustrates, this is an international challenge of Herculean proportions, one with the potential to threaten democracies across the world.
Unsurprisingly, enforcement agencies and legislatures, notably the US Congress and the European Union, are zeroing in on the use of generative AI to create fake videos, audio, and images of individuals. Members of the US House of Representatives proposed legislation targeting this issue in January with a bipartisan bill, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act. States are also advancing deepfake AI legislation, including Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act.
These legislative endeavours make AI developers and the platforms that disseminate content responsible for identifying and removing fake content after it has been created with their tools and uploaded to their platforms. However, developers and platforms are inherently limited: their detection tools are trained on the output of existing deepfake algorithms, and they are not very good at identifying content created by generators they have never seen before. Meanwhile, deepfake creators are continually developing new algorithms specifically to fool the detectors set up to catch them. It is the ultimate 21st-century game of Whac-A-Mole: a Sisyphean task in which successfully dealing with one challenge simply causes another to pop up elsewhere.
Legislation that requires developers and platforms alone to bear responsibility for deepfake management is also incredibly costly. Inevitably, such laws encourage developers and platform operators to take a conservative approach to minimize the risk of litigation, necessarily constraining what is made available. At the margin, authentic content that would otherwise inform political debate (false positives) will be withheld, while deepfake content that cannot be detected (false negatives) will get through. In the effort to reduce the spread of fake news, debate will be artificially constrained by the absence of that falsely flagged authentic content.
Is there not a better way of managing the provenance of content in a world where fake news is also circulating?
Sam Altman raised this very issue at a recent Brookings Institution event. He suggested a variant of tamper-resistant AI watermarking (which OpenAI is currently developing) as a means of identifying content that is explicitly NOT AI-generated. However, this requires a change in approach from both content consumers and creators, and it may not help legitimate creators who use AI tools to produce election content that is not deepfake and is not intended to mislead.
The question of how to distinguish credible from fake content is not new. While digital technologies dating from the emergence of the internet have been associated with a “decentralization” and “democratization” of content creation and distribution (“on the internet, nobody knows you are a dog”), in the past it was always presumed that if one wanted authentic content, one had to get it from sources where it had gone through some sort of quality assurance. Proving that content is credible costs the creator something, and that cost is what gives the consumer some assurance.
In the research community, snake oil is distinguished from credible science through the peer-reviewed journal publication process. Peer review is costly, but it serves to maintain the credibility of both individuals and their academic community. The reputational loss from being found to have published “fake science” under this system can be substantial, thereby deterring mendacious actors. The system is not perfect, but it aligns the incentives of scientists and the users of their work more cost-effectively than the alternative: allowing fakes to be created at no cost, imperfectly screening them with a very costly detection system, and holding the screeners (not the authors) responsible for all errors, regardless of their source.
So how might this work for an electoral system?
Maybe electoral commissions could require all authentic advertising and other material for a specific election to be lodged in a single tamper-resistant repository (blockchain technology would seem useful here). Electors could then be assured that content in the repository is authentic. Aggregators would be able to collate material for commentaries from the repository, and potentially lodge those commentaries too. The official repository operator could charge a deposit for the privilege of posting, and content subsequently found to be false or misleading (regardless of whether it was created with AI or any other method) could lead to the lodger forfeiting that deposit.
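To make the idea concrete, here is a minimal sketch, in Python, of what the tamper-resistant part of such a repository might look like: an append-only, hash-chained ledger in which each lodged item records a hash of the content, the identity of the lodger, the deposit paid, and a link to the previous entry, so that any later alteration is detectable. All of the names and details here (LodgedItem, ContentLedger, lodge, verify) are hypothetical illustrations, not a specification; a real system would also need identity verification, digital signatures, public replication of the ledger, and an administrative process for handling deposits.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a tamper-evident ledger for lodged election content.
# The class and field names are illustrative assumptions, not a reference to
# any real electoral commission system.

@dataclass
class LodgedItem:
    lodger_id: str        # who lodged the content (e.g., a campaign)
    content_hash: str     # SHA-256 of the advertisement or other material
    deposit: float        # refundable deposit paid on lodgement
    lodged_at: str        # UTC timestamp of lodgement
    prev_hash: str        # hash of the previous ledger entry (the chain link)
    entry_hash: str = ""  # hash of this entry, filled in when appended

    def compute_hash(self) -> str:
        # Hash the entry's fields, including the link to the previous entry,
        # so that changing any earlier record breaks every later link.
        payload = json.dumps(
            [self.lodger_id, self.content_hash, self.deposit,
             self.lodged_at, self.prev_hash],
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


class ContentLedger:
    """Append-only, hash-chained register of authentic election material."""

    def __init__(self) -> None:
        self.entries: list[LodgedItem] = []

    def lodge(self, lodger_id: str, content: bytes, deposit: float) -> LodgedItem:
        prev_hash = self.entries[-1].entry_hash if self.entries else "GENESIS"
        item = LodgedItem(
            lodger_id=lodger_id,
            content_hash=hashlib.sha256(content).hexdigest(),
            deposit=deposit,
            lodged_at=datetime.now(timezone.utc).isoformat(),
            prev_hash=prev_hash,
        )
        item.entry_hash = item.compute_hash()
        self.entries.append(item)
        return item

    def verify(self) -> bool:
        """Recompute every link; any altered entry breaks the chain."""
        prev = "GENESIS"
        for item in self.entries:
            if item.prev_hash != prev or item.entry_hash != item.compute_hash():
                return False
            prev = item.entry_hash
        return True


if __name__ == "__main__":
    ledger = ContentLedger()
    ledger.lodge("campaign-a", b"30-second TV spot, version 1", deposit=5000.0)
    ledger.lodge("campaign-b", b"radio script, final", deposit=5000.0)
    print("ledger intact:", ledger.verify())   # True
    ledger.entries[0].content_hash = "tampered"
    print("ledger intact:", ledger.verify())   # False
```

On this sketch, the consumer-facing check is simple: a piece of election content counts as authentic only if its hash appears in a ledger that verifies cleanly, while forfeiting a deposit for misleading material remains an administrative decision taken outside the ledger itself.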
Of course, this would require some confidence in the neutrality of the electoral commission. But that should not be too big a problem for genuine democracies that require open and credible debate before elections. Should it?
Bronwyn Howell is a Nonresident Senior Fellow at the American Enterprise Institute (AEI).