A lot of people are rightly worried about the use of artificial intelligence (AI) to generate deepfake political campaign ads. When voters can't trust the video and audio right in front of them, elections are bound to suffer. The question is, how do you regulate something like this, and whose job is it to regulate AI? That problem has the Federal Election Commission (FEC) deadlocked right now.
The problem of AI has already come up in the 2024 Republican presidential primary. The Ron DeSantis campaign was caught using AI-generated fake images of President Trump hugging Tony Fauci, something that never happened. More recently, the same campaign used AI to mimic President Trump's voice in an ad.
Lots of bad things are coming in the near future if someone doesn't start regulating this new technology. Imagine if the FBI had deepfake AI back in 2016. They definitely would have leaked a fake pee tape to CNN.
The FEC is a six-member commission that consists of three Democrats and three Republicans. The Democrats on the commission are eager to regulate AI by clarifying that the law on fraudulent misrepresentation by campaigns applies to the new technology. After all, what liberal bureaucrat isn’t eager to seize new power for their agency?
The three Republicans on the FEC disagree. They say it's Congress' job to regulate the use of AI by political campaigns. The problem with that is that by the time Congress actually gets around to passing a bill, it will probably be too late. The use of this technology is about to explode. What do you think? Should the FEC regulate AI right now? Or is it Congress' job?