Many pieces of AI-generated content were used to express support for or fandom of certain candidates. For instance, an AI-generated video of Donald Trump and Elon Musk dancing to the Bee Gees song “Stayin’ Alive” was shared millions of times on social media, including by Senator Mike Lee, a Utah Republican.
“It’s all about social signaling. It’s all the reasons why people share this stuff. It’s not AI. You’re seeing the effects of a polarized electorate,” says Bruce Schneier, a public interest technologist and lecturer at the Harvard Kennedy School. “It’s not like we had perfect elections throughout our history and now suddenly there’s AI and it’s all misinformation.”
But don’t get it twisted: misleading deepfakes did spread during this election cycle. In the days before Bangladesh’s elections, for instance, deepfakes circulated online encouraging supporters of one of the country’s political parties to boycott the vote. Sam Gregory is program director of the nonprofit Witness, which helps people use technology to support human rights and runs a rapid-response detection program for civil society organizations and journalists. He says his team did see an increase in cases of deepfakes this year.
“In multiple election contexts,” he says, “there have been examples of both real deceptive or confusing use of synthetic media in audio, video, and image format that have puzzled journalists or have not been possible for them to fully verify or challenge.” What this reveals, he says, is that the tools and systems currently in place to detect AI-generated media are still lagging behind the pace at which the technology is developing. In places outside the US and Western Europe, these detection tools are even less reliable.
“Fortunately, AI in deceptive ways was not used at scale in most elections or in pivotal ways, but it’s very clear that there’s a gap in the detection tools and access to them for the people who need it the most,” says Gregory. “This is not the time for complacency.”
The very existence of synthetic media at all, he says, has meant that politicians have been able to allege that real media is fake, a phenomenon known as the “liar’s dividend.” In August, Donald Trump alleged that images showing large crowds of people turning out to rallies for Vice President Kamala Harris were AI-generated. (They weren’t.) Gregory says that in an analysis of all the reports to Witness’ deepfake rapid-response force, about a third of the cases were politicians using AI to deny evidence of a real event, many involving leaked conversations.