Safeguarding Elections Amidst the Deepfake Deluge

As we gear up for the 2024 U.S. presidential election, deepfake technology looms large, presenting a formidable challenge to the integrity of our democratic electoral process. 

We’ve already witnessed incidents like the fake Biden robocall in New Hampshire, in which “he” urged people not to vote. Fortunately, Nomorobo, a service renowned for its prowess in combating robocalls, unraveled this deceptive scheme. With a database monitoring over 350,000 mobile numbers, Nomorobo detected 41 fake Biden calls and estimated that between 5,000 and 25,000 such calls had been made.

Most of the calls were directed at registered Democrats, and they featured Biden’s trademark phrase, “What a bunch of malarkey.”

The technology behind the fake Biden call was reportedly sourced from AI startup ElevenLabs, prompting ongoing investigations by New Hampshire officials.

When it comes to image and video deception, watermarking has emerged as a promising countermeasure. By embedding unique signals into AI-generated content, watermarking makes its origin identifiable. This could take the form of visible watermarks, such as logos or text, or invisible techniques using steganography.
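To make the invisible-watermarking idea concrete, here is a minimal sketch of least-significant-bit (LSB) steganography, one of the simplest techniques in this family. The function names and the list-of-bytes "image" are illustrative assumptions, not any specific product's scheme; real provenance systems use far more robust, tamper-resistant methods.

```python
def embed_watermark(pixels, tag):
    """Hide `tag` in the least significant bit of each pixel byte.

    `pixels` is a flat sequence of 0-255 byte values (a toy stand-in
    for raw image data). Changing only the LSB alters each pixel's
    value by at most 1, so the mark is invisible to the eye.
    """
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the tag bit
    return out


def extract_watermark(pixels, length):
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        data.append(byte)
    return data.decode()
```

A quick round trip shows the idea: embedding a marker such as `"AI-GEN"` perturbs each pixel by at most 1, yet the marker can be recovered exactly by anyone who knows where to look. The flip side, and the reason watermarking alone is not a complete defense, is that a scheme this naive is trivially destroyed by re-encoding or cropping the image.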

At a recent Washington Post event, Anne Neuberger highlighted watermarking as a promising tool, particularly for platforms that comply with regulatory mandates. For example, Facebook could label AI-generated content to alert users to its artificial nature.

But watermarking alone may not suffice. Detection technologies and AI-driven deepfake detectors will also play a crucial role in our defense against this evolving threat. Confronting the deepfake dilemma demands a comprehensive, multi-faceted approach, combining technological innovation, regulatory measures, and heightened public awareness.
