AI-generated deepfakes threaten the electoral system

In a monthslong investigation, the British Broadcasting Corp. (BBC) has uncovered a worrying trend: artificial intelligence (AI) is being used to automate the creation and spread of fake news on social media that targets American voters.

BBC has traced deepfake sites to a former US Marine who is suspected to have switched allegiances and now does Russia's bidding — an allegation he denies.

Some of the websites have usurped the names of actual media entities or disguised themselves as "whistleblowers" or "independent journalists" to gain credibility.

"The sheer number of stories — thousands each week — along with their repetition across different websites, indicates that the process of posting AI-generated content is automated," the BBC said. "Casual browsers," who do not have the inclination to fact-check, "could easily come away with the impression that the sites are thriving sources of legitimate news about politics and hot-button social issues."

Computer scientists saw it coming: the rise of AI as an engine of political manipulation and deceit. What they failed to predict was the astonishing speed with which AI would develop.

Industrialization took generations to take hold. Digitalization took decades. AI? Just years. And it's still in its infancy.

AI tools have achieved a sophistication that enables them to "create cloned human voices and hyperrealistic images, videos and audio in seconds, at minimal cost," observed an article on the PBS News site. "When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast, and target highly specific audiences, potentially taking campaign dirty tricks to a new low."

The chilling reality is that we are not prepared to handle the threat of AI-generated election deepfakes.

In the not-too-distant past, troll farms or "keyboard armies" were recruited for the sole purpose of weaponizing election disinformation. Trolls developed positive themes for a candidate and defamatory narratives for the candidate's opponents that were posted on popular social platforms like Facebook and TikTok.

The deepfake content churned out by trolls contaminated political discussions and "stoked political fandoms' biases and aggravated tendencies for affective polarization," one study said.

Generative AI does a troll's job faster, more efficiently and at lower cost.

The PBS article presented several ominous scenarios involving the spread of what it calls "synthetic media":

"Automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave. Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race."

Generative AI is already being harnessed to influence the outcome of the US elections in November. Last May, a fake video of President Joe Biden purportedly giving a speech attacking transgender people made the rounds on social media. There were also AI-generated images of children supposedly learning satanism in libraries, noted PBS.

A tsunami of posts feasting on Biden's recent debate debacle has also engulfed the social internet, and there is a good chance that a number of them were AI-authored.

The threat from synthetic media looms even larger against the background of the continuing hacking of government websites. In the Philippines, among the recent victims is the government's lead IT agency, the Department of Information and Communications Technology.

Previously hacked were the sites of the Bureau of Customs, the Commission on Elections, and the Department of Science and Technology.

In a democracy, elections are a vital process of governance. Purveyors of disinformation thrive in the electoral environment because they dangle the promise of winnability before candidates.

A May 14, 2024, article on the Forbes site suggests the need for humans and AI to team up in fighting disinformation.

"While AI can sift through data, the nuanced understanding, ethical foresight and critical thinking of humans fill the gaps AI cannot perceive. This collaborative dynamic is fundamental in forging systems robust enough to identify and counteract the threats deepfakes present."

Unless countermeasures against deepfakes are firmly put in place, elections, one of the pillars of governance, will continue to be compromised.
