Disinformation is expected to be among the top cyber risks for elections in 2024.
Britain is expected to face a barrage of state-backed cyberattacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC.
Brits will vote on May 2 in local elections, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet committed to a date.
The votes come as the country faces a range of problems including a cost-of-living crisis and stark divisions over immigration and asylum.
“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email.
It wouldn’t be the first time.
In 2016, the U.S. presidential election and U.K. Brexit vote were both found to have been disrupted by disinformation shared on social media platforms, allegedly by Russian state-affiliated groups, although Moscow denies these claims.
State actors have since routinely carried out attacks in various countries in attempts to manipulate election outcomes, according to cyber experts.
Meanwhile, last week, the U.K. alleged that Chinese state-affiliated hacking group APT 31 attempted to access U.K. lawmakers’ email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology firm in Wuhan believed to be a front for APT 31.
The U.S., Australia and New Zealand followed with their own sanctions. China denied allegations of state-sponsored hacking, calling them “groundless.”
Cybercriminals utilizing AI
Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways — not least through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence.
Synthetic images, videos and audio generated using computer graphics, simulation methods and AI — commonly referred to as “deepfakes” — will become a common occurrence as the tools to create them grow easier to use, experts say.
“Nation-state actors and cybercriminals are likely to utilize AI-powered identity-based attacks like phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions,” Okta’s McKinnon added.
“We’re also sure to see an influx of AI and bot-driven content generated by threat actors to push out misinformation at an even greater scale than we’ve seen in previous election cycles.”
The cybersecurity community has called for heightened awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risk of such malicious activity.
Top election risk
Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike, said AI-powered disinformation is a top risk for elections in 2024.
“Right now, generative AI can be used for harm or for good and so we see both applications every day increasingly adopted,” Meyers told CNBC.
China, Russia and Iran are highly likely to conduct misinformation and disinformation operations against various global elections with the help of tools like generative AI, according to CrowdStrike’s latest annual threat report.
“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation states like Russia or China or Iran can leverage generative AI and some of the newer technology to craft messages and to use deep fakes to create a story or a narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”
A key problem is that AI is reducing the barrier to entry for criminals looking to exploit people online. This has already happened in the form of scam emails that have been crafted using easily accessible AI tools like ChatGPT.
Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.
“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and really coming up with something creative.”
In the context of elections, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, abusing party staffers was posted to the social media platform X in October 2023. The post racked up as many as 1.5 million views, according to fact-checking charity Full Fact.
It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the U.K. approaches elections later this year.
Elections a test for tech giants
Deepfake technology is becoming far more advanced. And for many tech companies, the race to detect such content is now about fighting fire with fire.
“Deepfakes went from being a theoretical thing to being very much live in production today,” Mike Tuchen, CEO of Onfido, told CNBC in an interview last year.
“There’s a cat and mouse game now where it’s ‘AI vs. AI’ — using AI to detect deepfakes and mitigating the impact for our customers is the big battle right now.”
Cyber experts say it’s becoming harder to tell what’s real — but there can be some signs that content is digitally manipulated.
AI uses prompts to generate text, images and video, but it doesn’t always get things right. For example, if you’re watching an AI-generated video of a dinner and a spoon suddenly disappears, that’s a telltale flaw of AI-generated content.
“We’ll certainly see more deepfakes throughout the election process but an easy step we can all take is verifying the authenticity of something before we share it,” Okta’s McKinnon added.