Top UK election cyber risks

Disinformation is expected to be among the top cyber risks for elections in 2024.


Britain is expected to face a barrage of state-backed cyberattacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC. 

Brits will vote on May 2 in local elections, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet committed to a date.

The votes come as the country faces a range of problems including a cost-of-living crisis and stark divisions over immigration and asylum.

“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email. 

It wouldn’t be the first time.

In 2016, both the U.S. presidential election and the U.K. Brexit referendum were found to have been targeted by disinformation spread on social media platforms, allegedly by Russian state-affiliated groups, although Moscow denies these claims.

State actors have since routinely mounted attacks in various countries seeking to manipulate the outcome of elections, according to cyber experts.

Meanwhile, last week, the U.K. alleged that Chinese state-affiliated hacking group APT 31 attempted to access U.K. lawmakers’ email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology firm in Wuhan believed to be a front for APT 31.

The U.S., Australia and New Zealand followed with their own sanctions. China denied allegations of state-sponsored hacking, calling them “groundless.”

Cybercriminals utilizing AI 


“This democratic process is extremely fragile,” Adam Meyers, who leads counter adversary operations at cybersecurity firm CrowdStrike, told CNBC. “When you start looking at how hostile nation states like Russia or China or Iran can leverage generative AI and some of the newer technology to craft messages and to use deep fakes to create a story or a narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”

A key problem is that AI is lowering the barrier to entry for criminals looking to exploit people online. This is already happening in the form of scam emails crafted with easily accessible AI tools like ChatGPT.

Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.

“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and really coming up with something creative.”

In the context of elections, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, abusing party staffers was posted to the social media platform X in October 2023. The post racked up as many as 1.5 million views, according to fact-checking charity Full Fact.

It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the U.K. approaches elections later this year.

Elections a test for tech giants

