Forget Cambridge Analytica — Here’s How AI Can Threaten Elections

In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.

An undercover investigation by Channel 4 News captured footage of the company’s then-CEO, Alexander Nix, suggesting the firm had no qualms about deliberately misleading the public to support its political clients:

“It sounds awful to say, but these are things that are not necessarily true, as long as they are believed.”

The scandal was a wake-up call about the dangers of social media and big data, and about how fragile democracy can be in the face of rapid technological change.

Artificial intelligence

How does artificial intelligence (AI) fit into this picture? Can it also be used to influence elections and threaten the integrity of democracy around the world?

According to Trish McCluskey, an associate professor at Deakin University, and many others, the answer is an emphatic yes.

McCluskey told Cointelegraph that large language models like OpenAI’s ChatGPT “can generate content that is indistinguishable from human-written text,” which could contribute to misinformation campaigns or the spread of fake news online.

In another example of how AI could threaten democracy, McCluskey highlighted AI’s ability to produce deepfakes, which can fabricate videos of public figures such as presidential candidates and be used to manipulate public opinion.

While it is generally still easy to tell when a video is a deepfake, the technology is evolving rapidly, and deepfakes may eventually become indistinguishable from reality.

For example, in a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website, the lips often failed to match the words, giving viewers the sense that something was amiss.

Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:

“The threat of massive, automated, plausible misinformation overwhelming democracy.”

A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet, titled “The Role of Artificial Intelligence in Disinformation,” also highlighted the potential for AI systems to contribute to disinformation, suggesting it can do so in two ways:

“First, they [AI systems] can be used by malicious stakeholders to manipulate individuals in a highly effective manner and on a large scale. Second, they directly increase the spread of such content.”

Furthermore, current AI systems are only as good as the data they are fed, which can sometimes lead to biased responses that can sway users’ opinions.

How to limit the risks

While it is clear that AI has the potential to threaten democracy and elections around the world, it is worth noting that AI can also play a positive role in defending democracy and combating disinformation.

For example, McCluskey said AI could be used to “detect and flag disinformation, facilitate fact-checking, monitor election integrity,” and educate and engage citizens in democratic processes.

“The key,” said McCluskey, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulation and safeguards.”

The European Union’s Digital Services Act (DSA) is one example of regulation that could help reduce the potential for AI to create and spread disinformation.

Related: OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM executive

Once the DSA comes into full effect, major online platforms such as Twitter and Facebook will have to comply with a list of obligations, including curtailing misinformation, or face fines of up to 6% of their annual revenue.

The DSA also introduces enhanced transparency requirements for these platforms, requiring them to disclose how they recommend content to users – often done using AI algorithms – as well as how they moderate that content.

Bontridder and Poullet noted that companies are increasingly using AI to moderate content, which they say can be “particularly problematic” because automated moderation risks suppressing legitimate speech and compromising freedom of expression.

The DSA applies only to operations within the European Union, however; McCluskey noted that because disinformation is a global phenomenon, international cooperation will be necessary to regulate AI and combat it.

Magazine: $3.4 Billion in Bitcoin in a Popcorn Can – The Story of the Silk Road Hacker

McCluskey suggested this could come through “international agreements on AI ethics, data privacy standards, or joint efforts to detect and combat misinformation campaigns.”

Ultimately, McCluskey said that “counteracting the risk of AI contributing to disinformation requires a multifaceted approach,” which includes “government regulation, self-regulation by tech companies, international cooperation, public education, technology solutions, media literacy and ongoing research.”
