Midterm elections will likely see greater effects of misinformation amid reduced federal security activity, experts say

Tech experts say moves by the Trump administration and social media platforms to reduce cybersecurity and content moderation online will make it easier for misinformation to spread ahead of the 2026 midterms. (Photo by Seth Tupper/South Dakota Searchlight)

A year after the 2024 presidential election, technologists and election experts are wrestling with their new reality: tech-aided misinformation and disinformation campaigns are, and will continue to be, part of the United States’ democratic process.

Technology has always played a role in information dissemination in elections, said Daniel Trielli, an assistant professor of media and democracy at the University of Maryland. Mass use of the internet in the early 2000s gave everyday people the ability to be “publishers,” which increased the amount of misinformation, he said.

But the rise of social media platforms and evolving technologies like generative artificial intelligence have, over the last five years, brought misinformation to new levels.

“We have had much more volume of misinformation, disinformation grabbing the attention of the electorate,” Trielli said. “And quickly following through that, we see a professionalization of disinformation … The active use of these social media platforms to spread disinformation.”

That professionalization of disinformation, meaning targeted attacks that spread inaccurate information about candidates or elections, was a major concern ahead of the 2024 election. Through various means — false information spread by bots, AI-generated text messages and AI-generated photo and video likenesses of candidates, among others — bad actors attempted to sow apathy and confusion among voters.

In 2025, experts say, technology is only getting better at aiding misinformation campaigns, and such campaigns are now embedded in the fabric of American society. The 2026 midterm elections will face both existing and new challenges, they say, some of them the result of the Trump administration’s rollback of security-focused programs.

“We’ve seen kind of reporting that the goal of those sorts of attacks is to seek to influence, not only individual electoral processes, but to scale it in a way that makes it much more difficult to detect that they are seeking to influence (our) election activity,” said Tim Harper, project lead for Elections and Democracy at the Center for Democracy and Technology.

Misinformation vs. disinformation

The difference between misinformation and disinformation, Trielli said, is intent. Misinformation is false information that is incidentally shared, often without the sharer realizing it is false.

“All of us are subject to seeing or even sharing misinformation because we might share something that we’re not careful with,” Trielli said. “Disinformation, however, usually describes a more concerted effort related to propaganda, and sometimes even international political communication to intentionally spread lies, to either favor a side in an election or a political process or just cause chaos.”

Though technology like generative AI makes disinformation easier to produce, and social platforms make it easier to spread, there wouldn’t be such an issue if people weren’t so primed to receive and believe it, Trielli said.

Adam Darrah, vice president of intelligence at cybersecurity platform ZeroFox, said one of the biggest takeaways from the 2024 election was how the general public was unintentionally involved in spreading misinformation. Darrah, who joined the private sector after working as an intelligence analyst for the Central Intelligence Agency, said much of the mis- and disinformation we spread is based on feelings or perceptions stirred up by an already divided political landscape.

Division is one of the best tools of our foreign adversaries, like Russia, Darrah said.

“They’re very good at finding niche societal fissures in any civilized government,” Darrah said. “They’re like, ‘Okay, let’s have another meeting today about things we can do, to just like, keep Americans at each other’s throat.’”

Darrah added that a lot of misinformation plays to longstanding tropes or stereotypes. Paying attention to your own reactions to political content is an important step in identifying misinformation and not furthering its spread, he said.

“If I see something that’s obviously crazy and it’s trying to manipulate me to either dig in harder on my political position at the expense of maybe, my next door neighbor, who likely holds a different opinion than me, then I need to take a deep breath and know I’m being manipulated and just move on,” Darrah said.

U.S. elections

In the 2024 presidential election, Russia reportedly hired right-wing influencers to spread Kremlin talking points on TikTok, created and spread AI-generated videos alleging ballot tampering and election fraud, and made hoax bomb threats. The Chinese government has also been found to have produced AI-generated content stoking conspiracy theories about the U.S. government and to have targeted down-ballot races.

Though the American electoral process garnered much of the attention, tech-aided and targeted disinformation was a global problem last year, said Ken Jon Miyachi, founder of deepfake detection tool BitMind.

AI-generated content played huge roles in India’s general election, in Taiwan’s, and in Indonesia, where the political party Golkar used AI to reanimate Suharto, the longtime dictator who died in 2008, to make political endorsements.

In the earliest days of generative AI, fake content was easier to spot, Miyachi said. Everyone knew that an extra finger or unrealistic background meant you were probably looking at a deepfake. But with better technology, generated content is spreading undetected like wildfire across various social platforms.

Miyachi founded BitMind in January 2024, anticipating how big a role deepfakes and synthetic AI content could play in the election. The platform works as an app or browser extension and lets users submit content for a real-time assessment of whether it contains AI-generated material.

“I think it’s more important than ever, especially, with the midterms coming up and then the next election cycles, and even just, world conflict, world news,” Miyachi said. “You really need a more proactive, real-time strategy to be able to combat misinformation and be able to identify it.”

Trielli said some aspects of the U.S.’s electoral system make it more vulnerable to small shifts in voter behavior. Because voting is not mandatory, as it is in some countries, the door is open for people to choose not to participate, he said.

A lack of competitiveness in congressional elections, or the mechanism of the Electoral College, which can allow a presidential candidate to win the popular vote but still lose the presidency, can also create apathy in voters.

“All of those things are hard to manipulate,” Trielli said. “But if you have small numbers that are willing to do just a couple of those things, you can sway an election.”

Changing content moderation rules

Evolving content moderation rules on social media platforms were one of the biggest factors that allowed misinformation to spread during the 2024 presidential election, Harper said.

Many platforms feared being seen as “influencing the election” if they flagged or challenged misinformation. In 2023, Meta, the parent company of Facebook and Instagram, as well as X, began allowing political advertisements that perpetuated denial of the 2020 election results. X and YouTube both stripped back their flagging of misinformation, and Meta got rid of fact-checking and loosened its hate speech rules in January, after Donald Trump’s win.

Both before and after the 2024 election, Trump and his administration have sought to cast the identification and eradication of misinformation as an attempt to suppress conservative speech.

Conservatives pushed that narrative in a Senate Commerce Committee hearing on Sept. 29, in which Chairman Ted Cruz, a Republican from Texas, said “the Biden administration used (the Cybersecurity and Infrastructure Security Agency, or CISA) to strong-arm social media companies into taking action against speech protected by the First Amendment.”

But Harper said he believes Sen. Ed Markey, a Democrat from Massachusetts, did a good job of questioning the witness to show that the government does have a role in moderating and stopping foreign interference on social media platforms.

“There is a distinction between the legitimate free speech that should be protected and must be protected, and the Cybersecurity and Infrastructure Security Agency conducting operations to counter foreign interference,” Harper said.

Up until 2024, social media users had a general idea of what was considered approved and appropriate content on social platforms, Darrah said. But the upheaval of content moderation policies has left the door open for misinformation to spread more easily.

“It looks like we’re still kind of figuring out the new deal, the new contract between user and content moderators, technology, and free speech,” Darrah said. “It seems to be we’re renegotiating the contract about what’s free, what’s hateful, what’s harmful. And it seems to be platform agnostic.”

Will 2026 midterms be different?

Since Trump took office, his administration has stepped back from protecting the country against foreign interference campaigns. The Office of the Director of National Intelligence is looking to make reductions to the National Counterintelligence and Security Center and the National Counterterrorism Center, and the White House is massively downsizing CISA, which could shrink the U.S.’s already weakened cyber defense force.

The administration also cut funding for the Elections Infrastructure Information Sharing and Analysis Center, and the Election Assistance Commission is moving toward modifying the Voluntary Voting System Guidelines, which would set more requirements on Americans seeking to cast a ballot.

“There are a number of ways across the federal government where resourcing and capacity for cybersecurity and information sharing has been depleted this year,” Harper said. “All that is to say we’re seeing that AI-based and boosted mis- and disinformation campaigns may take off in a much more serious way in coming years.”

Harper said he’s seen state election officials losing trust in the diminished federal agencies.

In June, Iran successfully hacked Arizona’s Secretary of State website, changing candidate profile photos to an image of Ayatollah Ruhollah Khomeini, the leader of the 1979 revolution that established Iran as an Islamic republic.

Secretary of State Adrian Fontes didn’t report the incident to CISA, and both of Arizona’s senators later sent a letter to Homeland Security Secretary Kristi Noem telling her it was “deeply troubling” to hear from Arizona officials that they no longer trust the department’s CISA to help them during cyberattacks.

State laws focusing on AI in elections, passed over the last three years, primarily either ban AI-generated messaging and images or require specific disclaimers about the use of AI in campaign materials. But Miyachi says digital problems like misinformation need some sort of global agreement to be properly regulated.

Looking at next year’s midterms, Harper said he believes they’re less likely to look like the 2024 election and more likely to resemble the 2016 election.

In 2024, state and federal agencies and law enforcement received a lot of digital security support from the Biden administration, Harper said. Information about threats was shared quickly, and there were coordinated efforts across agencies to secure a safe election.

The Trump administration has withdrawn many of those federal safeguards, which may make bad actors “feel more empowered to meddle” in 2026, Harper said.

Miyachi believes rapid advances in AI may mean the midterms suffer from more sophisticated attack strategies that hadn’t fully developed by 2024. He emphasized that individuals will need to take on more of the burden of identifying and stopping the spread of misinformation.

“Bad actors have understood what works and what doesn’t work,” he said. “Yeah, it will be much more sophisticated going into the 2026 midterms and then the 2028 election.”

This story was originally produced by News From The States, which is part of States Newsroom, a nonprofit news network that includes Georgia Recorder and is supported by grants and a coalition of donors as a 501(c)(3) public charity.
