In June 2025, hours after the outbreak of war between Israel and Iran, social media platforms were flooded with images and videos of widespread destruction in Tel Aviv, Israeli F-35 fighter jets shot down and surrounded by cheering Iranian crowds, a long convoy of Pakistani trucks carrying ballistic missiles to support Iran’s war effort, and large crowds in Tel Aviv denouncing Israel’s attack and calling for peace.

Some of the images originated on social platforms and were picked up by Iranian media outlets; others, published by semi-official Iranian media, spread across TikTok, Instagram, X, Facebook, and multiple WhatsApp groups.

All of them had one thing in common: They were fakes, created using generative AI.

“Nothing prepared us for the scale and speed of misinformation we witnessed during the Iran-Israel conflict,” reported Factnameh, a Canada-based platform dedicated to fact-checking Persian news and social media. “This was the first major military conflict where generative AI played a central role in shaping public perception, and it might just be a preview of what’s to come.”

News consumption

As news consumption shifts from legacy media to social platforms, especially among younger consumers, anyone with a smartphone can use AI to create realistic-looking videos and images and publish them globally via social media. This new phenomenon threatens to upend our perception of the world.

“This democratization of deception is both terrifying and fascinating. On the one hand, it empowers citizen storytellers. On the other hand, it gives propagandists and malicious actors the tools to muddy the waters like never before,” Factnameh reported.

“Even more troubling is the public’s reaction,” the platform noted. “As AI fakes become more common, trust erodes. People begin to question everything, even real videos and images that contradict their biases… What we’re seeing now is a sort of hyper-cynicism, where even truth is suspected of being fiction.”

Most people think they can spot deepfakes, but they can’t, according to research from iProov, an AI-based identity verification platform based in the UK that protects borders, governments, financial institutions, and other major organizations against deepfakes and other types of identity fraud.

Only 0.1% of the 2,000 UK and US consumers surveyed by iProov could accurately distinguish real content from fake. More than 30% of respondents aged 55 and over were unaware that deepfakes even existed. Even so, more than 60% were confident they could spot deepfakes, whether or not they actually could.

“This false sense of security is a significant concern,” the platform says.

AI fakery

The US government says AI fakery could impact public policy and even elections.

“Generative AI is helping actors generate, manipulate, and disseminate synthetic media, such as deepfakes, making it a particularly useful tool for malign influence campaigns,” warned the US Department of Homeland Security’s Homeland Threat Assessment for 2025. “AI-generated and manipulated content – such as videos, audio, images, and text, all of which can include disinformation – have permeated the Internet, impacting US search engine algorithms and video-streaming platforms.”

US officials identified major deepfake operations designed to sow discord in America, mounted by foreign powers such as Russia and China.

Time to respond

One of the companies leading the response is Cyabra, a start-up founded by graduates of the 8200 cybersecurity unit of Israeli Military Intelligence that tracks online fakery for governments, law enforcement, and major corporations. (I worked alongside Cyabra as an executive at OurCrowd, the Jerusalem-based venture investor.)

Cyabra advised Elon Musk on his purchase of Twitter, revealing that 11% of Twitter users were fake. 

Recently, the company has tracked AI-fueled disinformation campaigns behind the September protests in Nepal that left 72 dead; the furor over Cracker Barrel’s decision to change its logo, which shaved 10% off its stock price; and the bots that branded Sydney Sweeney’s “Great Jeans” campaign for American Eagle as racist.

“The biggest real-world threat isn’t just that generative AI is flooding our feeds with fake or low-quality content – it’s that it’s eroding our shared sense of truth,” Dan Brahmy, CEO and co-founder of Cyabra, told The Jerusalem Report.

“When everything online can look real, society loses its ability to tell what’s authentic. That makes us more vulnerable to manipulation, polarization, and loss of trust in democratic institutions,” he said.

Andrew Bud, founder and CEO of iProov. (credit: Courtesy)

AI has drastically lowered the barrier to creating fake accounts, narratives, and deepfakes, Cyabra’s CEO said. “Coordinated disinformation campaigns powered by AI are spreading false narratives faster than organizations or institutions can identify and contain them. What was once a manual effort to mislead has become automated and scalable for many bad actors.”

The danger is political, commercial, and social, Brahmy said. “Brands are attacked with AI-generated boycotts, CEOs are impersonated, and false narratives go viral in hours. Generative AI has turned disinformation from a sporadic problem into a systemic risk to public discourse.”

A new era

AI’s looking-glass world generates billions of dollars for social media giants by driving user engagement. On Meta’s October earnings call, Chief Executive Mark Zuckerberg told shareholders that social media was entering a new, third era driven by AI-generated content.

“Social media has gone through two eras so far. First was when all content was from friends, family, and accounts that you followed directly. The second was when we added all of the creator content. Now, as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content on top of those,” the social media mogul said.

“Recommendation systems that understand all this content more deeply and show you the right content to help you achieve your goals are going to be increasingly valuable,” Zuckerberg said, promising more AI in posts and the algorithms that keep users hooked.

Over 3.5 billion people use one of Meta’s platforms daily, driving revenues of more than $200 billion a year. Zuckerberg’s enthusiasm is shared by Elon Musk, whose X platform has 600 million daily users. Two billion mainly younger people are on TikTok, where generative AI imagery is wildly popular.

But we are beginning to see signs of a backlash.

In July, YouTube announced it would curb monetization of the “inauthentic” AI-generated content threatening to swamp the video-sharing platform.

An October survey by Billion Dollar Boy, an agency that represents online influencers, revealed “signs of fatigue.” Enthusiasm for AI-generated work plummeted from 60% in 2023 to 26% in 2025.

“The novelty has worn off, and mass-produced, unlabeled, and poorly conceived AI ‘slop’ is driving the negative sentiment we are seeing,” said Thomas Walters, the agency’s chief innovation officer.

Winsome Marketing, a Philadelphia-based consultancy, encourages the use of AI but warns of its downside.

“The scale of synthetic media contamination is staggering,” the company reported. “Government agencies project that eight million deepfakes will be shared in 2025, up from just 500,000 in 2023. The global content detection market, valued at $19.98 billion in 2025, is projected to reach $68.22 billion by 2034. That’s not growth – that’s a full-scale digital emergency response.”

Dan Brahmy, CEO & co-founder of Cyabra. (credit: Courtesy)

Finding the remedy

Cyabra’s Brahmy says that AI, correctly deployed, can be the remedy as well as the problem.

“We need to fight malicious AI with AI for good,” he said. “The same technology that generates disinformation can also be harnessed to detect and dismantle it. Cyabra’s platform uses advanced machine learning to uncover fake profiles, coordinated bot networks, and generative AI content at scale, distinguishing the authentic from the artificial. Protecting against AI misuse, while a technology challenge, is very much a trust challenge. We must rebuild trust in digital spaces by making authenticity measurable again.”

This summer, Cyabra launched an advanced deepfake detection tool designed to help corporations and governments counter the growing threat of AI-generated fakes. It uses AI to analyze images and videos for signs of manipulation, providing rapid verification of authenticity.

Brahmy said the tool “acts as a digital magnifying glass, revealing the invisible fingerprints of even the most convincing deepfakes.”

“As digital manipulation evolves, our defenses must keep pace. This new tool gives our customers the forensic clarity needed to help them preserve trust, safeguard discourse, and defend democratic institutions,” he said.

But Andrew Bud, founder and CEO of iProov, is not convinced that AI can remedy itself.

“We should jettison the illusion that it will be possible in the future to recognize fake AI content. That boat has sailed,” Bud told the Report.

“A year or two ago, it was still possible to spot AI imagery because it had visual inaccuracies – six fingers, missing ears, eyes in strange positions – but those have gone. The idea was that you could train AI detector models to recognize AI-generated content, but that technology is obsolete almost as soon as it’s been produced. By definition, it won’t detect the next generation of AI output.

“I think there will be an inversion,” Bud said. “I think people will assume you can’t trust the evidence of your naked eye unless it’s attested. The cost of generating synthetic imagery will fall to practically zero, and there will be so much AI-generated slop that 90% of all content will be bogus. It will be like luxury goods, where 90% of the Rolex watches on sale on the beach are bogus.”

He thinks that future online content will come equipped with a payload of author credentials, so “it should be possible to determine the supply chain that made any piece of content and the authors who take responsibility for it in a way that cannot be forged. Those tools do not yet exist and need to be developed,” he said.

“There will be a difference between entertaining content that may not be real, and content that you can trust. You will be able to choose between the Rolex on the beach or the real thing, based on the author and publisher.”■