A new study by the Israeli Internet Association reveals the extent of the fake news and disinformation spread by Iran, often using artificial intelligence, during its war with Israel in June.
This is a wide-ranging analysis, based on 592 fact-checks carried out by 50 different organizations from 23 countries in 17 languages, and it provides the first comprehensive picture of how misinformation spread during the 12 days of fighting.
More than 70% of the false content uncovered in the study was based on real documentation that was taken out of context in time, place, or framing, while about a fifth of the content was created using artificial-intelligence tools.
Most of the fake content served to amplify the sense of destruction and chaos, primarily through videos and images distributed on social media and occasionally broadcast in traditional media.
The picture emerging from the study presents disinformation as an integral part of modern battlefields – no less than missiles and bombs.
The findings indicate that social media has become a psychological battlefield, where false content is intended to sow confusion, incite panic, convey military power, and damage the public morale of the other side.
The scope of the phenomenon is striking: Eighty-four percent of the examined false content was video, 13% was images, and only 3% was text. The implication is clear:
The propaganda made use primarily of visual material, which social-media algorithms promote and to which users are exposed quickly and directly. Most of the videos featured explosions, physical damage, and images of public panic.
The main patterns identified in the study illustrate how fake content was used: Forty-four percent of the content depicted physical damage to buildings and infrastructure, such as Iranian missiles allegedly striking the Azrieli Towers in Tel Aviv or a destroyed airport in Iran.
Thirty-nine percent showed explosions and fire/smoke simulations attributed to various attacks, including false claims about an explosion at the Fordow nuclear facility or a strike on the Haifa Port.
Fifteen percent focused on displays of military power, showing missile trucks on their way to launch or stockpiles of Israeli bombs. A further 14% dealt with public panic, such as videos of masses fleeing city centers.
Beyond its scale, the study found that many videos and images were recycled wholesale from other wars or unrelated events. For example, a 2015 video of a firefighting drill in China was presented as an Iranian attack on Haifa oil refineries, and a 2009 photo of a hotel fire in China was circulated as a missile strike-induced blaze in Tel Aviv.
One-fifth of fake news was AI-generated
About one-fifth of the fake content was identified as AI-generated. Examples include a video of an Israeli soldier begging Iran for mercy, an image of an Israeli F-35 allegedly shot down in Iran, and videos showing destruction at Ben-Gurion Airport.
In many cases, the fakes could be spotted through watermarks left on the files, specialized synthetic-content detection tools, or reverse searches for earlier sources.
The study also aimed to investigate the motivations behind the dissemination of disinformation. It found that in 72% of the cases, the false content mainly served the Iranian side, while only 24% could have served Israel. Looking only at AI-generated content, the figure is even starker: Ninety percent served Iran.
The study highlights a unique phenomenon: While in other conflicts, such as the Israel-Hamas War, propaganda often focused on portraying the other side as immoral or guilty of war crimes, in this case, most of the disinformation aimed at amplifying perceptions of military power. In other words, less victimhood and more “ballistic fakes” designed to present the rival as weak and oneself as strong.
Israel lacks sufficient real-time fact-checking methods
The study further found that Israel suffered from systemic weaknesses in fact-checking. While globally, hundreds of pieces of content were reviewed by dozens of professional organizations, in Israel, only two groups were active: FakeReporter and Bodkim. Together, they published only 58 checks, mostly in Hebrew, and focused on content spread in local media. The Whistle (HaMashrokit) of Globes, Israel’s only member of the international IFCN network, was barely active.
These findings reveal a significant gap: Israeli society, which was exposed to a massive flood of disinformation, lacked sufficient real-time fact-checking. Moreover, no checks were published in Arabic at all, despite the Arab population in Israel comprising about one-fifth of the population. This means entire communities were left without access to verified information in their language.
On the global front, however, the Jordanian organization Misbar stood out, topping the list with 134 checks, followed by France’s AFP with 64, along with groups from India, Turkey, and Spain.
This finding challenges the tendency to focus only on Western fact-checkers and underscores the central role of South Asia and the Middle East in the field.
In analyzing types of manipulation, the study found that 76% involved taking material out of time context, 71% used “mis-framing,” and 63% took material out of place context. Only 15% were entirely fabricated, and just 2% impersonated official entities. In other words, most manipulations relied on real footage taken out of context.
Alongside these findings, the study points to systemic limitations. Fact-checking organizations worldwide face economic, political, and technological pressures and struggle to keep pace with the flood of content.
In Israel, the situation is even more severe, as the lack of diverse organizations and multilingual publication mechanisms denies the public sufficient access to verified information.