The antisemitism that is skyrocketing around the world, from America to Australia, is not only the explicit, aggressive kind that is impossible to miss. 

There is also a more subtle antisemitism that operates quietly: through repetition, omission, and framing; through what is emphasized, what is softened, and what is left unexplained.

You may be frustrated that not enough is being done about the first kind, which governments must do more to tackle. But the good news is that you can have an impact on the second kind from the comfort of your own home.

Once you understand how Artificial Intelligence (AI) actually works, it becomes difficult to read its answers the same way again. What often appears as balance or neutrality is the product of accumulated bias, learned quietly from the world AI was trained on.

It is this quieter, structural layer that increasingly shapes how events are narrated and understood. And those narratives no longer circulate only through headlines or social media. They now feed AI systems that millions of people rely on to make sense of the world.

Students ask AI to explain Israel. Journalists consult it for background. People turn to it to make sense of terrorism, war, and other global events.

The answers often sound calm, composed, even authoritative. But that tone can be misleading.

AI does not evaluate truth. It reflects the information environment it is trained on.

AI makes statistical guesses

When people interact with AI platforms, they often believe the system possesses higher wisdom or a more objective form of knowledge. In reality, its responses are generated from learned statistical patterns, not from any understanding of the subject itself.

Large language models (LLMs), the systems powering tools such as ChatGPT, Gemini, Claude, Grok, and Perplexity, generate responses by calculating which words are most probable given the patterns in their training data: news articles, blogs, social media content, academic writing, and other publicly available sources.

They do not distinguish between credible and non-credible sources.

They do not identify ideological framing.

They compress patterns.

When distorted narratives dominate the data environment, distortion becomes the most probable answer you get from your AI.
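
For readers who want to see the mechanics, here is a deliberately simplified sketch in Python. It is a toy word-counting model, not a real LLM, and the "training" snippets are invented for illustration, but it captures the core dynamic: whichever phrasing dominates the data becomes the answer the system is most likely to repeat.

```python
from collections import Counter

# Toy illustration only (not a real LLM): a "model" that continues a prompt
# with whatever wording appears most often in its training text.
# The snippets below are invented to show the dynamic, not real coverage.
training_snippets = [
    "the incident was described as a clash",
    "the incident was described as a clash",
    "the incident was described as a clash",
    "the incident was described as a terror attack",
]

def most_probable_continuation(prompt: str) -> str:
    # Count every continuation that follows the prompt in the training data.
    continuations = Counter(
        snippet[len(prompt):].strip()
        for snippet in training_snippets
        if snippet.startswith(prompt)
    )
    # The statistically "best" answer is simply the most frequent phrasing,
    # regardless of whether it is the accurate one.
    return continuations.most_common(1)[0][0]

print(most_probable_continuation("the incident was described as"))
# Prints "a clash": the softened framing outnumbers the accurate one 3 to 1.
```

A real LLM operates at a vastly larger scale and with far more sophisticated statistics, but the underlying principle is the same: frequency shapes probability, and probability shapes the answer.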

When media distortion becomes training data

For more than 25 years, HonestReporting has documented how global media coverage of Israel is shaped not only by factual errors, but by habits: selective omission, misleading terminology, lack of transparency, and in many cases, emotional manipulation.

For example, HonestReporting’s analysis of media coverage of the Bondi Beach terror attack shows how many outlets initially failed to describe the incident as antisemitic terrorism, softening or obscuring the targeted nature of the violence in a way that normalized ambiguity rather than naming the bigotry behind it.

Another recent investigation revealed how media outlets repeated claims of a looming famine in Gaza, only to quietly walk them back once the evidence collapsed, leaving the initial narrative much more prominent than its correction. Narratives can persist even when they are no longer supported by facts. 

The same dynamics appear in visual reporting. HonestReporting recently documented how flood imagery from Gaza was circulated and presented as a crisis while broader context was omitted, shaping sympathy through carefully framed visuals and, with it, global perceptions.

These are precisely the kinds of narratives that become the raw material AI systems learn from.

When AI systems later summarize events, they reflect that same framing, not because they chose to obscure antisemitism, but because that was the version of the story most available to them.

This is how antisemitism becomes embedded in systems that claim neutrality: not through intent, but through repetition.

Users are not powerless

One of the most persistent myths about AI is that users are passive recipients.

They are not.

AI systems continue learning through interaction. Every time a user challenges an incomplete answer, introduces missing context, or asks for clarification, the system receives a corrective signal. Every time a flawed response is accepted without pushback, the imbalance is reinforced.

This is not censorship. Nothing is removed. No viewpoint is silenced.

It is participation.

Accountability as engagement

Understanding how AI systems generate answers changes the role of the user. If AI reflects the information environment it is trained on, then engagement becomes part of that environment.

Accountability, in this context, does not mean removing viewpoints. It means refusing to treat partial narratives as neutral, and recognizing that omission, framing, and imbalance shape what your AI tells you just as much as explicit falsehoods.

Once this is understood, responsibility shifts. AI responses are no longer something to be passively accepted or rejected, but something to be engaged with: questioned, contextualized, and, when necessary, challenged.

This is where users move from awareness to action.

Your toolkit for getting honest reporting from your AI:

  1. Being aware of how the system operates. AI systems are trained on existing data environments, which often contain bias, omission, and imbalance. Learn HonestReporting's Eight Categories of Bias to identify not just what is wrong, but how the bias operates. Use these categories as a lens when reviewing AI responses.
  2. Asking for source diversity and establishing standing instructions. Don't just ask, "What happened?" Ask: "What do multiple sources across the political spectrum say about this?" Structure your AI conversations to require citations, transparency, and source diversity, and ask for clear identification of sources and context (a sample set of standing instructions appears in the sketch after this list).
  3. Introducing stronger evidence yourself and asking for the strongest counterargument. Feed your AI expert analysis, statements, or HonestReporting investigations, ask it to reassess its response, and ask it to present the strongest counterargument to its original framing. This helps steer the models toward more accurate responses.
  4. Reporting patterns you observe. When you see AI repeating media falsehoods, screenshot it, report it to the platforms, and email action@honestreporting.com. HonestReporting not only monitors bias trends but also engages directly with top media outlets.
  5. Sharing examples of successful engagement with AI. This helps others understand how they, too, can influence these systems and become more informed users, and it builds a community challenging algorithmic bias.
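
For readers who reach these tools through code rather than a chat window, here is one possible way to set the kind of standing instructions described in point 2. It is a minimal sketch using the OpenAI Python SDK as an example; the model name and the wording of the instructions are illustrative assumptions, not a prescribed formula, and the same text can simply be pasted into any chatbot's custom-instructions or system-prompt setting.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and an API key is configured

client = OpenAI()

# Illustrative standing instructions: adapt the wording to your own needs.
STANDING_INSTRUCTIONS = (
    "When answering questions about news events, cite your sources, "
    "identify the outlets and their perspectives, present claims from "
    "multiple sources across the political spectrum, and clearly flag "
    "any claim that is disputed or has been corrected or retracted."
)

def ask_with_standing_instructions(question: str) -> str:
    # The standing instructions ride along as the system message on every call,
    # so each answer is held to the same transparency requirements.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": STANDING_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_standing_instructions(
    "What do multiple sources across the political spectrum say about this event?"
))
```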

Why this matters now

AI is already influencing how issues such as antisemitism, terrorism, and Israel are understood. That influence accumulates through repeated patterns of use.

Every time a distorted narrative goes unchallenged, it becomes easier to repeat. Every time context is reintroduced, the system adjusts, not because it has values, but because it responds to participation.

The question is no longer whether AI shapes public understanding, but whether users choose to engage with that process, or allow distorted patterns to continue unchecked.

Just as HonestReporting's engaged readers have changed how major media outlets cover Israel over the past 25 years, today's AI users can shape how algorithms understand and present information, and make a real difference.

Your voice matters. Use it!

Didi Shammas-Gnatek is the AI Project Leader at HonestReporting, spearheading the development of BiasBreaker, a revolutionary AI-based platform for real-time detection and analysis of media bias. She is a former diplomat, organizational consultant, and business development strategist.