Artificial intelligence is changing how we communicate, learn, and make decisions. Many hope it will free us from human biases, creating a world where machines make fair, neutral choices.
But that hope overlooks a hard truth: AI inherits the prejudices baked into the data from which it learns. Among the most persistent and dangerous of these biases is antisemitism – a hatred as old as history itself, now finding a new home in our digital age.
The reality is stark. AI systems, especially large language models like ChatGPT or LLaMA, are trained on vast amounts of internet data, much of which contains antisemitic content. This isn't a fringe problem: web archives, social media, and even historical documents feed these models without filtering out hateful or false narratives.
Antisemitic content in the data
From medieval blood libels to modern coded language about “globalists” or “banking elites,” these ideas are embedded in the datasets. When AI tools absorb this material, they can reproduce and amplify it, sometimes without any obvious warning signs.
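To see how that absorption happens, consider a minimal sketch of corpus construction. The pipeline, sample pages, and thresholds below are hypothetical, not any lab's actual code, but they illustrate the point: the "filters" in web-scale data collection are quality heuristics, and nothing in them distinguishes a blood libel from an encyclopedia entry.

```python
# A minimal, hypothetical sketch of web-scale corpus ingestion.
# The checks are quality heuristics (length, duplication), not
# judgments about truth or hate.

def build_training_corpus(pages):
    """Collect raw web text into a training corpus."""
    corpus = []
    seen = set()
    for page in pages:
        text = page.strip()
        # Typical heuristics: drop short or duplicate documents.
        # Nothing here asks whether the text is a conspiracy screed
        # or an encyclopedia entry -- both pass the same checks.
        if len(text) > 40 and text not in seen:
            seen.add(text)
            corpus.append(text)
    return corpus

pages = [
    "An encyclopedia entry on the history of banking in Europe.",
    "A conspiracy screed claiming 'globalist elites' run the world's banks.",
    "too short",
]

for doc in build_training_corpus(pages):
    print("INGESTED:", doc)
```

Both substantive documents are ingested identically; only the trivially short one is dropped.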
Some attempts have been made to fix this embedded hatred. AI companies use content filters and safety measures, and apply human feedback (techniques such as reinforcement learning from human feedback) to steer models away from harmful outputs. But these fixes are patchwork at best. The problem isn't a glitch to be repaired; it's built into the foundation of how AI learns.
Even worse, the way social media recommendation algorithms work feeds this cycle. Content that sparks outrage, antisemitic tropes included, draws more engagement, even from users who oppose it. Because the algorithms optimize for engagement, that content gets pushed harder and further, reinforcing hate instead of quelling it.
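Here is a toy sketch of engagement-weighted ranking. The scoring formula and posts are hypothetical, not any platform's actual algorithm, but they show the mechanism: the ranker sees only engagement signals and is blind to content, so outrage-bait rises even when most of its engagement is people pushing back.

```python
# Toy illustration of engagement-based ranking (hypothetical scoring,
# not any real platform's algorithm). The ranker sees only engagement
# signals; it cannot tell a hateful post from a benign one.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_replies: int  # outrage also counts as engagement

def engagement_score(post: Post) -> float:
    # Every interaction raises the score, including angry rebuttals:
    # users arguing against a hateful post still boost its reach.
    return post.likes + 2 * post.shares + 2 * post.angry_replies

feed = [
    Post("Cute dog photo", likes=120, shares=10, angry_replies=0),
    Post("Conspiracy post about 'banking elites'", likes=40, shares=55, angry_replies=90),
]

# Rank purely by engagement: the outrage-bait post rises to the top,
# even though most of its engagement is opposition.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.1f}  {post.text}")
```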
Antisemitism today is not only overt; it is often subtle, disguised in euphemisms like “deep state” or “shadow elites,” coded language that AI struggles to flag as harmful. The models do not understand history or morality; they see statistical patterns and repeat them. That means stereotypes about Jews and power can slip through almost unnoticed, making AI an unwitting amplifier of dangerous myths.
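A similarly minimal sketch shows why coded language defeats blocklist-style moderation. The word list and sample texts are hypothetical, but the failure mode is real: a filter that matches explicit terms passes a euphemistic post carrying the same trope untouched.

```python
# Toy blocklist filter (hypothetical word list) demonstrating why
# coded antisemitic language slips past keyword-based moderation.

EXPLICIT_TERMS = {"slur1", "slur2"}  # placeholder for an explicit-hate blocklist

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return bool(words & EXPLICIT_TERMS)

samples = [
    "Explicit hate using slur1.",                        # caught by the blocklist
    "The globalists and shadow elites control it all.",  # missed: no banned word
]

for s in samples:
    verdict = "BLOCKED" if naive_filter(s) else "passed"
    print(f"{verdict}: {s}")

# The euphemistic post passes even though it carries the same trope.
# Catching it requires contextual understanding, which pattern-matching
# filters do not reliably have.
```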
The danger for the future
The future is even more worrying. AI-generated deepfakes, fake news, and propaganda can spread antisemitic conspiracies faster and more convincingly than ever before. Open-source AI models can easily be fine-tuned by anyone, including bad actors, to churn out hateful content at scale. We have already seen these tools weaponized, flooding social platforms with sophisticated falsehoods that are hard to counter.
So, where does this leave those of us troubled by the rise of antisemitism? There is no silver bullet. Efforts to filter AI content, fine-tune models, or establish ethical guidelines are necessary but, on their own, insufficient. The scale and complexity of AI systems make it nearly impossible to root out antisemitism completely.
This is not just a technical challenge; it’s a human one. If AI reflects humanity’s darkest biases, it’s because we have yet to fully confront them ourselves.
In building these digital tools, we have handed over a powerful legacy, both good and bad. Antisemitism is not a bug in AI; it’s an inheritance from centuries of prejudice. Recognizing this is the first step toward combating it. The digital age demands renewed vigilance, education, and action to ensure that technology serves as a force for truth and tolerance, not hatred and division.
The digital golem is awake. It’s up to us, not the machines, to keep it in check.
Louis Libin is an expert in military strategies, wireless innovation, emergency communications, and cybersecurity. Dr. Michael J. Salamon is a psychologist specializing in trauma and abuse and director of ADC Psychological Services in Netanya and Hewlett, NY.