By Dave DeFusco
On the evening of Yom Hashoah, the galleries of the Yeshiva University Museum filled with a different kind of remembrance, one rooted not only in memory but in urgency. The program, “AI vs. Antisemitism: Defending Truth in the Age of Generative Hate,” gathered technologists, researchers and community leaders to confront a rapidly evolving threat: a world in which artificial intelligence can both distort reality and defend it.
From the opening moments, organizer Chen Shmilo, former CEO of the 8200 Alumni Association, made clear that the event was not only about remembrance but about responsibility. As a descendant of Holocaust survivors, he spoke to the stakes of allowing hatred, now amplified by algorithms, to go unchecked. The initiative he helped launch, “Hack the Hate,” reflects a growing recognition that the fight against antisemitism has entered a new domain: digital, decentralized and increasingly powered by AI.
That urgency was echoed by Yeshiva University President Ari Berman, who described the Holocaust as “an assault on reality itself.” Today, he warned, that distortion has migrated “from propaganda to platforms,” where misinformation can spread at unprecedented speed. Yet the same tools, he said, can also be used to “increase light.” Throughout the evening, that duality of AI as both risk and opportunity remained central.
During a fireside discussion, Yfat Barak-Cheney and Ben Good of Meta explored how platforms are grappling with AI-generated antisemitism. Good explained that combating harmful outputs requires more than rigid rules. Instead, AI systems are increasingly trained to understand intent, that is, why certain content is harmful, so they can apply that reasoning dynamically across new and unpredictable scenarios. But even as companies refine safeguards, the landscape outside their platforms continues to evolve in ways that are harder to detect and control.
Research presented later by Liram Koblentz-Stenzler, head of the Antisemitism and Extremism Desk at Reichman University in Israel, in conversation with Tamar Avnet, director of graduate programs and associate dean in the Sy Syms School of Business, revealed just how early and subtly antisemitic narratives can take root. Monitoring online ecosystems, Koblentz-Stenzler described how extremist ideas often begin far from mainstream platforms: in gaming communities, alternative networks and hybrid digital spaces where ideology blends with entertainment and even finance.
“I monitor consistently this kind of content,” said Koblentz-Stenzler, describing how online video games like Call of Duty or Roblox can become unexpected entry points. Conversations that begin innocently about gameplay can gradually introduce coded language or slurs. Those who respond are then quietly funneled toward more radical spaces, such as private messaging channels.
What makes this particularly troubling, she said, is the age of the participants: “You can hear the voices of the kids, sometimes even parents telling them it’s bedtime.”
The process is gradual, what she described as “soft radicalization,” but its endpoint can be far more severe. That same pattern of normalization appears in more surprising places. In her latest research, Koblentz-Stenzler traced how antisemitic language and imagery are being embedded into cryptocurrency ecosystems. Meme-based digital coins, whose value depends on visibility and virality, can incorporate slurs and conspiracy theories into their branding and promotion. As these assets circulate, they draw in ordinary users, many of them unaware of the underlying associations.
Over time, the language itself becomes detached from its origins, normalized through repetition. “The wallets don’t necessarily belong to extremists. Regular people trade with it,” she said. “When people see the word later, they won’t understand that it’s antisemitic.”
The implication is stark: antisemitism today is not only spreading, it is being disguised, repackaged and integrated into everyday digital experiences. This aligns with a broader shift highlighted throughout the program. Unlike the centralized propaganda of the past, contemporary antisemitism is decentralized and often difficult to recognize. It moves fluidly across platforms, communities and formats by adapting to evade detection and resonate with new audiences.
That makes early detection critical. By analyzing fringe platforms and emerging trends, researchers can identify “signals” before they reach mainstream visibility. This proactive approach, several speakers suggested, may be one of the most effective ways to counter the spread of digital hate. Yet the evening was not solely focused on threats. It also showcased how AI can be harnessed to counter them.
From tools that detect harmful narratives in real time to projects that use AI-generated avatars to safely share survivor testimony, the event highlighted a growing ecosystem of technological responses. Initiatives like the “One Signal Collective,” launched at the program’s close, aim to bring together engineers, researchers and community leaders to build coordinated, scalable solutions. The message was clear: the fight against antisemitism in the digital age cannot rely on any single institution. It requires collaboration across sectors, across borders and across disciplines.
As the program concluded, the significance of the setting lingered. On a day dedicated to remembering the consequences of unchecked hatred, the conversation had turned toward the future: toward algorithms, data and the systems that will shape how truth is understood in the years ahead.
“In that future, the challenge is not only to confront hate when it appears,” said Professor Avnet, “but to recognize how it evolves and to ensure that technology, rather than amplifying distortion, becomes a force for clarity, accountability and truth.”