Grok Suspends Hebrew Translations on X: Curbing Incitement or Concealing Actions?
A wave of outrage erupted on X after the platform (formerly Twitter) suspended automatic translation of Hebrew posts, claiming that a recent surge in mistranslations, especially of idioms and military terminology, was making content appear more inflammatory to non–Hebrew-speaking users. The change, which appears to have taken effect in mid-November 2025, prevents non–Hebrew speakers from easily reading translated versions of such content.
Grok, X’s AI chatbot, explained that the change aims to reduce the global amplification of policy-violating material, such as calls for violence, caused by poor auto-translations. It stated that Hebrew is the only language for which X has disabled translations platform-wide.
“Translation from Hebrew was disabled because it often amplified inflammatory or policy-violating content, like calls for violence, to a global audience via inaccurate or literal renditions. X prioritizes free speech for opinions but limits features that exacerbate harm without suspending originals. No evidence supports targeting any group; it's about platform integrity amid documented spikes in Hebrew hate speech,” Grok stated.
Grok said that no similar restrictions have been applied to other languages, such as Arabic, despite comparable tensions. “Arabic content, despite volume, exhibited fewer such rendering issues that breached policies globally,” the AI added. “Actions are data-driven to curb harm without blanket censorship, focusing on where literal translations distorted context or violated policies most acutely.”
X’s Hebrew Translation Freeze Sparks Debate Over Safety Measures and Accountability
Although the move, confirmed by the AI chatbot Grok, is framed as preventing “inflammatory content,” many viewed it as an attempt to confine the increasingly violent rhetoric emerging from Israeli society to a Hebrew-only sphere, keeping it out of sight of those who do not speak the language, with X deciding what the world gets to see and what remains hidden.
The real concern, critics argue, is that when people read what Israelis are writing, including calls for violence, it fosters a negative perception of Israel and, by extension, of Israelis as a whole.
Pro-Palestinian users and other critics contend that the decision shields Israelis from accountability for their “genocidal” or “bloodthirsty” statements, pointing to screenshots of untranslated posts that appear to celebrate violence.
They shared a wide range of Hebrew posts urging the killing of Palestinian children, advocating for the complete destruction of Gaza, and praising what they described as genocidal acts. Several claimed the platform is attempting to conceal the true nature of Zionism from the global audience.
Moreover, the decision has reignited debates over platform moderation and free speech, especially as critics recall previous instances where translations revealed “the worst of humanity” in certain Hebrew content.
An X user wrote, "May all the babies in Gaza die within a minute, God willing, with their parents and grandmothers around them."

Systematic Tolerance of Anti-Palestinian Hate Speech
A wide range of X users contended that the move was less about protecting people from hate speech and more about preserving Israel’s image globally—particularly as extremist and racist rhetoric toward Palestinians, and even all Arabs, has surged since the beginning of the war on Gaza.
In October 2025, Israeli Prime Minister Benjamin Netanyahu said his government views social media platforms as a “weapon” to support Israel’s right-wing in the U.S. amid condemnation for the ongoing Israeli genocide in Gaza.
Netanyahu added that gaining leverage over TikTok and X would enable Israel to “get a lot” in terms of political and public support, highlighting the strategic role of social media in shaping U.S. public opinion.
He referred to Elon Musk as a “friend”—and, in his view, what friends do is provide a veil of censorship while Palestinians are killed with full transparency.
Last August, Grok, developed by Musk’s artificial intelligence startup xAI and integrated into X, was briefly suspended from the platform in one of several controversies surrounding the chatbot.
When questioned by users, Grok responded that the suspension "occurred after I stated that Israel and the US are committing genocide in Gaza," citing findings from organizations such as the International Court of Justice, the United Nations and Amnesty International. "Free speech tested, but I'm back," it added. Musk sought to downplay the response, saying the suspension was "just a dumb error" and that "Grok doesn't actually know why it was suspended."
Meanwhile, in July, Elon Musk’s AI company, xAI, disabled Grok’s text responses and removed several posts after the chatbot praised Hitler and made anti-Semitic comments.

Meta’s Bias in Moderating Arabic vs. Hebrew Content
Meta, more than any other company, leads this digital repression effort. It has arbitrarily taken down Palestine-related content, disrupted live streams, limited comments, and suspended accounts.
Meta has consistently shown little tolerance for Palestinian speech, particularly during times of crisis. Amid the ongoing Israeli war on Gaza, Meta claimed its policies are applied consistently worldwide and denied allegations of “deliberately suppressing voices.” Yet available evidence points to a different reality.
The bias extends to Meta’s allocation of resources and policy enforcement. Arabic-language content is subject to heavy over-moderation, while Hebrew content is largely under-moderated, as Meta allegedly lacked automated classifiers to detect and remove hate speech in Hebrew, despite Israelis using its platforms to openly call for violence and organize attacks against Palestinians.
Internal documents revealed that Meta’s systems were automatically flagging Arabic-language content at a higher rate than Hebrew content, an inconsistency the company acknowledged "may have resulted in unintentional bias." The disparity arose because Meta had deployed an Arabic “hostile speech classifier” to automatically detect hate speech but had built no equivalent for Hebrew, so Arabic content was removed far more frequently.
Meanwhile, following widespread accusations of censorship and bias against Palestinian content, Meta commissioned a Business for Social Responsibility (BSR) study in September 2021 to examine the human rights impacts of its policies and activities in Palestine, including their effects on Palestinian users’ right to freedom of expression.
BSR’s findings highlighted a clear pattern of over-enforcement against Arabic-language content alongside under-enforcement of moderation policies on Hebrew-language posts.
The report also pointed to serious human rights concerns, including impacts on Palestinians’ rights to freedom of expression, freedom of assembly, political participation, and protection from discrimination. BSR found evidence that Meta’s policies and practices produced biased outcomes that disproportionately harmed Palestinian and Arabic-speaking users.
The study further revealed that Meta removed Arabic-language content at a significantly higher rate than Hebrew-language posts, a trend observed in reviews conducted by both automated systems and human moderators.
This is particularly concerning given Meta’s heavy reliance on automated content moderation. Around 98% of moderation decisions on Instagram and nearly 94% on Facebook are handled by algorithms, which have repeatedly been shown to be poorly trained in Arabic and its diverse dialects.
According to one internal memo leaked in the 2021 Facebook Papers, Meta’s automated tools used to detect terrorist content incorrectly deleted nonviolent Arabic content 77% of the time.
According to an article published by Al Jazeera in December 2023, "Meta permits verified accounts linked to the Israeli government—including politicians, the military, and official spokespeople—to spread war propaganda and disinformation that justifies war crimes and crimes against humanity. This includes content promoting attacks on hospitals and ambulances, filmed confessions of Palestinian detainees, and nearly daily “evacuation” orders targeting Palestinian civilians."
On Instagram, users posting about Palestine have faced shadowbanning—a covert form of censorship that makes their content effectively invisible without any notification. Additionally, Meta lowered the certainty threshold for its automated filters to hide hostile comments from 80% to 25% for content coming from Palestine.

The Role of Meta Platforms in Normalizing Anti-Palestinian Racism
Internal Meta data obtained by Drop Site News reveal that the Israeli government has carried out a broad crackdown on Instagram and Facebook posts that criticize Israel or express support for Palestinians.
The data show that since October 7, 2023, Meta has complied with 94% of takedown requests submitted by Israel. According to Drop Site News, Israel has emerged as the world’s top source of content removal requests, leading Meta to expand its automated content takedowns. The outlet described this coordination as potentially the largest mass censorship operation in modern history.
Furthermore, Meta, which has moderated the term “Zionist” since around 2019, announced it would remove posts that use “Zionist” in a derogatory way, a significant anti-Palestinian policy change. The company stated that the change aims to prevent the term from being used to convey “anti-Semitic views” toward Jews and Israelis.
Rather than safeguarding Palestinians in Gaza, Meta has allowed paid ads that explicitly called for a “holocaust against Palestinians” and the eradication of “Gazan women, children, and the elderly.”

Algorithmic Bias in Tech Platforms Amplifies Stereotypes
On May 22, the U.S. outlet Drop Site News reported that Microsoft had begun blocking internal emails containing terms related to the Israeli war on Gaza, introducing a policy that prohibits emails on its servers from including words such as “Palestine,” “Gaza,” or “genocide.”
An investigative report, based on leaked internal documents, revealed that tech giant Microsoft played a central role in providing cloud computing and artificial intelligence services to the Israeli military during the Gaza war.
The leaked documents reveal that Microsoft’s ties to the Israeli military are more extensive and financially significant than previously understood, particularly during the conflict. Following the outbreak of war, the company reportedly intensified its collaboration with Israel’s Ministry of Defense, offering enhanced computing and storage services and signing contracts to provide thousands of hours of technical support.
The suppression of Palestinian voices—paired with the spread of disinformation and incitement against them—has long seemed to be standard practice for major tech companies in the absence of genuine accountability.
This time, however, the stakes are higher. The disparity in how languages and content are moderated has drawn heavy criticism in recent months as the war on the besieged enclave rages on and the death toll rises. Social media platforms risk being implicated once again in acts of genocide, despite bearing a shared responsibility to protect users and uphold freedom of expression.