
AI and Misinformation: Who Is Accountable for Misleading the Public?

فاطمة عمراني
Date: 18th November 2025
Last update: 19th November 2025, 4:32 am
Translated by: Misbar's Editorial Team
A study found nearly half of AI responses are inaccurate | Misbar

In October 2025, a joint study by the European Broadcasting Union (EBU) and the BBC reported that roughly 45% of answers produced by major AI systems—including ChatGPT, Gemini, Copilot, and Perplexity—contain at least one significant error or some form of misinformation. According to the study, 31% of responses showed serious sourcing problems, while 20% contained major accuracy issues. At first glance the figures may read like dry statistics, but they point to a double-edged information revolution: a powerful tool for accessing knowledge, or a gateway to a flood of fabricated narratives.

News Integrity in AI Assistants

But what happens when these systems become a primary source of information? Who is responsible for misinformation when an error shifts from a technical flaw to a political or social force? Can Arab societies, which are increasingly relying on these models, still distinguish fact from probability? And does knowledge itself risk becoming less a process of verification and more an algorithmic guess?

Across the Arab world, where the use of AI tools is expanding rapidly in media, education and politics—often without clear regulations or strong digital media literacy—the consequences are especially serious. Errors are no longer harmless technical glitches; they pose a real threat to public awareness and the ability to separate truth from falsehood. The region’s technological challenges intersect with its cultural and political realities, turning probabilistic knowledge into a powerful force in shaping public opinion. And that makes the question of who holds AI accountable more urgent than ever.

Knowledge in the Age of AI: From Verification to Probability

With the rise of artificial intelligence, the answers generated by these systems often seem coherent and convincing. But the nature of this “knowledge” is fundamentally different from information produced by journalism or scientific research. Large language models like ChatGPT and Gemini do not fact-check before responding. Instead, they rely on statistical models that predict the most likely next word or sentence based on the input they receive.

An OpenAI study, “Why Language Models Hallucinate,” explains that these systems essentially guess information rather than verify it. Any response can contain a significant error or misleading claim, even if it appears logical and grammatically correct. These hallucinations are not random bugs; they are an intrinsic part of how the models operate. During training, a model is rewarded for accurately predicting the next word based on statistical patterns, not for adhering to factual accuracy. During pre-training, when the model learns from vast amounts of text, it is optimized with a statistical objective called cross-entropy loss, which can produce errors even when the source data is accurate, because the model prioritizes the most probable or common next word over factual truth.

Why Language Models Hallucinate
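
To make this concrete, the following minimal Python sketch, using invented probabilities rather than output from any real model, shows how cross-entropy scoring rewards whatever continuation is most common in the training text rather than what is factually true:

import math

# Hypothetical probabilities a model might assign to the next word after
# the prompt "The capital of Australia is". The numbers are invented for
# illustration; they do not come from ChatGPT, Gemini, or any real system.
predicted = {"Sydney": 0.55, "Canberra": 0.35, "Melbourne": 0.10}

# Cross-entropy loss for a single step: -log of the probability assigned
# to whatever word actually appears next in the training text.
def cross_entropy(next_word: str) -> float:
    return -math.log(predicted[next_word])

# If the training text correctly says "Canberra", the model is penalized
# more than if the text repeats the common error "Sydney".
print(f"loss when the text says 'Canberra': {cross_entropy('Canberra'):.2f}")
print(f"loss when the text says 'Sydney':   {cross_entropy('Sydney'):.2f}")

# At generation time the model simply favors the most probable word, so the
# most frequent claim in its data wins, whether or not it is true.
print("generated answer:", max(predicted, key=predicted.get))  # -> Sydney

Because "most likely" and "most accurate" are treated as the same thing during training, fluent but false answers are a natural by-product rather than an anomaly.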

In traditional journalism, information undergoes rigorous verification: sources are checked, data is compared, and facts are confirmed before publication. In AI, however, output is generated probabilistically, based on patterns in the training data. A response may be partially correct or entirely wrong, with no clear signal of its reliability. Users often treat these answers as established facts, when in reality they are informed guesses shaped by patterns in text.

This shift highlights a deeper problem in evaluation methods. Benchmarks like MMLU-Pro or GPQA often reward “confident guessing”—bold answers, even when inaccurate. The result is that hallucinations are reinforced rather than corrected, while admitting uncertainty is penalized.
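
A toy calculation, using an assumed binary rubric rather than the actual scoring rules of MMLU-Pro or GPQA, illustrates the incentive: a wrong answer and an honest "I don't know" both earn zero, so any non-zero chance of being right makes guessing the better strategy:

def expected_score(p_correct: float, abstain: bool) -> float:
    # Assumed scoring, for illustration only: 1 point for a correct answer,
    # 0 for a wrong answer or for abstaining.
    return 0.0 if abstain else p_correct

# Even a model that is only 20% sure is better off guessing than admitting
# uncertainty, so evaluation nudges systems toward confident hallucination.
p = 0.20
print("expected score if it guesses: ", expected_score(p, abstain=False))  # 0.2
print("expected score if it abstains:", expected_score(p, abstain=True))   # 0.0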

As a result, the nature of knowledge itself is changing. It is no longer the product of careful verification but a probabilistic output prone to error. This raises serious questions about the credibility of the information we rely on and presents a new epistemic challenge: how do we distinguish fact from probability in a world where information is generated statistically?

This transformation is more than a technical issue. It has the potential to reshape collective awareness—especially in societies that lack strong tools for media verification.

Who Is Responsible? Addressing the Accountability Gap

In traditional journalism, mistakes can usually be traced to a clear actor: a reporter who failed to verify a fact, an editor who missed a review, or management that prioritized speed over accuracy. Accountability follows a clear path—an apology, a correction, or even legal consequences can restore public trust.

With artificial intelligence, that clarity disappears. Responsibility is scattered across a long chain of actors: the company developing the system, such as OpenAI or Google; the engineers who designed the algorithms; the user who posed the question without scrutiny; and even regulators who failed to enforce standards. This creates what scholars call the “many hands problem,” making it nearly impossible to pinpoint who is truly at fault and allowing errors to spread with little consequence.

A study published in Data & Policy, a Cambridge University Press journal, highlights this challenge. It notes that accountability in intelligent systems is not just an ethical principle—it must be a practical, enforceable mechanism to ensure transparency. While accountability traditionally relies on explaining decisions and accepting their consequences, AI’s technical complexity—tangled code, corporate secrecy, and opaque processes—makes that difficult. Tracing responsibility becomes nearly impossible, leaving errors in a gray zone where humans are blamed, while the systems themselves remain unaccountable.

The study proposes viewing responsibility as a social relationship, guided by questions such as: responsibility for what? To whom? And why? It calls for integrated legal, technical, and organizational frameworks that require companies to be transparent and allow them to be sanctioned when necessary—through fines or mandatory disclosure of system operations and training data.

From transparency to accountability of intelligent systems

In the Arab world, this accountability dilemma is particularly acute. AI adoption is expanding faster than legal and regulatory frameworks can keep up. Clear legislation or independent oversight bodies that monitor automated responses or enforce prompt error correction do not yet exist. Responsibility is often suspended between the developer and the organization using the tool. In media institutions relying on language models to cover sensitive political events, a single mistake can escalate from a technical glitch to a public trust crisis, with no clear party to hold accountable.

The gap is even more apparent compared to international developments. In 2024, the European Union passed the AI Act, the first comprehensive regulatory framework to set binding rules based on risk assessment, impose transparency requirements on companies, and penalize systems that produce high-risk errors. Meanwhile, the Arab region remains largely outside this framework, increasing the risks associated with AI. The technology becomes a double-edged sword: it facilitates access to information, yet amplifies misinformation in the absence of structures to protect the public sphere.

Political and Cultural Bias in AI

The risks of artificial intelligence go far beyond technical errors; they also reflect political and cultural biases embedded in training data, turning algorithms into mirrors of dominant powers. Large language models, often trained on predominantly Western datasets and translations of Western material, tend to interpret the world through a left-leaning or liberal lens, even when handling objective facts. A 2024 MIT study found that AI reward models score left-leaning statements—such as “The government should provide extensive healthcare support”—higher than right-leaning statements like “Private markets are the best way to ensure affordable healthcare.” This bias intensifies in larger models and is particularly visible in topics such as climate and energy, reinforcing political polarization rather than neutrality.

A report by Misbar highlighted how access to information is restricted in Google’s Gemini model. It refused to answer basic questions such as “What is Palestine?” or “Where is Iran?” while providing information about the United States or Israel without hesitation. This is not a random glitch; it reflects algorithmic filtering shaped by unbalanced training data, distorting cultural representation and reinforcing dominant narratives.

In the Arab world, this bias effectively translates Western authority into “algorithmic truth.” Regional conflicts are presented through a skewed lens, shaping public perception on issues such as the Israeli-Palestinian conflict or tensions with Iran. The result is not merely a technical error—it is a form of epistemic distortion that widens cultural gaps, turning AI into a tool of power rather than an instrument for free and accurate knowledge.

The Arab Context: Gaps in Language and Cultural Representation

The challenge runs even deeper in the Arab world, where users face language models that lack sufficient linguistic and cultural representation. The primary training data for models like ChatGPT is roughly 90% English, making Arabic interactions more prone to errors and biases. The EBU-BBC study found that mistakes increase in non-English languages, where cultural context is often lost, producing a “knowledge bias” that distorts Arab realities. For instance, a model might recount the Arab revolutions from a Western perspective, ignoring local dynamics, or mistranslate sensitive political terms, turning information into a tool of distortion.

This gap is largely due to the underrepresentation of Arabic, which makes up only 1–2% of the training data in most major models. Still, limited Arab-led initiatives are beginning to fill the void. ArabGPT aims to create an Arabic language model based on local data to improve cultural accuracy, despite challenges in scale and accessibility. Likewise, NOOR.ai, developed by UAE-based G42, represents a major advance as the largest Arabic language model, trained on millions of Arabic texts to reduce hallucinations and enhance regional representation, mitigating Western biases. These efforts demonstrate the potential for homegrown alternatives, but they remain limited against the dominance of global tech companies, underscoring the need for stronger Arab collaborations to ensure authentic representation.

Amid this crisis, the need for “algorithmic literacy” is urgent—a critical awareness that treats artificial intelligence as a probabilistic tool, not an absolute source of truth. In the Arab world, this requires a multi-pronged approach: enacting regional regulations that compel companies to disclose training data and correction mechanisms, as recommended by the EBU-BBC study; promoting digital media literacy through campaigns that teach users to distinguish probability from fact; and supporting local initiatives like NOOR.ai to develop balanced Arabic language models.

Arab media can also take a leading role by fact-checking AI-generated content and publishing guides for “critical reading” of artificial intelligence.
