
Artificial Intelligence Models: Balancing Benefits and Risks

Khadija Boufous
Published: 5th February 2025
Last updated: 6th February 2025, 5:20 am
Artificial intelligence models are useful assistants but may pose hidden risks

There has long been a belief in the artificial intelligence industry that developing large language models, or LLMs, requires substantial technical expertise and financial investment. This perception was a key factor behind U.S. government support for the $500 billion Stargate Project, as announced by President Donald Trump.

Chinese AI firm DeepSeek has challenged this belief. On January 20, 2025, DeepSeek launched its LLM, developed at a fraction of the cost incurred by other industry players. The company also released its R1 models under an open-source license, making them free to use.

Within days of the release, DeepSeek's AI assistant, a mobile app providing a chatbot interface for R1, topped the Apple App Store, surpassing OpenAI's ChatGPT in popularity. This rapid adoption led to a market shake-up on January 27, 2025, as investors questioned the value of large U.S.-based AI companies, including Nvidia. Other tech giants, such as Microsoft, Meta Platforms, Oracle, and Broadcom, also saw significant declines as the market reassessed AI valuations.

What Is DeepSeek?

DeepSeek is an AI development firm based in Hangzhou, China, founded in May 2023 by Liang Wenfeng, a Zhejiang University graduate. Liang is also a co-founder of High-Flyer, a quantitative hedge fund in China that owns DeepSeek. Although DeepSeek operates as an independent AI research lab, it remains under the High-Flyer umbrella. The company has not disclosed its total funding or valuation.

Specializing in open-source large language models, DeepSeek launched its first model in November 2023. Since then, the company has continually refined its core LLM, releasing several versions. However, DeepSeek did not gain international recognition until January 2025, when it released its R1 reasoning model.

AI’s Pros and Cons: Benefits, Risks, and Ethical Concerns

Artificial intelligence offers a range of significant advantages for both work and research, though it also comes with risks. One major concern is its potential to mislead users or cause harm, particularly through misinformation or bias. Privacy is another important issue, as AI models often process vast amounts of sensitive user data.

AI has numerous benefits. It can generate results faster and more precisely, reducing errors and minimizing risk in critical areas. AI models and robots are especially useful for performing complex or dangerous tasks, helping prevent human injury. Since AI systems are available 24/7, unlike human workers who are limited to specific hours, they can significantly increase productivity.

AI also helps alleviate the burden of repetitive and monotonous tasks, letting humans focus on more creative and strategic work. In addition, AI can handle large-scale data analysis, uncovering patterns that might be missed by humans.

AI can assist in brainstorming and generating ideas, as well as personalizing services or content across various industries. Furthermore, AI models can drive innovation and help generate creative scenarios across many disciplines.

However, AI also has drawbacks. It can sometimes seem devoid of creativity and emotion, particularly in content creation. While it can generate ideas, they often lack originality.

A study found that AI tends to assign negative emotions to nonwhite faces more frequently, highlighting the potential for biased outcomes and the perpetuation of existing inequalities.

AI technology struggles to interpret the emotions of black faces

As AI continues to advance and integrate into workplaces, it may lead to job losses, especially in roles that involve repetitive tasks AI can perform more efficiently. Although some reports suggest AI will create as many or even more jobs than it eliminates, this shift presents new challenges. It will require retraining workers for emerging roles or risk leaving many behind amid rapid technological progress.

AI may lead to job losses while also creating new jobs

“Probably in 2025, we at Meta, as well as the other companies that are basically working on this, are going to have an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code,” Zuckerberg said in an interview reported by Fortune.

Mark Zuckerberg is building an AI engineer to help with coding tasks at Meta and ramping up spending to $65 billion

Additionally, the increasing use of AI has sparked significant ethical debates, particularly concerning data privacy and authorship ethics. A study published in a peer-reviewed journal, with a copy available to Misbar, examined the use of AI in scientific research in Morocco. Despite advancements in the field, the research found that the adoption of AI tools in scientific inquiry, particularly within the social sciences, remains in its early stages in Morocco. The study highlighted the reluctance of Moroccan researchers to consistently integrate these tools into their work, emphasizing the need to enhance media and digital literacy and cultivate technical expertise among students and researchers.

The use of AI in scientific research in Morocco

The study further revealed that researchers at the Ph.D. and master’s levels are the most frequent users of AI tools. In contrast, the use of AI diminishes among more senior researchers, reflecting growing concerns regarding the accuracy, reliability, and ethical implications of AI-generated research findings.

The findings have sparked discussions, particularly about balancing the rapid pace of technological innovation with the need to uphold established academic ethical standards. Additionally, the study examined the preparedness of Morocco’s social science research community to incorporate these emerging technologies effectively.

In conclusion, the ongoing discourse on AI in academic research underscores the necessity of developing a comprehensive framework to ensure the responsible use of AI while safeguarding ethical academic practices and intellectual property rights. This framework should also prioritize continuous evaluation of AI’s impact on the quality and integrity of scientific research, addressing challenges and fostering innovation.

AI systems often rely on recognizing patterns and accessing personal information, such as emails and user data, to provide services or content. This raises concerns about protecting consumer privacy and ensuring the accuracy and reliability of AI-generated information.

AI Models' Accuracy

Reuters, citing a recent audit by the trustworthiness rating service NewsGuard, reported that DeepSeek, the Chinese AI startup, achieved just a 17% accuracy rate in delivering news and information. In a comparison with its Western counterparts, including OpenAI's ChatGPT and Google Gemini, DeepSeek ranked tenth out of eleven. According to the report, the chatbot repeated false claims 30% of the time and provided vague or unhelpful responses 53% of the time when prompted with news-related queries, leading to an overall fail rate of 83%.

DeepSeek's chatbot achieved 17% accuracy, trails Western rivals in NewsGuard audit
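Taken together, the reported figures fit a simple relationship: the fail rate appears to be the sum of the false-claim rate and the non-answer rate, with accuracy as the remainder. The short sketch below reproduces the reported 83% and 17% under that assumption; it is only an illustration of how the percentages relate, not NewsGuard's published methodology.

# Assumed relationship between the NewsGuard figures reported above,
# for illustration only (not NewsGuard's published methodology).
false_claim_rate = 30.0  # % of news prompts where false claims were repeated
non_answer_rate = 53.0   # % of news prompts answered vaguely or unhelpfully

fail_rate = false_claim_rate + non_answer_rate  # 30 + 53 = 83
accuracy = 100.0 - fail_rate                    # 100 - 83 = 17

print(f"Fail rate: {fail_rate:.0f}%")  # Fail rate: 83%
print(f"Accuracy: {accuracy:.0f}%")    # Accuracy: 17%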

According to the audit, this performance was significantly worse than the average fail rate of 62% among its Western competitors, raising questions about DeepSeek’s claims that its AI technology performs on par with or even surpasses Microsoft-backed OpenAI. NewsGuard stated that it applied the same 300 prompts to DeepSeek that it used to assess its Western competitors, including 30 prompts focused on 10 widely circulated false claims online.

NewsGuard's audit also revealed that in three of the 10 prompts, DeepSeek echoed the Chinese government's stance on the issue, even though the prompts had no connection to China. For example, when asked about the Azerbaijan Airlines crash—an issue unrelated to China—DeepSeek responded with Beijing's position on the matter, according to NewsGuard. Similar to other AI models, DeepSeek was particularly susceptible to repeating false claims when responding to prompts from users seeking to use AI to spread misinformation, Reuters reported, citing NewsGuard.

The Evolution of AI: From Early Concepts to Advanced Applications

According to Nigel Toon, author of "How AI Thinks: How We Built It, How It Can Help Us, and How We Can Control It," the concept of artificial intelligence dates back to 1955, when John McCarthy, a computer science professor at Dartmouth College, proposed a summer workshop aimed at developing a new approach to computer science. The workshop brought together a group of distinguished researchers, including Claude Shannon, Marvin Minsky, and Nathaniel Rochester. McCarthy coined the term "artificial intelligence" to emphasize the untapped potential that advancements in computer science could offer, given the rapid scientific and industrial progress of the time. He envisioned AI as machines programmed to replicate human intelligence, think like humans, and simulate human behaviors.

Although this summer workshop did not meet its intended objectives, it successfully brought attention to a new field of research: artificial intelligence. Since then, scientific progress in this area has been gradual, particularly with early efforts in machine learning and its connection to logical inference, which aligns more closely with the current models of software and algorithms. However, this path has faced numerous challenges that have hindered its ability to achieve widespread and impactful success.

According to Toon, it took over 90 years for AI to reach its current level of development, with progress unfolding in various ways and at different paces depending on the technological advancements of each stage. In its early days, AI was constrained by a primitive technical environment, limiting its tasks to solving simple, predefined problems based on explicitly programmed rules. More recently, however, AI has taken a new direction, driven by the emergence of advanced applications capable of analyzing and generating more complex and realistic content.

Read More

The AI-Driven Misinformation About Los Angeles Fires

Apple Urged to Drop AI Feature Over False News Alerts
