AI Models Like ChatGPT Are Not an Existential Threat to Humanity: Decoding New Study
A recent study sheds light on how large language models work, revealing that they are unable to learn autonomously or acquire new skills without human intervention. The finding challenges widespread fears that these systems could one day threaten humanity's existence.
Language Models: Powerful but Limited Tools
Large language models (LLMs), such as ChatGPT, are scaled-up versions of pre-trained language models (PLMs). They are trained on massive amounts of text from the web, which allows them to understand and generate natural language and other types of content. Yet despite their impressive performance across a wide variety of tasks, the new study shows that they lack the ability to learn or develop new skills independently: these AIs remain dependent on human instructions to carry out specific tasks.
Emergent Abilities: Myth or Reality?
One of the most discussed aspects of LLMs is their tendency to exhibit "emergent abilities": unexpected capabilities, such as understanding social situations or handling complex tasks, that the models were never explicitly trained for and that might suggest they are beginning to reason autonomously. However, the recent study indicates that these abilities are not the product of genuine reasoning, but of a well-known mechanism called "in-context learning" (ICL), in which models draw on specific examples supplied in the prompt to perform a task.
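To make that distinction concrete, here is a minimal sketch of what in-context learning looks like from the user's side. The example pairs and the build_icl_prompt helper below are illustrative, not taken from the study; the point is that the model's apparent "new skill" comes entirely from the examples packed into the prompt.

```python
# A minimal sketch of in-context learning (ICL): the apparent "skill"
# comes from solved examples included in the prompt, not from the model
# learning anything new. All names and example pairs are illustrative.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt: solved examples followed by the new case."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this final line
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It crashed twice in the first hour. Waste of money.", "negative"),
]

prompt = build_icl_prompt(examples, "Setup was painless and it just works.")
print(prompt)  # this string is what would be sent to an LLM completion endpoint
```

Everything the model needs to perform the task is visible in that printed string; remove the examples and the behavior degrades, which is consistent with the study's claim that emergent abilities trace back to in-context learning rather than autonomous reasoning.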
Unpredictability of Language Models: An Exaggerated Threat?
The unpredictability associated with LLMs' emergent abilities, particularly as these models are trained on ever-larger datasets, raises important questions about safety and security. Some worry that these abilities could one day include potentially dangerous skills, such as reasoning or planning, that could pose a threat to humanity. However, the study demonstrates that LLMs cannot master new skills without explicit instruction, meaning that they remain predictable, safe, and controllable, even though they can still be misused by individuals.
The Future of LLMs: More Sophisticated, But Still Under Control
As these models continue to grow in size and sophistication, they are likely to generate increasingly accurate language when presented with detailed and explicit prompts. However, they are highly unlikely to develop complex reasoning capabilities. This reality challenges the dominant narrative that this type of AI could pose a threat to humanity. As Dr. Harish Tayyar Madabushi, a computer science researcher at the University of Bath, explains, this misperception distracts from the real issues that need our attention, such as the risks associated with misinformation or fraud.
Experiments and Discoveries: What Science Really Reveals
To test the LLMs' ability to perform tasks they had never encountered before, the team of researchers, led by Professor Iryna Gurevych of the Technical University of Darmstadt in Germany, conducted more than 1,000 experiments. They found that the models' ability to follow instructions, their memory, and their linguistic proficiency were sufficient to explain both their performance and their limitations. This finding shows that the fear that these models could acquire dangerous skills, such as reasoning and planning, is unfounded.
The Real Threats: What to Really Watch Out For with AI
While the study shows that fears about existential threats posed by LLMs are overblown, it does not downplay other risks associated with AI. Professor Gurevych points out that even if these models do not develop complex reasoning skills, they can still be used to generate false information, which poses a significant risk to society. The researchers stress that the focus should be on managing the real risks posed by these technologies, rather than hypothetical and unfounded threats.
Implications for Users: How to Use LLMs Safely
For end users, this means it is important not to rely on LLMs to interpret and perform complex tasks without explicit instruction. Instead, users should provide clear instructions and examples where possible, especially for the most complex tasks. This will not only improve the accuracy of the results, but also minimize the risks of incorrect use or misinterpretation of AI capabilities.
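To make that advice concrete, the sketch below contrasts an underspecified request with an explicit one. Both prompt strings and the render_prompt helper are hypothetical; the practical pattern is what matters: state the task, the output format, and the constraints, and include an example.

```python
# An illustrative contrast between a vague prompt and an explicit one.
# Both prompt strings are hypothetical; the structure is the takeaway.

vague_prompt = "Summarize this contract."

explicit_prompt = """You are summarizing a contract for a non-lawyer.

Task: Summarize the contract below in plain English.
Format: exactly 5 bullet points, each under 20 words.
Constraint: explicitly flag any clause about termination or penalties.

Example bullet: "- Either party may cancel with 30 days' written notice."

Contract:
{contract_text}
"""

def render_prompt(template: str, contract_text: str) -> str:
    """Fill in the contract text; the result is what gets sent to the model."""
    return template.format(contract_text=contract_text)

print("Vague request:\n" + vague_prompt)
print("\nExplicit request:\n" + render_prompt(explicit_prompt, "(contract text here)"))
```

The explicit version leaves far less for the model to guess, which both improves accuracy and keeps its behavior within the bounds the user actually intended.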
This article has shown that fears about large language models, such as ChatGPT, posing a threat to humanity are often overblown. While these technologies are powerful and impressive, they remain under human control, and attention is better directed at the real risks they pose, such as misinformation and fraud.
Source: ACL Anthology