Largest-ever study on AI risks warns of "substantial or extreme" danger of disinformation

Source: Journalism Lab, Luca de Tena Foundation

More than half of respondents believe that "substantial" or "extreme" concern about AI's ability to spread disinformation is warranted.

By the Editorial Staff

A comprehensive survey of 2,778 artificial intelligence (AI) researchers who have published in leading AI venues warns of significant risks associated with the technology, with particular emphasis on misinformation.

This study, the most comprehensive of its kind to date, covers not only expectations about AI's technological progress but also its potential social and ethical impacts, with particular attention to the challenges facing the media and the veracity of information.

Disinformation: the predominant concern

The survey results indicate majority concern among AI experts about several potential scenarios, with the spread of false information, such as deepfakes, ranking as the most alarming. An overwhelming 86% of respondents believe that the spread of false information warrants "substantial" or "extreme" concern within the next thirty years. This underscores AI's disruptive potential to influence public opinion and democratic debate.

Other scenarios of major concern include large-scale manipulation of public opinion (79%), the use of AI by dangerous groups to create powerful tools such as engineered viruses (73%), the use of AI by authoritarian rulers to control their populations (73%), and worsening economic inequality driven by AI systems that disproportionately benefit certain individuals (71%).


Although the study does not detail how AI could be used to propagate disinformation, the nature of current AI technologies, such as advanced language models and synthetic content generation, points to an era in which distinguishing fact from fabrication will be increasingly difficult. Experts foresee a future in which the veracity of information becomes ever harder to guarantee, highlighting the need for effective strategies to manage this emerging risk.

The future of AI

In terms of technological advances, the survey results indicate at least a 50% chance that AI systems will achieve several milestones by 2028, such as autonomously building a payment processing website or creating music indistinguishable from the work of a popular musician. Respondents also estimate a 10% chance that machines will outperform humans at every possible task by 2027, rising to 50% by 2047.

Despite expectations of progress, there is considerable uncertainty among respondents about the long-term value of advances in AI. While 68.3% believe that positive outcomes from superhuman AI are more likely than negative ones, a significant share assign a non-negligible probability to extremely bad outcomes, including human extinction.


The study highlights the complexity of the factors surrounding AI development. The findings suggest that while AI has the potential to transform many aspects of life and work, it also carries risks that must be managed with caution and responsibility, especially in the realm of information and media. The research stresses the need for a balanced, well-informed approach to AI that prioritizes managing its potential risks while exploring its vast potential.

More information in this PDF
