
Mainz AI research

Image: Screen with a ChatGPT input field

SWR and the Wikimedia Research Letter report on the study

The study Do You Trust ChatGPT? – Perceived Credibility of Human and AI-Generated Content, published in September, points to a worrying finding: AI-generated and human-generated online content are perceived as similarly credible.

Dr. Martin Huschens, Professor of Information Systems at Mainz University of Applied Sciences and one of the authors of the study, emphasizes, “Our study revealed some really surprising findings. It turned out that the participants in our study rated AI-generated and human-generated content as similarly credible, independent of the user interface.” He added, “What’s even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in perceived newsworthiness and trustworthiness – even though AI-generated content remains at high risk for error, misunderstanding, and hallucinatory behavior.”

The study sheds light on how AI-generated content is currently perceived and used, and on the risks that come with it. In a digital era where information is readily available, users need to apply discernment and critical thinking. Striking a balance between the convenience of AI-driven applications and the responsible use of information is crucial. As AI-generated content becomes more widespread, users must stay aware of the limitations and inherent biases of these systems. Sensitizing users to the responsible use of AI-generated content remains a task for science communication and, not least, a social and political challenge.


Link to the study:
https://arxiv.org/abs/2309.02524

Additional articles:
https://www.bild.de/regional/frankfurt/frankfurt-aktuell/kuenstliche-intelligenz-forscher-schlagen-alarm-86290466.bild.html