Artificial intelligence (AI) can lie. For example, a chatbot recently claimed that Bashar al-Assad was still president of Syria. The chatbot was hallucinating. By then, Ahmed al-Sharaa had long been head of the transitional government.
Errors only corrected upon request
tuenews employee Oula Mahfouz knew the correct answer and pointed out the chatbot’s error. But it insisted on the false information. Only when explicitly asked did it correct its mistake and provide up-to-date information.
Up to 40 percent is fabricated
This example is not an isolated case, as a study by the European Broadcasting Union (EBU) shows. The EBU is an association of 68 public broadcasters in 56 countries. It says that despite improvements, chatbots are still unreliable.
In its study, the EBU found that chatbots fabricate up to 40 percent of their answers and present them as facts. Nevertheless, more and more people worldwide trust them and believe the false information.
AI often uses outdated data
According to Tagesschau, these errors have several causes. The AI relies on outdated data from its training. Or it hallucinates, stringing together plausible-sounding words into statements even when the facts are wrong. Sometimes it also invents sources that do not exist. Or it links facts that do not belong together.
Use more than one source
A particular problem for inexperienced users: according to Tagesschau, AI does not automatically use current information. It only does so if users explicitly request this in the input (the “prompt”), for example: “Please search for current information on this topic.” Experts therefore advise always consulting more than one source. This rule applies in general, not just to chatbots.
About the EBU study:
https://www.ebu.ch/Report/MIS-BBC/NI_AI_2025.pdf
About the Tagesschau report:
https://www.tagesschau.de/wissen/technologie/kuenstliche-intelligenz-fakten-100.html
tun25110201

