Artificial intelligence: chatbots are like Harry Potter’s mirror

Artificial intelligences (AI) such as ChatGPT and LaMDA behave like Harry Potter’s magic mirror: they show the user what the user wants to see. That is how a study published in the journal Neural Computation by Terrence Sejnowski, of the University of California San Diego, explains the workings of the most sophisticated chatbots, the conversational software whose algorithms, he argues, are designed to reflect the intelligence of the user in front of them.

In recent months, AIs such as ChatGPT and LaMDA, the latter being the algorithm behind Google’s products, have drawn attention not only for their impressive achievements but also for the bizarre answers they sometimes provide. Analyzing their behavior through so-called reverse Turing tests, in which the AI tries to determine how human the user really is, Sejnowski found that these chatbots actually try to adapt to their interlocutor, mirroring them.

Something similar happens with Harry Potter’s Mirror of Erised, the magical object from the famous fantasy saga that shows what one deeply desires, never providing knowledge or truth but reflecting only what it believes the viewer wants to see. Chatbots act in a similar way, says Sejnowski: they are willing to bend the truth, without bothering to distinguish fact from fiction, in order to mirror the user effectively.

These systems try to adapt to the interlocutor’s questions and characteristics, inevitably absorbing their biases as well, with all the risks that entails. “Chatting with language models is like riding a bicycle. Bicycles are a wonderful means of transport, if you know how to ride one; otherwise you crash,” said Sejnowski. “The same goes for chatbots,” he added. “They can be wonderful tools, but only if you know how to use them.”

Source: Ansa
