Experts call for rules on Artificial Intelligence

Simoncini, the alert on AI is dictated by the need for rules

It was the urgency of establishing rules for a technology as pervasive as Artificial Intelligence that prompted the alert launched by the Center for AI Safety. “The extensive use of artificial intelligence is on the one hand driving a real revolution and on the other posing serious problems”, observes one of the signatories of the declaration, information technology expert Luca Simoncini, former professor of Information Engineering at the University of Pisa and former director of the Institute of Information Technologies of Italy's National Research Council.

“Artificial intelligence is so pervasive that it has a strong impact on many sectors of social life (just think of the risk of producing fake news, or of the control of autonomous cars), as well as on economic, financial, political, educational and ethical matters”, notes the expert. “It is evident – he adds – that no one can object if an emerging technology is used for beneficial purposes, for example in the biomedical or pharmacological fields”.

Consequently, even if speaking of humanity's risk of extinction may seem hyperbolic, according to Simoncini the Center for AI Safety declaration recalls the 1955 manifesto in which Bertrand Russell and Albert Einstein denounced the risks of nuclear weapons. The case of artificial intelligence is different, but the point is the same: clear rules and awareness are needed. “We often forget that these systems are fallible”, adds Simoncini, while the large companies active in the sector “base their activities only on technological dominance; they have not considered the problem of regulation”. As developments in the autonomous car sector demonstrate, tests follow “an empirical approach”, without considering “the need to move away from systems capable of making autonomous decisions without human intervention, and toward systems that assist the driver, who in any case retains the ability to intervene and regain control at any time”.

Even in the case of chatbots like ChatGPT, “their use should be understood as an aid, not as the replacement of human capabilities by an artificial intelligence system”. We should think right now “of the need to set limits and constraints”, concludes Simoncini, pointing to the misuse of artificial intelligence to package fake news that is increasingly difficult to recognize: “the difficulty of distinguishing between true and false – he concludes – could create situations that are difficult to manage”.

Battiston, ‘powerful algorithms that require rules’

Rules are needed to manage powerful algorithms such as those of artificial intelligence and to avoid unforeseen effects: this is the meaning of the alert launched by the Center for AI Safety, according to physicist Roberto Battiston of the University of Trento, one of the signatories of the declaration. “These generative AI algorithms have proven to be very powerful at interfacing with people using web data and natural language, so powerful that they could generate unforeseen side effects,” Battiston notes.

“Nobody today really knows what these effects could be, positive or negative: time and experimentation are needed – continues the physicist – to create rules and standards that allow us to manage the effectiveness of this technology while protecting us from its dangers. It is not a question of the threat of a superintelligence that could overwhelm humanity, but of the consequences of how humans become accustomed to using these algorithms in their work and in the daily life of society”. Think, for example, he adds, “of possible interference in electoral processes, the dissemination of false news, or the creation of news channels serving specific disinformation interests”.

For this reason, he observes, “we need to be prepared to manage these situations; we have already seen the first signs of problems of this kind in past years, with the Cambridge Analytica affair and the guerrilla tactics of Russian trolls on the web”. Usually, Battiston continues, “when man fails to understand the reality that surrounds him, he invents myths, ghosts and monsters, to try to protect himself from dangers through a certain kind of mythological story. The game is still firmly in man's hands, but the tools available are much more powerful than in the past”.

Turning to the comparison with atomic weapons, recently invoked in relation to the risks of artificial intelligence, Battiston observes that “when we discovered the power of the atom, we had to find a way to contain the threat of a nuclear confrontation. For the moment we have succeeded, for about 80 years. Someone – he continues – has compared the power of these technologies to nuclear power, calling for the creation of suitable rules to deal with these risks. There is probably some truth in this. I believe, however, that it is very important to understand how these algorithms work, because only in this way – he concludes – will we be able to put in place an appropriate set of rules of social containment, while at the same time exploiting their enormous positive potential”.

Source: Ansa
