Artificial intelligence needs limits: The time for rules is now

It was a somber letter, signed by Tesla CEO Elon Musk and a host of other tech figures. The race to develop artificial intelligence (AI), it warns, has gotten out of control. The web could be flooded with fake news and propaganda; even fulfilling jobs could be rationalized away. We could lose control of our civilization. More regulation is needed.

When entrepreneurs and AI experts conjure up such a dystopia, it is, on the one hand, counterproductive. Fear should not dominate the debate about artificial intelligence. The focus should be on the vision: how AI can make us more productive and our lives better.

Naivety is out of place

At the same time, they have a point. Naivety is out of place. The changes that artificial intelligence will bring will be so far-reaching that we cannot even fathom them yet. That is why the demand for regulation is absolutely correct.

ChatGPT is a wake-up call. The system, unveiled last November, made headlines for its ability to produce text that reads as if it were written by a human. ChatGPT has already passed medical exams, can program a functioning website, and can write a legal complaint in seconds. A race for supremacy in artificial intelligence has broken out between the big technology companies Google and Microsoft. China, too, is trying to catch up by any means necessary.

But according to which moral standards should artificial intelligence act? Which tasks can it take on, and which not? How can it be prevented from lying, committing crimes, or acting entirely on its own? Which undesirable social effects of artificial intelligence must be averted? Is it acceptable for people to lose skills once an AI takes them over? The answers to these questions must not be left to private companies with economic interests. Nor should companies regulate themselves.

Politicians have long understood that there is a need for action. There is an AI strategy at the EU level, and an AI regulation is in the works. But it could be years before it comes into force. Even the definition of AI remains contested, as does the question of which types of AI should be classified as risky.

There are concerns in politics about intervening too much and thereby choking off innovation. After a visit to OpenAI, the developer of ChatGPT, Digital Minister Volker Wissing warned against excessive regulation. This despite the fact that he had already been able to marvel at the latest, even more powerful version, GPT-4.

Politics can’t keep up at the moment

The debate is already fragmented, and politics simply cannot keep pace with the rapidly accelerating development of highly complex AI systems. In the open letter, AI experts warn that not even the developers of these systems can still understand or reliably control them.

Regulation should not be worked out by individual states on their own; even the EU level falls short. It is nothing less than a task for all of humanity, and it must be solved internationally. It is unrealistic to expect AI developers to impose a pause on themselves first – the competitive pressure is too great. The international community must therefore act quickly.

An appropriate starting point could be an international conference to which experts, politicians and companies from all over the world are invited. It should not be about small-scale regulation but about the big questions. Of course, such a format cannot solve every problem in one fell swoop. But after the launch of the latest version of ChatGPT, it is needed more than ever. Artificial intelligence needs limits. Only then can it change the world for the better.


Source: Tagesspiegel
