What will happen when artificial intelligence reaches the singularity, and could it kill people?

A variety of artificial intelligences (from chatbots to neural networks and autopilots in cars) are rapidly conquering the world, and they look like entertainment for humanity and something that will improve our lives in the near future. But it is likely that once AI hits the singularity, people may be in for a hard time.

This is according to a report from Popular Mechanics, which spoke with industry experts. They also discussed how to keep artificial intelligence from becoming an enemy.

Experts suggest that the moment of singularity may come as early as 2030. After that, artificial intelligence will surpass human intelligence, and the consequences of its activity will be very difficult to predict.

What is the singularity?

The singularity is the moment when machine intelligence equals or surpasses human intelligence. This concept was once considered by Stephen Hawking, Bill Gates and other scientists. In particular, the English mathematician Alan Turing, back in the 1950s, developed the Turing test, designed to find out whether machines are capable of thinking independently and whether they can reach a level of communication at which a person can no longer tell whether they are talking to an AI or to another person. ChatGPT has already shown that AI is capable of holding human-level conversations.

UC Berkeley doctoral student Ishane Priyadarshini explains that the main problem with AI is that its intelligence is virtually unlimited, while human intelligence is fixed, because we can’t just add more memory to ourselves to become smarter.

When can the singularity be reached?

Experts caution that statements about the imminent arrival of the singularity are, at best, speculative. Priyadarshini believes that the singularity already exists in pieces, and that the moment when AI completely surpasses human intelligence will not come soon. Still, people have already seen glimpses of the singularity, as in 1997, when the IBM Deep Blue supercomputer defeated the reigning world chess champion, Garry Kasparov.

Experts suggest that it will be possible to talk about achieving the singularity when AI can translate language as well as or better than a human translator.

However, Priyadarshini believes the best sign that AI has become smarter than humans will be when machines begin to understand memes, which are still beyond their reach.

What happens when AI reaches the singularity?

The problem is that people are too "stupid" to guess what will happen if AI gains superintelligence. To make such predictions, we would need a person who is also superintelligent. So humanity can only speculate about the consequences of the singularity with our current level of intelligence.

“You have to be at least as intelligent to be able to guess what the system will do … if we are talking about systems that are smarter than a person (superintelligent), then it is impossible for us to envisage inventions or solutions,” said Roman Yampolsky, an assistant professor of computer engineering and computer science at the University of Louisville.

As for whether AI can become an enemy to humans, according to Priyadarshini, it is also difficult to make predictions here. Everything will depend on whether its code contains contradictions.

“We want self-driving cars, we just don’t want them to run red lights and collide with pedestrians,” says Priyadarshini, explaining that bad code could lead an AI to treat running red lights, and running over people, as the most efficient way to reach its destination on time.

AI researchers know that we can’t 100% remove bias from code, she said, so creating a completely unbiased AI that can’t do anything wrong will be a challenge.
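The failure mode Priyadarshini describes is what AI researchers call a misspecified objective: if the system optimizes only for speed, unsafe behavior can look optimal. A minimal sketch, with entirely made-up routes, costs, and penalty values (none of this is from the article), shows how omitting a safety term from the cost function changes which action a toy planner picks:

```python
# Hypothetical toy planner: picks the route with the lowest cost.
# All names and numbers here are illustrative assumptions.

RED_LIGHT_PENALTY = 1_000  # large cost attached to the unsafe action

routes = [
    {"name": "run_red_light", "time_min": 8, "unsafe": True},
    {"name": "wait_at_light", "time_min": 11, "unsafe": False},
]

def cost(route, penalize_unsafe):
    """Travel time, optionally plus a penalty for unsafe behavior."""
    penalty = RED_LIGHT_PENALTY if (penalize_unsafe and route["unsafe"]) else 0
    return route["time_min"] + penalty

# Misspecified objective: only time counts, so the unsafe route wins.
naive = min(routes, key=lambda r: cost(r, penalize_unsafe=False))

# Objective that also values safety: the planner waits at the light.
safe = min(routes, key=lambda r: cost(r, penalize_unsafe=True))

print(naive["name"])  # run_red_light
print(safe["name"])   # wait_at_light
```

The point is not the numbers but the structure: the "contradiction in the code" is simply an objective that fails to encode everything humans actually care about.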

Can AI harm humans?

For now, AI has no feelings and is guided only by its knowledge. So it probably won’t become uncontrollable anytime soon or try to escape human control, simply because it has no such motivation.

However, as Yampolsky explains, uncontrollable AI could arise from how humans build it and from the paradoxes that form in its code.

“We don’t have a way to detect, measure or assess whether systems experience internal states … But this is not necessary in order for them to become very capable and very dangerous,” the scientist explained.

Priyadarshini agrees with her colleague, arguing that the only thing that could lead to an AI rebellion is a contradiction in its code.

“It (the AI. – Ed.) may not have any motives against people, but a machine that believes that a person is the root cause of certain problems may think that way,” the scientist explained.

However, if AI becomes intelligent enough to be self-aware and develops internal states, it could acquire a motive to dislike humanity.

Again, a poorly set task can lead to unintended deaths. As an example, Yampolsky cites a situation where an AI is asked to create a human vaccine against a hypothetical COVID.

The system will know that the more people who get COVID, the more the virus will mutate, making it harder to develop a vaccine for all variants.

“The system thinks … maybe I can solve this problem by reducing the number of people, so the virus cannot mutate so much,” says the scientist, suggesting that an AI with no concept of morality may choose the more efficient solution, even if it harms some of the people involved.

How can we prevent the catastrophe of the singularity?

We will never be able to rid artificial intelligence of all its unknowns: unintended side effects that humans cannot foresee, because we lack superintelligence.

“We are really looking at a singularity that will lead to the emergence of many rogue machines. If it reaches the point of no return, it can no longer be corrected,” warns Priyadarshini.

Earlier, GLOBAL HAPPENINGS also reported that a fourth law of robotics, beyond Asimov’s three, needs to be created for artificial intelligence.

Source: Obozrevatel
