“The only way to avoid a nuclear war unleashed by an AI is not to have it”: an artificial intelligence argues against its own development

  • An AI brought to the University of Oxford to debate whether these machines can be ethical argued against its own development as the way to prevent a nuclear war.
  • Trained on far more information than a person can absorb in a lifetime, the machine was also able to imagine a utopian future in which machines help humanity.
  • What it could not envision, in any case, was a future in which data processing is not the most coveted prize for companies and countries.

Stories like this have almost become a classic. As the way they obtain answers becomes more refined and the way they express them becomes more sophisticated, artificial intelligences around the world generate a good handful of news items week after week.

Of course, not everything they say is equally surprising, and what the vast majority of the articles written about them actually reveal is how far this technology remains not only from becoming self-aware, but even from offering more or less coherent answers.

A few, however, also hint at how fast they are evolving. While it is true that Skynet is not about to wake up, it is no less true that it often seems to be.

This summer, for example, an AI surprised everyone with its ability to (virtually) resurrect the dead, in a story that crossed borders and even forced a promising project to halt its development.

And just a week ago, the journalist Alberto Aguiar, of Business Insider Spain, wrote an article using the well-known GPT-3 text generator in which, after some introductory paragraphs written by human hand, the machine concluded that the future of humanity is uncertain.

The news that arrived a few days ago from the University of Oxford is equally disturbing.


According to two academics from this prestigious university, writing in the specialist outlet The Conversation, an AI has just argued against its own development.

It did so within the framework of a debate held at the Saïd Business School, Oxford University's business school, to close a course focused precisely on AI and the ethical implications of this technology.

Alongside a large group of students, the debate featured Megatron Transformer, a machine whose bombastic name conceals an AI that assimilated the entire contents of Wikipedia as part of its training.

To this we must add some 63 million news articles written in English between 2016 and 2019, 38 gigabytes of Reddit discussions, and a huge quantity of Creative Commons-licensed material of all kinds.

That is far more than a human being can read in a lifetime. Far more, in fact, than could be read in several.

In other words, Megatron Transformer is very probably, at least on paper, the most widely read debater ever to pass through the Oxford Union, the venue where this meeting took place: a legendary debating chamber that has hosted formidable orators such as former British Prime Minister William Gladstone and Benazir Bhutto, the first woman to lead the government of Pakistan.

Hence, the words of Megatron Transformer, an AI owned by the chipmaker Nvidia, which developed it from an earlier Google project, send the occasional shiver down the spine.

“AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical,” said the machine.


“We’re not smart enough to make AI moral… In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

With this last thought, Megatron was alluding directly to the possibility that a misinterpretation of data by an AI controlling nuclear weapons without human supervision could trigger the apocalypse.

It is fair to say that, with its dire prognosis for humanity, Megatron was merely following orders.

In the debate clubs so popular in Anglo-Saxon culture, what is measured is the contenders' ability to argue, not necessarily who is right; the AI gave this answer when it was asked to argue against the motion.

