An AI manages to create a complete scientific article, and with it arises an ethical dilemma: can it be published in a scientific journal?
While Google’s Artificial Intelligence (AI) is still looking for a lawyer to settle its legal questions, on the other side of the planet another AI has managed to produce an entire academic paper. And it did so in just two hours.
This story, which could well belong to a dystopian reality, was born from a very simple premise. A Swedish researcher asked GPT-3, an AI, to write an academic paper about itself. Only 500 words to begin with, but with everything that a text of this genre implies. Minutes later it began to produce a document with sources and references. One that, according to the researcher, was “pretty good.” In fact, as the researcher explains, she had little hope that anything good would come of it: the instructions were, to say the least, vague and lacking in detail. Yet the AI was able to create a text in which all the elements were where they should be, and well supported.
But if the idea of an AI operating as a moderately competent researcher seemed crazy, the story went a step further, to a point that opened an ethical and legal debate about the role of this technology and how far it can go. Almira Osmanovic Thunström, the Swedish scientist who gave the AI this peculiar assignment, decided that the research by, and on, GPT-3 could be submitted to a scientific journal. So, can research not signed by a human be published?
First things first. Thunström asked the AI whether it was willing to have the research published in a journal. GPT-3 answered yes. She also asked whether, for any reason, there was a conflict of interest that might affect the veracity of the publication. The AI answered no. Does this mean that GPT-3 has emotions and is sensitive to certain issues? The question comes at a delicate moment, mainly after the conversation a Google engineer had with LaMDA, in which he claimed it was a sentient “being.” According to the researchers, this is not the case; nor is it for LaMDA, of which they point out that we are still far from a truly sentient entity. They have not, however, explained why the AI answered in the affirmative.
Pandora’s box of scientific publications created by AI
Scientific journals have long been in the eye of the storm over unrelated issues. Now they add the debate over whether or not to accept research carried out by Artificial Intelligence.
“Scholarly publishing may have to adapt to a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something non-sentient can take credit for some of their work,” Thunström pointed out in a publication.
Mainly because many researchers have already concluded that GPT-3 can go much further. It is capable of writing about anything proposed to it, well beyond its own existence. As Thunström later explained in her own paper (this time written by a human being), the legal and ethical questions raised by an article written by an AI are enormous. The first: the contact details for possible corrections or revisions of the text. The researcher explains that she had to provide her own telephone number and email address to get around this problem.
Problems that, they point out, could “open Pandora’s box.” In any case, the group leading this investigation is still waiting to find out whether the text created by the AI will finally be published. But they have hope, and for a very specific reason: if AI can produce complex research in 24 hours, the industry could no longer rely on the volume of published research for funding. That metric would become absurd when a paper could be produced every day.
— This article was automatically translated from its original language —