Generative AI Meets Scientific Publishing

Melanie Padgett Powers 

Will tools like ChatGPT help or impede the process of disseminating reliable, peer-reviewed scientific information? It’s complicated …

[Image: Getty Images]

Seemingly overnight, ChatGPT captured the zeitgeist on its release at the end of November 2022, providing the first mainstream look at generative artificial intelligence (AI). In the eyes of many, generative AI currently is in its “Wild West” phase, with the potential to disrupt nearly every industry—and with no federal regulations and few industry guidelines. This technology, which isn’t as new as it may seem, holds both great promise and great risk.

Abhigyan Arun—the CEO of TNQ Technologies, a global firm providing technology products to scholarly publishers as well as to authors—thinks generative AI has the power to disrupt society much like the internet has over the past four decades.

“The internet changed the way the society operated; the internet changed the way businesses operated, how communication happens,” Arun says. “The internet led to globalization of the entire world. … Likewise, generative AIs have the same capability. Democratization of information, democratization of knowledge, is perhaps the biggest disruption that will change the way society operates, but then it will also change the way business operates. That’s why we think it is a huge, huge disruption.”

Of risks and benefits

Optica is following the development of generative AI and how these new tools may affect scientific publishing and communication.

Optica publishes 19 journals—11 of which are wholly owned and another 8 of which are produced in partnership with other publishers. In March, Optica’s Board of Editors discussed AI and authorship, adding language to the society’s ethical guidelines for publishing that states these tools cannot be listed as authors because they “cannot accept the varied responsibilities required of authors.” The guidelines also now state that the use of AI or large language models (LLMs) must be disclosed in the paper, along with the type of tool and how it was used.

“In publishing, if I submit a paper or manuscript to report to the public, I need to be accountable for the results,” says Ruikang “Ricky” Wang, editor-in-chief of the Optica journal Biomedical Optics Express. “So in that case, ChatGPT needs to be used with very much caution.”

At the same time, Kelly Cohen, senior publisher of Optica Publishing Group, says her team recognizes scientific publishing could gain from these tools—if they’re used properly. “We don’t want to be jumping to conclusions and assuming the worst of these tools, because we do feel like there are a lot of opportunities for these tools to enhance productivity and assist authors, especially those whose first language isn’t English, and maybe even inspire research,” Cohen says.

[Image: Getty Images]

The generative-AI difference

Nearly all of us use AI every day without a second thought. It’s used to personalize what we see in our social media feeds and the TV shows recommended by our streaming services. Smartphone navigation apps use AI to provide us the best driving route. And customer service chatbots and personal assistants like Apple’s Siri and Amazon’s Alexa use AI.

Generative AI, however, is a type of machine learning that generates content, including text, images, audio and video. It does not create original text, but “predicts” text based on the content that it’s trained on. Text-generating tools such as ChatGPT are built on LLMs.

“The way I like to describe it is Wordle on steroids,” says Avi Staiman, founder and CEO of Academic Language Experts, an author-services company for English-as-a-second-language scholars. “Most LLMs are just very intelligent word predictors. So they can tell you in a sentence what is most likely to be the completion of that sentence based on your prompt, and it is quite strong when it comes to human language and imitating human language and imitating human dialogue.”
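To make the “word predictor” idea concrete, the toy sketch below (in Python) builds a crude next-word predictor from word-pair counts over a few sentences. It is a deliberately simplified illustration, not how GPT-class models are actually built; real LLMs use neural networks trained on vast corpora, but the basic loop of picking a likely next word given what came before is the same.

```python
# Toy illustration of "next-word prediction": count which word tends to
# follow each word in a tiny corpus, then complete a prompt by repeatedly
# choosing the most frequent follower. Not how production LLMs work.
from collections import Counter, defaultdict

corpus = (
    "generative ai predicts the next word . "
    "the next word is chosen from patterns in the training text . "
    "the training text shapes what the model predicts ."
).split()

# followers[w] maps each word to a Counter of the words seen right after it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def complete(prompt_word, length=6):
    """Greedily extend a prompt by always taking the most common next word."""
    words = [prompt_word]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # prints a continuation stitched from the tiny corpus
```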

The use of ChatGPT—which reached 100 million users two months after its November 2022 launch—likely exploded because it’s free and easy for ordinary, nontechnical people to use. OpenAI, the company that created ChatGPT, released a new model, GPT-4, in March. (GPT stands for Generative Pre-trained Transformer, an LLM trained on internet data.) Generative-AI tools that create images include Midjourney and DALL-E.

Potential impacts on peer review

In scholarly publishing, one of the primary areas generative AI has the potential to affect is the peer review process. “Reviewers for our journals receive points for completing a review, and they can build up their points to redeem for Optica membership or open-access journal fee discounts,” Cohen explains. “So, there’s an incentive to be very involved as a reviewer, and being able to use a tool like this could allow them to complete more reviews.” But can reviewers rely on generative AI to produce reviews that are accurate and credible?

Not yet, Cohen and others say. For now, tools such as ChatGPT should not be used to conduct peer review. Besides the issue of accuracy, manuscripts in the peer review process are considered confidential documents that should not be shared—and generative-AI tools often store the data you input, if only for training purposes. Before using any of these tools, it’s important to know how the data will be used and what the privacy policies are, says Scott Dineen, Optica Publishing Group’s senior director of production and technology.

Down the road, improving the efficiency of peer review could incentivize more researchers to volunteer as reviewers, Staiman believes. “Traditionally, publishers do not pay reviewers, or reviewers are doing this as part of their job but it’s not really high priority for them,” he says. “So, if there’s a way to make that process more efficient and easier and to find incentives for reviewers to actually review papers in a timely fashion, then you can have good research come out quickly.”

[Image: Getty Images]

The authorship question

Like Optica Publishing Group, several publishers have issued new guidelines stating that a generative-AI tool cannot be an author on a paper. The initial focus on authorship came about when the integrity of the peer review process was bruised by a group of researchers in England. According to an article in The Guardian, the group submitted a paper entirely written by ChatGPT to an education journal, to showcase the quality of the tool’s writing. Four peer reviewers, who assumed the paper was written by humans, approved it for publication. Only then did the researchers alert the journal editors that ChatGPT had written the paper.

But Staiman says focusing solely on authorship is the wrong approach. “I think publishers went into full-fledged panic. … I don’t have a problem with them banning it. My issue was that you’re running to deal with a few either bad apples or just folks who are trying to test the limits—which I think is an interesting thought exercise—but it’s not terribly practical to worry about this as the main issue because that’s not what 99% of researchers are asking themselves.”

Instead, he says, authors want to know how they can use generative AI to be more efficient. The deeper issues are how to regulate and educate about its use in research. In a March post in The Scholarly Kitchen, a blog hosted by the Society for Scholarly Publishing, a trade association, Staiman asked: “Can or should authors use ChatGPT or other AI tools in the development and writing of their own research writing? Can it be used for conducting a literature review? What about analyzing results? Or maybe for drafting an abstract from an existing article?”

Some authors already use tools such as Grammarly or hire medical writers to ghostwrite their papers. Does it matter who actually writes the paper if the science is sound and the researchers take responsibility for the reliability and validity?

“Yes, there’s an inherent value to learning how to write because I think that we learn how to express ourselves through writing,” Staiman says. “But do I think that for every single study for every single scientist in every single field, we need to be checking whether they’ve written every single word? I don’t think that’s something that should bother us.”

Improving accessibility

One potential benefit that’s been advanced for generative AI is improved accessibility and inclusivity, particularly among those who are not fluent in English.

It’s been estimated that somewhere between 90% and 98% of scientific research is currently published in English. Researchers in non-Anglophone countries often already use translators, ghostwriters or services like Staiman’s to write their papers in English. However, not all researchers in developing countries can afford to do this, Arun says. “And the end result could be a poorly written manuscript that is rejected not because of the science in it, but just because it is not intelligible,” he says. “LLMs will make that a level playing field.”

Wang agrees: “I think there’s an opportunity for non-English speakers to disseminate more of their work to the public, and it can be used for that. But they also need to be responsible for their translation output from ChatGPT.”

Translation is perhaps one of the “most impactful contributions” of generative AI, says Sina Farsiu, an Optica Member and a deputy editor for Biomedical Optics Express who develops and uses AI tools for his research in optical image processing. AI-enabled translation can help non-English speakers read scientific papers written in English, and non-English speakers can use AI to translate their work to English for submission. “But, note that all those concerns stay the same,” Farsiu says. “So, in the process of translation, if it’s not supervised, and AI introduces plagiarism or changes facts … it’s a chicken-and-egg problem. [The author] cannot speak English, so they cannot check to see if AI has produced a faithful translation of their work.”

Arun believes that AI tools could also serve as a stand-in for human mentors—access to whom, he says, is a challenge for early-career scientists in developing countries. “Young researchers who have just started their research need a mentor who can help [them] understand: What should be the right area of research? What are the right questions to ask for your thesis? Who else is working on that research, allowing them to collaborate better with the wider community? … Proper use of ChatGPT can support young researchers in [discovering] that.”

Accessibility would not be limited to authors. Generative AI could help both researchers and laypersons better understand science. For interdisciplinary fields like light science and technology, generative-AI tools could explain or summarize wonky academic papers from a related discipline, helping an individual’s own research evolve.

In addition, Dineen believes that professors, students, laypersons and organizations like Optica could use generative-AI tools to “rewrite” academic writing into a more accessible and easy-to-understand explanation of the science. “The ability of ChatGPT to summarize difficult, sometimes poorly written, content in a way that you get a pretty clear sense, even as a non­scientist, what the big deal is, what the main point is, [is powerful],” he says. “And if you still don’t understand, you can ask ChatGPT to explain it in a different way.”

Dineen also sees the potential for students trying to understand a difficult topic to use generative AI as a sort of online tutor, rewording academic concepts, explaining scientific papers and testing individuals on whether they’ve grasped a concept.
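As a concrete illustration of the kind of use Dineen describes, below is a minimal sketch that asks a chat model to restate an abstract in plain language. It assumes the openai Python package (version 1 or later) and an API key set in the environment; the model name and prompt wording are illustrative choices rather than recommendations, and the confidentiality and copyright caveats discussed elsewhere in this article still apply, so only submit text you have the right to share.

```python
# A minimal sketch of the plain-language "rewrite" use case, assuming the
# openai Python package (v1+) and an OPENAI_API_KEY in the environment.
# The model name and prompts are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = """Paste the abstract of a difficult paper here
(only text you have the right to share)."""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You explain scientific abstracts to non-specialists."},
        {"role": "user",
         "content": "Summarize the main point of this abstract in plain "
                    "language, in three sentences or fewer:\n\n" + abstract},
    ],
)

print(response.choices[0].message.content)
# If the summary is still unclear, send a follow-up message in the same
# conversation, e.g. "Explain it a different way, using an everyday analogy."
```

The same pattern, with different prompts, covers the tutoring idea as well: ask the model to reword a concept and then to quiz you on whether you have grasped it.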

Copyright conundrums

However, right now, using ChatGPT in such a way brings up ethical and copyright issues. Jennifer Freer, a reference librarian at the Rochester Institute of Technology, USA, says the copyright to most published research is held by the journal or company that published it, not by the authors. Libraries and other organizations pay a subscription fee to license access to the full text of articles. Some database vendors include clauses in their contracts with libraries that either prohibit data mining or require a separate addendum that defines how data mining can be used.

This means a user can’t legally take content from one database and put it into another database, like ChatGPT, and Freer has not seen any vendors willing to give permission to AI-type platforms. In fact, LexisNexis, a data and analytics company focused on legal documents, recently emailed customers to “remind” them that “our Agreements do not permit you to use or upload content you receive through LexisNexis into external applications, bots, software or websites, including those using artificial intelligence technologies such as large language models and generative AI.”

Instead, some commercial service providers may be developing their own revenue-producing generative-AI tools to explain content in the databases they lease. LexisNexis is doing just that. In May, the company announced the launch of Lexis+ AI, which uses LLMs, including GPT-4, to search, summarize and draft content from LexisNexis. More such tools may be developed and made available, likely with a price tag, and these might help address the current copyright issues.

For now, Freer sees the value of people using generative-AI tools for their own research and content. For example, a student or professor could use their own paper to create a digestible slide presentation. And without writing a paper for a student, ChatGPT could develop a list of keywords to search around a topic and create outlines to get a writer started on a project.

[Image: Getty Images]

From plagiarism to “hallucinations”

For all the potential of generative AI in scientific publishing and communication, there are also clear risks. These are early days, and some of those risks might be addressed and die out—but others could pop up in their place. Fifteen years ago, it would have been difficult to imagine the increasingly well-documented effects of social media on the US election process, mental health and the spread of hate speech.

One clear risk on the generative-AI side is the issue of plagiarism and copyright infringement. ChatGPT’s answers are generated from patterns in billions of webpages created before 2022. Who owns that content?

ChatGPT wasn’t created to provide citations, and it’s difficult to ascertain the specific sources behind any given response. In July 2023, the comedian Sarah Silverman and the authors Christopher Golden and Richard Kadrey filed separate copyright-infringement lawsuits against OpenAI and Meta (the owner of Facebook, Instagram, Threads and WhatsApp). The plaintiffs claim they didn’t give permission for the companies to use their books for AI training datasets and allege that their books’ content was obtained through illegal online “shadow libraries,” according to the tech news site The Verge.

Furthermore, generative AI could provide the same responses to similar queries. If users shared those responses as their own—say, in a presentation or blog post—it could be embarrassing, at the very least, to discover that the same response had also been published by someone else.

“This wouldn’t have happened if I had written it because it’s impossible for me to exactly remember and replicate the paragraph that I have read somewhere else. Relying on the output from ChatGPT could result in unintentional instances of plagiarism,” Farsiu says. “It’s an open issue of how you can utilize these tools for publication.”

Another much-discussed current risk with ChatGPT is “hallucinations”—plausible answers from the tool that turn out to be completely false. This can include seemingly valid citations, written in the proper format and citing real journal titles, that do not exist. In such cases, users can begin to view ChatGPT as comparable to someone who makes up answers or lies rather than just saying, “I don’t know.”

“People anthropomorphize it to death,” Dineen says. “They say it tries to please the person asking the question, and it does seem to do that.”

Farsiu has experienced this firsthand. “I have played with this a little bit in ChatGPT,” he says. “I have prompted the same question from different directions and sometimes it insists … or states the wrong information with high confidence.” He believes generative-AI developers will eventually solve this problem, but it’s something to be aware of in the short term. “The user should be very careful when they utilize the information that is generated, by double-checking with the sources that they can find directly.” Or, as Freer puts it: Trust but verify.
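One practical way to apply Freer’s “trust but verify” advice to AI-suggested references is to check each citation against an external source of record before reusing it. The sketch below queries the public Crossref REST API for each DOI; the DOIs listed are placeholders, and a successful lookup only confirms that a work with that DOI is registered, not that it actually supports the claim being cited, so reading the source directly is still necessary.

```python
# Check suspect DOIs against the public Crossref REST API
# (https://api.crossref.org/works/<doi>). A 200 response means the DOI is
# registered; a miss suggests the citation may be hallucinated.
# The DOIs below are placeholders, not real references.
import requests

suspect_dois = [
    "10.1000/example.doi.1",  # hypothetical DOI from an AI-generated reference
    "10.1000/example.doi.2",
]

for doi in suspect_dois:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        title = (resp.json().get("message", {}).get("title")
                 or ["(no title recorded)"])[0]
        print(f"{doi}: found in Crossref -> {title}")
    else:
        print(f"{doi}: not found -- the citation may be hallucinated")
```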

Still another concern is that the large datasets generative-AI tools are trained on include biases—stereotypes, falsehoods, subtle language choices—that can be perpetuated in the tools’ responses. And marginalized voices are, almost by definition, underrepresented in the data that LLMs are trained on.

Looking to the future

Most people were introduced to generative AI through ChatGPT, but the technology is advancing rapidly. We will continue to see different types of applications, with different functions and purposes, built on generative-AI platforms.

Optica is active in publishing-industry organizations such as the Society for Scholarly Publishing and the Committee on Publication Ethics, which Cohen says are helping academic publishers stay connected to the AI conversation and abreast of what others in the space are doing.

“This has become the topic of 2023 at all the conferences taking place this year,” she says. Her team is also asking journal editors to share how these tools are being used in their own research groups. “We can learn from that experience, too. And since we’re a global organization, we’ve benefited from getting input from people all over the world, and it’s not just US-based.”

As users begin to explore generative AI, Freer says it can be easy to get bogged down and overwhelmed trying to understand how GPTs and LLMs work. But she recommends not worrying about understanding the technology behind these tools, and instead focusing on the use cases, knowing the hazards and exploring the possibilities. “It’s OK to not know how your car engine works,” she says, “because you do understand how to use the car.”

Arun believes that everyone, no matter their role, should be thinking about generative AI. “We would like for people to be curious about it,” he says. “Don’t leave this exploration to anyone else, because everybody’s work function is getting impacted.”

He encourages people to explore the technology in a low-risk way. “Only if you get your hands dirty will you know the strengths and weaknesses and be able to extract opportunities.”


Melanie Padgett Powers is a freelance writer in Silver Spring, MD, USA.

Publish Date: 01 October 2023
