AI is all the rage, and researchers are using it in various ways to up their scientific game. But to be productive and to avoid risks with potentially substantial repercussions, institutions should provide training and guidance to researchers on using AI tools in their work. To do that, those supporting researchers – librarians and research managers – need to understand how and when researchers use AI, what responsible use looks like, and the potential risks.
In the second blog of his series on research training, Dr Jeffrey Robens, Head of Community Engagement and Lead Trainer at Springer Nature, shares his top tips on maximising the benefits of AI for researchers. This information can help librarians and research managers support their researchers in using AI responsibly.
AI is on everyone’s lips at the moment, and not surprisingly, given the huge strides that generative AI (GenAI) has made over the last couple of years. Travelling to various institutions around the world, however, I have seen a spectrum of reactions to the use of GenAI by researchers, from scepticism to enthusiastic adoption.
Are researchers using GenAI? Definitely! In a 2023 poll of over 3,800 postdocs worldwide, Nature found that many of them were already using AI chatbots in their work. And that number is likely higher now. In the workshops that we conduct, questions about GenAI are among the most common we receive, so there is clearly a lot of interest right now.
* A remark on nomenclature: In this post, I focus on text-generating GenAI models such as ChatGPT (also known as large language models, LLMs), unless referring specifically to image creation. There are various GenAI options available, including – but not limited to – ChatGPT, Gemini, and Claude. Which is best? I advise researchers to try a few and see which they prefer.
Training support for researchers on using GenAI
Despite substantial interest from researchers, there are currently very few opportunities for them to learn how to use GenAI effectively and responsibly. While the need for such support is clear, very few universities worldwide provide formal training and guidance.
There are courses and videos online that researchers can wade through in the hope of building their GenAI skills, but just finding these resources can be time-consuming. Many researchers will likely rely on trial and error when learning how to use GenAI, which could lead to inappropriate or inefficient use of this budding technology. At its worst, this could have repercussions for both the researchers and the institutions they work in.
To address this pressing need, institutions should take an active role in showing their researchers how to use GenAI for their research, and when to avoid using it. Institutions can consider developing in-house GenAI training for researchers or bringing in specialists to offer it. Librarians and research managers have an important role in supporting researchers, and this should now also include support on issues relating to GenAI use. As you plan and search for training opportunities for researchers on using GenAI, it is crucial to understand what GenAI can offer researchers, its benefits and risks, and how best to use it.
In the following, I will share some of my top tips on maximising the benefits of GenAI for researchers. These tips are based on my having personally taken numerous courses on GenAI from IBM, Google, and Vanderbilt University, along with many hours of testing different models in various scenarios.
When can GenAI be used effectively and responsibly by researchers?
AI and GenAI can be used for a broad spectrum of research tasks, from data analysis and synthesis to code generation and debugging, spreadsheet formula creation, and far more advanced use cases. Here I will focus on some of the broader uses of GenAI related to research communication and formulation that are not discipline- or topic-specific.
- Literature review: Busy researchers want to read as many articles related to their topic as they can, but have limited time to do so. Therefore, to stay up to date, many rely on reading abstracts instead. But while abstracts summarise the key findings, they do not convey the interpretations, limitations, and negative results that are also important when evaluating the validity of an article. Using GenAI to summarise articles can be of tremendous value.
Uploading articles (or providing the URLs if they are open access) and prompting the GenAI to summarise the key findings, along with key interpretations and any stated limitations and negative results, can give researchers a more complete picture of a study than the abstract alone (see the sketch after this list for one way this workflow could be scripted). As GenAI can sometimes “hallucinate,” it is the responsibility of the researcher to double-check any key points the GenAI mentions. But at least now the researcher knows what to look for!
- Brainstorming: GenAI can be used to help formulate topics and research questions for a future study, and to identify limitations and knowledge gaps that should be discussed in the Introduction or Discussion sections of a paper. Importantly, and as already mentioned, it is the responsibility of the researcher to validate any ideas they generate based on an interaction with GenAI before acting on them.
- Grammar and clarity: GenAI can do an excellent job of taking a poorly written text and improving it for publication. You just need to explain in the prompt the topic and the role of the text (for example, “the Introduction section of a research paper written for a specialised readership in materials science working on solid-state batteries”) for context, then ask the GenAI to improve the text’s grammar, structure, clarity, and readability. Although most GenAI models will not add new content in this case, it is always important to review the output to ensure everything is still accurate.
- Social media posts: Promoting published articles on social media has been shown repeatedly to increase their visibility, but many busy researchers simply don’t have the time. With an appropriate prompt and the text of the article (the PDF, or a link if it is open access), the GenAI can produce a strong social media post in a matter of seconds. The output generally still needs some light editing, but this takes minimal time, and the researcher now has a post they can use to boost their visibility!
- Preparing for Q&A sessions: Oh, the dreaded Q&A session at conferences. Researchers can rehearse their presentation over and over again, but they can never know what questions they will be asked. Getting insights from colleagues is important but can be limited. Using GenAI, a researcher can upload their script and ask it to generate ten difficult questions the audience may ask. This could shed light on angles the researcher hadn’t considered before.
Again, being very specific with the prompt will be crucial (for example, “based on the script below for my presentation on the influence of substrate rigidity on the dynamics of synaptic plasticity, what are ten difficult questions I may be asked by a broad audience of neuroscience researchers?”).
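To make the literature-review tip above more concrete, here is a minimal sketch of how such summaries could be generated programmatically rather than through a chat interface. It is illustrative only: it assumes the OpenAI Python SDK, an API key in the environment, an article already saved as plain text, and a placeholder model name; the prompt wording is just one example.

```python
# Illustrative sketch only: summarising an article with an LLM API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def summarise_article(article_text: str) -> str:
    """Ask the model for a summary that goes beyond the abstract."""
    prompt = (
        "Summarise the key findings of the article below, and explicitly list "
        "the authors' interpretations, any stated limitations, and any negative "
        "results. Flag anything you are unsure about.\n\n"
        + article_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # `article.txt` stands in for an open-access article already saved as plain text.
    with open("article.txt", encoding="utf-8") as f:
        print(summarise_article(f.read()))
```

As noted above, any summary produced this way still needs to be checked against the original article before it is relied on.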
When should researchers be cautious while using GenAI?
- Data privacy and security: Although the enterprise versions of ChatGPT and Copilot clearly state that entered data will not be used by the model for training or output, the basic versions that most people use do not. Therefore, I usually recommend not uploading unpublished data or ideas to GenAI models. This means that when using GenAI for grammar and clarity (see above), I would not upload text from the Results or Discussion sections of the paper. Better safe than sorry!
- GenAI-produced text and authorship: Researchers should not let GenAI write their paper for them. Text produced by GenAI is based on its training data, so full sentences may be copied from published articles, and using GenAI-produced text as is could lead to plagiarism issues. And because GenAI sometimes “hallucinates,” it is the responsibility of the researcher to go through every part of a GenAI-produced text and verify its accuracy, which would likely be more time-consuming than just writing the paper themselves. It is also important to remind researchers that GenAI cannot be listed as an author of a paper, because it cannot be held accountable for the published output, and accountability is a key criterion for authorship.
- GenAI-created images: While most publishers allow the use of GenAI for improving a text, images created by GenAI are usually not allowed for publication because of the risk of copyright infringement. Image-generating GenAI models are trained on millions of pictures and photos, some of which may have been used without the permission of the copyright owners, so images derived from this training data could themselves infringe copyright.
- Navigating bias in GenAI: Last but certainly not least, GenAI is trained on information that is publicly available. Because some of this information can be biased, the output from GenAI can also reproduce and perpetuate these biases. Again, it is the responsibility of researchers to always scrutinise any output carefully and ensure that it is as fair and unbiased as possible.
On the importance of prompts when using GenAI
GenAI can be a valuable tool for researchers, but it is important to use it carefully, intentionally, and thoughtfully. As repeatedly mentioned throughout this post, being specific with prompts will make GenAI more effective, and validating its output will ensure responsible use. Inadequate use of GenAI by researchers can lead to poorly written papers that reflect badly not only on the authors but also on their institutions, with potential reputational damage.
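As a small, hypothetical illustration of what “being specific” can look like in practice, the sketch below builds a structured prompt from a few pieces of context (the role of the text, the intended audience, and the task). The helper function and field names are my own and are not part of any particular tool; the resulting string could be pasted into any GenAI chat interface or sent through an API.

```python
# Hypothetical helper: turning a vague request into a specific prompt.
# The function and field names are illustrative, not part of any particular tool.
def build_prompt(task: str, text_role: str, audience: str, body: str) -> str:
    """Combine context and task into a single, specific prompt string."""
    return (
        f"You are helping with {text_role}, written for {audience}.\n"
        f"Task: {task}\n"
        "Do not add new content; keep all factual statements unchanged.\n\n"
        f"Text:\n{body}"
    )

# A vague prompt gives the model little to work with.
vague = "Improve this text."

# A specific prompt states the role of the text, the audience, and the task.
specific = build_prompt(
    task="Improve the grammar, structure, clarity, and readability.",
    text_role="the Introduction section of a research paper",
    audience=("a specialised readership in materials science "
              "working on solid-state batteries"),
    body="(paste the draft text here)",
)
print(specific)
```

The vague version leaves the model to guess the context, whereas the specific version spells out the role of the text, the audience, and the constraints, which is exactly the kind of detail discussed above.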
Learning the best ways to use GenAI is imperative for researchers nowadays, and it is important to offer them GenAI training that suits their needs and helps them build these essential skills. This is a new and evolving aspect of librarians’ and research managers’ support for their researchers.
Don't miss the latest news & blogs, subscribe to The Link Alerts!