Recommendations for the use of generative AI within scientific work
Generative artificial intelligence (GSI) has a vast range of applications in scientific work, spanning virtually every discipline. The tasks for which GSI can be used are countless, and new ideas constantly emerge about how it can support research. However, it is essential to remember that the use of GSI in science raises several ethical and legal challenges; those working in science must therefore take full responsibility for GSI-generated material and be transparent about their use of the technology.
This page presents the fundamental ethical and legal recommendations for the use of GSI in scientific work. It also offers a selection of examples of the constructive use of GSI in scientific activities, which can have a positive impact on scientific progress. It must be made clear, however, that these examples refer to the supportive use of artificial intelligence in scientific work, in which the generated content is always subject not only to revision but also to further elaboration by the researcher. The page concludes with a list of publications worth reading before using GSI in scientific work.
Ethical recommendations
The ethical use of GSI tools in research and scientific communication requires, above all, awareness of the limitations of these tools: bias, errors, knowledge gaps and the risk of plagiarism.
According to the recommendations of internationally recognised publication ethics bodies such as the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME) and the JAMA Network, the following points require particular attention in order to maintain appropriate standards in science:
- GSI tools may provide false information and fabricate bibliographies, including attributing non-existent publications, abstracts and URLs to real people (so-called hallucinations). Such information can sound linguistically convincing, which makes it difficult to detect.
- Text generated by GSI tools should be treated as a draft: it must be carefully reviewed for possible plagiarism, all references to the literature must be verified, and authorship must be attributed correctly.
- GSI tools may give biased information, present incomplete or misinterpreted facts, and create a distorted, incomplete or outdated picture.
- GSI tools may lack contextual understanding and only imperfectly mimic human comprehension of written text (notably metaphorical language, sarcasm, humour and idiomatic expressions).
- Researchers take full responsibility for the accuracy, originality and integrity of their work, including all content produced by GSI tools.
- GSI tools, including chatbots, cannot be cited as co-authors of papers as they do not meet the criteria for authorship.
- Check the publisher's policy on the permissible extent of GSI use in writing scientific papers. Generating images with GSI tools is generally unacceptable unless the images themselves are the subject of the paper.
- Every use of GSI tools in a work should be documented honestly and accurately, including the name of the tool, the purpose and scope of use, and the prompts relevant to the work.
- For scientific work (research, publishing), use GSI tools that have been developed or adapted for these exact purposes and that offer model adaptations such as reduced hallucinations, increased output reliability, and restriction of the underlying data to reliable sources or scientific databases (e.g. PubMed).
Legal recommendations
- When using GSI in research, it is crucial to evaluate legal aspects such as intellectual property protection and data protection.
- Working with GSI may involve the risk of infringement of intellectual property rights at two stages: (1) the formulation of a command (prompt) for the GSI system and (2) the use of content generated by the GSI.
- When formulating a prompt, pay attention to the information and files made available to the GSI system (this also applies to AI detectors). Depending on the tool used, this information and these documents may be used by the service provider for further development of the GSI system and may influence content generated in the future for other users. For this reason, GSI-generated content may resemble existing works or even include excerpts from them (cf. the New York Times v OpenAI lawsuit).
- It is equally crucial to be aware of data protection rules, notably when using GSI systems via the Internet. A conversation with a GSI system usually involves transferring to the service provider all the information we include in the prompt, including image, video, voice and other files. For this reason, content should be appropriately anonymised, e.g. by removing personal names, place names, dates and other passages that could identify a specific person.
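The anonymisation step described above can be sketched in a few lines of code. The redaction patterns below are illustrative assumptions only; a real workflow would need patterns tuned to the language and domain of the data, or a dedicated de-identification library:

```python
import re

# Illustrative redaction patterns -- assumptions, not a complete solution.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}[./-]\d{1,2}[./-]\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Naive name pattern: two consecutive capitalised words.
    "NAME": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),
}

def anonymise(text: str) -> str:
    """Replace potentially identifying passages with placeholder tags
    before the text is sent to an external GSI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Interview with Anna Kowalska on 12/05/2023, contact: a.kowalska@example.org"
print(anonymise(prompt))
# -> Interview with [NAME] on [DATE], contact: [EMAIL]
```

Even with such filters, redacted prompts should be reviewed manually before being sent, since pattern-based redaction inevitably misses indirect identifiers.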
Examples of use
- Literature studies
- Literature search (e.g. using GSI to search academic journal databases, library catalogues and repositories for existing publications related to a particular topic)
- Searching collected articles (e.g. using GSI to extract critical information from extensive collections of articles or to search for common themes, trends and hidden relationships)
- Coding articles for meta-analyses (e.g. using GSI to automatically categorise articles or group their content based on specific characteristics)
- Checking the innovativeness of ideas on selected problems (e.g. using GSI to help assess the originality of research ideas based on an analysis of available publications)
- Generating working hypotheses or questions in a specific problem field (e.g. using GSI to formulate new research questions based on existing solutions)
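As a minimal illustration of the article-coding task listed above, the sketch below assigns abstracts to thematic categories by keyword matching. The category names and keywords are hypothetical; in practice a GSI model would handle far subtler cases, and every automatic assignment would still need to be verified by the researchers:

```python
# Hypothetical coding scheme for a meta-analysis: category -> indicative keywords.
CODING_SCHEME = {
    "qualitative": {"interview", "ethnography", "focus group"},
    "quantitative": {"regression", "survey", "randomised"},
}

def code_abstract(abstract: str) -> list[str]:
    """Return every category whose keywords appear in the abstract.
    An empty list means the article needs manual coding."""
    text = abstract.lower()
    return [cat for cat, words in CODING_SCHEME.items()
            if any(w in text for w in words)]

abstracts = [
    "A randomised survey of 500 students...",
    "Semi-structured interview study of hospital staff...",
]
for a in abstracts:
    print(code_abstract(a))
# -> ['quantitative']
# -> ['qualitative']
```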
- Preparation of the study
- Development of stimulus elements (e.g. use of GSI in the creation of visual or narrative content such as graphics, videos and stories)
- Adaptation of stimuli and language for the target group (e.g. use of GSI when translating or adapting content to the specifics of the target group)
- Creating 'items' for questionnaires and scales of research tools (e.g. using GSI to adapt questions or statements)
- Searching research datasets and harmonising them (e.g. using GSI to search research data repositories and subject heterogeneous datasets to preliminary analysis)
- Study simulation (e.g. using GSI to simulate different experiment scenarios and search for the optimal one based on an analysis of similar studies reported in the literature)
- Threat identification (e.g. using GSI in the assessment process of potential risks, errors or ethical issues related to a specific study)
- Analysis of research data
- Programming support (e.g. use of GSI to generate, debug and check code fragments for data processing and statistical analysis)
- Collected text analysis (e.g. use of GSI in the process of categorising material or searching for commonalities in analysed statements)
- Quantitative data analysis (e.g. use of GSI in analysing large data sets and detecting hidden relationships or dependencies)
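The programming-support scenario above assumes that researchers never run GSI-generated code blindly. A practical habit is to wrap any generated fragment in small sanity checks against cases with a known answer before applying it to real data. The function below (a plain Pearson correlation) stands in for whatever analysis code the tool was asked to produce:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient -- the kind of helper a GSI tool
    might generate, which should still be checked against known cases."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Sanity checks on inputs with a known result before trusting the code.
assert abs(pearson_r([1, 2, 3], [2, 4, 6]) - 1.0) < 1e-9   # perfect positive
assert abs(pearson_r([1, 2, 3], [6, 4, 2]) + 1.0) < 1e-9   # perfect negative
```

Such checks do not prove the generated code correct, but they catch the gross errors that GSI tools most often introduce, and they document the researcher's verification of the output.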
- Working on a publication or grant application
- Proofreading and editing your text (e.g. using GSI to improve grammar, style or punctuation, to identify potential errors and inaccuracies, and to check the coherence of your text)
- Paraphrasing selected parts of the text (e.g. using GSI to reformulate sentences and paragraphs or to adapt them to the specific audience of the publication)
- Text translation (e.g. use of GSI to generate an automatic translation, subject to further editing)
- Generating alternative text versions (e.g. using GSI when writing a version of a text adapted for non-specialists)
- Popularisation
- Support for creating popular scientific content (e.g. using GSI to prepare summaries of scientific findings in a form understandable to the general reader)
- Support in creating content for social media (e.g. use of GSI in the creation of accessible and attractive popular-science content, such as posts or infographics that convey key information in an engaging way)
Worth reading before you start using GSI in your scientific work
- Conroy, G. (2023). How ChatGPT and other AI tools could disrupt scientific publishing. Nature, 622, 234-236.
- Demszky, D., Yang, D., Yeager, D.S., Bryan, C.J., Clapper, M., Chandhok, S., ... & Pennebaker, J.W. (2023). Using large language models in psychology. Nature Reviews Psychology, 2(11), 688-701.
- Ganjavi, C., Eppler, M.B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G.S., ... & Cacciamani, G.E. (2024). Publishers' and journals' instructions to authors on the use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ, 384, e077192.
- Hutson, M. (2023). Hypotheses devised by AI could find 'blind spots' in research. Nature.
- Park, J.Y. (2023). Could ChatGPT help you to write your next scientific paper?: Concerns on research ethics related to the usage of artificial intelligence tools. Journal of the Korean Association of Oral and Maxillofacial Surgeons, 49(3), 105-106.
- Why scientists trust AI too much - and what to do about it. (2024). Nature, 627, 243.