Last year, surprising names appeared on the list of Nobel Prize winners in chemistry: Demis Hassabis and John Jumper of Google DeepMind. The two were awarded the prize for developing AlphaFold, an AI-based protein structure prediction model that has made significant contributions to long-standing challenges in biology and chemistry, illustrating that AI is now capable of entering the profound domain of scientific creativity.
AI Has Entered the Creative Domain of Scientific Research
As mentioned, AI is now involved not only in the repetitive tasks of science and engineering but also in the creative processes of research. Beyond predicting protein structures, for example, it can be used to discover new drug candidates, optimize experimental designs, and even draft research papers.
However, this raises a critical question: should these AI models (LLMs) be listed as mere methods, under the “Methods” section of a research paper, or do they deserve to be credited as co-authors? To dive deeper into this issue, GIST News conducted a survey on how researchers at GIST integrate generative AI into their research processes and, most importantly, how they perceive its contributions. The survey ran for three days, from April 8 to April 10, 2025, and a total of 48 GIST researchers (undergraduate and graduate students, postdoctoral researchers, and faculty) participated.
90% of GIST Researchers Use Generative AI
Among the respondents, 43 (89.6%) currently use generative AI in their labs, while 2 (4.2%) do not and the remaining 3 (6.3%) were unsure. In terms of frequency, 22 respondents (45.8%) reported using AI “almost daily”, while 12 (25.0%) selected “1–2 times per week”. Another 4 respondents (8.3%) answered “occasionally when needed”, and 2 (4.2%) answered “not at all”.
70% of GIST Researchers Acknowledge AI’s Creative Contribution
Overall, the most common purpose for using AI was “drafting, summarizing, and translating research papers” (16 respondents, 33.3%). Others used AI as a creative analytical tool, for purposes such as “experimental data analysis” (10 respondents, 20.8%), “experimental design optimization” (9 respondents, 18.8%), and “protein structure prediction and molecular design” (4 respondents, 8.3%).
These results suggest that GIST researchers regard generative AI (LLMs) not just as a tool but as a creative partner. When presented with the statement “AI can make creative contributions”, 24 respondents (50.0%) “somewhat agreed” and 10 (20.8%) “strongly agreed”; in total, approximately 70.8% of respondents acknowledged AI’s growing creative potential and contributions. On the other hand, 11 respondents (22.9%) were “not sure”, 2 (4.2%) answered “not really”, and the remaining 1 (2.1%) answered “not at all”.
75% of GIST Researchers Oppose Listing AI as a Co-author
However, the survey also showed that the majority of GIST researchers remain cautious about naming generative AI as a co-author. In total, 36 respondents (75%) did not support the idea: 30 (62.5%) opposed it outright, while the remaining 6 (12.5%) were undecided.
Respondents cited several reasons for this judgement, such as “AI cannot bear legal responsibility” and “AI cannot properly assess the accuracy of research findings”. Ultimately, only 20 respondents (41.7%) were supportive, or conditionally supportive, of listing AI as a co-author, and even they agreed only on the condition that human researchers retain full supervision and responsibility.
The most commonly cited reason for opposing was “unclear responsibility” (14 respondents, 29.2%), followed by “ambiguity in the definition of creativity” (12 respondents, 25.0%), “ethical/legal issues” (11 respondents, 22.9%), “conflict with existing authorship guidelines and standards” (9 respondents, 18.8%), and “technological limitations” (8 respondents, 16.7%). These answers reflect researchers’ concern that while AI may generate knowledge, it lacks the ability to take responsibility or exercise judgement. In particular, “unclear responsibility” was selected by many respondents (multiple answers were allowed), as there remains an ethical gray area over who should ultimately bear responsibility for errors and misinterpretations made by AI.
One respondent stated that “no matter how sophisticated AI becomes, without true ‘intent’ and ‘accountability’, it shouldn’t be granted status equal to a human’s”, highlighting how some researchers still view AI as a simple tool. Another remarked that “AI won’t raise questions or solve problems on its own unless it is prompted”, pointing to AI’s lack of autonomy. Other respondents expressed that “even if the AI’s outputs are accurate, interpreting the research as a whole and taking overall responsibility must lie with humans.”
Global Debate on AI Co-authors
As AI’s contribution to scientific research grows, the global scientific community has also begun debating whether AI should be listed as a co-author.
For instance, in January 2023, Nature declared that “Large Language Models (LLMs) such as ChatGPT do not meet authorship criteria”. This judgement rests on the principle that authorship implies accountability, which AI cannot fulfill. Instead, the use of AI must be disclosed in the “Methods” section of a paper (or another appropriate section). In doing so, the journal reaffirmed that co-authorship requires not just contribution but also ethical responsibility and reproducibility.
In March of the same year, the editor-in-chief of Science, Holden Thorp, wrote in an editorial that “using ChatGPT-generated text in papers should be prohibited”, warning that violations of this rule could be considered plagiarism or scientific misconduct. Science maintains even stricter AI policies: AI cannot be listed as an author or co-author, and AI-generated content cannot be cited. If AI was used in the research process, the details (such as the tool’s name, model version, and prompts used) must be disclosed in the cover letter and the acknowledgements section. Authors are expected to take full responsibility for the accuracy, citation, and potential bias of AI-generated content, and manuscripts found to have used AI inappropriately may be rejected. Science also prohibits reviewers from using AI to write peer reviews, as this could violate manuscript confidentiality. Finally, any AI-generated images or multimedia require explicit approval from the editors, though exceptions may be granted for papers directly about AI. Science has stated that “the rules on AI-generated content may be adjusted to consider future changes in copyright law and ethical standards”.
As shown, the global discussion emphasizes that no matter how significant AI’s contribution may be, authorship must still meet the standards of responsibility and verifiability, and whether AI is truly capable of fulfilling such criteria remains in question.
New Ethics for a New Age of Science
Overall, the survey conducted on campus revealed that a majority of GIST researchers already perceive AI not just as a tool but as a collaborative partner in creative research. At the same time, it pointed to an inevitable need to redefine the philosophical and ethical standards for recognizing AI’s contributions. AI now assists scientific research in increasingly refined ways, such as helping with research planning and theoretical formulation, stepping ever deeper into the domain of creativity.
As always, technological advancements raise new ethical questions that we must confront. This time, we have to ask ourselves: “Is AI merely a tool, or a fellow researcher?” This question goes beyond a simple technical judgement, calling for societal reflection on how we, as a community, should define and accept the evolving concepts of “creativity” and “contribution”. Perhaps the answer lies not in the technology itself, but in the collective choices and agreements of our ever-changing community.
Translated by Yoonseo Huh