{"id":6943,"date":"2025-05-31T18:27:16","date_gmt":"2025-05-31T09:27:16","guid":{"rendered":"https:\/\/gistnews.co.kr\/?p=6943"},"modified":"2025-05-31T18:28:25","modified_gmt":"2025-05-31T09:28:25","slug":"can-ai-be-research-co-authors","status":"publish","type":"post","link":"https:\/\/gistnews.co.kr\/?p=6943","title":{"rendered":"Can AI be Research Co-authors?"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">Last year, surprising names appeared on the Nobel Prize winners list for chemistry; Demis Hassabis and John Jumper of \u201cGoogle Deepmind\u201d. Specifically, the winners were awarded for developing \u201cAlphaFold\u201d, an AI-based protein structure prediction model that yielded significant contributions to solving challenges in the complex field of biology and chemistry, illustrating how generative AI (LLM) is now capable of entering the profound domain of scientific creativity.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>AI has Entered the Creative Domain of Scientific Research<\/b><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0As mentioned, AI is now involved not only in repetitive tasks in science and engineering, but also in the creative processes of research. For example, apart from its role in predicting protein structures, it can also be used to discover new drug candidates, optimize experimental designs and even draft research papers.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0However, this raises a critical question; should these AI (LLMs) be listed as mere methods (under the \u201cMethods\u201d section of a research paper) or are they worthy of being credited as the co-author? To dive deeper into this issue, <\/span><i><span style=\"font-weight: 400;\">GIST News<\/span><\/i><span style=\"font-weight: 400;\"> has conducted a survey on how researchers at GIST integrate generative AI into their research processes, and most importantly, how they perceive its contributions. Overall, the survey was conducted for three days (from April 8, 2025 to April 10, 2025) where a total number of 48 GIST researchers (including undergraduate\/graduate students, postdoctoral researchers and faculty) participated.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>90% of GIST Researchers Use Generative AI<\/b><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0It was discovered that among respondents, 43 of them (89.6%) currently use generative AI in their labs, whereas 2 researchers (4.2%) did not and the remaining 3 respondents (6.3%) were unsure. In terms of AI use frequency, 22 respondents (45.8%)\u00a0 answered that they used AI \u201calmost daily\u201d, whereas 12 respondents (25.0%) selected \u201c1~2 times per week\u201d. Additionally, 4 of the respondents (8.3%), with the remaining 2 respondents (4.2%), respectively answered \u201coccasionally when needed\u201d and \u201cnot at all\u201d.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>70% of GIST Researchers Acknowledge AI\u2019s Creative Contribution<\/b><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0Overall, the most common purposes for using AI was \u201cdrafting, summarizing and translating research papers\u201d (16 respondents, 33.3%). 
Others used AI as a creative tool for analysis, such as "experimental data analysis" (10 respondents, 20.8%), "experimental design optimization" (9 respondents, 18.8%), and "protein structure prediction and molecular design" (4 respondents, 8.3%).

These results suggest that GIST researchers regard generative AI (LLMs) not just as a tool but as a creative partner. When presented with the statement "AI can make creative contributions," 24 respondents (50.0%) "somewhat agreed" and 10 (20.8%) "strongly agreed," so approximately 70.8% of respondents acknowledged AI's growing potential for creative contribution. On the other hand, 11 respondents (22.9%) said they were "not sure," 2 (4.2%) responded "not really," and the remaining 1 (2.1%) responded "not at all."

75% of GIST Researchers Oppose Listing AI as a Co-author

However, the survey also showed that the majority of GIST researchers remain cautious about naming generative AI as a co-author. Of the 36 respondents (75%) who did not support the idea, 30 (62.5%) opposed it outright, while the remaining 6 (12.5%) were undecided.

Several reasons were given for this judgement, such as "AI cannot bear legal responsibility" and "AI cannot properly assess the accuracy of research findings." Ultimately, only 20 respondents (41.7%) were supportive, or conditionally supportive, of listing AI as a co-author, and even then only under the precondition that human researchers retain full supervision and responsibility.

The most frequently cited reason for opposing was "unclear responsibility" (14 respondents, 29.2%), followed by "ambiguity in the definition of creativity" (12 respondents, 25.0%), "ethical/legal issues" (11 respondents, 22.9%), "conflict with existing authorship guidelines and standards" (9 respondents, 18.8%), and "technological limitations" (8 respondents, 16.7%). These reasons reflect researchers' concern that while AI may generate knowledge, it lacks the ability to take responsibility or exercise judgement. In particular, "unclear responsibility" was selected by many respondents across overlapping choices, since there remains an ethical gray area over who should ultimately bear responsibility for errors and misinterpretations made by AI.

One respondent stated that "no matter how sophisticated AI becomes, without true 'intent' and 'accountability,' it shouldn't be granted equal status to a human," a comment that highlights how some researchers still view AI as a simple tool. Another remarked that "AI won't raise questions or solve problems on its own unless it is prompted," criticizing AI's lack of autonomy.
Other respondents added that "even if the AI's outputs are accurate, interpreting the research as a whole and taking overall responsibility must lie with humans."

Global Debate on AI Co-authors

As AI's contribution to scientific research grows, the global scientific community has also begun discussing whether AI should be listed as a co-author.

In January 2023, Nature declared that "Large Language Models (LLMs) such as ChatGPT do not meet authorship criteria." This judgement rests on the principle that authorship implies accountability, which AI cannot fulfill. Accordingly, any use of AI must be disclosed in the "Methods" section of a paper (or elsewhere, if necessary). The journal thereby reaffirmed that co-authorship requires not just contribution but also ethical responsibility and reproducibility.

In March of the same year, Holden Thorp, editor-in-chief of Science, stated in an editorial that using ChatGPT-generated text in papers should be prohibited, warning that violations could be considered plagiarism or scientific misconduct. Science has adopted even stricter AI policies: AI cannot be listed as an author or co-author, and AI-generated content cannot be cited. If AI was used in the research process, the details (such as the tool's name, model version, and prompts used) must be disclosed in the cover letter and the acknowledgements section. Authors are expected to take full responsibility for the accuracy, citations, and potential bias of AI-generated content, and a manuscript may be rejected if its use of AI is found to be inappropriate. Science also prohibits reviewers from using AI to write peer reviews, as this could breach manuscript confidentiality, and any AI-generated images or multimedia must receive explicit approval from the editors, with exceptions possible for papers directly about AI.
Science has also stated that "the rules on AI-generated content may be adjusted to reflect future changes in copyright law and ethical standards."

As these cases show, the global discussion emphasizes that no matter how significant AI's contribution may be, authorship must still meet the standards of responsibility and verifiability, and it remains an open question whether AI is truly capable of fulfilling such criteria.

New Ethics for a New Age of Science

Overall, the on-campus survey revealed that a majority of GIST researchers already perceive AI not just as a tool, but as a collaborative partner in creative research. At the same time, it pointed to an inevitable need to redefine the philosophical and ethical standards for recognizing AI's contributions. AI now assists scientific research in increasingly refined ways, such as helping with research planning and theoretical formulation, showing that it has stepped into the deeper regions of creativity.

As always, technological advancements raise new ethical questions that we must face. This time, we have to ask ourselves: is AI merely a tool, or a fellow researcher? This question goes beyond a simple technical judgement, calling for societal reflection on how we, as a community, should define and accept the changing concepts of "creativity" and "contribution." Perhaps the answer lies not in the technology itself, but in the collective choices and agreements of our ever-changing community.

Translated by Yoonseo Huh