  • P-ISSN 1738-6764
  • E-ISSN 2093-7504
  • KCI

AI Policy

As Artificial Intelligence (AI) continues to evolve rapidly, IJoC is committed to ensuring the responsible, ethical use of AI in research and publishing. We will regularly review and update this policy to align with technological and ethical developments.


1. AI Authorship

AI tools, including Large Language Models (LLMs) like ChatGPT, do not meet the criteria for authorship under IJoC guidelines. Authorship requires accountability for the entirety of the work, which cannot be fulfilled by AI. Therefore, AI cannot be listed as an author on any manuscript.

    • Disclosure: The use of AI tools in manuscript preparation must be transparently declared in the Methods section or another suitable section. This includes any AI assistance in drafting, data analysis, or other substantial contributions.
    • AI-Assisted Copy Editing: Use of AI for minor editing, such as grammar correction or language refinement (termed “AI-assisted copy editing”), does not require disclosure. However, autonomous content generation by AI is not permitted.
    • Accountability: Human authors are fully responsible for the integrity of the manuscript and must ensure that any AI-generated content aligns with their original work and reflects their intellectual contributions.

2. Generative AI Images

IJoC prohibits the use of AI-generated images and videos in submitted works due to unresolved legal and ethical concerns surrounding copyright and research integrity.

    • Exceptions: AI-generated images may be used if sourced from legally recognized agencies or if the manuscript directly addresses AI technologies. These exceptions will be reviewed on a case-by-case basis.
    • Non-Generative AI: The use of AI tools for enhancing, combining, or modifying existing images (such as non-generative machine learning tools) must be disclosed in the image caption. Editors will review these cases individually to ensure compliance with ethical guidelines.
    • Image Types Covered: This policy applies to generative images such as illustrations, scientific diagrams, and editorial art (e.g., drawings, cartoons). Text-based figures such as tables, flow charts, and basic graphs that do not involve image generation are exempt from this policy.

3. AI Use by Peer Reviewers

The integrity of the peer review process is paramount, and we recognize the essential role of expert human judgment. While IJoC may explore safe AI tools for reviewers in the future, we currently prohibit reviewers from uploading manuscripts into generative AI tools due to concerns over privacy, accuracy, and ethical integrity.

    • Declaration: If any AI tools are used to assist in the review process (e.g., for grammar checking or data validation), reviewers must disclose this in their review report. Transparency allows editors to assess the appropriateness of the AI's role.
    • Human Accountability: Peer reviewers are responsible for the content of their evaluations, and AI-generated content is not permitted as a replacement for human expertise.

4. Ethical AI Use in Research

Researchers using AI for data analysis, simulations, or other parts of the research process must ensure that AI models are transparent, reproducible, and free from bias. Researchers must provide details about the algorithms, data sources, and AI tools used so that the methods are verifiable and ethical.

    • Data Transparency: Authors must share the code, datasets, and algorithms used in their research where possible, aligning with field standards for transparency and reproducibility.
    • Bias and Integrity: AI systems used in research should be evaluated for biases, and authors must report steps taken to mitigate any potential ethical issues.

INTERNATIONAL JOURNAL OF CONTENTS