Policy on the Use of GenAI

Artificial Intelligence (AI) has increasingly become part of everyday discussions, including academic research. However, the term AI is often used broadly and can create confusion. In this discussion, the term AI specifically refers to Generative Artificial Intelligence (GenAI)—systems trained on datasets that are not directly visible to or controlled by a research team. Our focus is on the use of AI as research support, such as assisting in conducting or documenting research, rather than on research that studies the impact of AI itself.

This distinction allows us to examine when the use of AI in research may be justified and to define the boundaries of responsible use. Although AI tools are frequently promoted for their benefits, their effects on the research and creative process are still being explored. For instance, there are concerns that reliance on AI may reduce human reflexivity, creativity, and critical thinking, potentially discouraging the development of new research ideas. Additionally, questions arise about whether the use of AI could compromise responsible research practices.

While AI tools may enhance certain aspects of the research process, they also raise significant concerns related to academic integrity. One important issue is the risk of uploading unpublished work into generative systems, where unreviewed research could become part of broader datasets and influence future outputs. This situation may affect both the integrity of scholarly knowledge and the reliability of AI-generated information.

Moreover, AI systems can produce biased, inaccurate, or misleading outputs, which are often difficult to detect and verify. The use of AI may also complicate the proper attribution of prior research. Another concern is that AI providers may collect and reuse user inputs and outputs, potentially raising intellectual property and data ownership issues.

General Policies for Authors

When authors submit their work to our journal, they are responsible for ensuring the originality, accuracy, validity, and integrity of the submitted content. If Artificial Intelligence (AI) tools are used during the research or writing process, authors must use these tools responsibly and in accordance with the journal’s ethical standards and authorship guidelines.

The journals represented in this collaboration support the responsible and transparent use of AI tools, provided that they comply with high standards of data security, confidentiality, and copyright protection. AI tools may be used in the following cases, subject to appropriate disclosure, justification, and verification by the authors, who remain fully responsible for the accuracy and reliability of any AI-generated output:

  • Language editing and improvement
  • Literature classification and organization
  • Data collection (excluding the generation of synthetic or fabricated data)
  • Coding assistance (e.g., processing large datasets or multimedia sources)
  • Data analysis, provided that the results are replicable and interpretable by humans

However, the journal does not permit the use of AI tools to create, alter, or manipulate images, figures, or other forms of empirical data intended for publication.

The term “images, figures, or other forms of empirical data” includes, but is not limited to:

  • Photographs and illustrations
  • Charts and graphs
  • Data tables
  • Medical images
  • Image fragments or snippets
  • Computer code and formulas
  • Video, audio, field recordings, and other media formats

The term “manipulation” refers to actions such as augmenting, concealing, relocating, removing, or introducing specific elements within an image, figure, or dataset.

Policies on the Use of AI

1. Disclosure and Documentation of AI Use

Authors must clearly disclose and document any use of AI tools in their research. Except for basic grammatical and copy-editing assistance, the use of AI should be reported on the journal submission page, in the methods section, and in the acknowledgment section (if applicable). Authors must provide the following information:

  • The full name of the AI tool and its version number.
  • How and when the AI tool was used during the research process.

Authors must also acknowledge the limitations of AI language models, including potential bias, errors, and knowledge gaps, within the manuscript.

2. Justification for the Use of AI

When AI tools are used beyond grammatical or copy-editing purposes, authors must clearly explain why AI was used in the research process. This justification should include:

  • An explanation of why alternative methods were insufficient.
  • The precautions taken to prevent bias, errors, hallucinations, or misinformation generated by AI.

As a general rule, AI tools must not be used to generate substantive content in research articles. This includes prohibiting AI from generating:

  • General overviews, ideas, or concepts
  • Motivational statements
  • Theories or arguments
  • Literature review references
  • Discussions or analytical interpretations

3. Verification and Responsibility for AI Output

The following principles apply to all uses of AI, including grammar and copy-editing tools.

  • AI tools must not be listed as authors. AI systems cannot take responsibility for published content or provide consent for copyright and licensing agreements. In line with COPE’s position statement on Authorship and AI tools, AI cannot fulfill the role of an author.
  • Authors are responsible for verifying the accuracy, validity, and appropriateness of AI-generated content and citations. Any errors or inconsistencies must be corrected. For example, manual verification methods such as subsample coding should be conducted to minimize the risk of AI hallucinations.
  • AI must not be used as a fully automated or “black-box” analytical process. It should not independently generate summaries, interpretations, claims, or research findings. Research findings must be validated using established benchmark methods.
  • Authors must be aware of potential plagiarism risks. AI-assisted language editing may unintentionally reproduce phrases or sentences from existing sources, and multi-word edits produced by AI may constitute unintentional plagiarism. Authors must check original sources to verify originality and ensure proper attribution.
  • Human oversight is essential. All AI-assisted work must be carefully reviewed and edited by the authors. AI tools can produce outputs that appear authoritative but may contain inaccuracies, incomplete information, or bias. Ultimately, authors remain fully responsible and accountable for the content of the manuscript.

Policies for Editors and Reviewers

  • The journals uphold the highest standards of honesty and transparency in the editorial and peer-review process.
  • Editors and reviewers are not allowed to upload any part of unpublished manuscripts (such as files, images, or data) into Generative AI tools.
  • This rule exists because the manuscript is confidential and belongs to the author. Sharing it with AI tools may violate the author’s intellectual property rights.
  • Editors and reviewers must not use AI tools to write or generate peer-review reports.