Generative AI Policy

For Authors

Effective March 22, 2023, this policy addresses the growing use of generative AI and AI-assisted tools by authors. Its aim is to improve transparency and offer guidance for everyone involved in producing scholarly work: authors, editors, reviewers, readers, and contributors. The publisher will track developments and update the policy as needed. Note that this policy covers the writing stage only; it does not restrict the use of AI for data analysis in research.

When using generative AI or AI-assisted tools while writing, authors should limit their use to improving clarity and readability. Human oversight is essential: authors must carefully check and edit any AI-produced text. Ultimately, authors are responsible for the accuracy and quality of their manuscripts.

Authors must disclose any use of generative AI or AI-assisted tools in a dedicated section placed before the References, titled “Declaration of use of AI in the writing process.”

Suggested statement: The author(s) used [NAME TOOL/SERVICE] during the preparation of this work to [REASON]. After using the tool/service, the author(s) carefully reviewed and edited the output as needed and accept full responsibility for the published content.

This disclosure is required for papers published from Vol. 2(1), November-February 2024 onward.

This policy does not apply to basic utilities such as spelling and grammar checkers. If there is nothing to report, state “nothing to disclose.”

AI systems and AI-assisted technologies must not be credited as authors. Authorship entails duties only humans can assume. Each author/co-author is responsible for the accuracy and integrity of the work, must approve the final version, and consent to submission. Authors must also ensure originality, meet authorship criteria, and avoid infringing third-party rights.

Use of generative AI will be assessed on a case-by-case basis, as editors can often recognize patterns characteristic of AI-produced writing.

Is this policy about AI used in research workflows (e.g., data processing)?

No. The policy addresses generative AI and AI-assisted tools (such as large language models) used for writing. It does not forbid AI tools in study design or research methods. Where AI is used in those capacities, it should be described as part of the methodology with full details provided in the Methods section.

Use of generative AI for images

Whether AI-generated images are acceptable depends on the type of image and the rights involved:

- Explanatory/concept images: allowed with verification. AI may create diagrams or illustrations to explain ideas. Ensure accuracy and that the visuals convey the intended concepts before inclusion.

- Artistic renderings: allowed if rights are clear. AI may be used for draft artwork (e.g., cover art) that is later refined by a designer. Ensure you hold the rights to any source material and check the tool’s terms for commercial restrictions.

- Factual/evidential images: not allowed. Do not use AI to create or alter images that substantiate scientific, clinical, or technical claims, as these require demonstrable accuracy.

For Reviewers

When a researcher is asked to review another scholar’s manuscript, the document must be kept strictly confidential. Reviewers must not upload the manuscript, whether in full or in part, to any generative AI system. Doing so may violate the authors’ confidentiality and intellectual property rights and, if the manuscript contains personally identifiable information, may breach data-privacy laws. Reviewers are fully responsible and accountable for the content of their review reports.

For Editors

All submitted manuscripts must be treated as confidential. Editors must not upload a submission or any portion of it to a generative AI tool. Such actions may infringe the authors’ confidentiality and proprietary rights and, where personal identifiers are present, may violate data-privacy protections.