The journal follows the Recommendations on the Use of Artificial Intelligence in Scholarly Communication developed by the European Association of Science Editors, as well as the recommendations of leading scientometric databases, including Web of Science and Scopus.

Detailed recommendations are available at:
https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

The Editorial Office regards artificial intelligence (AI) as an auxiliary tool that can improve the efficiency of scholarly work. At the same time, all results obtained with its use must be critically verified, scientifically justified, and professionally interpreted by the author.

The Editorial Office expects authors to voluntarily and transparently disclose the use of AI tools in the preparation of a manuscript.

When submitting an article, the author must upload a separate file entitled “Author’s Declaration on the Use of AI”, which should specify:

  • the purpose of using AI;

  • the way in which it was used;

  • the stages of manuscript preparation at which AI was used.

The content of this declaration must also be reproduced at the end of the article, before the list of references, as a separate structured element entitled “Use of AI”.

The use of generative AI for the creation or modification of images is not permitted, except in cases where this constitutes part of the research design and is properly described in the text of the article.

The journal’s policy is based on the principles of academic integrity, transparency, and the responsible use of modern digital technologies in scholarly activity.

For structured disclosure of AI use, authors are encouraged to complete the GAIDeT declaration (Generative AI Delegation Taxonomy), available at:
https://panbibliotekar.github.io/gaidet-declaration/index-uk.html

The completed declaration must be submitted together with the manuscript.

The Editorial Board is likewise expected to disclose any use of AI tools, including screening software used to determine whether particular parts of a manuscript or review were created or edited with AI.

The Editorial Office does not recommend the use of AI during the peer review process.

At the same time, the Editorial Office cannot fully exclude the possibility that reviewers may use such tools.

For this reason, reviewers are asked to maintain the highest possible level of transparency and to inform the editors that AI was used, how it was applied, and what results were obtained.