AI Usage Policy

JIFFK acknowledges that large language models (LLMs) and generative AI (GenAI) present opportunities to accelerate research and its dissemination. While these opportunities have the potential to be transformative, such tools cannot replicate the creative and critical reasoning of human researchers. We distinguish three categories of AI application: assistive (which does not require disclosure), generative (which requires disclosure), and prohibited. JIFFK's policy on the use of AI technology has been established to help authors, reviewers, and editors make informed decisions about the ethical application of these tools.
 
We acknowledge that AI-assisted writing has become increasingly prevalent as the technology becomes more accessible. AI tools that offer recommendations for improving one's own work, such as suggestions on language, grammar, or structure, are classified as assistive and do not require disclosure by authors or reviewers. Nevertheless, authors remain accountable for ensuring that their submission is accurate and meets rigorous standards of scholarship.
 
Although submissions will not be rejected solely because the use of GenAI tools was disclosed, the editor reserves the right to reject a submission at any point in the publishing process if it becomes apparent that GenAI was used inappropriately in its preparation without adequate disclosure.
Reviewers who inappropriately generate review reports using ChatGPT or other GenAI tools will not be invited to review for the journal again, and their reviews will not be considered in the final decision.
Editors must not use ChatGPT or other GenAI tools to produce decision letters or summaries of unpublished research. If reviewers or editors violate peer-review confidentiality by using GenAI tools, the journal and publisher reserve the right to take appropriate action.
The following are examples of inappropriate use of GenAI:
 
1. Producing inaccurate text or content
2. Generating data or entire submissions through a sequence of prompts
3. Using GenAI tools to conduct interviews in place of participants in qualitative research
4. Using GenAI tools to analyze themes and experiences in place of the researcher
5. Plagiarism or improper attribution of prior sources
6. Presenting generated images as novel or original research images
7. Making false claims or fabricating references
8. Using GenAI tools to conduct peer review or editorial work