The increasing reliance on artificial intelligence (AI) for writing tasks ranging from academic essays to creative content has sparked debate about the loss of human authenticity in written expression. Critics argue that AI-generated texts lack the nuanced emotions, lived experiences, and unique voice that define human writing, resulting in pieces that may be technically proficient but emotionally sterile (Bender, 2024). Studies suggest that while AI can mimic the stylistic and structural elements of human language, it often fails to convey the depth of personal perspective, cultural context, and ethical reasoning inherent in human-authored works (Floridi, 2019). Furthermore, the detachment of AI from genuine human experience raises concerns about the potential homogenization of thought and the loss of individuality in communication. As such, the question persists: if writing is an extension of human identity, can AI-generated texts ever truly reflect the richness of human thought and feeling?
As an educator, I find myself in an ethical and moral conundrum every assignment season, especially with academic written work. Did my students actually write this, or was it AI-produced? I do see parts of them shine through in the writing: the imperfections, the somewhat clumsy and haphazard thoughts. Yet most of what I receive reads as an almost perfect piece of writing.
PERSONAL PERSPECTIVE – THE GENUINE HUMAN EXPERIENCE?
The use of artificial intelligence in writing essentially fails to capture the genuine human experience, as it lacks the lived subjectivity, emotional depth, and contextual awareness that define authentic human expression. Unlike human authors, whose work emerges from personal history, cultural background, and ethical reflection, AI-generated text is derived from statistical patterns in pre-existing data, rendering it devoid of true intentionality or consciousness. Scholars argue that writing is not merely an act of linguistic construction but a manifestation of identity, memory, and social situatedness, qualities that AI cannot replicate (Hayles, 2023). Furthermore, while AI may simulate stylistic conventions, it does not engage in meaning-making with the same ethical and emotional weight as human writers, often producing outputs that feel hollow or derivative (Floridi, 2019). This limitation is particularly evident in creative and reflective writing, where nuance, vulnerability, and introspection are essential (Gunkel, 2020). While AI can mimic aspects of human language, it remains incapable of embodying the lived experiences that make writing a deeply human endeavour.
The inability of artificial intelligence to genuinely replicate human vulnerability, introspection, and nuanced expression presents a fundamental limitation in its application to creative and reflective writing. Creative writing relies on personal lived experience, emotional truth, and the subtleties of human perception, qualities that AI, as a system trained on existing data rather than possessing genuine consciousness, cannot embody (Oppermann, Boden, Hofmann, Prinz & Decker, 2019). Reflective writing, on the other hand, demands introspection and ethical self-awareness, processes that require a sense of identity and subjective experience that AI fundamentally lacks (Hayles, 2023). While AI can generate grammatically correct and stylistically coherent text, its outputs remain derivative, pieced together from patterns in its training data without true understanding or emotional weight. This limitation is especially evident in works that demand vulnerability, such as personal essays or poetry, where the absence of human authorship results in writing that feels sterile or emotionally disconnected (Gunkel, 2020). Ultimately, AI may assist in drafting or editing, but it cannot replace the uniquely human capacity to infuse writing with genuine meaning, self-reflection, and emotional resonance.
HOW MUCH AI ASSISTANCE IS ACCEPTABLE BEFORE WRITTEN ASSESSMENTS LOSE THEIR EDUCATIONAL VALUE?
The dawn of AI-powered writing tools, such as OpenAI’s ChatGPT, Google’s Gemini, and other large language models (LLMs), has transformed how students approach written assessments (Cotton, Cotton & Shipway, 2023). While these tools offer efficiency and support, they also pose significant challenges regarding originality, intellectual development, and assessment validity. The question of how much AI assistance is acceptable before written assessments lose their educational value is a growing concern in academia. While AI tools can enhance productivity by aiding research, grammar correction, and idea generation, excessive reliance risks undermining critical thinking and originality, which are the core goals of education. If students depend too heavily on AI for content creation, they may miss opportunities to develop essential skills such as analysis, argumentation, and independent problem-solving.
The integration of AI as a supportive tool in academic writing presents both opportunities and challenges for scholarly communication. AI-powered applications, such as grammar checkers, paraphrasing tools, and content generators, can enhance efficiency by assisting with language refinement, structural coherence, and even initial drafting. Nevertheless, their use prompts serious concerns about authenticity, analytical rigour, and scholarly integrity. While AI can streamline the writing process, over-reliance may diminish a writer’s ability to develop original arguments, engage in deep analytical reasoning, and cultivate a distinctive scholarly voice. To maximize benefits while mitigating risks, educators and institutions must establish clear ethical guidelines on permissible AI use, ensuring that AI serves as a supplement rather than a substitute for intellectual work and that assessments remain meaningful measures of learning. Ultimately, the ethical and pedagogical challenge lies in leveraging AI as a supportive tool rather than a replacement for genuine intellectual engagement.
MITIGATING THE ETHICAL AND PEDAGOGICAL CHALLENGES
The increasing use of AI writing tools in education presents both ethical and pedagogical challenges, necessitating proactive strategies to preserve academic integrity and learning outcomes. To address ethical concerns, institutions should implement clear policies defining acceptable AI use, distinguishing between permissible assistance (e.g., grammar checks, brainstorming) and prohibited practices such as full-text generation (Cotton, Cotton & Shipway, 2023). Tools such as AI detectors (e.g., Turnitin’s AI writing detection) can help identify misuse, but educators must pair them with critical discussions about plagiarism and authorship to foster student accountability (Perkins, Roe, Postma, McGaughran & Hickerson, 2023). To address the pedagogical dilemma, AI’s role should be scaffolded rather than substitutive: assignments can be redesigned to emphasize process-based learning, for example by requiring drafts, reflections, or oral defences to demonstrate original thought (Warschauer, 2022). Additionally, Ng, Leung, Chu & Qiao (2021) posit that it is equally essential to teach students to critique AI-generated outputs for bias, accuracy, and relevance, ultimately transforming reliance into critical thinking practice.
FINAL THOUGHTS – USE WITH CARE AND CAUTION
The role of AI in written assessments is inevitable, but its unimpeded use threatens the fundamental objectives of education. Educational institutions must redefine assessment strategies to safeguard critical thinking and originality. Thus, the question is not whether AI should be used, but how much AI assistance aligns with pedagogical goals without compromising learning outcomes. We cannot dismiss it completely, but we can use it to our advantage with care, caution and, of course, ethics.
REFERENCES
Bender, E. M. (2024). Resisting dehumanization in the age of “AI”. Current Directions in Psychological Science, 33(2), 114–120.
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 60(1), 1–12.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193.
Gunkel, D. J. (2020). The right(s) question: Can and should robots have rights? In Artificial intelligence (pp. 255–274). Brill Mentis.
Hayles, N. K. (2023). Figuring (out) our relations to AI. In Feminist AI: Critical perspectives on algorithms, data, and intelligent machines. Oxford University Press.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.
Oppermann, L., Boden, A., Hofmann, B., Prinz, W., & Decker, S. (2019). Beyond HCI and CSCW: Challenges and useful practices towards a human-centred vision of AI and IA. In Proceedings of the Halfway to the Future Symposium 2019 (pp. 1–5).
Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, A. (2023). Detection and avoidance of AI plagiarism in academic research. Research Ethics, 19(2), 1–15.
Warschauer, M. (2022). The paradoxical future of digital learning. Learning, Media and Technology, 47(1), 1–15.
Written by Shubashini Suppiah

Shubashini Suppiah is a teacher educator at the Institute of Teacher Education Gaya Campus in Kota Kinabalu, Sabah, Malaysia. Her research interests are teacher education and teacher professional development, reflective practice approaches, and digital literacy in the ESL classroom.