Artificial intelligence (AI) is rapidly transforming education, offering new ways to engage students and enhance learning. While AI-powered tools bring many benefits, they also introduce significant risks, particularly for young children who may not yet understand the complexities of the digital world. For preschool teachers, safeguarding children now extends beyond physical safety to include digital protection.
This article focuses on the risks AI poses to preschool-aged children and discusses how teachers can create a safe, secure learning environment. By understanding these challenges, teachers can play a crucial role in fostering responsible digital engagement while safeguarding the well-being of young learners.
AI and the Risk of Child Exploitation
Advancements in AI technology, particularly deepfake capabilities, have raised concerns about child exploitation. AI-generated images and videos can be manipulated to create harmful content, including child sexual abuse material (Chesney & Citron, 2019). Many Malaysian preschools now enforce stricter image-sharing policies, requiring parental consent before using children’s photos for marketing, newsletters, or parent engagement. However, risks persist whenever these images are made publicly accessible: malicious actors can download, modify, and misuse them, increasing the potential for digital exploitation.
To mitigate these risks, preschool teachers should advocate for stronger digital policies. While obtaining parental consent is a good first step, schools must ensure that images remain within secure platforms and are not freely accessible. Teachers can also help parents understand the dangers of sharing their children’s photos online and encourage them to use stricter privacy settings on social media. Schools can further enhance security by limiting children’s online exposure to marketing materials and adopting stricter access controls.
Digital Manipulation and the Vulnerability of Young Learners
Preschool-aged children, who are in Piaget’s preoperational stage of cognitive development (ages 2–7), have limited ability to think abstractly and differentiate between real and AI-generated content. At this stage, children rely on perception rather than logic, making them highly susceptible to digital manipulation (Harrison & McTavish, 2018). AI-powered applications can generate realistic videos, voice imitations, and interactive content, making it difficult for young learners to distinguish between reality and fiction.
Beyond misinformation, data privacy is another critical concern. Many AI-driven educational apps track children’s speech, behaviour, and preferences, collecting vast amounts of personal data. If these platforms lack strong security measures, sensitive information may be misused, putting children at risk of data breaches and privacy violations (Livingstone et al., 2020).
To address these challenges, preschool teachers should introduce digital literacy concepts early. Using child-friendly explanations and hands-on activities, teachers can help children understand that not everything they see online is real. Schools must also evaluate digital learning tools to ensure they comply with Malaysia’s Personal Data Protection Act (PDPA) 2010 before integrating them into the curriculum. Encouraging a balanced approach to learning, in which screen time is complemented by hands-on play, can prevent over-reliance on digital platforms. Ideally, teachers should collaborate with parents to implement privacy controls and age-appropriate content filters on children’s devices at home.
AI and the Risk of Online Grooming
The rise of AI has significantly increased the sophistication of online grooming, allowing predators to mimic children’s language patterns and analyse their behaviour to manipulate them (Whittle et al., 2013). As children engage more with digital devices, AI-assisted grooming tactics become harder to detect. Even in preschool settings, children may be exposed to risks through various online platforms. Educational apps with unmonitored chat functions can allow harmful interactions, while publicly shared social media images, whether posted by parents or schools, can increase children’s visibility to online predators. Moreover, unregulated video content on platforms like YouTube may feature AI-generated characters that introduce inappropriate or harmful themes.
Preschool teachers play a critical role in digital safety education. Teachers should incorporate age-appropriate online safety lessons into daily activities to help children recognize and respond to unsafe situations. Using storytelling, role-playing, and puppetry, teachers can teach children when to seek help from a trusted adult. Schools must also ensure that digital tools used in classrooms do not include open chat functions or unsecured communication channels. Collaboration with parents is key, as teachers can guide them in monitoring their children’s online activity and recognizing warning signs of online grooming. Staying updated on national child online safety policies, such as those provided by the Malaysian Communications and Multimedia Commission (MCMC), further strengthens teachers’ ability to protect young learners.
Best Practices for Preschool Teachers
To ensure a safe digital learning environment, preschool teachers should adopt proactive strategies to protect young learners from AI-related risks. One of the most important measures is strengthening digital media policies. Schools must establish clear guidelines on collecting, storing, and sharing children’s images and personal data. Using secure educational platforms that comply with child safety regulations and limiting publicly accessible media featuring children can help prevent unauthorized use and reduce the risk of exploitation.
Additionally, preschool teachers should advocate for AI and child safety regulations. Teachers must work with school administrators to develop clear policies on AI-powered tools used in classrooms. Professional development is equally crucial. Teachers should attend training sessions on AI risks and digital safety to stay informed about emerging challenges in early childhood education.
Another key practice is the integration of digital literacy in early education. Teachers should introduce basic online safety concepts in a way that is accessible to preschoolers. Using storytelling, songs, and interactive activities, teachers can help children recognize unsafe digital interactions and understand when to seek help. Teaching foundational digital literacy skills fosters responsible technology use from an early age.
Lastly, teachers must commit to staying informed on AI risks and child protection strategies. Participating in workshops, online courses, and professional discussions with other teachers and parents ensures they remain updated on new AI developments and threats. Collaboration with parents helps extend digital safety practices beyond the classroom and into the home environment. By maintaining open communication and sharing best practices, teachers and families can work together to safeguard children from AI-related risks.
Conclusion
AI is reshaping early childhood education, offering both opportunities and challenges that teachers must address. Understanding AI risks, such as digital manipulation, data privacy concerns, and online grooming, allows preschool teachers to take proactive measures to protect students. A secure learning environment can be created by implementing strict digital policies, integrating responsible technology use, and fostering collaboration between teachers, parents, and policymakers. Protecting young learners in the digital age is a shared responsibility. Continuous vigilance, education, and collaboration are essential in creating a safer digital future for Malaysia’s youngest learners.
References
Chesney, R., & Citron, D. (2019). Deepfakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1810. https://doi.org/10.15779/Z38RV0D15J
Harrison, C., & McTavish, M. (2018). Literacy learning in the digital age: Research and practice. Routledge.
Livingstone, S., Stoilova, M., & Nandagiri, R. (2020). Data and privacy literacy: The role of the school in educating children in a datafied society. In R. Hobbs & P. Mihailidis (Eds.), The handbook of media education research (pp. 413–425). Wiley-Blackwell. https://doi.org/10.1002/9781119166900.ch36
Whittle, H., Hamilton-Giachritsis, C., Beech, A., & Collings, G. (2013). A review of online grooming: Characteristics and concerns. Aggression and Violent Behavior, 18(1), 62–70. https://doi.org/10.1016/j.avb.2012.09.003
Written by Azura Abrasid

Azura Abrasid is an experienced and passionate academic, project leader, and PhD candidate focusing on Digital Literacy in Education. With a career spanning over 15 years, she has conducted numerous workshops, talks, forums, and seminars globally, shaping the next generation of preschool practitioners. As a university lecturer since 2008, Azura is dedicated to designing programs that enhance the professionalism of Early Childhood practitioners while advocating for the holistic development of young learners. She believes in unlocking every child’s potential through patience, humour, and respect for differences. Currently serving as the Head of Early Childhood Education Programmes at Veritas University College, Azura holds various professional certificates and awards, is an active member of several educational organizations, and has provided training for numerous institutions worldwide.