
The Stonar Way 27.02.26

[Photo: headshot of the headmaster]

Mr Way reflects on the use of AI in schools.

The stimulus for this week’s piece came from two sources. I visited a Year 10 GCSE PRE (Philosophy, Religion & Ethics) lesson this week in which the class were examining racial discrimination: where it comes from and how it might be prevented. As part of the discussion, they noted the importance of representation in the media and in our cultural life. The pupils raised the Disney princesses they had grown up with, who represented only one type of person. They acknowledged that representation in Disney films has since broadened, but that had not been their experience when they were young. I was impressed by their thoughtful contributions to the debate. Later that day, I read an article in a professional magazine examining the growing concern around AI bias.

Both prompted the same question: how do subtle patterns shape the way we see ourselves and others? Our school community is proudly diverse and we work hard to foster belonging and to celebrate difference. Yet AI, if used uncritically, can quietly undermine this work.

Consider a simple example. Ask an AI tool to create a game using girls’ names and it may generate Olivia, Sophie and Alice. Names are deeply tied to identity. What message might this send to Kataryna, Aahana or Amara about how they are seen and valued? Similarly, when asked to produce images of successful students or certain professions, AI systems frequently depict white males. One can request “more diversity”, but the result can feel tokenistic rather than genuinely representative.

As the article explained, AI systems do not think; they identify and reproduce patterns in the data on which they are trained. If that data over-represents certain groups, the outputs will too. Research from the Alan Turing Institute and guidance from UNESCO both warn that such biases can be amplified over time. As more online content is itself generated by AI, a feedback loop emerges: systems train on their own skewed outputs, further narrowing representation.

If AI repeatedly generates the same names, faces and stories, we subtly signal who “belongs” in particular spaces. That has real consequences for confidence and participation.

So what can we do? First, notice the bias, and teach our pupils to notice it too. We are making AI bias an explicit part of classroom conversation and encouraging pupils to report examples they encounter. As staff, we must also critically evaluate AI outputs rather than accepting them at face value.

Our recently launched AI Positive School policy reflects this balanced approach. We embrace AI’s educational potential, but we do so with discernment, integrity and a steadfast commitment to ensuring every member of our community feels seen and valued.
