The Gender Gap in AI Trust: A Wake-Up Call for Education

A new Deloitte report highlights a growing gender trust gap in the use of generative AI in Australian workplaces. According to The Australian, only 50% of women report trusting and using generative AI, compared to 70% of men.

The reasons behind this disparity are sobering. Women are significantly more likely to encounter negative AI interactions, including deepfake harassment, online abuse, and biased outputs. As a result, they are often more cautious about adopting AI tools at work, or resist them outright.

In response, several organizations are launching targeted upskilling and reskilling programs aimed specifically at women, hoping to narrow this trust gap and promote more equitable access to emerging technologies.


Summary

This isn’t just a workplace story — it’s an education story.

If women are entering the workforce already hesitant or skeptical about AI, the roots of that skepticism likely run back into earlier experiences in school and university. And if girls are disproportionately exposed to the risks of AI — like deepfakes and harassment — it’s no wonder they’re less enthusiastic about these tools later on.

For educators, this raises pressing questions:

  • Are we preparing all students to engage critically and confidently with AI?
  • Are we naming and addressing the unique risks faced by girls and other marginalized groups?
  • Are we ensuring that “AI literacy” isn’t just about technical skills, but also about ethical awareness, consent, and digital resilience?

Reflection

I found this report both unsurprising and unsettling.

On one hand, it's obvious that women's trust in technology has been shaken by lived experience. Harassment, bias, exclusion: these aren't abstract risks; they're daily realities online. But what's striking here is how this plays out long before people reach the workplace.

If we want true AI literacy, we can’t just hand out tutorials or coding workshops. We need to create environments where all students — especially girls — feel safe to experiment, question, and push back on the technology itself. That means:

  • Calling out bias in classroom AI tools.
  • Making sure students understand how deepfakes work and how to spot them.
  • Talking openly about online safety and digital rights.

For me, this connects to my own use of AI tools like ChatGPT. Yes, I use them heavily, but I do so from a position of critical agency. I know when I'm offloading mental work, I know when I'm checking sources, and I know that the machine's confidence doesn't equal competence.

That’s the mindset we should be nurturing in students: AI as tool, not master; support, not replacement.


References