Subscribe to receive ECPAT NZ updates on keeping tamariki and rangatahi in Aotearoa safe from sexual exploitation.
12 Jan 2026
Before we all jump on the growing call to ban Grok, we first need to ask ourselves what the problem is we’re trying to solve. Then we can decide if banning a platform or app is really an effective way of solving it.
This isn’t about one platform or app feature. The real question is whether Aotearoa believes a young person’s image and likeness should be protected in an AI world where any ordinary photo can be “nudified”, sexualised, and shared at scale. Until we’re clear on that national value—what privacy actually means now—our responses will keep oscillating between outrage and improvisation.
In Europe, bans and enforcement are on the table because the rules and policies already in place around generative AI are stronger: generative AI companies there repeatedly collide with firm privacy expectations and real enforcement. In Aotearoa, we don’t yet have agreement on what the standards for an AI world should be, and we certainly don’t yet have a comprehensive, AI-era privacy and online safety framework with the same clarity and enforcement leverage that you see in much of Europe.
In Aotearoa, the risk is that we end up relying on platform promises and piecemeal responses, rather than a clear national line on what protections people—especially young people—should reasonably expect over their image and likeness.
In Aotearoa, we’ve seen a massive surge in AI-generated fake nudes and “nudified” images. That alone shows how quickly this technology is being used to sexualise people’s photos—including young people’s.
Grok sits inside Elon Musk’s X (formerly known as Twitter). Right now it isn’t just another platform in the mix: it is a platform where the public stance from the top has repeatedly signalled hostility to regulation and enforcement. That matters, because when leadership treats rules as optional, safety becomes optional too.
Elon Musk and X (formerly known as Twitter) may say certain content creation isn’t allowed by Grok, but what matters is how easy it is for users to get around those rules and generate and share harmful material anyway. “It’s not allowed” isn’t a safeguard if it’s still simple to do.
The risk is not just “bad content exists online.” The risk is a step-change in what’s possible: images taken in normal, everyday contexts can be turned into sexualised content without consent. And once that content exists, it can be used to harass, humiliate, blackmail, or simply circulate—causing harm that is immediate and lasting. This has been possible for years, but now almost anyone can do it, anywhere, for free, in seconds.
Some people still tell themselves that generating a sexualised image from a non-sexual photo “isn’t really harming anyone.” That is wrong. The young person may see it. Their peers may spread it. The shame and loss of control can be devastating. And there is a wider concern about escalation when these tools make sexualised depictions easier to create, share, or use to extort someone.
When this issue hits the news, the conversation too often slides toward controlling young people: screen-time crackdowns, bans aimed at kids, lectures about what young people should or shouldn’t do online.
Young people aren’t the problem.
We need to spend less time discussing regulation to get young people off social media, as though they’re the issue, and more time looking at where the problem really lies: with perpetrators using platforms to generate child sexual abuse material, and with the platform and policy systems that enable them.
Whether the tool is Grok today or another generator tomorrow, the principle is the same: companies must prevent their products being used to create and spread child sexual abuse material, and we need enforceable standards to ensure they do.
That means Aotearoa should move quickly to put clear, enforceable standards in place.
Bans and restrictions are tools, not the starting point. The starting point is a discussion on what protections New Zealanders have a right to expect.
Banning Grok alone is like repainting one room because you can see mould, without fixing the leak in the roof. The mould might disappear there for a while—but it’ll show up again somewhere else, because the underlying conditions haven’t changed.
Once consistent regulation is in place, if a platform still won’t implement effective safeguards and won’t comply with enforceable standards, governments do have to consider stronger measures, including restrictions.
If we want safety in an AI world, we can’t keep improvising platform by platform—we need a clear national vision of privacy, backed by law.
Outrage comes and goes; enforceable standards are what keep young people safe when attention moves on.