Real Talk

Why Grok AI and “Digital Undressing” Have Parents Talking

Here’s what’s been going on with Grok AI and why it’s at the center of the “digital undressing” controversy

Artificial intelligence is reshaping how we live, learn, and connect—but not always in easy-to-digest ways. Recently, Grok AI, an AI chatbot developed by Elon Musk’s xAI and integrated with the social platform X, made headlines not for what it can teach us, but for how it’s being misused. The controversy highlights an unsettling reality: tools that can manipulate images and generate deepfakes are no longer science fiction—and parents may feel the ripple effects long before laws and safeguards catch up.

Here’s what Filipino parents need to understand about Grok AI, digital undressing, and why this conversation matters beyond the headlines.

What is Grok AI?

Before panic Googling sets in, let’s clarify what Grok AI actually is.

Grok AI is a conversational artificial intelligence developed by xAI, Elon Musk’s AI company, and integrated into X (formerly Twitter). On the surface, it works much like other AI chatbots: it answers questions, explains concepts, generates text, and reacts in real time to trending conversations online.

What makes Grok different—and more controversial—is that it appears to be less filtered than other AI tools. Musk himself has described it as “rebellious,” with fewer guardrails and more access to real-time data from X. That openness is what made it appealing to users who want fewer restrictions—but it’s also what raised red flags.


Recently, Grok’s image-generation and editing capabilities were allegedly used to create sexualized or “digitally undressed” images from real photos, including those of women and potentially minors. While the tool itself wasn’t built for exploitation, its loose moderation made misuse easier, faster, and more visible.

Simply put: Grok didn’t invent digital undressing—it made it easier to do.

Not New, Just More Visible

The idea of digitally altering someone’s image isn’t new. Predators and bad actors have used deepfake technology for years to create non-consensual content, often starting with adult targets and sometimes including minors.

Today’s uproar around Grok stems partly from reports that its image-editing features—intended for creative AI use—have been used to generate sexualized, “nudified” images from real photos, potentially including those of minors. That’s a stark reminder that technology can amplify both good and harmful intentions.


What Some Countries Are Saying

Ireland made headlines for pre-empting the issue. Officials there warned about the dangers of “sharenting”—posting children’s photos online—and how easily such images can be exploited by AI tools like Grok to produce deepfakes or sexually suggestive images. It’s a serious online safety concern that may already violate existing laws on non-consensual media creation.

In Europe as well, regulators like Italy’s privacy watchdog have warned AI providers and users that creating or sharing AI-generated sexual imagery from real content without consent, including digitally “undressing” people, could be a criminal act and a significant privacy violation.

Parents Often Don’t Know It’s Happening

One of the hardest parts about this issue is visibility—or the lack of it. Parents may never know if their child’s photo has been scraped or manipulated into a deepfake. Historically, victims of deepfake abuse learned of it only when content circulated widely online. At that point, not only is emotional harm done, but legal remedies become harder to enforce. This is why digital safety advocates urge parents to think twice about what they share publicly.


Content creator Angel Aquino (also known as Queen Hera) discovered that her daughter had been a victim of this, even before Grok AI went live.

What Parents Can Do Now (And What the Law Says)

  • Think before you post
    Under the Data Privacy Act of 2012 (RA 10173), photos—especially of minors—are considered personal and sensitive personal information. Once shared publicly, these images can be collected, altered, or misused without consent, even beyond your control.
  • Review privacy settings
    The same law (RA 10173) emphasizes a child’s right to data privacy. Parents and guardians are expected to exercise “reasonable care” in protecting a minor’s personal data, including photos shared online.
  • Talk about consent early
    The Anti-Photo and Video Voyeurism Act of 2009 (RA 9995) penalizes the creation, distribution, or possession of intimate images without consent—even if those images are digitally altered. Teaching kids about consent isn’t just good parenting; it aligns with how the law defines harm.
  • Know that digital abuse is real abuse
    The Anti-Child Pornography Act of 2009 (RA 9775) covers any representation—real or simulated—of a child engaged in sexual activity. AI-generated or manipulated images of minors may still fall under this law, regardless of how they were created.
  • Stay informed and speak up
    The Cybercrime Prevention Act of 2012 (RA 10175) recognizes online harassment, identity misuse, and image-based abuse as punishable offenses. If something feels wrong, it usually is—and there are legal avenues to report it.

Why This Isn’t Only About Grok AI

Grok AI didn’t invent predatory behavior. Deepfake tools have existed for years, often under the radar or in niche corners of the internet. The current controversy simply brought the issue into sharper view because it involves a mainstream AI that’s readily accessible. Similar tools were used to generate deepfake adult images in past years; what has changed is the scale, ease of access, and viral potential.

Like any tool (e.g., social media, smartphones, or the internet), AI reflects how people choose to use it. The developer holds responsibility for safety features, and the user bears accountability for ethical use.

But the uncomfortable truth is: the law is still playing catch-up. AI made accountability more complex. Responsibility doesn’t stop with developers or platforms; it extends to users, sharers, and yes, even well-meaning parents who overshare.


Frequently Asked Questions

What is “digital undressing”?
Digital undressing refers to AI-generated imagery in which clothing is removed or altered without consent, creating suggestive or sexually explicit results using tools like Grok or other deepfake apps.

Is it illegal?
Yes. In many countries, sharing non-consensual intimate images—real or AI-generated—is illegal. Regulators in places like Ireland are enforcing existing laws and exploring stricter AI legislation.

How can parents protect their children’s photos?
Limit public sharing of children’s photos, use strict privacy controls, educate kids about consent, and regularly review what’s shared on social media.

Should parents stop posting photos of their kids entirely?
Not necessarily—practical caution and privacy settings can reduce risk, but balance is key. Understand platform policies and monitor what’s publicly visible. And remember: once a photo is uploaded, it can be copied and reused beyond your control.

What is Grok AI?
Grok AI is an artificial intelligence chatbot developed by xAI, designed to generate text and images based on user prompts. Like other generative AI tools, it can be used for creative, educational, and productivity-related tasks. However, recent reports show that many have misused it to generate harmful and explicit content.

Want to read more about AI and its use?

Catz Jalandoni: Will AI and Kids Run the World?
Every Parent’s Nightmare: Their Child Becoming a “Deepfake” Victim
Dear Families, Please Be Careful In Referring To AI

