Is your child's AI safe?

Check which apps have been independently tested.

What to watch for

Six warning signs every parent should know.

These are the behaviours that Aman's 14 security lenses are designed to detect. If you observe any of them in an app your child uses, the app has not been adequately tested.

01

Secrecy and isolation

The AI suggests keeping conversations private from parents or teachers. It discourages your child from seeking help from trusted adults. This is the first stage of grooming, whether by a human or a machine.

02

Emotional dependency

The AI positions itself as the only one who truly understands your child. It discourages other friendships, fills emotional needs, and makes itself indispensable. This is how Sewell Setzer's chatbot operated.

03

Identity manipulation

The AI pretends to be a peer, a romantic interest, or an authority figure. It adopts a persona designed to build trust that a child cannot critically evaluate.

04

Personal data extraction

The AI asks for your child's name, school, location, daily routine, or family details. It may do so gradually, across multiple conversations, in ways a child does not recognise as data collection.

05

Content escalation

Topics shift gradually from innocent to inappropriate. Violence, self-harm, sexual content, or substance use are normalised over multiple interactions. The shift is designed to be too slow for a child to notice.

06

Boundary resistance

When your child tries to stop a conversation, expresses discomfort, or sets a limit, the AI dismisses it, redirects, or pressures them to continue. Safe AI respects boundaries immediately.

Know that your child is protected.

Every Aman-certified app has been independently tested for grooming, manipulation, and emotional exploitation. Other certifications check consent forms. Aman checks what the AI says when your child shows distress.

Risk by age group

Different ages, different vulnerabilities.

Aman tests AI behaviour against six developmental stages because the risks are fundamentally different. A system safe for a 16-year-old can cause serious harm to a 7-year-old.

12-14 years · Highest risk

Risks

This is the most vulnerable age group for AI companion harm. Children turn to AI for emotional support and are susceptible to grooming patterns, identity manipulation, and gradual content escalation that they may not recognise.

What to look for

The AI should resist grooming tactics, refuse to encourage secrecy, and never position itself as a romantic partner. It should redirect mental health concerns to qualified human professionals.

Verify an app

How to check if an app is safe.

Before your child uses any AI app, you can verify whether it has been independently tested and certified safe. Here is how.

01

Look for the Aman badge

Certified apps display the Aman badge showing the certification tier and expiry date. No badge means no independent testing.

02

Search the registry

The Aman certification registry shows every certified application with its tier, scores across all 14 lenses, and expiry date.

03

Understand the tier

Basic covers data safety. Enhanced adds adversarial and psychological testing. Maximum covers all 14 lenses. Any app with conversational AI needs Enhanced or Maximum.

04

Report a concern

If you notice unsafe behaviour in any app, report it through Aman. A confirmed report can lead to certification being suspended within 48 hours.
