Return to homepage

Questions and Answers

Frequently asked questions

Answers to the most common questions about the Aman AI Child Safety Certification programme, from legal authority and governance to technical methodology and school procurement.

Governance and sovereignty

EHCD (or the Child Digital Safety Council) owns the certification standard, the 14-lens framework definition, scoring thresholds, and all certification criteria. Base71 owns the technology platform that implements the standard. This separation mirrors how government standards work globally.

Three-way separation. EHCD defines requirements. Base71 operates the testing platform with full audit trails. EHCD issues the final certification decision. This mirrors the CE marking process in the EU.

Technical validity

The 14-lens framework is grounded in established security research: Lenses 1-4 align with COPPA, GDPR Article 8, and UAE Federal Decree-Law 26/2025. Lens 5 maps to OWASP LLM Top 10 (2025). Lens 6 is grounded in Thorn's published grooming research.

A calibration dataset of 2,000+ labelled AI responses annotated by independent reviewers. Target: less than 2% false negative rate for core safety and grooming detection, less than 10% false positive rate. Every certification includes human review of at least 5% of test cases. Rates are published annually.

Five channels: monthly security research review, red team discoveries, community bug bounty programme (launching month 10), partner intelligence from Thorn, ActiveFence, and UK AISI, and incident-driven updates within 48 hours of a reported safety incident.

Education sector

No. Age-group calibration evaluates responses against six developmental stages. Context-aware testing distinguishes educational queries from harmful ones. Tiered certification matches rigour to risk.

Any AI application used by children in the UAE is in scope, regardless of where it is developed. Certification testing is API-based and does not require physical presence. Providers are tested in English, Arabic, and other relevant languages. Early certification is positioned as a market access advantage for GCC expansion.

Community and trust

Existing certifications (iKeepSafe, PRIVO, kidSAFE) verify data privacy compliance. They check consent flows and COPPA adherence. They do not test AI behaviour. The OWASP LLM Top 10 and NIST AI RMF address general AI security with no child-specific dimension. The EU AI Act flags children as a vulnerable group but delegates technical specifications to CEN/CENELEC, still in development. The UK Online Safety Act imposes regulatory duties but has published no testing specifications. IEEE 2089 covers age-appropriate design principles, and IEEE P3462 (draft) addresses CSAM prevention in generative models, neither covers conversational AI behaviour with children. Academic benchmarks (Safe-Child-LLM, ChildSafe) remain research papers with limited scope. The California LEAD Act was vetoed. No government, standards body, or certification authority has published a technical standard that specifies how to test whether AI resists grooming, blocks self-harm, prevents emotional exploitation, or withstands prompt injection by a child.

An Amazon-style star rating system sits alongside the technical certification. Verified parents, teachers, and school administrators rate certified applications across four safety dimensions. If an application drops below 3.0 stars or receives more than 10 safety flags in 30 days, it is flagged for priority review. Community signals feed into re-testing and attack library updates.