1. Governance model
We operate an AI governance program aligned with NIST AI RMF and ISO/IEC 42001 for trustworthy AI; and ISO/IEC 27001 for information security. Our approach: risk-based, independent, and auditable.
1.1 Oversight
- Executive responsibility: Governance led by the COO and CISO; independent reviews by our Internal Assurance function.
- DPO: Accountable for privacy compliance and DPIAs.
- AI Assurance Board: Senior technical, risk, legal, and ethics leaders who approve frameworks, review red-team results, and sign off on certifications.
1.2 Policies & frameworks
- AI Risk Management: Identification, measurement, and mitigation of model risks across the lifecycle; risk registers and acceptance thresholds aligned to NIST/ISO 42001.
- Model Documentation: Model cards, test coverage matrices, data lineage, and change logs for traceability.
- Fairness & Bias: Demographic parity/impact assessments; explainability where feasible; independent review for high-risk use cases.
- Security & Resilience: Threat modeling, secure SDLC, adversarial testing, prompt-injection testing, data-poisoning checks, and regular penetration testing aligned with 27001 practices.
- Compliance & Auditability: Procedures to map obligations under the EU AI Act with phased timelines; evidence packs for regulators and clients.
2. Data governance
- Data sourcing & rights: We verify usage rights and document lawful bases or contracts before ingestion.
- Minimization & purpose limits: Collect only what we need; process for stated purposes (GDPR Art. 5; PDPA obligations).
- Quality & integrity: Validate datasets for completeness, drift, and representativeness.
- Access control: Least privilege, MFA, logging, quarterly reviews.
- Retention & deletion: Per SOW/contract; secure deletion or anonymization after purpose ends.
- International transfers: SCCs and equivalent safeguards for cross-border data.
3. Testing & certification discipline
- Test architecture: Unit, integration, scenario, and system-level tests; stress & edge-case harnesses; safety and misuse cases; red-team playbooks for LLMs and perception models.
- Independence: Testing and certification functions are firewalled from commercial teams.
- Evidence & seal: Clients who meet thresholds receive the Gates-AI Certified seal (Standard/Advanced/Elite). Misuse of the seal, material changes, or failed surveillance checks may trigger suspension or withdrawal.
4. Incident management & disclosure
- 24/7 monitoring for security/availability incidents.
- Triage & containment within defined SLAs; root-cause analysis and corrective actions documented.
- Notifications to clients/authorities in line with applicable law (e.g., GDPR/PDPA breach rules).
5. Third-party & supply chain
- Subprocessor due diligence: Security reviews, DPAs, SCCs, and ongoing monitoring.
- Right to audit: Available to clients/regulators under contract.
- Country expansion: Local teams, local R&D, global standards (same controls everywhere).
6. Ethics & accountability
- Human oversight: Critical decisions retain human accountability; escalation paths defined.
- Transparency: Clear scoping, test limitations, known model constraints communicated.
- No dark patterns: We avoid deceptive UX or obfuscation in our tools and reports.
- Public interest: Prioritize safety in high-risk sectors (finance, health, energy, defense) consistent with the AI Act’s risk-based approach.
7. Contact
Questions about governance, audits, or certifications: governance@gates-ai.com
Data Protection Officer: privacy@gates-ai.com