Responsible AI: Governance, Risk, and Oversight

Board Brief · 2026

Why This Matters (Now)

Standards Spine: NIST AI RMF ↔ ISO 42001

NIST AI RMF cycle: GOVERN → MAP → MEASURE → MANAGE (GOVERN is cross-cutting)
ISO 42001 alignment: Context/Leadership/Planning/Support (Clauses 4–7) → Operation (Clause 8) → Performance Evaluation/Improvement (Clauses 9–10)
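The alignment above can be captured as a lookup, useful when tagging controls or audit evidence to both frameworks. This is a hypothetical crosswalk sketch: the clause groupings are illustrative assumptions drawn from the alignment line, not an official NIST/ISO mapping.

```python
# Hypothetical crosswalk from NIST AI RMF functions to ISO/IEC 42001
# clause groups; groupings are illustrative, not an official mapping.
RMF_TO_ISO42001 = {
    "GOVERN": "Context/Leadership/Planning/Support (Clauses 4-7)",
    "MAP": "Operation (Clause 8)",
    "MEASURE": "Operation (Clause 8)",
    "MANAGE": "Performance Evaluation/Improvement (Clauses 9-10)",
}

def iso_alignment(rmf_function: str) -> str:
    """Look up the ISO/IEC 42001 clause group aligned with an RMF function."""
    return RMF_TO_ISO42001[rmf_function.upper()]
```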

Who Uses These, and When?

Policy Stack (Board Approval)

Procedures: Lifecycle Controls

Procedures: Safety & Mis/Disinformation

Training & Literacy

Leaders

30-minute briefing: obligations, risk appetite, oversight asks.

Practitioners

2-hour workshop: intake, testing standards, documentation.

All Staff

Quarterly micro-learnings and refreshers.

Sources: Microsoft_AI_Literacy_Starting_Guide_2025.pdf; Yale_AI_Literacy_Framework_2025.pdf; AI_Literacy_Framework_ParadoxLearning_2024.pdf; AI_Literacy_Framework_DigitalPromise_2024.pdf.

Oversight Model

Metrics Snapshot

Metric | Target (example)
% AI systems risk-assessed | 100% before launch
Validation coverage (robustness/fairness/security) | 100% of in-scope systems
Training completion | ≥ 95% of required audiences
Third-party AI reviews completed | All new vendors/models pre-contract
Incidents/exceptions | Tracked with remediation cycle time
Use cases in monitoring | Top 5 live use cases monitored
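The threshold-style metrics above can be tracked mechanically. A minimal sketch, assuming observed values are reported as fractions; the field names and sample values are illustrative, not from the brief:

```python
# Minimal sketch for checking reported metrics against their targets.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float   # observed value as a fraction (0.0-1.0), illustrative
    target: float  # required threshold as a fraction

    def on_track(self) -> bool:
        return self.value >= self.target

# Sample values are made up to show the gap report, not real data.
metrics = [
    Metric("AI systems risk-assessed before launch", 1.00, 1.00),
    Metric("Validation coverage (in-scope systems)", 0.97, 1.00),
    Metric("Training completion (required audiences)", 0.96, 0.95),
]

# Metrics currently missing their target, for escalation to oversight.
gaps = [m.name for m in metrics if not m.on_track()]
```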

Roadmap (0–3–6 Months)

Spotlight: High-Impact AI Use Cases

All use cases follow intake, testing, deployment gates, and monitoring per the RMF/ISO controls.
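The gate sequence above (intake → testing → deployment → monitoring) implies that a use case may not skip ahead. A minimal sketch of that sequencing rule; the gate names mirror the brief, while the enforcement logic is an illustrative assumption:

```python
# Hypothetical gate sequencing for the lifecycle controls named above.
LIFECYCLE_GATES = ["intake", "testing", "deployment", "monitoring"]

def may_enter(gate: str, completed: set[str]) -> bool:
    """A use case may enter a gate only once every earlier gate is done."""
    idx = LIFECYCLE_GATES.index(gate)
    return all(g in completed for g in LIFECYCLE_GATES[:idx])
```

For example, `may_enter("deployment", {"intake", "testing"})` is True, while `may_enter("deployment", {"intake"})` is False.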

Decisions / Board Asks