Governance Defensibility for Boards
Your AI decisions
cannot be reconstructed.
Your board liability is real.
Most organisations document governance intent. But when regulators ask whether a decision can be defended under scrutiny, documentation is irrelevant. What matters is evidence: the specific information available at the moment the system decided, the rule that was applied, and the human interventions that were possible.
This is not a technical problem. It is a fiduciary one.
Founded by Ivan Roche — former CTO & COO · 25+ years across insurance, aviation, financial services · 30+ expert network engagements in AI governance
The Defensibility Gap
Why standard governance
fails when stakes
are highest
You have policies, audit trails, risk registers. But can you prove what information was available to the human at the moment the algorithm decided?
Most organisations cannot. Policies describe intent; evidence proves defensibility.
When context collapses — a regulatory inquiry, a customer lawsuit, a media investigation — you discover that governance exists only in retrospect. DORA, the EU AI Act, and the FCA's SM&CR all require evidence that decisions were defensible at the moment they occurred. Post-hoc documentation does not satisfy this requirement.
Your board has a fiduciary duty to understand material risks. Signing off on "governance is handled by IT" is not a defence.
The scrutiny standard
Five questions your board
cannot answer today.
Select each to see your board's exposure.
Most organisations have no formal record of who assessed downside risk before deployment. When regulators or litigants ask who knew what and when — the answer is usually silence. That silence is itself a finding.
The Governance Classification
Every organisation has
a governance address.
Most don't know it.
The Governance Classification is Otopoetic's proprietary assessment framework. It gives every client organisation a precise, multi-dimensional position within the AI governance landscape — not a score, not a traffic light, but a specific address across five independent facets.
The methodology draws on faceted classification theory first formalised by the Indian mathematician and librarian S.R. Ranganathan in 1933. His core insight — that complex subjects cannot be reduced to a single fixed hierarchy, but must be expressed as a combination of independent dimensions — has never previously been applied to AI governance.
Ninety years later, it is exactly the right tool for the problem boards now face.
Locate your organisation
in five dimensions.
Select your position on each facet. Your governance address is generated live.
Your governance address
Complete all five facets to generate your address.
The Governance Classification gives every organisation a precise, actionable position across five independent governance dimensions. Request a confidential briefing to understand what your address means and what comes next.
“The library is a growing organism.” — S.R. Ranganathan, 1931. So is every AI system deployed without governance.
Methodological foundation
The Governance Classification applies faceted classification theory first developed by S.R. Ranganathan (1892–1972), mathematician and librarian, whose Colon Classification of 1933 demonstrated that complex knowledge cannot be reduced to a single hierarchy. Ranganathan's original works are in the public domain. The application of his methodology to AI governance accountability is original to Otopoetic.
The evidentiary standard
The Digital Alibi
Documentation describes intent.
Defensibility is temporal.
When a consequential AI-assisted decision is challenged — by a regulator, a litigant, or a shareholder — the question is not whether it was documented. It is whether the complete information picture that existed at the moment of that decision can be forensically reconstructed.
Not assembled retrospectively. Not inferred from system logs. Reconstructed — precisely as it existed at the point the decision was made.
This is a different evidentiary standard from the one most compliance programmes are built to meet. Boards that cannot satisfy it do not merely face a technical gap. They face a fiduciary one.
The forthcoming book by Ivan Roche sets out the full evidentiary framework for board-level AI governance defensibility — written for NEDs, Audit Chairs, General Counsel, and Company Secretaries who need to answer the question regulators will ask.
The case against the handoff
The Single Chain
Three providers. Three handoffs. One accountability gap.
Most AI governance programmes fail at the handoff. A strategy firm produces a governance report. A compliance team implements it. A technology vendor builds tooling against it. At every handoff, the original diagnostic becomes a document rather than a governing instrument. Institutional knowledge is lost. Accountability dilutes. The organisation ends up with three invoices and no one who owns the complete picture.
The Alibi Protocol holds the full arc under one framework, one advisor, and one governing instrument throughout.
"The single chain is the proposition. The governance address established in the diagnostic governs the infrastructure specification, which governs the verification, under one framework and one accountable advisor throughout. That is what no Big 4 firm can offer at board level."
One engagement. One address. One accountable advisor from diagnosis to verification.
Why this matters
AI is a structural liability,
not a feature upgrade
The data on board accountability, regulatory exposure, and governance gaps.
17% of boards take direct responsibility for AI governance oversight
McKinsey State of AI Survey, 2025
AI reshapes accountability across the entire organisation
It fundamentally changes who owns decisions, where risk is located, and what boards are liable for. Standard productivity metrics do not capture this.
83% of boards have no formal AI accountability structure. That is not a gap — it is a liability.
200% increase in boards citing AI risk as an oversight responsibility in 12 months
EY Center for Board Matters, 2025
Boards no longer ask whether a model is “elegant”
They ask who understood the downside, what was documented, and whether the decisions will survive forensic, regulatory, or financial scrutiny.
From 16% to 48% of Fortune 100 boards in one year. The standard is moving faster than most governance programmes.
75% of organisations have not fully implemented an AI governance programme
AuditBoard / IAPP Governance Survey, 2025
In high-stakes environments, self-assessment is a liability
Independent assessment is no longer optional. It is a requirement of fiduciary duty. Organisations that rely solely on internal review carry unquantified structural risk.
Three-quarters of organisations are self-assessing a risk they have not independently verified. That is not caution — it is exposure.
Sources: McKinsey & Company State of AI 2025 · EY Center for Board Matters Fortune 100 Analysis 2025 · AuditBoard 2025 · IAPP AI Governance Profession Report 2025
Governance requirements
by sector
Each sector faces distinct regulatory frameworks and board-level exposure. Your Governance Classification maps directly to the floor and ceiling positions that satisfy regulatory requirements and sector best practice. A short sketch of how an address is checked against a sector floor follows the cards below.
Financial Services
FCA SM&CR · DORA · PRA
Floor: A2·E3·C3·R2·M2
Ceiling: A3·E3·C3·R3·M3
Senior managers bear personal accountability for AI governance failures under SM&CR. Defensibility is not optional.
Insurance
Solvency II · PRA · Claims
Floor: A2·E3·C2·R2·M2
Ceiling: A3·E3·C3·R3·M3
Algorithmic underwriting and claims decisions require contemporaneous evidence of the information picture at each decision point.
Healthcare
GMC · CQC · Patient Safety
Floor: A3·E3·C3·R2·M2
Ceiling: A3·E3·C3·R3·M3
Clinical governance requires the highest accountability floor. AI-assisted diagnostic decisions carry direct patient safety liability.
Public Sector
AI Standards · ICSA · ICO
Floor: A2·E2·C3·R2·M2
Ceiling: A3·E3·C3·R3·M3
Public authority AI decisions affecting citizens require defensibility under judicial review and public sector AI standards.
Aviation
CAA · Safety-Critical · EASA
Floor: A3·E3·C3·R3·M2
Ceiling: A3·E3·C3·R3·M3
Safety-critical environments require the highest combined floor across all five dimensions. No governance gap is acceptable at altitude.
Governance Classification maps
to regulatory frameworks
Your governance address is not an internal metric — it maps precisely to the frameworks regulators, courts, and boards will apply when scrutiny arrives.
EU AI Act
High-risk system classification maps directly to A·E·C tier requirements. Compliance demonstration requires board-level defensibility evidence — not documentation of intent.
DORA
Governance, risk, resilience, and technology standards require contemporaneous decision evidence. The Digital Alibi standard satisfies third-pillar requirements.
FCA SM&CR
Senior managers' personal accountability requires proof of governance knowledge and oversight at decision time — not post-hoc documentation assembled under inquiry.
NIS2
Operator resilience and incident response require evidence of governance maturity and human oversight capability at system design and decision time.
Representative Domains of Work
Where we apply
forensic scrutiny
Digital Alibi Assessment — A structured forensic review that establishes whether your organisation can reconstruct the complete information picture behind every material AI-assisted decision — as it existed at the moment it was made. Not retrospectively. Not from memory. Precisely and defensibly.
Alibi Protocol — Full-scope engagement establishing evidentiary infrastructure, accountability architecture, and the temporal decision record that survives regulatory inquiry, litigation, and shareholder challenge. The governance requirement is that defensibility precedes scrutiny — not chases it.
Engagements are scoped to the organisation's specific exposure. Fees are discussed in the initial briefing.
Pre-Acquisition Scrutiny — Identifying hidden technical debt and architectural fragility prior to capital commitment. We examine what vendor presentations do not show and what due diligence checklists do not ask.
AI Asset Valuation — Assessing proprietary integrity and regulatory risk to separate marketing claims from defensible, audit-ready code.
Integration Risk — Evaluating operational resilience and security exposure in complex, high-availability environments where failure propagates.
Expert Witness Reporting — Formal forensic reporting and independent opinion for high-stakes technology litigation and governance breaches. Structured for legal admissibility and board-level comprehension.
Regulatory Support — The question regulators and litigants ask is not whether decisions were documented. It is whether accountability was named and evidence was present at the exact moment the decision was made. This is a different evidentiary standard from the one most compliance programmes are built to meet. We establish the defensible position before inquiry arrives — not in response to it.
Safety-Critical Governance — Operating model and data governance design for environments where technological failure carries immediate real-world consequences. Drawing on direct experience across aviation, public safety, insurance, and national infrastructure.
Resilience Audits — Forensic platform scrutiny to identify systemic single-points-of-failure and mitigate dangerous vendor dependencies before they become operational crises.
Control Framework Design — When an agentic system acts across multiple systems, failure is no longer an event — it is a condition that is systemic before it is visible. Governance structures must evolve ahead of autonomy, not chase it. We design the oversight architecture that keeps boards accountable and regulators satisfied before deployment, not after an incident.
Accountability Chain Mapping — The governance requirement is that accountability architecture precedes autonomy. When an agentic system causes harm, who is responsible? We establish clear, formally documented chains of accountability before deployment — because ownership that is not named in advance will be assigned by inquiry afterwards.
See Dr. Roche's analysis of the Mythos governance gap on The Roche Review: why the sandbox escape demonstrates the accountability cascade every agentic AI deployment must address before incident.
Our Foundation
Built on the study of systems that sustain themselves under pressure
Our name is derived from autopoiesis — the capacity of systems to sustain themselves under pressure. We apply this principle to the world's most complex regulated environments, ensuring that as your technology evolves, your organisational coherence remains intact.
The founder's background spans astrophysics, statistical modelling, and 25 years of executive leadership across sectors where the cost of systemic failure is measured not in downtime, but in lives, capital, and institutional trust.
This is not general advisory. It is structural analysis applied to the specific problem of AI accountability.
Full background at ivanroche.com →
"The question boards should be asking is not whether their AI decisions were documented. It is whether they can be forensically reconstructed — as they existed at the moment they were made, not as they are later explained."
We operate in environments where discretion is a prerequisite. To protect our clients' strategic interests, we do not publish names, sectors, or specific financial outcomes. We publish the patterns, principles, and architectural insights that prevent failure.
Request a Confidential Briefing
Every engagement begins with
a single, private conversation
We do not offer automated onboarding. Every briefing is conducted directly by Ivan Roche, without obligation, and in full confidence.