Governance Defensibility for Boards

Your AI decisions
cannot be reconstructed.
Your board liability is real.

Most organisations document governance intent. But when regulators ask whether a decision can be defended under scrutiny, documentation is irrelevant. What matters is evidence: the specific information available at the moment the system decided, the rule that was applied, and the human interventions that were possible.

This is not a technical problem. It is a fiduciary one.

Founded by Ivan Roche — former CTO & COO  ·  25+ years across insurance, aviation, financial services  ·  30+ expert network engagements in AI governance

Latest: The Sandbox is Not a Gate — What Mythos Reveals About the Governance Gap Boards Cannot See · 13 April 2026 · The Roche Review →

Why standard governance
fails when stakes
are highest

You have policies, audit trails, risk registers. But can you prove what information was available to the human at the moment the algorithm decided?

Most organisations cannot. Policies describe intent; evidence proves defensibility.

When context collapses — a regulatory inquiry, a customer lawsuit, a media investigation — you discover that governance exists only in retrospect. DORA, the EU AI Act, and the FCA's SM&CR all require evidence that decisions were defensible at the moment they occurred. Post-hoc documentation does not satisfy this requirement.

Your board has a fiduciary duty to understand material risks. Signing off on "governance is handled by IT" is not a defence.

The scrutiny standard

Five questions your board
cannot answer today.

Select each to see your board's exposure.

01
Who understood the downside?
Before your last AI deployment, who formally identified and documented the failure modes? If that person left tomorrow, would the record survive?
Accountability gap
02
What was actually documented?
Not what was discussed in a meeting. What exists in writing, with a date, an owner, and a decision trail that would satisfy a regulator or a judge?
Audit trail risk
03
Who owns this system when it fails?
Not who built it. Not who approved the budget. Who is named — formally, in writing — as accountable for the consequences if this system makes the wrong decision?
Ownership void
04
Will your decisions survive scrutiny?
Forensic review. Regulatory inquiry. Shareholder challenge. If any of these arrived tomorrow, how long would it take to produce a complete, defensible account of every consequential AI decision made in the last 12 months?
Existential exposure
05
Can you reconstruct this decision as it was made?
Not as it was later documented. Not as it was subsequently explained. Can you forensically reconstruct the complete information picture behind this decision — as it existed at the exact moment it was made?
The alibi question

Most organisations have no formal record of who assessed downside risk before deployment. When regulators or litigants ask who knew what and when — the answer is usually silence. That silence is itself a finding.

Request a confidential briefing: 45 minutes · no obligation · fully confidential

Every organisation has
a governance address.
Most don't know it.

The Governance Classification is Otopoetic's proprietary assessment framework. It gives every client organisation a precise, multi-dimensional position within the AI governance landscape — not a score, not a traffic light, but a specific address across five independent facets.

The methodology draws on faceted classification theory first formalised by the Indian mathematician and librarian S.R. Ranganathan in 1933. His core insight — that complex subjects cannot be reduced to a single fixed hierarchy, but must be expressed as a combination of independent dimensions — has never previously been applied to AI governance.

Ninety years later, it is exactly the right tool for the problem boards now face.

The Governance Classification

Locate your organisation
in five dimensions.

Select your position on each facet. Your governance address is generated live.

Your governance address

Complete all five facets to generate your address.

The Governance Classification gives every organisation a precise, actionable position across five independent governance dimensions. Request a confidential briefing to understand what your address means and what comes next.

Request a confidential briefing: 45 minutes · no obligation · fully confidential

“A library is a growing organism.” — S.R. Ranganathan, 1931.  So is every AI system deployed without governance.

Methodological foundation

The Governance Classification applies faceted classification theory first developed by S.R. Ranganathan (1892–1972), mathematician and librarian, whose Colon Classification of 1933 demonstrated that complex knowledge cannot be reduced to a single hierarchy. Ranganathan's original works are in the public domain. The application of his methodology to AI governance accountability is original to Otopoetic.

The evidentiary standard

Forthcoming book — for NEDs, Audit Chairs & General Counsel

Documentation describes intent.
Defensibility is temporal.

When a consequential AI-assisted decision is challenged — by a regulator, a litigant, or a shareholder — the question is not whether it was documented. It is whether the complete information picture that existed at the moment of that decision can be forensically reconstructed.

Not assembled retrospectively. Not inferred from system logs. Reconstructed — precisely as it existed at the point the decision was made.

This is a different evidentiary standard than most compliance programmes are built to meet. Boards that cannot satisfy it do not merely face a technical gap. They face a fiduciary one.

The Digital Alibi — the book

The forthcoming book by Ivan Roche sets out the full evidentiary framework for board-level AI governance defensibility — written for NEDs, Audit Chairs, General Counsel, and Company Secretaries who need to answer the question regulators will ask.

T₀ — Decision moment
The complete information picture: data inputs, model state, human context, accountability chain, risk assessment — all as they existed at this precise moment.
Without Digital Alibi
Reconstructed from memory. Assembled from logs. Inconsistent with the original record. Fails forensic and regulatory review.
With Digital Alibi
Exact reconstruction. Timestamped. Immutable. Accountable to named individuals. Survives inquiry.
The standard: Compliance passes the audit. Control survives the incident.
See the Digital Alibi Assessment →
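The properties described above (timestamped, immutable, accountable to named individuals) are the properties an append-only, hash-chained record provides. The sketch below is a hypothetical illustration of that idea in Python, not the Digital Alibi specification; the field names and the chain design are assumptions chosen for illustration.

```python
# Illustrative sketch only: capturing a decision's information picture at the
# moment it is made, in a tamper-evident, append-only chain. Field names and
# the hash-chain design are assumptions, not the Digital Alibi specification.
import hashlib
import json
from datetime import datetime, timezone


class DecisionLedger:
    def __init__(self):
        self.records = []

    def record(self, decision_id, inputs, rule_applied, accountable_owner):
        """Append one decision record, chained to the previous record's hash."""
        prev_hash = self.records[-1]["hash"] if self.records else "genesis"
        body = {
            "decision_id": decision_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,                        # data available at T0
            "rule_applied": rule_applied,            # the rule actually applied
            "accountable_owner": accountable_owner,  # a named individual
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; any later edit to any record breaks the chain."""
        prev = "genesis"
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

Any retrospective edit to a recorded decision breaks the chain, so the record either reconstructs exactly as it was made or visibly fails verification.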

The case against the handoff

Three providers. Three handoffs. One accountability gap.

Most AI governance programmes fail at the handoff. A strategy firm produces a governance report. A compliance team implements it. A technology vendor builds tooling against it. At every handoff, the original diagnostic becomes a document rather than a governing instrument. Institutional knowledge is lost. Accountability dilutes. The organisation ends up with three invoices and no one who owns the complete picture.

The Alibi Protocol holds the full arc under one framework, one advisor, and one governing instrument throughout.

"The single chain is the proposition. The governance address established in the diagnostic governs the infrastructure specification, which governs the verification, under one framework and one accountable advisor throughout. That is what no Big 4 firm can offer at board level."

Request a Confidential Briefing: One framework · One advisor · One governing instrument throughout

One engagement. One address. One accountable advisor from diagnosis to verification.

AI is a structural liability,
not a feature upgrade

The data on board accountability, regulatory exposure, and governance gaps.

17%

of boards take direct responsibility
for AI governance oversight

McKinsey State of AI Survey, 2025

Beyond productivity

AI reshapes accountability across the entire organisation

It fundamentally changes who owns decisions, where risk is located, and what boards are liable for. Standard productivity metrics do not capture this.

83% of boards take no direct responsibility for AI governance oversight. That is not a gap — it is a liability.

3×

increase in boards citing AI risk
as an oversight responsibility in 12 months

EY Center for Board Matters, 2025

The scrutiny standard

Boards no longer ask whether a model is “elegant”

They ask who understood the downside, what was documented, and whether the decisions will survive forensic, regulatory, or financial scrutiny.

From 16% to 48% of Fortune 100 boards citing AI as an oversight responsibility in a single year. The standard is moving faster than most governance programmes.

75%

of organisations have not fully implemented
an AI governance programme

AuditBoard / IAPP Governance Survey, 2025

Mandatory independence

In high-stakes environments, self-assessment is a liability

Independent assessment is no longer optional. It is a fiduciary requirement. Organisations that rely solely on internal review carry unquantified structural risk.

Three-quarters of organisations are self-assessing a risk they have not independently verified. That is not caution — it is exposure.

Sources: McKinsey & Company State of AI 2025  ·  EY Center for Board Matters Fortune 100 Analysis 2025  ·  AuditBoard 2025  ·  IAPP AI Governance Profession Report 2025

Governance requirements
by sector

Each sector faces distinct regulatory frameworks and board-level exposure. Your Governance Classification maps directly to the floor and ceiling positions that satisfy regulatory requirements and sector best practice.

Financial Services

FCA SM&CR · DORA · PRA

Floor: A2·E3·C3·R2·M2

Ceiling: A3·E3·C3·R3·M3

Senior managers bear personal accountability for AI governance failures under SM&CR. Defensibility is not optional.

Insurance

Solvency II · PRA · Claims

Floor: A2·E3·C2·R2·M2

Ceiling: A3·E3·C3·R3·M3

Algorithmic underwriting and claims decisions require contemporaneous evidence of the information picture at each decision point.

Healthcare

GMC · CQC · Patient Safety

Floor: A3·E3·C3·R2·M2

Ceiling: A3·E3·C3·R3·M3

Clinical governance requires the highest accountability floor. AI-assisted diagnostic decisions carry direct patient safety liability.

Public Sector

AI Standards · ICSA · ICO

Floor: A2·E2·C3·R2·M2

Ceiling: A3·E3·C3·R3·M3

Public authority AI decisions affecting citizens require defensibility under judicial review and public sector AI standards.

Aviation

CAA · Safety-Critical · EASA

Floor: A3·E3·C3·R3·M2

Ceiling: A3·E3·C3·R3·M3

Safety-critical environments require the highest combined floor across all five dimensions. No governance gap is acceptable at altitude.
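The five-facet address notation used above lends itself to simple machine checks. As a minimal sketch (the facet labels A, E, C, R, M and the floor/ceiling notation are taken from the addresses shown on this page; the parsing and comparison logic is illustrative, not Otopoetic's methodology), an organisation's address can be tested against a sector floor facet by facet:

```python
# Illustrative only: the facet labels and "A2·E3·C3·R2·M2" notation follow the
# sector floors above; the comparison logic is an assumption for illustration.
FACETS = ("A", "E", "C", "R", "M")


def parse_address(address: str) -> dict:
    """Parse an address like 'A2·E3·C3·R2·M2' into {facet: level}."""
    return {part[0]: int(part[1:]) for part in address.split("·")}


def meets_floor(org: str, floor: str) -> bool:
    """True if the organisation meets or exceeds the floor on every facet."""
    org_levels, floor_levels = parse_address(org), parse_address(floor)
    return all(org_levels[f] >= floor_levels[f] for f in FACETS)


def gaps(org: str, floor: str) -> list:
    """Facets where the organisation falls below the sector floor."""
    org_levels, floor_levels = parse_address(org), parse_address(floor)
    return [f for f in FACETS if org_levels[f] < floor_levels[f]]


# Example: an organisation at A2·E2·C3·R2·M2, measured against the
# Financial Services floor of A2·E3·C3·R2·M2, falls short on the E facet.
print(meets_floor("A2·E2·C3·R2·M2", "A2·E3·C3·R2·M2"))  # False
print(gaps("A2·E2·C3·R2·M2", "A2·E3·C3·R2·M2"))         # ['E']
```

Because the facets are independent, the gap report names exactly which dimensions fall below the sector floor rather than collapsing everything into a single score.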

Governance Classification maps
to regulatory frameworks

Your governance address is not an internal metric — it maps precisely to the frameworks regulators, courts, and boards will apply when scrutiny arrives.

EU AI Act

High-risk system classification maps directly to A·E·C tier requirements. Compliance demonstration requires board-level defensibility evidence — not documentation of intent.

DORA

Governance, risk, resilience, and technology standards require contemporaneous decision evidence. The Digital Alibi standard satisfies third-pillar requirements.

FCA SM&CR

Senior managers' personal accountability requires proof of governance knowledge and oversight at decision time — not post-hoc documentation assembled under inquiry.

NIS2

Operator resilience and incident response require evidence of governance maturity and human oversight capability at system design and decision time.

Where we apply
forensic scrutiny

20–24 · Investment

Investment & Transactional Support

Pre-Acquisition Scrutiny — Identifying hidden technical debt and architectural fragility prior to capital commitment. We examine what vendor presentations do not show and what due diligence checklists do not ask.

AI Asset Valuation — Assessing proprietary integrity and regulatory risk to separate marketing claims from defensible, audit-ready code.

Integration Risk — Evaluating operational resilience and security exposure in complex, high-availability environments where failure propagates.

Private Equity · Institutional Investors · M&A Diligence · Technical Debt
[Panel: the scrutiny threshold]
Vendor presentation: AI capability certified ✓ · Architecture verified ✓ · Regulatory compliant ✓ · Integration ready ✓
Below the scrutiny threshold, what is found: technical debt — undisclosed, accumulating · AI claims — marketing, not architecture · regulatory exposure — unidentified · integration fragility — single points of failure · ownership gaps — undocumented accountability.
Standard due diligence does not reach this layer.
26 · Forensic

Forensic & Legal Testimony

Expert Witness Reporting — Formal forensic reporting and independent opinion for high-stakes technology litigation and governance breaches. Structured for legal admissibility and board-level comprehension.

Regulatory Support — The question regulators and litigants ask is not whether decisions were documented. It is whether accountability was named and evidence was present at the exact moment the decision was made. This is a different evidentiary standard than most compliance programmes are built to meet. We establish the defensible position before inquiry arrives — not in response to it.

Expert Witness · EU AI Act · FCA · Regulatory Compliance
[Panel: regulatory exposure]
EU AI Act — fines up to €35M or 7% of global revenue · FCA — financial services · ICO — data & privacy · Sector regulators — aviation / insurance
Enforcement action: litigation · inquiry · sanctions.
Independent opinion: a defensible position before inquiry arrives.
21–22 · Governance

Critical Infrastructure Oversight

Safety-Critical Governance — Operating model and data governance design for environments where technological failure carries immediate real-world consequences. Drawing on direct experience across aviation, public safety, insurance, and national infrastructure.

Resilience Audits — Forensic platform scrutiny to identify systemic single-points-of-failure and mitigate dangerous vendor dependencies before they become operational crises.

Aviation · Insurance · Public Sector · Vendor Risk
[Panel: failure propagation]
Aviation ops data → vendor AI model → decision engine (single point of failure) → public safety: failure propagates into operational failure with real-world consequences.
The resilience audit identifies: single points of failure · vendor dependencies · control gaps — before operational deployment, not after a crisis.
Independent governance audit — not self-assessment. Structured for regulatory admissibility.
23 · Agentic AI

Agentic AI & Autonomous Systems

Control Framework Design — When an agentic system operates across multiple systems, failure is no longer an event — it is a condition that is systemic before it is visible. Governance structures must evolve ahead of autonomy, not chase it. We design the oversight architecture that keeps boards accountable and regulators satisfied before deployment, not after incident.

Accountability Chain Mapping — The governance requirement is that accountability architecture precedes autonomy. When an agentic system causes harm, who is responsible? We establish clear, formally documented chains of accountability before deployment — because ownership that is not named in advance will be assigned by inquiry afterwards.

See Dr. Roche's analysis of the Mythos governance gap on The Roche Review: why the sandbox escape demonstrates the accountability cascade every agentic AI deployment must address before incident.

Agentic AI · Autonomous Systems · Control Frameworks · Board Accountability
[Panel: the accountability cascade]
Agentic AI makes an autonomous decision → harm occurs → who is responsible?
Without governance: ownership cascades up until it reaches the board.
With Otopoetic governance: the accountability chain is documented in advance.
Accountability established before deployment, not after failure.

Built on the study of systems that sustain themselves under pressure

Our name is derived from autopoiesis — the capacity of systems to sustain themselves under pressure. We apply this principle to the world's most complex regulated environments, ensuring that as your technology evolves, your organisational coherence remains intact.

The founder's background spans astrophysics, statistical modelling, and 25 years of executive leadership across sectors where the cost of systemic failure is measured not in downtime, but in lives, capital, and institutional trust.

This is not general advisory. It is structural analysis applied to the specific problem of AI accountability.

Full background at ivanroche.com →

"The question boards should be asking is not whether their AI decisions were documented. It is whether they can be forensically reconstructed — as they existed at the moment they were made, not as they are later explained."
Confidentiality

We operate in environments where discretion is a prerequisite. To protect our clients' strategic interests, we do not publish names, sectors, or specific financial outcomes. We publish the patterns, principles, and architectural insights that prevent failure.

Every engagement begins with
a single, private conversation

We do not offer automated onboarding. Every briefing is conducted directly by Ivan Roche, without obligation, and in full confidence.

Duration: 45 minutes
Obligation: None
Confidentiality: Absolute
Response: Within 24 hours

Your details are used solely to arrange a briefing and are not shared with any third party.