High-Stakes AI Risk Mitigation

When failure is not an option,
clarity matters more than enthusiasm

We provide Private Equity, Institutional Investors, and Global Boards with the statistical rigour and architectural scrutiny required to de-risk high-consequence technology investments.

Founded by Ivan Roche — former CTO, COO and Interim CDO  ·  25+ years across insurance, aviation, financial services, telecoms and public sector

Structural integrity
for organisations where
failure is not an option

We bridge the critical gap between innovation enthusiasm and investment reality.

When technology decisions carry billion-dollar reputational and financial risks, we provide the forensic clarity that standard audits miss. Our work ensures that AI and systems architecture strengthen institutional resilience rather than introducing hidden fragilities.

In high-stakes environments, clarity matters more than enthusiasm.

The scrutiny standard

Four questions your board
cannot currently answer.

Select each to see your exposure.

01
Who understood the downside?
Before your last AI deployment, who formally identified and documented the failure modes? If that person left tomorrow, would the record survive?
Accountability gap
02
What was actually documented?
Not what was discussed in a meeting. What exists in writing, with a date, an owner, and a decision trail that would satisfy a regulator or a judge?
Audit trail risk
03
Who owns this system when it fails?
Not who built it. Not who approved the budget. Who is named — formally, in writing — as accountable for the consequences if this system makes the wrong decision?
Ownership void
04
Will your decisions survive scrutiny?
Forensic review. Regulatory inquiry. Shareholder challenge. If any of these arrived tomorrow, how long would it take to produce a complete, defensible account of every consequential AI decision made in the last 12 months?
Existential exposure

Most organisations have no formal record of who assessed downside risk before deployment. When regulators or litigants ask who knew what and when — the answer is usually silence. That silence is itself a finding.

Request a confidential briefing
45 minutes · no obligation · fully confidential

Every organisation has
a governance address.
Most don't know it.

The Governance Classification is Otopoetic's proprietary assessment framework. It gives every client organisation a precise, multi-dimensional position within the AI governance landscape — not a score, not a traffic light, but a specific address across five independent facets.

The methodology draws on faceted classification theory first formalised by the Indian mathematician and librarian S.R. Ranganathan in 1933. His core insight — that complex subjects cannot be reduced to a single fixed hierarchy, but must be expressed as a combination of independent dimensions — has never previously been applied to AI governance.
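The combinatorial idea can be sketched in a few lines of code. Everything below (the facet names, the option codes, the colon separator) is a hypothetical illustration of the principle of combining independent dimensions into one address, not Otopoetic's actual classification:

```python
# Hypothetical facets -- illustrative only, not Otopoetic's framework.
# Each facet is an independent dimension with its own options.
FACETS = {
    "Accountability": ["A1 Unassigned", "A2 Delegated", "A3 Board-owned"],
    "Documentation":  ["D1 Verbal", "D2 Partial", "D3 Audit-ready"],
    "Oversight":      ["O1 None", "O2 Internal", "O3 Independent"],
    "Autonomy":       ["U1 Assistive", "U2 Supervised", "U3 Agentic"],
    "Exposure":       ["E1 Low", "E2 Regulated", "E3 Safety-critical"],
}

def governance_address(selections: dict[str, str]) -> str:
    """Combine one value per independent facet into a single address,
    in the spirit of Ranganathan's colon notation."""
    codes = []
    for facet, options in FACETS.items():
        choice = selections[facet]
        if choice not in options:
            raise ValueError(f"{choice!r} is not a valid {facet} option")
        codes.append(choice.split()[0])  # keep the short code, e.g. 'A2'
    return ":".join(codes)

print(governance_address({
    "Accountability": "A2 Delegated",
    "Documentation":  "D2 Partial",
    "Oversight":      "O2 Internal",
    "Autonomy":       "U3 Agentic",
    "Exposure":       "E2 Regulated",
}))  # -> A2:D2:O2:U3:E2
```

The point of the sketch is that no single facet dominates: two organisations can share four codes and still occupy very different positions because of the fifth.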

Ninety years later, it is exactly the right tool for the problem boards now face.

The Governance Classification

Locate your organisation
in five dimensions.

Select your position on each facet. Your governance address generates live.

Your governance address

Complete all five facets to generate your address.

The Governance Classification gives every organisation a precise, actionable position across five independent governance dimensions. Request a confidential briefing to understand what your address means and what comes next.

Request a confidential briefing
45 minutes · no obligation · fully confidential

“A library is a growing organism.” — S.R. Ranganathan, 1931. So is every AI system deployed without governance.

Methodological foundation

The Governance Classification applies faceted classification theory first developed by S.R. Ranganathan (1892–1972), mathematician and librarian, whose Colon Classification of 1933 demonstrated that complex knowledge cannot be reduced to a single hierarchy. Ranganathan's original works are in the public domain. The application of his methodology to AI governance accountability is original to Otopoetic.

AI is a structural liability,
not a feature upgrade

The data on board accountability, regulatory exposure, and governance gaps.

17%

of boards take direct responsibility
for AI governance oversight

McKinsey State of AI Survey, 2025

Beyond productivity

AI reshapes accountability across the entire organisation

It fundamentally changes who owns decisions, where risk is located, and what boards are liable for. Standard productivity metrics do not capture this.

83% of boards take no direct responsibility for AI governance oversight. That is not a gap — it is a liability.

3×

increase in boards citing AI risk
as an oversight responsibility in 12 months

EY Center for Board Matters, 2025

The scrutiny standard

Boards no longer ask whether a model is “elegant”

They ask who understood the downside, what was documented, and whether the decisions will survive forensic, regulatory, or financial scrutiny.

Fortune 100 boards citing AI as an oversight responsibility rose from 16% to 48% in a single year. The standard is moving faster than most governance programmes.

75%

of organisations have not fully implemented
an AI governance programme

AuditBoard / IAPP Governance Survey, 2025

Mandatory independence

In high-stakes environments, self-assessment is a liability

Independent assessment is no longer optional; it is part of fiduciary duty. Organisations that rely solely on internal review carry unquantified structural risk.

Three-quarters of organisations are self-assessing a risk they have not independently verified. That is not caution — it is exposure.

Sources: McKinsey & Company State of AI 2025  ·  EY Center for Board Matters Fortune 100 Analysis 2025  ·  AuditBoard 2025  ·  IAPP AI Governance Profession Report 2025

Where we apply
forensic scrutiny

20–24 · Investment

Investment & Transactional Support

Pre-Acquisition Scrutiny — Identifying hidden technical debt and architectural fragility prior to capital commitment. We examine what vendor presentations do not show and what due diligence checklists do not ask.

AI Asset Valuation — Assessing the integrity and regulatory risk of proprietary AI assets to separate marketing claims from defensible, audit-ready code.

Integration Risk — Evaluating operational resilience and security exposure in complex, high-availability environments where failure propagates.

Private Equity · Institutional Investors · M&A Diligence · Technical Debt

VENDOR PRESENTATION
AI capability certified ✓ · Architecture verified ✓ · Regulatory compliant ✓ · Integration ready ✓

SCRUTINY THRESHOLD

WHAT IS FOUND
Technical debt — undisclosed, accumulating · AI claims — marketing, not architecture · Regulatory exposure — unidentified · Integration fragility — single points of failure · Ownership gaps — undocumented accountability

Standard due diligence does not reach this layer.
26 · Forensic

Forensic & Legal Testimony

Expert Witness Reporting — Formal forensic reporting and independent opinion for high-stakes technology litigation and governance breaches. Structured for legal admissibility and board-level comprehension.

Regulatory Support — Independent risk exposure assessment and compliance mapping for emerging global AI frameworks, including the EU AI Act, FCA guidance, and sector-specific obligations.

Expert Witness · EU AI Act · FCA · Regulatory Compliance

EU AI ACT — fines up to €35M or 7% of global revenue · FCA — financial services · ICO — data & privacy · SECTOR — aviation / insurance

ENFORCEMENT ACTION
Litigation · inquiry · sanctions

INDEPENDENT OPINION
A defensible position before the inquiry arrives.
21–22 · Governance

Critical Infrastructure Oversight

Safety-Critical Governance — Operating model and data governance design for environments where technological failure carries immediate real-world consequences. Drawing on direct experience across aviation, public safety, insurance, and national infrastructure.

Resilience Audits — Forensic platform scrutiny to identify systemic single-points-of-failure and mitigate dangerous vendor dependencies before they become operational crises.

Aviation · Insurance · Public Sector · Vendor Risk

AVIATION ops data → VENDOR AI model → DECISION ENGINE (single point of failure) → PUBLIC SAFETY
Failure propagates: operational failure with real-world consequences.

RESILIENCE AUDIT identifies: single points of failure · vendor dependencies · control gaps — before operational deployment, not after a crisis.

Independent governance audit — not self-assessment. Structured for regulatory admissibility.
23 · Agentic AI

Agentic AI & Autonomous Systems

Control Framework Design — As autonomous AI systems make consequential decisions without human review, governance structures must evolve. We design the oversight architecture that keeps boards accountable and regulators satisfied.

Accountability Chain Mapping — When an agentic system makes a decision that causes harm, who is responsible? We establish clear chains of accountability before deployment, not after failure.

Agentic AI · Autonomous Systems · Control Frameworks · Board Accountability

AGENTIC AI → autonomous decision → HARM OCCURS. Who is responsible?

WITHOUT GOVERNANCE: ownership cascades upward until it reaches the board.
WITH OTOPOETIC GOVERNANCE: the accountability chain is documented in advance.

Accountability established before deployment, not after failure.

Built on the study of systems that sustain themselves under pressure

Our name derives from autopoiesis — the capacity of a system to produce and sustain itself under pressure. We apply this principle to the world's most complex regulated environments, ensuring that as your technology evolves, your organisational coherence remains intact.

The founder's background spans astrophysics, statistical modelling, and 25 years of executive leadership across sectors where the cost of systemic failure is measured not in downtime, but in lives, capital, and institutional trust.

This is not general advisory. It is structural analysis applied to the specific problem of AI accountability.

"The question boards should be asking is not 'does our AI work?' It is 'do we know what our AI will do when it doesn't?'"

Confidentiality

We operate in environments where discretion is a prerequisite. To protect our clients' strategic interests, we do not publish names, sectors, or specific financial outcomes. We publish the patterns, principles, and architectural insights that prevent failure.

Every engagement begins with
a single, private conversation

We do not offer automated onboarding. Every briefing is conducted directly by Ivan Roche, without obligation, and in full confidence.

Duration 45 minutes
Obligation None
Confidentiality Absolute
Response Within 24 hours

Your details are used solely to arrange a briefing and are not shared with any third party.