High-Stakes AI Risk Mitigation
When failure is not an option,
clarity matters more than enthusiasm
We provide Private Equity, Institutional Investors, and Global Boards with the statistical rigour and architectural scrutiny required to de-risk high-consequence technology investments.
Founded by Ivan Roche — former CTO, COO and Interim CDO · 25+ years across insurance, aviation, financial services, telecoms and public sector
What is Otopoetic?
Structural integrity
for organisations where
failure is not an option
We bridge the critical gap between innovation enthusiasm and investment reality.
When technology decisions carry billion-dollar reputational and financial risks, we provide the forensic clarity that standard audits miss. Our work ensures that AI and systems architecture strengthen institutional resilience rather than introducing hidden fragilities.
In high-stakes environments, clarity matters more than enthusiasm.
The scrutiny standard
Four questions your board
cannot currently answer.
Select each to see your exposure.
Most organisations have no formal record of who assessed downside risk before deployment. When regulators or litigants ask who knew what and when — the answer is usually silence. That silence is itself a finding.
The Governance Classification
Every organisation has
a governance address.
Most don't know it.
The Governance Classification is Otopoetic's proprietary assessment framework. It gives every client organisation a precise, multi-dimensional position within the AI governance landscape — not a score, not a traffic light, but a specific address across five independent facets.
The methodology draws on faceted classification theory first formalised by the Indian mathematician and librarian S.R. Ranganathan in 1933. His core insight — that complex subjects cannot be reduced to a single fixed hierarchy, but must be expressed as a combination of independent dimensions — has never previously been applied to AI governance.
More than ninety years later, it is exactly the right tool for the problem boards now face.
Locate your organisation
in five dimensions.
Select your position on each facet. Your governance address is generated live.
Your governance address
Complete all five facets to generate your address.
The Governance Classification gives every organisation a precise, actionable position across five independent governance dimensions. Request a confidential briefing to understand what your address means and what comes next.
“A library is a growing organism.” — S.R. Ranganathan, 1931. So is every AI system deployed without governance.
Methodological foundation
The Governance Classification applies faceted classification theory first developed by S.R. Ranganathan (1892–1972), mathematician and librarian, whose Colon Classification of 1933 demonstrated that complex knowledge cannot be reduced to a single hierarchy. Ranganathan's original works are in the public domain. The application of his methodology to AI governance accountability is original to Otopoetic.
Why this matters
AI is a structural liability,
not a feature upgrade
The data on board accountability, regulatory exposure, and governance gaps.
of boards take direct responsibility
for AI governance oversight
McKinsey State of AI Survey, 2025
AI reshapes accountability across the entire organisation
It fundamentally changes who owns decisions, where risk is located, and what boards are liable for. Standard productivity metrics do not capture this.
83% of boards have no formal AI accountability structure. That is not a gap — it is a liability.
increase in boards citing AI risk
as an oversight responsibility in 12 months
EY Center for Board Matters, 2025
Boards no longer ask whether a model is “elegant”
They ask who understood the downside, what was documented, and whether the decisions will survive forensic, regulatory, or financial scrutiny.
From 16% to 48% of Fortune 100 boards in one year. The standard is moving faster than most governance programmes.
of organisations have not fully implemented
an AI governance programme
AuditBoard / IAPP Governance Survey, 2025
In high-stakes environments, self-assessment is a liability
Independent assessment is no longer optional. It is a requirement for fiduciary duty. Organisations that rely solely on internal review carry unquantified structural risk.
Three-quarters of organisations are self-assessing a risk they have not independently verified. That is not caution — it is exposure.
Sources: McKinsey & Company State of AI 2025 · EY Center for Board Matters Fortune 100 Analysis 2025 · AuditBoard 2025 · IAPP AI Governance Profession Report 2025
Representative Domains of Work
Where we apply
forensic scrutiny
Pre-Acquisition Scrutiny — Identifying hidden technical debt and architectural fragility prior to capital commitment. We examine what vendor presentations do not show and what due diligence checklists do not ask.
AI Asset Valuation — Assessing proprietary integrity and regulatory risk to separate marketing claims from defensible, audit-ready code.
Integration Risk — Evaluating operational resilience and security exposure in complex, high-availability environments where failure propagates.
Expert Witness Reporting — Formal forensic reporting and independent opinion for high-stakes technology litigation and governance breaches. Structured for legal admissibility and board-level comprehension.
Regulatory Support — Independent risk exposure assessment and compliance mapping for emerging global AI frameworks, including the EU AI Act, FCA guidance, and sector-specific obligations.
Safety-Critical Governance — Operating model and data governance design for environments where technological failure carries immediate real-world consequences. Drawing on direct experience across aviation, public safety, insurance, and national infrastructure.
Resilience Audits — Forensic platform scrutiny to identify systemic single-points-of-failure and mitigate dangerous vendor dependencies before they become operational crises.
Control Framework Design — As autonomous AI systems make consequential decisions without human review, governance structures must evolve. We design the oversight architecture that keeps boards accountable and regulators satisfied.
Accountability Chain Mapping — When an agentic system makes a decision that causes harm, who is responsible? We establish clear chains of accountability before deployment, not after failure.
Our Foundation
Built on the study of systems that sustain themselves under pressure
Our name is derived from autopoiesis — the capacity of systems to sustain and regenerate themselves under pressure. We apply this principle to the world's most complex regulated environments, ensuring that as your technology evolves, your organisational coherence remains intact.
The founder's background spans astrophysics, statistical modelling, and 25 years of executive leadership across sectors where the cost of systemic failure is measured not in downtime, but in lives, capital, and institutional trust.
This is not general advisory. It is structural analysis applied to the specific problem of AI accountability.
"The question boards should be asking is not 'does our AI work?' It is 'do we know what our AI will do when it doesn't?'"
We operate in environments where discretion is a prerequisite. To protect our clients' strategic interests, we do not publish names, sectors, or specific financial outcomes. We publish the patterns, principles, and architectural insights that prevent failure.
Request a Confidential Briefing
Every engagement begins with
a single, private conversation
We do not offer automated onboarding. Every briefing is conducted directly by Ivan Roche, without obligation, and in full confidence.