
I help regulated businesses build the governance structures, policies, and board-level accountability frameworks they need to deploy AI with confidence. Not after a regulatory breach or a failed audit. Before.
The EU AI Act is not a future obligation. It is in force. Organisations that cannot demonstrate compliance are already exposed.
Weak AI governance is not an operational inconvenience. It is a direct pathway to regulatory sanction, reputational damage, and board liability.
Regulators expect documented, demonstrable oversight of AI systems. Good intentions without governance architecture will not hold up under scrutiny.
Establish the foundation for responsible AI adoption.
So your board can evidence oversight, not just assert it.
Align AI initiatives with your organisation's structure, culture, and capability.
So AI investment delivers value without creating governance liability.
Translate governance principles into operational controls that work in practice.
So governance holds up when regulators, auditors, or the board ask hard questions.
Build internal capability and keep governance effective as AI evolves.
So governance decisions are made with confidence at every level of the organisation, not just at the point of engagement.
Map your AI landscape, regulatory obligations, and governance gaps with enough precision to know where your organisation is already exposed.
Build the frameworks, policies, and structures your organisation needs, architected for your regulatory context and not adapted from a generic template.
Includes a proprietary L1–L4 Agentic AI Autonomy Classification framework, developed for regulated industry contexts.
Put governance into practice. Operational controls, board-level accountability, and an audit trail that holds up when regulators or senior leadership ask hard questions.
About

I built this practice because I kept seeing the same problem. Organisations were investing in AI while their governance architecture lagged months, sometimes years, behind. Not because leaders didn't care. Because no one had translated the regulatory landscape into something an executive team could actually act on.
That gap is where I work.
If that gap exists in your organisation, the starting point is a conversation. Book a discovery call or read more about Theodora.
Thought Leadership
Perspectives on building effective AI governance in regulated industries.
Most organisations can tell you which AI vendors they selected. Few can tell you which are currently active, qualified, and documented without assembling the answer manually.
Regulators are not asking whether your AI system is capable. They are asking who was accountable for the decision it supported. Most organisations have not answered that yet.
A validated platform tells you what your vendor has done. It says nothing about whether your organisation has designed the governance structures that hold up under inspection.
Giving domain experts direct data access makes sense. But in regulated environments, every modification to a data workflow is a potential change control event. Sequence matters.
Contact
Every engagement begins with a focused discovery conversation. No obligation, no generic proposals. If your organisation is navigating AI governance, regulatory exposure, or board-level AI accountability, this is where that conversation starts.
Visit full contact page