Leading Responsibly: Governing AI in an Era of Regulation, Risk, and Trust

In this article, we explore the key themes from Dot Collective’s second roundtable at Data Decoded London 2026, Leading Responsibly: Governing AI in an Era of Regulation, Risk, and Trust. From AI agents and non-human consumers to accountability, governance architecture, and responsible AI frameworks, the discussion focused on a growing challenge for organisations: how to balance innovation with control in an environment where AI is becoming an active participant in decision-making.
Artificial intelligence is not just another wave of technology. It is fundamentally reshaping how organisations operate and make decisions.
At Data Decoded London 2026, Dot Collective’s CEO, Svetlana Tarnagurskaja, hosted a roundtable on “Leading Responsibly: Governing AI in an Era of Regulation, Risk, and Trust”. The session covered how boards and senior leadership teams can design pragmatic AI governance models that balance innovation with control.
One topic was particularly interesting: Governance is no longer designed only for humans.
For years, governance frameworks assumed human actors. People accessed dashboards, interpreted reports, and made decisions. However, with the rise of AI, organisations are designing systems not just for human data consumers, but for non-human actors.
Designing for Non-Human Consumers
AI systems consume and act on data in a different way from humans. They require different standards of data quality, structure, and accessibility, and they do not interpret ambiguity the way people do.
This creates a critical governance question:
What are these non-human actors allowed to access, modify, and decide?
Defining boundaries becomes essential:
- What data can an AI agent access?
- What actions can it take?
- Where are the limits of autonomy?
Without clear answers, organisations risk losing control over their own systems.
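The boundaries above can be made concrete in code. Below is a minimal, deny-by-default sketch of an agent permission policy; the class, field names, and example datasets are illustrative assumptions, not a standard or an existing library.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Deny-by-default boundary for a non-human actor (illustrative names)."""
    agent_id: str
    readable_datasets: set = field(default_factory=set)       # what data it may access
    allowed_actions: set = field(default_factory=set)         # what actions it may take
    requires_human_approval: set = field(default_factory=set) # limits of its autonomy

    def can(self, action: str, dataset: str) -> bool:
        # Anything not explicitly granted is denied.
        return dataset in self.readable_datasets and action in self.allowed_actions

    def needs_approval(self, action: str) -> bool:
        return action in self.requires_human_approval

# Example: a reporting agent may read and summarise sales data,
# but any write or delete must be escalated to a human.
policy = AgentPolicy(
    agent_id="reporting-bot",
    readable_datasets={"sales_q1"},
    allowed_actions={"read", "summarise"},
    requires_human_approval={"write", "delete"},
)
print(policy.can("read", "sales_q1"))    # permitted: dataset and action both granted
print(policy.can("write", "sales_q1"))   # denied: "write" was never granted
print(policy.needs_approval("delete"))   # escalated to a human
```

The design choice that matters here is the default: access is denied unless explicitly granted, which keeps the answer to "what can this agent do?" auditable rather than implicit.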
Uncertainty and Governance
During the session, one thing became apparent: people are nervous that organisations have not yet solved human data governance, let alone governance for non-human actors. Ownership is often unclear. Data quality is inconsistent. Accountability can be difficult to trace.
So, when AI enters the picture, there’s a lot of scepticism.
But during the talk, Svetlana highlighted that uncertainty is not a reason to discard existing knowledge. Established governance principles still apply:
- Define actors
- Understand their roles
- Control access
- Track accountability
The Rise of AI Agents
The rise of autonomous agents introduces a new layer of complexity. Organisations are left asking what exactly they are governing.
There are many unknowns, and that can feel overwhelming. But governance does not require perfect clarity to begin. It requires structured thinking.
A practical approach is to ask simple questions:
- Who or what are the actors?
- What do they do?
- What do they have access to?
Many organisations still struggle to answer these questions for human users, and AI makes the gaps more visible.
Governance as Architecture
Governance is often treated as a separate function. In reality, it should be embedded into system design from the beginning. Good governance is not an add-on; it is part of good engineering practice.
Organisations that fail to embed governance early often create risk for themselves later. Separating governance from design leads to fragmented systems and unclear accountability.
This is closely linked to Conway’s Law. System design often reflects organisational structure. If teams are siloed, governance will be fragmented. If organisations adopt more mature approaches such as data mesh, governance tends to be more distributed and aligned with value creation.
This raises even more questions:
- Is data treated as a strategic asset?
- What is the value chain for data?
- How is value measured and cost attributed?
Governance can help us answer these questions, not just enforce rules.
Responsible AI Frameworks
Responsible AI frameworks are becoming essential. They should not sit on the sidelines as compliance exercises. They need to be part of how systems are assessed and built.
Leaders who want AI capabilities immediately often underestimate what is required. AI adoption is not just a technology decision. It requires understanding workflows, challenging existing processes, and sometimes rebuilding them entirely – a topic we discussed in depth in our first roundtable of the day which you can read about here: Data as a Strategic Asset: Operating Models for AI-Driven Organisations.
Operating Without a Rulebook
We are working in a period of rapid change and unfortunately there is no complete rulebook for AI governance yet.
Waiting for one is risky. By the time standards fully stabilise, organisations that delay this process will already be behind.
Instead, organisations need to develop the ability to manage uncertainty:
- How do you define acceptable quality?
- How do you ensure security in dynamic systems?
- How do leaders make decisions with incomplete information?
A Practical Example: AI and Data Access
Here’s a use case that we discussed during the roundtable:
An organisation builds a chatbot that can query internal systems such as a BI dashboard or a data platform. A user asks a straightforward question like “What were our quarterly results?”
The challenge is not access; it’s trust.
Unlike traditional systems, AI does not always produce a single, fixed answer; in fact, there could be hundreds of correct answers the bot can choose from. This raises a new problem: how do you define accuracy when there are many possible correct answers? And what are the consequences if the AI system gets the answer wrong?
Governance needs to address:
- What counts as an acceptable answer?
- How edge cases are tested
- Where accuracy thresholds are set
In some contexts, 99% accuracy is acceptable; in others, it is not. A 1% error rate in a high-stakes environment such as financial auditing could have serious consequences.
Governance helps determine what level of risk is acceptable before systems are deployed.
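One way to operationalise this is a pre-deployment accuracy gate. The sketch below is a minimal illustration under assumed names and thresholds; the point is that the acceptable error rate is a governance decision made per use case, not a constant baked in by the engineering team.

```python
def passes_accuracy_gate(results, threshold: float) -> bool:
    """results: one True/False per evaluated test case (True = acceptable answer)."""
    if not results:
        return False  # no evidence is not the same as passing
    accuracy = sum(results) / len(results)
    return accuracy >= threshold

# 99 correct answers out of 100 evaluated cases => 99% accuracy.
eval_results = [True] * 99 + [False]

# A customer-FAQ bot might ship at a 95% threshold...
print(passes_accuracy_gate(eval_results, 0.95))   # True

# ...while a financial-audit assistant, held to 99.9%, would not.
print(passes_accuracy_gate(eval_results, 0.999))  # False
```

The same evaluation run produces opposite deployment decisions depending on the threshold, which is exactly where governance, rather than engineering alone, sets the bar.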
Summary
AI is changing how organisations think about governance, but it’s not replacing its foundations.
The core principles still apply:
- Clear ownership
- Defined access
- Structured accountability
- Embedded design
What has changed is the environment. Organisations are now designing for both human and non-human actors, operating in conditions of uncertainty, and making decisions that carry new types of risk.


