A recent conversation at Davos, "The Day After AGI", with Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic), captured something most of us are sensing in the world of business and innovation: artificial intelligence is moving at a pace that feels both exhilarating and unsettling. Faster than governance. Faster than organisational culture. Often faster than our collective ability to absorb what it is doing to work, trust and power.
I am genuinely excited by AI’s potential. Used thoughtfully, it can help us see patterns we could not see before, accelerate learning, unlock creativity and support transformation at a scale our societies urgently need. AI’s power is, ultimately, its potential to widen access to knowledge, increase confidence and speed up action. Its benefits to humanity, however, rely on good intentions and a commitment from leaders to translate these intentions into real action. This takes reflection, investment and time.
I therefore remain uneasy about the gold-rush dynamics taking hold around the technology that Hassabis and Amodei described — the sense that slowing down, even briefly, is impossible. Reflection is being treated as inconvenient delay rather than as a moral prerequisite for getting things right.
This reluctance is telling. It suggests that much of the current rush is driven less by collective benefit than by familiar forces: power, money, and the fear of losing competitive advantage.
When these forces dominate, agency concentrates further in the hands of those who already have capital, technical proximity and influence—while the consequences are often borne by those with the least power to shape them.
Many people have asked how Kite Insights is responding to AI and whether it changes our role. For me, this moment does not weaken our relevance. It sharpens it. Why? Because leadership now hinges on a central judgement: knowing when speed creates value and when it undermines long-term impact—and helping organisations build the capability to make that call repeatedly as the technology evolves.
Staying present: what AI is already changing
If we focus only on what AI can do, we may miss what it is already doing to people.
AI does not land in a vacuum. It arrives in organisations shaped by trust or mistrust, confidence or fear, openness or silence—and it tends to amplify whatever culture it encounters.
The most important considerations are therefore not primarily technical. They are human:
- Do people feel included in what is unfolding—or being imposed upon them?
- Do they understand how decisions are being made and how data is being used?
- Do they trust the intent behind deployment—and feel they have agency within it?
- Do leaders grasp the second-order impacts on roles, morale, fairness and legitimacy?
These questions cannot be answered by models or dashboards alone. They require conversation, disagreement, reflection and time. And time, right now, is precisely what many organisations believe they do not have.
This is the tension leaders are living inside: urgency to deploy alongside a growing awareness that trust, fairness and legitimacy are fragile and easily lost.
The sustainability parallel: the risk of “fast” transitions
In many ways, this moment echoes the early sustainability movement. Sustainability began as a moral and scientific imperative and quickly became a corporate agenda. Much progress was real. Some was performative.
Many organisations moved fast to commit and position. But a hard truth emerged: when ambition isn’t matched by capability and governance, speed produces unintended consequences—public scepticism, employee cynicism, backlash, and ultimately slower progress.
AI risks repeating that pattern.
The temptation is clear: announce adoption, roll out tools, declare efficiency gains. But the deeper work—impact assessment, governance, workforce transition, data stewardship, employee training and readiness, accountability and change management—takes discipline and time.
Sustainability taught us that successful transitions combine:
- Clear intent — what you are trying to achieve and why
- Credible governance — how decisions are made and impacts monitored
- Employee engagement — a company-wide transition, not just top-level ambition
- Stakeholder trust — who is involved and how concerns are addressed
- Tangible progress — evidence and learning, not slogans
AI requires the same—only faster, and with higher volatility.
From deployment to discernment
One of the quieter dangers of this moment is mistaking deployment for leadership.
Rolling out AI tools quickly can look decisive. But leadership today is increasingly about discernment: knowing where speed genuinely helps and where it undermines trust, judgement or legitimacy. Recognising when friction is a problem to solve—and when it is a signal to pause and listen.
In practice, this means asking better questions:
- What problem are we solving — and for whom?
- Where is AI appropriate, and where is it a shortcut that creates risk?
- What must remain human-led — where decisions affect rights, safety or dignity?
- How will we detect harm early — and who has authority to act?
- How do we prevent benefits concentrating in one area while costs fall elsewhere?
Good AI leadership is not “AI everywhere.” It is AI used where it strengthens outcomes without compromising trust, fairness or agency.
What organisations need most right now
Across Kite’s work with leaders and clients on AI, four needs surface consistently.
- A shared understanding
AI quickly creates knowledge gaps across organisations, exacerbated by false binaries (AI vs jobs, AI vs sustainability). When understanding is uneven, anxiety rises and decisions fragment. Organisations need a common, accessible language for AI and clarity on the real trade-offs. Both AI optimism and AI fear need to mature into AI accountability.
- An architecture of trust
Trust comes from visible decision rules and accountability: who decides, on what basis, with what oversight, and what happens when things go wrong.
- A sustainable workforce transition
AI will reshape tasks, entry pathways and professional identity. Leaders must be honest about change and involve employees as design partners, not merely end users — investing in reskilling, redesigning roles and ensuring opportunity widens rather than narrows.
- A fairness and legitimacy lens
AI can embed bias and create new inequities through unequal access and oversight. Fairness must be treated as an operational requirement, not a communications position. Rather than debating whether AI is good or bad, organisations should ask under what conditions it creates a net benefit.
Clear direction over rapid roll out
Good leadership isn’t just about being the fastest. It’s about being the clearest. Good leaders understand that legitimacy is a strategic asset—built through fairness, transparency and disciplined decision-making.
In sustainability, the organisations that ultimately led were not always the fastest. They were those that built credible capability: governance, measurement, stakeholder engagement and the willingness to change operating models, not just messaging.
AI will reward the same.
Of course, speed will matter. But so will trust, fairness, explainability and the courage — and the ability — to recognise when a pause is needed. There is a difference between urgency and haste. Urgency can be wise. Haste is often costly.
A choice, not a race
AI will shape the coming decade in profound ways. Whether it deepens division or builds capability, concentrates power or distributes agency, accelerates extraction or supports regeneration depends on choices made now.
The potential of AI is extraordinary. So too is the responsibility that comes with it.
Thinking clearly about the impacts and implementation process is both a discipline and a duty. Progress should not come at the expense of reflection. And responsibility, ultimately, cannot be automated.