Why AI Fails in Organizations: It’s Not the Tech, It’s the Politics

July 23, 2025

Based on a conversation between Jeffrey Pfeffer, Christian Guttmann, and Lauri Paloheimo

When AI implementation fails in organizations, it’s rarely because the technology isn’t ready. It’s because the organization isn’t. In a thought-provoking exchange, Jeffrey Pfeffer, Christian Guttmann, and Lauri Paloheimo break down the human and political dynamics that derail even the most technically sound AI initiatives, and what leaders can do about it.

The Invisible Barrier: Power and Resistance

Jeffrey Pfeffer reminds us that innovation is a negotiation with power. Those who benefit from the status quo, whether consciously or not, will often resist changes that threaten their influence, workflow, or perceived competence. That’s why one of the first steps in any AI initiative should be a power map: who holds influence, who can block change, and how their interests can be aligned with the transformation.

Christian Guttmann echoes this with his own practice of stakeholder mapping, particularly in large-scale AI deployments. AI adoption isn’t just about installing software. It’s about earning buy-in from across the organization, often from people whose support isn’t guaranteed.
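
To make the power-mapping idea concrete, here is a minimal sketch of how a power map might be represented in code. Everything in it, the Stakeholder fields, the scoring scales, and the example names, is an illustrative assumption rather than a method the speakers describe.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """One entry in a power map. Fields and scales are illustrative assumptions."""
    name: str
    role: str
    influence: int          # 1 (low) to 5 (high), formal plus informal power
    stance: int             # -2 (opposed) to +2 (champion)
    interests: list[str] = field(default_factory=list)

def likely_blockers(power_map: list[Stakeholder]) -> list[Stakeholder]:
    """Flag high-influence stakeholders who are not yet on board."""
    return sorted(
        (s for s in power_map if s.influence >= 4 and s.stance <= 0),
        key=lambda s: s.influence,
        reverse=True,
    )

# Hypothetical entries for illustration only.
power_map = [
    Stakeholder("A. Rivera", "Head of Claims", influence=5, stance=-1,
                interests=["headcount stability", "quality metrics"]),
    Stakeholder("B. Chen", "CTO", influence=5, stance=2,
                interests=["platform consolidation"]),
    Stakeholder("C. Okafor", "Team Lead", influence=3, stance=0,
                interests=["less manual triage"]),
]

for s in likely_blockers(power_map):
    print(f"Align with {s.name} ({s.role}): interests = {s.interests}")
```

Even a toy structure like this forces the useful questions: whose influence is high, whose stance is unknown, and which of their interests the initiative can credibly serve.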

AI Is Fast—But People Aren’t

One of AI’s unique challenges is the speed at which it evolves. New models, tools, and capabilities emerge at a breakneck pace. That speed creates pressure for quick adoption, often without adequate alignment. It also deepens knowledge gaps between leaders, users, and technical teams, leading to poor decisions and ineffective rollouts.

As Guttmann notes, AI moves faster than most corporate governance structures, and its impact on white-collar work, especially decision-making, feels more personal than previous tech waves. That triggers fear, which can paralyze action.

Fear Is the Enemy of Change

Fear is a systemic blocker of organizational progress. Pfeffer and Paloheimo emphasize that psychological safety is a prerequisite for effective AI adoption. If employees see AI as a threat to their job, identity, or relevance, they won’t experiment, share feedback, or collaborate. They’ll disengage.

Instead, leaders need to deliberately create a culture where AI is positioned as an enabler. Rather than automating people away, organizations should focus on augmenting roles and unlocking new value, reframing AI as a liberating force rather than an overpowering one.

From Change Management to Systems Intelligence

Paloheimo points out that traditional change management tactics, such as sending out a single email or slide deck, are no longer sufficient. AI adoption requires sustained reflection and individualized support. Pandatron’s own approach, where AI coaches help people articulate their personal goals, fears, and use cases, is one example of making AI adoption deeply human.

Guttmann goes even further, arguing that we’re moving toward a future where AI agents are part of the power map. These agents will carry out complex tasks with increasing autonomy, making decisions that shape outcomes. That makes it even more critical to design governance frameworks and feedback loops that ensure AI systems remain aligned with human values.
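
If AI agents do join the power map, the governance frameworks Guttmann calls for need concrete checkpoints. Below is a minimal sketch of one such feedback loop, a human-escalation gate; the names (AgentAction, requires_human_review) and the risk rules are hypothetical assumptions, not an existing framework’s API.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AgentAction:
    """A task an AI agent proposes to execute autonomously (hypothetical)."""
    description: str
    risk: Risk
    reversible: bool

def requires_human_review(action: AgentAction) -> bool:
    """Illustrative governance rule: autonomy only for low-risk,
    reversible actions; everything else escalates to a human."""
    return action.risk is not Risk.LOW or not action.reversible

def dispatch(action: AgentAction) -> str:
    """Route a proposed action through the governance gate."""
    if requires_human_review(action):
        return f"ESCALATE to human: {action.description}"
    return f"AUTO-EXECUTE: {action.description}"

print(dispatch(AgentAction("Draft weekly status summary", Risk.LOW, reversible=True)))
print(dispatch(AgentAction("Approve vendor contract", Risk.HIGH, reversible=False)))
```

The design choice worth noting is that autonomy is granted per action, not per agent: low-risk, reversible tasks run automatically, while anything consequential routes back to a human decision-maker.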

The Future Requires Wise Leaders

Ultimately, successful AI adoption isn’t just about smart tools. It’s about wise leadership. Leaders must:

  • Understand how their organization actually works (process mapping).
  • Know who holds formal and informal power.
  • Cultivate trust and psychological safety.
  • Frame AI as an invitation, not a threat.
  • Celebrate success visibly to create momentum.

In Pfeffer’s words, “You can come up with any type of strategy all day long… but if people are not on board, you ain’t gonna pedal anywhere.” For organizations chasing AI transformation, that’s the lesson to remember. Strategy is theory. Power, people, and purpose are what make it real.