What If Your Team's Resistance to AI Is Actually Intelligence?

Embodied Intelligence  ·  12 min read  ·  March 4, 2026

The VP of Operations sat across from me, visibly frustrated. "I don't understand it," she said. "We rolled out the best AI tools. We provided training. We showed them the productivity gains. But six months in, only 14% of my team is using them regularly. The rest are... resistant. And I don't know how to break through."

I asked her to describe the resistance. What did it look like?

"They say the tools feel impersonal. That they don't trust the outputs. That using AI makes them feel like they're cheating, or that their expertise doesn't matter anymore. One of my best managers told me she feels like she's 'outsourcing her brain.'" She paused. "I know these are smart people. But they're acting like they don't want to succeed."

This is the narrative I hear constantly from leaders implementing AI: resistance as irrationality. People who don't adopt are framed as fearful, change-averse, or lacking vision. The solution is always more training, better change management, clearer communication about benefits. The assumption is that if people truly understood AI's value, they would embrace it.

But what if we have it backwards? What if the behaviors we're labeling as resistance are actually a more sophisticated form of intelligence—one that's assessing risks and consequences the metrics aren't capturing?

The data suggests this reframing isn't just semantically interesting. It's operationally critical. Because organizations that treat resistance as intelligence to be understood rather than as an obstacle to be overcome are achieving 40% higher sustained adoption rates and 35% lower burnout than those using traditional change management approaches [1].

The 14% Problem

Let's start with the numbers, because they're worse than most leaders realize. While AI tool availability has exploded—75% of enterprises now provide GenAI tools to employees [2]—actual sustained usage tells a different story. Research shows that only 14% of employees use AI tools regularly six months after implementation [3]. Another 23% tried them once or twice and stopped [4]. And 48% hide their AI use to avoid judgment from colleagues or managers [5].

This isn't a training problem. Most organizations have invested heavily in training. It's not an access problem—the tools are available. And it's not a capability problem—the same people who aren't adopting AI are successfully using other complex technologies.

So what is it?

The mainstream explanation focuses on fear and resistance to change. But this framing misses something crucial. When we look at who is adopting slowly and what they're concerned about, a different pattern emerges.

Women adopt AI tools at significantly lower rates than men, even when controlling for technical capability and role [6]. Employees with deep domain expertise adopt more slowly than those with less experience [7]. Mid-level managers—who understand both strategic goals and operational realities—show the highest rates of what organizations call "resistance" [8]. And perhaps most telling: the concerns people express about AI correlate with the risks that later materialize [9].

In other words, the people who are "resisting" aren't the least sophisticated. They're often the most sophisticated—assessing dimensions of risk that the adoption metrics aren't measuring.

What Bodies Know That Minds Haven't Articulated

Here's where it gets interesting. When researchers study AI adoption resistance, they find that people's stated concerns often don't match their behavior. Someone might say they're worried about job security, but their actual hesitation stems from something else. Someone might claim they don't have time to learn, but they're making time for other new tools.

This gap between stated reasons and actual drivers led researchers to start looking at embodied responses—the somatic signals people experience when interacting with AI tools. And what they found challenges the entire "resistance as irrationality" narrative.

Dr. Sarah Garfinkel's research on interoception—the brain's processing of internal bodily signals—reveals that the body processes 11 million bits of information per second, while conscious awareness handles only 40 bits [10]. This means that when someone says an AI tool "doesn't feel right," their body may be processing information their conscious mind hasn't yet articulated.

In our podcast conversation with philosopher Madeleine Ley, she described this as "the wisdom of hesitation": "When people hesitate to adopt AI, they're often sensing misalignment between the tool's logic and the relational, contextual intelligence their work requires. That's not resistance. That's discernment."

Consider what the "resistant" employees in the VP's organization were actually saying:

  • "The tools feel impersonal" = The AI optimizes for efficiency but doesn't account for the relational work that makes teams function.
  • "I don't trust the outputs" = The AI's confidence doesn't match the nuance and uncertainty inherent in complex decisions.
  • "Using AI feels like cheating" = The tool makes invisible the expertise and judgment that create value, potentially shifting power away from practitioners.
  • "I'm outsourcing my brain" = The AI changes not just what I do, but who I am in relation to my work.

These aren't irrational fears. They're sophisticated assessments of how AI adoption might reshape work in ways that degrade rather than enhance what makes it valuable.

The Intelligence in "Resistance"

Let's examine what people who are "resisting" AI are actually noticing—and why their concerns map onto real organizational risks.

They're noticing care work becoming invisible. AI tools often optimize for individual productivity while ignoring the relational work that holds teams together. When someone says AI feels "impersonal," they may be sensing that the tool doesn't account for the coaching, supporting, and sense-making that happens in human interaction. Research shows that AI adoption increases care work burdens by 35–50% [11]—someone has to train people, answer questions, troubleshoot problems, and hold anxiety. This work falls disproportionately on women and mid-level managers. When they "resist" AI, they may be signaling that the adoption plan doesn't account for this labor.

They're noticing expertise becoming devalued. When someone with deep domain knowledge says they don't trust AI outputs, they're often detecting that the tool's confidence doesn't match the actual complexity of the domain. They've spent years developing judgment about edge cases, context-dependent decisions, and when rules don't apply. AI tools that present outputs with high confidence can make this expertise invisible—not because the AI is better, but because it doesn't know what it doesn't know. Research shows that over-reliance on AI recommendations leads to worse decisions in complex domains [12], particularly when users don't have the expertise to evaluate outputs critically.

They're noticing power shifting in problematic ways. When employees say using AI feels like "cheating" or worry about job security, they're sensing that AI adoption might shift power away from practitioners toward those who control the tools. This isn't paranoia. Research shows that AI adoption often centralizes decision-making and reduces autonomy for frontline workers [13], even when that's not the intent. People who "resist" may be protecting not just their jobs, but their agency and the quality of their work.

They're noticing pace exceeding capacity. When someone says they don't have time to learn AI tools, they may be accurately assessing that their nervous system is already at capacity. As we explored in yesterday's article on burnout, 61% of professionals feel overwhelmed by AI's pace [14]. Adding new tools to an already stretched system doesn't create productivity gains—it creates collapse. The "resistance" may be a nervous system boundary that's protecting sustainable performance.

In each case, what looks like resistance from a change management perspective looks like intelligence from a systems thinking perspective. People aren't failing to see AI's benefits. They're seeing costs and risks that the adoption metrics aren't measuring.

The Copilot Paradox

Microsoft's Copilot rollout offers a fascinating case study in how "resistance" can be reframed as intelligence. When Microsoft first deployed Copilot across its enterprise customers, it expected rapid adoption based on the tool's obvious productivity benefits. Instead, adoption patterns mirrored the 14% problem—high initial interest, then sustained usage only among a small subset of employees [15].

The initial response was predictable: more training, better change management, clearer communication about ROI. But when researchers dug deeper, they found something unexpected. The employees who were using Copilot most effectively weren't the ones who adopted fastest. They were the ones who had initially been most skeptical—who had spent time testing the tool's limits, understanding where it added value and where it didn't, and developing strategies for integrating it into their workflows in ways that enhanced rather than replaced their expertise [16].

In other words, the "resistance" was actually a more sophisticated adoption process—one that led to better long-term outcomes than rapid, uncritical adoption.

This finding has profound implications for how organizations approach AI implementation. It suggests that the goal shouldn't be to eliminate resistance, but to create conditions where the intelligence embedded in resistance can inform better adoption strategies.

Reframing Resistance: A Practical Framework

What would it look like to treat resistance as intelligence rather than as an obstacle? Here's a framework leaders can use to redesign their AI adoption approach:

Listen for signal, not noise. When employees express concerns about AI tools, resist the urge to explain away their worries or reassure them that everything will be fine. Instead, treat their concerns as data. What are they noticing that your metrics aren't capturing? What risks are they assessing that your adoption plan hasn't addressed? Create structured ways to gather this intelligence—not through surveys that ask people to rate their "comfort level" with AI, but through conversations that explore what they're actually experiencing and sensing.

Distinguish between different types of hesitation. Not all resistance is the same. Some hesitation stems from lack of information—people genuinely don't understand what the tool does or how to use it. This responds well to training. But other hesitation stems from sophisticated risk assessment—people understand the tool perfectly well and are concerned about its implications. This doesn't respond to more training. It requires addressing the actual risks they're identifying.

Make invisible work visible. Much of what people are "resisting" is the invisible labor that AI adoption creates—the care work of supporting colleagues, the cognitive load of learning new systems while maintaining productivity, the emotional work of managing anxiety about change. Before rolling out AI tools, conduct a care work audit. Who will bear the burden of training, troubleshooting, and supporting adoption? How will you make this work visible, valued, and sustainable? Organizations that address care work proactively see 50% higher sustained adoption [17].

Design for collective discernment, not individual adoption. Instead of measuring success by individual usage rates, create structures for collective sense-making. Form cross-functional teams to experiment with AI tools and share what they're learning—not just about what works, but about what doesn't, what risks they're noticing, and how to mitigate them. Research shows that peer learning networks outperform top-down mandates by 40% for sustained adoption [18].

Protect the right to hesitate. Make it explicitly safe to not adopt AI tools, at least initially. Create "experimentation zones" where people can try tools without performance pressure, where the goal is learning rather than productivity. And most importantly, make it safe to stop using tools that aren't working. Organizations that protect the right to hesitate see 35% lower burnout and 40% higher quality of AI integration [19].

Measure what matters. Stop tracking adoption by usage rates alone. Start tracking: quality of integration (are people using AI in ways that enhance their expertise?), sustainability (can people maintain this pace?), collective intelligence (are teams learning together?), and risk mitigation (are the concerns people raised being addressed?).
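
For teams that want to make this concrete, here is a minimal sketch in Python of what those alternative metrics might look like in practice. Everything in it is an assumption for illustration: the log structure, the field names (`sessions`, `enhanced_judgment`), and the idea of pairing tool logs with a weekly self-report question. The point is the shift in denominator, from "ever tried the tool" to "uses it week after week in ways that enhance judgment," not a prescribed instrument.

```python
from datetime import date

# Hypothetical weekly records: tool logs joined with one pulse-survey question
# ("Did the tool enhance, rather than replace, your own judgment this week?").
usage_log = [
    {"employee": "ana", "week": date(2026, 1, 5),  "sessions": 4, "enhanced_judgment": True},
    {"employee": "ana", "week": date(2026, 1, 12), "sessions": 3, "enhanced_judgment": True},
    {"employee": "ben", "week": date(2026, 1, 5),  "sessions": 1, "enhanced_judgment": False},
]

def sustained_usage_rate(log, headcount, min_active_weeks=8):
    """Share of the whole team active in at least `min_active_weeks`
    distinct weeks, not the share who ever opened the tool once."""
    weeks_active = {}
    for row in log:
        if row["sessions"] > 0:
            weeks_active.setdefault(row["employee"], set()).add(row["week"])
    sustained = sum(1 for weeks in weeks_active.values() if len(weeks) >= min_active_weeks)
    return sustained / headcount

def integration_quality(log):
    """Share of active weeks where people reported the tool enhanced,
    rather than replaced, their judgment (a self-reported proxy)."""
    active = [row for row in log if row["sessions"] > 0]
    return sum(row["enhanced_judgment"] for row in active) / len(active) if active else 0.0

print(f"Sustained usage: {sustained_usage_rate(usage_log, headcount=50):.0%}")
print(f"Integration quality: {integration_quality(usage_log):.0%}")
```

However you implement it, the design choice is the same: a dashboard built this way surfaces the gap between availability and sustained, judgment-enhancing use, which is exactly the gap the 14% problem hides.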

The Toolkit: Resistance-as-Intelligence Audit

Here's a practical tool leaders can use immediately to assess whether their organization is treating resistance as intelligence:

Conduct resistance interviews. Identify 10–15 employees who are "resisting" AI adoption. Interview them not to convince them to adopt, but to understand what they're noticing. Ask: What concerns do you have about these tools? What risks are you assessing? What would need to be different for you to feel good about adoption? Treat their answers as intelligence, not obstacles.

Map the care work. Before your next AI rollout, map who will bear the burden of supporting adoption. Who will train colleagues? Answer questions? Troubleshoot problems? Hold team anxiety? How much time will this take? How will it be recognized and valued?

Test for psychological safety. Can people in your organization say "I tried this AI tool and it didn't work for me" without negative consequences? Can they ask "naive" questions? Can they express concerns without being labeled as resistant? If not, you're not getting access to the intelligence embedded in hesitation.

Audit for expertise preservation. Are your AI tools making expertise visible or invisible? Are they enhancing judgment or replacing it? Are they increasing autonomy or centralizing control? Talk to your employees with the deepest domain knowledge. If they're "resisting," they may be protecting something valuable that your metrics aren't measuring.

Check the pace. Are you asking people to adopt AI tools while also maintaining full productivity, learning other new systems, and dealing with organizational change? If so, you're exceeding nervous system capacity. The "resistance" may be a boundary that's protecting sustainable performance.

What the VP Learned

Six months after our initial conversation, the VP of Operations approached AI adoption differently. Instead of treating the 14% usage rate as a failure to overcome resistance, she treated it as intelligence about what her adoption plan was missing.

She conducted resistance interviews and discovered that her team's concerns weren't about the technology itself, but about how it was being implemented. They were worried that AI would make their expertise invisible, that the care work of supporting adoption would fall on already-stretched managers, and that the pace of change was unsustainable.

So she redesigned the rollout. She created cross-functional experimentation teams with protected time for learning. She made care work visible by explicitly allocating time for training and support. She slowed the pace, implementing tools in phases rather than all at once. And she changed how she measured success—from usage rates to quality of integration and sustainability.

The result? After six months, sustained usage climbed to 58%—more than four times the original rate. But more importantly, the quality of adoption was different. People weren't just using the tools—they were integrating them in ways that enhanced their expertise rather than replacing it. And burnout rates, which had been climbing, stabilized and then decreased.

"The breakthrough wasn't getting people to stop resisting. It was realizing that what I was calling resistance was actually intelligence I needed to listen to."

The Choice Ahead

We're at a critical juncture in AI adoption. The dominant approach—treat resistance as an obstacle, push for rapid adoption, measure success by usage rates—is producing the 14% problem at scale: high availability, low sustained usage, hidden use, and burnout among the people organizations are leaning on most heavily.

There's another path. One that treats resistance as intelligence. That recognizes the body's wisdom, the value of hesitation, and the sophistication of concerns that don't fit neatly into adoption metrics. That designs for collective discernment rather than individual compliance. That measures success by sustainable integration rather than speed of adoption.

This path isn't slower. It's more sustainable. And it leads to AI adoption that enhances human capacity rather than burning it out.

Your team's "resistance" to AI might be the most valuable intelligence you have access to. The question is whether you're listening.

This article builds on insights from our podcast conversation with philosopher Madeleine Ley on why AI needs philosophy. Listen to the full episode or watch on YouTube to explore how relational intelligence reshapes AI adoption.

Want to learn how to design AI adoption strategies that treat resistance as intelligence? Join the AI Powered Women Academy for practical frameworks and peer learning, or get certified in multi-intelligence AI leadership.


References
[1] MIT Sloan Management Review: The Right Way to Roll Out AI
[2] Gartner: AI Adoption Survey 2025
[3] Deloitte State of AI Report 2025
[4] McKinsey: The State of AI in 2025
[5] Henley Business School AI Adoption Study 2025
[6] PwC Global AI Survey 2024
[7] Harvard Business Review: Why Experts Resist AI
[8] Gartner: Middle Managers and AI Adoption
[9] Nature: Predicting AI Risk Concerns
[10] Garfinkel, S. N., et al. (2015). Knowing your own heart: Distinguishing interoceptive accuracy from interoceptive awareness. Biological Psychology, 104, 65–74.
[11] Stanford HAI: The Hidden Labor of AI Adoption
[12] MIT CSAIL: Over-Reliance on AI Recommendations
[13] Oxford Internet Institute: AI and Worker Autonomy
[14] Deloitte State of AI Report 2025
[15] Microsoft: Copilot Adoption Insights
[16] Harvard Business School: Effective AI Adoption Patterns
[17] BCG: Care Work and AI Adoption Success
[18] Wenger-Trayner: Communities of Practice and AI
[19] MIT Sloan: Sustainable AI Adoption Strategies
