Managing Your Team in the Age of AI

By Adrian Owen Jones

From our recent Notes from the Lab series


Last month, we wrote about career security — the idea that even when jobs shift and industries restructure, certain things travel with you: being genuinely great at what you do, relationships built on trust, and the ability to clearly articulate what you’ve accomplished and why it mattered. If you haven’t read it yet, I’d encourage you to.

This month, we are addressing the other side of this topic – how to lead teams through AI disruption, which presents both tremendous possibility and risk.

The question I keep hearing in coaching sessions and hallway conversations isn’t really about AI tools or adoption timelines. It’s more like: how do I lead my team well when I honestly don’t know what the work is going to look like in two years?

Most of the advice floating around right now doesn’t quite reach that question. So, here’s what we’ve been thinking about and working through with the leaders we coach.

We have a leadership design problem, and we’ve been mistaking it for a technology problem.

Most leaders in established organizations came up in a model that worked really well for a long time. You take in complexity from above, make sense of it, and hand your team clear direction. You make the calls, set priorities, and absorb ambiguity so your people can focus on execution.

That approach made sense when change showed up as a defined event — a reorg, a systems migration, a new regulation. You could weather the disruption, stabilize things, and eventually get back to something that felt normal.

What’s different now is that the disruption doesn’t resolve. AI is accelerating things, but the shift was already underway. Markets, information, customer expectations – everything moves faster than it did ten years ago, and the problems worth solving have gotten more tangled and interconnected.

In that kind of environment, the leader who personally absorbs all the complexity becomes a bottleneck. A team that’s been trained to wait for direction can’t respond fast enough. And an organization that routes every meaningful decision through three levels of approval will consistently be outpaced by one where people closer to the work are trusted to make good calls.

I want to be clear: this isn’t anyone’s fault. The old model worked. Leaders were rewarded for being the person with the answers, the one who translated chaos into clarity. Letting go of that identity is genuinely hard, especially when it’s been the thing that made you successful. But the environment has changed enough that the same instincts that served leaders well are now, in many cases, holding their teams back.

Five things I’m working through with the teams I coach

1. Let your team see the mess.

There’s a reflex most leaders have, where you take messy, ambiguous information and clean it up before sharing it with your team. You filter the noise, resolve the contradictions, and present something crisp. It feels like good leadership. Your people get clarity so they can execute.

The problem is that when your team only sees the polished version, they can follow instructions well enough but they can’t make good calls on their own. They don’t have enough context. The moment something shifts (which it will) they’re stuck waiting for you to re-process and re-translate before they can move.

What I’m encouraging leaders to do is share more of the raw picture with their teams. Competing priorities, uncertain data, and trade-offs that don’t have a clean answer yet. Stanley McChrystal wrote about this in Team of Teams — he called it “shared consciousness.” The idea is straightforward: people at the edges of an organization can only make smart, independent decisions if they have access to real information about what’s going on.

For leaders who came up in hierarchical organizations, this feels uncomfortable. It can feel like admitting you haven’t figured it all out. In my experience, it’s one of the most productive shifts a leader can make right now.

2. Look at tasks before you start rethinking headcount.

There’s a lot of fear right now about whether teams will shrink. Will we need fewer people? That anxiety is real, and it’s freezing a lot of leaders in place. We are already seeing leaders, from manufacturing to nonprofits, hold off on filling open positions without any real plan of attack.

I think the more productive starting point is to get granular about what your team actually spends its time doing. Not at the job description level, but at the task level. 

AI tends to change the time cost of specific tasks well before it eliminates entire roles. Research that used to take a full day might take an hour. A first draft that took an afternoon might take twenty minutes. Data synthesis, scheduling, and routine reporting are all compressing.

That compression creates a gap in your team’s week. And what happens with that gap is a leadership decision that most people aren’t making deliberately. Some organizations are cutting positions reactively. Others are just piling on more of the same work. Both of those miss the opportunity.

The leaders I think are getting it right are looking at that reclaimed time and asking: what has my team been unable to get to because nobody had bandwidth? Usually the answer is the high-value stuff like deepening client relationships, strategic thinking, or complex problem-solving that requires someone who actually understands the context. Redirecting freed-up capacity toward that kind of work is how AI makes a team more valuable rather than smaller.

3. Build your team’s capacity to make decisions.

This is the one that keeps coming up with our clients.

Years of hierarchical culture have trained most teams to escalate. If it’s ambiguous, send it up the chain. If there’s risk, get approval. If you’re not sure, check with your manager first.

That habit made sense when the pace of work allowed for it. At the speed things move now, it creates a real problem, because when AI puts information and first-pass analysis in everyone’s hands, the constraint isn’t access to answers anymore. It’s whether people feel equipped and permitted to make a call.

Most teams are out of practice.

What I’m suggesting to leaders is to start small. Look at the decisions that currently come across your desk that your team could reasonably handle. Push those back with clear permission to make the call, and give explicit room to get it wrong sometimes. Then debrief together. What did you weigh? What would you do differently? What did you learn?

The US Navy ran an interesting experiment along these lines. Out of hundreds of thousands of personnel, they selected eight people and assigned them the eight biggest unsolved problems in the organization. There was no approval chain – just one directive: go figure it out. And it worked. But the important detail is that they started with eight people, not eight hundred. They built trust incrementally.

You can do the same thing at the team level. Start with low-stakes decisions. Build the muscle. Widen the circle as confidence and trust develop.

4. Bring AI use out into the open.

Most people right now are experimenting with AI at work quietly. They’re not talking about what they’re trying, what’s working, or more importantly, where the output falls short. There are understandable reasons for that. People worry about being judged, or about signaling that they can be replaced by a tool.

But the secrecy is causing real damage. For one thing, everyone on your team is independently solving the same problems – figuring out what AI handles well and where it misses – without the benefit of each other’s experience. That’s a lot of duplicated effort.

The other issue is trust. Research from Stanford and BetterUp found that when people receive work they suspect was generated by AI without any acknowledgment, they judge the person who sent it as less competent and less trustworthy. The content itself might be fine. It’s the lack of honesty about the process that erodes the relationship.

This is a place where the leader has to go first. When you say something like, “I used Claude to help me draft this, here’s what I kept and here’s what I reworked,” you’re normalizing an honest conversation about AI, and you’re modeling the kind of critical engagement with the tools that you want your team to develop.

Over time, a team that talks openly about its AI use develops something more valuable than any individual getting better at prompting. They build shared judgment about where these tools fit their specific work and where they don’t.

5. Invest in your people with the same seriousness you invest in the technology.

Organizations are spending real money on AI platforms. Most are spending very little on the team dynamics that determine whether those platforms do any good.

The research on this is pretty clear. Psychological safety, meaning a team’s willingness to be honest with each other, give real feedback, and surface problems without fear, is one of the strongest predictors of whether AI helps a team’s work or makes it worse.

When people trust their colleagues, they think critically about AI output before passing it along. They flag issues. They experiment more freely because they’re not worried about looking foolish if something doesn’t work.

When that trust isn’t there, people default to the path of least resistance. They use AI as a shortcut, forward work they haven’t really vetted, and the people on the receiving end spend their time cleaning up output that looked finished but wasn’t. That cycle quietly corrodes a team from the inside. You can read more about this dynamic in the recent HBR article “The Rise of AI Workslop”.

The leaders I see navigating this well are doing straightforward things: having regular coaching conversations, giving honest feedback, making genuine investments in their people’s development. They’re treating that work as the foundation the AI rollout sits on top of, rather than something they’ll get around to once the technology settles.

In most organizations, the AI tools line item is visible and protected. The investment in team health is the first thing that gets squeezed. That’s a mistake.

A few places to start

The five ideas above are ways of thinking about the leadership challenge. But leaders often ask for something more concrete — something they can actually do in the next thirty days. Here are three.

Clarify decision rights.

Pick three to five decisions that currently land on your desk and ask yourself honestly: does this actually need me? For each one, define clearly who owns the call, what guardrails apply, and what “getting it wrong” looks like so people know the real risk level. Write it down and share it with the team. When people know exactly where their authority begins and ends, they stop waiting for permission and start building the judgment you need them to have.

Run a regular AI share-out.

Carve out fifteen minutes in an existing team meeting once a month for people to share how they’ve been using AI — what worked, what didn’t, and where they’re still skeptical. The format matters less than the habit. When the leader goes first and is honest about their own experiments and missteps, it signals that this is a safe conversation to have. Over time, your team builds collective judgment about what these tools actually do well in your specific context, which is far more useful than any generic training.

Do a task-level audit with your team.

Ask each person to track their work by task for one week, not by role or project, and then look at it together. You’re not building a case for cuts; you’re finding where AI is already compressing time and where that capacity is going by default. It’s the most grounded starting point we’ve found for making AI adoption a real leadership decision rather than something that just happens to your team.

The bottom line

AI is changing what teams can accomplish in a given week. It hasn’t changed what teams need in order to work well together. People still need enough context to make sound decisions. They still need to trust each other enough to be honest. They still need leaders who take their development seriously.

The leaders who come through this era well will be the ones who’ve been building those things all along — and the ones who start building them now.

→ Read the companion piece: Career Security in the Age of AI by Devin Lemoine

March 24, 2026


Leadership Insights,
Straight to Your Inbox 

Get people-first strategies, practical tools, and the kind of leadership insight that actually moves teams forward.





For 40 years, we’ve partnered with companies to develop high-potential talent, design succession strategies, and transform leadership at every level.  
