Your next direct report might not be human. Are you ready to lead it?

We’ve spent decades building frameworks to help people lead teams: courses, certifications, coaching, culture decks. All aimed at shaping better managers of humans. But that’s no longer enough. Because for many workers, their first direct report won’t be a person. It’ll be an agent.
In June, BNY Mellon onboarded 1,000 digital workers, while JPMorgan Chase is building AI teams at scale. This isn’t theoretical. The new direct reports are already clocked in, and they don’t need coffee, feedback, or PTO.
The problem? Most organizations are still running on legacy management models built for human hierarchies, not set up to manage machines.
Leading humans versus governing agents
When you manage people, you guide behavior. You motivate, delegate, coach, and course-correct. It’s a loop built on trust and conversation.
When you manage an AI, none of that applies. You don’t coach a model. You govern it. You define inputs, monitor outputs, escalate issues, and answer for the consequences. And you do that in real time.
In AI-led teams, leadership is less about motivation and more about judgment. The ability to assess, adjust, and act across decision chains is what separates performance from liability.
It’s knowing what good looks like. It’s catching drift, asking the right question before the system generates the wrong answer, and being accountable for outcomes, even when you didn’t directly produce them.
The HR model is out of sync
HR isn’t ready for this shift. Most performance frameworks still assume linear paths, human reports, and long-term role tenure. But digital agents break that logic.
They don’t climb ladders. They execute tasks. They can outperform junior staff one day and be outpaced by a new model the next. You don’t manage their growth. You manage the conditions in which they operate.
That shift puts pressure on organizational design itself. Hierarchies built for human oversight don’t hold when decision loops involve systems acting faster than approvals can be processed.
That means rethinking how we define productivity, collaboration, and leadership. It means building new metrics for how human employees interact with agents, not just what they produce on their own.
Are they designing good prompts? Are they escalating ethical concerns? Are they reviewing outputs critically or rubber-stamping them? These are the new leadership signals. Most performance reviews aren’t built to detect them.
Prompting is a leadership act
Prompting isn’t a technical skill; it’s a management one.
The way you frame a prompt shapes what an agent does. Vague prompts lead to vague results. Biased prompts produce biased outcomes. And poor prompting isn’t just inefficient. It can become a legal or reputational risk.
Yet most companies treat prompting as if it were keyboard wizardry. Something for the engineers or the “AI power users.” That’s a mistake. Everyone managing agents, from interns to executives, needs to learn how to design clear, intentional instructions. Because prompts are decisions in disguise, shaped by where they sit in the organization and why they’re being made.
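To make that concrete, here is a minimal sketch contrasting a vague instruction with an intentional one. The scenario, names, and wording are illustrative assumptions, not a prescribed template, and the agent client is deliberately left out: the point is what a manager puts into the instruction, not the API call.

```python
# Two framings of the same request. All names here are illustrative.

VAGUE_PROMPT = "Summarize the candidate pool."

INTENTIONAL_PROMPT = """\
Role: You are assisting a hiring manager reviewing engineering applicants.
Task: Summarize the candidate pool by experience level and primary skills.
Constraints:
- Do not reference or infer age, gender, ethnicity, or other protected traits.
- Flag records with missing or inconsistent data instead of guessing.
Escalation: If the data itself looks skewed (e.g., one-sided sourcing),
say so explicitly rather than producing a ranking.
Output: Skill clusters with counts, plus a list of flagged records.
"""

def brief_agent(prompt: str) -> None:
    """Stand-in for dispatching work to an agent; here it just shows
    exactly what the agent would receive."""
    print(prompt)

brief_agent(INTENTIONAL_PROMPT)
```

The second version encodes managerial judgment up front: scope, constraints, and an escalation rule. That is what “decisions in disguise” looks like in practice.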
The ethics chain is breaking
In traditional teams, ethics and escalation follow a chain of command. Something goes wrong, someone flags it, and a manager gets involved. But with agents acting independently and often invisibly, the chain breaks.
You can’t escalate what you don’t notice. And too often, companies haven’t defined what ethical escalation looks like when the actor is synthetic.
Who’s accountable when an AI produces a discriminatory recommendation? Or leaks sensitive information? Or makes a decision a human wouldn’t? If your answer is “the tech team,” you’re not ready.
Governance can’t sit in the back office. It needs to be built into team workflows. The best companies are training their people to pause, question, and report, not just accept what the system spits out.
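One way to wire “pause, question, and report” into a workflow is a review gate between the agent and the world, as in the hypothetical sketch below. The trigger list and the escalation route are assumptions for illustration, not a standard.

```python
# A minimal sketch of a review gate: agent output passes through it
# before anything ships. Triggers and routing are illustrative only.

from dataclasses import dataclass

# Anything touching these topics gets held for a named human owner.
ESCALATION_TRIGGERS = ("salary", "medical", "protected class", "termination")

@dataclass
class AgentOutput:
    task_id: str
    text: str

def needs_human_review(output: AgentOutput) -> bool:
    """Pause-and-question rule: sensitive ground is never auto-approved."""
    lowered = output.text.lower()
    return any(trigger in lowered for trigger in ESCALATION_TRIGGERS)

def review_gate(output: AgentOutput) -> str:
    if needs_human_review(output):
        return f"ESCALATED: task {output.task_id} held for manager review"
    return f"RELEASED: task {output.task_id}"

print(review_gate(AgentOutput("t-41", "Recommend new salary bands by team")))
print(review_gate(AgentOutput("t-42", "Draft the weekly status summary")))
```

The specifics will differ by organization; what matters is that the escalation path lives in the workflow itself, not in a policy document nobody reads.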
Chain-of-thought reasoning isn’t just a cognitive trick. It’s how human teams will spot drift, bias, and breakpoints in the AI value chain. And that skill set is only going to grow in importance.
The bottom line
AI won’t replace all managers, but it will redefine what management means. Leading agents demands flexing a different muscle, and most organizations haven’t trained for it.
This isn’t about replacing soft skills with hard skills; it’s about replacing passive management with active stewardship: less people-pleasing and more decision accountability, fewer status meetings and more escalation pathways.
Managing machines still means leading people. But the people you lead need new tools, new rules, and a different playbook.
The companies that get this right won’t be the ones with the flashiest tech. They’ll be the ones that know how to change the game by managing what they’ve built.