When AI Stops Waiting for Your Instructions
I’ve been a vocal advocate for AI in project management. I wrote about it, experimented with it, even convinced skeptical team members to give it a chance. But my entire understanding of what AI could do in a project setting was built on one assumption: that I would always be the one asking the questions. The AI would respond. I would decide. That was the deal. Then, during a particularly complex phase of a large-scale platform migration, that assumption quietly fell apart.
We had been running AI assisted workflows for months. Automated documentation, drafted stakeholder updates, risk summaries pulled from project data. But when we started piloting agentic AI workflows, systems that could monitor progress across workstreams and surface recommended actions without anyone prompting them, the dynamic changed entirely. The system flagged a scheduling conflict between two parallel workstreams that none of us had caught. Nobody asked it to look. It just did. The room went quiet, and I could tell everyone was thinking the same thing I was. This isn’t a tool anymore. This feels like a new team member who just showed up to the standup.
That moment taught me something important about managing agentic AI. When the system isn’t waiting for your prompt, you need a clear governance model. In every project I’ve managed, the accountability chain has been straightforward: tasks get assigned, people execute, someone reviews. But when an agentic system autonomously reprioritizes a backlog item or drafts a status update based on patterns it detected, who owns that decision? The solution we found was defining three lanes: things the system could act on independently, things it could recommend but needed approval for, and things it was explicitly kept away from. Simple in theory. Much harder to enforce when the system keeps being right.
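As a rough illustration only, a three-lane policy like this can be expressed as a simple lookup that defaults to the most restrictive lane when an action isn’t recognized. The action names and lane assignments below are hypothetical, not the ones we actually used:

```python
from enum import Enum


class Lane(Enum):
    AUTONOMOUS = "act independently"
    RECOMMEND = "recommend, requires human approval"
    OFF_LIMITS = "explicitly out of scope"


# Illustrative mapping of action types to lanes.
POLICY = {
    "draft_status_update": Lane.AUTONOMOUS,
    "reprioritize_backlog": Lane.RECOMMEND,
    "change_project_scope": Lane.OFF_LIMITS,
}


def gate(action: str) -> Lane:
    """Look up an action's lane, defaulting to OFF_LIMITS for anything unknown."""
    return POLICY.get(action, Lane.OFF_LIMITS)
```

The important design choice is the default: an action the policy has never seen lands in the most restrictive lane, so the system has to earn autonomy rather than assume it.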
Here’s what surprised me the most. The resistance didn’t come from where I expected. I had braced for pushback from team members worried about being replaced. That barely came up. The real friction came from senior stakeholders who were uncomfortable with the idea that an AI had spotted a risk before they did. There’s something deeply human about wanting to be the person who catches the problem. When an autonomous system beats you to it, it doesn’t just save time. It challenges the identity that many experienced project leaders have built over years.
The lesson I took away is that the shift from AI as tool to AI as agent is not primarily a technology challenge. It’s a leadership challenge. You wouldn’t bring a new team member into a complex program without defining their role, setting boundaries, and establishing how they escalate. Agentic AI needs the exact same onboarding. I spent years learning to delegate effectively to people, understanding when to let go, when to trust, when to step back in. Now I’m applying those same instincts to a system that never gets tired and never forgets a dependency. But it also never reads the room and never knows when the right move is to slow down instead of optimize. That’s still where we earn our keep.

