Project Management for the Agentic Stakeholder

I was reading my own inbox folder when it clicked. Not the email kind, the one that lives at _inbox/ inside all projects developed with Take AI Bite, where messages from one repository land for another to pick up at the next session start. I had built it months earlier to stop losing observations between projects, populated it without ceremony, processed it without ceremony, moved entries to done/ when finished. It worked. I had not given it a name beyond “the inbox,” and I had not asked where the pattern came from.
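The mechanics are simple enough to sketch. Here is a minimal, hypothetical Python version of the pattern; only the `_inbox/` and `done/` folder names come from the methodology, and the message format and function names are my own illustration:

```python
from datetime import date, datetime
from pathlib import Path

def send(inbox: Path, sender: str, subject: str, body: str) -> Path:
    """Drop a message file into another project's _inbox/ folder."""
    inbox.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    msg = inbox / f"{stamp}-{sender}-{subject}.md"
    msg.write_text(f"# {subject}\nFrom: {sender}\nDate: {date.today()}\n\n{body}\n")
    return msg

def process(inbox: Path) -> list[str]:
    """At session start: read every pending message, then move it to done/."""
    done = inbox / "done"
    done.mkdir(parents=True, exist_ok=True)
    bodies = []
    for msg in sorted(inbox.glob("*.md")):
        bodies.append(msg.read_text())   # in practice: load into session context
        msg.rename(done / msg.name)      # lifecycle: _inbox/ -> done/
    return bodies
```

One repository calls `send` pointed at another repository's `_inbox/`; the receiving project calls `process` at its next session start. That is the whole channel: a sender, a receiver, a format, and a lifecycle.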

The pattern came from a chapter I had read in a different life. In 2016 I sat for the Project Management Professional certification, the PMI exam that walks you through ten knowledge areas, each with its own processes, inputs, outputs, and tools. One of those knowledge areas is Communication Management. It is the discipline of moving information between stakeholders deliberately rather than hoping it shows up where it needs to be, and PMI treats it with the seriousness it deserves: who needs to know, in what form, by when, through which channel, and how you confirm receipt.

The inbox folder is Communication Management. I had built a Communication Management Plan inside a software project and not noticed.

That recognition turned into a question. If one knowledge area was already running in the project under a different name, were the others?

The honest answer is more interesting than the recognition, because it points back in time. Communication Management was the first knowledge area I recognized inside the methodology, but it was not the first I had intended to put there. The first was Scope. Before Take AI Bite existed, before DSM 6.0 named the principles, before any of this had the shape it has now, I had already written DSM_2.0, the Project Management Guidelines, as one of the earliest documents in the methodology. Scope was the layer I started from, deliberately, because I knew from the PMP that a project without a defined scope is a project that will drift. The funny part is that the file number gives it away: DSM 2 came right after DSM_1.0_Data_Science_Collaboration_Methodology, which means I built the PM chapter before I had thought to name any of this as a framework. I was writing project management into a data science methodology because I knew the data science work needed it, and I did not yet see that the same layer would hold the rest of the structure up later.

Fun note: DSM initially stood for “Data Science Methodology”; it later evolved to “Deliberate Systematic Methodology.” The acronym was not only everywhere in the documentation but had also grown on me, so I kept it.

So the recognition is not that PM showed up uninvited. It is that PM was the first thing I invited in, and the rest of the knowledge areas walked in quietly behind it, over months, without announcing themselves.

The credential I thought I had retired

The path that took me from a Colombian engineering school to deploying machine learning models and building LLM applications is not a straight line, and the project management chapter in the middle of it is not a chapter I expected to come back to with this much weight. I took the PMP because I was leading teams of engineers and designers and the discipline of running projects deliberately seemed worth learning from people who had codified it. I took it seriously while I was studying for it, used it in client engagements, layered Agile practice on top of it, and moved on. It was the ground under the work, not the work itself.

The part of Project Management I liked most was the analytical side, where processes and deliverables were broken into work packages. That interest led me to Lean Management, then to Six Sigma and deeper statistical analysis, and then came Python… and with it Data Science. It all makes so much sense now!

When I started building Take AI Bite, the framework I now use to deliver client work and contribute to open source, I did not think I was returning to the PMP curriculum. I thought I was solving a different problem: how does the human stay meaningfully present when the machine can produce faster than anyone can read? The principles I codified, the protocols I wrote, the operational channels I built, all came from sessions where something failed and I wanted it to fail less the next time. None of it felt like project management. It felt like figuring out how to collaborate with an agent that does not get tired and does not always know what to do or when to stop, but will always have something to say, even when it is wrong: we have all heard of hallucinations.

Then I noticed the inbox. And then I started looking at the rest of the system through the lens of the curriculum I thought I had retired, and I kept finding the same disciplines doing work I had attributed to something else.

Ten knowledge areas, most of them already in view

The PMP curriculum organizes project management into ten knowledge areas. When I went back and mapped each of them against what the methodology was actually doing, I found full coverage in seven of the ten, partial coverage in two more, and one area where the mapping is genuinely weak. I will not walk through them with the rigor a PMP study guide deserves. The point is not to claim mastery; the point is that once you see one of these, you start seeing the others, and the coverage is closer to complete than to sparse.

Scope Management is where the story started. DSM 2, written early in the methodology’s history, defined how scope gets set and reviewed for a project: the MUST, SHOULD, COULD framework, the scope review checkpoint at sprint boundaries, the rule that every backlog item gets a single topic and splits when it tries to become two. Those are not PM rituals transplanted into software; they are scope discipline applied to the work the methodology exists to govern. PMI would recognize them without prompting.

Communication Management is the inbox, the feedback files that route methodology observations from spoke projects back to the central methodology repository (the Hub, in the public-facing language), and the blog itself, which is the outbound channel for what the work produces. Each of those is a deliberate path with a sender, a receiver, a format (template), and a lifecycle. PMI would recognize the structure on a flowchart.

Integration Management is the discipline of holding the system together when its parts try to drift. In Take AI Bite that is the @ reference chain, the line of inheritance that lets every project automatically pick up the protocols defined in the Hub, and the mirror sync mechanism that pushes methodology updates to the public repositories that distribute them.

There is also a second layer to it, and I want to be honest about how I got there, because it was a question before it was a tool. The methodology grew past the size where any single human, or any single context window, can hold the connections in mind. The question that started bothering me was simple. If I cannot keep the topology of the methodology in my head, and the agent cannot keep it in its context window either, who is keeping it? Where does it live? Graph Explorer came out of that question, not the other way around. It parses the documentation network, extracts cross-references and dependencies, and represents the whole methodology as a logical graph: nodes for documents, edges for the connections between them. The graph is an integration artifact in its own right. It is the topology of how the parts relate, kept current as the parts change, and it is the first step toward a persistent memory that does not depend on re-reading individual files at the start of every session.

PMI treats Integration Management as the knowledge area that coordinates all the others into a coherent whole, and this is the area where the methodology does something the PMP curriculum did not have to do at this scale: coordinate changes across a multi-repository ecosystem with an inheritance chain, a propagation mechanism, and a queryable representation of the system’s own structure. Of all ten areas, this is the one that most clearly extends the curriculum rather than reusing it.
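A toy version of the extraction step gives the idea. The `@Name` reference convention and everything else in this sketch are assumptions for illustration; the post does not specify Graph Explorer's actual parsing rules:

```python
import re
from pathlib import Path

# Assumed convention: documents reference each other with @Name mentions.
REF = re.compile(r"@([A-Za-z0-9_/-]+)")

def build_graph(docs_dir: Path) -> dict[str, set[str]]:
    """Nodes are documents; edges are the cross-references found inside them."""
    return {doc.stem: set(REF.findall(doc.read_text()))
            for doc in docs_dir.glob("*.md")}

def unreferenced(graph: dict[str, set[str]]) -> set[str]:
    """Documents nothing points at: where drift tends to start."""
    referenced: set[str] = set().union(*graph.values()) if graph else set()
    return {node for node in graph if node not in referenced}
```

Even this toy already answers a question no single context window can: which documents the rest of the network has quietly stopped pointing at.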

Schedule Management is the session lifecycle. Every working session has a start: dsm-go (a command that initializes context, checks for pending messages, verifies the branch state, loads the memory file) and a wrap-up: dsm-wrap-up (a command that saves the memory, pushes the feedback, archives the transcript). Sessions are not “open the editor and start typing.” They are time-boxed units of work with declared scope and a discipline at each boundary, which is exactly what schedule management asks of any team that wants its calendar to mean something.

Cost Management is the context budget. The agent’s context window is finite. Every file read, every tool call, every back-and-forth turn consumes a resource that runs out, and when it runs out the second half of the session degrades silently. PMI calls this cost; the methodology calls it context budget; the discipline is identical. Estimate before you spend, warn when you are running low, scope the work to the resource you actually have.
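The discipline reads naturally as code. A hypothetical sketch of the estimate-warn-refuse loop, where the token figures and the 20% warning threshold are illustrative rather than the methodology's actual numbers:

```python
class ContextBudget:
    """Estimate before you spend; warn when low; refuse when exhausted."""

    def __init__(self, total_tokens: int, warn_at: float = 0.2):
        self.total = total_tokens
        self.remaining = total_tokens
        self.warn_at = warn_at  # fraction of budget left that triggers a warning

    def spend(self, estimated_tokens: int) -> str:
        if estimated_tokens > self.remaining:
            return "refuse: re-scope the work to the resource you actually have"
        self.remaining -= estimated_tokens
        if self.remaining < self.total * self.warn_at:
            return "warn: running low, prefer summaries over full file reads"
        return "ok"
```

The point of the sketch is the order of operations: the estimate happens before the spend, and the warning fires before the silent degradation would.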

Quality Management in this work happens at the input stage, not the output stage. The methodology builds quality in by standardizing the shape of every deliverable before it is written, through templates. There is a sprint plan template, a backlog item template, a daily checkpoint template, a feedback file template, a blog post template, the MUST/SHOULD/COULD framework that structures scope decisions, and a long list of others. Each template makes the missing pieces visible before they become defects: the fields are named, the sections are sequenced, and the protocols that govern how a piece of work should look are encoded into the structure of the file the work goes into. When the next backlog item (BL) gets created, the template tells the agent and the human what a complete BL contains, so the conversation is about the substance, not the format. The Graph Explorer I described in the Integration paragraph also runs as the inspection layer on top of this prevention work, validating cross-references and structural integrity before any change merges. PMI’s Quality Management is about preventing defects from reaching the deliverable; templates are how the methodology applies that discipline to instruction artifacts at the moment of creation, the same way a construction spec prevents defects before the inspector ever shows up on site.
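Prevention-by-template is mechanically checkable. A minimal sketch, with section names invented for illustration (the real templates define their own):

```python
# Hypothetical required sections; the methodology's actual templates differ.
REQUIRED_SECTIONS = {
    "backlog_item": ["Topic", "Priority (MUST/SHOULD/COULD)", "Acceptance Criteria"],
}

def missing_sections(kind: str, draft: str) -> list[str]:
    """Quality at the input stage: name what is absent before it becomes a defect."""
    return [s for s in REQUIRED_SECTIONS[kind] if f"## {s}" not in draft]
```

A check like this runs before the conversation starts, which is the whole point: the defect surfaces as a named missing field, not as a review comment three sessions later.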

Risk Management is the failure mode catalog and the response protocol that goes with it. When something deviates from the prescribed behavior, there is a three-step response: fix the immediate issue, identify the root cause, prevent the recurrence. PMI builds risk registers for the same purpose. The names differ; the structure does not.

Those are seven, and they are the ones I would be comfortable defending in front of a study group. Two more are partially covered and deserve their own honest paragraph rather than a clean claim.

Resource Management in the PMP sense is about allocating people, tools, and materials to the work. In the methodology it shows up as the session configuration recommendation (which model, which effort level, which thinking mode, matched to the planned scope), the environment preflight that checks for the tools the session will need, and the ecosystem path registry that tracks which repositories exist and how they relate. These are the right shapes for Resource Management, but the discipline is less developed than the others. I know what the extension should look like; I have not finished building it.

Stakeholder Management is where the PMP framing and the methodology framing start to diverge in an interesting way, and I will come back to this one in the next section, because it is the area where the curriculum needed extension, not just translation.

Which leaves one area where the mapping is weak, at least in Take AI Bite. Procurement Management, in the curriculum, governs how a project acquires external services, vendors, licenses, and contracts. The methodology has a setup checklist for verifying that the tools a project needs are present and reachable, but a setup checklist is not a procurement discipline. This is the one knowledge area where I do not yet have a mapping that earns itself, and I would rather name the gap than paper it over. It is the most interesting open question in the set, because the agentic work has its own version of procurement (which model, which subscription tier, which third-party service or skill, with what licensing terms for the outputs), and the discipline that would govern it does not exist in the methodology yet.

Seven full, two partial, one open. The disciplines transferred almost intact, and I had been using them for months before I recognized whose vocabulary they came from.

The stakeholder the curriculum did not have

This is where the mapping stops being a translation exercise and starts being an extension.

PMI’s framing of project management has a stakeholder model at its core. A stakeholder is anyone whose interests, expectations, or actions affect or are affected by the project. Sponsors, customers, team members, regulators, the people next door to the construction site. The discipline is built around managing them: identifying them, understanding their interests, communicating with them at the right cadence, getting their input into the decisions that affect them. The whole curriculum assumes the stakeholders are human.

The work I do now has two kinds of stakeholders. The human collaborator is one of them: me, the client, the reviewer, the eventual reader of the deliverable. The AI agent is the other one. It has expectations (what counts as a valid response, what counts as a violation), it has interests in the methodology sense (the protocols it operates under shape what it can do and how it does it), and its actions affect the project in ways that need to be communicated, reviewed, and integrated. It is not a tool in the way a compiler is a tool. It is a participant whose behavior is governed by an instruction system that I author and maintain, and that instruction system needs to be managed with the same care PMI asks me to give to a human stakeholder.

This is the part of the PMP curriculum I could not have known I would need. The 2016 exam did not have a chapter on managing an agent that responds to a system prompt, that inherits behavioral protocols through a reference chain, that needs feedback loops to improve, that has its own failure modes when its context drifts. None of that vocabulary existed in the form it has now. But the underlying disciplines, the ones the curriculum spent weeks teaching me, transferred almost without modification.

This is not a stretch I came to on my own. When I scoped out the mapping between PMP and Systems Prompt Engineering inside the methodology, the analysis landed on the same conclusion under its own momentum: the PMP’s Stakeholder Management area reframes cleanly for the agentic case, and the instruction system that governs the agent is, structurally, a stakeholder management plan for a stakeholder the curriculum did not have to consider. That finding had been sitting in the methodology’s backlog archive for weeks before I put it in a sentence here. The reframing is in the record; I am describing it, not inventing it.

Traditional project management manages one kind of stakeholder; this work manages two. The instruction system that governs the agent is not a side artifact; it is the project management plan for the second stakeholder. CLAUDE.md is a stakeholder communication document. The session-start command is a stakeholder onboarding protocol. The feedback files are a stakeholder feedback channel. The version updates are stakeholder change management.

I am not stretching the metaphor. The artifacts are doing the work the metaphor describes.

What this means for the credential, and for the discipline

I used to think of the PMP as a chapter that closed when I moved into data science. The work I did after, the BI systems, the process mining, the machine learning deployments, the framework I now build and use every day, none of it was project management in the form I had been certified for. It felt like a different field with different conventions and a different vocabulary, and the certification was a credential I had earned and moved past.

It turns out the credential was not closing. It was waiting for the second stakeholder that did not exist in 2016.

Systems Prompt Engineering is not project management with a new name, and I want to be careful about that. PMI does not have the concept of an instruction system that governs a non-human participant; it does not have the concept of a context window or a system prompt or a reference chain or a feedback loop between an agent and the documents that shape its behavior. These are real additions, not relabelings. But the disciplines that PMI codified for managing complexity (communication, integration, schedule, cost, quality, risk) are the disciplines that hold the second stakeholder’s project together too. The curriculum was not wrong about what work needs to happen. It was incomplete about who the work would eventually need to happen for.

A decade of structured project delivery is the ground this stands on, and I wrote that line in my own About page months ago without fully understanding what it would mean to write it. It meant that when I built Take AI Bite I was not building from nothing. I was building the missing chapter of a curriculum I had already studied, and the work I had thought was a detour had actually been preparation. The credential is not being repurposed. It is being completed.

I should be careful here, because PMI has not been standing still. Since I took the exam in 2016, the institute has launched a dedicated credential for managing AI delivery projects (PMI-CPMAI), shipped its own GenAI assistant for project managers (PMI Infinity), and the upcoming PMBOK 8th edition embeds AI as a tool and technique across the performance domains, including the use of natural language processing to read stakeholder communications. PMI is already in the AI conversation, and seriously so. The direction it has taken is twofold: credentialing the projects that deliver AI systems, and putting AI in the hands of the project manager as a tool for doing existing PM work better. Both are real and useful directions. Neither of them is the one this post is pointing at. The framing here is a third direction: treating the instruction system that governs the agent as a stakeholder management plan for a stakeholder the curriculum did not have to consider. PMI uses AI to help the PM manage human stakeholders. This work extends Stakeholder Management to cover the agent itself, as a participant whose behavior is shaped by an artifact the human authors and maintains. Different framings, complementary rather than competing, but distinct enough to be worth naming.

There is something I find honest about that. Not many things in a working life turn out to fit together that cleanly, and I am wary of stories where the past lines up too neatly with the present. This one lines up because the underlying disciplines are general, not because I was clever about anticipating where the work would go. PMI codified what it takes to hold a complex project together with multiple stakeholders, evolving requirements, distributed information, finite resources, and recurring failure modes. Of course those disciplines transfer to a project where one of the stakeholders happens to be an agent… an AI agent to be precise. The surprise is that I had to spend a year building the second half of the curriculum from scratch before I noticed which curriculum it was.

I think the most accurate way to locate this work is to place it in a lineage. PMBOK and the PMP sit in a family of methodologies that exists to improve practitioner outcomes by providing shared structure: Scrum, the scientific method, the encyclopedia tradition, the whole history of bodies of knowledge that human beings build so that the next human does not have to start from zero. None of those methodologies do the work for the practitioner. They organize the work so the practitioner can do it well, communicate it faithfully, and hand it to someone else without losing what matters. Systems Prompt Engineering belongs in that family. It is a methodology in the same sense the PMBOK is a methodology, and it inherits the same obligation: to organize the work so the human stays in control of direction and the AI stays useful without running away with the delivery.

The Procurement gap is the honest reminder that the inheritance is incomplete. I would rather name that than varnish it. But the rest of the mapping holds, and it holds in a direction I did not plan for. Seven knowledge areas in full view, two more in partial view, one open question, and one stakeholder the 2016 exam never had to think about. The discipline that learned how to manage the first kind of stakeholder has plenty to teach about managing the second, and the project management community has not yet gone in this specific direction.

The PMP I took in 2016 is running my AI collaboration. I just did not have the vocabulary.