AI Policy is Not Agent Policy
Most enterprises have an AI policy. Almost none have an agent policy. The two govern different things, and treating them as the same is how autonomous software ends up running in production under an acceptable-use clause written for people.
That AI policy was written sometime in the last eighteen months, usually in response to a board question, a regulator nudge, or the day someone in finance was spotted pasting customer data into ChatGPT.
It is a good document. It says reasonable things. Do not paste confidential data into public models. Use the approved enterprise tier. Disclose AI-generated content. Get InfoSec sign-off before training a model on customer data.
And it is almost completely useless for the thing that is actually coming next.
Because an AI policy governs how humans use AI. An agent policy governs what AI does on its own.
Those are not the same problem, and the controls you need are not the same controls.
Where AI Policy Stops
An AI policy is, at its core, an acceptable-use policy with a few extra clauses. It assumes a human in the driver's seat. A person opens a chat window, types a prompt, reads the output, decides what to do with it. The policy tells that person what is in bounds.
The control surface is the human. You train them. You audit their behavior. You revoke their access if they misuse it. The model itself is mostly passive, a tool the human picks up and puts down.
Typical AI policy clauses:
- Which models employees may use, and at which data classifications
- What data may be shared with which model tier
- Training data restrictions and IP considerations
- Disclosure requirements for AI-generated content
- Vendor review and model approval workflow
- Acceptable-use boundaries (no legal advice, no medical advice, no autonomous decisions about people)
All of this is real work and worth doing. None of it survives contact with an autonomous agent.
What Changes With Agents
An agent is not a person using AI. An agent is software that uses AI to decide what to do next, and then does it.
That sentence sounds small. It is not.
The moment you put an agent into production, three things change:
- The agent has its own identity. It logs into systems. It holds credentials. It makes API calls. It is, for all practical purposes, a non-human user of your stack.
- The agent takes actions, not just answers questions. It moves data. It creates tickets. It updates records. It sends emails. It calls other agents.
- The agent operates without a human in the loop on every step. That is the entire value proposition. If a human had to approve every action, you would not need an agent, you would need a faster intern.
An acceptable-use policy aimed at humans cannot govern any of this. There is no human at the keyboard to discipline. There is no prompt to review. There is no "did the employee follow the policy" because there is no employee.
You need a different document. Worse, you need different controls.
What an Agent Policy Actually Covers
An agent policy is closer in spirit to an identity and access policy than to an AI policy. It governs a non-human worker. It has to answer questions that an AI policy does not even ask.
The core sections of a usable agent policy:
1. Identity and provenance. Every agent has a unique, attributable identity. Not a shared service account. Not a developer's personal token. A registered agent identity with an owner, a purpose, and a lifecycle. You should be able to ask, at any moment, "what is this agent, who owns it, and why does it exist."
2. Scope of authority. Which systems the agent may touch. Which data classifications it may read. Which actions it may take without human approval, which require approval, and which are flatly prohibited. This is the agent's job description, written in policy form. One way to encode it is sketched after this list.
3. Action limits and rate boundaries. An agent that can call an API once per hour is fundamentally different from one that can call it ten thousand times per minute. Agent policy sets the upper bound. It also sets escalation thresholds: at what volume, value, or sensitivity does an action require a human signature.
4. Human-in-the-loop requirements. Not all actions are equal. Reading a public web page is different from sending money. Agent policy defines, by action class, where a human must approve before execution. This is the difference between an agent that books a meeting and one that wires funds.
5. Audit and attribution. Every action the agent takes is logged in a form that survives a regulator, a board, or a lawsuit. Not "the AI did it." Not "the model produced this output." Which agent, acting on whose behalf, took which action against which system at which time, and what was the basis for the decision. If you cannot produce that record, you cannot defend the program.
6. Decommissioning. It must be possible to turn an agent off cleanly. Credentials revoked. In-flight work either completed or rolled back. Logs preserved. This is the part everyone forgets until they need it.
7. Inter-agent governance. Once you have more than one agent, you have a coordination problem. Which agent can invoke which other agent. What happens when two agents disagree. Who arbitrates when an agent calls a tool that another agent owns. This is the part nobody had to think about a year ago and everyone will be thinking about a year from now.
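To make the first four sections concrete, here is a minimal policy-as-code sketch. Everything in it is an assumption for illustration: the field names, action classes, and return values are invented, not a standard, and a real implementation would live in whatever identity and policy tooling you already run.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Approval(Enum):
    """How much human involvement an action class requires (section 4)."""
    AUTONOMOUS = "autonomous"    # agent may act on its own
    HUMAN = "human_approval"     # a named human must sign off first
    PROHIBITED = "prohibited"    # never allowed, with or without approval


@dataclass
class ActionRule:
    """Policy for one class of action: approval level, rate boundary, escalation."""
    approval: Approval
    max_calls_per_hour: int = 0                 # rate boundary (section 3); 0 = not permitted
    escalation_threshold: float | None = None   # value above which a human must sign anyway


@dataclass
class AgentPolicy:
    """Sections 1 and 2 as data: identity, provenance, and scope of authority."""
    agent_id: str          # unique, attributable identity, not a shared service account
    owner: str             # a real human name
    purpose: str
    review_due: date       # lifecycle: the registration expires unless reviewed
    allowed_systems: set[str] = field(default_factory=set)
    allowed_data_classes: set[str] = field(default_factory=set)
    actions: dict[str, ActionRule] = field(default_factory=dict)


def check_action(policy: AgentPolicy, action: str, system: str,
                 calls_this_hour: int, value: float = 0.0) -> str:
    """Return 'allow', 'needs_human', or 'deny' for a proposed action."""
    rule = policy.actions.get(action)
    if rule is None or rule.approval is Approval.PROHIBITED:
        return "deny"            # anything not explicitly granted is denied
    if system not in policy.allowed_systems:
        return "deny"            # outside the agent's scope of authority
    if calls_this_hour >= rule.max_calls_per_hour:
        return "deny"            # rate boundary exceeded
    if rule.approval is Approval.HUMAN:
        return "needs_human"
    if rule.escalation_threshold is not None and value >= rule.escalation_threshold:
        return "needs_human"     # over the escalation threshold: requires a signature
    return "allow"
```

The specific schema matters less than the property it has: scope, rate boundaries, and human-in-the-loop requirements become something a system can enforce and an auditor can read, not just a paragraph in a PDF.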
A Concrete Example
Consider a single use case: an agent that handles inbound vendor invoices.
Under an AI policy, you would write rules like: "Employees may use the approved AI model to summarize vendor invoices for internal review." Done. The human is still doing the work.
Under an agent policy, the same use case explodes into a real document:
- This agent is named "AP-Intake-01." It is owned by the Finance Operations team. Its purpose is to process inbound vendor invoices.
- It may read mail from the ap-inbox@ shared mailbox. It may not read any other mailbox.
- It may create draft invoice records in the AP system. It may not approve them.
- It may classify vendors against the existing vendor master. It may not create new vendors.
- It must flag invoices over fifty thousand dollars for human review, and any invoice from a vendor not seen in the last twelve months.
- It may not initiate any payment under any circumstances.
- Every action is logged with the source email message ID, the model version that produced the classification, and the confidence score (one way to structure that record is sketched after this list).
- The agent is reviewed quarterly. Its scope may not be expanded without a change ticket signed by the Finance Operations owner and the CISO.
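One way to see why this is a governance artifact rather than a paragraph is to express the escalation rules and the audit record as code. This is a sketch under assumptions: the function and field names are illustrative, and only the thresholds and logged attributes come from the policy above.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone
import json

LARGE_INVOICE_THRESHOLD = 50_000             # invoices at or above this go to a human
DORMANT_VENDOR_WINDOW = timedelta(days=365)   # "not seen in the last twelve months"


@dataclass
class AuditRecord:
    """One logged action: which agent, what it did, and on what basis."""
    agent_id: str
    action: str
    source_message_id: str    # the inbound email this invoice came from
    model_version: str        # which model produced the classification
    confidence: float
    flags: list[str]
    timestamp: str


def triage_invoice(amount: float, vendor_last_seen: datetime | None,
                   message_id: str, model_version: str, confidence: float) -> AuditRecord:
    """Apply AP-Intake-01's escalation rules and produce the audit record."""
    flags = []
    if amount >= LARGE_INVOICE_THRESHOLD:
        flags.append("over_threshold")        # must be routed to human review
    now = datetime.now(timezone.utc)
    if vendor_last_seen is None or now - vendor_last_seen > DORMANT_VENDOR_WINDOW:
        flags.append("unfamiliar_vendor")     # vendor not seen in the last twelve months
    action = "flag_for_review" if flags else "create_draft_invoice"
    return AuditRecord(
        agent_id="AP-Intake-01",
        action=action,                        # never "approve", never "pay"
        source_message_id=message_id,
        model_version=model_version,
        confidence=confidence,
        flags=flags,
        timestamp=now.isoformat(),
    )


# Illustrative run: a 62,000 dollar invoice from a vendor last seen two years ago.
record = triage_invoice(
    amount=62_000,
    vendor_last_seen=datetime.now(timezone.utc) - timedelta(days=700),
    message_id="<hypothetical-message-id>",
    model_version="classifier-v3 (illustrative)",
    confidence=0.91,
)
print(json.dumps(asdict(record), indent=2))
```

Notice what is absent: approving, paying, and creating vendors do not exist as code paths at all, which is the point of writing the scope down before the agent ships.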
That is not a paragraph in an AI policy. That is a governance artifact for a non-human employee. It looks a lot more like an offer letter than an acceptable-use clause.
Why This Distinction Matters Now
Most of the agents already running inside enterprises today are governed under an AI policy by default, which is to say, not really governed at all. The policy was written for humans typing into chat windows, and the agent is doing something completely different.
This is fine until it isn't. The moment an agent takes an action that produces a meaningful business consequence (a wrong payment, a leaked record, a deleted ticket, an embarrassing email), the question gets asked: under what policy was this thing operating, who approved it, and what are the controls.
"The AI policy covers it" is not going to be a satisfying answer to a regulator, a customer, or a board.
The work is not to throw away your AI policy. The work is to recognize that you now need two documents, governing two different things:
- AI Policy governs how humans use AI as a tool.
- Agent Policy governs how AI operates as a non-human actor inside your enterprise.
Both are necessary. Neither covers what the other covers. Treating them as the same document is how you end up with autonomous software running in production under an acceptable-use clause written for people.
Where to Start
If you have an AI policy and you do not yet have an agent policy, the practical first steps:
- Inventory your agents. You probably have more than you think. Anything in your SaaS stack with "autopilot," "auto-action," "auto-resolve," or "agent" in the marketing copy. Anything your engineering team built on top of an LLM API. Anything embedded in your low-code platform that takes actions on its own.
- Assign each one an owner. A real human name. Not a team alias.
- Write down what each agent is allowed to do. Not what it can do. What it is allowed to do. The gap between those two is where your governance problem lives; a short sketch after this list shows one way to surface it.
- Decide where humans must stay in the loop. Not by feel. By policy. Documented. Reviewable.
- Make sure every action is logged in a way you could hand to a regulator. If you cannot, you do not have an agent program, you have an agent risk.
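That gap between "can do" and "allowed to do" is mechanical to surface once both lists exist. A rough sketch, assuming you can export granted permissions from your identity provider and that the agent policy is written down in machine-readable form; the agents and permission names below are invented for illustration:

```python
# Illustrative only: "granted" would come from your identity provider or SaaS
# admin console; "allowed" is the agent policy, written down and versioned.
granted = {
    "AP-Intake-01": {"read:ap-inbox", "create:draft-invoice", "create:vendor", "send:email"},
}
allowed = {
    "AP-Intake-01": {"read:ap-inbox", "create:draft-invoice"},
}

for agent, grants in granted.items():
    excess = grants - allowed.get(agent, set())
    if excess:
        # Permissions the agent holds but the policy never authorized.
        print(f"{agent}: revoke or justify {sorted(excess)}")
```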
This is the work. It is not glamorous. It is not the part of AI that gets press releases. But it is the difference between agents that quietly do useful work and agents that quietly create your next incident report.
An AI policy got you to the starting line. An agent policy is how you actually run the race.