Engineering · 2026-04-01 · 10 min read

Agentic AI Needs a Control Plane, Not Just Prompts

Google Workspace's agentic expansion and NVIDIA's BlueField-4 STX point to the same shift: AI is becoming an operations-architecture problem. This post reframes AI workloads through a control-plane lens.



Recent AI product announcements point in the same direction. The industry is moving beyond smarter models and toward systems that can connect tools, read state, create documents, and execute work.

Google is expanding Gemini across Docs, Sheets, Slides, and Drive to automate more of the work loop. NVIDIA is addressing the long-context and state-heavy side of agentic inference with infrastructure like BlueField-4 STX. OpenAI is also emphasizing controllability and monitorability for reasoning models.

The signal is clear:

The bottleneck in AI systems is shifting from model quality to operating structure.


1) Agentic AI is a coordination problem before it is a conversation problem

Traditional chatbots only need to handle input and output well.
Agentic systems have a much longer path:

  • interpret intent
  • decompose the task
  • select tools
  • verify permissions
  • read external state
  • validate the result
  • retry or roll back on failure

If any step in that path is brittle, the whole experience breaks.
So the real issue is not prompt engineering. It is the orchestration layer.
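The path above can be sketched as a small orchestration loop. This is a minimal illustration, not a real framework; every name here (`Step`, `run_task`, the retry count) is hypothetical, and a production orchestrator would also need rollback of partially completed work.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str                   # tool selected for this step
    action: Callable[[], str]   # the actual call (reads state, acts)
    allowed: bool               # result of the permission check

def run_task(steps: list[Step], validate: Callable[[str], bool],
             max_retries: int = 2) -> list[str]:
    """Execute each step with permission checks, validation, and retries."""
    results = []
    for step in steps:
        if not step.allowed:                      # verify permissions first
            raise PermissionError(f"{step.tool} not permitted")
        for _attempt in range(max_retries + 1):
            result = step.action()                # read external state / act
            if validate(result):                  # validate the result
                results.append(result)
                break
        else:
            # stop rather than continue blind; a real system would roll back
            raise RuntimeError(f"{step.tool} failed after retries")
    return results
```

The point of the sketch is that most of the code is not the model call: it is permission checks, validation, and failure handling.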


2) Why the control plane deserves its own category

When teams adopt agentic AI, they often focus on the model and the tools.
In practice, these layers matter more:

  • Policy layer: what can run automatically, and what requires approval
  • Permission layer: how read, write, payment, and deployment access are separated
  • State layer: where conversation context, work context, and external state live
  • Observability layer: how every action is traced and audited
  • Recovery layer: how the system restarts, retries, or safely degrades

Without these layers, the model can be smart and the system can still be unstable.
That is why I prefer to think of agentic AI as a distributed system with a control plane, not just a feature on top of a model.
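As a concrete sketch, the policy, permission, observability, and recovery layers can all be expressed as a gate in front of every tool call. The policy table and identifiers below are assumptions for illustration only; the shape (default-deny, audit every decision, degrade to "pending approval" instead of failing) is the point.

```python
import time

POLICY = {
    # action kind -> what the control plane allows
    "read":    "auto",      # policy layer: runs without approval
    "write":   "approval",  # requires a human gate
    "payment": "deny",      # never automated
}

AUDIT_LOG = []              # observability layer: every decision is recorded

def authorize(action_kind: str) -> str:
    """Return 'auto', 'approval', or 'deny'. Unknown actions are denied."""
    return POLICY.get(action_kind, "deny")

def gated_call(action_kind: str, tool: str, run, approved: bool = False):
    decision = authorize(action_kind)
    AUDIT_LOG.append({"ts": time.time(), "tool": tool,
                      "kind": action_kind, "decision": decision})
    if decision == "deny":
        raise PermissionError(tool)
    if decision == "approval" and not approved:
        # recovery layer: degrade safely instead of acting without sign-off
        return {"status": "pending_approval"}
    return {"status": "ok", "result": run()}
```

Note that the model never appears in this code: the control plane is model-agnostic by design.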


3) MCP is a connection standard, not an operating policy

Model Context Protocol (MCP) is extremely useful for standardizing tool connectivity.
For teams wiring multiple tools together, it is close to table stakes.

But MCP does not finish the architecture for you.

What MCP solves is mostly the connection layer.
What usually breaks in production is elsewhere:

  • which servers are trusted
  • which tools are opened in which context
  • who approves write operations
  • where logs and audit trails are stored
  • how failures are isolated

So MCP is the plumbing. The control plane is the operating system. Plumbing alone does not make a factory run.
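Two of those gaps, server trust and per-context tool exposure, can be sketched in a few lines. To be clear, this is not the MCP SDK API; the allowlists and names are hypothetical, and the sketch only shows the policy decision MCP intentionally leaves to you.

```python
TRUSTED_SERVERS = {"internal-docs", "crm"}          # which servers are trusted

TOOLS_BY_CONTEXT = {                                # which tools per context
    "support": {"search_docs", "summarize_ticket"},
    "finance": {"search_docs"},
}

def expose_tools(server: str, context: str) -> set[str]:
    """Decide which tools an agent sees, before any connection is made."""
    if server not in TRUSTED_SERVERS:
        return set()                                # untrusted: expose nothing
    return TOOLS_BY_CONTEXT.get(context, set())
```

An unknown server or an unknown context yields an empty tool set, which is the default-deny posture a control plane should start from.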


4) Infrastructure is already changing for agentic inference

NVIDIA's BlueField-4 STX is not just a hardware announcement.
The important part is that agentic AI is becoming heavier on long context, persistent state, and complex I/O.

That means infrastructure can no longer be optimized only for "fast token generation."

You now need more than GPU throughput:

  • memory and storage designs that can sustain long context
  • paths that reduce tool-call and data-movement overhead
  • cache and batching strategies that keep costs under control
  • network and permission boundaries that contain failures and latency spikes

AI performance is becoming a function not only of model capability, but also of data movement cost and state management quality.
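One cost-control idea from the list above can be shown in miniature: caching identical tool calls so repeated reads do not pay the data-movement cost twice. This is a minimal TTL cache sketch under assumed names; a production version would also bound size and handle eviction.

```python
import time

class ToolCache:
    """Cache tool results by (tool, args) for a fixed time-to-live."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}            # (tool, args) -> (timestamp, result)

    def get_or_call(self, tool: str, args: tuple, call):
        key = (tool, args)
        hit = self._store.get(key)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]                         # served from cache
        result = call(*args)                      # cache miss: pay the cost
        self._store[key] = (time.time(), result)
        return result
```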


5) What good agentic architecture should answer

A good system should be able to answer questions like these:

  • What is this agent allowed to automate, and what is explicitly off limits?
  • How are tool permissions minimized?
  • Where are the human approval gates?
  • Is recovery automatic or manual when things fail?
  • How do we evaluate quality?
  • Who can audit the system later?

If you can answer those questions, you have an operable system.
If you cannot, the demo may look good, but production will eventually expose the gaps.
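One way to make those questions non-optional is to encode the answers as a declarative spec that must be filled in before the agent ships. The field names below are illustrative, not a standard; the idea is that an unanswered question shows up as an empty field you can detect mechanically.

```python
from dataclasses import dataclass, fields

@dataclass
class AgentSpec:
    automated_scope: list[str]      # what the agent may automate
    off_limits: list[str]           # explicitly forbidden actions
    approval_gates: list[str]       # where a human must sign off
    recovery: str                   # "automatic" or "manual"
    quality_metrics: list[str]      # how quality is evaluated
    auditors: list[str]             # who can audit the system later

def unanswered(spec: AgentSpec) -> list[str]:
    """Return the names of questions left unanswered (empty fields)."""
    return [f.name for f in fields(spec) if not getattr(spec, f.name)]
```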


6) A practical way to start

If your team is introducing agentic AI for the first time, start here:

  1. Define the work boundary first.

    • read only
    • summarize/classify
    • execute after approval
    • fully automated execution
  2. Split tool permissions.

    • read-only tools
    • write tools
    • deployment/payment tools
  3. Structure execution logs.

    • user intent
    • tool calls
    • results
    • failure reasons
  4. Evaluate the flow, not just the prompt.

    • success rate
    • retry rate
    • average runtime
    • incorrect automation rate
  5. Design approval and blocking into the product.

    • "can do it" matters less than "only does it when it should."

Conclusion

The next step for agentic AI is not a bigger model.
It is a better control plane.

Models will keep getting stronger.
But for strong models to behave safely and predictably, they need policy, permissions, observability, and recovery on top of them.

So the question is no longer "which prompt should we use?" It is this:

Where is the operating layer of our AI system?
