AI agents aren’t people. They’re robots.
Joe Fletcher
20 January 2026
Our Director of Engineering, Lukasz Szczesiak, explains why that matters:
Agents don’t actually “use” software like a person would, and they don’t interpret nuance or subtle cues particularly well either. They thrive on structured data, contracts, and clear tool definitions they can reference.
That’s why building AI agents becomes significantly easier when your organisation is, or becomes, API-first.
When you expose your workflows as discoverable APIs / MCP tools, everything changes:
- Reliability: “please do X” maps to a tool call with a structured response, instead of a complicated system prompt followed by interpretation of an unstructured reply.
- Safer autonomy: controlling (and clearly explaining) what an agent can and can’t do using scopes and permissions.
- Observability: every individual action can be logged, is traceable, and potentially reversible.
- Reusability: the same API endpoints power your people, your scripts, and your agents.
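To make the list above concrete, here is a minimal sketch of the pattern: a tool registry with scoped permissions, structured inputs and outputs, and an audit log. All names (`create_deal`, `deals:write`, the field names) are invented for illustration; they are not pubX APIs.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # observability: every action is recorded
TOOLS = {}      # name -> (handler, required scope)

def tool(name, scope):
    """Register a handler as a discoverable, scoped tool."""
    def register(fn):
        TOOLS[name] = (fn, scope)
        return fn
    return register

@tool("create_deal", scope="deals:write")
def create_deal(publisher_id: str, cpm_floor: float) -> dict:
    # Reusability: in a real system this would hit the same API
    # endpoint that powers the UI and scripts.
    return {"deal_id": "d-123", "publisher_id": publisher_id,
            "cpm_floor": cpm_floor}

def call_tool(name, args, agent_scopes):
    handler, required = TOOLS[name]
    if required not in agent_scopes:
        # Safer autonomy: a hard permission check, not a prompt plea.
        result = {"error": f"missing scope {required}"}
    else:
        # Reliability: structured arguments in, structured response out.
        result = handler(**args)
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": name, "args": args, "result": result,
    })
    return result

# An agent with the right scope succeeds; one without is cleanly refused,
# and both actions land in the audit log either way.
ok = call_tool("create_deal",
               {"publisher_id": "p-1", "cpm_floor": 2.5}, {"deals:write"})
denied = call_tool("create_deal",
                   {"publisher_id": "p-1", "cpm_floor": 2.5}, set())
```

The point of the sketch is that every property in the list falls out of the structure itself, not out of clever prompting.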
Implementation might become less “magical”, true – but also much less unpredictable, more operationally sound, and simply less risky. Agents are just another client of your platform.
Unpredictability, hallucinations, and mistakes might feel more “human”, but chasing that misses the point entirely.
If you want to “make money while you sleep”, your agentic workflows need to be structured, callable tools – not an instruction, however well written, hidden inside some massive system prompt.
But don’t even get me started… we’ll probably talk about evals next time.
Here at pubX, we’re building toward an Agentic AI future where curation and deal execution can scale without publishers losing control – with API-first workflows, clear data contracts, and guardrails designed for transparency and safety.