Why “Agent Experience” (AX) deserves a seat next to UX and DX – and how to make your product a great place for agents to work
Written by Vihan Patel, Principal AI/ML consultant, Mantel
Agents don’t live in a vacuum
Every application is looking to add an agentic feature or interface, but not every application is ready for one – for a simple reason: agents are only as effective as the environment they operate in.
Think of a self-driving car operating on Melbourne’s neat grids (hook turns included) versus the chaos of New Delhi. One is clearly going to have an easier time than the other – and agents are no different. While we can’t control road conditions, we can make our digital environments agent-friendly. The same agent can excel in one product and fail in another purely due to how the product is organised, named, documented, and exposed.
Agent Experience (AX) is about making that environment agent-friendly. If UX (User Experience) serves humans and DX (Developer Experience) serves developers, AX serves the AI doing work on our behalf. Improve AX, and every agent you build benefits. The parallel in traditional AI is data foundations: high-quality, governed data that powers future use cases.
At Mantel, our hypothesis is that organisations tend to fund agentic features rather than agentic foundations, leading to poorer outcomes for all agentic initiatives.
This article explains the core ideas of AX and practical steps you can take to improve it, drawn from our real experience engineering agentic systems, whether you’re starting fresh or improving what you have.
What an agent actually ‘sees’
Humans draw context from pages, labels, and learned rules. Agents rely on:
- The prompts and instructions they are given
- The tools and APIs exposed to them
- The data and documentation they can reach

If prompts are vague, tools incoherent, and data inconsistent, the agent spends its “attention” guessing (and therefore hallucinating), causing delays, errors, and unhelpful back-and-forth with users. Below are some practical examples of how to improve AX across these elements.
Shape your APIs for tasks, not tables
Most APIs grew up around database tables: create, read, update, delete. The agents most organisations want to enable, however, need to complete tasks.
Mantel recommends designing new interfaces to reflect the tasks your agents (and ultimately, users) will actually want to accomplish.
Add endpoints that accept natural task inputs (dates, person, location) and return the outcome plus context (what changed, why it was allowed, what to check next). Small choices and details like these matter.
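As an illustration, here is a minimal sketch of what a task-shaped endpoint might look like, using FastAPI. The endpoint, models and rostering domain are hypothetical – assumptions for illustration, not a prescribed design:

```python
# A task-shaped endpoint: one call completes the user's job, rather than
# forcing the agent to chain several CRUD calls. All names here
# (reassign_shift, ShiftReassignment, etc.) are illustrative.
from datetime import date
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ShiftReassignment(BaseModel):
    employee: str     # natural task inputs: person...
    location: str     # ...place...
    shift_date: date  # ...and date, not internal row IDs

class ReassignmentResult(BaseModel):
    outcome: str            # what changed
    reason: str             # why it was allowed
    next_checks: list[str]  # what the agent should verify next

@app.post("/shifts/reassign", response_model=ReassignmentResult)
def reassign_shift(request: ShiftReassignment) -> ReassignmentResult:
    # In a real system this would call the rostering service; here we
    # return a stubbed outcome with the context an agent needs.
    return ReassignmentResult(
        outcome=f"{request.employee} reassigned to {request.location} on {request.shift_date}",
        reason="No conflicting shift within 24 hours",
        next_checks=["Confirm the employee was notified"],
    )
```

Compared with chaining separate lookup, query and update calls, a single task endpoint keeps the agent’s context window focused on the job at hand.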
In addition, there are several techniques during solution development that we’ve found to be effective.
“For large enterprises busy enabling and democratising agentic AI across their business, we expect a more direct investment in AX to compound in value quickly, particularly where it relates to commonly used source systems, documentation and information.”
Vihan Patel | Principal AI/ML consultant, Mantel
Go straight to the source for business rules – don’t duplicate them
Don’t force agents to learn policy from prompts or PDFs. Take an HR rule such as ‘no more than 10 hours in any 24-hour period unless there’s an exception’: if it only lives in a document, the agent must re-read and interpret it every time.
Satya Nadella notably called SaaS just “CRUD databases with a bunch of business logic”, predicting that the “business logic is all going to these AI agents”. We know better than to argue with Satya Nadella over the long term, but in the immediate future large enterprises won’t easily unravel decades of enterprise software and the business logic encoded therein – unless they’re willing to invest in rebuilding those rules and making them agent-friendly (more on “agentic foundations” in future content).
Mantel recommends keeping business rules and logic in the source systems where they’re already coded; don’t duplicate them in prompts and workflows.
We note one drawback: when agents depend on APIs for feedback on whether an action violates business rules, that feedback only arrives after the action has been committed – by which point it may already have been processed in the system, and undoing it adds further latency and complexity.
To address this, a better pattern is to invest in making the rule callable ahead of the action. Add a quick ‘check before you act’ endpoint – think of it as a dress rehearsal for whatever action is about to be taken. The agent submits a proposed change and gets a clear pass/fail with reasons before actually making it.
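A minimal sketch of that pattern, continuing the hypothetical rostering domain above (the endpoint, the stubbed lookup and the 10-hours-in-24 rule are illustrative assumptions, not a prescribed API):

```python
# 'Check before you act': the agent submits a proposed change and gets a
# clear pass/fail with reasons before anything is committed.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ProposedShift(BaseModel):
    employee: str
    hours: int  # hours the proposed shift would add

class ValidationResult(BaseModel):
    allowed: bool
    reasons: list[str]  # machine-readable reasons the agent can act on

def hours_worked_in_window(employee: str, window_hours: int) -> int:
    # Stub: in practice this would query the rostering system.
    return 8

@app.post("/shifts/validate", response_model=ValidationResult)
def validate_shift(proposal: ProposedShift) -> ValidationResult:
    reasons: list[str] = []
    # The rule lives (and is maintained) in the source system; it is
    # inlined here only for brevity.
    if hours_worked_in_window(proposal.employee, window_hours=24) + proposal.hours > 10:
        reasons.append("Exceeds 10 hours in a 24-hour period without an exception")
    return ValidationResult(allowed=not reasons, reasons=reasons)
```

A dry-run flag on the existing write endpoint can achieve a similar effect with less API surface; either way, the agent gets the rule’s verdict before committing.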
Clear names beat clever systems
Ambiguous language is one of the quietest sources of agent friction. If you can’t tell the difference between a “business unit” and an “operational unit”, neither can your agent. If a location is a “store”, a “site”, and a “cost centre”, the agent must learn three names for one thing.
You don’t need a grand taxonomy. Start with a short, public glossary of the handful of concepts that most often cause confusion, and use those names consistently in the UI, docs and prompts. When you must keep legacy terms, write down the mapping in one place and point the agent to it.
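One lightweight way to keep that mapping in one place is a small lookup that agents, prompts and docs all reference; the terms below are illustrative examples, not a recommended taxonomy:

```python
# A single source of truth for legacy-term mappings. Keeping it in one
# file (or one table) means every agent, prompt and doc can reference the
# same canonical names. All entries are illustrative.
CANONICAL_TERMS: dict[str, str] = {
    "store": "location",
    "site": "location",
    "cost centre": "location",
    "operational unit": "business unit",
}

def canonicalise(term: str) -> str:
    """Map a legacy or colloquial term to its canonical name."""
    return CANONICAL_TERMS.get(term.lower().strip(), term)
```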
Make your surfaces readable – for people and software
Agents use APIs, but they also crawl help centres, policy pages, and sometimes app HTML. Make those surfaces easy to parse:
- Keep pages short and single-purpose with clear headings; agents excel at finding specific answers when they aren’t buried
- Implement llms.txt to provide curated, crawlable guidance you actually want surfaced
- Be explicit about what automated assistants may access and at what rate, and measure their activity separately so analytics and observability aren’t distorted by agent traffic (a sketch of this follows below)
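On the last point, here is a minimal sketch of separating agent traffic from human traffic by User-Agent header; the marker substrings are assumptions and would need tuning against your actual traffic:

```python
# Tag requests from known automated assistants so their activity can be
# measured separately and rate-limited explicitly. The marker list is a
# starting point, not an exhaustive or authoritative registry.
AGENT_UA_MARKERS = ("gptbot", "claudebot", "perplexitybot", "bot", "crawler")

def classify_traffic(user_agent: str | None) -> str:
    """Return 'agent' or 'human' based on the User-Agent header."""
    ua = (user_agent or "").lower()
    return "agent" if any(m in ua for m in AGENT_UA_MARKERS) else "human"

# Usage: record the class as a metric dimension so dashboards stay honest,
# e.g. requests_total.labels(traffic=classify_traffic(ua)).inc()
```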
Govern your metadata appropriately
We’ve focused so far on agents that interact with APIs; however, plenty of agents will also interact with databases via SQL or other direct queries. What we’ve found is that metadata, even when it’s human readable, typically lacks the business context an agent needs to identify the fields most relevant to a given query.
In projects where we’ve delivered AI-for-BI or AI-generated SQL, improved metadata management is one of the first outcomes we deliver. Aim for metadata that is verbose and clear, assumes little business knowledge and avoids acronyms; more common business context can live in the prompt.
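For instance, a catalogue entry for an SQL-generating agent might look like the sketch below; the table, columns and descriptions are hypothetical, and the same text could equally live in database column comments or a data catalogue tool:

```python
# Verbose, acronym-free column metadata an SQL-generating agent can read.
# Table and column names are illustrative.
COLUMN_METADATA: dict[str, str] = {
    "retail_sales.net_amount": (
        "Sale value in Australian dollars after discounts and refunds, "
        "excluding goods and services tax. Use this for revenue reporting; "
        "use gross_amount for pre-discount figures."
    ),
    "retail_sales.location_id": (
        "Identifier of the physical location (also called 'store', 'site' "
        "or 'cost centre' in legacy systems) where the sale occurred. "
        "Join to locations.id for the location's name and region."
    ),
}
```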
Conclusion
Agentic AI isn’t a bolt-on; it’s a design constraint. Invest in AX once and you’ll enable many agents to do good work safely and quickly. Most organisations only recognise AX gaps mid-delivery and try to engineer their way around them – often too late. For large enterprises busy enabling and democratising agentic AI across their business, we expect a more direct investment in AX to compound in value quickly, particularly where it relates to commonly used source systems, documentation and information.
If you’d like a deeper dive into preparing for agentic AI, Mantel can share a checklist to get your first AX improvements, and agentic solutions, into production. Get in touch via the form below.