We’re just getting our heads around the impact, power and usefulness of Large Language Models (LLMs) such as ChatGPT, perhaps getting our first couple of real use cases off the ground… and then the CEO pokes his head around the door: “Are we on top of this Agentic thing? I heard it’s meant to completely change the game!”
At Mantel Group, we’re suddenly fielding a lot of questions about Agentic AI:
- “Is it different to the LLM approach we’ve just got into production?”
- “What’s the actual added benefit from a business perspective?”
- “Does it cost more?”
- “If I go down this path, will there be something else new and shiny in 12 months’ time?”
- “How can I trust an autonomous tool?”
… and we wanted to provide a bit of a real-world perspective that balances the hype with the practical implications. To do this, we’re starting a series on Agentic AI that will answer the above questions, among many others.
To start, we’re going to zoom out and take a bird’s-eye view. This may seem simplistic, but with the amount of hype in this space, we wanted to start by getting back to basics.
What is Agentic AI?
Agentic AI refers to the collection of techniques that use AI agents to carry out tasks across our digital and physical environments.
You can think of this as an upgrade from straight Generative AI and LLMs: the agents take an instruction as a goal and then solve the problem autonomously and dynamically. Agents can suss out how best to reach an efficient solution independently of pre-coded, fixed logic, and adapt their approach based on responses from their environment.
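As a rough illustration of that goal-driven, adaptive loop, here is a deliberately minimal sketch in Python. The toy environment and the hard-coded planning rule are our own illustrative assumptions (in a real agent, the planning step would typically be an LLM deciding which tool to call), not a reference to any particular agent framework.

```python
# A bare-bones sketch of the goal-driven loop described above: the agent
# plans an action, acts, observes the result, and adapts until the goal is
# met. The toy environment and planning rule are illustrative assumptions.

class ToyEnvironment:
    """A stand-in environment: the agent must move a value towards a target."""
    def __init__(self, start: int, target: int):
        self.value = start
        self.target = target

    def execute(self, action: str) -> int:
        self.value += 1 if action == "increase" else -1
        return self.value

def plan_next_action(value: int, target: int) -> str:
    """In a real system this decision would come from an LLM; here it is a simple rule."""
    return "increase" if value < target else "decrease"

def agent_loop(env: ToyEnvironment, max_steps: int = 20) -> str:
    """Plan, act, observe, adapt; stop when the goal is satisfied."""
    for step in range(1, max_steps + 1):
        action = plan_next_action(env.value, env.target)  # plan
        observation = env.execute(action)                 # act and observe
        if observation == env.target:                     # check the goal
            return f"Goal reached in {step} steps."
    return "Goal not reached within the step budget."

if __name__ == "__main__":
    print(agent_loop(ToyEnvironment(start=3, target=7)))
```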
A real-world example of Agentic AI might be speaking to an online shopping agent: “Find me a great watch for my 10-year-old son’s birthday. His favourite colour is blue, and we would prefer it to be analogue, not digital. By the way, my budget is $50. Oh, and he likes to swim, so a waterproof one, please.”
An agent will understand the intent of this set of instructions and add some context (it would know where you live, perhaps be accessible across a number of your devices, and it may even have access to your credit card to make the purchase).
To achieve this, multiple agents would likely operate together (e.g., a web searching agent, a purchasing agent, and an agent that orchestrates the end-to-end process). What’s really impressive in all this is the autonomy: effectively outsourcing a relatively monotonous human task entirely (a simplified sketch of how this might be wired together follows the points below). This raises some broader points:
- We’re getting into the territory of “trust” with this level of autonomous AI. Humans need to trust that if we’re going to allow AI to take care of tasks entirely, then the outcome will be within the tolerance of “human acceptability”.
- This can be mitigated by stepping back from the edge of the autonomous AI cliff and introducing a “human-in-the-loop”. In the above example, that might be a prompt back to the user with a recommendation (plus alternatives) and a request for permission to proceed.
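To make the shape of this more concrete, below is a deliberately simplified Python sketch of the watch example: an orchestrator coordinating a hypothetical search agent and purchasing agent, with a human-in-the-loop approval step before any money changes hands. All names, and the hard-coded catalogue, are illustrative placeholders rather than a real agent framework or retailer integration.

```python
# A simplified sketch (not production code) of the watch-shopping example:
# an orchestrator delegates to a search agent and a purchasing agent, with a
# human-in-the-loop check before anything is bought. All names and the
# hard-coded catalogue are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    colour: str
    waterproof: bool
    analogue: bool

def search_agent(goal: dict) -> list[Product]:
    """Stands in for an agent that searches retailers for candidate products."""
    catalogue = [
        Product("Blue Splash Kids Watch", 39.95, "blue", True, True),
        Product("Digital Sports Watch", 29.95, "black", True, False),
        Product("Classic Analogue Watch", 75.00, "blue", False, True),
    ]
    return [
        p for p in catalogue
        if p.price <= goal["budget"]
        and p.colour == goal["colour"]
        and p.waterproof and p.analogue
    ]

def human_in_the_loop(recommendation: Product, alternatives: list[Product]) -> bool:
    """Pause for explicit user approval before the purchasing agent acts."""
    print(f"Recommended: {recommendation.name} (${recommendation.price:.2f})")
    print(f"Other options considered: {len(alternatives) - 1}")
    return input("Proceed with purchase? [y/N] ").strip().lower() == "y"

def purchasing_agent(product: Product) -> None:
    """Stands in for an agent with (carefully guarded) access to payment details."""
    print(f"Purchasing {product.name} for ${product.price:.2f}...")

def orchestrator(goal: dict) -> None:
    """Coordinates the end-to-end process: search, recommend, confirm, purchase."""
    candidates = search_agent(goal)
    if not candidates:
        print("No suitable products found within the constraints.")
        return
    best = min(candidates, key=lambda p: p.price)
    if human_in_the_loop(best, candidates):
        purchasing_agent(best)
    else:
        print("Purchase not approved; no action taken.")

if __name__ == "__main__":
    orchestrator({"budget": 50.0, "colour": "blue"})
```

The key design point is that the purchasing agent never acts on the user’s behalf until the human-in-the-loop gate has been passed; that is the “step back from the cliff” described above.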
We’ll elaborate further on the topic of trust in the future.
Where’s Agentic AI going to make an impact on my business?
Compared to standalone LLMs, Agentic systems can tackle a significantly wider range of business challenges and provide a greater level of support and automation.
- Efficiency gains. Traditional workflow automation typically only works well with very clearly defined, repetitive tasks, whilst Agentic systems can work more autonomously and deliver an end-to-end process (vs. an LLM, which might only solve part of a process). We expect Agentic AI to make a big impact on human-led business processes and to significantly decrease operational costs.
- Decision support. Having all the right information at hand at the snap of a finger continues to be a challenge, especially as the volume of data available within the organisation and across its market keeps growing. AI agents that can proactively and dynamically gather and update insights, and suggest alternative ways to proceed, will be a powerful tool in the hands of executives.
- Code assistants. We are already seeing significant gains from Agentic code-creation tools and frameworks, and expect adoption to grow rapidly over the next 12 months.
Agentic AI considerations for executives and business leaders
Agents are pretty new, and just like in the early days of LLMs, general understanding, trust, and readiness have a long way to go. Having said that, there are some opportunities for business leaders today that we believe are worth considering.
- Build something small, cheap and generalised. A small footprint of Agentic AI capability is a huge opportunity for organisations to make early gains in what will almost certainly become a race for capability across nearly every industry. Enabling a small, general, non-production use case, a sandpit of sorts, plants a seed of capability and technical resources that can grow rapidly over the coming years.
- It’s expensive but getting cheaper. The recent media splash around DeepSeek R1 is a reminder that compute costs keep falling, the trend long associated with Moore’s Law. With the amount of R&D going into the area, the so-called “Huang’s Law” (named after NVIDIA CEO Jensen Huang) holds that GPU performance, and with it the cost of a given amount of compute, is improving even faster than Moore’s Law would suggest. This doesn’t mean we shouldn’t build a business case for compute-intensive LLM and agent workloads, but we can rely on this trend continuing at pace.
- We do need to start building trust, otherwise we won’t see the benefits. AI agents require humans to place a high degree of trust in the system, and this won’t come automatically from day one. Building solutions gradually, and allowing time to experiment and evaluate outcomes, is critical to gaining that trust. Guardrail maturity is also key: placing clear boundaries on agent behaviour, alongside AgentOps practices that provide the transparency and explainability needed to avoid “black box” concerns (a simple sketch of the guardrail idea follows below).
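To show what we mean by guardrails in practice, here is a minimal, hypothetical sketch: a policy layer that every proposed agent action must pass before it is executed, with a simple audit trail in the spirit of AgentOps. The spending limit and the rule itself are illustrative assumptions only.

```python
# A minimal sketch of the guardrail idea: a policy check that every proposed
# agent action must pass before execution, plus a simple audit trail for
# transparency. The rules and names are illustrative assumptions, not a
# specific framework.

from datetime import datetime, timezone

MAX_SPEND = 50.0            # hard boundary on what the agent may spend
AUDIT_LOG: list[dict] = []  # AgentOps-style record of every decision

def guardrail(action: dict) -> bool:
    """Return True only if the proposed action sits within the agreed boundaries."""
    allowed = action["type"] != "purchase" or action["amount"] <= MAX_SPEND
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "allowed": allowed,
    })
    return allowed

def execute(action: dict) -> None:
    """Execute the action only if the guardrail approves it; otherwise escalate."""
    if guardrail(action):
        print(f"Executing: {action}")
    else:
        print(f"Blocked and escalated to a human reviewer: {action}")

if __name__ == "__main__":
    execute({"type": "purchase", "amount": 39.95})   # within the boundary
    execute({"type": "purchase", "amount": 120.00})  # outside it: blocked
```

The audit log is as important as the block itself: it is what gives humans the transparency to understand, after the fact, why an agent did or did not take an action.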