Written by Samuel Irvine Casey | Principal AI/MLE consultant, Mantel
“Agents are all you need” – the title of the last session I attended at Google Cloud Next 2025, and a fitting conclusion to a jam-packed week in Las Vegas. While the conference shared product announcements and customer case studies across all areas of Google Cloud, one theme was overwhelming this year – the rise of agentic AI. From the keynote through the talk tracks and expo booths, it felt like every second conversation directly or indirectly touched on agentic AI.
For those unfamiliar with the topic, check out our in-depth guide to AI agents.
Given how new this area of AI is, and how fast it is moving, there was definitely an element of uncertainty at the conference when I tried to delve deeper into the more complicated technical applications of agentic AI. However, one thing was clear – Google are going all in on this technology.
Having been a user of Google Cloud for just shy of a decade, I have always found them strong at packaging new advancements in AI into user-friendly products for all levels of technical ability. This year was no different, as they announced updates to, and new releases of, a suite of agentic tools and products aimed at a diverse array of users, from the inquisitive business user to the deep technical guru.
One of the clear benefits of agentic AI is the efficiency gained by supporting people with agents that can undertake simple yet repetitive or time-intensive tasks. Google have made accessing and building these agents easier than ever with the release of Agentspace, a no/low-code agent-building platform that makes it easy to build simple agents with access to internal or external datasets and basic tools.
In addition, Google announced a number of pre-built agents that integrate natively into their cloud ecosystem and can help speed up data analysis, data engineering and data science tasks.
For the moderately technical, Google have fully integrated agent-building technology into Vertex AI and provided an all-in-one experience for building, deploying and monitoring AI agents. I was particularly excited by some of the agent evaluation and monitoring functionality that was announced, as it is clear Google are thinking about the long-term management of these agents in production.
Google also detailed updates to their recently released Model Armor product, which, among other things, protects LLM solutions from malicious or undesirable prompts. I’m looking forward to this being released in Australia, as it’s a problem we have previously tackled manually in several of our engagements.
For the deeply technical, Google announced their own agent-building framework, the Agent Development Kit (ADK). It looked very impressive, with a Python SDK already available and a Java SDK newly released. Google are also positioning ADK as cloud-agnostic, with pre-built integrations for other clouds’ databases and for third-party product APIs (e.g. Slack or Salesforce), which is cool to see.
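To give a flavour of the developer experience, here is a minimal sketch of an ADK agent, loosely based on the quickstart example in ADK’s docs at the time of writing. The model name and the weather tool are illustrative assumptions, not a recommended setup.

```python
# A minimal ADK agent sketch. The model identifier and the get_weather tool
# are illustrative assumptions; swap in your own model and real tools.
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Illustrative tool: return a (hypothetical) weather report for a city."""
    return {"status": "success", "report": f"It is sunny in {city}."}


# ADK infers a tool schema from the function signature and docstring,
# so plain Python functions can be passed straight in as tools.
root_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",  # assumed model name
    description="Answers simple weather questions.",
    instruction="Use the get_weather tool to answer weather questions.",
    tools=[get_weather],
)
```

The kit also ships a small CLI for local testing (adk run and adk web, at the time of writing), which keeps the feedback loop pleasantly tight.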
Finally, Google announced their new open-source Agent2Agent (A2A) protocol, designed to address the challenges of multi-agent communication. They are positioning it as a complement to Anthropic’s Model Context Protocol (MCP), and it has the potential to become a cornerstone of more complicated agent ecosystems, should it see sufficient adoption over the coming year.
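The discovery half of the protocol is simple enough to sketch in a few lines. Per my reading of the early A2A spec, an agent advertises a JSON “Agent Card” at a well-known path describing its skills and endpoint; the host URL below is hypothetical, and the exact task-exchange method names are still settling, so treat this as a sketch rather than gospel.

```python
# Sketch of A2A agent discovery against a hypothetical agent host.
import requests

AGENT_BASE_URL = "https://agents.example.com"  # hypothetical host

# An A2A-compliant agent publishes an Agent Card at a well-known path.
card = requests.get(f"{AGENT_BASE_URL}/.well-known/agent.json", timeout=10).json()

print(card["name"], "-", card.get("description", ""))
for skill in card.get("skills", []):
    print("skill:", skill.get("id"), "-", skill.get("name"))

# Subsequent interactions (sending tasks/messages, streaming updates) are
# JSON-RPC calls against the endpoint declared in the card - see the A2A
# spec for the exact method names, which are still evolving.
```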
A core tenet of a true ‘agentic’ system is that the agent has full autonomy and access to the tools and data needed to make a decision or complete an action. In my week of exploring, I struggled to find many examples of production agents that truly met this definition.
Throughout the week, in both conference sessions and on the expo floor, customers talked through their agent use cases. However, the majority of these use cases fell into three brackets:
- A reasoning agent for research,
- A digital assistant agent for scheduling and making bookings,
- An operations agent for simple manual processes.
When speaking with these customers after their sessions, or with the Google teams associated with the projects, it became clear that a large portion of these use cases followed a very simple agent structure, and many could have been built as a guided LLM workflow with tool or database access – something like the sketch below.
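To make that distinction concrete, here is a rough sketch of a guided workflow using the google-genai SDK. The lookup_orders helper and model name are hypothetical; the point is that the code fixes the steps and the model only handles the language task within one step, rather than deciding its own path.

```python
# A fixed-step "guided LLM workflow" rather than an agent: the code, not
# the model, decides the path. The lookup_orders helper and model name
# are illustrative assumptions.
from google import genai

client = genai.Client()  # assumes GOOGLE_API_KEY is set in the environment
MODEL = "gemini-2.0-flash"  # assumed model name


def lookup_orders(customer_id: str) -> list[dict]:
    """Hypothetical database helper standing in for real tool access."""
    return [{"order_id": "A-1001", "status": "delayed", "days_late": 3}]


def summarise_order_issues(customer_id: str) -> str:
    # Step 1 (deterministic): fetch the data ourselves - no autonomy needed.
    orders = lookup_orders(customer_id)

    # Step 2 (single LLM call): use the model only for the language task.
    response = client.models.generate_content(
        model=MODEL,
        contents=f"Summarise these order issues for a support email: {orders}",
    )
    return response.text


print(summarise_order_issues("cust-42"))
```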
I have no doubt that agents will play a huge part in the future of AI, and we are already seeing innovative companies across the world invest in this technology and automate both simple and highly complicated processes. However, it’s good to remember that agents are merely the latest in a long line of AI technologies, and it’s more important to select the right technology for the use case than to opt for the biggest weapon in the arsenal for every task.
“If your process or workflow can be solved with deterministic rules, use that. If it can be solved with a simple LLM call, do that. If it requires more complicated reasoning but is still a fixed-step process, then maybe an LLM workflow with tool access is enough. If it is a truly complicated multi-step process with an uncertain decision path, that requires a lot of assumed information and autonomous decision making, then yes, an agent might be the right approach.”
Samuel Irvine Casey | Principal AI/MLE consultant, Mantel
It was a whirlwind trip to Google Cloud Next 2025, and I learnt a huge amount about how to build and run successful agents on Google Cloud. However, I also learnt that agents can be expensive, complicated and hard to evaluate compared with traditional software or other AI approaches. So, while I am very excited by the future of agents and building them on Google Cloud, I think it is important to ask, when looking for a solution to a process or workflow: are agents all you need?