Stop the start-stop project cycle and turn AI into an everyday edge
Written by Cath Jordan, Principal Data & AI Consultant, Mantel
Executive summary
The AI Capability Gap: Why strategy often fails
Executives are making significant investments in Generative AI (GenAI) and Agentic systems, viewing them as essential to future productivity and competitiveness. Yet, beneath the enthusiasm, a troubling strategic disconnect exists: we’re approaching this continuous, evolutionary challenge with a fixed, legacy project mindset. This isn’t just inefficient; it’s creating a systemic capability lag that starves your AI investments of sustained value and unnecessarily increases risk.
The core premise for thought leaders is this: AI enablement is not a temporary item on a checklist; it is a foundational, enduring operating capability. Furthermore, as new technologies like Agentic systems emerge, leaders must exercise strategic discernment, understanding when an expensive, complex Agent is necessary versus when a simpler LLM workflow is sufficient. The failure to make this distinction, often driven by the project-based rush to acquire the “biggest weapon” for simpler tasks like digital assistance or research reasoning (Mantel insights [9]), is where budgets are misspent and pilots stall.
The challenge for the C-suite lies in shifting the paradigm of investment and governance. For technology leaders, it is about pioneering adaptive, secure environments. For HR and change professionals, it is about redesigning the architecture of work itself. We must accept that our organisational capability needs to evolve at the same speed as the algorithms if we want to secure a lasting competitive edge.
To close this gap and transform your AI investments into enduring competitive advantage, we outline a fresh perspective. Read on to discover specific mandates for C-suite, technology, and HR leaders, along with actionable interventions you can initiate today to secure your organisation’s continuous evolution.
Section 1
Keeping up with the pace of AI: Why organisations struggle to adapt
The nature of technological disruption has changed. GenAI and Agentic systems are not like legacy IT rollouts: they are characterised by a continuous update cycle, often autonomous operation, and decentralised, bottom-up adoption by employees.
This velocity exposes a systemic challenge: the capability gap. Organisations are acquiring world-class algorithms, yet their employees lack the up-to-the-minute literacy, secure environments, and evolved processes needed to manage them effectively. This human and process lag exists alongside a significant technical requirement: the foundational data element.
Organisational readiness must, therefore, be tackled concurrently across people, process, and data. Mantel are actively seeing—and supporting—a strong market demand for robust AI-ready data pipelines, high-quality data, and scalable cloud platforms as companies recognise this fundamental technical prerequisite.
However, crucially, this technical work must be accompanied by parallel evolution in people and processes. If your data foundation is ready but your people and processes are not, your investment will still fail to transition from pilot to scale, and the capability lag will continue.
The consequences for Australian businesses are tangible:
- Pilot paralysis and misspent potential: While there is widespread experimentation, reports from the Australian market show that 53% of companies have slowed or stopped GenAI initiatives due to difficulties proving business value or managing data challenges [2]. We are stuck in pilot mode because the infrastructure for continuous scaling—the people and the process—was treated as an afterthought.
- The Agentic disconnect and over-engineering: As Mantel research highlights, while the hype suggests “Agents are all you need,” the reality is that many complex, expensive Agent deployments could be solved with simpler, guided LLM workflows [9]. The drive to deploy the “biggest weapon in the arsenal for every task”—often resulting in high-cost Agents for use cases like simple manual operations—is leading to misallocated budgets and unnecessary complexity.
- Unmanaged risk amplified: The shift to Agentic systems introduces a new layer of risk, because these systems operate with full autonomy over decision making. If governance structures are static, they fail instantly when faced with systems designed to act independently.
The old sequential change model simply cannot absorb this level of continuous, often unpredictable, evolution.
Section 2
AI Enablement: How to build an effective operating model
To close the capability gap, AI Enablement must be philosophically redefined as the function dedicated to building continuous organisational fitness.
Shifting the human perspective
The future professional skill set is defined by its relationship with AI. This is a move beyond training to cultivating professional maturity:
- Strategic discernment and skill adjacency: The value lies in cultivating the human judgement required [5] to select the appropriate technology. We must actively cultivate skills that sit adjacent to AI: advanced critical verification, ethical judgement, and problem framing. This involves teaching people how to be effective directors and auditors of AI output.
- The culture of continuous curiosity: Organisational momentum requires a culture where safe, structured experimentation is championed by leaders. Curiosity must be seen not as a distraction, but as the engine of continuous learning and innovation [6].
- Evolving literacy: AI literacy is now a non-negotiable professional competence. It must be a dynamic, modular learning stream [7], continuously updated to align with new model capabilities and internal policy changes, particularly regarding the complexity of deploying and monitoring Agentic systems, which are “hard to evaluate” [9].
Shifting the Process Perspective
Our processes must transition from rigid compliance to adaptive, living frameworks:
- Adaptive governance [4]: We need an adaptive framework [3], championed by a dedicated, cross-functional AI Stewardship group. This group’s mandate is the continuous review and adjustment of guardrails, ensuring compliance with evolving Australian standards. This is essential when dealing with Agents, where governance must manage the autonomy inherent in the system.
- Embedded feedback loops: Continuous evolution demands continuous data. Every AI interaction should be designed to solicit feedback—on output quality, bias, and efficiency—and route that data back to both the technology team for refinement and the enablement function for training adjustments.
- Workflow redesign as a discipline: Value is realised when AI prompts a radical reimagining of the entire workflow. AI enablement mandates that continuous workflow auditing and redesign—focusing on elevating the human role—becomes a permanent operational discipline, rather than a one off project outcome.
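The embedded feedback loop described above can be sketched in code. This is a minimal, illustrative pattern only; the class names (`Interaction`, `FeedbackBus`) and the two consumers are hypothetical stand-ins for whatever telemetry and routing an organisation actually uses:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Interaction:
    """One AI interaction, captured with its user feedback."""
    prompt: str
    output: str
    rating: int                                   # e.g. 1-5 quality score from the user
    flags: List[str] = field(default_factory=list)  # e.g. ["bias", "inaccurate"]

class FeedbackBus:
    """Routes every piece of interaction feedback to all registered consumers."""
    def __init__(self) -> None:
        self.consumers: List[Callable[[Interaction], None]] = []

    def subscribe(self, consumer: Callable[[Interaction], None]) -> None:
        self.consumers.append(consumer)

    def publish(self, interaction: Interaction) -> None:
        for consumer in self.consumers:
            consumer(interaction)

# Two consumers, mirroring the dual routing described above:
tech_backlog: List[Interaction] = []      # feeds the technology team
training_backlog: List[Interaction] = []  # feeds the enablement function

def technology_refinement(ix: Interaction) -> None:
    # Low ratings or any flag become model/prompt refinement work
    if ix.rating <= 2 or ix.flags:
        tech_backlog.append(ix)

def enablement_adjustment(ix: Interaction) -> None:
    # Bias flags trigger training-content adjustments
    if "bias" in ix.flags:
        training_backlog.append(ix)

bus = FeedbackBus()
bus.subscribe(technology_refinement)
bus.subscribe(enablement_adjustment)
bus.publish(Interaction("Summarise Q3 report", "...", rating=2, flags=["bias"]))
```

The design point is that feedback capture is wired into every interaction by default, so neither the technology team nor the enablement function depends on ad hoc surveys for its data.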
Section 3
Why AI projects fail and how to move from pilot to scale
This is the strategic decision point for Australian business leaders: perpetuating a flawed model or committing to a necessary evolution.
The problem with the project model
The traditional change project (fixed scope, fixed budget, end date) is the primary driver of the capability lag because it is structurally antithetical to AI’s nature [6, 7]:
- Misallocated budget risk: The project model, driven by the pressure to deliver a headline-grabbing “solution,” encourages the deployment of complex, high-cost technologies even when simpler deterministic rules would suffice. For instance, using a complex, expensive Agent for internal digital assistant scheduling and bookings (a common use case observed in the market) is often a gross overspend when a guided LLM workflow could achieve the same outcome with far less governance overhead [9].
- Unaccountable complexity: Agents are “complicated and hard to evaluate” [9]. The hardest work—long-term monitoring and evaluation of an Agent deployed for complex research reasoning—begins only after project closure. The project model guarantees this crucial sustainment phase is starved of funds and resources. The failure rate of AI initiatives is exceptionally high, with some reports suggesting 95% of GenAI pilots bring zero return [1] and up to 90% of analytically immature organisations fail to achieve their project goals [8].
- Episodic investment: Funding stops when the technology is launched. This starves the necessary sustaining activities—the continuous policy updates, the dynamic reskilling, the workflow maintenance—guaranteeing that the system of capability collapses under the weight of continuous technological change.
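The scheduling-and-bookings example above illustrates what a guided LLM workflow looks like in practice: the control flow is fixed in code, and the model only fills in well-scoped steps. The sketch below is hypothetical; `llm_call` is a stub standing in for any model API, and the step prompts are illustrative, not a recommended design:

```python
def llm_call(prompt: str) -> str:
    """Stub for a real model API call; returns a canned response for the sketch."""
    return f"[model response to: {prompt}]"

def guided_booking_workflow(request: str) -> dict:
    """A guided LLM workflow: a fixed sequence of narrow model calls.
    The model never chooses its own tools or actions, so the governance
    overhead is far lower than for an autonomous Agent."""
    intent = llm_call(f"Classify this request as 'meeting' or 'travel': {request}")
    details = llm_call(f"Extract date, time and attendees from: {request}")
    confirmation = llm_call(f"Draft a confirmation message for: {details}")
    return {"intent": intent, "details": details, "confirmation": confirmation}

result = guided_booking_workflow("Book a room for the team on Friday at 10am")
```

An Agent, by contrast, would decide at runtime which tools to invoke and in what order; that autonomy is exactly what makes it harder to evaluate and govern, and why it should be reserved for tasks that genuinely need it.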
Embracing the evolution imperative
AI Enablement must be framed as an Evolution Imperative: a permanent, strategic function that operates with a continuous mandate:
- Perpetual funding: Capability building must be funded as a standing operational expense (OPEX). This financial commitment signals that human evolution is inseparable from technology acquisition.
- A living strategy: The capability roadmap is a living document, reviewed and pivoted quarterly, informed by technology updates, regulatory shifts, and internal data on proficiency. It exists in constant dialogue with the business and technology roadmaps.
- Measuring velocity, not milestones: Success is measured by the continuous velocity of adaptation—tracking sustained proficiency, ethical usage rates, and the verifiable business value derived from human-AI augmentation.
Failing to make this shift is not a change management oversight; it is a strategic failure to adapt to the speed of the modern economy.
Section 4
Strategic priorities for scaling AI across your organisation
Effective execution of the Evolution Imperative demands coordinated, continuous ownership from all layers of senior leadership.
C-suite executives (CEO, COO, CFO)
The mandate: Architecting trust and sustained investment
The C-suite sets the moral and financial architecture. Their focus is on perpetual investment and establishing adaptive governance that moves at the speed of the technology. They must champion a cultural narrative of augmentation, managing the internal climate to ensure that employees are motivated by evolution, not paralysed by fear. They are the ultimate custodians of long-term trust and risk.
Technology leaders (CIO, CTO)
The mandate: Enabling secure and agile absorption
Technology leadership must transform from being a deployment function to being the architect of continuous capability. This means building the secure, internal ecosystem: providing governed sandboxes for safe experimentation and designing all deployment pipelines to include embedded user feedback mechanisms. Their primary value is in making the secure, continuous absorption of new AI features easy and safe for the business.
HR and Change Management professionals
The mandate: The engine of human and process redesign
HR and Change are the stewards of the new operating model. They must move to managing dynamic capability streams—constantly updating learning content to meet the immediate needs of new models. Crucially, they must formalise the discipline of continuous workflow auditing and redesign, partnering with operational leaders to ensure that the human role is always being elevated to the domain of judgement.
Section 5
Overcoming organisational and cultural barriers to AI adoption
The transition to a continuous capability requires targeted, high level interventions to overcome key organisational inertia.
1. Mitigating the anxiety of obsolescence
Actionable tip: Institute a Role augmentation map and a designated augmentation time (DAT) policy.
Mechanism: Transparently map all key roles, showing which tasks are augmented and explicitly defining the new, higher value skills required to manage the AI. Formally allocate paid working time (DAT) for employees to focus on mastering these skills. This replaces passive fear with an active, funded pathway for professional evolution.
2. Operationalising responsible AI (RAI)
Actionable tip: Embed ethical and compliance guardrails directly into the user interface and workflow, specifically targeting Agent autonomy.
Mechanism: Given that a true Agentic system requires full autonomy and access to tools and data [9], the risk is exponentially higher. For systems designed to perform simple manual operations autonomously—a task often observed in early Agent deployments—enforce mandatory verification, or human-in-the-loop (HITL), checkpoints before the Agent executes critical, irreversible actions. This ensures that governance matches the level of machine autonomy, preventing small autonomous errors from scaling into large organisational failures.
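One way to embed such a checkpoint directly in the execution path is to gate irreversible actions behind an explicit approval step. This is a minimal sketch under stated assumptions: the action names, the `IRREVERSIBLE` set, and the stubbed approval function are all hypothetical placeholders for an organisation’s real action taxonomy and approval workflow:

```python
# Actions the Agent may never execute without human sign-off (illustrative list)
IRREVERSIBLE = {"delete_records", "send_payment", "external_email"}

def request_human_approval(action: str, payload: dict) -> bool:
    """Stub: in production this would raise a ticket or notification and
    block until a human responds. Here it denies by default."""
    print(f"APPROVAL REQUIRED: {action} {payload}")
    return False

def execute_with_checkpoint(action: str, payload: dict, executor) -> str:
    """Mandatory verification checkpoint: autonomous execution is permitted
    only for reversible actions; everything else waits for a human."""
    if action in IRREVERSIBLE and not request_human_approval(action, payload):
        return "blocked: awaiting human verification"
    return executor(action, payload)

# An irreversible action is held at the checkpoint...
outcome = execute_with_checkpoint(
    "send_payment", {"amount": 10_000}, lambda a, p: "executed"
)
```

The key design choice is that the gate lives in the execution layer, not in policy documents: an Agent physically cannot reach an irreversible action without passing the checkpoint, regardless of what its reasoning concludes.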
3. Scaling workflow change at speed
Actionable tip: Decentralise the authority and expertise for redesign to the front line of the business.
Mechanism: Establish small, permanent, cross functional AI Capability Teams (ACTs) within key operational units. These teams are continuously mandated to redesign and propagate one high impact, AI integrated workflow per quarter. This moves the engine of evolution from a slow, distant central project office to a rapid, peer driven operational function.
Conclusion
The capability lag is not an inevitable side effect of technological change; it is the direct outcome of a strategic choice to manage continuous evolution episodically. The Australian organisations that will secure a sustained competitive advantage are those that recognise AI Enablement for what it truly is: a permanent, foundational business capability. Furthermore, they will exercise strategic discernment—understanding when to use the right, simpler tool instead of chasing the latest complex technology.
This requires courage to abandon the fixed project model and the foresight to fund and govern for perpetuity.
Call to action: Initiate the executive mandate today: commission the establishment of a permanent AI capability function within your organisation, securing its operational budget and its charter for continuous evolution. Stop managing AI as a project you complete, and start managing it as the essential capability you sustain.
AI Literacy for Executives
Our Executive AI Literacy Program is designed to help leaders build the confidence and fluency needed to make informed, impactful decisions about AI.
Ready to get started? We’re ready to listen
We help you engineer a permanent capability to transform AI from a fragmented investment into a sustained operational asset. Ready to end the lag? Contact us to schedule a strategic discussion about your needs.
Sources
The following sources, or studies referenced within them, underpin the data points and strategic framing in this white paper:
- “95% of Gen AI Initiatives Fail—Yours Don’t Need To,” CFO Magazine Australia, September 30, 2025, https://cfomagazine.com.au/95-of-gen-ai-initiatives-failyours-dont-need-to/ (accessed October 24, 2025).
- Andrea Hill, “Why 95% Of AI Pilots Fail—And What Business Leaders Should Do Instead,” Forbes, August 21, 2025, https://www.forbes.com/sites/andreahill/2025/08/21/why-95-of-ai-pilots-fail-and-what-business-leaders-should-do-instead/ (accessed October 24, 2025).
- Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari, “The GenAI Divide: State of AI in Business 2025,” MIT NANDA, July 2025, https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf (accessed October 24, 2025).
- “Australian Firms Struggle to Transition GenAI Pilots to Full Use,” CFOtech Australia / Informatica, October 24, 2025, https://cfotech.com.au/story/australian-firms-struggle-to-transition-genai-pilots-to-full-use/ (accessed October 24, 2025).
- MinterEllison, “National Framework for the Assurance of AI in Government,” October 24, 2025, https://www.minterellison.com/articles/national-framework-for-the-assurance-of-ai-in-government (accessed October 24, 2025).
- Digital Transformation Agency (DTA), “Policy for the Responsible Use of AI in Government,” October 24, 2025, https://www.digital.gov.au/policy/ai/policy (accessed October 24, 2025).
- “CFO AI Strategy: Value, Data, People Framework,” UNSW BusinessThink, October 24, 2025, https://www.businessthink.unsw.edu.au/articles/cfo-ai-strategy-value-data-people-framework (accessed October 24, 2025).
- University of Technology Sydney (UTS), “Human Technology Institute,” https://www.uts.edu.au/research/centres/human-technology-institute (accessed October 24, 2025).
- Ed Husic, Minister for Industry and Science, “Unlocking the Potential of AI in Australian Industry,” https://www.minister.industry.gov.au/ministers/husic/speeches/unlocking-potential-ai-australian-industry (accessed October 24, 2025).
- “Why Do Analytics and AI Projects Fail?,” Melbourne Business School (MBS), October 24, 2025, https://mbs.edu/news/why-do-analytics-and-ai-projects-fail/ (accessed October 24, 2025).
- Mantel Group, “Are Agents Really All You Need?,” October 24, 2025, https://mantelgroup.com.au/are-agents-really-all-you-need (accessed October 24, 2025).