Written by: Emma Bromet, Samantha Scott, Edwin Kurniawan, Angel Chan & Nick Koleits, part of the Data & AI team at Mantel
With the rise and proliferation of natural language-powered insights, many businesses are rethinking their BI strategy and considering moving away from traditional dashboards.
Do these new tools represent a complete paradigm shift in BI, or will they face the same challenge dashboards have faced for decades: promising, but rarely delivering, the holy grail of ‘self-serve analytics’?
Below, our team unpacks the arguments for and against the claim that dashboards are dead.
Dashboards are dead
Focusing on interaction
The increasing sophistication of natural language processing (NLP) and large language models (LLMs) allows users to ask questions and get real-time, context-aware answers without navigating a pre-defined dashboard. For many ad-hoc or exploratory questions, a conversational interface is far more efficient and intuitive than filtering and drilling down on a static dashboard. This model can also surface unexpected insights or connect data points in ways a traditional dashboard cannot.
Data democratisation
Natural language interfaces democratise data access for non-technical users. A business user who may be intimidated by complex filters, charts, and data models on a dashboard can simply ask a question in plain English. This eliminates the need for extensive training on a new BI tool and lowers the barrier to entry for data-driven decision-making across an entire organisation.
Improving data governance and consistency
“We often get asked to come in and assess a BI environment prior to a migration, and what we find is hundreds (sometimes thousands) of half-formed dashboards, built to answer a single question, or structure the data in a slightly different way that has not been reviewed by the business. This is where Natural Language Analytics tools can really shine, by giving users the flexibility to go beyond standard questions while still using the same validated data model.”
Samantha Scott, Insights Capability Lead | Mantel
Dashboards often answer only narrow, repetitive questions. This leads to multiple versions for different teams, creating data silos and inconsistencies. The result: a potential data governance nightmare that could be avoided with strong underlying data models.
Natural language interfaces or conversational AI can help by allowing users to ask questions directly, retrieving insights from a single, trusted data source, and reducing the need for multiple, static dashboards.
The flexibility to question
Dashboards struggle to adapt to evolving questions. They’re fast to build for a specific scenario, but once leadership asks “what if?” or digs deeper, the dashboard often falls short, leaving decision-makers without the insights they need and rendering the dashboard itself useless (often leading to data/dashboard swamps). Conversational analytics is better equipped to handle these more exploratory lines of thought.
Shifting the dynamics
“When business users self-serve insights through natural language, it reduces ad-hoc requests and lets analytics teams focus on strategic work.”
Nick Koleits, Lead Data & AI Consultant | Mantel
Natural language insights and personalised data feeds can deliver insights directly to users based on their role, responsibilities, and past queries. Instead of a user having to “pull” information by going to a dashboard, the system can “push” relevant, personalised alerts and summaries to them. This shifts the paradigm from a reactive, “look-up” model to a proactive, “be-informed” model, which is often more valuable for busy executives and managers.
Bringing order to unstructured data
Traditional dashboards and business intelligence tools struggle with unstructured data like customer feedback emails, support tickets, or meeting notes. An LLM, with its natural language capabilities, can process this data and provide insights that would be impossible with a standard dashboard. For example, it could analyse thousands of support tickets and summarise the top five recurring issues, even when no pre-defined metric for them exists.
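To make the support-ticket example concrete, here is a minimal Python sketch of the pattern: pack raw ticket text into a prompt that asks the model for recurring themes. The ticket text is invented, and `call_llm` is a hypothetical stand-in for whatever model API you use, not any particular vendor’s client; the prompt construction is the part shown concretely.

```python
# Illustrative tickets; in practice these would come from a support system.
tickets = [
    "App crashes when exporting a report to PDF.",
    "Cannot reset my password from the mobile app.",
    "PDF export fails with a timeout error.",
]

def build_issue_prompt(tickets: list[str], top_n: int = 5) -> str:
    """Pack raw ticket text into a prompt asking for recurring themes.

    Constraining the model to 'only use the tickets provided' reduces the
    risk of it padding the summary from its training data.
    """
    body = "\n".join(f"- {t}" for t in tickets)
    return (
        f"Summarise the top {top_n} recurring issues in these support tickets. "
        f"Only use the tickets provided.\n\nTickets:\n{body}"
    )

prompt = build_issue_prompt(tickets)
# summary = call_llm(prompt)  # hypothetical model call, provider-specific
```

The interesting design choice sits outside the snippet: whether the summary is generated on demand or pushed on a schedule, which is exactly the “pull” versus “push” distinction discussed earlier.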
Dashboards are alive
The case for consistency
“In cases where a user needs the same information, in the same way, on a daily basis to start their day, a well-designed dashboard will still be far more efficient than asking an LLM the same questions and waiting for it to return each result individually.”
Samantha Scott, Insights Capability Lead | Mantel
Despite new advancements, for many business functions, a consistent, standardised view of key metrics is non-negotiable. Dashboards provide a single source of truth and a common framework for comparison, ensuring everyone is looking at the same numbers in the same way. This is critical for internal governance, regulatory compliance, and day-to-day operational monitoring.
We are also seeing a mismatch between expectations and reality with these tools. Business users will prompt AI for BI tools using business terminology and natural language, which is typically not how we present data in lower layers in our data warehouse. We cannot rely on the underlying LLM to infer how that maps to the data model.
Focusing on semantics
For natural language interfaces to work reliably, data needs to be modelled in business terminology, which means semantic layers will become more important than ever. Conversational AI exacerbates the problem of data silos and weak data governance. When different departments have their own definitions for key metrics (e.g., “customer” means something different to sales than to marketing), an LLM has no way of knowing which definition to use. This leads to conflicting answers and a breakdown of trust in the system. While a traditional dashboard forces a single, standardised view, an LLM lets users ask ad-hoc questions that expose these underlying inconsistencies.
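One way to picture a semantic layer in this role is as a registry of governed definitions that the natural language interface must resolve business terms against, rather than letting the model guess. The sketch below is illustrative Python with hypothetical table, metric and owner names; the key behaviour is that an unknown term fails loudly instead of being improvised.

```python
# Minimal sketch of a semantic-layer lookup: each business term maps to a
# single vetted SQL definition, so a natural-language question is grounded
# in the governed model. All names here are illustrative.
SEMANTIC_LAYER = {
    "active customer": {
        "table": "dim_customer",
        "sql": "SELECT COUNT(*) FROM dim_customer WHERE churned = 0",
        "owner": "sales",  # the team that signed off on this definition
    },
    "monthly revenue": {
        "table": "fct_orders",
        "sql": ("SELECT strftime('%Y-%m', order_date) AS month, SUM(amount) "
                "FROM fct_orders GROUP BY 1"),
        "owner": "finance",
    },
}

def resolve_metric(term: str) -> str:
    """Return the governed SQL for a business term, or fail loudly.

    Refusing unknown terms is the point: the model should never be left
    to invent its own definition of 'customer'.
    """
    entry = SEMANTIC_LAYER.get(term.lower())
    if entry is None:
        raise KeyError(f"'{term}' is not a governed metric; add it to the semantic layer")
    return entry["sql"]
```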
Avoiding data overloads
“With Natural Language insights tools, we can’t simply point the model at the data and shoot – we need to give agents and LLMs the necessary context to understand the data and make good, informed decisions to generate correct SQL. This means rich business metadata, well-named columns, semantic layers with business terminology embedded, and data modelled in a way that makes querying simple for the LLM (more specifically, that reduces the number of potential points of failure).”
Edwin Kurniawan, Principal Data Consultant | Mantel
In addition, we don’t want to overload LLMs and agents with too much data. We see a need for domain- or function-aligned agents that specialise in querying data from models purpose-built for specific use cases. For example, you don’t want to point an agent at an entire enterprise data model; that’s far too large and complex. The approach needs to be more targeted: build semantic models for specific groups of semantically related use cases (e.g. a semantic model for HR data). High-quality business metadata (e.g. column descriptions, semantic meaning, example values, semantically related terminology) will be crucial for successful AI for BI implementations.
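As a concrete (and entirely hypothetical) illustration of this kind of metadata, a purpose-built HR semantic model might carry column descriptions, example values and related terminology, flattened into the context an agent receives before it writes any SQL. Table and column names below are invented for the example.

```python
# Illustrative column-level metadata for a domain-aligned HR agent:
# descriptions, example values and related business terms, serialised
# into a prompt preamble. All names are hypothetical.
HR_SEMANTIC_MODEL = {
    "table": "hr_headcount",
    "description": "One row per employee per month, HR domain only.",
    "columns": [
        {
            "name": "employment_type",
            "description": "Contract category of the employee.",
            "example_values": ["permanent", "fixed-term", "casual"],
            "related_terms": ["contractor", "staff type"],
        },
        {
            "name": "fte",
            "description": "Full-time-equivalent fraction, 0.0 to 1.0.",
            "example_values": [1.0, 0.6],
            "related_terms": ["headcount", "full-time equivalent", "FTE"],
        },
    ],
}

def build_context(model: dict) -> str:
    """Flatten the semantic model into a prompt preamble for the agent."""
    lines = [f"Table {model['table']}: {model['description']}"]
    for col in model["columns"]:
        lines.append(
            f"- {col['name']}: {col['description']} "
            f"(examples: {col['example_values']}; also known as: {col['related_terms']})"
        )
    return "\n".join(lines)
```

Keeping the model scoped to one domain keeps this preamble small, which is the practical payoff of domain-aligned agents over a single enterprise-wide one.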
Preventing data hallucination
“While Natural Language Insights tools improve accessibility, users still need basic SQL and data structure knowledge. Without it, small logic errors can confidently produce wrong insights.”
Nick Koleits, Lead Data & AI Consultant | Mantel
Another important point is that business users need to understand the SQL statements generated and check that they make sense. A core challenge with LLMs is their tendency to “hallucinate”: to generate plausible-sounding but factually incorrect information. This is especially dangerous when dealing with business-critical data. If a user asks a question the system can’t answer from the underlying data, the LLM might invent an answer based on its training data, which could be outdated, irrelevant, or simply wrong. This risk is mitigated, but not eliminated, by connecting the LLM to an internal knowledge base.
If we make these tools accessible to broader business users who lack the technical ability to validate the SQL that is generated and executed, they will simply assume the outputs are correct. Key skills that will become more important for the non-technical business user include prompt engineering, understanding data models and understanding SQL. At the current level of AI maturity, these are non-negotiable.
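A lightweight guardrail can catch some of these failures automatically, before any human review. The sketch below uses SQLite’s `EXPLAIN` to compile generated SQL without running it, so syntax errors and hallucinated tables or columns fail fast. It is an illustrative pattern under a toy schema, not how any particular AI for BI product works, and it cannot catch SQL that is valid but logically wrong, which is exactly why the human check above still matters.

```python
import sqlite3

def validate_generated_sql(conn: sqlite3.Connection, sql: str) -> tuple[bool, str]:
    """Check LLM-generated SQL before execution.

    EXPLAIN compiles the statement without running it, so syntax errors and
    references to non-existent tables or columns are caught up front. This
    is a guardrail sketch, not a substitute for sanity-checking the logic.
    """
    if not sql.lstrip().lower().startswith("select"):
        return False, "only SELECT statements are allowed"
    try:
        conn.execute("EXPLAIN " + sql)
    except sqlite3.Error as exc:
        return False, str(exc)
    return True, "ok"

# Example: a toy schema and one hallucinated column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
print(validate_generated_sql(conn, "SELECT region, SUM(amount) FROM orders GROUP BY region"))
print(validate_generated_sql(conn, "SELECT revenue FROM orders"))  # hallucinated column
```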
Speed, adjustments, and security
“As someone who built dashboards as an Analyst, then later used dashboards as an Executive, I don't want dashboards to go away completely. I need to be able to see my critical business metrics at a glance. What I'd like is a tool to complement the dashboards, letting me interrogate the data more flexibly when I see anomalies or points of interest. If I can use natural language rather than writing my own SQL - ideally while on the go, through my mobile - it would be like having an analyst at my fingertips.”
Angel Chan, Data Enablement Leadership | Mantel
Dashboards excel at providing a quick, high-level overview of business performance. A user can scan a well-designed dashboard in seconds to see if key performance indicators (KPIs) are on track, identify anomalies, and understand the overall health of an operation. This is far more efficient for routine checks than a conversational interface, which requires multiple back-and-forth interactions to get a comprehensive view.
Many business users are accustomed to and prefer the visual, organised structure of a dashboard. They have built a mental model around where to find specific metrics and how to interpret the charts. Disrupting this established workflow can increase cognitive load and resistance to change. For a user who needs to check the same set of metrics every day, navigating a familiar dashboard is often the most efficient and comfortable process.
Customising and limiting what people can and can’t see is much easier to achieve on traditional dashboards than in conversational analytics tools. Since dashboards are often built for specific, narrow use cases, making adjustments – whether it’s managing user permissions, changing filters, or modifying data views – is generally more intuitive, flexible and straightforward than in AI-driven BI tools. For example, tools like Power BI ship with their own security model (including row-level security) to handle this; we doubt any AI for BI tool right now provides the same security measures natively.
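To illustrate what a BI tool’s row-level security gives you out of the box, here is the idea reduced to a few lines of Python and SQLite: every query a user runs is filtered through a role mapping before any data is returned. The schema, figures and user-to-region mapping are invented for the example; a conversational tool generating free-form SQL has to enforce an equivalent filter on every query it produces, which is the hard part.

```python
import sqlite3

# Toy sales table; values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('APAC', 100), ('EMEA', 200), ('APAC', 50);
""")

# Hypothetical role mapping, analogous to a BI tool's row-level security rules.
USER_REGION = {"analyst_apac": "APAC", "analyst_emea": "EMEA"}

def query_as(user: str) -> float:
    """Total sales visible to this user under the row-level filter."""
    region = USER_REGION[user]
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM sales WHERE region = ?", (region,)
    ).fetchone()
    return row[0]
```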
Our view: dashboards aren’t dead… yet
We don’t believe that dashboards are dead, at least not yet. Instead, we believe that AI for BI will help solve the dashboard glut problem, but only if managed effectively.
Every data platform or reporting migration we’ve worked on faces a common issue: dashboard proliferation. Power and business users create one-off reports for their specific needs, then discard them after just a few uses. This leads to a massive collection of ungoverned and untrusted reports.
AI for BI helps to solve this ad-hoc problem. In its current state, AI for BI is great for answering simple one-off questions, provided the underlying data is clean and well-modelled, which is a big challenge (and rarely the case).
We believe vetted, centrally built dashboards will always have their place because they are used consistently. Why use AI for BI to replicate the functionality of a dashboard when the same metrics and visualisations are reported every day? Using AI to replace dashboards is overkill for static, consistent reporting use cases, where user familiarity, speed and consistency of results are important.
Instead, we see what is happening in the BI space mirroring what is happening with agents. As Samuel Irvine Casey explored previously, everyone assumes their use cases are better with an agentic solution but, in fact, some might benefit more from deterministic code. It’s possible that down the track AI might get so good that it could replace dashboards, but we are nowhere near that right now.