Simon Poulton
Partner, Google Cloud
Insights from Google Next 2024, Las Vegas
In April, while at Google Cloud Next, I found myself sitting through yet another presentation about AI (number 30-something), and I couldn’t help but feel torn.
One part of my mind was busy geeking out over the vast possibilities of what was being discussed. Amid that excitement, though, the rational part of my mind couldn’t help but wonder: why do we see so many Generative AI experiments, but comparatively little application in production? Reflecting on recent conversations with business owners in regulated enterprises, it occurred to me that many organisations hesitate to move experiments into production due to a lack of support from the board and senior management.
I realised that most board members and executives I’ve spoken to may not have enough knowledge of Artificial Intelligence, or the intricacies of model creation, to know which questions to ask in order to fulfil their responsibilities in assessing the technology’s risks.
If you’re reading this and you’re pressed for time, the one thing I want you to take away is: the power of AI is changing the game at an incredible rate, and all businesses will feel those effects.
Here is a list of questions that boards and senior executives should ask each other and their technology teams, to ensure some of the major risks are considered.
In the tech world, whenever a new technology rolls onto the scene, there’s an intrinsic risk in doing nothing. In the context of AI, we can understand that risk through this framing:
“If we do nothing, and our competition creates a 20% margin gap through efficiency gains, or a 3x gain in customer service, where does that leave our business?”
The proliferation of AI will impact all businesses in some form.
Boards and senior executives should be asking:
When considering AI solutions, we see time and again that the choice and quality of the models implemented is crucial to delivering expected results. To this end, organisations need to have baselined models and a mature ML Ops strategy (which might take some explaining). Boards and senior execs may not need to know all of the granular details, but they should know the high-level questions to ask, such as:
Additionally, models carry an intrinsic reputational risk of perceived bias, or unethical use of AI and data. Examples of such bias include a banking group’s AI overlooking a minority group, or mobile plans being automatically declined for certain socio-economic groups. Questions and accusations of such practices relate to the datasets that AI models are trained and run on, so it’s important to ensure that the datasets being used are balanced in their representation, and appropriate for use.
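A simple representation check is one place to start. The sketch below is a minimal illustration of that idea, not a production fairness audit; the field name "segment", the group labels, and the 15% tolerance are all hypothetical, for illustration only.

```python
# Minimal sketch: measure each group's share of a training dataset and
# flag groups whose share falls below a chosen tolerance.
# Field names, labels, and tolerance are hypothetical.
from collections import Counter

def representation_report(records, field, tolerance=0.15):
    """Return each group's share of the dataset and whether it falls
    below `tolerance` (i.e. is under-represented)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "under_represented": share < tolerance,
        }
    return report

# Hypothetical training records
training_data = [
    {"segment": "metro"}, {"segment": "metro"}, {"segment": "metro"},
    {"segment": "metro"}, {"segment": "metro"}, {"segment": "metro"},
    {"segment": "regional"}, {"segment": "regional"}, {"segment": "regional"},
    {"segment": "remote"},
]

print(representation_report(training_data, "segment"))
# "remote" holds only 10% of records, so it is flagged for review
```

Real bias assessment goes well beyond raw counts, but even a check this simple gives a board a concrete artefact to ask for.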
To address such questions, boards and senior execs should be asking:
Another point to address here is Adversarial AI – I won’t go into detail in this article, but if you’re interested in reading further about Adversarial AI and our take on combatting it, you can find that here.
One of the key components of models is data. I’m not just talking about the technology underpinning data, but the quality of the data available, i.e. data grounding, where data is validated against real-world observations, standards or requirements to establish credibility and usefulness. Without quality data, it’s impossible to train accurate AI models.
Data is the new crude oil; that is to say, a precious resource with the potential to underpin an incredible amount of capability, but requiring the right methods of refinement to do so. Lack of data maturity is one of the biggest challenges to AI usage, alongside further problems such as understanding what data exists, who owns it, and whether it holds relevance to your business use cases.
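To make the grounding idea concrete, here is a minimal sketch of validating records against real-world constraints before they feed a model. The field names and rules below are hypothetical, for illustration only; real pipelines would use a dedicated data-quality framework.

```python
# Minimal sketch of data grounding: accept only records that satisfy
# real-world constraints, and set aside the rest for review.
# Field names and rules are hypothetical.
RULES = {
    "age":     lambda v: isinstance(v, (int, float)) and 0 <= v <= 120,
    "country": lambda v: v in {"AU", "NZ"},
}

def ground(records):
    """Split records into those that pass every rule and those that fail,
    noting the first failing field for each rejected record."""
    valid, rejected = [], []
    for rec in records:
        bad = next((f for f, ok in RULES.items() if not ok(rec.get(f))), None)
        if bad is None:
            valid.append(rec)
        else:
            rejected.append((rec, bad))
    return valid, rejected

valid, rejected = ground([
    {"age": 34, "country": "AU"},
    {"age": -5, "country": "NZ"},     # impossible age
    {"age": 61, "country": "Mars"},   # unknown country
])
print(len(valid), "valid /", len(rejected), "rejected")
```

The point is less the code than the discipline: every dataset feeding a model should pass explicit, reviewable checks like these.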
Boards and senior executives should be asking:
If you take a step back and look at the broader picture, it’s evident that AI capabilities are becoming increasingly integrated into various tools and technologies. Examples of this are the adoption of natural language query interfaces instead of typical business tools, data analysis to generate automatic metadata, and the automated creation of various media formats such as images, videos, and documents.
As this trend continues, it’s expected that most organisations will incorporate AI into their production workflows within the next year.
Boards and senior executives should be asking:
Traditional methods of threat detection and mitigation may prove inadequate against the sheer scale and sophistication of cybersecurity attacks using AI. Gen AI can be used to create an overwhelming volume of interactions that appear convincingly genuine, like this deepfake attack in Hong Kong. These interactions, whether in the form of text, audio or visuals, can easily deceive unsuspecting individuals and automated systems alike. In response, AI-driven anomaly detection and real-time analysis can help enable swift and precise responses to potential threats.
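As a minimal illustration of the anomaly-detection idea, the sketch below flags points in a metric stream (say, login attempts per minute) that deviate sharply from recent behaviour. Production systems use far richer models; the series and thresholds here are synthetic, for illustration only.

```python
# Minimal sketch: flag observations that sit more than `threshold`
# standard deviations from the mean of the preceding `window` points.
# The traffic series below is synthetic.
import statistics

def anomalies(series, window=10, threshold=3.0):
    """Return indices of points that deviate sharply from the
    rolling window preceding them."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.pstdev(recent) or 1.0  # avoid division by zero
        if abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady traffic, then a sudden spike a static rule might miss
traffic = [50, 52, 49, 51, 50, 53, 48, 50, 52, 51, 400]
print(anomalies(traffic))  # → [10], the index of the spike
```

A rolling baseline like this adapts as normal behaviour drifts, which is the property that static allow/deny rules lack.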
Boards and senior execs should be asking:
If these are all quite difficult questions to answer, we can distil them into three key ones:
If you want to learn more, or are struggling to answer these questions, reach out to our AI and data experts to scale new heights using the power of Data and AI.
Changing how the world works for the better.
© 2024 Mantel Group Pty Ltd (ABN: 38 622 268 240) Mantel Operations Pty Ltd (ABN: 12 656 235 559)