
Understand your AI investment: Ask the right questions

Insights from Google Cloud Next 2024, Las Vegas

In April, while at Google Cloud Next, I found myself sitting through yet another presentation about AI (this being number 30-something), and I couldn’t help but feel torn.

One part of my mind was busy geeking out on the vast possibilities being discussed. Amid that excitement, though, the rational part of my mind couldn’t help but wonder: why do we see so many generative AI experiments, but comparatively few applications in production? Reflecting on recent conversations with business owners in regulated enterprises, it occurred to me that many organisations hesitate to move experiments into production because they lack support from the board and senior management.

I realised that most board members and executives I’ve spoken to may not have enough knowledge of artificial intelligence, and of the intricacies of model creation, to know which questions to ask to fulfil their responsibilities in assessing the technology’s risks.

If you’re reading this and you’re pressed for time, the one thing I want you to take away is: the power of AI is changing the game at an incredible rate, and all businesses will feel those effects.

Here is a list of questions that boards and senior executives should ask each other and their technology teams, to ensure some of the major risks are considered.

The risk vs reward of investment in AI

The risk of doing nothing

In the tech world, whenever a new technology rolls onto the scene, there’s an intrinsic risk in doing nothing. In the context of AI, we can frame that risk like this:

“If we do nothing, and our competition creates a 20% margin gap through efficiency gains, or a 3x gain in customer service, where does that leave our business?”

The proliferation of AI will impact all businesses in some form.

Boards and senior executives should be asking:

  • What is our assessment of risk vs reward for AI in our sector?
  • What is the risk that a new company could disrupt our key competitive advantage with AI?
  • How should we update our strategy and risk appetite as a result?
  • How do we create an AI experimentation and innovation environment within the business, while maintaining robust risk management practices?

The risk of unintended consequences

When considering AI solutions, we see time and again that the models implemented are crucial to delivering the expected results. To this end, organisations need baselined models and a mature MLOps strategy (which might take some explaining). Boards and senior execs may not need to know all of the granular details, but they should know the high-level questions to ask, such as the following (a sketch of what a baseline check can look like follows the list):

  • Who is responsible for ensuring the integrity and security of our baselines and models?
  • What controls do we have in place to ensure compliance with relevant regulations?
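To make that first question more concrete, here is a minimal sketch of what a model baseline check might look like in a promotion pipeline. It is illustrative only: the `baseline_metrics.json` file, the metric names and the tolerance are hypothetical stand-ins for whatever your MLOps tooling actually records.

```python
import json

# Hypothetical baseline file produced by a previous, approved training run,
# e.g. {"accuracy": 0.91, "auc": 0.88}.
BASELINE_PATH = "baseline_metrics.json"

def passes_baseline(candidate_metrics: dict, tolerance: float = 0.01) -> bool:
    """Return True only if the candidate model is no worse than the recorded
    baseline on every tracked metric (within a small tolerance)."""
    with open(BASELINE_PATH) as f:
        baseline = json.load(f)
    return all(
        candidate_metrics.get(metric, 0.0) >= value - tolerance
        for metric, value in baseline.items()
    )

# Example: block promotion to production if the new model regresses.
candidate = {"accuracy": 0.92, "auc": 0.87}
if not passes_baseline(candidate):
    raise RuntimeError("Candidate regressed against baseline; promotion blocked.")
```

The point is less the code than the discipline: an approved baseline exists, it is versioned somewhere auditable, and nothing reaches production without clearing it.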

Additionally, models carry an intrinsic reputational risk of perceived bias, or unethical use of AI and data. Examples of such bias include a banking group’s AI overlooking a minority group, or mobile phone plans being automatically declined for certain socio-economic groups. Questions and accusations of such practices come back to the datasets that AI models are trained and run on, so it’s important to ensure that the datasets being used are balanced in their representation and appropriate for use.
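As a sketch of how a technology team might surface these representation questions, the snippet below compares outcomes across a demographic column. Everything here (the column names, the data, the 20-point threshold) is an assumption for illustration, not a prescribed audit method.

```python
import pandas as pd

# Hypothetical training data: one row per application, with the outcome the
# model learns from and a protected attribute used only for this audit.
df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   1],
})

# Representation: is each group present in roughly the expected proportions?
print(df["group"].value_counts(normalize=True))

# Outcome parity: do approval rates diverge sharply between groups?
rates = df.groupby("group")["approved"].mean()
print(rates)

# A large gap is a prompt for investigation, not an automatic verdict.
if rates.max() - rates.min() > 0.2:
    print("Warning: approval-rate gap exceeds 20 percentage points; review the dataset.")
```

Checks like this do not settle ethical questions on their own, but they make them visible early, while there is still time to fix the data.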

To address such questions, boards and senior execs should be asking:

  • What is our approach to ensuring that we can explain the results our models come to?
  • How do we ensure our models are grounded in our enterprise data? (A sketch of one common approach follows this list.)
  • Do we have an AI/data ethics governance forum in our organisation? If so, who sits on it?
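On the grounding question flagged above, one common pattern is retrieval-augmented generation (RAG): retrieve approved internal documents first, then ask the model to answer only from those sources and cite them. The sketch below shows the shape of that pattern under simplifying assumptions: the documents are invented, the keyword retrieval is deliberately naive (production systems typically use a vector database), and the assembled prompt would be sent to whichever model API you use.

```python
# Minimal retrieval-augmented generation (RAG) sketch with invented documents.
DOCUMENTS = {
    "policy-001": "Refunds are available within 30 days of purchase.",
    "policy-002": "Premium support is included with enterprise plans.",
}

def retrieve(question: str, top_k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    return sorted(
        DOCUMENTS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )[:top_k]

def grounded_prompt(question: str) -> str:
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(question))
    # Instructing the model to answer only from the supplied context, citing
    # source IDs, is what makes the eventual answer explainable and auditable.
    return (
        f"Answer using ONLY these sources, citing their IDs:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("What is the refund window for a purchase?"))
```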

Another point to address here is Adversarial AI – I won’t go into detail in this article, but if you’re interested in reading further about Adversarial AI and our take on combatting it, you can find that here.

The impact of data

One of the key components of any model is data. I’m not just talking about the technology underpinning data, but the quality of the data available, i.e. data grounding: validating data against real-world observations, standards or requirements to establish its credibility and usefulness. Without quality data, it’s impossible to train accurate AI models.
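As a minimal sketch of what such grounding checks can look like in code, the snippet below validates a hypothetical customer-records table for completeness, calendar validity and consistency with a reference list. The columns and rules are invented for illustration; teams often encode checks like these in dedicated data-quality tooling instead.

```python
import pandas as pd

# Hypothetical customer records; in practice these would come from your warehouse.
records = pd.DataFrame({
    "customer_id": [101, 102, 103, None],
    "signup_date": ["2024-01-05", "2024-02-30", "2024-03-12", "2024-03-15"],
    "plan":        ["basic", "premium", "gold", "basic"],
})

issues = []

# Completeness: key identifiers must never be missing.
if records["customer_id"].isna().any():
    issues.append("missing customer_id values")

# Validity: dates must exist on a real calendar ("2024-02-30" does not).
parsed = pd.to_datetime(records["signup_date"], errors="coerce")
if parsed.isna().any():
    issues.append("unparseable signup_date values")

# Consistency: values must come from an agreed reference list.
valid_plans = {"basic", "premium"}
if not set(records["plan"]).issubset(valid_plans):
    issues.append("unknown plan codes")

print("Data quality issues:", issues or "none")
```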

Data is the new crude oil; that is to say, a precious resource with the potential to underpin an incredible amount of capability, but one that requires the right methods of refinement to do so. Lack of data maturity is one of the biggest challenges to AI adoption, alongside related problems such as understanding what data exists, who owns it, and whether it is relevant to your business use cases.

Boards and senior executives should be asking:

  • What controls and mechanisms do we use to ensure the quality of our data?
  • Do we have a culture that promotes generating and acquiring high-quality data?
  • Are our BUs/CGUs held accountable by the rest of the organisation for the quality and usability of their data?
  • Does our organisation have the tools to track this data?

The pace of AI implementation

If you take a step back and look at the broader picture, it’s evident that AI capabilities are becoming increasingly integrated into various tools and technologies. Examples include natural language query interfaces replacing conventional business tool interactions, data analysis that generates metadata automatically, and the automated creation of media formats such as images, videos, and documents.

As this trend continues, it’s expected that most organisations will incorporate AI into their production workflows within the next year.

Boards and senior executives should be asking:

  • What risks are we taking on board via embedded AI in software and technology acquisitions?
  • What are the downstream risks associated with our suppliers’ use of AI?
  • Do we have the necessary controls and mechanisms to use embedded AI from software providers responsibly?

AI attack vector risks

Traditional methods of threat detection and mitigation may prove inadequate against the sheer scale and sophistication of AI-powered cybersecurity attacks. Gen AI can be used to create an overwhelming volume of interactions that appear convincingly genuine, like this deepfake attack in Hong Kong. These interactions, whether text, audio or visual, can deceive unsuspecting individuals and automated systems alike. In response, AI-driven anomaly detection and real-time analysis can enable swift and precise responses to potential threats.
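To illustrate the anomaly-detection idea, here is a minimal sketch using scikit-learn’s IsolationForest on made-up login telemetry. The features, the example values and the contamination rate are assumptions for illustration; a real deployment would train on far richer, known-good traffic and score events as they stream in.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up login telemetry: [hour of day, failed attempts, megabytes transferred].
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10],
    [15, 1, 9], [16, 0, 11], [9, 0, 14], [13, 1, 13],
])

# Train on traffic assumed to be benign. `contamination` is the expected
# fraction of outliers, and is itself a tuning assumption.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_logins)

# Score new events: predict() returns -1 for an outlier, 1 for normal-looking.
new_events = np.array([
    [10, 0, 11],   # ordinary working-hours login
    [3, 25, 900],  # 3am, many failures, huge transfer: likely suspicious
])
print(detector.predict(new_events))  # expected shape of output: [ 1 -1 ]
```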

Boards and senior execs should be asking:

  • Are we effectively utilising AI in our cybersecurity strategy to minimise breach likelihood, or mitigate its impact if one occurs?

If these all feel like difficult questions to answer, we can distil them down to three key ones:

  • Do we have the appropriate relationships and technology to make a digital transformation our journey, not our destination?
  • Can we move at the pace of others in our industry?
  • Do we have the appropriate workforce and talent to responsibly harness the opportunities AI brings to our organisation?

 

If you want to learn more, or are struggling to answer these questions, reach out to our AI and data experts to scale new heights using the power of data and AI.
