
From Algorithms to Answers: Why Explainability Matters

It really is an interesting time. Artificial intelligence is no longer the stuff of science fiction, but a force that is rapidly gaining momentum, with the potential to impact not only the business world but the very fabric of our society. Let me explain.

The other night I went for a walk, AirPods in, and immediately found myself frustrated with Siri’s inability to locate the artist I requested. I sarcastically asked if she had ever heard of ChatGPT, and she proceeded to open the ChatGPT app on my iPhone. Not quite what I expected, but I quickly found myself walking and talking to OpenAI’s latest and greatest GPT-4o model.

It really did remind me of the movie “Her”.

We had a wide-ranging conversation that I found genuinely interesting, and apart from not being able to interrupt her while she was explaining something to me (often in more detail than was necessary), it felt somewhat close to a human conversation.

Make no mistake, there’s still a way to go before you’d call it a natural conversation, but it’s pretty close and a great way to learn new things! No question is too dumb, and the AI’s patience is as infinite as the API tokens that your payment plan includes 😉

And as we move into this brave new world, and people increasingly trust the output of these Large Language Models (LLMs), there’s one thing that must be non-negotiable—transparency in AI decision-making and reasoning.

Beyond simply telling me about the latest US Open tennis results (it looked up the results of the Semi Finals as we spoke) and explaining the rules of Pickleball to me, AI is already making decisions that affect every aspect of our lives—decisions about who gets a loan, who receives medical treatment, and who’s deemed worthy of a job.

These aren’t trivial matters; they’re the building blocks of our economy, our health, and our society. Yet, the frightening truth is that too often, we’re left in the dark about how these decisions are made, and what source data was referenced along the way. The algorithms chug away in the background, crunching data and then spitting out decisions with no explanation.

“How does this AI decision-making process work?”

“What reasoning did the model apply to come up with this decision?”

“Why did the system suggest this outcome?”

These are crucial questions that must have simple answers as AI takes on more decision-making responsibilities across the companies and government agencies that we depend on.

The call for explainability in AI isn’t just a technical requirement; it’s a fundamental principle of accountability. It’s about ensuring that the power of these machines is exercised in a way that’s fair, transparent, and, above all, understandable.

Understanding AI Explainability

In simple terms, explainability is about making sure that when AI makes a decision, it doesn’t just spit out an answer with no justification. We need to know the “why” behind those decisions, and that’s where explainability comes in; it’s also why the language models of today, such as ChatGPT, aren’t quite ready to be let loose on important tasks. And when you’re a government department or a company in a regulated industry, explainability isn’t optional – it’s imperative.

Explainability is the ability of an AI system to break down its thought process in a way that humans can understand. It’s about taking those complex algorithms and models and turning them into something more transparent, something we can actually make sense of.

Why does this matter? Trust, plain and simple.

If we’re going to let AI have a say in important decisions—whether it’s diagnosing a medical condition, approving a loan, or even helping out in a courtroom—we need to trust that it’s doing the right thing.

Trust doesn’t come from blind faith; it comes from understanding. When we know how an AI system works, we’re more likely to trust it. And when we trust it, we’re more likely to use it, and use it responsibly.

But here’s the catch: not all AI models are easy to explain. Some, like simple decision trees, are pretty straightforward—they’re like a flowchart that you can follow from start to finish. But others, like deep learning neural networks, are a whole different beast. These are the heavy hitters of the AI world, capable of processing massive amounts of data and finding patterns that are way beyond human capability. But with great power comes great opacity. These models are often described as “black boxes” because their inner workings are so complex that even the people who build them sometimes struggle to explain how they come to a particular decision.
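
To make this concrete, here is a minimal sketch in Python using scikit-learn’s DecisionTreeClassifier. The tiny loan-style dataset and feature names are invented purely for illustration, but the printed output shows why a shallow tree reads like a flowchart while a deep neural network offers nothing comparable.

```python
# A minimal sketch of why simple models are easier to explain.
# The toy "loan" dataset and feature names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [income_k, credit_score, existing_debt_k]
X = [
    [40, 580, 20],
    [85, 720, 5],
    [60, 650, 15],
    [120, 790, 2],
    [30, 540, 25],
    [95, 700, 8],
]
y = [0, 1, 0, 1, 0, 1]  # 0 = decline, 1 = approve

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire model can be printed as a human-readable flowchart of rules.
print(export_text(tree, feature_names=["income_k", "credit_score", "existing_debt_k"]))

# A deep neural network has no equivalent: its "rules" are spread across
# millions of weights, which is exactly what makes it a black box.
```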

Techniques such as “Context Grounding” and “Retrieval Augmented Generation” (RAG) are being deployed to improve trust when using private instances of Large Language Models such as ChatGPT, Claude and Gemini. These approaches can help by linking answers back to the source data from which they were inferred, but they are unable to provide true explainability – such as the reasoning the model used to arrive at its answer.
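
As a rough illustration of the RAG pattern (not any particular vendor’s implementation), the sketch below retrieves the most relevant passages from a tiny in-memory “document store” and builds a grounded prompt that asks the model to cite its sources. The documents, their IDs and the naive word-overlap scoring are all placeholders for a real vector database and embedding model.

```python
# A minimal sketch of the RAG idea: retrieve relevant source passages, then
# ground the model's answer in them so each claim can be traced to a source.
# The snippets and the scoring method are placeholders, not a real system.

DOCUMENTS = {
    "policy-104": "Loan applications are declined when the debt-to-income ratio exceeds 45%.",
    "policy-221": "Applicants with a credit score below 550 require manual review.",
    "faq-012": "Approved applicants are notified within two business days.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below and cite the source id for each claim.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Why was my loan application declined?"))
# The LLM's answer can now link back to [policy-104] and friends, but note the
# limit described above: citations show *what* was used, not *how* the model reasoned.
```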

This presents a real challenge. On one hand, we want to take advantage of the incredible power of these sophisticated AI models. On the other hand, we need them to be transparent and understandable. It’s a balancing act that’s becoming more and more critical as AI continues to permeate every corner of our lives.

In short, AI explainability isn’t just some abstract concept for tech geeks to debate. It’s about making sure that as AI gets smarter and more powerful, it also remains accountable and trustworthy. Because if we can’t understand why AI makes the decisions it does, then how can we ever fully trust it? And if we can’t trust it, what’s the point?

The Role of Explainability in Compliance and Trust

One of the most compelling reasons to prioritise explainability in AI is compliance. Regulations and frameworks like the European GDPR, the Australian Government’s AI Ethics Framework and the NSW Government’s AI Assurance Framework place strict requirements on how government departments and other organisations handle data and make automated decisions. Explainability helps ensure that your intelligent process automation is not only effective but also compliant with these requirements.

But beyond compliance, explainability is vital for building trust with your customers and stakeholders. Imagine a scenario where your AI-driven process rejects a customer’s application for a service you provide. If you can’t explain why the decision was made, the customer will almost certainly lose trust in your brand. On the other hand, if you can provide a clear, understandable explanation, the customer is more likely to accept the decision, even if it’s not in their favour.

Challenges of Explainability

Despite its importance, achieving explainability in AI isn’t always straightforward. AI models, particularly those based on deep learning, are incredibly complex. They often involve millions of parameters working together in non-linear ways. Explaining how these models arrive at a specific decision can be challenging, even for the experts who design them.

Another challenge is the trade-off between accuracy and explainability. Some of the most accurate AI models, like deep neural networks, are also the least explainable. Simplifying these models to make them more understandable can sometimes reduce their accuracy, leading to a difficult balancing act.

Making Explainability a Priority in Your AI Strategy

So, how do you ensure that explainability is a priority in your AI initiatives? Here are a few strategies:

  1. Choose the Right Tools: Many AI platforms now offer built-in explainability features. Look for tools that provide insights into how their models work and make this information accessible to non-experts.
  2. Invest in Education: Educate your team about the importance of explainability. Ensure that they understand not just how to use AI tools, but how to interpret and explain the results.
  3. Prioritise Simpler Models When Possible: In cases where accuracy is not the only priority, consider using simpler, more explainable models. Knowledge graphs and rule-based systems are often easier to interpret (see the sketch after this list).
  4. Integrate Explainability into Your Compliance Strategy: Make explainability a key part of your compliance strategy. Ensure that your AI systems can provide the necessary explanations to meet regulatory requirements.
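
As a rough illustration of point 3, here is a minimal rule-based check in Python whose output carries its own explanation. The eligibility rules and thresholds are invented for this example and are not drawn from any real lending policy.

```python
# A minimal sketch of a rule-based decision that explains itself.
# The rules and thresholds below are invented for illustration only.

def assess_application(income: float, credit_score: int, existing_debt: float) -> dict:
    reasons = []
    if credit_score < 600:
        reasons.append(f"credit score {credit_score} is below the minimum of 600")
    if existing_debt / income > 0.45:
        reasons.append("debt-to-income ratio exceeds 45%")
    approved = not reasons
    return {
        "decision": "approved" if approved else "declined",
        # Every decision ships with the exact rules that produced it,
        # which is the explanation a customer or regulator can be given.
        "reasons": reasons or ["all eligibility rules satisfied"],
    }

print(assess_application(income=52_000, credit_score=580, existing_debt=30_000))
# {'decision': 'declined', 'reasons': ['credit score 580 is below the minimum of 600',
#  'debt-to-income ratio exceeds 45%']}
```

The trade-off is the one discussed earlier: a handful of hand-written rules will never match a large model’s accuracy on messy real-world data, but every outcome is fully auditable.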

Explainability is Essential

Incorporating explainability into your intelligent process automation efforts is not just about ticking a box; it’s about building systems that are transparent, trustworthy, and compliant. As AI continues to play a larger role in business operations, ensuring that these systems are explainable will be crucial for success. Remember, the goal is not just to automate processes but to do so in a way that is understandable, responsible, and ultimately beneficial for your business.

For more insights into AI, intelligent process automation, and how to implement explainable AI, explore our resources or get in touch with our team to discuss your needs.
