From Algorithms to Answers: Why Explainability Matters
It really is an interesting time. Artificial intelligence is no longer the stuff of science fiction, but a force that is rapidly gaining momentum, with the potential to impact not only the business world but the very fabric of our society. Let me explain.
The other night I went for a walk, AirPods in, and immediately found myself frustrated with Siri’s inability to locate the artist I requested. I sarcastically asked if she had ever heard of ChatGPT, and she proceeded to open the ChatGPT app on my iPhone. Not quite what I expected, but I quickly found myself walking and talking to OpenAI’s latest and greatest GPT-4o model.
It really did remind me of the movie “Her”.
We had a wide-ranging conversation that I found genuinely interesting, and apart from not being able to interrupt her while she was explaining something to me (often in greater detail than was necessary), it felt somewhat close to a human conversation.
Make no mistake, there's still a way to go before you'd call it a natural conversation, but it's pretty close and a great way to learn new things! No question is too dumb, and the AI's patience is as infinite as the API tokens that your payment plan includes.
As we move into this brave new world and people increasingly trust the output of these Large Language Models (LLMs), there is one thing that must be non-negotiable: transparency in AI decision-making and reasoning.
Beyond simply telling me about the latest US Open tennis results (it looked up the results of the semi-finals as we spoke) and explaining the rules of pickleball to me, AI is already making decisions that affect every aspect of our lives: decisions about who gets a loan, who receives medical treatment, and who's deemed worthy of a job.
These aren't trivial matters; they're the building blocks of our economy, our health, and our society. Yet the frightening truth is that, too often, we're left in the dark about how these decisions are made and what source data was referenced along the way. The algorithms chug away in the background, crunching data and then spitting out decisions with no explanation.
"How does this AI decision-making process work?"
“What reasoning did the model apply to come up with this decision?”
"Why did the system suggest this outcome?"
These are crucial questions that must have simple answers as AI takes on more decision-making responsibilities across the companies and government agencies that we depend on.
The call for explainability in AI isn't just a technical requirement; it's a fundamental principle of accountability. It's about ensuring that the power of these machines is exercised in a way that's fair, transparent, and, above all, understandable.
Understanding AI Explainability
In simple terms, explainability is about making sure that when AI makes a decision, it doesn't just spit out an answer; we need to know the "why" behind that decision, and that's where explainability comes in. This is also why today's language models, such as ChatGPT, aren't quite ready to be let loose on important tasks. And when you're a government department or a company in a regulated industry, explainability isn't optional, it's imperative.
Explainability is the ability of an AI system to break down its thought process in a way that humans can understand. It's about taking those complex algorithms and models and turning them into something more transparent, something we can actually make sense of.
Why does this matter? Trust, plain and simple.
If we're going to let AI have a say in important decisions, whether it's diagnosing a medical condition, approving a loan, or even helping out in a courtroom, we need to trust that it's doing the right thing.
Trust doesn't come from blind faith; it comes from understanding. When we know how an AI system works, we're more likely to trust it. And when we trust it, we're more likely to use it, and use it responsibly.
But here's the catch: not all AI models are easy to explain. Some, like simple decision trees, are pretty straightforward: they're like a flowchart that you can follow from start to finish. But others, like deep learning neural networks, are a whole different beast. These are the heavy hitters of the AI world, capable of processing massive amounts of data and finding patterns that are way beyond human capability. But with great power comes great opacity. These models are often described as "black boxes" because their inner workings are so complex that even the people who build them sometimes struggle to explain how they come to a particular decision.
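To make that contrast concrete, here is a minimal sketch (using scikit-learn and an invented loan-style dataset, purely for illustration) of why a small decision tree is so much easier to explain: its entire decision logic can be printed as human-readable rules, something that has no equivalent for a deep neural network with millions of parameters.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented example data: [income, years_employed, existing_debt]
X = [
    [45_000, 1, 20_000],
    [85_000, 6, 5_000],
    [30_000, 0, 15_000],
    [120_000, 10, 10_000],
]
y = [0, 1, 0, 1]  # 0 = declined, 1 = approved (made up for illustration)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole "thought process" can be traced as a handful of readable rules.
print(export_text(tree, feature_names=["income", "years_employed", "existing_debt"]))
```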
Techniques such as "context grounding" and Retrieval-Augmented Generation (RAG) are being deployed to improve trust when using private instances of large language models such as ChatGPT, Claude and Gemini. These approaches can help by linking answers back to the source data from which they were inferred, but they are unable to provide true explainability, such as the reasoning the model used to arrive at its answer.
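To illustrate that distinction, here is a minimal sketch of the RAG pattern, with an invented in-memory document store and a naive keyword match standing in for a real vector search. The answer can be grounded in, and linked back to, source passages, but the reasoning the model applies to those passages remains hidden inside the model.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_url: str  # where the passage came from
    text: str

# Invented knowledge base of approved source documents.
STORE = [
    Passage("https://example.gov/policy/loans", "Loan applicants need two years of stable income."),
    Passage("https://example.gov/policy/privacy", "Customer data must not leave the jurisdiction."),
]

def retrieve(question: str, k: int = 1) -> list:
    # Naive keyword overlap stands in for a real vector search.
    words = set(question.lower().split())
    return sorted(STORE, key=lambda p: -len(words & set(p.text.lower().split())))[:k]

def build_grounded_prompt(question: str) -> str:
    # The retrieved passages and their source links are handed to the LLM,
    # so the answer can cite its sources; the model's reasoning stays opaque.
    context = "\n".join(f"[{p.source_url}] {p.text}" for p in retrieve(question))
    return f"Answer using only the sources below and cite them.\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What income history does a loan applicant need?"))
```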
This presents a real challenge. On one hand, we want to take advantage of the incredible power of these sophisticated AI models. On the other hand, we need them to be transparent and understandable. It's a balancing act that's becoming more and more critical as AI continues to permeate every corner of our lives.
In short, AI explainability isn't just some abstract concept for tech geeks to debate. It's about making sure that as AI gets smarter and more powerful, it also remains accountable and trustworthy. Because if we can't understand why AI makes the decisions it does, then how can we ever fully trust it? And if we can't trust it, what's the point?
The Role of Explainability in Compliance and Trust
One of the most compelling reasons to prioritise explainability in AI is compliance. Regulations and frameworks such as the European GDPR, the Australian Government's AI Ethics Framework and the NSW Government's AI Assurance Framework place strict requirements on how government departments and other organisations handle data and make automated decisions. Explainability helps ensure that your intelligent process automation is not only effective but also compliant with these requirements.
But beyond compliance, explainability is vital for building trust with your customers and stakeholders. Imagine a scenario where your AI-driven process rejects a customer's application for a service you provide. If you can't explain why the decision was made, the customer will almost certainly lose trust in your brand. On the other hand, if you can provide a clear, understandable explanation, the customer is more likely to accept the decision, even if it's not in their favour.
Challenges of Explainability
Despite its importance, achieving explainability in AI isn't always straightforward. AI models, particularly those based on deep learning, are incredibly complex. They often involve millions of parameters working together in non-linear ways. Explaining how these models arrive at a specific decision can be challenging, even for the experts who design them.
Another challenge is the trade-off between accuracy and explainability. Some of the most accurate AI models, like deep neural networks, are also the least explainable. Simplifying these models to make them more understandable can sometimes reduce their accuracy, leading to a difficult balancing act.
Making Explainability a Priority in Your AI Strategy
So, how do you ensure that explainability is a priority in your AI initiatives? Here are a few strategies:
- Choose the Right Tools: Many AI platforms now offer built-in explainability features. Look for tools that provide insights into how their models work and make this information accessible to non-experts (a small sketch of this kind of tooling follows this list).
- Invest in Education: Educate your team about the importance of explainability. Ensure that they understand not just how to use AI tools, but how to interpret and explain the results.
- Prioritise Simpler Models When Possible: In cases where accuracy is not the only priority, consider using simpler, more explainable models. Knowledge Graphs and rule-based systems are often easier to interpret.
- Integrate Explainability into Your Compliance Strategy: Make explainability a key part of your compliance strategy. Ensure that your AI systems can provide the necessary explanations to meet regulatory requirements.
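As an example of the kind of tooling mentioned in the first point above, here is a minimal sketch using the open-source shap library alongside a scikit-learn model; the features, labels and "approval" framing are invented purely for illustration, and other attribution tools would work similarly.

```python
import numpy as np
import shap  # open-source explainability library (pip install shap)
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # invented feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # invented approval label

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features,
# giving reviewers something concrete to show alongside the decision.
explainer = shap.Explainer(model.predict, X)
explanation = explainer(X[:5])
print(explanation.values)  # per-feature contribution for each of the 5 rows
```

Attribution of this kind shows which inputs drove a particular prediction, which is valuable evidence for reviewers and regulators, even though it still falls short of a full account of the model's reasoning.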
Explainability is Essential
Incorporating explainability into your intelligent process automation efforts is not just about ticking a box; it's about building systems that are transparent, trustworthy, and compliant. As AI continues to play a larger role in business operations, ensuring that these systems are explainable will be crucial for success. Remember, the goal is not just to automate processes but to do so in a way that is understandable, responsible, and ultimately beneficial for your business.
For more insights into AI, intelligent process automation, and how to implement explainable AI, explore our resources or get in touch with our team below to discuss your needs.