How to create truly explainable, explainable AI
Explainable AI, or XAI, is becoming a trending topic as users and developers grow more sceptical about whether AI can be trusted. But there are some challenges with how XAI is currently being approached, specifically around complexity and post-hoc justifications. Today, we’ll talk more about how to create truly explainable, explainable AI, as we’ve done with our platforms.
Explainable AI often is not
Current and popular XAI methods, like SHAP and LIME, are still limited. There is also a recurring pattern in deep learning: as models get deeper, they become less transparent as a byproduct. Post-hoc rationalisations are not the same as true interpretability, and a layer of technical jargon over many of these models still confuses end users, which leads to a misalignment with user goals and business needs. However, we’re doing it differently. We have developed methods, especially for image processing, that address the issue of transparency in novel ways.
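For reference, this is roughly what the familiar post-hoc workflow looks like: a minimal SHAP sketch over a placeholder tree model and dataset, not our own approach and not a production example. The attributions are produced after the model is already trained, which is exactly why they shouldn’t be mistaken for interpretability built into the model itself.

```python
# Minimal post-hoc attribution sketch with SHAP; the model and dataset are placeholders.
# The explainer wraps an already-trained model, so the attributions describe its
# behaviour after the fact rather than making the model itself interpretable.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)             # post-hoc: applied to the trained model
attributions = explainer.shap_values(X.iloc[:5])  # per-feature attributions for 5 rows
print(attributions)
```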
How to create truly explainable, explainable AI
“Explainable” means something different to developers, end users, regulators and domain experts. So, it’s important to cater for all of these audiences and also account for the following key XAI markers:
- Transparency – can all stakeholders see how it works?
- Causality – do they understand why it works?
- Consistency – when they put the same prompts in, do they get the same explanation? (A quick check of this is sketched after this list.)
- Comprehensibility – can all stakeholders understand it fully?
- Actionability – can they do something useful with it?
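The consistency marker, in particular, is something you can check mechanically. Below is a hedged sketch of such a check; `explain_fn` is a hypothetical stand-in for whichever explanation method you use, not a function from any particular library.

```python
# A sketch of a consistency check: explain the same input several times and
# measure how far the attributions drift. `explain_fn` is a hypothetical
# stand-in for the explanation method in use (SHAP, LIME, or otherwise).
import numpy as np

def explanation_consistency(explain_fn, x, runs=5):
    """Return the largest pairwise difference between repeated explanations of x."""
    attributions = [np.asarray(explain_fn(x)) for _ in range(runs)]
    return max(np.abs(a - b).max() for a in attributions for b in attributions)

# A deterministic explainer should score 0.0; sampling-based ones (e.g. LIME) often won't.
# drift = explanation_consistency(my_explain_fn, sample_row)
```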
Why do you have to design for XAI?
Explainability is a design principle, not an afterthought. Developers should use inherently interpretable models where possible, such as decision trees and linear models, within modular, human-readable architectures. Best practice is to use domain ontologies and concept bottlenecks for semantic clarity, and to avoid black-box layers in critical paths where your subject matter experts need to evaluate the logic. Sure, there will always be tradeoffs between accuracy and interpretability, but when you bring users on the journey, they’re better informed and can make educated decisions about what actions to take with the output.
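As a concrete illustration of “inherently interpretable where possible”, here is a minimal sketch of a shallow decision tree whose learned rules print as plain text that a subject matter expert can read and challenge directly; the dataset is a stand-in.

```python
# A shallow decision tree as an interpretable-by-design model; the dataset is a
# placeholder. The learned rules print as plain if/else statements that a
# domain expert can review without any post-hoc machinery.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```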
User-centric explanations
Human-centred design is at the heart of XAI. When we create platforms, we’re focused on tailoring explanations to the audience, using a mix of:
- Visual vs verbal explanations
- Varying levels of abstraction (sketched at the end of this section)
- Interactive explanation systems
And then it gets tested… Do users understand and trust the model? If not, it’s back to the refinement phase.
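To give a feel for the “varying levels of abstraction” point, here is a hedged sketch of serving the same attributions to two audiences at different levels of detail; the function name and the attribution values are illustrative placeholders, not part of any real platform.

```python
# A sketch of audience-tailored verbal explanations: the attributions are the
# same, only the level of abstraction changes. The feature names and scores
# here are illustrative placeholders.
def verbal_explanation(attributions, audience="end_user"):
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if audience == "end_user":
        top_feature, _ = ranked[0]
        return f"The result was driven mainly by your {top_feature}."
    # Developers and domain experts get the full ranked breakdown.
    return "\n".join(f"{name}: {score:+.3f}" for name, score in ranked)

example = {"income": 0.42, "age": -0.10, "tenure": 0.05}
print(verbal_explanation(example))                        # plain-language summary
print(verbal_explanation(example, audience="developer"))  # detailed breakdown
```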
XAI and the law
We are big proponents of XAI because it’s fundamental to our values, first and foremost. But we’d be remiss to overlook the regulatory drivers of XAI, too. GDPR’s “right to explanation” has been receiving a lot of attention of late. While not explicit, the language in GDPR around automated profiling and the information held on a data subject could be interpreted as a fundamental right to explainability in data processing contexts. So, building for explainability now means a more future-proof model when XAI is fully codified in law.
We’re pioneering in truly explainable, explainable AI. Have a look at our recent work in the space.