The Fifth of the Six Key Challenges
AI systems exhibit different levels of explainability: some can effectively introspect and explain why their decisions were made, while others can do so only to a limited degree.
If a human being cannot understand an AI system’s accuracy and decision-making process, it is hard to assess the risk of deploying that system in high-liability industries.
What evidence or documentation, produced during the problem definition, design, and development stages, improves interpretation?
There is a popular misconception that machine learning models are necessarily inscrutable black boxes. However, several classes of machine learning models can be examined introspectively and their reasoning explained.
Decision trees and other rule-based algorithms are the classic examples, producing machine learning models that are cascading chains of "if-then" rules.
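As a minimal illustrative sketch (using scikit-learn and its bundled Iris dataset, neither of which is mentioned in this document), the rules learned by a small decision tree can be printed and read directly by a human reviewer:

```python
# Minimal sketch: inspect the "if-then" rules learned by a decision tree.
# scikit-learn and the Iris dataset are assumed here purely for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# export_text prints the cascading if-then rules the model actually uses,
# so a reviewer can trace exactly why a given input receives its prediction.
print(export_text(model, feature_names=list(iris.feature_names)))
```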
DarwinAI is using explainable AI on an intelligent X-ray system powered by Arm processors to better pinpoint early signs of COVID-19 infection in patients.
The field of eXplainable AI (XAI) is steadily developing and introducing new machine learning algorithms for which model introspection and explainability are first-class concerns.29, 30
In a machine learning context, explainability is important for several reasons: it can be used to fully document the software engineering process, to trace the data and training regimen used during model learning, and to evaluate overall system performance.
Moreover, a machine learning model that can explain its own reasoning is much easier to audit for compliance with relevant regulations. And if the output of a machine learning model leads to bad or unwarranted outcomes in the real world, an explainable model makes it possible to pinpoint the reasoning that led to those outcomes so that it can be corrected.
However, despite recent advances in XAI, many state-of-the-art machine learning techniques do, in fact, act as impenetrable black boxes to external observers. This may be because the training algorithm was not designed with model explainability in mind.
Alternatively, an algorithm could, in principle, be designed with explainability in mind, yet the learned model may be so large or complex that a human has little realistic chance of comprehending any explanation of it. Many modern deep learning techniques, such as convolutional neural networks (CNNs),31 long short-term memory networks (LSTMs),32 and others, fall into this pattern.
Unfortunately, these techniques also represent the state of the art in several application areas of modern machine learning. We must therefore recognize that, for the near future, different machine learning systems will offer different levels of explainability. Careful analysis is needed to determine the appropriate level of explainability on an application-by-application basis, and explainable models should be preferred wherever they are suitable and available.
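Where such introspection is not possible, practitioners often fall back on probing a black-box model from the outside after it has been trained. The sketch below illustrates one such post-hoc technique, permutation importance; scikit-learn and the bundled breast-cancer dataset are assumptions made here for illustration only, not a method prescribed by this document:

```python
# Minimal sketch: post-hoc probing of a black-box model with permutation importance.
# scikit-learn and its breast-cancer dataset are assumed purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is treated purely as a black box: we only observe how its accuracy
# degrades when each input feature is randomly shuffled in the test set.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts accuracy the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Such external probes give only an approximation of the model's reasoning; they do not substitute for a model that is explainable by design.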
Seeking Assurance
How did the AI system give this outcome, and what is the reasoning behind the decision?
Why can this AI system be approved for a high-risk or high-liability industry?
Providing Detailed Information
Has the system integrator or service provider considered the explainability requirements for the use case?
Has the system integrator or service provider considered the model chosen in the context of explainability requirements?