
Explainable AI: The need for trust and validation

Blog Post created by jarikoister on May 16, 2017

"Go to your room.”

“Why?”

“Because I said so.”

 

This classic reasoning, oh so familiar to most of us, just does not cut it when it comes to artificial intelligence. When receiving instructions, advice, or even descriptions, questioning the reasons behind a decision is almost essential to understanding it. It is not enough to simply predict an outcome or recommend the best action without showing the connection to the data used to reach it.

 

All decisions are made by analyzing some data and determining an outcome; ‘because I said so’ almost always has an explanation, whether it is shared or not. Knowing the inputs used to reach the final decision is helpful beyond understanding the specific situation. In the context of decisions and analytics, receiving an explanation is valuable for several reasons, including validating a model, trusting its outcomes, and establishing that it meets fairness and regulatory criteria.

 

Being aware of the data used to make a decision is instrumental to delivering consistent outcomes. This data can be used to build predictive models that move beyond descriptive analytics to actionable, prescriptive insights.

 

When data and decision models are exposed, the reason the outcome was reached is suddenly illuminated and the process is no longer a black box.

 

“We are hosting a dinner party and you are a child. We do not want you to interrupt; therefore, you must go to your room.”

 

“We are watching TV and you have not finished your homework. You must finish your homework; therefore, you must go to your room.”

 

The more data used to train a model, the more important it is to make the model explainable. While most personal decisions can be justified with a simple recounting of a thought process, large-scale, automated decisions are made by first analyzing thousands of data points to train a model, and then applying insights from that data to drive desirable outcomes. This type of machine learning efficiently provides answers, but can leave the user ignorant of the data, such as event flows or clusters, that drive the final decision.

 

Explainable AI is useful in various scenarios. Now, I’ll review some examples of explainable AI applications and the corresponding benefits. I look at this from two key perspectives. First, I explore how a data scientist can obtain confidence that a model provides reasonable outcomes. This is called validation. Second, I examine what is necessary for an end user to trust the outcome of a model.

 

[Figure: xAI 2.png]

Validation

To trust the outcome of a model, a human must understand why an algorithm comes to a conclusion. Machine learning can be applied to anything if there is a corresponding data set, but not every algorithm should be trusted. Models study data to identify similarities that, when detected together, lead to a consistent conclusion. However, correlation does not always equal causation. These patterns must be validated to avoid computers grouping similar things together for the wrong reasons.

 

Researchers at Shanghai Jiao Tong University built an automated inference system to analyze face images, with the goal of distinguishing criminals from non-criminals based solely on still pictures. The study used machine learning and claimed that criminals could indeed be identified through facial analysis. While their classification algorithms seemed to work, the researchers could not begin to explain why these people were classified as criminals beyond features such as nose angle or the distance between the eyes. The algorithms found similarities with which to classify criminals, but such data alone is not enough to lock someone away. As machine learning advances, there will be countless algorithms that work for the wrong reasons. People cannot trust just any algorithm without validating the process. To truly trust a model, there needs to be an easy way to extract the reasons behind a decision through efficient validation, which begins with understanding and explaining the model.
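
To make this concrete, here is a minimal sketch of one way a data scientist might probe what a model has actually learned before trusting it. It uses scikit-learn's permutation importance on held-out data; the dataset, the deliberately injected noise feature, and all names below are my own illustrative assumptions, not details from this post.

```python
# A hedged sketch of model validation with permutation importance,
# assuming scikit-learn; dataset and names are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a standard diagnostic dataset and append a deliberately meaningless
# feature; validation should reveal whether the model leans on it.
data = load_breast_cancer()
rng = np.random.default_rng(0)
X = np.column_stack([data.data, rng.normal(size=len(data.target))])
feature_names = list(data.feature_names) + ["random_noise"]

X_train, X_test, y_train, y_test = train_test_split(
    X, data.target, random_state=0
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops;
# large drops mark the inputs the model actually depends on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

If a feature like the injected random_noise column were to rank near the top, that would be a signal the model is keying on coincidence rather than a real pattern: exactly the kind of check the criminality study above was missing.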

 

Trusting Outcomes

Consider a doctor making a diagnosis. As a trained professional, a doctor gains insight from experience. Once they have seen just a couple of kids with chickenpox, they can draw on those cases to quickly diagnose a similar one. Adding artificial intelligence to the loop expands a single doctor’s experience tenfold. A model can be trained on diagnostic data and then assess a single patient’s symptoms and attributes, compare them to historical data, and produce a diagnosis. However, a doctor cannot simply trust an algorithm to make a potentially lifesaving or life-ending diagnosis. The machine learning that produced the diagnosis cannot be a black box.

 

In order to trust the outcome, a doctor must be able to investigate the reasons a diagnosis was made. A human must be able to collaborate with the machine making the decisions. When providing treatment, the results of a machine learning algorithm cannot be taken at face value. An end user must be able to scrutinize the reasons behind a decision, shedding light on the inner workings of the processes that create the artificial intelligence.

 

Traditionally, such systems have been limited to models that can be easily explained. This, however, restricts both the types of models that can be used and the extent of the data that can be leveraged.

 

If the AI is explainable, it is possible for a human to investigate, understand, and ultimately trust a decision even if a sophisticated model such as a neural network is used.

 

[Figure: xAI 1.png]

“Why Should I Trust You?” Explaining the Predictions of Any Classifier. M. T. Ribeiro et al., 2016
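
As an illustration of how an explanation like the one in the figure above can be produced, here is a minimal sketch using the open-source lime package that accompanies the Ribeiro et al. paper, together with scikit-learn. The dataset, model choice, and variable names are assumptions made for the example, not part of this post.

```python
# A hedged sketch of a LIME explanation for a single prediction, assuming the
# open-source `lime` package and scikit-learn; dataset and names are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an otherwise opaque model on a standard diagnostic dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Build a model-agnostic explainer around the training data.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Ask which features pushed the model toward its answer for one patient record.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed weights show, for a single prediction, which inputs pushed the classifier toward its answer, which is exactly the kind of evidence a doctor or other end user needs before trusting the outcome.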

 

Fairness and Regulatory Criteria

Legislation is already being put in place to ensure responsible use of artificial intelligence. The European Parliament passed the EU General Data Protection Regulation (GDPR) on April 14, 2016, and it will be enforced beginning on May 25, 2018. This regulation represents an important step that must be observed as computer-driven algorithms play a larger part in making decisions that affect human beings. Article 22 of the GDPR states that people “have the right not to be subject to a decision based solely on automated processing,” meaning that a data controller must be able to account for the machine’s process before any of its decisions are used to decide a matter with legal effect for a person.

 

While the intent of the law is clear, the execution of explainable AI is neither straightforward nor consistently possible. In order to use machine learning effectively, we need to understand it. As the years go by, more and more systems will be making decisions about people with less and less human intervention.

 

What must be done to use this powerful technology responsibly? Machine learning needs to be made understandable and explainable. Professor Daniel Dennett describes the explainable AI problem well: “If [a machine] can’t do better than us at explaining what it’s doing…then don’t trust it.” DARPA’s Explainable AI (XAI) program outlines an approach to these challenges.

 

This problem reaches everyone: let’s keep the conversation going. I will continue my discussion of explainable AI in a follow-up blog post about the scope of, and approaches to, xAI. In the meantime, how are you going to ensure that AI is explainable? What other risks arise if it is not?
