The Explainable Machine Learning Challenge Overview

Document created by Makenna.Brei (Advocate) on Dec 22, 2017; last modified Jan 2, 2018.

The Explainable Machine Learning Challenge

FICO teams up with Google, UC Berkeley, Oxford, Imperial, MIT, and UC Irvine to sponsor a contest to generate new research in algorithmic explainability.

 

When business decisions are made based on artificial intelligence, machine learning, and algorithmic models, there is a need for a clear explanation of why and how a particular decision was made. Not only does this help human users understand and trust the decision; it is often required by regulatory and compliance constraints.

 

The Explainable Machine Learning Challenge is designed to encourage continued research and innovation in making machine learning algorithms more explainable and to expand the set of explainable AI methods used today. Without explanations, these algorithms often cannot satisfy regulatory requirements, withstand human scrutiny, or meet customer expectations.

 

Teams will be challenged to create machine learning models with both high accuracy and correct explanations, using a real-world dataset provided by FICO.
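To illustrate the kind of pairing the challenge asks for, below is a minimal sketch of a model that is accurate on its training task while also producing per-decision explanations. The dataset here is synthetic (the FICO dataset is provided only to participants), the feature names are hypothetical, and the use of an inherently interpretable logistic regression is an assumption, not the challenge's prescribed approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a credit-risk dataset; feature names are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
feature_names = ["utilization", "delinquencies", "account_age"]
# Outcome driven by a known linear rule, so an interpretable model can recover it.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    """Per-feature contribution to the log-odds of one decision."""
    contributions = model.coef_[0] * x
    return dict(zip(feature_names, contributions))

# Each decision comes with an additive breakdown over the input features.
print(explain(X[0]))
```

Because the model is linear in its inputs, the per-feature contributions plus the intercept reconstruct the model's decision score exactly, which is one simple notion of a "correct" explanation; post-hoc methods for opaque models trade that exactness for flexibility.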

 

