If you haven’t already heard, FICO is collaborating with Google, Berkeley, Oxford, Imperial, MIT and UC Irvine to host an Explainable Machine Learning (xML) Challenge.
FICO has been at the forefront of innovation in explainable AI for the past twenty-five years. Continuing that work, we are inviting teams to build machine learning models that are both highly accurate and explainable, based on a real-world financial dataset. The explanations will be qualitatively evaluated by data scientists at FICO and used to generate new research in the area of algorithmic explainability.
It is important that data scientists can understand and interpret the models they fit: they must examine the models for bias, make improvements, and make the business case for adopting them. However, the black-box nature of many current machine learning algorithms makes them neither interpretable nor explainable. Without explanations, these otherwise powerful algorithms cannot meet regulatory requirements and, as a result, cannot be adopted by financial institutions.
More sophisticated machine learning techniques that offer both accuracy and explainability should enable closer collaboration between humans and machines. We encourage you and your team to enter this challenge and help improve the understanding and interpretation of complex machine learning models.
Get started with the details, dataset and rules on the xML Challenge page.