
FICO Community Articles


Abstract

After observing a steady upward trend in the US credit card delinquency rate, even as delinquencies remain near historic lows, I set out to investigate account default rates by analyzing the performance of current accounts in the lower bands of FICO Score and behavior score. The results point to an opportunity for lenders to shift to an early collections intervention strategy via improved customer management. In this blog, I explain the study in three parts: the issue, the analysis, and the results and recommendations.

 

The Issue

US lenders are seeing an increase in credit card delinquencies over recent quarters, fueled in part by an overall rise in total household debt. From Q2 2017 to Q3 2017, total household debt in the US increased by $116 billion to $12.96 trillion. Debt levels increased across mortgages, student loans, auto loans, and credit cards; however, the largest percentage increase (3.1%) was in credit card debt. In Q3 2017, the Federal Reserve Board of Governors reported a credit card delinquency rate of 2.49% (seasonally adjusted) for the 100 largest banks, continuing a steady upward trend for the sixth consecutive quarter.

 

Given that these figures are far from the levels seen during the 2008-2009 financial crisis (when 30+ day card delinquencies approached 7%), and given a strong economy with near-full employment, lenders aren’t ready to hit the panic button just yet. These historic lows for card delinquency have led some lenders to accept higher-risk applicants, contributing to the slight delinquency uptick.

 

Curious about this continual increase, I examined opportunities related to early collections intervention for a prominent US card lender with approximately 6 million accounts as a means to mitigate risk introduced via aggressive balance growth programs instituted over the prior two years.

 

Most card issuers today have early-stage collection efforts that initiate after the billing cycle for the highest risk delinquent customer segments, leaving a period of four to five days between a missed payment due date and the onset of customer contact strategies.

Intuitively, failure to accelerate customer contact in these high-risk segments represents a missed opportunity. This point of contact may ultimately be the tipping point in collecting the debt and keeping an account from going into default.

 

A relatively simple analytic exercise can help lenders identify those customers who not only have a very high likelihood of rolling delinquent subsequent to a missed payment due date, but also have a significantly elevated probability of reaching a severe stage of default in the following six to twelve months.

 

The Analysis

Leveraging FICO’s Analytic Modeler Decision Tree Professional, a component of the FICO® Decision Management Suite, I studied performance for non-delinquent accounts that had drifted into lower bands of FICO Score and behavior score. Specifically, my goal was to quantify account default rates, defined as 3+ cycles past due, bankrupt, or charged-off over the subsequent twelve months (i.e., the “bad” performance definition). The analysis showed that the lowest 10% of FICO Scores and behavior scores captures only 5%(*) of the total current account population, but identifies over 40%(**) of the future bad accounts originating from this current account segment.

 

[Table 1: % of total accounts and % of total bads by FICO Score band and behavior score band]

(*) The sum of the “% of total”  in the upper left highlighted 4 quadrants: lowest 5% and the lowest 6-10% (2% + 1% + 1% + 1% = 5%).

(**) The sum of the “% of total bads”  in the upper left highlighted 4 quadrants: lowest 5% and the lowest 6-10% (24% + 8% + 5% + 3% = 40%)
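
To make the capture-rate arithmetic concrete, here is a minimal Python sketch; the band labels and cell values are stand-ins that mirror the footnoted quadrants, not the actual study data.

# Capture-rate arithmetic behind the highlighted quadrants of Table 1.
# Cell values are illustrative stand-ins mirroring the footnotes above.

# Each cell: (FICO Score band, behavior score band) ->
#   (% of total current accounts, % of total future bads)
cells = {
    ("FICO lowest 5%", "behavior lowest 5%"): (2, 24),
    ("FICO lowest 5%", "behavior 6-10%"):     (1, 8),
    ("FICO 6-10%",     "behavior lowest 5%"): (1, 5),
    ("FICO 6-10%",     "behavior 6-10%"):     (1, 3),
}

pct_of_accounts = sum(p for p, _ in cells.values())   # 5% of current accounts
pct_of_bads = sum(b for _, b in cells.values())       # 40% of future bads

print(f"Lowest-10% x lowest-10% segment: {pct_of_accounts}% of accounts, "
      f"{pct_of_bads}% of future bads")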

 

This study leveraged cycle-based data for current accounts (accounts in good standing) as opposed to isolating only those accounts that miss their required minimum payment, which would have resulted in considerably higher bad rates across defined score intervals.  However, the analysis validates the appropriateness and necessity of accelerating collections intervention – at the missed payment due date – within high-risk segments rather than initiating customer contact after the grace period.

 

The Results and Recommendations

Lenders will need to shift their business practices and consider operational implications when making these strategic decisions, and they need to be vigilant about tracking performance shifts across key portfolio segments. Collections capacity planning and adequate staff training are critical to success.

 

From a tactical sense, adaptive control technologies such as FICO’s TRIAD Customer Manager and Strategy Director are easily configured to enable “pre-delinquent” account treatment which allows users to orient customer contact strategies around the payment due date. Notably, the introduction of self-resolution strategies via mobile application, automated voice, SMS, and email has gained remarkable traction in recent years as lenders strive to match engagement channel to customer preferences.

 

Both lenders and customers have benefitted from the proliferation of digital communications.  Lenders have observed significant improvements in delinquent roll rates and dramatically lower collections-related expenses while customer satisfaction has improved. However, customers who do not react to these automated payment reminders and self-resolution channels are often signaling a state of financial distress that warrants rapid live-agent intervention.

 

I’m happy to field questions about this study and its findings; comment here on the blog or contact Fair Isaac Advisors if you want to discuss innovative ways to create effective risk mitigation strategies that include early intervention tactics to abate delinquencies. You can also reach FICO experts and other users in the TRIAD User Forum.

My dishwasher tabs at home are promoted as a "three-phase system." They feature components to carry out three different tasks of cleaning dishes: softening the water (essential for the next two phases), dissolving grease (the main bit I, as a user, am interested in) and finally, some rinse aid (for a nice finish). While solving optimization problems is not a typical household chore, a Mixed-Integer Programming (MIP) solver, like FICO Xpress, can be viewed as a three-phase system.

 

The solution process consists of three distinct phases:

  1. Finding a first feasible solution: essential for the next two phases.
  2. Improving the solution quality: the main bit I, as a user, am interested in.
  3. Proving optimality: a nice finish.

 

In a recent research paper, together with our research collaboration partners from MODAL/ZIB, we show that the entire solving process can be improved by adapting the search strategy with respect to phase-specific aims using different control tunings. Since MIP is solved by a tree search algorithm, it comes as no surprise that the major tweaks were done for tree search controls. During the first phase, we used special branching and node-selection rules to quickly explore different regions of the search tree. The second phase used modified local search heuristics. During the third phase, we switched to pure depth-first-search, disabled all primal heuristics, and activated local cutting plane separation to aggressively prune the search tree.
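
To make the idea of phase-specific tree-search controls concrete, here is a loose Python sketch of a node-selection rule that switches with the solving phase; the selection rules are simplified illustrations, not the actual Xpress controls described in the paper.

# Loose illustration of phase-dependent node selection in a tree search.
# The selection rules are simplified stand-ins, not the actual Xpress controls.

def select_next_node(open_nodes, phase):
    """open_nodes: list of dicts with 'depth' and 'estimate' (predicted
    objective value). Lower estimates are better (minimization)."""
    if phase == "feasibility":
        # Dive deep and prefer promising estimates to reach an
        # integer-feasible leaf quickly.
        return min(open_nodes, key=lambda n: (-n["depth"], n["estimate"]))
    elif phase == "improvement":
        # Best-estimate search: chase nodes likely to contain better solutions.
        return min(open_nodes, key=lambda n: n["estimate"])
    else:  # "proof"
        # Pure depth-first search to close the tree with little memory overhead.
        return max(open_nodes, key=lambda n: n["depth"])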

 

[Figure: Solving phases of a MIP solution process. Point t1 marks the transition from feasibility to improvement; t2 marks the transition from improvement to proof.]

 

Additionally, we provided criteria to predict the transition between the individual phases. Just like a dishwasher, a MIP solver is a bit of a black box; sometimes it's hard to tell what's going on inside. In our case, the challenging bit was to predict the transition from the improvement phase to the proof phase. How do you know that the best solution found so far is optimal when you have not yet proven optimality? Well, you can't, but we found a good approximate criterion: we define the set of rank-1 nodes as the set of all nodes with a node estimate at least as good as the best-evaluated node at the same depth. The term rank-1 is motivated by ranking the nodes of a depth level of the search tree by an estimate of their expected solution quality, with the most promising nodes being ranked first. We trigger the transition to the last solving phase when the set of rank-1 nodes becomes empty, loosely speaking: when we start processing nodes of poor quality.
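
Here is a small, self-contained Python sketch of that rank-1 criterion; the bookkeeping in a real solver is more involved, so treat the data structures as illustrative.

# Sketch of the rank-1 phase-transition criterion (minimization assumed).
# 'best_estimate_by_depth' records the best estimate among nodes already
# evaluated at each depth; 'open_nodes' are unexplored (depth, estimate) pairs.

def rank1_set_is_empty(open_nodes, best_estimate_by_depth):
    """Return True when no open node has an estimate at least as good as the
    best-evaluated node at its depth -- the trigger for the proof phase."""
    for depth, estimate in open_nodes:
        best = best_estimate_by_depth.get(depth, float("inf"))
        if estimate <= best:          # at least as good -> still a rank-1 node
            return False
    return True

# Example: all remaining open nodes are worse than the best seen at their
# depth, so the solver would switch to the proof phase.
open_nodes = [(3, 12.5), (4, 11.8)]
best_estimate_by_depth = {3: 10.0, 4: 11.0}
print(rank1_set_is_empty(open_nodes, best_estimate_by_depth))  # True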

 

The combination of the named phase-transition criterion and the tuned controls for each phase led to an average speed-up of 14% on hard MIP instances. We also found that when you are only interested in one of the phases, like finding any feasible solution or finding good solutions without proving optimality, there is room for improvement through control tuning. You can look forward to an upcoming blog post on our performance tuner.

 

If you are interested in the math behind all of this, the article has been published in Optimization Methods and Software. A preprint is available here. The work is based on results from Gregor Hendel's Master thesis, which was supervised by me, Timo Berthold. For the experiments, we used the academic software SCIP.

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

 

I hope you enjoyed the read. I have to go home now and do the dishes.

Abstract: Learn about FICO's latest innovations in prescriptive analytic software and solutions at our webinar on 2/14 at 11am PT / 2pm ET. REGISTER NOW!

 

According to Gartner, the prescriptive analytics software market will reach $1.57 billion by 2021. Despite this tremendous growth, most analytics programs fail to deliver business results. In one McKinsey survey, 86 percent of executives said their organizations were “only somewhat effective” at meeting the primary objective of their data and analytics programs.

                   

Why is that? What is the secret to success? FICO goes beyond experimentation and delivers the future of prescriptive analytics today. Our prescriptive analytics solutions have helped deliver hundreds of millions in incremental revenue and have helped power the world’s most efficient companies, such as Southwest, Shell, and Nestle.

 

Join us at 11am PT/2pm ET on February 14th to learn how FICO Xpress Insight helps you bridge the gap between analytic concepts and business value. Read data in any format from any source, bring your own machine learning, bring your own solver or use Xpress Solvers, collaborate with your business users, deploy decision support or automated solutions, and do it all in 75% less time.

 

For operations researchers and data scientists:

  • We will present OPEN and FREE Xpress Mosel, the mathematical modeling, programming, and analytic
    orchestration language, now available to everyone.
  • We will discuss how you can bring your own solver to deploy optimization solutions using the powerful Xpress Insight platform.
  • We will cover the OPEN Xpress Mosel native solver interface, which allows solver developers to extend Xpress Mosel with their own solvers, giving them an instant audience of thousands of users.

 

For business management and executives:

  • We will reveal a world in which machine learning and optimization techniques can be deployed into production
    in weeks, not months or years.
  • We will showcase a world in which changes are welcomed by your analytic experts and by IT. You
    can change your requirements and then implement them in minutes or hours instead of months.
  • You will see a world where you can explore new avenues to gain more efficiency. Your stakeholders will want to get together to collaborate because they can see how the solution benefits them as individuals and as an organization.

 

Register today!

 

Don’t miss our next webinar on 2/21/2018 on Pattern Discovery

Make sure to register for our final webinar on 2/28/2018 about our Forecasting Tool

 

Visit the FICO Xpress Optimization Community to learn more about Xpress Optimization, download free software, get support and talk to the experts!

In his blog aptly titled “Analytic Predictions 2018: 31 Flavors of AI,” FICO Chief Analytics Officer Scott Zoldi made the bold pronouncement that, among other things, 2018 will be the year that “AI will have to explain itself.” Indeed, as regulations such as the EU’s General Data Protection Regulation (GDPR) attempt to direct banks to explain their decision processes (including those produced by AI and ML systems), the heat is on to explain why decisions are being made.

 

As more decisions become automated – and the types of decisions that are being automated become more complex – the need to explain decisions will expand at a dramatic pace, and not just for regulatory reasons. In fact, as organizations across multiple industries laser-focus on shrinking profit margins and on becoming more customer-centric, explainability becomes more than just a compliance exercise.

 

The need for explainability extends beyond AI and automated decisions. Let’s take airlines, for example. If something drastic happens (no pilots being assigned to work during the holidays, a passenger dragged off a plane, or some other event), there has to be transparency into what happens next.

 

Even if human behaviors trigger an event, more often than not you can surmise that there’s some kind of policy (analytic and/or algorithmic) behind the scenes trying to call the shots. Again, complexity – and the need for speed given the proliferation of channels requiring fast decisions – creates an additional burden on businesses to explain the why, what and how behind every decision.


 

In our upcoming webinar titled “Unlocking the Full Potential of Your Business Outcomes with Explainable Decisions,” my colleague Fernando Jorge and I will discuss some of the strategies for unlocking explainability across the decision spectrum, highlighting the powerful capabilities we now offer with Decision Modeler (and the upcoming Blaze Advisor 7.5 release).

 

By breaking customer touch points down to a series of micro-interactions and then applying decision analysis to these activities, you can rapidly ramp up your explainability DNA. In the race to intelligent, automated and explainable decisions – powered by micro-interactions – this is a discussion you can’t afford to miss. Click here to register for this free webinar.

Have you ever wanted to make immediate use of unstructured data while bypassing the hassle of tagging it all? This can be done through collaborative behavioral profiling. Collaborative behavioral profiles are created by applying a text analytics approach to real-time, adaptive, unsupervised learning. These profiles can be used to identify anomalies in customer behavior without the need to apply tags to the data. For example, each action a customer can take is translated into a unique symbol from a fixed symbol dictionary; the resulting string of symbols defines each customer’s behavior and tracks changes in real time as new events occur.

 

Bayesian Learning to Derive Archetypes

We use Bayesian learning to derive archetypes in the latent feature space of symbol loadings. These archetypes act to boil the symbol history down into a set of real-time adapting behavioral archetypes. They help us understand customers based on latent features, recognize which other customers they are most similar to, and detect if an individual’s actions deviate from what’s expected.

 

[Figure: data streams of behavior "words"]

It is important to note here that archetypes, in this example, are not predefined customer segmentations. Instead, each customer is represented as a mixture of archetypes, not fixed into a single rigid classification. The probabilistic interpretation is important because it captures probability densities associated with the different archetype assignments; they are not rigidly assigned. This mixture can be updated in real time based on new transactions and other customer information, so every unique customer’s collaborative profile is changing as their individual data evolves.
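
As a rough offline analogue of deriving archetypes from symbol histories, the sketch below applies topic modeling (Latent Dirichlet Allocation) to counts of behavior symbols; the production approach described here is real-time and adaptive, so this scikit-learn version only conveys the mixture-of-archetypes idea, and the symbol strings are made up.

# Offline analogue of collaborative behavioral profiling:
# represent each customer as a mixture over latent archetypes learned
# from their symbol (behavior "word") histories. Symbols are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

customer_histories = [
    "grocery grocery fuel atm grocery coffee",
    "wire_transfer wire_transfer crypto atm wire_transfer",
    "grocery coffee coffee restaurant grocery",
]

counts = CountVectorizer().fit_transform(customer_histories)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
archetype_mixtures = lda.fit_transform(counts)  # each row sums to ~1.0

for i, mix in enumerate(archetype_mixtures):
    print(f"customer {i}: archetype mixture = {mix.round(2)}")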

 

Using Archetypes to Flag Behaviors

Archetypes are powerful concepts because they are latent features with physically interpretable meaning and value. We often use images of archetypes to represent actions and behaviors that are typical of people who are strongly aligned with a single archetype. When something occurs that causes an individual to deviate from their assigned archetype distribution, an interesting fraud identification application arises; you can see an example of this in the chart below. This individual’s spending habits changed, raising a red flag in the unsupervised adaptive model. This indicates that their behavior is out of line with their historical behavioral archetypes/latent features.

 

[Figure: Bayesian learning archetypes example]

The Bayesian learning that maps “words” to archetypes is unsupervised, with no targets (meaning there are no tags that indicate fraud or not fraud). Nonetheless, it can detect anomalies. See how the distribution of percentages changes drastically in September? This shift away from the established archetype distribution is detected by the unsupervised analytics and raises an alert of deviant behavior. Detection doesn’t need to occur monthly, weekly, or daily; in FICO fraud applications it occurs in real time, as the event occurs.

 

Outlier Detection

Outlier detection adds another dimension of learning in addition to traditional supervised analytic methods based on tags. After clustering customers in the archetype space to identify which are similar, we can see if an individual strays from their peers in this archetype space; this indicates deviance. Given the real-time, adaptive, self-learning nature of the techniques, the model can further adapt to population changes while providing real-time, recursive quantile estimations of features. This can even occur with no offline historical dataset storage.
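
The real-time, recursive quantile estimation mentioned above can be approximated without storing any history; below is a minimal stochastic-approximation sketch in Python (the target quantile and step size are arbitrary choices for illustration).

# Minimal recursive (streaming) quantile estimator: no historical data is
# stored, and the estimate adapts as each new feature value arrives.
def update_quantile(q_est, x, tau=0.95, step=0.05):
    """Nudge the running estimate of the tau-quantile toward the new value x."""
    return q_est + step * (tau - (1.0 if x <= q_est else 0.0))

q = 0.0
for x in [10, 12, 9, 50, 11, 13, 60, 12]:   # streaming feature values
    q = update_quantile(q, x)
print(f"running 95th-percentile estimate: {q:.2f}")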

 

These unsupervised methods provide tremendous flexibility to solve business problems where lack of historical data or lack of modeling tags bring supervised model development to a stand-still. FICO uses this technology in Cyber Analytics, AML Analytics, Marketing Analytics, and Financial Crime Analytics; this technology is patent pending. Check out the Analytics Community and follow me on Twitter @ScottZoldi for more discussion about the possibilities machine learning brings.

By Ryan Burton and Matt Nissen of Capital Services, and Jill Deckert of FICO

 

What is net lift modeling? It’s a predictive modeling technique that tells you the incremental impact of a treatment on an individual's behavior. You might have heard of this in a marketing context, used to identify who would be most likely to respond positively to an advertising campaign. It also helps in political campaigns to understand how to market to the right voter using the best messaging and channel. We have decided to apply net lift modeling to collections in order to determine how to contact the right customer using the preferred channel and message at the best time to improve collections success.

 

Why use net lift modeling? Our path to net lift modeling started when we wanted to learn more about our collections success than our traditional measurements and reports told us. Reporting gave us the payment rate outcome of a collections call, but we wanted to dig in further. We needed to learn whether there are optimal treatments we can apply to different types of customers that would make them more likely to pay. For example, we wanted to know which communication channel or level of aggression to use when communicating with high-risk customers and which to use for medium-risk or low-risk customers. The first step toward understanding this was using randomized testing to measure causality from a treatment: what resulted from a collections call versus what resulted from no call. Then, net lift modeling leverages that randomized test data by targeting the segments with the highest likelihood of being positively influenced by the treatment. As a result, we can find the best treatments for customer segments.

 

Traditional Measurement to Randomized Testing to Net Lift Analysis

[Figure: from traditional measurement to randomized testing to net lift analysis]

You’ve heard it before and I’ll tell you again: correlation does not imply causation. This is important to remember when understanding how net lift analysis helps separate the deliberately influenced from the coincidentally correlated. For example, traditional collections reporting gives us the payment rate over time but does not tell us if a change in treatment is influencing that payment rate. If the payment rate increases over time, it may be because of collections efforts, or because of an outside force like a macroeconomic change. Using experimental design to test a treatment allows us to measure the impacts of different treatments. Taking it a step further, we can split our population into segments and see how the treatment impacted individual segments instead of the population in general.
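
As a minimal sketch of the measurement this enables, the Python snippet below computes net lift per segment from randomized test data: the payment rate in the treated group minus the payment rate in the held-out control group. The column names and values are hypothetical.

# Net lift per segment from a randomized collections test.
# Columns and values are hypothetical illustrations.
import pandas as pd

data = pd.DataFrame({
    "segment": ["high_risk"] * 4 + ["low_risk"] * 4,
    "treated": [1, 1, 0, 0, 1, 1, 0, 0],   # 1 = received the collections call
    "paid":    [1, 0, 0, 0, 1, 1, 1, 1],   # 1 = made a payment
})

rates = data.groupby(["segment", "treated"])["paid"].mean().unstack()
rates["net_lift"] = rates[1] - rates[0]    # treatment rate minus control rate
print(rates)
# Segments with the highest net lift are the ones worth calling first.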

 

Creating interesting segments can be tedious because you have to guess which segments will add value. Net lift analysis provides a way to systematically segment the population into groups that are most likely to be positively influenced by a treatment. Once we know which segments to assign a certain treatment to, we can optimally prescribe the treatment and maximize our desired response. Net lift analysis gives us a deeper understanding to identify which collections treatment caused an outcome, compared to traditional methods that simply show correlation between a call and an outcome. Let’s play a quick game to drive this concept home.

 

Question 1: Do you think ice cream prevents the flu? (hint: see graph below)

[Figure: ice cream production vs. flu patients over time]

Looks like the number of flu patients consistently decreased as ice cream production increased. I guess you don’t need that flu shot after all—let’s rejoice in a bowl of mint chip!

 

Question 2: Do you think this tree influenced Detroit’s population? (hint: see graph below)

[Figure: Detroit population vs. the branches of a tree]

Looks like the population of Detroit rose and fell with the branches of the tree. Somebody grab the trimmers!

 

I hope you enjoyed the game as much as I did; let’s check our work. It turns out that ice cream production does not actually prevent the flu, and that a random picture of a tree does not influence the population of major US cities. While these variables correlate on the graphs, neither is the cause of the other. In the first graph, the confounding variable causing the flu patients to decrease and the ice cream production to increase is summertime. The second graph shows that patterns can be pure coincidence, or even the result of an analyst’s creativity, and are not necessarily indicative of a meaningful relationship. Randomized experiments can stop you from mistaking correlation for causation, and net lift analysis allows us to extract all of the useful information from the experiment by segmenting customers and identifying which segments are likely to be influenced by the variable being tested.

 

Now that you know the advantages and applications, ask us more about our methods and strategy or how you can apply net lift modeling using your TRIAD or ACS data. Comment here or take your questions to the TRIAD Community.

FICO is a cloud-first company. We’ve invested significantly in FICO Decision Modeler, a product that brings the FICO Blaze Advisor Decision Rules Management System to the cloud. We’ve added capabilities to the product that eliminate the need to use the Blaze Advisor Integrated Development Environment (IDE). This makes Decision Modeler quicker and easier to use in the cloud.

 

Our FICO Xpress Optimization product line has several new and exciting enhancements. Our biggest news is about Xpress Mosel, FICO's market-leading analytic orchestration, algebraic modeling, and programming language. Xpress Mosel is now FREE and OPEN. Not only is Xpress Mosel FREE, but it is now OPEN to connect to 3rd party solvers (not just FICO Xpress Solver). Anyone can download Xpress Mosel through the FICO Xpress Community license here. FICO is responding to strong customer demand for these features and we believe they will have a positive effect on the market, benefitting academic users and commercial enterprises alike.

 

Here are some of the additional highlights of this release:

  • Greater cross-component integration
  • Increased capability to better track decision performance for campaigns or treatments designed across the suite
  • Expanded analytics support for R and open source ML/AI libraries.
  • Significant performance improvements for our optimization components.
  • Introduction of the DMS Hub, which provides new collaboration features

 

The products with updated features in the cloud include:

 

FICO Decision Management Platform 2.3

Visit the DMP Community

FICO Decision Management Platform 2.3 introduces DMS Hub, a new and expanded algorithmic collaboration capability. The DMS Hub allows data scientists to leverage the DMP to post, download, edit, share, and collaborate on analytic algorithms and best practices. DMS Hub, in conjunction with FICO Drive (the built-in cloud storage facility for sharing non-algorithmic assets, data files, and other decision assets), gives the FICO Decision Management Suite a unique and highly leverageable solution for cross-organizational collaboration, best-practice sharing, and scaling decision management solutions. Stay tuned for more information on DMS Hub, as well as further integration announcements around Analytics Workbench and Xpress Optimization.

 

DMP 2.3 also introduces a new Event Data Access API for better integration with Strategy Director.  This API allows Strategy Director to read and write execution data to coordinate decision logic execution leveraging the Decision Management Platform.

 

FICO Decision Modeler 2.3

Visit the Blaze Advisor and Decision Modeler Community

Decision Modeler 2.3 provides new capabilities to further expand its functionality in the cloud by adding features that eliminate the need to use the Blaze Advisor IDE. Included in 2.3 functionality is native support for double-axis decision tables and native support for SAS program importation.

 

In addition, Decision Modeler 2.3 provides:

  • Decision tree data profiling: customers can now upload custom data into a decision tree to see how data will flow into and through decision tree logic, the distribution of assigned actions, and how they are proportioned in each node in the tree.
  • Quick search: new and easy search functionality that will make it much faster to search for and identify decision logic and attributes directly from the Decision Modeler navigation bar.
  • Support for large decision tables: an optimized way to generate code for decision tables. This will enable much larger decision tables to be created, compiled more quickly with a smaller memory footprint, and executed with a significant performance increase.
  • Use of decision data from the Analytic Datamart in decision testing and analysis: in addition to being able to upload data directly to Decision Testing, a user can now access and leverage data from previous decisions stored in the Analytic Datamart to run through testing and analysis scenarios.

 

FICO Analytics Workbench 2.0 Trial

Visit the Analytics Community

FICO® Analytics Workbench, introduced last June, is a cloud-based analytics toolkit that powers business users and data scientists with sophisticated yet easy-to-use data exploration, visual data wrangling, decision strategy design and machine learning. Analytics Workbench is designed for teams of varying skill sets, as it can be used to tackle a diverse set of high-value modeling problems. Users can quickly build analytic models that can be easily deployed independently as a web service or as a service on the Decision Management Suite. The cloud-based/SaaS environment improves time-to-value and user collaboration. Analytics Workbench supports seamless integration with other analytic solutions and, ultimately, with the Decision Management Platform, allowing for high-speed model execution.

 

FICO® Analytics Workbench 2.0 continues to drive a vision of delivering a unified data science experience -- including data ingestion, wrangling, analytic modeling and algorithm development -- on the FICO Decision Management Platform. The new version of Analytics Workbench delivers:

  • Target driven strategy design to enable customers to better segment populations by variables they care about.  A target-driven tree will result in segmentation that better separates the targets of choice (good vs. bad, responder vs. non-responder, etc.).
  • Support for expanded machine learning and artificial intelligence libraries -- specifically xgboost and H2O -- to provide greater utility and integration of artificial intelligence algorithms, as well as support for R and the scalable SparkR package in Analytics Workbench. These two new capabilities will help yield much more precise decisions.

 

FICO Xpress Optimization

Visit the Xpress Optimization Community

As a reminder, FICO Xpress Optimization is now comprised of four core components: Xpress Insight, Xpress Executor, Xpress Solver and Xpress Workbench. Xpress Insight enables businesses to rapidly deploy optimization models as powerful applications. It allows users to interact with models in business terms and runs all FICO Xpress Optimization Solutions, including Decision Optimizer. Xpress Executor provides standalone support for optimization execution services, allowing businesses to deploy and execute optimization models quickly and easily. Xpress Solver provides the widest breadth of industry-leading optimization algorithms and technologies to solve linear, mixed-integer and non-linear problems. FICO® Xpress Workbench is an Integrated Development Environment (IDE) for developing optimization models and complete solutions, and it integrates with Xpress Insight for seamless development and deployment of complete optimization solutions. It includes Xpress Mosel, the market-leading modeling, programming, and orchestration language. While this release delivers new capabilities to all components, these are the major advancements:

 

Xpress Insight 4.9

  • Xpress Insight includes usability and performance enhancements specifically designed to make solutions development easier and faster for our customers.  Please see the Xpress Insight 4.9 Release Blog

 

Xpress Executor 2.1

  • Xpress Executor now includes rules execution for the fast and repeated execution of rules within an optimization model run. This feature is used in solutions that need to run a very high number of rules with a very low latency while running an optimization model in the cloud.

 

Xpress Solver 8.4

  • Xpress Solver includes significant performance enhancements. With these algorithmic enhancements, FICO Xpress runs up to 25% faster in linear programming benchmarks and up to 20% faster in mixed integer programming benchmarks compared to 6 months ago. With this release, FICO has regained its #1 position in LP benchmarks.

 

Xpress Workbench 2.1

  • FICO Xpress Mosel, the premier analytic orchestration, algebraic modeling, and programming language, is now FREE and OPEN
    • FICO is deeply committed to the field of mathematical optimization. By providing Xpress Mosel FREE to our users, the industry-leading modeling language is now accessible to everyone. We believe that every optimization or analytics project can benefit from the power of Xpress Mosel.
    • Not only is Xpress Mosel FREE, but it is now OPEN to connect to 3rd party solvers (not just FICO Xpress Solver). We believe OPEN Xpress Mosel is the best choice for modeling any problem type and then solving it with any solver technology available.
    • For more details, read the full OPEN and FREE Xpress Mosel announcement.
  • The 2.1 update also includes several additional usability and performance enhancements, including an updated IDE (Integrated Development Environment). For technical details, please see the Xpress Workbench 2.1 Blog.

 

Decision Optimizer 7.2

Visit the Xpress Optimization Community

  • Time series equation component support: functionality that allows users to create equations that carry out a series of calculations -- a time series calculation, for example, to calculate the depreciation of an asset over a number of years or collections actions over a number of days. In addition to bringing back this functionality from 6.0, the support in 7.2 also adds the ability to identify outputs created during the series calculation as temporary. This allows users to remove data created during the different series steps, which helps to reduce clutter and improve performance.
  • Tree aware optimization treatment constraints: this fully replicates the 6.0 functionality, but also provides a simpler, more visual user interface for setting criteria. Tree aware optimization allows users to define both eligibility criteria and consistency criteria.
      • Eligibility criteria allows users to define which end nodes of a tree qualify for which treatments.
      • Consistency criteria allows users to define which treatments can be assigned to each end node based on the rank ordering set on a decision tree variable or variables.
    • This functionality will help users get to a final refined Decision Tree Strategy more quickly, potentially saving days during a professional services development project.
  • Project and scenario package import and export: this new capability will significantly help users export and import whole DO7 projects for sharing, review, etc. A project export/import will now support all the core details of the Decision Impact Model, scenarios, and results. It will make moving and upgrading DO7 projects much more efficient.

 

Strategy Director 2.2

Strategy Director has been updated to version 2.2. This release includes a number of enhancements to make decision configuration and testing easier to help Strategy Director become an integral part of the DM Suite best practices. The key enhancements include:

  • Home page and usability improvements that are based on usability studies. These improvements not only make the user experience more visually appealing, but also dramatically improve navigation to ensure key features are easier to find and access.
  • Strategy Flow: Strategy Director 2.2 enhances strategy flows to connect multiple trees to better articulate and manage strategy objectives.
  • Decision area and strategy parameters augment the inputs that users can configure to parameterize settings into decision components.
  • Parameter families: Historically, parameters would be single values that are sometimes augmented to include multiple values across multiple parameters. In Strategy Director 2.2, users can now create arrays of parameters and configure longitudinal actions with ease, providing much more flexibility and accuracy in outlining decision parameters and outcomes.
  • Simulation capabilities provide users with the ability to trigger simulation jobs to test what-if scenarios.
  • Write to the underlying DM Suite Analytic Data Mart and provide decision reporting as a native DMP component.

 

DMP Streaming 3.5

DMP Streaming is FICO's industry-leading in-stream analytics solution. FICO Decision Management Platform Streaming should appeal to customers who are considering:

  • Acquiring or building high-performance event processing systems for use cases such as: operational monitoring, Internet-of-Things (IoT), real-time anomaly detection or contextually aware marketing solutions.
  • Alternatives to spending hundreds of hours on traditional code-heavy approaches to developing and deploying enterprise-grade, high-performance solutions. These organizations will see value in the integration of FICO's streaming analytics and job-step approach, along with the integration with FICO Application Studio.
  • Solutions to accelerate and improve decisions by integrating existing applications to consume decision-ready data without rip and replace changes to the application ecosystem.

 

Want to try these products?

This quarterly update, and the formal GA notification, applies only to FICO Decision Management Suite implementations that are on-premises or unmanaged Amazon Web Services (AWS) implementations (e.g., customer-deployed AWS installations). The AWS-hosted, FICO-managed offering is scheduled for general availability in the near future. You can find and trial the majority of these products on the FICO Analytic Cloud.

As covered in this blog series, my colleague Lamar set out to help managers make informed decisions about employee compensation and promotions. Beginning with a Kaggle dataset of employee data, she created an Employee Attrition Score, a decision tree, and an analytic model-powered strategy.

 

Of course, running the analysis is one thing – it is what you do afterward that ultimately counts. In this blog series, I am going to walk you through how to implement the model and strategy in a software system, so managers can actually use the analytic models to make decisions. Before I get into the implementation details, I want to describe what decision services are and how we use them at FICO.

 

Invoking a Decision Service

 

This is an exciting time for software developers, who now have myriad low-friction, rapid options for integration and deployment into the cloud, and can benefit from the growing number of platform services made available by cloud vendors. For this project, I used the FICO Decision Management Suite (DMS), which is available on AWS, Amazon’s cloud platform (as well as in other configurations).

 

[Figure: software implementation] I implemented my decision service as an AWS Lambda function, which is auto-scaled, meaning the service is available whenever it’s needed – no matter how many people are using it. It also does not sit idle when it is unused, so you do not have to pay for computing resources you do not need.
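
A decision service deployed this way boils down to a small handler. The sketch below shows a hypothetical AWS Lambda entry point in Python that scores one employee; the scoring logic and field names are placeholders, not the actual DMS-generated service.

# Hypothetical AWS Lambda handler for an attrition decision service.
# The scoring logic and field names are illustrative placeholders.
import json

def score_attrition(employee):
    # Stand-in for the deployed analytic model.
    score = 0
    if employee.get("overtime"):
        score += 40
    if employee.get("years_at_company", 0) < 2:
        score += 30
    return score

def lambda_handler(event, context):
    employee = json.loads(event.get("body") or "{}")
    score = score_attrition(employee)
    return {
        "statusCode": 200,
        "body": json.dumps({"attrition_score": score,
                            "high_risk": score >= 50}),
    }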

 

Many of us are using voice-activated devices, like Amazon Echo or Google Home, to control the lights in our homes, play music, or even order dinner. Imagine if you could access your operationalized decisions just as easily? That is, wouldn’t it be nice if you could just ask “Alexa, what is the attrition risk for employee X?” and then engage in further dialog to help you arrive at the best decision to make to retain your employee? That’s the ease of use I’m aiming for with this implementation.

 

In DMS, we are exploring the use of Lambda functions for our DMS RESTful web service deployment. Think of a RESTful web service as a process running somewhere in the cloud that takes data in and spits data out. You are probably using them without even knowing it every time you order something on the internet, as most web applications now use RESTful services to get the work done. The decision services built in FICO DMS and deployed as Lambda functions let me make the exact same decision many times, in many ways.
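
Calling such a decision service from any client is a single HTTP request; in the Python sketch below, the endpoint URL and payload fields are hypothetical.

# Invoking a RESTful decision service (hypothetical endpoint and fields).
import requests

payload = {"employee_id": "E123", "overtime": True, "years_at_company": 1}
resp = requests.post(
    "https://example.execute-api.us-west-2.amazonaws.com/prod/attrition-decision",
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # e.g. {"attrition_score": 70, "high_risk": true}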

 

When I need a quick response, I build a web application to invoke my RESTful decision service using FICO Application Studio. Managers can build user-friendly web applications quickly and easily using FICO Application Studio to determine the best immediate action.

 

RESTful services can also be called from other services or applications. For example, suppose I create a RESTful decision service that automatically compensates my development team after a sale. I can include this RESTful decision service as part of the sales fulfillment process. During the testing phase, I use common open source utilities such as SoapUI and JMeter, which provide a developer-friendly way to invoke a decision service.

 

At quarterly performance review time, suppose I want to mail analytically-derived recommendations to all of my managers for their teams; I can drop a file into a secure location that triggers my decision service to be called in Apache Spark for fast batch processing.

 

FICO decision services can also be executed in any Java container if no internet connection is available.

 

Deployment is just one aspect of implementing an analytically powered decision service that helps managers make smarter, faster decisions. Stay tuned for my next blog about designing a decision service with FICO DMN Modeler, a powerful and visually intuitive tool for decision modeling. In the meantime, check out our Blaze Advisor and Decision Modeler Community for more information and discussion about decision management and the tools you can use to make it happen.

If you’ve been following my blog series beginning with Analytics for Employee Retention followed by Creating a Score for Employee Attrition, then you know my analysis so far has been all about the data. I made sure the data was clean and representative, and then binned the data to investigate how each variable relates to my target of attrition. From there, I performed distribution analysis to discover just how predictive each variable is. With all that knowledge, I created a single Attrition Score and established a cutoff to leave me with just the employees likely to attrite. Now that I’ve identified a group of employees that are likely to leave the company, it’s time to figure out how to make use of this information.

 

Define a Decision Point, Then Start Simple

In this case, I will use an employee’s annual review as the decision point. During an annual review, a manager can use the score along with other information to inform their actions around promotions and raises.

 

I started with a simple matrix as a way to think about segmentation based on two variables, employee performance and the probability of attrition provided by the score. I plotted these recommended actions for managers, derived from the decision tree, in the matrix below:

Which Employees are Worth Retaining?

[Figure: employee review matrix]

A manager will need to take different actions for different employees; this matrix accounts for each employee’s Attrition Score (y axis) along with their performance (x axis) to recommend what the manager should do come annual review time. For example:

  • If an outstanding employee has a high probability of attrition, the manager should strive to retain.
  • If an outstanding employee has a low probability of attrition the manager should continue business as usual.
  • For employees with poor performance, the manager should not use resources to retain them, and should manage them out.

 

Build a Decision Tree

A decision tree allows me to do the same segmentation, but with more variables. In this example, I use the Attrition Score plus other factors to further segment the employee population with the end result of a recommended action. I used FICO® Analytics Workbench™ to create a decision tree; my analysis showed that the number of years at the company, combined with the total working years, created effective segmentation. Other helpful variables include overtime and the probability of attrition score (P_Attr_). One major advantage of using a decision tree is the ability to use variables together to profile unique populations and to apply specific actions based on knowledge other than prediction. You can find more discussion on decision trees here.

 

[Figure: decision tree]
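
For readers who want to experiment with a similar segmentation outside the FICO tooling, here is a rough scikit-learn sketch on the same Kaggle-style fields; the column names follow the public HR dataset, the file name is a hypothetical local copy, and the depth limit is an arbitrary choice.

# Rough sketch of a comparable segmentation tree with scikit-learn.
# Column names follow the Kaggle HR dataset; the depth limit is an assumption.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("hr_attrition.csv")          # hypothetical local copy
X = pd.DataFrame({
    "YearsAtCompany":    df["YearsAtCompany"],
    "TotalWorkingYears": df["TotalWorkingYears"],
    "OverTime":          (df["OverTime"] == "Yes").astype(int),
})
y = (df["Attrition"] == "Yes").astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))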

 

Develop and Refine a Retention Strategy

We know that replacing employees can cost a lot of money, so it could be worth the cost to proactively implement retention programs. A decision tree can help create even more effective segmentation than using the score alone, so I applied a high-level exclusion of employees with poor performance. This leaves only employees who are meeting or exceeding expectations. Beyond understanding the attrition rate, segmentation on additional variables gives me insight that can help me tailor the actions to be even more effective.

 

[Figure: retention strategy segments]

For employees who have a low number of years at the company and a low number of total working years, a company could set up a “newcomers club.”  This is a way for the employer to make employees feel welcome, and help build social relationships that could make them more “sticky”.

 

For employees with more experience overall, a different approach will be more effective. Presumably, these employees are interested in building their professional network. A company can encourage attendance at conferences or participation in organizations like WITI. This will help keep these employees happy by focusing on helping them grow professionally.

 

For those employees with more years on the job, the data tells us that working overtime has a big impact on their likelihood to leave. Employees who work overtime and have a high attrition score have a whopping 45% attrition rate, so this is where the company should focus its biggest expenditure. From earlier binning analysis, we recall that having stock options resulted in a lower attrition likelihood. Even though stock options are an expensive investment for the company, this segment of employees will likely offer a good return on investment.

 

Those employees who don’t work overtime but have high attrition scores still have a 20% attrition rate. Here, the company must work to understand what would make these employees stay. Perhaps their managers can conduct a personal interview, with the goal of eliciting what could make them happier. Busy people managers don’t have time to meet intensively with all their employees; they can use this decision tree to determine which employees are most at risk, then use that information to determine the best way to allocate their time to produce the most positive outcome for the company.

 

3 blogs later and we’ve used analytics to address the problem of employee attrition! You should now know how analytics can be applied to an employee retention decision, and from this, you can imagine how anywhere there is data and decisions, there is a way to make decisions more effective through data analysis. FICO® Analytics Workbench™ was my tool of choice to go from data to analysis to action: check it out for yourself or join the conversation in the Analytics Community.

How can you use analytics to sell more sandwiches? I'll walk you through this use case where an independent convenience store owner recently decided to expand her lunch business by adding a deli counter, which provides customers with made-to-order sandwiches.

 

She wanted to analyze what customers were buying so she could improve her offerings. For three months, she captured transaction-level data that was intended to help her understand the contents of her customers’ baskets. In particular, she wanted to profile her deli sandwich buyers. She wanted to understand what add-on items they purchased, in order to help her decide what additional items she could successfully promote to them, and how best to go about it.

 

Step 1: Identify Data

The proprietor decided to use FICO Analytic Modeler Decision Tree Professional. She uploaded a database where each record represents a customer's basket and got to work.

 

 

Step 2: Create a Project

She created a new project so she could access and analyze her information.

 

Step 3: Create a Tree

Once her project was created, she was ready to create a tree to capture and analyze all her data. Here, she specifies her outcomes of interest and adds predictors so she can assess the relationship among selected variables.

 

Step 4: Draw Conclusions

Now it's time for the fun part. Since she has already used Decision Tree Pro to create a tree, she's ready to draw conclusions and establish her business plans going forward. She can create treatments for different branches, and dig into her data to see what items customers are likely to buy with and without sandwiches. From there, she can strategize about creating combos to increase sales and revenue.
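
Outside of Decision Tree Pro, the same basket profiling can be sketched in a few lines of Python; the file and column names below are hypothetical.

# Quick basket profiling: which add-on items show up more often
# in baskets that contain a sandwich? File and column names are hypothetical.
import pandas as pd

baskets = pd.read_csv("deli_baskets.csv")   # one row per basket, 0/1 item flags
has_sandwich = baskets["sandwich"] == 1

addons = ["chips", "pretzels", "soda", "tea", "fruit"]
profile = pd.DataFrame({
    "with_sandwich":    baskets.loc[has_sandwich, addons].mean(),
    "without_sandwich": baskets.loc[~has_sandwich, addons].mean(),
})
profile["lift"] = profile["with_sandwich"] / profile["without_sandwich"]
print(profile.sort_values("lift", ascending=False))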

 

Results

The proprietor placed chips and pretzels closer to the deli counter.  This helped streamline the flow of the lunch crowd through the store at its busiest time, which in turn helped to serve more customers and increase sales.

Additionally, she created bundled lunch “meals” including:

  • Sandwich, chips or pretzels, and soda (competitively priced with local sandwich shops)
  • Sandwich, tea, and fruit (promoting the additional anti-oxidants contained in this trio)

 

This helped to promote repeat business from existing customers, to increase the sale of add-on items, and to attract new lunchtime customers.

 

Visit the FICO Analytic Cloud to request free trial access to FICO® Decision Tree Professional and get the dataset used to create this analysis. Visit the Analytics Community to discuss using decision trees and other analytic applications and tools.


A BRMS Primer

Posted by robin.d Nov 1, 2017

Business Rules Management System (BRMS): One system merging expertise and simple management.

 

Expert systems have been around for some time now and they are used widely. Each implementation is tailored to a special purpose, and these implementations require attention from experts in order to function properly, as they are very rigid and hard to maintain. Since expert systems are extremely specialized, they are not ideal for a scalable business. While the capabilities of an expert system are undoubtedly useful, their distinctive structure requires unique attention, leaving some things to be desired. To address this lack of flexibility while maintaining similar function, one can use a BRMS architecture.

 

It is common to refer to rule-based production systems as expert systems; however, this is technically incorrect. It's easy to confuse the expert system architecture with expert systems themselves: the architecture becomes an expert system when an ontological model is applied that represents a field of expertise. A BRMS is far more versatile than an expert system. MYCIN is a common example of a medical expert system in which rules are hard-coded, prohibiting simple modifications. This limits users, not allowing them to iterate on the how and why of the system. The first BRMS was born in the wake of MYCIN to provide a more permissive environment. The term Business Rule Engine (BRE) is often used to refer to a BRMS; however, it is ambiguous because it can refer to any system that uses business rules at the core of its processes.

 

[Figure: inference engine]

The inference engine uses rules and facts to identify the cases to be used to optimize the execution agenda.

 

In describing the architecture of a BRMS, one cannot omit the forward-chaining and backward-chaining rule algorithms. A BRMS differs from a BRE in that it is more complete and is not limited to backward- and forward-chaining algorithms. A BRMS also offers the Rete algorithm, a pattern-matching (schema-based) approach used to optimize rule evaluation. When activated, the Rete algorithm turns the BRMS into a hybrid rule- and schema-based software solution.
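
To make the inference-engine idea concrete, here is a toy forward-chaining loop over facts and rules in Python; production BRMS engines (Rete-based) are far more sophisticated, so this is only a conceptual sketch with made-up rules.

# Toy forward-chaining inference: repeatedly fire rules whose conditions
# match the current facts until nothing new can be derived.
facts = {"customer_age": 17}

rules = [
    ("minor",        lambda f: f.get("customer_age", 99) < 18, {"is_minor": True}),
    ("needs_signer", lambda f: f.get("is_minor"),              {"requires_cosigner": True}),
]

changed = True
while changed:
    changed = False
    for name, condition, consequences in rules:
        if condition(facts) and not all(facts.get(k) == v for k, v in consequences.items()):
            facts.update(consequences)      # fire the rule
            changed = True

print(facts)   # {'customer_age': 17, 'is_minor': True, 'requires_cosigner': True}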

 

How exactly does a BRMS allow users to define the right business process? Rules can be created according to different levels of freedom for end users.


Several different rules development templates exist. There are rulesets to create one or more rules, templates that control rule instantiation, and even decision table templates for more advanced solutions. The more complex the BRMS, the more templates are offered for rules creation. It’s the responsibility of the software architect to choose which template model to use.

 

These rules are kept in the rules repository, which may or may not be under a source control tool such as Apache Subversion. Repositories require special attention; they must maintain the resilience and coherence of all rules at any cost. Two types of repositories exist: open and proprietary. The ease of maintenance is related to the operations available for manipulating the repository.

 

Should you need BRMS software for one of your projects, I encourage you to investigate for yourself, as not all products are equal and they offer differing capabilities. It's a matter of budget, time, and effort.

 

Visit our Blaze Advisor and Decision Management group for more information about applying a BRMS.

In my last blog, I explained how I organized the data in a Kaggle HR Attrition dataset to figure out how I could use analytics to improve employee retention. So far, I made sure the data was clean and representative, and then binned the data to investigate how each variable relates to my target of attrition. With a ranked list of each variable’s information value in tow (see below), I moved on to create a visual representation of the attrition likelihood.

 

[Table: variables ranked by information value]

 

Gain Insight with Distribution Analysis

Just as I expected, working overtime leads to a higher chance of attrition, as you can see by the large blue bar in the chart below. Stock options, however, have the opposite effect, as employees with stock options are more likely to stay put. Looking at the number of companies someone has worked for provides an interesting result: if an employee has worked at 2-4 companies, they are at low risk of attriting, but having worked at 5 or more companies is associated with a spike in attrition.

 

[Figure] A bar to the right means the records have a higher likelihood of attrition, while a bar to the left means a lower likelihood.
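
The binned view above can be reproduced with a short pandas sketch that computes the attrition rate per bin along with a weight-of-evidence style log-odds measure; the file and column names follow the public Kaggle HR dataset, and the bin edges are illustrative.

# Attrition rate and a weight-of-evidence style measure per bin.
# Column names follow the Kaggle HR dataset; bin edges are illustrative.
import numpy as np
import pandas as pd

df = pd.read_csv("hr_attrition.csv")          # hypothetical local copy
df["attrited"] = (df["Attrition"] == "Yes").astype(int)
df["companies_bin"] = pd.cut(df["NumCompaniesWorked"], bins=[-1, 1, 4, 99],
                             labels=["0-1", "2-4", "5+"])

total_bad = df["attrited"].sum()
total_good = (1 - df["attrited"]).sum()

by_bin = df.groupby("companies_bin")["attrited"].agg(["mean", "sum", "count"])
by_bin["woe"] = np.log(((by_bin["count"] - by_bin["sum"]) / total_good) /
                       (by_bin["sum"] / total_bad))
print(by_bin)   # attrition rate per bin and its weight of evidence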

 

So, what can we learn from all that? If someone is working loads of overtime, doesn’t have stock options, and on top of that, has worked for over 5 other employers, they are very likely to leave the company.

 

Build an Attrition Score

Now that we understand how each individual variable is likely to affect attrition, we can combine the variables from our dataset in the best way possible to predict how likely an employee is to leave. Below is a visual representation of the variables that contribute to the score I created. The size of each bubble indicates how predictive the variable is, but our scoring algorithms also take into account correlation between variables, so the amount of unique predictive information is shown by the “non-overlapping” amount of each bubble.

 

[Figure: simplified scorecard]

This scorecard was built purely for illustrative purposes, but I think it makes enough sense to extrapolate throughout this analysis. Examining the scorecard, I can gain information about the relationships between the variables and my attrition target. For example, monthly income is more predictive than distance from home, but more correlated with overtime.

 

From this scorecard, I built a single Attrition Score. Accounts with a high score have a high likelihood of attrition, while those with a low score have a low likelihood. Now that we have a score we can rank-order our employees by likelihood of attrition.

 

[Figure: rank-ordered score distribution with cutoff]

 

Graphing the score distribution allows us to identify which individuals are likely to attrite and draw a line to represent a score cutoff. After all that analysis, we know that employees who score high enough to fall to the right of our cutoff are likely to leave. Now what?

 

I started this project with the intention of helping managers make decisions. So, if you’re a manager and it’s end of the year review time, what do you do if your employee’s score indicates a high likelihood of attrition? That’s where decision trees and a retention strategy come in. Stay tuned to the FICO Community Blog for the rest of my analysis plus an explanation of how my results can actually be implemented into a company’s processes.

 

I used FICO® Analytics Workbench™ to analyze the data in this project and FICO Scorecard Professional to build the scorecard. You can get a free trial of FICO Scorecard Professional here.

Mixed-Integer programming (MIP) solvers like FICO Xpress address the most challenging optimization problems. About every seven years a new test library of MIP instances is released, the MIPLIB (see our blog post Call for MIPLIB 2017 Submissions). Besides a benchmarking set, MIPLIB always contains unsolved "challenge" instances which the scientific community tries to tackle for the years after a MIPLIB release. Sometimes, it is new algorithms that are the key to solving an instance, e.g., in December 2016, Xpress 8.1 could solve the previously unsolved instance pigeon-19 in less than a second by the use of special cutting plane techniques.

 

Sometimes, however, it is sheer computation power that makes the difference. Together with our cooperation partners at the Research Campus MODAL in Berlin, we developed a ground-breaking, massively parallel version of Xpress, called ParaXpress. ParaXpress is an academic expert code which can perform awesome feats with a bit of hand-crafting for each individual run. For considerations on parallelizing a MIP solver, see our blog posts on Parallelization of the FICO Xpress-Optimizer and on The Two Faces of Parallelism. ParaXpress runs FICO Xpress on supercomputers with tens of thousands of CPU cores, harnessing the computational power of hundreds of CPU years within a few hours.

 

ws17.jpg
Participants of the 2nd German-Japanese workshop on Mathematical Optimization and Data Analysis

 

At the 2nd German-Japanese workshop on Mathematical Optimization and Data Analysis at ZIB, Prof. Dr. Yuji Shinano from MODAL, the mastermind behind the ParaXpress development, proudly presented the latest achievement: two previously unsolved instances from MIPLIB 2010, gmut-75-50 and gmut-77-40, were solved for the first time ever by ParaXpress. These are the first MIPLIB challenge instances solved in 2017.

 

To achieve this feat, we used the ISM supercomputer (located in Tokyo) and the HLRN-III supercomputer (located in Berlin), both members of the TOP500 list, with up to 24,576 CPU cores running in parallel. Finding and proving the optimal solution took about eight hours per instance (roughly 115,000 CPU hours, or about 13 CPU years) and explored about twenty billion branch-and-bound nodes.

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.
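
If you would like to get a feel for the solver itself, the following is a minimal sketch of a toy mixed-integer model built with the FICO Xpress Python interface. It assumes the xpress package available with the Community License is installed; the variables, constraints, and numbers are made up purely for illustration.

    # A toy mixed-integer program using the FICO Xpress Python interface.
    # Assumes the xpress package (available with the Community License) is
    # installed; the model and numbers below are made up for illustration.
    import xpress as xp

    x = xp.var(vartype=xp.integer, name="x")
    y = xp.var(vartype=xp.integer, name="y")

    p = xp.problem(name="toy_mip")
    p.addVariable(x, y)
    p.addConstraint(2 * x + 3 * y <= 12)   # made-up resource limit
    p.addConstraint(x + y >= 2)            # made-up coverage requirement
    p.setObjective(3 * x + 5 * y, sense=xp.maximize)

    p.solve()
    print("objective:", p.getObjVal())
    print("x =", p.getSolution(x), ", y =", p.getSolution(y))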

We've added several features and capability enhancements that improve rules creation efficiency, enable smarter and more collaborative analytics, and optimize decisions. Many of these efficiencies come from the tighter integration of the Decision Management Suite components and the underlying Decision Management Platform. A key benefit of this release is the increased empowerment of end users, which reduces dependency on IT for day-to-day productivity.

 

The products with updated features in the cloud include:

  • FICO Decision Management Platform (DMP)
  • Decision Modeler
  • New packaging for FICO's Optimization Products
  • Decision Optimizer
  • Xpress Insight
  • Xpress Workbench

 

FICO Decision Management Platform 2.2

The FICO Decision Management Platform 2.2 provides new and critical capabilities focused on security and integration with the rest of the Decision Management Suite. DMP 2.2 updates Java support to Java 8. In addition, DMP 2.2 introduces a new lightweight task orchestration tool that allows users to articulate and codify business process models as components of a Decision Management solution.

 

FICO Decision Modeler 2.2

Decision Modeler 2.2 expands decision rules capabilities in the cloud by adding support for machine learning model import. It also features enhanced capabilities for testing and analyzing decision logic. The new capabilities of Decision Modeler 2.2 include:

  • The ability to import and export FSML for decision trees.
    • The exported FSML can be consumed by other FICO products, including Analytics Workbench and Decision Optimizer 7.x.
    • This dramatically improves the integration of these components and strengthens FICO's Decision Management Suite for its customers.
  • The ability for business users to import machine learning models.
    • By adding support for PMML mining models in Decision Modeler, business users can now import these models and incorporate them in their decision logic; a hypothetical sketch of producing such a model appears after this list.
    • With Decision Modeler, citizen data scientists can now use machine learning in their operational decision-making without having to rely on IT staff.
  • The introduction of Decision Analyzer to enable troubleshooting as part of the existing Decision Testing process.
    • Decision Testing has long enabled users to easily pick one entry point and configure the fields they want to use in their test cases, but it has been limited in providing an explanation for the testing results.
    • Decision Analyzer now provides business users with a graphical and interactive representation of their entire execution flow, allowing business users to explore what logic was responsible for the results.  This provides a faster and far more efficient way to identify issues and to fix the decision logic.
  • The import and management of Java classes directly in Decision Modeler, which eliminates the need to use the Blaze Advisor IDE.
    • Now anyone can reap the significant performance benefits of using a Java Business Object Model.
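
As a loose illustration of where such a PMML mining model might come from, the sketch below exports a random forest to PMML with the open-source sklearn2pmml package. This is an assumption-laden example, not a FICO-documented workflow: the package requires a Java runtime, and the file and column names are made up.

    # Hypothetical sketch: export a random forest as a PMML mining model that a
    # PMML-aware tool could then import. Assumes the sklearn2pmml package (which
    # needs a Java runtime) is installed; file and column names are made up.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn2pmml import sklearn2pmml
    from sklearn2pmml.pipeline import PMMLPipeline

    df = pd.read_csv("applications.csv")                   # assumed training data
    features = ["income", "utilization", "tenure_months"]  # assumed columns

    pipeline = PMMLPipeline([
        ("classifier", RandomForestClassifier(n_estimators=50, random_state=0)),
    ])
    pipeline.fit(df[features], df["default_flag"])

    sklearn2pmml(pipeline, "credit_risk_model.pmml")        # writes a PMML MiningModel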

 

FICO Decision Optimizer 7.1

The 7.1 update provides additional functionality and performance improvements.  These include:

  • Data-informed strategy design in trees
    • While 7.0 provided significant enhancements to Decision Optimizer, users were still limited in what they could do with strategy trees.
    • Data can now be passed through trees to allow for better visualization as well as tree building, editing, and refining.
    • Users are no longer required to use separate software to build and refine strategy trees; they can simply leverage the tree refinement capabilities within Decision Optimizer.
  • Enhanced Tree Aware Optimization (TAO)
    • Taking the decision tree value to the next level, 7.1 provides support for multiple TAO Tree Templates used to define tree output criteria, the ability to define granularity criteria from an imported tree, and the ability to define parameters that identify the desired tree size of the outputs.
    • TAO allows users to automate the creation of optimal decisions as decision trees, based on predetermined tree constraints and parameters. No other decision optimization software on the market comes close to matching these capabilities as they are unique to FICO.
  • Performance improvements
    • Optimization and TAO scenario run times are 20% to 40% faster than those seen in DO 7.0.

 

FICO Xpress Optimization

This quarterly announcement includes the formal roll-out of new Optimization nomenclature. The new naming unifies the optimization products under one name, FICO Xpress Optimization. It includes four components:

  • FICO® Xpress Insight (formerly Optimization Modeler) enables businesses to rapidly deploy optimization models as powerful applications. It allows business and other users to work with models in easy-to-understand terms.
  • FICO® Xpress Executor provides standalone support for optimization execution services, allowing business to deploy and execute optimization models quickly and easily.
  • FICO® Xpress Solver provides the widest breadth of industry leading optimization algorithms and technologies to solve linear, mixed integer and non-linear problems.
  • FICO® Xpress Workbench is an Integrated Development Environment (IDE) for developing optimization models, services, and complete solutions. It is used with, and supports, all other FICO optimization components.

In addition, all software solutions developed using FICO Xpress Optimization technology have been grouped together under the name FICO Xpress Solutions. To learn more about the new naming and packaging of FICO Xpress Optimization, please visit Neill Crossley's blog. This quarterly announcement includes updates to Xpress Workbench and Xpress Insight.


FICO Xpress Workbench 2.0 (formerly Optimization Designer)

  • This is an IDE for developing optimization models, services, and Insight apps. As Optimization Designer, it was originally launched in January 2017 as a cloud-only solution. With Xpress Workbench 2.0, this component is now also available to on-premises customers as a Windows install.
  • It delivers automated performance tuning that finds optimal parameter settings for you, saving the time and effort normally involved in this process.
  • The Xpress Solver Python API now supports the nonlinear solvers, and provides a full API and modeling environment for all Xpress mathematical programming algorithms.
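
As a small, hedged taste of that Python modeling interface, here is a toy quadratic model; the variables, bounds, and objective are invented for illustration, and the example assumes the xpress package is installed.

    # A toy quadratic model using the Xpress Python interface mentioned above.
    # Assumes the xpress package is installed; the model itself is made up.
    import xpress as xp

    x = xp.var(lb=0, ub=10, name="x")
    y = xp.var(lb=0, ub=10, name="y")

    p = xp.problem(name="toy_quadratic")
    p.addVariable(x, y)
    p.addConstraint(x + y >= 4)
    # Quadratic objective: minimize the (expanded) squared distance to (3, 2),
    # i.e. (x - 3)^2 + (y - 2)^2 up to a constant.
    p.setObjective(x * x - 6 * x + y * y - 4 * y, sense=xp.minimize)

    p.solve()
    print("x =", p.getSolution(x), ", y =", p.getSolution(y))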

 

FICO Xpress Insight 4.8 (formerly Optimization Modeler)

  • A new table engine provides Excel-like usability of tables as well as faster rendering of sparsely populated tables.
  • Complementing Tableau for FICO, Insight provides VDL (View Definition Language) constructs for creating charts as part of custom views.

 

FICO Xpress Community License

  • A new community license for FICO Xpress Optimization is now available. The community license, which replaces the student license, provides for the personal use of a size-restricted version of Xpress.

 

Strategy Director 2.1

FICO Strategy Director 2.1, announced on February 3, 2017, is now an official and supported application on the FICO Decision Management Platform. FICO Strategy Director is a powerful decision authoring solution that combines market-leading analytics and decision automation, enabling business users in many industries to proactively and more successfully manage their business. While Strategy Director today delivers specific turnkey solutions for retail banking and telecommunications customers, FICO is looking to expand the solution packages to other industries and use cases in the near future. By moving fully to the FICO Analytic Cloud on AWS, customers deploying Strategy Director no longer need to consider hardware and software acquisition and overhead costs. In addition, customers can more easily bring new data and variables into Strategy Director, making the solution more analytically sophisticated without adding complexity.

 

Want to try these products?

This quarterly update, and the formal GA notification, applies only to FICO Decision Management Suite implementations for on-premises and unmanaged Amazon Web Services (AWS) environments (e.g., customer-deployed AWS installations). General availability of the AWS-hosted, FICO-managed offering is scheduled for the near future. You can find and trial the majority of these products on the FICO Analytic Cloud.

Imagine that your decision service compiles successfully and you are ready to run data through the decision service to see if your decision logic generates the expected results. Now that you have reached this important development milestone, how do you test and debug your decision service?

 

You simply upload a dataset to the decision service using the Decision Testing framework. After the run is completed, any errors or validation warnings are highlighted in a table so you can quickly see which tests passed and which ones need to be reviewed. If your dataset includes expected values, the values are automatically compared to the actual values and any deviations are highlighted.
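
Conceptually, the expected-versus-actual comparison works like the small pandas sketch below. This is an illustration of the idea only, not the Decision Testing implementation, and the file and column names are made up.

    # Conceptual illustration only -- not the Decision Testing implementation.
    # Compares expected decision outcomes against actual outcomes and reports
    # any deviations, using made-up file and column names.
    import pandas as pd

    results = pd.read_csv("decision_test_results.csv")   # assumed export

    results["deviation"] = results["expected_decision"] != results["actual_decision"]
    failed = results[results["deviation"]]

    print(f"{len(failed)} of {len(results)} test cases deviated from expectations")
    print(failed[["test_id", "expected_decision", "actual_decision"]])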

 

With Decision Modeler 2.2, the Decision Testing framework has been enhanced to include a new module called Decision Analyzer. If an error or validation warning occurs, you can select a particular test and use a trace graph to step through the decision logic to see how the results were generated. During the analysis, you see the result state at various points in the execution flow so you know which conditions in a decision entity were met and the actions that modified any object properties. Conversely, you can select an object property and see where it was modified during the execution. This analysis helps you to determine whether the problem lies with the decision logic or your test data.

 

 

In the video above, you can see the Decision Analyzer in action. After the dataset is uploaded to the decision service, one of the tests generates a validation warning. For this test case, the student's application was expected to be denied; instead, it was accepted. Why did this happen? The developer runs the decision service and then steps through the decision logic to see which object properties were modified during each stage of the execution, until finally the source of the problem is located.

 

The example shown in the video is included with the trial version of Decision Modeler along with a dataset that you can modify and run using the Decision Testing framework. Get the latest trial version of Decision Modeler here and see how easy it is to run test data through a decision service.

 

By Bonnie Clark & Fernando Donati