
FICO Community Articles


If you’ve been following my blog series beginning with Analytics for Employee Retention followed by Creating a Score for Employee Attrition, then you know my analysis so far has been all about the data. I made sure the data was clean and representative, and then binned the data to investigate how each variable relates to my target of attrition. From there, I performed distribution analysis to discover just how predictive each variable is. With all that knowledge, I created a single Attrition Score and established a cutoff to leave me with just the employees likely to attrite. Now that I’ve identified a group of employees that are likely to leave the company, it’s time to figure out how to make use of this information.

 

Define a Decision Point, Then Start Simple

In this case, I will use an employee’s annual review as the decision point. During an annual review, a manager can use the score along with other information to inform their actions around promotions and raises.

 

I started with a simple matrix as a way to think about segmentation based on two variables: employee performance and the probability of attrition provided by the score. In the matrix below, I plotted the recommended actions for managers (derived from the decision tree discussed later):

Which Employees are Worth Retaining?

employee review matrix.png

A manager will need to take different actions for different employees. This matrix accounts for each employee’s Attrition Score (y axis) along with their performance (x axis) to recommend what the manager should do come annual review time (this mapping is also sketched as code after the list). For example:

  • If an outstanding employee has a high probability of attrition, the manager should strive to retain them.
  • If an outstanding employee has a low probability of attrition, the manager should continue business as usual.
  • For employees with poor performance, the manager should not use resources to retain them, and should manage them out.
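To make the mapping concrete, here is a minimal sketch of the matrix as code. The score cutoff and labels are illustrative assumptions, not values taken from the analysis.

```python
# Minimal sketch of the review matrix as a lookup rule.
# The cutoff of 600 and the action labels are illustrative assumptions.

def review_action(attrition_score: float, performance: str, score_cutoff: float = 600) -> str:
    """Map an employee's Attrition Score and performance rating to a recommended action."""
    if performance == "poor":
        return "manage out"              # do not spend retention resources
    if attrition_score >= score_cutoff:
        return "strive to retain"        # strong performer, likely to leave
    return "business as usual"           # strong performer, likely to stay

print(review_action(720, "outstanding"))  # -> strive to retain
print(review_action(480, "outstanding"))  # -> business as usual
print(review_action(700, "poor"))         # -> manage out
```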

 

Build a Decision Tree

A decision tree allows me to do the same segmentation, but with more variables. In this example, I use the Attrition Score plus other factors to further segment the employee population with the end result of a recommended action. I used FICO® Analytics Workbench™ to create a decision tree; my analysis showed that the number of years at the company, combined with the total working years, created effective segmentation. Other helpful variables include overtime and the probability of attrition score (P_Attr_). One major advantage of using a decision tree is the ability to use variables together to profile unique populations and to apply specific actions based on knowledge other than prediction. You can find more discussion on decision trees here.
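As a rough illustration of what such a tree looks like in code, here is a sketch using scikit-learn. The post used FICO® Analytics Workbench™; the file name below and the assumption that the Attrition Score is already appended as a P_Attr_ column are mine.

```python
# A minimal sketch, assuming a pandas DataFrame with Kaggle-style HR columns
# and the Attrition Score already appended as "P_Attr_". Not the Workbench output.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("hr_attrition.csv")                       # hypothetical file name
features = ["P_Attr_", "YearsAtCompany", "TotalWorkingYears", "OverTime"]
X = pd.get_dummies(df[features], columns=["OverTime"])     # encode Yes/No overtime
y = (df["Attrition"] == "Yes").astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, y)

# Each leaf is a segment; its attrition rate suggests which action to attach to it.
print(export_text(tree, feature_names=list(X.columns)))
```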

 

decision tree.png

 

Develop and Refine a Retention Strategy

We know that replacing employees can cost a lot of money, so it could be worth the cost to proactively implement retention programs. A decision tree can create even more effective segmentation than the score alone, so I started by applying a high-level exclusion of employees with poor performance. This leaves only employees who are meeting or exceeding expectations. Beyond understanding the attrition rate, segmenting on additional variables gives me insight that can help me tailor the actions to be even more effective.

 

retention strategy.png

For employees who have a low number of years at the company and a low number of total working years, a company could set up a “newcomers club.”  This is a way for the employer to make employees feel welcome, and help build social relationships that could make them more “sticky”.

 

For employees with more experience overall, a different approach will be more effective. Presumably, these employees are interested in building their professional network. A company can encourage attendance at conferences or participation in organizations like WITI. This will help keep these employees happy by focusing on helping them grow professionally.

 

For those employees with more years on the job, the data tells us that working overtime has a big impact on their likelihood to leave. Employees who work overtime and have a high attrition score have a whopping 45% attrition rate, so this is where the company should focus its biggest expenditure. From earlier binning analysis, we recall that having stock options resulted in a lower attrition likelihood. Even though stock options are an expensive investment for the company, this segment of employees will likely offer a good return on investment.

 

Those employees who don’t work overtime but have high attrition scores still have a 20% attrition rate. Here, the company must work to understand what would make these employees stay. Perhaps their managers can conduct a personal interview, with the goal of eliciting what could make them happier. Busy people managers don’t have time to intensively meet with all their employees; they can use this decision tree to determine which employees are most at risk, then use that information to determine the best way to allocate their time to produce the most positive outcome for the company.

 

Three blog posts later, and we’ve used analytics to address the problem of employee attrition! You should now know how analytics can be applied to an employee retention decision, and from this, you can imagine how wherever there are data and decisions, there is a way to make those decisions more effective through data analysis. FICO® Analytics Workbench™ was my tool of choice to go from data to analysis to action: check it out for yourself.

How can you use analytics to sell more sandwiches? I'll walk you through this use case where an independent convenience store owner recently decided to expand her lunch business by adding a deli counter, which provides customers with made-to-order sandwiches.

 

She wanted to analyze what customers were buying so she could improve her offerings. For three months, she captured transaction-level data that was intended to help her understand the contents of her customers’ baskets. In particular, she wanted to profile her deli sandwich buyers. She wanted to understand what add-on items they purchased, in order to help her decide what additional items she could successfully promote to them, and how best to go about it.

 

Step 1: Identify Data

The proprietor decided to use FICO Analytic Modeler Decision Tree Professional. She uploaded a database where each record represents a customer's basket and got to work.

 

 

Step 2: Create a Project

She created a new project so she could access and analyze her information.

 

Step 3: Create a Tree

Once her project was created, she was ready to create a tree to capture and analyze her data. Here, she specified her outcomes of interest and added predictors so she could assess the relationships among the selected variables.

 

Step 4: Draw Conclusions

Now it's time for the fun part. Since she has already used Decision Tree Pro to create a tree, she's ready to draw conclusions and establish her business plans going forward. She can create treatments for different branches, and dig into her data to see what items customers are likely to buy with and without sandwiches. From there, she can strategize about creating combos to increase sales and revenue.
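As an aside, the kind of basket profiling she does in Decision Tree Pro can be illustrated with a few lines of generic pandas. This is not the product itself; the file and column names below are assumptions.

```python
# Illustrative only: compare attach rates of add-on items in baskets with and
# without a sandwich, assuming one row per basket and 0/1 indicator columns.
import pandas as pd

baskets = pd.read_csv("deli_baskets.csv")        # hypothetical file name
addons = ["chips", "pretzels", "soda", "tea", "fruit"]

with_sandwich = baskets[baskets["sandwich"] == 1]
without_sandwich = baskets[baskets["sandwich"] == 0]

summary = pd.DataFrame({
    "with_sandwich": with_sandwich[addons].mean(),        # attach rate given a sandwich
    "without_sandwich": without_sandwich[addons].mean(),  # attach rate otherwise
})
summary["lift"] = summary["with_sandwich"] / summary["without_sandwich"]
print(summary.sort_values("lift", ascending=False))       # combo candidates first
```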

 

Results

The proprietor placed chips and pretzels closer to the deli counter.  This helped streamline the flow of the lunch crowd through the store at its busiest time, which in turn helped to serve more customers and increase sales.

Additionally, she created bundled lunch “meals” including:

  • Sandwich, chips or pretzels, and soda (competitively priced with local sandwich shops)
  • Sandwich, tea, and fruit (promoting the additional anti-oxidants contained in this trio)

 

This helped to promote repeat business from existing customers, to increase the sale of add-on items, and to attract new lunchtime customers.

 

Visit the FICO Analytic Cloud to request free trial access to FICO® Decision Tree Professional and get the dataset used to create this analysis. Visit the Analytics Community to discuss using decision trees and other analytic applications and tools.


A BRMS Primer

Posted by robin.d Nov 1, 2017

Business Rules Management System (BRMS): one system that merges expert knowledge with simple management.

 

Expert systems have been around for some time now and are widely used. Each implementation is tailored to a special purpose, and because these implementations are rigid and hard to maintain, they require attention from experts in order to function properly. Since expert systems are extremely specialized, they are not ideal for a scalable business. While the capabilities of an expert system are undoubtedly useful, their distinctive structure requires unique attention, leaving some things to be desired. To address this lack of flexibility while maintaining similar function, one can use a BRMS architecture.

 

It is widely accepted to refer to rule-based production systems as expert systems; however, this is technically incorrect. It's easy to confuse the expert system’s architecture with the expert system itself: the architecture becomes an expert system only when an ontological model representing a field of expertise is applied to it. A BRMS is far more versatile than an expert system. MYCIN is a common example of a medical expert system in which the rules are hard coded, prohibiting simple modifications. This limits users, who cannot iterate on the how and why of the system. The first BRMS was born in the wake of MYCIN to provide a more permissive environment. The term Business Rule Engine (BRE) is often used to refer to a BRMS; however, it is ambiguous because it can refer to any system that uses business rules at the core of its processes.

 

interface engine.png

The inference engine uses rules and facts to identify which cases apply and to optimize the execution agenda.

 

In describing the architecture of a BRMS, one cannot omit the forward-chaining and backward-chaining algorithms. A BRMS differs from a BRE in that it is more complete and is not limited to backward and forward chaining. A BRMS also offers the RETE algorithm, a pattern-matching (schema-based) approach applied to optimize rule evaluation. When activated, the RETE algorithm turns the BRMS into a hybrid rule- and schema-based software solution.
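To make the chaining vocabulary concrete, here is a deliberately naive forward-chaining loop in Python. It is only a sketch of the idea; a real BRMS, and RETE in particular, avoids re-testing every rule on every cycle by caching partial matches in a network of nodes.

```python
# A toy forward-chaining engine: keep firing rules until no new facts appear.
facts = {"customer_age": 17}

rules = [
    # (name, condition over the fact base, facts the rule asserts when it fires)
    ("minor",         lambda f: f.get("customer_age", 99) < 18, {"is_minor": True}),
    ("needs_consent", lambda f: f.get("is_minor") is True,      {"requires_parental_consent": True}),
]

changed = True
while changed:
    changed = False
    for name, condition, consequences in rules:
        # fire only if the condition holds and the rule would add something new
        if condition(facts) and not consequences.items() <= facts.items():
            facts.update(consequences)
            changed = True
            print("fired rule:", name)

print(facts)  # {'customer_age': 17, 'is_minor': True, 'requires_parental_consent': True}
```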

 

How exactly does a BRMS allow users to define the right business process? Rules can be created according to different levels of freedom for the final users.


Several different rules development templates exist. There are rulesets to create one or more rules, templates that control rule instantiation, and even decision table templates for more advanced solutions. The more complex the BRMS, the more templates are offered for rules creation. It’s the responsibility of the software architect to choose which template model to use.

 

These rules are kept in the rules repository, which may or may not be under a source control tool such as Apache Subversion. Repositories require special attention; they must maintain the resilience and coherence of all rules at any cost. Two types of repositories exist: open and proprietary. The ease of maintenance is related to the operations that are available to manipulate the repository.

 

Should you need BRMS software for one of your projects, I encourage you to investigate for yourself, as not all products are equal, and they offer differing capabilities. It’s a matter of budget, time, and effort.

 

Visit our Blaze Advisor and Decision Management group for more information about applying a BRMS.

In my last blog, I explained how I organized the data in a Kaggle HR Attrition dataset to figure out how I could use analytics to improve employee retention. So far, I made sure the data was clean and representative, and then binned the data to investigate how each variable relates to my target of attrition. With a ranked list of each variable’s information value in tow (see below), I moved on to create a visual representation of the attrition likelihood.

 

binned data.png

 

Gain Insight with Distribution Analysis

Just as I expected, working overtime leads to a higher chance of attrition, as you can see by the large blue bar in the chart below. Stock options, however, have the opposite effect, as employees with stocks are more likely to stay put. Looking at the number of companies someone has worked for provides an interesting result. If an employee has worked at 2-4 companies, they are at low risk to attrite. However, having worked at 5 or more companies is related to a spike in attrition.

 

A bar to the right means the records have a higher likelihood of attrition, while a bar to the left means a lower likelihood.

 

So, what can we learn from all that? If someone is working loads of overtime, doesn’t have stock options, and on top of that, has worked at 5 or more companies, they are very likely to leave the company.

 

Build an Attrition Score

Now that we understand how each individual variable is likely to affect attrition, we can combine the variables from our dataset in the best way possible to predict how likely an employee is to leave. Below is a visual representation of the variables that contribute to the score I created. The size of each bubble indicates how predictive the variable is, but our scoring algorithms also take into account correlation between variables, so the amount of unique predictive information is shown by the “non-overlapping” amount of each bubble.

 

simplified scorecard.png

This scorecard was built purely for illustrative purposes, but I think it makes enough sense to extrapolate throughout this analysis. Examining the scorecard, I can gain information about the relationships between variables and my attrition target. For example, monthly income is more predictive than distance from home, but it is more correlated with overtime.

 

From this scorecard, I built a single Attrition Score. Accounts with a high score have a high likelihood of attrition, while those with a low score have a low likelihood. Now that we have a score we can rank-order our employees by likelihood of attrition.
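For readers who want to experiment, the same idea can be roughed out with open-source tools. The sketch below uses scikit-learn logistic regression on a few of the Kaggle columns and rescales the log-odds into a score; the scaling is arbitrary, and this is not the FICO scorecard algorithm.

```python
# A rough stand-in for a scorecard: logistic regression on a handful of the
# Kaggle HR columns, with the log-odds rescaled into a score. Illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("hr_attrition.csv")                      # hypothetical file name
X = pd.get_dummies(df[["OverTime", "StockOptionLevel", "NumCompaniesWorked",
                       "MonthlyIncome", "DistanceFromHome"]], columns=["OverTime"])
y = (df["Attrition"] == "Yes").astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
p_attr = model.predict_proba(X)[:, 1]                     # probability of leaving

# Higher score = higher likelihood of attrition; offset and slope are arbitrary.
df["Attrition_Score"] = np.round(600 + 50 * np.log(p_attr / (1 - p_attr)))
print(df["Attrition_Score"].describe())                   # ready to rank-order and cut off
```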

 

rank ordered scores.png

 

Graphing the score distribution allows us to identify which individuals are likely to attrite and draw a line to represent a score cutoff. After all that analysis, we know that employees who score high enough to fall to the right of our cutoff are likely to leave. Now what?

 

I started this project with the intention of helping managers make decisions. So, if you’re a manager and it’s end of the year review time, what do you do if your employee’s score indicates a high likelihood of attrition? That’s where decision trees and a retention strategy come in. Stay tuned to the FICO Community Blog for the rest of my analysis plus an explanation of how my results can actually be implemented into a company’s processes.

 

I used FICO® Analytics Workbench™ to analyze the data in this project and FICO Scorecard Professional to build the scorecard. You can get a free trial of FICO Scorecard Professional here.

Mixed-integer programming (MIP) solvers like FICO Xpress address the most challenging optimization problems. About every seven years, a new test library of MIP instances, the MIPLIB, is released (see our blog post Call for MIPLIB 2017 Submissions). Besides a benchmarking set, MIPLIB always contains unsolved "challenge" instances which the scientific community tries to tackle in the years after a MIPLIB release. Sometimes, it is new algorithms that are the key to solving an instance; e.g., in December 2016, Xpress 8.1 could solve the previously unsolved instance pigeon-19 in less than a second by using special cutting plane techniques.

 

Sometimes, however, it is sheer computation power that makes the difference. Together with our cooperation partners at the Research Campus MODAL in Berlin, we developed a ground-breaking, massively parallel version of Xpress, called ParaXpress. ParaXpress is an academic expert code which can perform awesome feats with a bit of hand-crafting for each individual run. For considerations on parallelizing a MIP solver, see our blog posts on Parallelization of the FICO Xpress-Optimizer and on The Two Faces of Parallelism. ParaXpress runs FICO Xpress on supercomputers with tens of thousands of CPU cores, harnessing the computational power of hundreds of CPU years within a few hours.

 

Participants of the 2nd German-Japanese workshop on Mathematical Optimization and Data Analysis

 

At the 2nd German-Japanese workshop on Mathematical Optimization and Data Analysis at ZIB, Prof. Dr. Yuji Shinano from MODAL, the mastermind behind the ParaXpress development, proudly presented the latest achievements: two previously unsolved instances from MIPLIB2010, gmut-75-50 and gmut-77-40, were solved for the first time ever by ParaXpress. Those are the first MIPLIB challenge instances solved in 2017.

 

To achieve this feat, we used the ISM supercomputer (located in Tokyo) and the HLRN-III supercomputer (located in Berlin), both members of the TOP500 list. We used up to 24,576 CPU cores in parallel. Finding and proving the optimal solution took about eight hours per instance (this rolls out to 115,000 CPU hours; that's 13 CPU years), exploring about twenty billion branch-and-bound nodes.

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

We've added several features and capability enhancements for more efficient rules creation, smarter and more collaborative analytics, and optimized decisions. Many of these efficiencies come from the tighter integration of the Decision Management Suite components and the underlying Decision Management Platform. A key benefit of this release is the increased empowerment of end users, which reduces the dependency on IT for productivity.

 

The products with updated features in the cloud include:

  • FICO Decision Management Platform (DMP)
  • Decision Modeler
  • New packaging for FICO's Optimization Products
  • Decision Optimizer
  • Xpress Insight
  • Xpress Workbench

 

FICO Decision Management Platform 2.2

The FICO Decision Management Platform 2.2 provides new and critical capabilities focused on security and integration with the rest of the Decision Management Suite.  DMP 2.2 updates Java support to Java 8.  In addition, DMP 2.2 introduces a new lightweight task orchestration tool that will allow users to articulate and codify business process models as components of a Decision Management solution.

 

FICO Decision Modeler 2.2

Decision Modeler 2.2 expands decision rules capabilities in the cloud by adding support for machine learning model import. It also features enhanced capabilities in testing and  in analyzing decision logic.  The new capabilities of Decision Modeler 2.2 include:

  • The ability to import and export FSML from decision trees.
    • The exported FSML can be consumed by other FICO products including Analytics Workbench and Decision Optimizer 7.x.
    • This dramatically improves the integration of these components and strengthens FICO's Decision Management Suite for its customers.
  • The ability to import machine learning models for business users.
    • By adding support for PMML mining models in Decision Modeler, business users can now import these models and incorporate them in their decision logic.
    • With Decision Modeler, citizen data scientists can now use machine learning in their operational decision-making without having to rely on IT staff.
  • The introduction of Decision Analyzer to enable troubleshooting as part of the existing Decision Testing process.
    • Decision Testing has long enabled users to easily pick one entry point and configure the fields they want to use in their test cases, but it has been limited in providing an explanation for the testing results.
    • Decision Analyzer now provides business users with a graphical and interactive representation of their entire execution flow, allowing business users to explore what logic was responsible for the results.  This provides a faster and far more efficient way to identify issues and to fix the decision logic.
  • The import and management of Java classes directly in Decision Modeler which eliminates the need to use the Blaze Advisor IDE.
    • Now anyone can reap the significant performance benefits of using a Java Business Object Model.

 

FICO Decision Optimizer 7.1

The 7.1 update provides additional functionality and performance improvements.  These include:

  • Data-informed strategy design in trees
    • While 7.0 provided significant enhancements to Decision Optimizer, users were still limited in what they could do with strategy trees.
    • Data can now be passed through trees to allow for better visualization as well as tree building, editing, and refining.
    • Users will no longer be required to use separate software to build and refine strategy trees; they can simply leverage the tree refinement capabilities within Decision Optimizer.
  • Enhanced Tree Aware Optimization (TAO)
    • Taking the decision tree value to the next level, 7.1 provides support for multiple TAO Tree Templates used to define tree output criteria, the ability to define granularity criteria from an imported tree, and the ability to define parameters that identify desired tree size outputs.
    • TAO allows users to automate the creation of optimal decisions as decision trees, based on predetermined tree constraints and parameters. No other decision optimization software on the market comes close to matching these capabilities as they are unique to FICO.
  • Performance improvements
    • Optimization and TAO scenario run times are 20% to 40% faster than those seen in DO 7.0.

 

FICO Xpress Optimization

This quarterly announcement includes the formal roll-out of new Optimization nomenclature. The new naming unifies the optimization products under one name, FICO Xpress Optimization. It includes four components:

  • FICO® Xpress Insight (formerly Optimization Modeler) enables businesses to rapidly deploy optimization models as powerful applications. It allows business and other users to work with models in easy-to-understand terms.
  • FICO® Xpress Executor provides standalone support for optimization execution services, allowing business to deploy and execute optimization models quickly and easily.
  • FICO® Xpress Solver provides the widest breadth of industry leading optimization algorithms and technologies to solve linear, mixed integer and non-linear problems.
  • FICO® Xpress Workbench is an Integrated Development Environment (IDE) for developing optimization models, services and complete solutions. It is used with, and supports, all other FICO optimization components.

In addition, all software solutions developed using FICO Xpress Optimization technology have been grouped together under the name FICO Xpress Solutions. To learn more about the new naming and packaging of FICO Xpress Optimization, please visit Neill Crossley's blog. This quarterly announcement includes updates to Xpress Workbench and Xpress Insight.


FICO Xpress Workbench 2.0 (formerly Optimization Designer)

  • This is an IDE for developing optimization models, services, and Insight apps.  As Optimization Designer, it was originally launched in January 2017 as a cloud-only solution. With Xpress Workbench 2.0, this component is now also available to on-premises customers as a Windows install.
  • It delivers automated, fine-grained performance tuning that finds optimal parameter settings, saving the time and effort normally involved in this process.
  • The Xpress Solver Python API now supports the nonlinear solvers, and provides a full API and modeling environment for all Xpress mathematical programming algorithms.

 

FICO Xpress Insight 4.8 (formerly Optimization Modeler)

  • A new table engine to provide Excel-like usability of tables as well as the faster rendering of sparsely populated tables.
  • Complementing Tableau for FICO, Insight provides VDL (View Definition Language) constructs for creating charts as part of custom views.

 

FICO Xpress Community License

  • A new community license for FICO Xpress Optimization is now available. The community license, which replaces the student license, provides for the personal use of a size restricted version of Xpress.

 

Strategy Director 2.1

FICO Strategy Director 2.1, announced on February 3, 2017, is now an official and supported application on the FICO Decision Management Platform. FICO Strategy Director is a powerful decision authoring solution that combines market-leading analytics and decision automation, enabling business users in many industries to proactively and more successfully manage their business.  While Strategy Director today delivers specific turnkey solutions for retail banking and telecommunication customers, FICO is looking to expand the solution packages to other industries and use cases in the near future. By moving fully to the FICO Analytic Cloud on AWS, customers deploying Strategy Director no longer need to consider hardware and software acquisition and overhead costs. In addition, customers can more easily bring new data and variables into Strategy Director, making the solution more analytically sophisticated without adding complexity.

 

Want to try these products?

This quarterly update, and the formal GA notification, applies only to FICO Decision Management Suite implementations that are on-premises or unmanaged Amazon Web Services (AWS) implementations (e.g., customer-deployed AWS installations).  General availability of the AWS-hosted, FICO-managed offering is scheduled for the near future. You can find and trial the majority of these products on the FICO Analytic Cloud.

Imagine that your decision service compiles successfully and you are ready to run data through the decision service to see if your decision logic generates the expected results. Now that you have reached this important development milestone, how do you test and debug your decision service?

 

You simply upload a dataset to the decision service using the Decision Testing framework. After the run is completed, any errors or validation warnings are highlighted in a table so you can quickly see which tests passed and which ones need to be reviewed. If your dataset includes expected values, the values are automatically compared to the actual values and any deviations are highlighted.
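The underlying comparison is simple to picture. Here is a generic sketch in pandas (this is not the Decision Testing API, and the decision rule and test cases are made up) that runs test inputs through a decision function and surfaces the rows whose actual result deviates from the expected one:

```python
import pandas as pd

def decision_service(row):
    """Stand-in for the deployed decision logic (hypothetical rule)."""
    return "accept" if row["credit_score"] >= 650 and row["income"] >= 30000 else "deny"

tests = pd.DataFrame({
    "credit_score": [700, 610, 660],
    "income":       [45000, 52000, 28000],
    "expected":     ["accept", "deny", "accept"],
})
tests["actual"] = tests.apply(decision_service, axis=1)

deviations = tests[tests["actual"] != tests["expected"]]   # only these need review
print(f"{len(deviations)} of {len(tests)} test cases need review")
print(deviations)
```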

 

With Decision Modeler 2.2, the Decision Testing framework has been enhanced to include a new module called Decision Analyzer. If an error or validation warning occurs, you can select a particular test and use a trace graph to step through the decision logic to see how the results were generated. During the analysis, you see the result state at various points in the execution flow so you know which conditions in a decision entity were met and the actions that modified any object properties. Conversely, you can select an object property and see where it was modified during the execution. This analysis helps you to determine whether the problem lies with the decision logic or your test data.

 

 

In the video above, you can see the Decision Analyzer in action. After the dataset is uploaded to the decision service, one of the tests generates a validation warning. For this test case, it was expected that the student’s application would be denied; instead, the application was accepted. Why did this happen? The developer runs the decision service and then steps through the decision logic to see which object properties were modified during each stage of the execution, and finally locates the source of the problem.

 

The example shown in the video is included with the trial version of Decision Modeler along with a dataset that you can modify and run using the Decision Testing framework. Get the latest trial version of Decision Modeler here and see how easy it is to run test data through a decision service.

 

By Bonnie Clark & Fernando Donati

Is the simplex algorithm the most important algorithm for solving linear programs (LPs)? If you pose this question to a mathematical optimization expert, you will most likely get a "Yes, but..." as a reply. It is common folklore in the field that simplex is the most viable choice for solving sequences of similar LPs, like a mixed integer programming (MIP) solver does. The "but..." is brought in by the barrier algorithm. Barrier is on average faster for solving a single LP and it has the nice theoretical property of a polynomial runtime. Still, the common conception is that MIP solvers hardly benefit from the barrier algorithm and the best academic MIP solvers do not even include one. Most commercial solvers, however, use barrier as part of a concurrent solve of the initial LP relaxation, which gives some performance benefits.  We at FICO Xpress recently added some more applications of barrier to our MIP solver.

 

A barrier solution provides some unique structural insight into the problem at hand. This is due to the fact that barrier provides a solution which is in the relative interior of the optimal set or the feasible set when run without an objective function. The latter is typically referred to as the analytic center solution.  A variable taking a value at its bound or close to its bound in the analytic center tells you something about the likelihood that this variable takes the bound value in a feasible MIP solution.
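To make "analytic center" concrete, here is a toy two-variable example in plain numpy: we maximize the sum of the logarithms of the constraint slacks with a simple backtracking gradient ascent. This is only an illustration of the concept, not how Xpress computes it.

```python
import numpy as np

# Feasible region: x1 + x2 <= 1, 0 <= x1 <= 1, 0 <= x2 <= 1, written as A x <= b.
A = np.array([[ 1.0,  1.0],    # x1 + x2 <= 1
              [-1.0,  0.0],    # x1 >= 0
              [ 0.0, -1.0],    # x2 >= 0
              [ 1.0,  0.0],    # x1 <= 1
              [ 0.0,  1.0]])   # x2 <= 1
b = np.array([1.0, 0.0, 0.0, 1.0, 1.0])

def log_barrier(x):
    s = b - A @ x                                   # constraint slacks
    return np.sum(np.log(s)) if np.all(s > 0) else -np.inf

def gradient(x):
    s = b - A @ x
    return -(A.T @ (1.0 / s))                       # gradient of sum(log(slacks))

x = np.array([0.25, 0.25])                          # any strictly interior start
for _ in range(200):
    g, step = gradient(x), 0.1
    while log_barrier(x + step * g) <= log_barrier(x) and step > 1e-12:
        step *= 0.5                                 # backtrack: stay interior and improve
    x = x + step * g if step > 1e-12 else x

print("analytic center ~", x)                       # roughly [0.276, 0.276] here
# A coordinate sitting at (or very near) 0 or 1 here would hint that it is at
# its bound in every LP solution -- the presolving idea described below.
```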

 

We exploit the analytic center in different ways:

  • Presolving: If a variable is exactly at its bound in the analytic center (which is in the relative interior), then this is an indirect proof that this variable will be at its bound in every LP solution. Thus, it can be fixed in presolving.
  • Primal Heuristic: The analytic center can be seen as an indicator for the direction into which a variable is likely to move when going towards feasibility. We therefore use it as an auxiliary objective to search for a first feasible solution on particularly hard MIPs.
  • Branching: For the same reason, we use the analytic center to take branching decisions on the first levels of branch-and-bound on extremely dual degenerate MIP problems.

 

 

Last week, we presented these developments at the Operations Research Conference 2017 in Berlin, Germany. We submitted a paper to the post-conference proceedings; you can find it here:

https://opus4.kobv.de/opus4-zib/files/6459/TR_BertholdPerregaardMeszaros2017_Barrier4MIP.pdf

 

Within FICO Xpress, the proposed presolving, heuristic, and branching strategy all improve performance, but come with the computational burden of having to compute the barrier solution first. For each of the individual procedures, this is a rather big overhead. However, the barrier solution only needs to be computed once to enable the application of all of them, thus it is the variety of applications that makes it worthwhile. There are certainly more possible applications of barrier solutions within MIP solving, like for cutting plane generation, cut filtering, node selection or diverse primal heuristics. We would be excited to see further research in this direction.

 

You can learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

In the age of Hadoop and the Big Data hype, millions (or billions) of dollars are being spent to create the biggest data repositories – collections of unstructured data files stored horizontally across server farms. Over the last few years of investment, organizations large and small have built out Big Data collections and, instead of strategically planning for the what and why of the data, have started to collect everything in the hope of collecting something (anything) of value. These increasingly large data repositories even have a new name: Data Lakes.

 

Whether through a formal investment in a supported distribution such as Cloudera, Hortonworks or MapR, or simply a side project initiated by downloading open source distributions, the number of Big Data projects is likely still not (yet) dwarfed by the countless words written by marketing and industry analysts hyping the new frontier of rich data insights promised by the commoditization of distributed file systems and NoSQL databases. Exacerbated by the promise of real-time engagements and streaming data feeds, IT organizations have leapt into Big Data infrastructure with both feet, but with no planned landing.

 

Not at all ironically, four or five years after Hadoop-mania started, the new “big” thing seems to be artificial intelligence and machine learning – technologies that, while they continue to quickly evolve, have been around for decades. Why, all of a sudden, have AI and ML become trendy areas of investment? Could it be because, with data science experience hard to find and expensive to buy, business stuck with their large Data Lakes are increasingly under tremendous pressure to demonstrate value? What better way to do that than invest in technologies that promise to automate data insights? Check out Can Machine Learning Save Big Data? for a deeper discussion on this.

 

The big challenge, however, remains: how do businesses glean value from collecting all that data, sifting and wrangling it to identify value, and then disseminating or leveraging that value in real time throughout the business in a manner that’s actionable? Many Big Data investors have discovered an interesting Data Paradox: the more data you have, the less value it provides.

 

Drowning in data is becoming a figurative reality.

 

A more fundamental challenge with the growing repositories of data is that, in many cases, businesses don’t even know what they’re collecting. The regulatory compliance required when harvesting credit card, personal health, or benefits information is onerous to the unprepared. Just as important is the question of whether any of that data is even relevant to the questions that need to be answered. Determining predictability and relevance is critical and should happen before a Data Lake is even considered. The sheer amount of data, and the irrelevance of much of it, will only obfuscate and confuse inquiries and insights.

 

In order to try and glean even some value from their Big Data infrastructure, many businesses have increased their investments in business intelligence and “data visualization” technologies – that is, the practice of visualizing data in charts and graphs to help identify trends. While these tools are great for reporting on sales numbers, they do not really provide the value you’d expect from the massive investments in Big Data and real-time Data Lakes. After all, BI and data visualization technologies have been around for almost twenty years; they largely haven’t been architected to provide advanced analytic insights or leverage real-time predictive algorithms.

 

New data wrangling techniques are now coming to the forefront to address data obfuscation and relevance. However, there are no artificial intelligence or machine learning algorithms to pre-determine relevance. This is the value add of a true data scientist. Programming initial models to carefully consider relevance and applicability of data to problems is a science, so it should not be left to chance. Adding new data to an existing model or data pool is one thing; you can then measure and adjust based on predictive performance. The process of adding new data is certainly one that can be automated. Machine learning techniques exist to refine, train and improve existing models. However, initiating a new model requires training, perspective, and experience.

 

Unfortunately for many businesses, investments have been made and data collection has been done (or is being done) already. The heavy lifting required to glean value from these in-place repositories will require the human touch.

 

How can you avoid drowning in all that data?

 

For those not yet invested, there is still time to consider what problems need to be solved, what insights need to be gleaned, and what processes can be enhanced; what data is predictive? Still, the question remains: how can businesses make use of their ever-filling data lakes? There is in fact a way to solve this problem. Hundreds of companies across every industry imaginable are doing it. Check out how Vantiv is streamlining merchant onboarding, Southwest Airlines is optimizing flight and crew schedules, and Bank of Communications is fast-tracking its credit card business. These companies are using a combination of predictive data science, business rules, and optimization not only to dramatically improve business performance, but also to automate the process from data ingestion to insight to action.

 

I will continue this discussion with my colleague toddrollin@fico.com in a complimentary webinar on Wednesday, August 30, 2017. Join us to discuss Prescriptive Analytics and Decision Management: The Secret Ingredients for Digital Disruption.

Digital transformation is not just about implementing technology or changing the way technology is used. It is about a more evolved business experience; from operations to strategic decisions, companies are collaborating more with goals to improve workflows and increase the bottom line. Through automation, enhanced connectivity and more sophisticated decision making, organizations can change the way they work to improve customer interactions, day to day processes, and ultimately achieve their business objectives.

 

With FICO DMN Modeler, decision makers can understand exactly what goes into the decision making process. DMN Modeler takes a top-down approach by understanding what inputs are necessary for the decision, rather than determining decisions based on the data that’s available. Because DMN follows an industry standard, Decision Model and Notation, all members of an organization can follow the decision process and understand key elements of the logic. This increases collaboration between teams and eliminates the confusion or lack of clarity stemming from decisions made in silos. When analytic models or other inputs change, DMN Modeler easily shows how these changes affect the final decision, helping build collaboration between data scientists, LOB managers, and executives who are all following the same decision requirements diagrams.

 

For example, let’s say I want to buy a car. With Decision Model and Notation, there are 4 simple components involved in creating a Decision Requirements Diagram (DRD) to outline all necessary data and decisions to understand the decision process behind my purchase.

 

DMN 1.png

Deciding what car to buy involves several decisions and data points.

 

  1. Decision element – the business concept of an operational decision. In buying a car, the key decisions I need to make are the actual purchase of the car, deciding what my preferences are, and determining how much I can afford.
  2. Input data element – the data structure whose component values describe the case about which decisions are to be made. When deciding what my car preferences are, some of the inputs that I would consider are the quality of the car, or the color and style of the vehicle.
  3. Business knowledge model – business concepts such as policies, which could be an analytic model or an algorithm. In my car preferences, I would apply a set of rules or ranking. For example, I would have a scorecard ranking certain features such as a technology package, safety, and size of the car.
  4. Knowledge Source – this is where the knowledge or information to make a decision comes from. If I think about how I can better understand the quality of the car, I would consider my own personal experiences (with that particular make and model), as well as consumer ratings.

 

DMN blog 2.png

You can build a Decision Requirement Diagram (DRD) to capture everything in one place.

 

By using a DRD, it’s very easy to see how changes affect different parts of the decision making process. If one day I decided that my scorecard for car preferences changed, it would impact the entire decision. Because “car preferences” is one of the main decisions in the actual purchase of the car, changing the way I prioritize different features could change my ultimate decision.

 

Making decisions can be difficult and risky. Let DMN Modeler simplify the complexity of decisions through a clear industry standard. With DMN Modeler’s simplified decision-first approach, your organization will disrupt its existing processes and become more collaborative and efficient.

 

Start using DMN Modeler now, it's free!

Wherever there’s data, there’s a story to be told. I was browsing Kaggle for a dataset to analyze for a webinar (you can watch it here if you’re a member of WITI), and I came across this HR Attrition dataset. I’m used to the world of credit risk and lending scores, so the prospect of analyzing employee data piqued my interest as something completely different. I set out to apply the scoring concepts and analytic models I typically use to help banks make lending decisions to help managers make decisions about their employees.

 

My Goal

I set out to see if I could use employee data to predict attrition. Anytime you do analytics, it’s important to understand the decision being made that you want to enhance. In this case, I want to help HR inform managers about their employees’ attrition likelihood so they can take action before it’s too late. While a good manager will be in touch with each employee to proactively detect changes in behavior or events that might affect attrition, I want to use analytics to help inform managers’ discussions with their employees and to suggest actions they might take to improve retention.

 

I’ve outlined my methods and findings below. As a note: the data used in this project is not real employee data; I got it from Kaggle, so you can check it out for yourself and join the discussion.

 

Investigate the Data

You’ve all heard the saying “garbage in, garbage out.” Well, it’s true of data analysis as well.  With this employee data, we care about whether the attrition performance is representative of typical employees. So before actually beginning the analysis, we must make sure the data is clean and reliable.  To do so, we check distributions and values in the data.

 

In real life, we would know more about the data and how representative our sample might be.  For instance, it is important to account for historical change, like an acquisition or opening a new office in a different location, to ensure the population you draw from in your dataset is representative of the future records you will use for decisioning.

 

retention 1.png

This looks like a good, clean dataset on the surface. Let’s get started.

 

Once you’re sure you have a clean, robust set of data that is representative of the future, you can get to work relating data points to a “target”. In this case, the target is whether or not an employee is likely to stay with the company.

 

Binning the Variables

Binning is the process of taking a dataset that has prior decision variables and grouping those variables into ranges so you can see how predictive they are. With this, you can review the information value of each variable to discern predictability. This provides a measure of how “related” the variable is to employee attrition; the list below is ranked from most to least predictive of attrition. In my webinar, I polled the audience to check their “mental model” of these variables: over 90% thought “job satisfaction” would be the most predictive variable of employee attrition. Take a look at the image below to see how that mental model compares to the actual results.

 

retention blog 2.png

 

I used FICO® Analytics Workbench™ to bin the data; that’s a peek at my screen above. We see that the results do not match what most people expected. Overtime is at the top of the list, which means it is extremely related to attrition. Job satisfaction falls near the middle, meaning it is not nearly as strongly related. So, the mental model of the audience was not correct, according to this data. This may be an indication of bias in the data, since a variable like “job satisfaction” must come from a survey, and the employees who answer surveys are likely not representative of the employee population.
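The ranking above is based on information value, which is simple to compute once a variable is binned. For readers who want to reproduce the idea outside of Analytics Workbench, here is an illustrative sketch; the bin choices and file name are assumptions.

```python
# Information value (IV) of one binned variable against the attrition target.
import numpy as np
import pandas as pd

df = pd.read_csv("hr_attrition.csv")                       # hypothetical file name
df["bin"] = pd.qcut(df["MonthlyIncome"], q=5, duplicates="drop")
df["bad"] = (df["Attrition"] == "Yes").astype(int)         # "bad" = employee attrited

grouped = df.groupby("bin", observed=True)["bad"].agg(["sum", "count"])
bad = grouped["sum"]
good = grouped["count"] - grouped["sum"]

dist_bad = bad / bad.sum()                                 # share of attriters per bin
dist_good = good / good.sum()                              # share of stayers per bin
woe = np.log(dist_good / dist_bad)                         # weight of evidence per bin
iv = ((dist_good - dist_bad) * woe).sum()
print(f"IV of MonthlyIncome: {iv:.3f}")                    # repeat per variable, then rank
```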

 

Right away, I start thinking about the decisions I might make using this information. For instance, managers can effect change on “monthly income” or “overtime”, but cannot change “years at company” or “marital status” (unless the company opens a dating service!).

 

This list gives a nice overview of variable relation to the target, but I wanted to dig deeper. I went on to examine how each variable affects attrition and then created a single Attrition Score. More details on that in my next blog; stay tuned.

 

As I mentioned before, I used FICO® Analytics Workbench™ to bin my data in this project. Get more information here.

Several thousand decisions have already been made by the time shampoo is bottled, but at that point, it’s only half way to your cart. The role of artificial intelligence continues to affect the path the product takes from its origin all the way to your shower. In our journey through the secret life of your shampoo so far (you can get up to speed with my last blog), we already covered just how many decisions go into establishing the chemical mixture and transporting said chemicals to safely create the product you use to wash your hair. In fact, so many decisions must be made in compliance with so many rules that companies use business rules software to streamline the process and ensure efficiency and consistency.

 

Once the chemical mixture is established and all ingredients are safely transferred and mixed, it’s time to start production. Have you ever wondered how companies decide what to write on product packaging? There are laws dictating what must be printed on product labels in different countries. In Canada, brands must implement Bilingual Packaging, meaning labels have to be printed in both French and English. While packages sold in Canada must abide by this law, companies are not obliged to do the same for packages sold just across the border in the US. To comply with varying legislation, companies use automated rules software to analyze protocols and determine the text they print on your shampoo bottle versus the bottle of your (distant) neighbor.
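A rule like this is trivial to express once it is externalized from application code. The toy sketch below is only an illustration of the idea; real packaging rules cover far more than label language, and the rule table here is hypothetical.

```python
# Hypothetical packaging rule: which languages must appear on the label?
LABEL_LANGUAGE_RULES = {
    "CA": ["en", "fr"],      # Canada: bilingual packaging required
    "US": ["en"],            # United States: English-only is acceptable
}

def label_languages(country_code: str) -> list:
    """Return the label languages required for a destination market."""
    return LABEL_LANGUAGE_RULES.get(country_code, ["en"])  # default assumption

print(label_languages("CA"))   # ['en', 'fr']
print(label_languages("US"))   # ['en']
```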

 

shampoo2 1.png

Canadian products must comply with Bilingual Packaging laws—requiring a different label than in the US

 

With the packaging sorted, it’s time to go to market. The shelves of a big box store are competitive territory. The average height of a woman in the United States is 5’4”, and prime shelf real estate is eye level. Since stores have multiple shelves in multiple aisles in multiple sections, important decisions must be made about product placement. Retailers need to ensure maximum profit, and to that effort, they produce planograms to design and model the placement of products on their shelves. The creation of a planogram is far from arbitrary. Retailers can use a combination of FICO business rules software, analytics, and optimization tools to plan the finest placement of each and every product, including your shampoo. The fact that the bottle is on the second shelf to the left, ideal placement to catch your eye the first time and make it easy to find every subsequent trip, is no coincidence. That location was determined through a long process of data analytics that informs carefully weighted business decisions about product placement throughout the entire store.

 

shampoo2 2.png

Product placement is not random. Stores use analytics to produce planograms.

 

When you strolled down the hair care aisle and grabbed that bottle of shampoo, you made one decision. While that choice was your own, the entire process that got you to this point was informed by a chain of business decisions largely informed by artificial intelligence. Tools from FICO’s Decision Management Suite work in the background using AI to empower experts to act rapidly by eliminating the need for middlemen, thereby providing a more productive and efficient path to your cart. Every brand wants to make it a ‘no-brainer’ for a consumer to choose their product – but before you make that seemingly mindless decision to buy, countless decisions were made to create the product, curate the experience, and ultimately lead you to buy a bottle of shampoo.

 

by Fernando Donati Jorge and Makenna Breitenfeld


Deciding fast and slow

Posted by cliveharris Jul 5, 2017

The means by which we make decisions have fascinated researchers for generations. The Greek historian Herodotus was excited to discover, nearly 2500 years ago, how the ancient Persians used to debate every decision twice – once when drunk, and once when sober – and if the two debates agreed, then the decision was passed. While no doubt fun for some, anyone who has ever regretted shopping on the internet while even slightly under the influence is going to be mildly skeptical of this approach.

 

Far more recently than Herodotus, in 2011, the celebrated behavioral economist Daniel Kahneman caused a near disturbance in the matrix with the publication of 'Thinking Fast and Slow' which assembled many of his years of psychology research into a form easily consumed by us ordinary people. As a pop-science title it was a best-seller, but it wasn't just a good story. It contained revelations about how we Earthlings arrive at decisions, and it taught us how to recognize, accommodate and compensate for mental traits that most likely evolved to suit our lives when we were hunter-gatherers. In a word - it was seminal.

 

This article describes a small fraction of the many rich findings in Kahneman's book, and goes on to explore how they might be interpreted in an organizational context, and in particular, if they can provide some insights in a world where organizations have the capability to automate many of their business decisions.

 

Both systems go

Try this mental arithmetic challenge - multiply 2 by 4. Now try this one - multiply 17 by 13. The second one was probably harder for you. What Kahneman reveals in his book is that these two tasks are not merely different by degree, rather - and perhaps counter-intuitively - they represent two different categories of thinking and of the processes by which we make decisions. He posits that this is an example of what in psychology is termed a dual-process model, which provides an account of how the same phenomenon can occur in two different ways.

 

Multiplying 2 by 4 probably took you close to zero effort. It was no doubt an automatic response and you didn't have to 'think' at all. This is the kind of mental processing that Kahneman refers to as System 1.

 

The second question probably took you a little effort. It is unlikely that you had an instinctive answer, and you probably had to hunker down a little and work it out. This kind of thinking is System 2 thinking.

 

Alarm-clock-on-laptop-concept--126245111.jpg

System 2 thinking increases necessary attention, therefore increasing time to a decision

 

System 1 is like a mental autopilot - always on, always ready to respond. It takes conscious effort to disengage System 1 and invoke System 2. System 1 is probably related to basic human responses like fight or flight - it aims to maximize survival, not, necessarily, success. There's no point in stopping to invoke System 2 to work things out when a sabre-toothed tiger fancies a bite.

 

This said, there probably was a time – perhaps when you were very young - when multiplying 2 by 4 wasn’t a System 1 activity – there was probably a time when you had to think about it. At some stage, this simple mental task and others like it seem to have made the transition from System 2 to System 1.

 

Further differences between the two modes of thinking are pretty startling. When you invoke System 2, there are some quite pronounced physiological changes. Your pupils dilate, your pulse quickens and you become insensitive to modest interruptions. It's probably not a million miles away from the state that software developers describe as flow. System 2 seems to reconfigure your body's limited resources (albeit in minor ways), consuming them slightly faster.

But what does it matter?

 

System 2 may well consume fractionally more of my body's resources, but I'm hardly likely to starve to death by overdoing it. For us people, it matters because by using the right system, we stand to benefit from better, faster decision-making, rather than from any resource surplus or deficit. But in organizations - where resources are often in the habit of being viewed through an accountant's lens - it matters because there's the potential for a second bite at the cherry. The resource costs of organizational decision-making can be incredibly high - think of the aggregated costs of employing people to individually meet and screen applicants for run-of-the-mill financial products. For organizations, reasoned, logical but expensive System 2 decisions have the potential to become automated System 1 decisions, and the payoff can be huge.

 

Right or wrong

System 1 thinking is absolutely necessary for us to engage with our lives - so much of what we do is effectively autonomous - but there are times when it’s a little too hasty. System 1 responses can be driven by instinct, resulting in the wrong answers, and Kahneman’s book contains many amusing examples of this. Indeed, part of Kahneman’s thesis is that we’d all become better decision makers if we took some time to assess which method is best in each situation (… in so doing, we push ourselves towards System 2).

 

So System 1 isn’t foolproof - if pressed to defend itself, it might proclaim “The perfect is the enemy of the good”. But for a business, there’s no point in rejecting a slow but good System 2 decision for a fast, automated, but poor System 1 decision. If you ever start making poor decisions at the speed of computers, your shareholders will soon want to know why.

 

Automating business decisions – or not

Some organizational decisions – perhaps rare or difficult decisions – are still a long way from becoming candidates for transitioning from System 2 to System 1. Think of a management team debating a proposed acquisition, or a major capital investment. Such decisions require analysis, logic, discourse and socializing throughout the organization. But for each rare or difficult decision, there are millions of other decisions capable of becoming System 1 decisions. Credit decisions are everyone’s go-to example of a decision domain that produces a high ROI and remains tractable with modern methods and tools.

 

Daniel Kahneman recognized the centrality of decisions in today’s organizations, describing an organization as ‘… a factory that manufactures judgments and decisions’. As a counterpoint, in his 2013 article Rethinking the Decision Factory in the Harvard Business Review, Roger L. Martin lamented that ‘Decision factories have arguably become corporate America’s largest cost’. Framed in the present discussion, it is clear that Martin was alluding to the costs of human-mediated System 2 decisions – essentially, the high cost of expert knowledge workers.

 

Since before the terms were even coined by Kahneman, many organizations have been genuinely engaged in a mission to transform their System 2 decisions into automated System 1 decisions. They may have not used this terminology, but the parallels are startling. It is as if the principle is central to the notion of automating decisions – whether they are human decisions or business decisions. For those that do it right, this is having a huge positive effect and helping to free their knowledge workers to concentrate on what they do best – innovate and help resolve the System 2 challenges that still defy automation.

 

Businesses across industries are automating their decisions with the FICO® Decision Management Suite. This suite processes data, utilizing advanced analytics, optimization, and decision management to quickly produce reliable, consistent decisions—moving businesses away from laborious System 2 processes.

Original article by Timo Berthold (timoberthold@fico.com), James Farmer, Stefan Heinz & Michael Perregaard

 

Computing hardware has largely reached the physical limits for speeding up individual computing cores. Consequently, the main line of progress in new hardware is increasing the number of cores within a single CPU, which makes the study of efficient parallelization schemes for computation-intensive algorithms ever more important.

 

A natural precondition for achieving reasonable speedups from parallelization is maintaining a high workload across the available computational resources. At the same time, reproducibility and reliability are key requirements for software used in industrial applications. In the Parallelization of the FICO Xpress-Optimizer paper, we present the new parallelization concept for the state-of-the-art MIP solver FICO Xpress-Optimizer. MIP solvers like Xpress are expected to be deterministic, which inevitably introduces synchronization latencies that make achieving a satisfactory workload a challenge in itself.

 

We address this challenge by following a partial information approach and by separating the concept of simultaneous tasks from that of independent threads. Our computational results indicate that this leads to a much higher CPU workload and thereby to improved, almost linear scaling on modern high-performance CPUs. As an added benefit, the solution path that Xpress takes is not only deterministic in a fixed environment but also, to a certain extent, thread-independent.
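As a loose illustration of the tasks-versus-threads idea (a simplified Python sketch, not the partial information scheme implemented in Xpress), the snippet below defines work units independently of the thread count and merges their results in task order rather than completion order, so the outcome is the same however many threads run.

```python
from concurrent.futures import ThreadPoolExecutor

def bound_work(node_id: int) -> int:
    """Placeholder for expensive, side-effect-free work on a single task."""
    return (node_id * node_id) % 97

def solve(num_tasks: int, num_threads: int) -> list:
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        # Tasks are defined independently of the number of worker threads ...
        futures = [pool.submit(bound_work, i) for i in range(num_tasks)]
        # ... and results are merged in task order, not completion order,
        # so the combined outcome does not depend on thread scheduling.
        return [f.result() for f in futures]

# Same answer whether one thread or eight threads do the work
assert solve(100, 1) == solve(100, 8)
```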

 

Find the full copy of our article Parallelization of the FICO Xpress-Optimizer here, on Taylor & Francis Online. Get a free trial of FICO Optimization Modeler, or learn more about the Xpress Optimization Suite here.

After exploring the benefits of data visualization in retail marketing, the question remains: where else can binning analysis give us insights? For numeric variables, binning is the process of dividing the continuous range of a variable into adjacent ranges, slices, groups, classes or "bins."
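For readers who want to experiment outside FICO Scorecard Professional, here is a minimal binning sketch in Python with pandas; the distance_to_store column and the cut points are made up for illustration.

```python
import pandas as pd

# A handful of example distances to the nearest store, in miles (illustrative values)
df = pd.DataFrame({"distance_to_store": [1.2, 4.8, 6.5, 15.0, 42.0, 81.0, 120.0]})

# Divide the continuous range into adjacent bins
df["distance_bin"] = pd.cut(
    df["distance_to_store"],
    bins=[0, 3, 6, 20, 80, float("inf")],
    labels=["0-3 mi", "3-6 mi", "6-20 mi", "20-80 mi", "80+ mi"],
)
print(df["distance_bin"].value_counts(sort=False))
```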

 

It should now be abundantly clear that binning analysis helps marketers visualize insightful patterns (if you missed that discussion, read my last blog here). My analysis of the retail data provided insights about responsiveness by examining the patterns between each variable and our response outcome. We noted that retail channel customers were among the most responsive, so now we'll dig into the same data set, again using FICO Scorecard Professional, to understand who these customers are and how that information can help marketers improve sales.

 

You may be thinking: how can you use binning to profile your customers? First, keep in mind that a binning analysis can include several 'target' variables. These 'targets' don't need to represent an unknown future outcome, or even be connected to a predictive model. By applying binning analysis to virtually any variable in the dataset, we can learn how the other variables relate to that 'target'.

 

To further analyze the set of customers in our retail example, we created a profile variable called “retail shopper,” which has a value of “1” for customers who shopped at any retail store. “0” was assigned for those who only shopped through other, non-retail channels. Recently inactive customers were left out of this analysis by setting their value to missing.
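A minimal sketch of how such a profile variable could be constructed in Python is shown below; the column names and the 24-month inactivity threshold are hypothetical, not taken from the actual project.

```python
import numpy as np
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105],
    "retail_purchases": [3, 0, 0, 5, 1],             # purchases made in a retail store
    "months_since_last_purchase": [2, 4, 30, 1, 28],
})

# 1 = shopped at any retail store, 0 = only shopped through non-retail channels
customers["retail_shopper"] = np.where(customers["retail_purchases"] > 0, 1.0, 0.0)

# Recently inactive customers (assumed: no purchase in over 24 months) are
# excluded from the analysis by setting their profile value to missing
customers.loc[customers["months_since_last_purchase"] > 24, "retail_shopper"] = np.nan

print(customers)
```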

 

To gain an overall understanding of which variables help identify the profile population of retail shoppers, we bin the data using our “retail shopper” profile as the target and sort it by Information Value (IV).
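For those curious about the mechanics, the sketch below computes Information Value for binned variables against a retail_shopper target using the standard weight-of-evidence formulation; the synthetic data, column names and small-count smoothing are illustrative assumptions, not Scorecard Professional internals.

```python
import numpy as np
import pandas as pd

def information_value(binned: pd.Series, target: pd.Series) -> float:
    """IV = sum over bins of (% of 1s - % of 0s) * ln(% of 1s / % of 0s)."""
    frame = pd.DataFrame({"bin": binned, "target": target}).dropna()
    counts = (frame.groupby(["bin", "target"], observed=False)
                   .size().unstack(fill_value=0) + 0.5)   # +0.5 avoids log(0)
    dist = counts / counts.sum()           # distribution of each class across bins
    woe = np.log(dist[1] / dist[0])        # weight of evidence per bin
    return float(((dist[1] - dist[0]) * woe).sum())

# Synthetic example: rank candidate variables by IV against the profile target
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "retail_shopper": rng.integers(0, 2, 500),
    "distance_to_store": rng.exponential(20.0, 500),
    "min_purchase_amount": rng.uniform(5.0, 200.0, 500),
})

iv_by_variable = pd.Series({
    col: information_value(pd.qcut(data[col], 5, duplicates="drop"),
                           data["retail_shopper"])
    for col in ["distance_to_store", "min_purchase_amount"]
}).sort_values(ascending=False)
print(iv_by_variable)
```

With real data, the variables most strongly related to retail shopping would rise to the top of this ranking, which is what the display below summarizes.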

 

Retail Shopper Profile Binning - Sorted by Information Value

   bin 1.png

The display above shows which variables are most related to being a retail shopper and which are not. Variables such as distance to store or purchase amount have the strongest relationship to retail shopping versus the two other channels, providing insight into the factors that define a retail shopper.

 

Profiling Insights from Binning Analysis

Further insight can be gained by examining the patterns within each variable. Variables with high IV are the most interesting because they best differentiate retail shoppers from non-retail shoppers. Binning details for the variables of interest are analyzed below.

Distance to Store

  bin 2.png

  • Distance to store is highly related to retail channel shopping.
  • Retail shoppers are more likely to live close to the store.
  • Over 62% of retail shoppers live within 6 miles of the store.
  • Over 70% of our non-retail shoppers live over 80 miles away from the store.

 

Minimum Purchase Amount

  bin 3 min purchase.png

  • Retail shoppers spend less money per purchase.
  • Over 16% of retail shoppers have spent as little as $10.
  • Retail shoppers may be motivated by convenience, i.e. picking up a gift or a seasonal item.

 

Frequency Last 24 Months

bin 4 frequency.png

  • Retail shoppers buy most frequently.
  • A small portion of retail shoppers bought 7 or more times in the last 24 months.

 

Maximum Purchase Amount

bin 5 max purchase.png

  • Retail shoppers spend low amounts of money per visit.
  • Almost 45% of retail customers have never spent more than $97 in one shopping trip.

 

Total Dollars spent last 24 months

  bin 6 total dollars.png

  • Nearly 9% of retail shoppers spent over $635 in the last 24 months, showing that low purchase amounts add up when shopping frequency is high.
  • There were higher than average volumes of 0’s (non-retail shoppers) at specific dollar amounts ($50, $90, $150) likely indicating specific “sales items” in the non-retail channel. Further analysis could help identify which items stimulated shopping through the catalog or web channels.

 

Marketing Strategy from Profiling

Binning allows for insightful customer profiling, which retailers can use to re-engineer their customer onboarding strategy. Not all customers will necessarily become loyal, but the sooner retailers can identify and engage with those that will, the more efficiently and effectively they can allocate their marketing budgets.

 

So, what can a retailer do with all this insight gained from analyzing the binning results? The retailer can understand how, when, and where customers are likely to buy. With that knowledge, they can develop more effective strategies to build business.

 

Binning and analyzing the customer data in this blog showed that people who live close to the store buy frequently but tend to make small purchases. With some increased awareness, these nearby customers could become loyal, albeit low-spending, customers. Using this insight, a marketing team might increase spending to reach retail channel customers through local advertisements or postcards targeted at specific neighborhoods near their stores. Once awareness is raised through increased advertising spend, the key will be to get customers and browsers to spend more while they are in the store. Special promotions, discounts, or store displays designed to draw customers into the store may result in more spending.

 

This is just one example of how binning can be used in FICO Scorecard Professional to profile customers of interest. The same retail data could also be used to target one-time buyers to elicit repeat business, holiday shoppers with special promotions, or high-ticket buyers with a larger marketing spend. More robust data could support the creation of "look-alike" models.

 

If you’re interested in testing out your profiling ideas, you can drill into this same dataset with a free trial of FICO Scorecard Professional.