
FICO Community Articles


Alliant, one of the largest credit unions in the US, has been serving members for over 80 years, growing to more than 335,000 members and $9.3 billion in assets. In our new press release, John Listak, Lending Systems Project Manager at Alliant, shares how the company built a new consumer loan origination system using FICO® Origination Manager. Alliant improved efficiency by reducing the number of decision rules per loan product by over 25%, and credit card applications can now close in less than 30 minutes.

 

Alliant's story is one we see with financial services clients around the world: they want to meet growth goals while mitigating risk and providing a faster, more digital customer experience. Most of the time, legacy systems built on outdated technology stand in the way. Alliant's previous system was not flexible enough to easily make changes, add loan products, or improve existing ones, which made it difficult to stay competitive, adapt to regulation, and meet customer needs. FICO® Origination Manager helped Alliant reach these goals by adding sophistication to its system, enabling fast changes, easy decision rule updates, and integration with multiple credit bureaus. If this story sounds similar to that of your organization, ask your questions about the Alliant story or Origination Manager in the comments section below.

 

Read the full press release here.

Reliability and reproducibility are key goals when developing the FICO Xpress solver engines. But what does that mean exactly? First, the same input to an algorithm should always lead to the same output. In addition, runtimes should be repeatable. Ultimately, the algorithm should always take the exact same path to the final results. This is what computer scientists call a deterministic algorithm; it makes results reproducible. That is handy, and even necessary, in many different contexts:

 

  • Want to show a live demonstration to your bosses? Your software better be deterministic!
  • Want to fix a bug that one of your customers reported? Your software better be deterministic!
  • Want to report your research results in a scientific journal that peer-reviews software? Your software better be deterministic!
  • The results of your work might have legal consequences, for example when you optimize the assignment of students to universities? Your software really, really better be deterministic!

 

I could go on, but for those reasons and many more, every commercial MIP (Mixed Integer Programming) solver that I am aware of, and even many academic solvers, is deterministic, to a certain extent. That is, they are deterministic as long as they run in the same computational environment, i.e., on the same machine. Even if the input is completely identical, a different operating system, more or fewer CPU cores, or different cache sizes can destroy deterministic behavior.

 

While this probably doesn't matter for your live demonstration (as it will run on the same laptop as the test run), it is almost guaranteed to break reproducibility in a journal peer review. For customer support, it means maintaining a farm of different machines, light ones and heavy ones, with (at least) three different operating systems. This can be a nightmare for both the developers and the IT admins.

 


Reliability is a crucial aspect of optimization software.

 

This is where we at FICO Xpress outperform other solvers. To the best of my knowledge, Xpress is the only MIP solver whose solution path is identical under Windows, Linux, and Mac. Our performance is also independent of the cache size. There is a dependency on the number of CPUs, but it can be overcome. The key ingredient is that our MIP parallelization (see my earlier blog post about Parallelization of the FICO Xpress-Optimizer) does not depend on the number of actually available threads, but on the number of tasks that we hold in the queue to be scheduled in parallel. Such a task can consist of solving a node of the branch-and-bound tree or running a certain primal heuristic. Tasks do not have a one-to-one association with computing threads; therefore, they can be interrupted and continued in a different thread at a later point in time. The idea is to always have more tasks available than there are threads.
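
The task-based idea can be illustrated with a toy sketch in plain Python (not FICO code, and a stand-in workload rather than actual branch-and-bound): because results are merged in task order rather than completion order, the merged output is identical no matter how many worker threads execute the tasks.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_task(task_id):
    # Stand-in for solving one branch-and-bound node or heuristic run;
    # the only requirement for determinism is that it is a pure function
    # of the task id.
    return task_id * task_id % 97

def run(num_tasks, num_threads):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        futures = [pool.submit(solve_task, t) for t in range(num_tasks)]
    # Collect in task (submission) order, not completion order, so the
    # result list is independent of thread scheduling.
    return [f.result() for f in futures]

# Same merged result on 4 or 40 worker threads:
assert run(128, 4) == run(128, 40)
```

The thread count only changes how fast the task queue drains, never what the merged result looks like, which is the essence of decoupling tasks from threads.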

 

The maximum number of parallel tasks can be set via the user control MAXMIPTASKS. An important consequence of breaking the task-thread association is that a run on a machine with X threads can always be reproduced on a machine with Y threads, no matter whether X is smaller or larger than Y. You just need to set the MAXMIPTASKS control in the Y-run to the value reported in the log file of the X-run. For example, on my 40-thread Windows development server, the corresponding log lines read:

 

   Starting tree search.

   Deterministic mode with up to 40 running threads and up to 128 tasks.

 

So, if I want to redo a run on my 8-thread Mac laptop, or inside my 4-thread Linux virtual machine, I simply set MAXMIPTASKS=128 and I am good to go.
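
As a small sketch of that workflow (plain Python, not part of the Xpress API; the log format is copied from the example above), the task count can be scraped from the X-run log and reused on the Y machine:

```python
import re

# Log line from the 40-thread X-run, as shown above.
log_line = "Deterministic mode with up to 40 running threads and up to 128 tasks."

m = re.search(r"up to (\d+) running threads and up to (\d+) tasks", log_line)
threads, tasks = int(m.group(1)), int(m.group(2))

# On the Y machine, set the MAXMIPTASKS control to `tasks`
# (here, MAXMIPTASKS=128) before starting the tree search.
print(tasks)
```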

 

Albert Einstein is credited with saying: "Insanity is doing the same thing over and over again and expecting a different outcome." It's good to know that my favorite MIP solver is sane and, moreover, offers me different ways of doing the same thing with the same outcome. This gives me the freedom to choose the way that suits me best and is right for the job at hand.

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

Recent FICO research reiterates that card-not-present (CNP) fraud is growing dramatically as a percentage of total fraud losses. In EMEA specifically, CNP fraud now accounts for 70% of total card fraud losses, up from 50% in 2008. At the account level, this figure is close to 90%. Similar CNP metrics exist around the globe.

 

While issuers often avoid high dollar liability for CNP fraud, they have a strong interest in CNP fraud prevention as an improved consumer experience benefits all parties. As a result, payment processors, card issuers, and merchants are all striving to improve the separation of fraudulent and legitimate CNP transactions.

In support of these efforts, FICO has developed new machine learning techniques focused specifically on CNP transactions. These advances have demonstrated an ability to reduce total CNP fraud losses by upwards of 30% without increasing false positive rates. The CNP machine learning innovations will be included in the 2018 consortium models (both Credit and Debit) and will be available in the standard consortium model release cycle. There are no incremental licensing or upgrade fees associated with this release.

Coming Soon: FICO® Falcon® Platform 6.5

FICO will soon be releasing version 6.5 of the FICO® Falcon® Platform. This release will include several enhancements aimed at business intelligence, modeling flexibility and decision flexibility. A few specific improvements include:

 

  • Executive dashboard: Executive-level KPIs and trending details that visually demonstrate and quantify fraud detection performance over time.
  • Modeling flexibility: Falcon 6.5 customers will have the ability to deploy in-house developed models on the Falcon Platform.
  • Decision flexibility: Falcon 6.5 will enable clients to configure incremental User Defined Variables (UDVs) for use in Falcon Expert decision strategies.

 

The new enhancements in Falcon 6.5 will be presented during the 2018 Falcon User Group taking place April 16 at the Fontainebleau Hotel in Miami Beach, Florida. Details will also be available from your FICO account team later this spring. To register for the User Group, visit the FICO World and Falcon User Group registration page, enter your email and select the registration options. For more information on FICO World, please visit our FICO World 18 website. If you choose to only attend the Falcon User Group, please select “Attending a Monday User Group Only.”

The 2018 Falcon User Group will be held Monday, April 16th at the Fontainebleau Hotel in Miami Beach, Florida. The event will run from 8:00am until 3:30pm, just prior to FICO World 2018, which starts at 4:00pm.

 

The Falcon User Group is a complimentary event exclusively for users of the FICO Falcon Platform, the industry’s leading enterprise fraud platform that includes 25 years of machine learning and artificial intelligence analytic innovations. In addition to the opportunity to network and collaborate with industry peers and FICO fraud experts, topics will include:

  • Industry trends and the impact on fraud management
  • Enterprise fraud strategies
  • Falcon roadmap: upcoming enhancements
  • Analytic innovations in machine learning and AI within Falcon
  • Utilizing Falcon for Payments and Banking
  • Hot topics including card-not-present and mass compromises
  • Best practices in fraud management

The Falcon User Group will be valuable for all of our Falcon customers whether you are a direct client or are a customer through one of our partners.

 

If you register for both the Falcon User Group and FICO World, you will receive a $300 discount on your FICO World registration fee by entering promo code FFUG1840.

 

To register, visit the FICO World and Falcon User Group registration page, enter your email and select the registration options.

 

For more information about FICO World read our article 5 Reasons Fraud Professionals are Headed to FICO World or visit the FICO World 2018 website.

 

If you choose to only attend the Falcon User Group, please select “Attending a Monday User Group Only”.

Our Blaze Advisor customers keep finding compelling new ways to embed rules into all their critical decisions. To that end, FICO keeps raising the bar on what’s possible – particularly as decisions require more transparency, explainability, and ultimately scalability to the breadth and depth of big data analytics that proliferate in businesses.

 

Blaze Advisor 7.5 is the culmination of our aggressive (and client-motivated) efforts to put intelligent decisions at the forefront of your organization. It parallels features now part of FICO Decision Modeler powered by Blaze Advisor (our cloud-based decision rules solution).

 

So what’s potentially game-changing about Blaze Advisor 7.5 that should make you want to boost your decision engines to Warp Factor 9?

Ensure your decision logic is acting logically. Testing and debugging your decision service is an essential step before deploying it within your business. How can you ensure that logic is going to play out as expected? Decision Analyzer is a new Blaze capability that steps you through your decision logic to see how results are generated, which is especially helpful when errors or warnings occur. It helps you quickly determine whether the problem lies with your decision logic or your test data. Get more information on this feature here: Debugging a Decision Service is Easy with the New Decision Analyzer
Decision tables growing exponentially? You're not alone. Businesses power up their decision engines with decision tables, where you can readily define and maintain a large number of rules in a visually rich, user-friendly format. As decision logic gets more complex, however, these tables can become unwieldy. Blaze 7.5 offers improved editing, compilation, and execution of very large lookup tables, so you can continue to execute intelligent decisions at the speed of business.
Intelligently deploy machine learning models in your decisions. The rush to let AI and ML help automate and otherwise improve decisions carries with it a significant risk of unwanted biases and errors. The ability to import tree ensemble models as PMML mining models can help increase decisioning accuracy. Blaze Advisor 7.5 supports this capability, which will ultimately help businesses continue to expand and extend machine learning capabilities throughout their decisions. Get more information on this feature here: Have a Random Forest Tree? Use it in your Decision Model.

 

In Star Trek terms, it’s much more prudent to beam down to a new planet if you’re prepared. Stay tuned for more information about Blaze Advisor 7.5 in the Blaze Advisor and Decision Modeler Community. In the meantime, download a trial of Blaze Advisor 7.5 (or Decision Modeler – Blaze Advisor in the Cloud).

 

If you're already using Blaze Advisor and want to upgrade to 7.5, log in to the FICO Fulfillment Center, select "Recent Product Releases," then select "Blaze Advisor 7.5 for Java."

FICO® Origination Manager is a powerful application for business users to dynamically develop and deploy customer on-boarding strategies, and rapidly manage changes to user experience needs and business processes. It contributes to more profitable portfolios and effectively manages risk in rapidly changing business environments. Origination Manager reduces operating costs, streamlines workflow and ultimately, provides the end-customer with a faster decision and higher potential to meet their financial and other product and service needs.

 

What's new in the 4.9 release?

Lending institutions have an urgent need to reduce the complexity and cost of customer onboarding while improving system performance. The Origination Manager 4.9 On-Premises release meets these needs with an increased level of client configurability over workflow, cost-effective and secure product delivery through multi-tenancy, improved model execution, and additional data providers to enrich client understanding of applicant risk.
Key new capabilities of each module are outlined below:


Application Processing Module (APM)

A new Synchronous Consumer Workflow template provides a more intuitive and streamlined application process that’s less reliant on queues and search. Instead of relying on FICO® Application Studio to configure and manage workflow, users can make changes directly in the Application Processing Module web application. This change helps reduce the costs and complexity associated with Origination Manager implementations, as configuration is much easier to do.

 

Origination Manager 4.9 on-premises version includes support for multi-tenancy: the ability to host multiple clients on the same application server without multiple installations of the product. Multi-tenancy means FICO, or potential reseller partners, can create a standardized, repeatable, Origination Manager offering with boundaries around workflow, data models and user interface. When clients stay within boundaries of a multi-tenant environment, the product is secure, easy to maintain, easy to upgrade, and less costly to implement.

 

Decision Module (DM)

The Decision Module has been enhanced to facilitate easier model execution and external decision data access:

  • User Interface for management of Excel file upload and lookup
  • User Interface for management of JAR files and custom functions
  • Additional support for internal services including ADB, server configuration and transform files

 

Data Acquisition Module (DAM)

Includes performance improvements and instrumentation to better understand transaction timing between touchpoints in the data acquisition process.

 

Powered by FICO® Data Orchestrator, the library of data providers continues to expand, with new providers and updated versions in this release.

 

New Data Sources

FICO is always improving and adding to the types of data sources available to use in decisions and models. Below are the newest additions:

  • Equifax Consumer Total View
  • Innovis Version 1, Release 1
  • PayNet Direct 3.25
  • Lexis Nexis Global Watch OFAC 1.6
  • Lexis Nexis Instant ID Consumer 1.87
  • Dun & Bradstreet Toolkit XML Version

 

Where is the Product or Solution Available?

The Application Processing, Decision and Data Acquisition Modules are available worldwide for on-premises deployment. Certain features vary by market; learn more on the Origination Manager product page.

This blog series features the opinions and experiences of five experts working in various roles in global strategy design. They were invited to discuss the best practices outlined in this white paper, and also to add their own. With combined experience of over 70 years, they partner with various businesses, helping them to use data, tools and analytic techniques to ensure effective decision strategies to drive success. The experts share their personal triumphs and difficulties; you’ll be surprised to learn the stark differences, and occasional similarities, in these assorted expert approaches to accomplishing successful data driven strategies across industries.

 


Stacey West is a Principal Consultant with FICO and has been with the company for 17 years. Stacey focuses not only on working closely with customers to first gain their trust, but also on valuing and leveraging the client's knowledge and expertise.

Establish Trust

The most imperative first step in a new strategy design engagement is to establish trust. Judgment must be left at the door in order to start the process in a new and different way. This helps to move away from intuitive legacy rules and ensure a smooth implementation of data driven decisions.

 

I personally experienced the benefits of this approach when I worked with a large financial services company in the UK that was new to data-driven strategy builds. The analysts were skeptical of a new tool, FICO® Analytic Modeler Decision Tree Professional, as it was not immediately clear why it had selected a specific metric. After taking the time to review their data, walk through the differences between the approaches, and demonstrate that Decision Tree Professional was identifying accounts as they would expect, trust was gained, and we could move forward together with mutual understanding to build improved analyses and strategies.

 

Ease into Change with Education

Thorough preparation during the planning stage is critical for all parties involved in the strategy design: clients, analysts, and consultants. As my colleague Therese stressed when she shared her best practices, knowing the project objectives up front will make the process much smoother. It is common to be resistant to or suspicious of a strategy change. Many people are of the mindset that their current state is working just fine. However, in my work with countless companies, I've found that walking through the benefits of shifting to a data focused strategy with client specific examples establishes the trust needed to pursue change with confidence.

 

Acknowledge External Influences

Changes in strategy are often triggered by external events that lead companies to reexamine their approach. The world is constantly changing, and strategies must be tweaked to accommodate things such as regulatory changes, purchases of new portfolios, or a new company-wide strategy that triggers a change in policy rules. For example, implementing a lower acceptance score in a credit lending model will include more risky accounts and impact downstream rule implementations. By understanding how these outside changes will likely affect the entire portfolio, advisors can jump ahead several iterations in a strategy design project to predict future outcomes with new scoring models and new data. This exercise demonstrates how predictive data driven analysis can be, igniting the desire to pursue a shift away from legacy strategy build methods.
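
The acceptance-cutoff effect can be sketched in a few lines (hypothetical scores and cutoffs, purely for illustration): lowering the cutoff admits more applicants, including more of the riskier ones, which downstream policy rules must then absorb.

```python
# Hypothetical applicant credit scores (made-up data).
scores = [520, 580, 610, 640, 660, 690, 720, 750, 780, 810]

def accepted(cutoff):
    # Accept every applicant scoring at or above the cutoff.
    return [s for s in scores if s >= cutoff]

print(len(accepted(700)))  # 4 accepted at the higher cutoff
print(len(accepted(620)))  # 7 accepted at the lower cutoff
```

The three extra accounts admitted by the lower cutoff are exactly the lower-scoring, riskier ones, so every downstream rule now sees a riskier through-the-door population.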

 


A legacy strategy, however, should not be dismissed, as it is likely rich in human knowledge. Decision Tree Professional can be a game-changing tool, but it is only as good as the people, strategy and data put into it. To develop a winning strategy, it is crucial to engage with experts to capture existing domain knowledge and learned best practices. Without human input, a model might include items that weaken the predictive output of a strategy. Human knowledge about things like policy rules and target markets must be analyzed along with the raw data to guide the analytics and deliver useful results.

 

Case Study

All these best practices, particularly the need to establish trust, came together for me in one project. I met with a client in financial services whose team had reservations about how the tool worked and selected predictive metrics. By coming in prepared, including the customer in all stages of the analysis, and iterating on Decision Tree Professional, my team and I helped the client save many months of work by providing detailed portfolio insight and reducing the test-and-learn phase.

 

The aha moment occurred in the evaluation stage. I gave the same demo twice: once with a generic dataset, then again with the customer's data and participation. Building the demo in real time and including familiar data provided an immediate sense of trust. It also allowed my team to highlight the strategy building capabilities of our tool of choice. Using the client's data in our FICO tool provided insight into the client's portfolios, which highlighted the areas with potential for improvement in their current strategy. The final perk of this approach was transparency. Building the strategy together with the client enabled them to understand and explain how it worked so they could explain the value to their internal teams.

 

Trust is critical to the success of every strategy design project I work on. If all parties come in with an open mind, it is easy to tackle initial preconceptions and feel comfortable with progressive changes. By including the customer and their existing data in all stages of the analysis, you can save months, and possibly years, of work if future strategies are data driven too, and that's on top of the benefits that come with new strategies that improve operational decisions and performance results. Chat with experts and fellow users in our TRIAD and Analytics user groups.

 

If you’re a current TRIAD user, join us at our upcoming Customer Forum, May 23-24 in Atlanta, Georgia.

FICO World is coming to Miami Beach and will be held at the Fontainebleau Hotel from April 16-19, 2018. The agenda is packed and there's plenty for our fraud customers to see and do. Here are just 5 of the reasons why fraud professionals love FICO World:

 

  1. We have a Falcon User Group just for you. Held on Monday the 16th, this is a complimentary event exclusively for users of the FICO Falcon Platform. You will have the opportunity to network and collaborate with your peers and FICO fraud experts to discuss some of the most important topics for Falcon users. Get more details here: Register for the 2018 Falcon User Group.
  2. Hear the latest fraud trends from global experts including financial professionals, industry analysts and Falcon customers. We have a packed agenda, including a stream dedicated to fraud where you’ll hear about the latest developments in fraud protection and the winning approaches to fraud management our customers have taken.
  3. With an exhibition center, demo theatre and roundtable discussions there are unprecedented opportunities to interact with fraud experts one-to-one or in small groups. You’ll leave FICO World with new knowledge and understanding and valuable advice you can deploy in your own fraud operations.
  4. Learn how to develop detection strategies that also deliver exceptional consumer experience: We know that fraud management isn’t just about catching the bad guys – we will be focusing on how you can put customer experience at the center of your fraud operations.
  5. Meet, mix and learn with your peers. Whether you're mingling at the welcome reception, getting some exercise at the fun run, or grooving at the party to the X Ambassadors - there are plenty of informal and fun opportunities for you to grow your network and share experiences. You'll even get the chance to meet chess champion Garry Kasparov and find out what he thinks about artificial intelligence.

 

We hope to see you at FICO World!

Best Practices in Strategy Design

This blog series features the opinions and experiences of five experts working in various roles in global strategy design. They were invited to discuss the best practices outlined in this white paper, and also to add their own. With combined experience of over 70 years, they partner with various businesses, helping them to use data, tools and analytic techniques to ensure effective decision strategies to drive success. The experts share their personal triumphs and difficulties; you’ll be surprised to learn the stark differences, and occasional similarities, in these assorted expert approaches to accomplishing successful data driven strategies across industries.

Therese Henry is a Principal Consultant at FICO, where she has worked for the past 5 years. She collaborates with a wide range of clients, from those that are new to the data-driven strategic approach to global enterprise corporations entrenched in it. Therese is a champion for upfront education and training before any analysis begins.

1. Client training

Client training is crucial to the success of any project, and it should be completed before any deep analysis even begins. Every client is at a different level of strategy development, however, establishing a base knowledge of terms, markets, trends, and processes will ensure a smoother campaign and build the trust needed between all participating parties.

 

The Best Practices in this paper span industries and can guide a client through the different levels of strategy design, evolving from a judgmental approach all the way through to implementing data driven strategies. I believe it is most important for a team to be armed with the knowledge of terms, trends, and best practices. In my experience, this can help shift a project away from the "always done it this way" thought process to a "prove to me it works" mindset, to finally considering the gaps in the legacy approach and filling them with informed, data driven decisions.

 

When first adopting a data driven strategy approach, it is common for clients I work with to be content to follow the status quo of their proven strategies. However, shying away from cutting edge tactics will likely result in missed opportunities.

 

2. Develop strategy, know your goals

In developing a strategy, the most important starting point is basic benchmarking. Defining goals helps pin down exactly what the client wants to achieve. Once goals are defined, the next step is to determine what is achievable and what is necessary to get there. Knowing how to measure outcomes, and what success looks like, will inform the strategy design. Strategy will change depending on the maturity of a client or project; for instance, a newer client might require a basic approach with phased implementations. It is critical to talk through this process before pulling any data. Open communication and taking the time to confirm expectations will ensure goals, strategies, and execution are all aligned to produce the desired outcome.

 

In determining strategic goals, clients must consider the trade-offs necessary to achieve their goals. If they want to increase outstandings, that comes with a higher risk of default. To finalize strategy and ensure the right goals are being prioritized, stakeholders must decide what they are willing to live with, and what the bottom line cost of those goals will be. In other words, the ends must be worth the means for a project to deliver value.

 


 

3. Set baseline to track relative change

In order to measure outcomes and determine success, a baseline must be set. This baseline must consider project goals along with regulatory requirements. Results in a vacuum won't show relative change or shed light on what has impacted the bottom line, which often leads to frustration. Building out the first-line strategies is important to each group in determining the benchmark. All subsequent outcomes must be measured, and measured again, to track and learn from changes and deviations from the baseline.

 

Final Suggestions

With FICO's new cloud-based solutions, we are able to work with smaller customers, and also customers that are new to the process of strategy design. FICO offers a Foundations in Customer Lifecycle and Credit Scoring Course, available to help clients get the background they need to pursue data driven strategies. Many smaller organizations can benefit from automating simple strategies. Now, because of the scalability and cloud-first product focus, FICO can help. All FICO expertise is now available to a wide range of clients of all sizes and across several industries, including auto and healthcare. Regardless of industry or size, client training, benchmarking and goal setting remain paramount for success. If you're already a FICO TRIAD client, join us at our upcoming customer forum in Atlanta, Georgia to discuss even more best practices, plus new features and use cases.

 

My experience has taught me that leading with comprehensive training and knowledge transfer leads to successful strategy design; several of my colleagues will be sharing their best practices in future blogs. Until then, I'd like to know how upfront training and knowledge transfer has helped your organization. Please share your stories and questions in the comments section below or chat with experts and fellow users in the TRIAD Community.

USA: Survey Reveals that US Banks Stopped Nearly $17 Billion in Fraudulent Transactions in 2016

On January 24, the American Bankers Association (ABA) issued its 2017 Deposit Account Fraud Survey Report. The survey showed a substantial increase in attempted fraud, as the nation's banks stopped nearly $17 billion in fraudulent transactions in 2016 compared to $11 billion in 2014. While fraud attempts against bank deposit accounts were up 48% over the two-year period, fraud losses increased at a slower pace of only 16%, costing the industry $2.2 billion in total losses in 2016 compared to $1.9 billion in 2014. US banks stopped $9 out of every $10 of attempted deposit account fraud in 2016.
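
A quick back-of-the-envelope check of the "$9 out of every $10" figure, using only the numbers quoted from the survey above:

```python
stopped = 17.0  # $ billions of attempted fraud stopped in 2016
lost = 2.2      # $ billions of actual industry losses in 2016

# Share of attempted fraud dollars that banks stopped.
share_stopped = stopped / (stopped + lost)
print(round(share_stopped, 2))  # 0.89, i.e. roughly $9 of every $10 attempted
```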

 

According to the ABA survey, debit card fraud accounted for 58% of industry loss, with the majority of cases involving counterfeit cards, card-not-present transactions or lost or stolen cards. At 35%, check fraud was the second most common fraud type. Other channels, including online banking and electronic transfers like wires and ACH payments, accounted for 7% of industry losses.

 

The survey also revealed that fraud attempts increased in all categories, especially in non-debit electronic channels. The volume of fraud attempted in other channels but stopped by banks more than doubled from 2014 to 2016. And while debit card fraud losses remained consistent with previous surveys, check fraud losses saw their first increase since 2008, surging by 28% to $789 million.

 

The survey sampled 138 banks of different sizes. A copy of the annual survey results can be purchased from the ABA.

 

USA: Secret Service Issues ATM “Jackpotting” Alert to Financial Institutions in the US

As reported by numerous media sources, in late January the US Secret Service issued a confidential alert warning ATM owners and operators that criminals were conducting "jackpotting" attacks on standalone front-loading ATMs. The targeted machines are routinely located in pharmacies and big box retailers, or operate as drive-thru ATMs.

 

The jackpotting schemes involve thieves posing as ATM technicians who often replace the original hard disk of the ATM with a disk that mirrors the ATM’s own software. The fraudsters can then remotely control the ATM and force it to spit out cash like a winning slot machine. The cash is then collected by hired runners. Jackpotting attacks have long been reported across Europe and Asia. The Secret Service alert is in response to what are believed to be the first known attacks in the United States.

 

The Secret Service recommends that banks contact their ATM service providers for the latest security updates and patches to mitigate the risk from these attacks, ensure proper physical security controls that limit access to the machines, and monitor for communications failures and alarms. The Financial Services Information-Sharing and Analysis Center is also issuing information on the attacks. FICO® Card Alert Service can help here: it uses industry-leading predictive analytic software and investigative techniques to pinpoint ATM and debit card transaction fraud at its inception.

 

Europe: PSD2 Becomes Law Across the Eurozone

The Second Payment Services Directive, better known as PSD2, became law on 13 January 2018. From this date, banks across the Eurozone have had to give regulated third parties access to bank account information via APIs. This move to more open banking will see third parties, known as Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs), set up a variety of new services for consumers, such as the aggregation of account information and payment initiation across multiple accounts. For fraud managers, the addition of third parties into account management is likely to be a concern, as it will alter the fraud data on which fraud decisions are based. FICO has responded by developing a PSD2 analytics model for the FICO® Falcon® Platform, which is available now and is already helping clients adapt their fraud operations to the altered data landscape created by PSD2.

 

PSD2 also looks to reduce payment fraud. On 13 January, the two-year countdown to the regulatory use of strong customer authentication to secure payment accounts and transactions began. For payment service providers, the need to use strong customer authentication must be balanced with providing a smooth experience for consumers. To do this, it is likely that PSPs will want to limit the use of strong customer authentication as much as possible. For that to happen, they need to keep their fraud levels below specified reference rates and they need to monitor and report fraud rates to the regulator. This is an extra consideration for the fraud department and a strategic approach that balances the requirements of the law with the experience of customers will be needed.

 

There is a wealth of PSD2 information available on our corporate website.

 

USA, Eurozone, Australia: Introduction of New Real-Time Payment Schemes

November 2017 saw the launch of real-time payment schemes in both the USA and the Eurozone. The Eurozone launched a cross-border real-time payments scheme called SEPA CT Inst. Available in all Single Euro Payments Area (SEPA) countries, individuals and businesses can now send irrevocable payments of up to €15,000 in less than 10 seconds.

 

Meanwhile in the USA, bank-owned The Clearing House sent the first real-time payment in November over the first new payments scheme developed in the USA in 40 years. While transaction volumes and values are currently low, The Clearing House is aiming for ubiquity by 2020.

 

The latest national real-time payments scheme is Australia’s New Payments Platform, which launched to consumers on 13 February.

 

Banks in countries with new real-time payment schemes can look to learn from the experiences of earlier schemes such as the UK Faster Payments Scheme, which launched in 2008 – particularly when it comes to the fraud that evolves in a real-time payments environment.

 

European Union and Beyond: New Privacy Laws Have Far-Reaching Effects

In May, the General Data Protection Regulation (GDPR) comes into law. GDPR is wide-ranging in its remit, not least because it is applicable to any data subject (person) who is in the EU. This means that it applies to organizations that are not based in the EU; for example, a US bank with customers who are living (even temporarily) in an EU country would need to comply with GDPR.

 

Some have expressed concern that the processing of customer data to detect and manage fraud will be affected. It is worth noting that GDPR does not always require a data subject’s consent to process their data — legitimate business interest and compliance with other national regulations are also legitimate grounds to process data.

 

There is an intersection between PSD2 and GDPR that banks should be aware of. Under GDPR, data subjects may raise a subject access request, requiring the data controller — for example, their bank — to provide back to them all the data they hold on them. PSD2 mandates that any action that could increase the risk of fraud uses strong customer authentication to verify that the person presenting themselves is the legitimate customer. Payment service providers should, therefore, consider how they implement strong customer authentication in the case of subject access requests under GDPR.

 

Latin America: Caribbean Tax Havens and New Threats

On December 5, EU ministers added 17 countries to a list of non-cooperative tax jurisdictions for failing to meet tax governance standards; four of them border the Caribbean Sea. The Panama Papers, the FIFA case, Brazil’s “Car Wash” investigation and other scandals involving multinational companies, banks, politicians and even governments demonstrate how fragile the LAC region remains in terms of corruption, money laundering, drug trafficking, sexual exploitation and smuggling. Nevertheless, things are slowly improving, whether driven by political motives, economic pressure from worldwide trading partners, the need to convince new foreign investors of transparency, economic instability, lost tax revenue, reputational risk, social costs or other international pressure. Whatever the reason, local governments and regulators are now more focused on keeping these emerging economies growing and prepared for a disruptive market.

 

When we talk about the disruptive market and new technologies, LAC countries face challenges not only in complying with new regulations but also in preventing cyberattacks and fraud. WhatsApp, for instance, is the most popular social media messaging app in LAC, and it has become one of the main channels for phishing, malware dissemination, malvertising and fraud. In Brazil alone, more than 44 million threats were identified last year.

 

All countries are suffering from increasingly sophisticated crimes, even where artificial intelligence and complex schemes are used to counter cyberattacks. According to ESET’s Latin American Security Report (2017), the number of reported ransomware cases grew 131% in 2016, and the Inter-American Development Bank (IDB) puts the cost of cybercrime in LAC at no less than US$90 billion per year.

Machine learning is the science of using principled, well-defined techniques to learn from data. While it is obvious that data enables learning, its usefulness is capped by data richness and data quality, a fact that often goes underappreciated. Poor-quality data too easily leads to bad models, to overfitting, and ultimately to poor model performance down the line. Beyond the turnaround cost of correcting models, this can disrupt the underlying business operations.

 

In the context of machine learning, it is imperative to have supportive evidence that is unambiguous, and also rich enough to justify the conclusions drawn from it. With this in mind, it is important to note:

  • Learning from randomness can be detrimental to model performance.
  • An attempt to learn beyond the limits of your data is just as bad.

 

Randomness

A target is influenced by a multitude of factors, many of which are observable and explainable. Some factors, typically those that are unobservable, bring variability to data. These random signals are referred to as stochastic noise. It is hard to deduce anything useful from this noise, regardless of how accommodating the learning capacity may be. The noisier the data, the less one can infer from it; therefore, data-resource requirements rise with the noise level.
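To make this concrete, here is a toy simulation (not from the article) showing that, for a fixed sample size, the uncertainty of even the simplest possible estimate grows in direct proportion to the noise level:

```python
import random
import statistics

def estimate_mean_stderr(signal, sigma, n, trials=2000, seed=7):
    """Empirical standard error of the sample mean of `signal` + N(0, sigma) noise."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        sample = [signal + rng.gauss(0.0, sigma) for _ in range(n)]
        estimates.append(statistics.fmean(sample))
    return statistics.stdev(estimates)

# With the same 50 observations, tripling the noise triples the uncertainty
# of the estimate (theory: stderr = sigma / sqrt(n)), so recovering the
# original confidence would need roughly nine times as much data.
low_noise = estimate_mean_stderr(signal=1.0, sigma=1.0, n=50)
high_noise = estimate_mean_stderr(signal=1.0, sigma=3.0, n=50)
```

This is the "requirements for data resources go up with increasing noise levels" point in miniature: the noise term enters the error multiplicatively, while extra data only helps through a square root.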



When the data provided is contradictory, it’s hard for the interpreter to take the correct action.

 

Target complexity

Noise need not originate at the source. In many cases, the learner itself has bounds on what it can learn, and anything beyond that boundary is effectively noise (often referred to as deterministic noise). A model of inadequate capacity is incapable of explaining the inherent complexity in the data, and an attempt to interpret deterministic noise can lead to overfitting.

 

Overfitting occurs when a model is trained without regard to its out-of-sample performance. While most associate overfitting with randomness in the data, it is astounding how often it results from a self-inflicted, needlessly complex model specification.

 


Too much complexity in an explanation of data can be a bad thing…especially if it cannot be understood.

 

In machine learning, a rule of thumb helps us understand this bound: for a dataset with N data points, the most complex function one can attempt to learn with reasonable confidence has a VC dimension of roughly N/20. Model complexity is thus capped by data resources. To build accurate, meaningful and robust models, we need to acknowledge and treat randomness, and ultimately control model complexity.
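The N/20 rule of thumb from the paragraph above is easy to turn into a quick sanity check; a minimal sketch:

```python
def max_learnable_vc_dimension(n_points, points_per_vc=20):
    """Rule-of-thumb cap on model complexity: VC dimension ~ N / 20."""
    return n_points // points_per_vc

# For example, 10,000 labelled examples support a hypothesis class of
# VC dimension around 500. Since a linear model on d features has VC
# dimension d + 1, that caps such a model at roughly 499 features.
cap = max_learnable_vc_dimension(10_000)
```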

 

Treat stochastic noise and control for model complexity with a Binning Library

Variables, whether continuous or categorical, can be transformed into mutually exclusive and exhaustive bins; each bin then acts as a predictive feature. Binning variables is a tradeoff exercise between accuracy and degree of confidence (precision), just like any bias-variance tradeoff.

 

Bias and accuracy are inversely related; lower bias is synonymous with higher accuracy. Variance of performance across different data samples is representative of the degree of confidence in learning. Higher variance is indicative of lower robustness, leading to lower confidence in learning.

 

In binning, thicker bins give higher robustness (lower variance, higher bias) while thinner bins do better on accuracy (lower bias, higher variance). It is therefore critical to create balanced bins, which do well on both accounts. In this pursuit, FICO® Model Builder’s interactive binning library helps via autobinner functionality with multiple specifications and validation of out-of-sample Weight of Evidence.
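The general binning-plus-Weight-of-Evidence recipe (a generic sketch, not Model Builder’s actual implementation) can be written in a few lines:

```python
import math
from collections import Counter

def equal_frequency_bins(values, n_bins):
    """Cut points that put roughly the same number of records in each bin."""
    ordered = sorted(values)
    return [ordered[len(ordered) * i // n_bins] for i in range(1, n_bins)]

def bin_index(value, cuts):
    """Index of the bin a value falls into, given ascending cut points."""
    return sum(value >= c for c in cuts)

def weight_of_evidence(values, is_bad, n_bins):
    """Per-bin WoE = ln(% of goods in bin / % of bads in bin).
    Lower WoE = riskier bin. Assumes each bin ends up containing
    at least one good and one bad record."""
    cuts = equal_frequency_bins(values, n_bins)
    goods, bads = Counter(), Counter()
    for value, bad in zip(values, is_bad):
        (bads if bad else goods)[bin_index(value, cuts)] += 1
    total_good, total_bad = sum(goods.values()), sum(bads.values())
    return {b: math.log((goods[b] / total_good) / (bads[b] / total_bad))
            for b in range(n_bins)}
```

Fewer, thicker bins average more records per WoE estimate (robustness); more, thinner bins track the signal more closely (accuracy) — exactly the tradeoff described above.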

 

In Figure 1 below, we analyze the target's relationship to years of credit history and see that applicants with shorter histories tend to default more often (shown as lower Weight of Evidence, or WoE, values) than applicants with longer histories. The label MV01 represents the sparse sub-population with no recorded history at the credit bureau. The Interactive Binning Library allows this bin to be grouped or weight-constrained to achieve the same (or perhaps an even lower) weight in the model than its neighbor, label [-,2).

 


Figure 1: Interactive Binning Library in FICO® Model Builder

 

As shown in the transition from Figure 2 to Figure 3 below, binning averages out stochastic noise, simplifies learning and fortifies it with a higher degree of confidence (relative to a continuous or highly granular noisy variable). It helps tremendously in escaping the traps set by noise in the data.

 

Figures 2 and 3: From a noisy, granular variable to binned weights


Weight Constraints

Subsequent to binning, it is helpful to impose constraints on the weight patterns of the bins. Mostly monotonic, or sometimes U-shaped as business logic dictates, these constraints prevent models from meandering off into needlessly noisy shapes. Such meandering can be highly complex and comes at the cost of a lower degree of confidence in learning.

 

Constraints guide pattern discovery and effectively keep the complexity of learning in check. Figure 4 below shows how applying monotonicity constraints to the bin weights in Figure 3, which had a volatile pattern to begin with, results in a smoother, monotonic weight pattern that will likely align better with business expectations.

 

Figure 4: Weights after applying constraints
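A standard way to enforce such a monotonic pattern is the pool-adjacent-violators algorithm (PAVA); the sketch below illustrates the idea, though Model Builder’s own constraint machinery may work differently:

```python
def monotone_increasing_fit(weights):
    """Pool Adjacent Violators (PAVA): the non-decreasing sequence
    closest (in least squares) to the raw bin weights. Any pair of
    adjacent bins that violates monotonicity is pooled to its mean."""
    blocks = [[w, 1] for w in weights]  # [block mean, number of bins pooled]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:  # violation: pool the two blocks
            m0, n0 = blocks[i]
            m1, n1 = blocks[i + 1]
            blocks[i] = [(m0 * n0 + m1 * n1) / (n0 + n1), n0 + n1]
            del blocks[i + 1]
            i = max(i - 1, 0)  # pooling may expose a new violation upstream
        else:
            i += 1
    fitted = []
    for mean, size in blocks:
        fitted.extend([mean] * size)
    return fitted
```

A volatile pattern like [1.0, 3.0, 2.0, 5.0] comes out as [1.0, 2.5, 2.5, 5.0]: the dip at the third bin is averaged away, which is the smoothing effect visible between Figures 3 and 4.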

 

Sound binning and constraining support simple, reliable and parsimonious learning. I hope it is no longer a mystery why FICO® Model Builder and the upcoming offerings in FICO® Analytic Workbench™ have such comprehensive, interactive binning libraries.

 

Has binning helped you make more robust models in a recent project? Want to learn more about binning in FICO’s analytic tools? Join our discussions about binning, machine learning, and related topics in the Analytics Community.

Abstract

After observing a steady upward trend in the US credit card delinquency rate at a time of historic lows for credit card delinquency, I set out to investigate account default rates by analyzing the performance of current accounts in the lower bands of FICO® Score and behavior score. The results point to an opportunity for lenders to shift to an early collections intervention strategy via improved customer management strategies. In this blog, I explain the study in three parts:

 

The Issue

US lenders are seeing an increase in credit card delinquencies over recent quarters, fueled in part by an overall rise in total household debt. From Q2 2017 to Q3 2017, total household debt in the US increased by $116 billion to $12.96 trillion. Debt levels increased across mortgages, student loans, auto loans, and credit cards; however, the largest percentage increase (3.1%) was in credit card debt. In Q3 2017, the Federal Reserve Board of Governors reported a credit card delinquency rate of 2.49% (seasonally adjusted) for the 100 largest banks, continuing a steady upward trend for the sixth consecutive quarter.

 

Given that these results are far from levels seen during the financial crisis of 2008-2009 (when 30+ day card delinquencies approached 7%), and given a strong economy with near-full employment, lenders aren’t ready to hit the panic button just yet. These historic lows for card delinquency have led some lenders to accept higher-risk applicants, contributing to the slight delinquency uptick.

 

Curious about this continual increase, I examined opportunities related to early collections intervention for a prominent US card lender with approximately 6 million accounts as a means to mitigate risk introduced via aggressive balance growth programs instituted over the prior two years.

 

Most card issuers today have early-stage collection efforts that initiate after the billing cycle for the highest risk delinquent customer segments, leaving a period of four to five days between a missed payment due date and the onset of customer contact strategies.

Intuitively, failure to accelerate customer contact in these high-risk segments represents a missed opportunity. This point of contact may ultimately be the tipping point in collecting the debt and keeping an account from going into default.

 

A relatively simple analytic exercise can help lenders identify those customers who not only have a very high likelihood of rolling delinquent subsequent to a missed payment due date, but also have a significantly elevated probability of reaching a severe stage of default in the following six to twelve months.

 

The Analysis

Leveraging a component of the FICO® Decision Management Suite, I studied performance for non-delinquent accounts that had drifted into lower bands of FICO® Score and behavior score. Specifically, my goal was to quantify account default rates, defined as 3+ cycles past due, bankrupt, or charged-off over the subsequent twelve months (the “bad” performance definition). The analysis showed that the lowest 10% of FICO Scores and behavior scores captures only 5% (*) of the total current account population, but identifies over 40% (**) of the future bad accounts originating from this current account segment.

 

table 1.png

(*) The sum of the “% of total” in the four highlighted upper-left quadrants: lowest 5% and lowest 6-10% (2% + 1% + 1% + 1% = 5%).

(**) The sum of the “% of total bads” in the four highlighted upper-left quadrants: lowest 5% and lowest 6-10% (24% + 8% + 5% + 3% = 40%).
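The capture-rate arithmetic behind a table like this can be sketched as follows, using entirely made-up score data (the function names are illustrative, not part of any FICO product):

```python
def capture_rates(scores, is_bad, cutoff_fraction=0.10):
    """Share of all accounts, and share of all future bads, that fall
    in the lowest `cutoff_fraction` of scores (lower score = riskier)."""
    ranked = sorted(zip(scores, is_bad))       # ascending by score
    n_low = int(len(ranked) * cutoff_fraction)
    low_band = ranked[:n_low]
    pct_of_total = n_low / len(ranked)
    pct_of_bads = sum(bad for _, bad in low_band) / sum(is_bad)
    return pct_of_total, pct_of_bads

# Toy portfolio: 100 accounts, bads concentrated at the low end of the score.
scores = list(range(1, 101))
is_bad = [1 if s <= 8 or s in (50, 90) else 0 for s in scores]
pct_total, pct_bads = capture_rates(scores, is_bad)
```

In this toy example the lowest 10% of scores holds 10% of accounts but 80% of the future bads, the same concentration effect the study found (5% of accounts, over 40% of bads).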

 

This study leveraged cycle-based data for current accounts (accounts in good standing) as opposed to isolating only those accounts that miss their required minimum payment, which would have resulted in considerably higher bad rates across defined score intervals.  However, the analysis validates the appropriateness and necessity of accelerating collections intervention – at the missed payment due date – within high-risk segments rather than initiating customer contact after the grace period.

 

The Results and Recommendations

Lenders will need to shift their business practices and consider the operational implications of these strategic decisions, and they must be vigilant about tracking performance shifts across key portfolio segments. Collections capacity planning and adequate staff training are critical to success.

 

Tactically, adaptive control technologies such as FICO’s TRIAD Customer Manager and Strategy Director are easily configured to enable “pre-delinquent” account treatment, which allows users to orient customer contact strategies around the payment due date. Notably, self-resolution strategies via mobile application, automated voice, SMS, and email have gained remarkable traction in recent years as lenders strive to match the engagement channel to customer preferences.

 

Both lenders and customers have benefitted from the proliferation of digital communications.  Lenders have observed significant improvements in delinquent roll rates and dramatically lower collections-related expenses while customer satisfaction has improved. However, customers who do not react to these automated payment reminders and self-resolution channels are often signaling a state of financial distress that warrants rapid live-agent intervention.

 

I’m happy to field questions about this study and its findings; comment here on the blog or contact Fair Isaac Advisors if you want to discuss innovative ways to create effective risk mitigation strategies that include early intervention tactics to abate delinquencies. You can also reach FICO experts and other users in the TRIAD User Forum.

My dishwasher tabs at home are promoted as a "three-phase system." They feature components to carry out three different tasks of cleaning dishes: softening the water (essential for the next two phases), dissolving grease (the main bit I, as a user, am interested in) and finally, some rinse aid (for a nice finish). While solving optimization problems is not a typical household chore, a Mixed-Integer Programming (MIP) solver, like FICO Xpress, can be viewed as a three-phase system.

 

The solution process consists of three distinct phases:

  1. Finding a first feasible solution: essential for the next two phases.
  2. Improving the solution quality: the main bit I, as a user, am interested in.
  3. Proving optimality: a nice finish.

 

In a recent research paper, together with our research collaboration partners from MODAL/ZIB, we show that the entire solving process can be improved by adapting the search strategy with respect to phase-specific aims using different control tunings. Since MIP is solved by a tree search algorithm, it comes as no surprise that the major tweaks were done for tree search controls. During the first phase, we used special branching and node-selection rules to quickly explore different regions of the search tree. The second phase used modified local search heuristics. During the third phase, we switched to pure depth-first-search, disabled all primal heuristics, and activated local cutting plane separation to aggressively prune the search tree.
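Conceptually, the phase-specific tuning described above can be pictured as a lookup of search-control profiles keyed by solving phase. The control names and values below are purely illustrative, not actual Xpress controls:

```python
# Illustrative sketch only: each solving phase gets its own profile of
# tree-search controls, mirroring the tweaks described in the paper.
PHASE_CONTROLS = {
    "feasibility": {   # phase 1: explore different regions of the tree quickly
        "node_selection": "best_estimate",
        "branching": "exploratory",
        "heuristic_effort": "high",
        "local_cuts": False,
    },
    "improvement": {   # phase 2: hunt for better incumbents
        "node_selection": "best_estimate",
        "branching": "default",
        "heuristic_effort": "local_search",
        "local_cuts": False,
    },
    "proof": {         # phase 3: prune the remaining tree aggressively
        "node_selection": "depth_first",
        "branching": "default",
        "heuristic_effort": "off",
        "local_cuts": True,
    },
}

def controls_for(incumbent_found, rank1_set_empty):
    """Pick the control profile for the current solving phase."""
    if not incumbent_found:
        phase = "feasibility"
    elif not rank1_set_empty:
        phase = "improvement"
    else:
        phase = "proof"
    return PHASE_CONTROLS[phase]
```

The second condition anticipates the rank-1 transition criterion discussed next: once no promising open nodes remain, the solver flips from improving solutions to proving optimality.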

 

Solving phases of a MIP solution process. Point t1 marks the transition from feasibility to improvement; t2 marks the transition from improvement to proof.

 

Additionally, we provided criteria to predict the transition between the individual phases. Just like a dishwasher, a MIP solver is a bit of a black box; sometimes it's hard to tell what's going on inside. In our case, the challenging bit was to predict the transition from the improvement phase to the proof phase. How do you know that the best solution found so far is optimal when you have not yet proven optimality? Well, you can't, but we found a good approximate criterion: we define the set of rank-1 nodes as the set of all nodes with a node estimate at least as good as the best-evaluated node at the same depth. The term rank-1 is motivated by ranking the nodes of a depth level of the search tree by an estimate of their expected solution quality, with the most promising nodes being ranked first. We trigger the transition to the last solving phase when the set of rank-1 nodes becomes empty, loosely speaking: when we start processing nodes of poor quality.
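The rank-1 criterion itself is simple to state in code. This is a sketch under the assumption of a minimization problem, where lower node estimates are better; the data structures are hypothetical, not the solver's internals:

```python
def rank1_nodes(open_nodes, best_estimate_at_depth):
    """Open nodes whose estimate is at least as good as the best estimate
    evaluated so far at the same depth (minimization: lower is better).
    `open_nodes` is a list of (depth, estimate) pairs;
    `best_estimate_at_depth` maps depth -> best estimate seen there."""
    return [(d, e) for d, e in open_nodes
            if e <= best_estimate_at_depth.get(d, float("inf"))]

def should_enter_proof_phase(open_nodes, best_estimate_at_depth):
    """Trigger the improvement-to-proof transition once the rank-1 set
    runs empty, i.e. only poor-quality nodes remain to be processed."""
    return not rank1_nodes(open_nodes, best_estimate_at_depth)
```

A depth with no evaluated node yet counts its open nodes as rank-1 (nothing better has been seen there), which keeps the criterion conservative about switching phases.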

 

The combination of this phase-transition criterion and the tuned controls for each phase led to an average speedup of 14% on hard MIP instances. We also found that when you are only interested in one of the phases, such as finding any feasible solution or finding good solutions without proving optimality, there is further room for improvement through control tuning. You can look forward to an upcoming blog post on our performance tuner.

 

If you are interested in the math behind all of this, the article has been published in Optimization Methods and Software. A preprint is available here. The work is based on results from Gregor Hendel's Master thesis, which was supervised by me, Timo Berthold. For the experiments, we used the academic software SCIP.

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

 

I hope you enjoyed the read. I have to go home now and do the dishes.

Abstract: Learn about FICO's latest innovations in prescriptive analytic software and solutions at our webinar on 2/14 at 11am PT / 2pm ET. REGISTER NOW!

 

According to Gartner, the prescriptive analytics software market will reach $1.57 billion by 2021. Despite this tremendous growth, most analytics programs fail to deliver business results. In one McKinsey survey, 86 percent of executives said their organizations were “only somewhat effective” at meeting the primary objective of their data and analytics programs.

                   

Why is that? What is the secret to success? FICO goes beyond experimentation and delivers the future of prescriptive analytics today. Our prescriptive analytics solutions have helped deliver hundreds of millions in incremental revenue and have helped power the world’s most efficient companies, such as Southwest, Shell, and Nestle.

 

Join us at 11am PT/2pm ET on February 14th to learn how FICO Xpress Insight helps you bridge the gap between analytic concepts and business value. Read data in any format from any source, bring your own machine learning, bring your own solver or use Xpress Solvers, collaborate with your business users, deploy decision support or automated solutions, and do it all in 75% less time.

 

For operations researchers and data scientists:

  • We will present OPEN and FREE Xpress Mosel, the mathematical modeling, programming, and analytic
    orchestration language, now available to everyone.
  • We will discuss how you can bring your own solver to deploy optimization solutions using the powerful Xpress Insight platform.
  • We will cover the OPEN Xpress Mosel native solver interface which allows solver developers to extend Xpress Mosel with your own solvers giving you an instant audience of thousands of users.

 

For business management and executives:

  • We will reveal a world in which machine learning and optimization techniques can be deployed into production
    in weeks, not months or years.
  • We will showcase a world in which changes are welcomed by your analytic experts and by IT. You
    can change your requirements and then implement them in minutes or hours instead of months.
  • You will see a world where you can explore new avenues to gain more efficiency. Your stakeholders will want to get together to collaborate because they can see how the solution benefits them as individuals and as an organization.

 

Register today!

 

Don’t miss our next webinar on 2/21/2018 on Pattern Discovery

Make sure to register for our final webinar on 2/28/2018 about our Forecasting Tool

 

Visit the FICO Xpress Optimization Community to learn more about Xpress Optimization, download free software, get support and talk to the experts!