Recent findings on vintage analysis

Source: Experian-Oliver Wyman Market Intelligence Reports

Analyzing recent trends from vintages published in the Experian-Oliver Wyman Market Intelligence Reports, there are numerous insights that can be gleaned from just a cursory review of the results.

Mortgage vintage analysis trends

As noted in an earlier posting, recent mortgage vintage analyses show a broad range of behaviors between more recent vintages and older, more established vintages that were originated before the significant run-up of housing prices seen in the middle of the decade. The 30+ delinquency levels for the 2005, 2006 and 2007 mortgage vintages approach, and in two cases exceed, 10 percent of trades in the last 12 months of performance, and they spiked above historical trends almost immediately after origination. On the other end of the spectrum, the 2002 and 2003 vintages have barely approached or exceeded 5 percent over the last six or seven years.

Bankcard vintage analysis trends

As one would expect, the 30+ delinquency trends seen in bankcard vintage analysis are vastly different from those of the mortgage vintages. First, card delinquencies show a clear seasonal trend, with a more consistent yearly pattern evident in all vintages, a result of the revolving structure of the product. The most interesting trend within the card vintages is that the more recent vintages, 2005 to 2008, display higher 30+ delinquency levels, especially the Q2 2007 vintage, which is far and away the underperformer of the group.

Within each vintage pool, the analysis can extend into the risk distribution and details of the portfolio, further segmenting the pool by credit score, specifically VantageScore. Consider, for example, the segment limited to the highest VantageScore band; in other words, the loans in this pool are only for the most creditworthy customers at the time of origination. The noticeable trend is that while these consumers were largely resistant to deteriorating economic conditions, each vintage segment has seen a spike in the most recent 9 to 12 months. Given that these consumers tend to have the highest limits and lowest utilization of any VantageScore band, this trend encourages further account management consideration and raises flags about overall bankcard performance in coming months.

Even a basic review of vintage analysis pools, and the subsequent analysis opportunities that result from this data, can be extremely useful. Vintage analysis can add a new perspective to risk management, supplementing more established analysis techniques and further enhancing the ability to see the risk within the risk. Purchase a complete picture of consumer credit trends from Experian’s database of over 230 million consumers with the Market Intelligence Brief.
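To make the mechanics concrete, here is a minimal sketch of how 30+ delinquency curves by origination vintage might be computed from account-level performance data. The column names (orig_date, perf_date, days_past_due) are illustrative assumptions, not fields from the Market Intelligence Reports.

```python
# Hypothetical sketch: 30+ delinquency rate by origination vintage and account age.
import pandas as pd

def vintage_delinquency(perf: pd.DataFrame) -> pd.DataFrame:
    """Return the share of trades 30+ days past due, by vintage and months on book."""
    df = perf.copy()
    df["vintage"] = df["orig_date"].dt.year
    df["months_on_book"] = (
        (df["perf_date"].dt.year - df["orig_date"].dt.year) * 12
        + (df["perf_date"].dt.month - df["orig_date"].dt.month)
    )
    df["dlq_30_plus"] = df["days_past_due"] >= 30
    return (
        df.groupby(["vintage", "months_on_book"])["dlq_30_plus"]
          .mean()                      # share of trades that are 30+ DPD
          .unstack("vintage")          # one delinquency curve per vintage, aligned on age
    )
```

Plotting the resulting columns against months on book is what lets the 2005-2007 vintages be compared directly with the 2002-2003 vintages at the same point in their life cycle.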
As I wrote in my previous posting, a key Red Flags Rule challenge facing many institutions is managing the number of referrals generated from the detection of Red Flags conditions. The big-ticket item in referral generation is the address mismatch condition.

Identity Theft Prevention Program

I’ve blogged previously on the subject of risk-based authentication and risk-based pricing, so I won’t rehash that information. What I will suggest, however, is that institutions that now have an operational Identity Theft Prevention Program (if you don’t, I’d hurry up) should continue to explore the use of alternate data sources, analytics and additional authentication tools (such as knowledge-based authentication) as a way to detect Red Flags conditions and reconcile them all within the same real-time transaction.

Referral rates

Referral rates stemming from address mismatches (a key component of the Red Flags Rule high-risk conditions) can approach or even surpass 30 percent. That is a lot. The good news is that there are tools that employ additional data sources beyond a credit profile to “find” that positive address match. The use of alternate data sources can often clear the majority of these initial mismatches, leaving the remaining transactions for treatment with analytics and knowledge-based authentication within your Identity Theft Prevention Program. Whatever “referral management” process you have in place today, I’d suggest exploring risk-based authentication tools that keep the vast majority of those referrals out of the hands of live agents, and away from the need to put your customers through the authentication wringer. In the current marketplace, there are many services that allow you to avoid high referral costs and risks to the customer experience. Of course, we think ours are pretty good.
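For illustration only, here is a sketch of the referral waterfall described above: clear address mismatches with alternate data sources first, then route what remains to risk-based analytics and knowledge-based authentication (KBA). The data sources, score, and threshold are hypothetical placeholders, not any specific vendor's API.

```python
# Hypothetical referral triage: only the residue of the waterfall reaches live agents.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    input_address: str
    bureau_address: str
    alternate_addresses: tuple   # addresses sourced from non-credit (alternate) data

def route_referral(app: Applicant, fraud_score: float, score_cutoff: float = 0.8) -> str:
    if app.input_address == app.bureau_address:
        return "pass"            # no Red Flags address mismatch condition
    if app.input_address in app.alternate_addresses:
        return "pass"            # mismatch cleared by alternate data sources
    if fraud_score < score_cutoff:
        return "kba"             # step up to knowledge-based authentication
    return "manual_review"       # highest-risk remainder goes to a live agent
```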
By: Wendy Greenawalt

In the last installment of my three-part series dispelling credit attribute myths, we’ll discuss the myth that the lift achieved by utilizing new attributes is minimal, so it is not worth the effort of evaluating and/or implementing new credit attributes. Admittedly, the accuracy and efficiency of credit attributes can be hard to measure. Experian data experts are some of the best in the business, and in this edition we will discuss some of the methods Experian uses to evaluate attribute performance.

When considering any new attributes, the first method we use to validate statistical performance is a head-to-head statistical comparison. This method incorporates the KS (Kolmogorov-Smirnov) statistic, the Gini coefficient, the worst-scoring capture rate or the odds ratio when comparing two samples. Once completed, we implement an established standard process to measure value from different outcomes in an automated and consistent format. While this process may be time- and labor-intensive, the reward can be found in the financial savings obtained by identifying the right segments, including:

• Risk models that better identify “bad” accounts and minimize losses
• Marketing models that improve targeting while maximizing campaign dollars spent
• Collections models that enhance identification of recoverable accounts, leading to more recovered dollars with lower fixed costs

Credit attributes

Recently, Experian conducted a similar exercise and found that an improvement of 2 to 22 percent in risk prediction can be achieved through the implementation of new attributes. When these metrics are applied to a portfolio where several hundred additional bad accounts are now captured, the resulting savings can add up quickly (500 accounts with an average loss of $3,000 each = $1.5M in potential savings). These savings over time more than justify the cost of evaluating and implementing new credit attributes.
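As a minimal sketch of the head-to-head comparison described above, the snippet below computes KS and Gini for a score built on an existing attribute set versus one built on a new attribute set, using the same outcomes. The model objects and data are placeholders; the metrics themselves are standard.

```python
# Sketch: compare two attribute sets on the same sample using KS and Gini.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ks_and_gini(y_bad: np.ndarray, y_score: np.ndarray) -> tuple:
    """KS = max separation between cumulative bad/good rates; Gini = 2*AUC - 1."""
    fpr, tpr, _ = roc_curve(y_bad, y_score)
    ks = float(np.max(tpr - fpr))
    gini = 2.0 * roc_auc_score(y_bad, y_score) - 1.0
    return ks, gini

# Head-to-head on identical outcomes (y_bad), scores from each attribute set:
# ks_old, gini_old = ks_and_gini(y_bad, score_existing_attrs)
# ks_new, gini_new = ks_and_gini(y_bad, score_new_attrs)
# relative_lift = (gini_new - gini_old) / gini_old
```

The relative lift in Gini (or KS) is the kind of 2-to-22 percent improvement figure cited above, which can then be translated into captured bad accounts and dollar savings.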
By: Wendy Greenawalt

In the second installment of my three-part series dispelling credit attribute myths, we will discuss why attributes with similar descriptions are not always the same. The U.S. credit reporting bureaus are the most comprehensive in the world, and creating meaningful attributes requires extensive knowledge of the three credit bureaus’ data. Ensuring credit attributes are up to date and created by informed data experts, and leveraging complete bureau data, is essential to long-term strategic success.

To illustrate why attributes with similar names may not be the same, let’s discuss a basic attribute, such as “number of accounts paid satisfactory.” While the definition may at first seem straightforward, once the analysis begins there are many variables that must be considered before finalizing the definition, including:

• Should the attribute include trades that are currently satisfactory, or ever satisfactory?
• Do we include paid charge-offs, paid collections, etc.?
• Are there any date parameters?
• Are there any trades that should be excluded?
• Should accounts that have a final status of “paid” be included?

These types of questions, and many others, must be carefully identified and assessed to ensure the desired behavior is captured when creating credit attributes. Without careful attention to detail, a simple attribute definition could include behavior that was not intended, which could negatively impact the risk level associated with an organization’s portfolio. Our recommendation is to complete a detailed analysis up front and always validate the results to ensure the desired outcome is achieved. Incorporating this best practice will help ensure that the credit attributes created capture the behavior intended.
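To show how easily two "similar" attributes can diverge, here is an illustrative sketch that makes the definition choices explicit as parameters. The field names (current_status, ever_satisfactory, account_type, closed_date) are hypothetical, not actual bureau field names.

```python
# Sketch: "number of accounts paid satisfactory" with its definition choices exposed.
from typing import Optional
import pandas as pd

def count_paid_satisfactory(trades: pd.DataFrame,
                            ever_satisfactory: bool = False,
                            include_paid_chargeoffs: bool = False,
                            exclude_types: tuple = ("collection",),
                            closed_after: Optional[str] = None) -> int:
    df = trades[~trades["account_type"].isin(exclude_types)]
    if closed_after is not None:                        # optional date parameter
        df = df[df["closed_date"] >= pd.Timestamp(closed_after)]
    if ever_satisfactory:
        mask = df["ever_satisfactory"].astype(bool)     # ever rated satisfactory
    else:
        mask = df["current_status"].eq("satisfactory")  # currently satisfactory
    if not include_paid_chargeoffs:
        mask &= ~df["current_status"].eq("paid_chargeoff")
    return int(mask.sum())
```

Two teams calling different parameter combinations would both report a "number of accounts paid satisfactory," yet capture different behavior, which is exactly the risk the post describes.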
By: Wendy Greenawalt

This blog kicks off a three-part series exploring some common myths regarding credit attributes. Since Experian has relationships with thousands of organizations spanning multiple industries, we often get asked the same types of questions from clients of all sizes. One comment we hear frequently is that clients already have credit attributes in place, so there is little to no benefit in implementing a new attribute set. Our response is that while existing credit attributes may continue to be predictive, changes to the type of data available from the credit bureaus can provide benefits when evaluating consumer behavior.

To illustrate this point, let’s discuss a common problem that most lenders are facing today: collections. Delinquency and charge-off continue to increase, and many organizations are having difficulty determining the appropriate action to take on an account because consumer behavior has drastically changed. New codes and fields are now reported to the credit bureaus and can be used effectively to improve collection-related activities. Specifically, attributes can now be created to help identify consumers who are rebounding from previous account delinquencies. In addition, lenders can evaluate the number and outstanding balances of collection and other types of trades, along with the percentage of accounts that are delinquent and the specific types of accounts affected. This type of data helps an organization make collection decisions based on very granular account data while considering new consumer trends, such as strategic defaulters. Understanding all of the consumer variables enables an organization to decide whether an account should be allowed to self-cure, whether immediate action should be taken, or whether modification of the account terms should be contemplated. Incorporating new data sources and updating attributes on a regular basis allows lenders to react to market trends quickly by proactively managing strategies.
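Purely as an illustration of the "rebounding consumer" idea above, the sketch below flags consumers with a past serious delinquency but a recent clean stretch. The field names and thresholds are assumptions for the example, not actual bureau codes.

```python
# Hypothetical collections attribute: consumers rebounding from prior delinquency.
import pandas as pd

def rebound_flag(trades: pd.DataFrame, clean_months: int = 6) -> pd.Series:
    """True for consumers with a past 60+ DPD event but no recent delinquency."""
    by_consumer = trades.groupby("consumer_id")
    ever_60_plus = by_consumer["worst_delinquency"].max() >= 60
    months_clean = by_consumer["months_since_last_delinquency"].min()
    return ever_60_plus & (months_clean >= clean_months)
```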
--by Mike Sutton

In today’s collections environment, the challenges of meeting an organization’s financial objectives are more difficult than ever. Case volumes are higher, accounts are more difficult to collect and changing customer behaviors are rendering existing business models less effective. When responding to recent events, it is not uncommon for organizations to take what may seem to be the easiest path to success: simply hiring more staff. In the short term there may appear to be cash flow improvements, but in most cases this is not the most effective way to meet long-term business needs. As incremental staff is added to compensate for additional workloads, there is a point of diminishing return on investment, and that point can be difficult to identify until after the expenditures have been made. Additionally, there are almost always significant operational improvements that can be realized by introducing new technology, and the associated return on investment models often forecast very accurately.

So, where should a collections department consider investing to improve financial results? The best option may not be the obvious choice, and the mere thought of replacing the core collections system with modern technology can make the most seasoned collections professionals shudder. That said, let’s consider what has changed in recent years and explore why the replacement proposition is not nearly as difficult or costly as in the past.

Collection Management Software

The collections system software industry is on the brink of a technology evolution to modern and next-generation offerings. Legacy systems are typically inflexible and do not allow for an effective change management program. This handicap leaves collections departments unable to keep up with rapidly changing business objectives, a critical requirement for surviving these tough economic times. Today’s collections managers need to reduce operational costs while meeting these objectives: reducing losses, improving cash flow and promoting customer satisfaction (particularly among those who pose a greater lifetime profit opportunity). The next generation of collections software squarely addresses these business problems and provides significant improvement over legacy systems. Not only is this modern technology now available, but the return on investment models are extremely compelling and have been proven in markets where successful implementations have already occurred. As an example of modern collections technologies that can help streamline operations, check out the overview and brief demonstration at this link: www.experian.com/decision-analytics/tallyman-demo.html.
In my last entry, I talked about the challenges clients face in trying to meet multiple and complex regulatory requirements, such as the FACT Act’s Red Flags Rule and the USA Patriot Act. While these regulations serve both different and shared purposes, there are some common threads between the two:

1. You must consider the type of accounts and methods of account opening. The type of account offered (credit or deposit, consumer or business) as well as the method of opening (phone, online, or face-to-face) has a bearing on the steps you need to take and the process that will be established.

2. Use of consumer name, address, and identification number. The USA Patriot Act requires each of these, plus date of birth, to open a new account. Red Flags stops short of “requiring” these for new account openings, but it consistently illustrates the use of these Personally Identifiable Information (PII) elements as examples of reasonable procedures to detect red flags.

3. Establishing identity through non-documentary verification. Third-party information providers, such as a credit reporting agency or data broker, can be used to confirm identity, particularly in cases where the verification is not done in person.

Knowing what’s in common means you can look at where to leverage processes or tools to gain operational and cost efficiencies and reduce negative impact on the customer experience. For example, if you’re using any authentication products today to comply with the USA Patriot Act and/or minimize fraud losses, the information you collect from consumers and the authentication steps you are already taking may suffice for a large portion of your Red Flags Identity Theft Prevention Program. And if you’re considering fraud and compliance products for account opening or account management, it’s clear that you’ll want something flexible that not only provides identity verification, but also scales to the compliance programs you put in place and those that may be on the horizon.
--by Mike Sutton

I recently interviewed a number of Experian clients to determine how they believe their organizations and industry peers will prioritize collections process improvement over the next 24 months. Additional contributions were collected through written surveys. Here are several interesting observations.

Improve Collections survey results:

• Financial services professionals, in general, ranked “loss mitigation / risk management improvement” as the most critical area of focus. Credit unions were the exception within the financial services group and placed “customer relationship management / attrition control” at the top of their priority list.

• Healthcare providers ranked both “general delinquency management” and “improving cash flow / receivables” as their primary areas of focus for the foreseeable future.

• Almost all of the first-party contributors, across all industries polled, ranked “operational expense management / cost reductions” as very important or at least a high priority. This category was also rated the most critical by utilities.

• “External partner management (agencies, repo vendors and debt buyers)” also ranked high, but did not stand out on its own as a top priority for any particular group.

All of the categories mentioned above were considered important by every respondent, but the most urgent priorities were not consistent across industries.
While the FACT Act’s Red Flags Rule seems to capture all of the headlines these days, it’s just one of a number of compliance challenges that banks, credit unions, and a myriad of other institutions face on a daily basis. And meeting today’s regulatory requirements is more complicated than ever. Risk managers and compliance officers are asked to consider many questions, including:

1. Do FACTA Sections 114 and 315 apply to me?
2. What do I have to do to comply?
3. What impact does this have on the customer’s experience?
4. What is this going to cost me in terms of people and process?

Interpretation of the law or guideline, including whom it applies to and whom it does not, varies widely. Which types of businesses are subject to the Red Flags Rule? What is a “covered account”? If you’re not sure, you’re not alone; it’s a primary reason why the Federal Trade Commission (FTC) continues to postpone enforcement of the rule while this healthy debate continues. And by the way, FTC: it’s almost November 1st…aren’t we about due for another delay?

But this isn’t just about protecting consumers from identity theft and reducing fraud through an Identity Theft Prevention Program. The USA Patriot Act and “Know Your Customer” requirements have been around much longer, yet there are current challenges of interpretation and practical application when it comes to identifying customers and performing due diligence to deter fraud and money laundering. Since Customer Identification Programs require procedures based on the bank’s own “assessment of the relevant risks,” including the types of accounts opened, the methods of opening, and even the bank’s “size, location, and customer base,” it’s safe to say that each program will differ slightly, or even greatly.

So it’s clear that the lack of specificity in these regulations causes heartburn for those tasked with compliance. But are there some common themes and requirements across the two? The short answer is yes. In my next post, I’ll talk about the elements in common and how authentication products can play a part in addressing both.
In my previous three postings, I’ve covered basic principles that can define a risk-based authentication process, associated value propositions, and some best practices to consider. Finally, I’d like to briefly discuss some emerging informational elements and processes that enhance (or have already enhanced) the notion of risk-based authentication in the coming year. For simplicity, I’m boiling these down to three categories:

1. Enterprise Risk Management: As you’d imagine, this concept involves the creation of a real-time, cross-channel, enterprise-wide (cross-business-unit) view of a consumer and/or transaction. That sounds pretty good, right? Well, the challenge has been, and still remains, the cost of developing and implementing a data sharing and aggregation process that can accomplish this task. There is little doubt that operating in a more siloed environment limits the amount of available high-risk and/or positive authentication data associated with a consumer, and therefore limits the predictive value of tools that utilize such data. It is only a matter of time before we see more widespread implementation of systems designed to look at a single transaction, an initial application profile, previous authentication results, and other relationships a consumer may have within the same organization, and to assess all of this information in tandem. It’s simply a matter of the business case to do so, and the resources to carry it out.

2. Additional Intelligence: Beyond some of the data mentioned above, additional informational elements are emerging as useful in isolation or, even better, as factors in a holistic assessment of a consumer’s identity and risk profile. These include IP address vs. physical address comparisons, device ID or fingerprinting, and biometrics (such as voice verification). While these tools are being used and tested in many organizations and markets, there is still work to be done to strike the right balance as they are incorporated into an overall risk-based authentication process. False positives, cost and implementation challenges still keep widespread use of these tools from being a reality. That should change over time, and quickly.

3. Emerging Verification Techniques: Out-of-band authentication is the use of two separate channels, used simultaneously, to authenticate a customer: for example, using a phone call to verify a person’s identity while that person performs a Web transaction. Similarly, many institutions are finding success in initiating SMS texts as a means of customer notification and/or verification of monetary or non-monetary transactions. The ability to reach out to a consumer in a channel other than their transaction channel is a customer-friendly and cost-effective way to perform additional due diligence.
By: Kennis Wong

In Part 1 of Generic fraud score, we emphasized the importance of a risk-based approach when it comes to fraud detection. Here are some further questions you may want to consider.

What is the performance window?

When a model is built, it has a defined performance window, meaning the score predicts a certain outcome within that time period. For example, a traditional risk score may predict accounts that deteriorate within twenty-four months. That score may not perform well if your population typically worsens in two months. This question is particularly important when it relates to scoring your population. For example, if a bust-out score has a performance window of three months and you score your accounts at the time of acquisition, it would only catch accounts that bust out within the next three months. As a result, you should score your accounts during periodic account reviews, in addition to the time of acquisition, to ensure you catch all bust-outs.

Which accounts should I score?

While it’s typical for creditors to use a fraud score on every applicant at the time of acquisition, they may not score all of their accounts during review. For example, they may exclude inactive accounts or older accounts, assuming a long history means less likelihood of fraud. This mistake may be expensive. The typical bust-out behavior is for fraudsters to apply for cards well before they intend to bust out; this may be forty-eight months or more. So just when you think they are good and profitable customers, they can strike and leave you with serious injury. Make sure that your fraud database is updated and accurate. As a result, the recommended approach is to score your entire portfolio during account review.

How often do I validate the score?

The answer is very often: this may be monthly or quarterly. You want to understand whether the score is working for you. Do your actual results match the volume and risk projections? Shifts in your score distribution will almost certainly occur over time. To meet your objectives over the long run, continue to monitor performance and adjust cutoffs. Keep your fraud database updated at all times.
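As a minimal sketch of the kind of periodic validation described above, the snippet below compares the current period's score distribution against the distribution expected at deployment using a population stability index (PSI). The bin count and the alert threshold are common rules of thumb, stated here as assumptions.

```python
# Sketch: monthly/quarterly check of actual vs. expected score distribution via PSI.
import numpy as np

def psi(expected_scores: np.ndarray, actual_scores: np.ndarray, bins: int = 10) -> float:
    edges = np.percentile(expected_scores, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                   # catch out-of-range scores
    exp_pct = np.histogram(expected_scores, edges)[0] / len(expected_scores)
    act_pct = np.histogram(actual_scores, edges)[0] / len(actual_scores)
    exp_pct = np.clip(exp_pct, 1e-6, None)                  # avoid log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# A common (assumed) rule of thumb: PSI above roughly 0.25 signals a major shift
# in the scored population, worth revisiting cutoffs and risk projections.
```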
By: Kennis Wong

On this blog, we have repeatedly emphasized the importance of a risk-based approach when it comes to fraud detection, and scoring and analytics are essentially the heart of that approach. However, unlike the rule-based approach, where users can easily understand the results (i.e., was the S.S.N. reported deceased? Yes/No; is the application address the same as the best address on the credit bureau? Yes/No), scores are generated in a black box, where the reason for the eventual score is not always apparent. Hence, more homework needs to be done when selecting and using a generic fraud score to make sure it satisfies your needs. Here are some basic questions you may want to ask yourself.

What do I want the score to predict?

This may seem like a very basic question, but it does warrant your consideration. Are you trying to detect first-party fraud, third-party fraud, bust-out fraud, first payment default, never-pay, or a combination of these? These questions are particularly important when you are validating a fraud model. For example, if you only have third-party fraud tagged in your test file, a bust-out fraud model would not perform well, and the validation would just be a waste of your time.

What data was used for model development?

Other important questions include: was the score built on sub-prime credit card data, auto loan data, retail card data or some other fraud database? It’s not a definite deal breaker if, say, the score was built with credit card data and you have a retail card portfolio; it may still perform well for you. If the portfolios are too far apart, though, you may not get good results. You also want to understand the number of different portfolios used for model development. For example, if only one creditor’s data was used, the score may not generalize well to other portfolios.
In my previous two blog postings, I’ve tried to briefly articulate some key elements of, and value propositions associated with, risk-based authentication. In this entry, I’d like to suggest some best practices to consider as you incorporate and maintain a risk-based authentication program.

1. Analytics. Since an authentication score is likely the primary decisioning element in any risk-based authentication strategy, it is critical that a best-in-class scoring model is chosen and validated to establish performance expectations. This initial analysis will allow decisioning thresholds to be established and accept and referral volumes to be planned for operationally. Furthermore, it will establish benchmarks against which follow-on performance monitoring can be compared.

2. Targeted decisioning strategies. Applying unique, tailored decisioning strategies (incorporating scores and other high-risk or positive authentication results) to the various access channels to your business simply makes sense. Each access channel (call center, Web, face-to-face, etc.) comes with unique risks, available data, and varied opportunity to apply an authentication strategy that balances risk management, operational effectiveness, efficiency and cost, collections and customer experience. Champion/challenger strategies may also be a great way to test newly devised strategies within a single channel without exposing an entire addressable market, and your business as a whole, to risk.

3. Performance monitoring. It is critical that key metrics are established early in the risk-based authentication implementation process. Key metrics may include, but should not be limited to:

• actual vs. expected score distributions;
• actual vs. expected characteristic distributions;
• actual vs. expected question performance;
• volumes and exclusions;
• repeats and mean scores;
• actual vs. expected pass rates;
• accept vs. referral score distributions;
• trends in decision code distributions; and
• trends in decision matrix distributions.

Performance monitoring provides an opportunity to manage referral volumes, decision threshold changes, strategy configuration changes, auto-decisioning criteria and pricing for risk-based authentication.

4. Reporting. It likely goes without saying, but in order to apply the three best practices above, accurate, timely, and detailed reporting must be established around your authentication tools and results. Regardless of frequency, you should work with internal resources and your third-party service provider(s) early in your implementation process to ensure relevant reports are established and delivered.

In my next posting, I will discuss some thoughts about the future state of risk-based authentication.
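To make the monitoring idea concrete, here is an illustrative sketch that compares actual decision-code shares against the expectations set at implementation and raises simple alerts. The decision codes, expected shares and tolerance are hypothetical assumptions, not figures from any particular program.

```python
# Sketch: actual vs. expected decision-code distribution for a monitoring report.
from collections import Counter

EXPECTED = {"accept": 0.80, "refer": 0.15, "reject": 0.05}   # assumed go-live benchmarks
ALERT_DELTA = 0.03                                           # assumed tolerance per code

def monitor_decisions(decisions: list) -> dict:
    """Return per-code actual share, expected share, and an alert flag."""
    total = len(decisions)
    counts = Counter(decisions)
    report = {}
    for code, expected_share in EXPECTED.items():
        actual_share = counts.get(code, 0) / total
        report[code] = {
            "actual": round(actual_share, 3),
            "expected": expected_share,
            "alert": abs(actual_share - expected_share) > ALERT_DELTA,
        }
    return report

# Example: monitor_decisions(["accept"] * 760 + ["refer"] * 200 + ["reject"] * 40)
# flags the referral rate drifting above expectation, prompting a threshold review.
```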
In my last blog posting, I presented the foundational elements that enable risk-based authentication: data, detailed and granular results, analytics and decisioning. The inherent value of risk-based authentication can be summarized as delivering a holistic assessment of a consumer and/or transaction, with the end goal of applying the right authentication and decisioning treatment at the right time. The opportunity to minimize fraud losses by using fraud analytics as part of this assessment is especially significant. What are some residual values of risk-based authentication?

1. Minimized fraud losses. The use of fraud analytics and a more comprehensive view of a consumer identity (the good and the bad), in combination with consistent decisioning over time, will outperform simple binary rules and more subjective decisioning.

2. Improved consumer experience. By applying the right authentication and treatment at the right time, consumers are subjected to processes that are proportional to the risk associated with their identity profile. This means that lower-risk consumers are less likely to be put through more arduous courses of action, preserving a streamlined and often purely “behind the scenes” authentication process for the majority of consumers and potential consumers. In other words, you are saving the pain for the bad guys, and that can be a good thing.

3. Operational efficiencies. These follow from the implementation of a well-designed program. Much of the decisioning can be done without human intervention and subjective contemplation. Use of score-driven policies affords businesses the opportunity to use automated authentication processes for the majority of their applicants or account management cases. Fewer human resources are required, which usually means lower costs. Or it can mean the human resources you have are more appropriately focused on the applications or transactions that warrant such attention.

4. Measurable performance. This is critical because understanding the past and current performance of risk-based authentication policies allows those policies to be adjusted over time based on evolving fraud risks, resource constraints, approval rate pressures, and compliance requirements, to name a few. Given its importance, Experian recommends performance monitoring to clients using our authentication products.

In my next posting, I’ll discuss some best practices associated with implementing and managing a risk-based authentication program.
By: Kristan Keelan

Most financial institutions are well underway in complying with the FTC’s ID Theft Red Flags Rule by:

1. Identifying covered accounts
2. Determining which red flags need to be monitored
3. Implementing a risk-based approach

However, one area that seems to be overlooked in complying with the rule is commercial accounts. Did your institution include commercial accounts when identifying covered accounts? You’re not alone if you focused only on consumer accounts initially. Keep in mind that commercial credit and deposit accounts can also be covered accounts when there is a “reasonably foreseeable risk” of identity theft to customers or to safety and soundness. Start by determining whether there is a reasonably foreseeable risk of identity theft in a business or commercial account, especially in small business accounts. Consider the risk of identity theft presented by the methods used to open business accounts, the methods provided to access business accounts, and previous experiences with identity theft on business accounts. I encourage you to revisit your institution’s compliance program and review whether commercial accounts have been examined closely enough.