By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (divides predictive attributes into intervals) or continuous (the variable is modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally what statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation make up the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented will determine the timeliness and complexity of putting it into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model is critical for successful implementation and model audits.

Scorecard monitoring will need to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance, and decision management to ensure that the model is performing as expected over time.
If at any time performance varies from initial expectations, scorecard monitoring allows for immediate modifications to strategies. With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
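The scorecard performance statistics named above (KS and the Gini coefficient) can be made concrete with a small sketch. This is a minimal illustration with invented score samples, not Experian's methodology or any particular scorecard:

```python
# Illustrative only: KS and Gini computed from scores for "good" and
# "bad" accounts. All data below are made up for the example.

def ks_statistic(good_scores, bad_scores):
    """Maximum separation between the cumulative score distributions
    of good and bad accounts (higher = better separation)."""
    thresholds = sorted(set(good_scores) | set(bad_scores))
    best = 0.0
    for t in thresholds:
        cdf_good = sum(s <= t for s in good_scores) / len(good_scores)
        cdf_bad = sum(s <= t for s in bad_scores) / len(bad_scores)
        best = max(best, abs(cdf_bad - cdf_good))
    return best

def gini_coefficient(good_scores, bad_scores):
    """Gini = 2*AUC - 1, where AUC is the probability that a randomly
    chosen good account outscores a randomly chosen bad one."""
    wins = ties = 0
    for g in good_scores:
        for b in bad_scores:
            if g > b:
                wins += 1
            elif g == b:
                ties += 1
    auc = (wins + 0.5 * ties) / (len(good_scores) * len(bad_scores))
    return 2 * auc - 1

goods = [720, 680, 700, 750, 690]
bads = [600, 640, 660, 580, 700]
print(ks_statistic(goods, bads))
print(gini_coefficient(goods, bads))
```

In practice these statistics are computed on large validation samples (the hold-out or out-of-time samples mentioned above), and the brute-force pairwise Gini here would be replaced with a rank-based calculation.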
There were always questions about whether the August 1, 2009 deadline would stick. Well, the FTC has pushed the Red Flag Rules compliance deadline out to November 1, 2009 (from the previously extended August 1, 2009 deadline). This extension is in response to pressure from Congress – and, likely, from “lower risk” businesses questioning why they are covered under the Red Flag Rule in the first place (businesses such as those in healthcare, retailers, small businesses, etc.). Keep in mind that the FTC extension on enforcement of the Red Flag guidelines does not apply to address discrepancies on credit profiles, and that those discrepancies are expected to be worked TODAY. Risk management strategies are key to your success. To view the entire press release, visit: http://www.ftc.gov/opa/2009/07/redflag.shtm
By: Wendy Greenawalt

When consulting with lenders, we are frequently asked which credit attributes are most predictive and valuable when developing models and scorecards. Because we receive this request often, we recently decided to perform the arduous analysis required to determine whether there are material differences in the attribute makeup of a credit risk model based on the portfolio to which it is applied. The process we used to identify the most predictive attributes was a combination of art and science, for which our data experts drew upon their extensive experience with credit bureau data and knowledge obtained through engagements with clients from all types of industries. In addition, they applied an empirical process which provided statistical analysis and validation of the credit attributes included.

Next, we built credit risk models for a variety of portfolios, including bankcard, mortgage and auto, and compared the credit attributes included in each. What we found is that some attributes are inherently predictive regardless of the portfolio for which the model is being developed. However, when we took the analysis one step further, we identified that there can be significant differences in the account-level data when comparing different portfolio models. This discovery pointed to differences not just in the behavior captured with the attributes, but in the mix of account designations included in the model. For example, in an auto risk model we might see a mix of attributes from all trades, auto, installment and personal finance…as compared to a bankcard risk model, which may be comprised mainly of bankcard, mortgage, student loan and all trades. Additionally, the attribute granularity included in the models may be quite different, ranging from specific derogatory and public record data to high-level account balance or utilization characteristics.
What we concluded is that it is a valuable exercise to carefully analyze available data and consider all the possible credit attribute options in the model-building process – since substantial incremental lift in model performance can be gained from accounts and behavior that may not have been previously considered when assessing credit risk.
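One widely used statistic for ranking how predictive a binned credit attribute is — and a plausible ingredient of the kind of empirical process described above, though the source does not specify Experian's actual method — is information value (IV). The counts below are invented for illustration:

```python
# Illustrative only: information value (IV) for one binned attribute.
# Inputs are counts of good and bad accounts per attribute bin.
import math

def information_value(goods_per_bin, bads_per_bin):
    """IV = sum over bins of (%good - %bad) * ln(%good / %bad).
    Rough convention: IV > 0.3 is considered a strong attribute."""
    total_good = sum(goods_per_bin)
    total_bad = sum(bads_per_bin)
    iv = 0.0
    for g, b in zip(goods_per_bin, bads_per_bin):
        pg, pb = g / total_good, b / total_bad
        iv += (pg - pb) * math.log(pg / pb)
    return iv

# Hypothetical bins for an attribute such as "worst delinquency in 24 months".
print(information_value([400, 300, 300], [50, 100, 150]))
```

Running IV (or a similar measure) across every candidate attribute, portfolio by portfolio, is one way to surface the kind of portfolio-specific attribute differences discussed above.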
By: Tracy Bremmer

Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour mixture into pan and bake for 35 minutes. Cool before serving.

Model development, whether it is a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitor)! This blog will cover the first three steps in creating your model!

Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud?

Data collection and preparation evaluates what data sources are available and how best to incorporate these data elements within the model build process. Dependent variables (what you are trying to predict) and the type of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection.

Segmentation analysis provides the analytical basis to determine the optimal population splits for a suite of models to maximize the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores built on an individual population can provide lift over building just one single score.

Join us for our next blog, where we will cover the next three stages of model development: scorecard development; implementation/documentation; and scorecard monitoring.
By: Kari Michel

In my last blog I gave an overview of monitoring reports for new account acquisition decisions, listing three main categories that reports typically fall into: (1) population stability; (2) decision management; (3) scorecard performance. Today, I want to focus on population stability.

Applicant pools may change over time as a result of new marketing strategies, changes in product mix, pricing updates, competition, economic changes or a combination of these. Population stability reports identify acquisition trends and the degree to which the applicant pool has shifted over time, including the scorecard components driving the shift in custom credit scoring models. Population stability reports include:

• Actual versus expected score distribution
• Actual versus expected scorecard characteristics distributions (available with custom models)
• Mean applicant scores
• Volumes, approval and booking rates

These types of reports provide information to help monitor trends over time, rather than spikes from month to month. Understanding the trends allows one to be proactive in determining whether the shifts warrant changes to lending policies or cut-off scores. Population stability is only one area that needs to be monitored; in my next blog I will discuss decision management reports.
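One common statistic behind an actual-versus-expected score distribution comparison is the population stability index (PSI). The sketch below is a minimal illustration with invented bin percentages, not the report itself; the 0.10/0.25 thresholds are a common industry rule of thumb rather than a universal standard:

```python
# Illustrative only: population stability index (PSI) across score bins.
# Rule of thumb (not universal): PSI < 0.10 stable, 0.10-0.25 worth
# watching, > 0.25 a significant population shift.
import math

def psi(expected_pct, actual_pct):
    """Sum over bins of (actual - expected) * ln(actual / expected).
    Inputs are fractions of applicants per score bin, each summing to 1."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

# Share of applicants per score bin at development time vs. today (invented).
expected = [0.10, 0.20, 0.40, 0.20, 0.10]
actual = [0.15, 0.25, 0.35, 0.15, 0.10]

print(psi(expected, actual))
```

A PSI computed this way on each month's applicant pool, tracked over time, shows the kind of gradual trend (rather than a one-month spike) that the reports above are designed to surface.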
By: Wendy Greenawalt

On any given day, US credit bureaus contain consumer trade data on approximately four billion trades. Interpreting the data and defining how to categorize accounts and build attributes, models and decisioning tools can and does change over time, because the data reported to the bureaus by lenders and/or servicers also changes. Over the last few years, new data elements have enabled organizations to create attributes to identify very specific consumer behavior. The challenge for organizations is identifying what reporting changes have occurred and the value that the new consumer data can bring to decisioning.

For example, a new reporting standard was introduced nearly a decade ago which enabled lenders to report whether a trade was secured by money or real property. Before the change, lenders would report these accounts simply as secured trades, making it nearly impossible to determine whether the account was a home equity line of credit or a secured credit card. Since then, lender reporting practices have changed, and reports now clearly state that home equity lines of credit are secured by property, making it much easier to distinguish the two types of accounts from one another.

By taking advantage of the most current credit bureau account data, lenders can create attributes to capture new account types. They can also capture information (such as past due amounts, utilization, closed accounts, and derogatory information including foreclosure, charge-off and/or collection data) to make informed decisions across the customer life cycle.
Vintage analysis 101

The title of this edition, ‘The risk within the risk,’ is a testament to the amount of information that can be gleaned from an assessment of the performance of vintage analysis pools. Vintage analysis pools offer numerous perspectives on risk. They allow for a deep appreciation of the effects of loan maturation, and can also point toward the impact of external factors, such as changes in real estate prices, origination standards, and other macroeconomic factors, by highlighting measurable differences in vintage-to-vintage performance.

What is a vintage pool? By the Experian definition, vintage pools are created by taking a sample of all consumers who originated loans in a specific period, perhaps a certain quarter, and tracking the performance of those same consumers and loans through the life of each loan. Vintage pools can be analyzed for various characteristics, but three of the most relevant are:

* Vintage delinquency, which allows for an understanding of the repayment trends within each pool;
* Payoff trends, which reflect the pace at which pools are being repaid; and
* Charge-off curves, which provide insights into the charge-off rates of each pool.

The credit grade of each borrower within a vintage pool is extremely important in understanding the vintage characteristics over time, and credit scores are based on the status of the borrower just before the new loan was originated. This ensures that the new loan origination and the performance of the specific loan do not influence the borrower’s credit score. By using this method of pooling and scoring, each vintage segment contains the same group of loans over time, allowing for a valid comparison of vintage pools and the characteristics found within.

Once vintage pools have been defined and created, the possibilities for this data are numerous... Read more about our vintage analysis opportunities and our recent findings.
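As a rough sketch of how such pools can be assembled, the following groups loan observations by origination vintage and months on book and computes a delinquency rate per cell, so pools of different ages can be compared at the same point in their life cycle. The field names and data are invented for illustration and are not Experian's schema:

```python
# Illustrative only: vintage delinquency rates per (vintage, months-on-book).
from collections import defaultdict

# (loan_id, origination_quarter, months_on_book, is_30plus_delinquent)
observations = [
    ("A1", "2007Q1", 12, False), ("A2", "2007Q1", 12, True),
    ("A3", "2007Q1", 12, False), ("B1", "2008Q1", 12, True),
    ("B2", "2008Q1", 12, True),  ("B3", "2008Q1", 12, False),
]

def vintage_delinquency(obs):
    """Delinquency rate for each (vintage, months_on_book) cell."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [delinquent, total]
    for _, vintage, mob, delinquent in obs:
        cell = counts[(vintage, mob)]
        cell[0] += int(delinquent)
        cell[1] += 1
    return {cell: bad / total for cell, (bad, total) in counts.items()}

rates = vintage_delinquency(observations)
# Compare the two vintages at the same age (12 months on book).
print(rates[("2007Q1", 12)])
print(rates[("2008Q1", 12)])
```

Charge-off curves and payoff trends follow the same shape: replace the delinquency flag with a charge-off or payoff flag and plot each vintage's rate against months on book.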
In recent months, the topics of stress-testing and loss forecasting have been at the forefront of the international media and, more importantly, at the forefront of the minds of American banking executives. The increased involvement of the federal government in managing the balance sheets of the country’s largest banks has mixed implications for financial institutions in this country. On one hand, some banks have been building macroeconomic scenarios for years and have tried-and-tested methods for risk management and loss forecasting. On the other hand, in financial institutions where these practices were conducted in a less methodical manner, if at all, the scrutiny placed on capital adequacy forecasting has left many looking to quickly implement standards that will address regulatory concerns when their number is called.

For those clients to whom this process is new, or for those who do not possess a methodology that would withstand the examination of federal inspectors, the question seems to be: where do we begin? I think that before you can understand where you’re going, you must first understand where you are and where you have been. In this case, that means having a detailed understanding of key industry and peer benchmarks and your relative position to those benchmarks. Even simple benchmarking exercises provide answers to some very important questions:

• What is my risk profile versus that of the industry?
• How does the composition of my portfolio differ from that of my peers?
• How do my delinquencies compare to those of my peers? How has this position been changing?

A thorough understanding of one’s position in these challenging circumstances provides a more educated foundation upon which to build assessments of the future.
By: Kari Michel

Are you using scores to make new applicant decisions? Scoring models need to be monitored regularly to ensure a sound and successful lending program. Would you buy a car and run it for years without maintenance -- and expect it to run at peak performance? Of course not. Just like oil changes or tune-ups, there are several critical components that need to be addressed regarding your scoring models on a regular basis. Monitoring reports are essential for organizations to answer the following questions:

• Are we in compliance?
• How is our portfolio performing?
• Are we making the most effective use of our scores?

To understand how to improve your portfolio performance, you must have good monitoring reports. Typically, reports fall into one of three categories: (1) population stability, (2) decision management, (3) scorecard performance. Having the right information will allow you to monitor and validate your underwriting strategies and make adjustments when necessary. Additionally, that information will let you know whether your scorecards are still performing as expected. In my next blog, I will discuss the population stability report in more detail.
By: Tracy Bremmer

It’s not really all about the credit score. Now don’t get me wrong, a credit score is a very important tool used in credit decision making; however, there’s so much more that lenders use to say “accept” or “decline.” Many lenders segment their customer/prospect base prior to ever using the score. They use credit-related attributes such as “has this consumer had a bankruptcy in the last two years?” or “do they have an existing mortgage account?” to segment consumers into risk-tier buckets. Lenders also evaluate information from the application, such as income or number of years at current residence. These types of application attributes help the lender gain insight that is not typically evaluated in the traditional risk score.

Lenders who already have a relationship with a customer will look at their existing relationships with that customer prior to making a decision. They’ll look at things like payment history and current product mix to better understand whom best to cross-sell, up-sell or, in today’s economy, down-sell. In addition, many lenders will run the applicant through some type of fraud database to ensure the person really is who they say they are.

I like to think of the score as the center of the decision, with all of these other metrics as necessary inputs to the entire decision process. It’s like going out for an ice cream sundae: you start with the vanilla, but you need all the mix-ins to make it complete.
By: Kari Michel

What is your credit risk score? Is it 300, 700, 900 or something in between? In order to understand what it means, you need to know which score you are referencing. Lenders use many different scoring models to determine who qualifies for a loan and at what interest rate. For example, Experian has developed many scores, such as VantageScore®. Think of VantageScore® as just one of many credit scores available in the marketplace.

While all credit risk models have the same purpose -- to use credit information to assess risk -- each credit model is unique in that it has its own proprietary formula that combines and calculates various credit information from your credit report. Even if lenders used the same credit risk score, the interpretation of risk depends on the lender, and their lending policies and criteria may vary. Additionally, each credit risk model has its own score range. While one score range may be relatively similar to another, the meaning of a given score may not be the same. For example, a 640 in one score may not mean the same thing or carry the same credit risk as a 640 in another score. It is also possible for two different scores to represent the same level of risk. If you have a good credit score with one lender, you will likely have a good score with other lenders, even if the number is different.
As I’ve suggested in previous postings, we’ve certainly expected more clarifying language from the Red Flags Rule drafting agencies. Well, here is some pretty good information in the form of another FAQ document created by the Board of Governors of the Federal Reserve System (FRB), Federal Deposit Insurance Corporation (FDIC), National Credit Union Administration (NCUA), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and Federal Trade Commission (FTC). This is a great step forward in responding to many of the same Red Flag guidelines questions that we get from our clients, and I hope it’s not the last one we see. You can access the document via any of the agency websites, but for quick reference, here is the FDIC version: http://www.fdic.gov/news/news/press/2009/pr09088.html
We at Experian have been conducting a survey of visitors to our Red Flag guidelines microsite (www.experian.com/redflags). Some initial findings show that approximately 40 percent of those surveyed were “ready” by the original November 1, 2008 deadline. However, nearly 50 percent of the respondents found the Identity Theft Red Flag deadline extension(s) helpful. For those of you who have not taken the survey, please do so. We welcome your feedback.
One of the handful of mandatory elements in the Red Flag guidelines, which focus on FACTA Sections 114 and 315, is the implementation of Section 315. Section 315 provides guidance regarding reasonable policies and procedures that a user of consumer reports must employ when a consumer reporting agency sends the user a notice of address discrepancy. A couple of common questions and answers to get us started:

1. How do the credit reporting agencies display an address discrepancy? Each credit reporting agency displays an “address discrepancy indicator,” which typically is simply a code in a specified field. Each credit reporting agency uses a different indicator. Experian, for example, supplies an indicator for each displayable address that denotes a match or mismatch to the address supplied upon inquiry.

2. How do I “form a reasonable belief” that a credit report relates to the consumer for whom it was requested? Following procedures that you have implemented as part of your Customer Identification Program (CIP) under the USA PATRIOT Act can and should satisfy this requirement. You also may compare the credit report with information in your own records or from a third-party source, or you may verify information in the credit report with the consumer directly.

In my last posting, I discussed the value of a risk-based approach to Red Flag compliance. Foundational to that value is the ability to efficiently and effectively reconcile Red Flag conditions…including address discrepancies on a consumer credit report. Arguably, the biggest Red Flag problem we solve for our clients these days is in responding to identified and detected Red Flag conditions as part of their Identity Theft Prevention Program. There are many tools available that can detect Red Flag conditions. The best-in-class solutions, however, are those that not only detect these conditions, but allow for cost-effective and accurate reconciliation of high-risk conditions.
Remember, a Red Flag compliant program is one that identifies and detects high-risk conditions, responds to the presence of those conditions, and is updated over time as risk and business processes change. A recent Experian analysis of records containing an address discrepancy on the credit profile showed that the vast majority could be positively reconciled (a.k.a. authenticated) via the use of alternate data sources and scores. Layer consumer-facing knowledge-based authentication questions on top of a solid decisioning strategy using these elements, and nearly all of that potential referral volume can be passed through automated checks without ever landing in a manual referral queue or call center. Now that address discrepancies can no longer be ignored, this approach can save your operations team from having to add headcount to respond to this initially detected condition.