
By: Kari Michel

How are your generic or custom models performing? As a result of the volatile economy, consumer behavior has changed significantly over the last several years, and that change may have affected the predictive power of your models. Credit models need to be monitored regularly and updated periodically in order to remain predictive.

Consider VantageScore, which was recently redeveloped using consumer behavioral data reflecting the volatile economic environment of the last few years. The development sample was compiled using two performance timeframes, 2006 – 2008 and 2007 – 2009, with each contributing 50% of the development sample. This is a unique approach, unlike traditional score development methodology, which typically uses a single two-year window. Developing a model on data from an extended window reduces the algorithm's sensitivity to highly volatile behavior in any single timeframe. The model is also more stable, because it is built on a broader range of consumer behaviors.

Validation results show that VantageScore 2.0 outperforms VantageScore 1.0 by 3% for new accounts and 2% for existing accounts overall.

To illustrate the differences seen in consumer behavior, the following table shows the consumer characteristics that contribute to a consumer's score and compares the characteristic contributions of VantageScore 2.0 and VantageScore 1.0:

Characteristic      VantageScore 2.0   VantageScore 1.0
Payment History           28%                32%
Utilization               23%                23%
Balances                   9%                15%
Length of Credit           8%                13%
Recent Credit             30%                10%
Available Credit           1%                 7%

As we would expect, payment history drives a large portion of the score: 28% for VantageScore 2.0 and 32% for VantageScore 1.0. What is interesting is that the recent credit contribution has increased significantly, to 30% from 10%. There is also a shift toward lower emphasis on balances (9% versus 15%) and on length of credit (8% versus 13%).
As you can see, consumer behavior changes over time, and it is imperative to monitor and validate your scorecards to assess whether they are producing the results you expect. If they are not, you may need to redevelop your model or switch to a newer version of a generic model.
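One common way to monitor a scorecard, as described above, is to compare the score distribution of your current population against the development sample. The sketch below computes the population stability index (PSI) for that purpose; the band percentages and the rule-of-thumb thresholds in the comments are illustrative assumptions, not figures from the article.

```python
import math

def psi(expected_pcts, actual_pcts):
    """Population Stability Index across score bands.

    Each argument is a list of population shares (summing to 1.0) for the
    same score bands: expected_pcts from the development sample,
    actual_pcts from the recent scored population."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pcts, actual_pcts))

# Illustrative score-band distributions (development vs. recent population).
dev    = [0.10, 0.20, 0.30, 0.25, 0.15]
recent = [0.12, 0.24, 0.28, 0.22, 0.14]

stability = psi(dev, recent)
# A common rule of thumb: PSI < 0.10 suggests a stable population,
# 0.10-0.25 warrants review, and > 0.25 signals a significant shift
# that may call for redevelopment or a newer model version.
print(round(stability, 4))
```

A rising PSI does not by itself mean the model is less predictive, but it is an early warning that the population has drifted from the one the scorecard was built on.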

By: Kennis Wong

As a fraud management professional, I am naturally surrounded by fraud prevention topics and other professionals in the field all the time. Financial, ecommerce, retail, telecommunication, government and other organizations are used to talking about performance, scoring models, ROI, false positives, operational efficiency, customer satisfaction trade-offs, loss provisioning, decisioning strategy and other sophisticated measures when it comes to fraud management. But when I bring up the topic of fraud outside of this circle, I am always surprised by how little the general public knows about an issue that is so critical to their financial health.

I met a woman at an event several weeks ago. After learning about my occupation, she told me her story about someone from XYZ credit card company calling her and asking for her Social Security number, date of birth and other personal identifying information. Only days after she gave out the information did she realize that things didn't seem right. She called the credit card company and got her credit card re-issued. But at the time I talked to her, she still didn't know enough to realize that the fraudster could now use her identity to open any new financial relationship under her name.

As long as consumers remain unaware of how to protect their identity information, businesses' identity theft prevention programs will not be complete, and identity fraud will occur as a result of this weak link. To address this vulnerability and minimize fraud, consumers need to be educated.

By: Andrew Gulledge

One of the quickest and easiest ways to reduce fraud in your portfolio is to incorporate question weighting into your out-of-wallet question strategy. Using knowledge based authentication without question weighting effectively assigns the same point value, 100 points, to each question. This is somewhat arbitrary (and a bit sloppy) when we know that certain questions consistently perform better than others. So if a fraudster gets 3 easier questions right and 1 harder question wrong, they will have an easier time passing your authentication process without question weighting. If, on the other hand, you adopt question weighting as part of your overall risk based authentication approach, that same fraudster would score much worse on the same KBA session. The 1 question they got wrong would have cost them a lot of points, and the 3 easier questions they got right wouldn't have given them as many points. Question weighting based on known fraud trends is more punitive for the fraudsters.

Let's say the easier questions were worth 50 points each, and the harder question was worth 150 points. Without question weighting, the fraudster would have scored 75% (300 out of 400 points). With question weighting, the fraudster would have scored 50% (150 out of 300 points). Your decisioning strategy might well have failed him with a score of 50, but passed him with a score of 75. Question weighting will often kick the fraudsters into the fail regions of your decisioning strategy, which is exactly what risk based authentication is all about.

Consult with your fraud account management representative to see if you are making the most of your KBA experience with the intelligent use of question weighting. It is a no-brainer way to improve your overall fraud prevention, even if you keep your overall pass rate the same. Question weighting is an easy way to squeeze more value out of your knowledge based authentication tool.
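The arithmetic in the example above can be sketched in a few lines of code. This is a minimal illustration, not a vendor implementation: the function name, the default 100-point weight, and the pass threshold are all assumptions chosen to reproduce the 75% vs. 50% figures from the article.

```python
def kba_score(answers, weights=None):
    """Score a KBA session as the percentage of available points earned.

    `answers` is a list of booleans, one per question (True = correct);
    `weights` gives each question's point value. With no weights supplied,
    every question counts equally (100 points), as in unweighted KBA."""
    if weights is None:
        weights = [100] * len(answers)
    earned = sum(w for correct, w in zip(answers, weights) if correct)
    return 100 * earned / sum(weights)

# The fraudster from the example: 3 easier questions right, 1 harder one wrong.
answers = [True, True, True, False]

unweighted = kba_score(answers)                     # 300 of 400 points -> 75.0
weighted   = kba_score(answers, [50, 50, 50, 150])  # 150 of 300 points -> 50.0

PASS_THRESHOLD = 60  # illustrative cutoff for the decisioning strategy
print(unweighted, unweighted >= PASS_THRESHOLD)  # passes unweighted
print(weighted, weighted >= PASS_THRESHOLD)      # fails weighted
```

With an illustrative cutoff of 60, the same session passes unweighted (75) and fails weighted (50), which is the shift into the fail region that the article describes.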