At A Glance
Paragraph Block: Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry’s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
New Text!


Heading 2
Heading 3
Heading 4
Heading 5
- This is a list
- Item 1
- Item 2
  - Sub list
  - Sub list 2
  - Sub list 3
- More list
- More list 2
- More list 3
  - More more
  - More more
This is the pull quote block: Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry’s standard dummy text ever since the 1500s.
Experian (This is the citation)

| Table element | Table element | Table element |
| --- | --- | --- |
| my table | my table | my table |
| Table element | Table element | Table element |

Media Text Block
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry’s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
My Small H5 Title


When reviewing offers for prospective clients, lenders often deal with a significant amount of missing information in assessing the outcomes of lending decisions, such as: Why did a consumer accept an offer with a competitor? What were the differentiating factors between other offers and my offer, such as their credit score trends? What happened to the consumers we declined? Did they perform as expected, or better than anticipated? What were their credit risk models?

While lenders can easily understand the implications of the loans they have offered and booked with consumers, they often have little information about two important groups of consumers:

1. Lost leads: consumers to whom they made an offer but did not book.
2. Proxy performance: consumers to whom financing was not offered, but who found financing elsewhere.

Performing a lost lead analysis on the applications approved and declined can provide considerable insight into the outcomes and credit performance of consumers who were not added to the lender’s portfolio. Lost lead analysis can also help answer key questions for each of these groups:

- How many of these consumers accepted credit elsewhere?
- What were their credit attributes, and what are the credit characteristics of the consumers we are not booking?
- Were these loans booked by one of my peers or another type of lender?
- What were the terms and conditions of these offers?
- What was the performance of the loans booked elsewhere, and who did they choose for loan origination?

Within each of these groups, further analysis can be conducted to provide lenders with actionable feedback on the implications of their lending policies, possibly identifying opportunities for changes that better fulfill lending objectives. Some key questions can be answered with this information: Are competitors offering longer repayment terms? Are peers offering lower interest rates to the same consumers? Are peers accepting lower-scoring consumers to increase market share?

The results of a lost lead analysis can either confirm that the competitive marketplace is behaving in a manner that matches the lender’s perspective, or shine a light on aspects of the market where policy changes may lead to superior results. In both circumstances, the information provided is invaluable in making the best decisions in today’s highly sensitive lending environment.
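To make the two-group segmentation concrete, here is a minimal Python sketch of how lost leads and proxy-performance consumers might be carved out of an application file and summarized. It is illustrative only: the DataFrame, its column names (approved, booked, opened_elsewhere, the APR fields), and the toy values are assumptions for this example, not fields from any actual bureau or Experian dataset.

```python
# Minimal sketch of a lost lead segmentation over a hypothetical
# application-level dataset; all column names and values are illustrative.
import pandas as pd

apps = pd.DataFrame({
    "app_id":           [1, 2, 3, 4, 5, 6],
    "approved":         [True, True, False, True, False, False],
    "booked":           [True, False, False, False, False, False],
    "opened_elsewhere": [False, True, False, True, True, False],
    "competitor_apr":   [None, 11.9, None, 9.5, 14.2, None],
    "our_apr":          [10.5, 12.9, None, 12.9, None, None],
})

# Group 1 -- lost leads: offered (approved) but never booked with us.
lost_leads = apps[apps["approved"] & ~apps["booked"]]

# Group 2 -- proxy performance: declined by us but financed elsewhere.
proxy = apps[~apps["approved"] & apps["opened_elsewhere"]]

# How many lost leads accepted credit elsewhere?
took_elsewhere = lost_leads["opened_elsewhere"].mean()
print(f"Lost leads financed elsewhere: {took_elsewhere:.0%}")

# Were peers undercutting our pricing on the same consumers?
priced = lost_leads.dropna(subset=["competitor_apr", "our_apr"])
undercut = (priced["competitor_apr"] < priced["our_apr"]).mean()
print(f"Lost leads where the competing APR was lower: {undercut:.0%}")
```

In practice the opened_elsewhere and competitor pricing fields would come from bureau trade data matched back to the lender’s applications; the same group-then-summarize pattern extends to terms, scores, and later loan performance.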

By: Kennis Wong

In this blog, we have repeatedly emphasized the importance of a risk-based approach to fraud detection, and scoring and analytics are the heart of that approach. Unlike a rule-based approach, where users can easily understand the results (Was the S.S.N. reported deceased? Yes/No. Is the application address the same as the best address on the credit bureau? Yes/No), scores are generated in a black box, and the reason for the eventual score is not always apparent. Hence, more homework needs to be done when selecting and using a generic fraud score to make sure it satisfies your needs. Here are some basic questions you may want to ask yourself.

What do I want the score to predict? This may seem like a very basic question, but it warrants your consideration. Are you trying to detect first-party fraud, third-party fraud, bust-out fraud, first payment default, never-pay, or a combination of these? These questions are particularly important when you are validating a fraud model. For example, if you only have third-party fraud tagged in your test file, a bust-out fraud model would not perform well, and validating it would just be a waste of your time.

What data was used for model development? Was the score built on sub-prime credit card data, auto loan data, retail card data, or another fraud database? It is not a definite deal breaker if the score was built with credit card data and you have a retail card portfolio; it may still perform well for you. If the underlying data are too far off from yours, though, you may not get good results. You also want to understand the number of different portfolios used for model development: if only one creditor’s data was used, the score may not generalize to other portfolios.
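As a concrete illustration of the validation point above, the sketch below checks what fraud types are actually tagged in a test file before measuring a score’s separation on the one type it claims to predict. The records, tag names, and score values are hypothetical, and the rank-based check is a generic stand-in for a full validation.

```python
# Sanity-check a validation file before judging a generic fraud score.
# All tags and scores here are made-up illustrations.
from collections import Counter

records = [
    {"score": 920, "tag": "third_party"},
    {"score": 310, "tag": "good"},
    {"score": 880, "tag": "third_party"},
    {"score": 450, "tag": "good"},
    {"score": 700, "tag": "bust_out"},
]

# Step 1: know which fraud types are tagged. Validating a bust-out model
# on a file that mostly contains third-party fraud wastes your time.
tag_mix = Counter(r["tag"] for r in records)
print("Tag composition:", dict(tag_mix))

# Step 2: measure separation only on the fraud type the score targets.
target = "third_party"
frauds = [r["score"] for r in records if r["tag"] == target]
goods  = [r["score"] for r in records if r["tag"] == "good"]

# Fraction of fraud/good pairs the score orders correctly
# (an unsmoothed pairwise AUC).
pairs = [(f, g) for f in frauds for g in goods]
auc = sum(f > g for f, g in pairs) / len(pairs)
print(f"Pairwise ordering accuracy for {target}: {auc:.2f}")
```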

In my previous two blog postings, I’ve tried to briefly articulate some key elements of, and value propositions associated with, risk-based authentication. In this entry, I’d like to suggest some best practices to consider as you incorporate and maintain a risk-based authentication program.

1. Analytics. Since an authentication score is likely the primary decisioning element in any risk-based authentication strategy, it is critical that a best-in-class scoring model is chosen and validated to establish performance expectations. This initial analysis allows decisioning thresholds to be established and accept and referral volumes to be planned for operationally. Furthermore, it permits benchmarks to be established against which follow-on performance monitoring can be compared.

2. Targeted decisioning strategies. Applying unique, tailored decisioning strategies (incorporating scores and other high-risk or positive authentication results) to the various access channels to your business simply makes sense. Each access channel (call center, Web, face-to-face, etc.) comes with unique risks, available data, and varied opportunity to apply an authentication strategy that balances risk management, operational effectiveness, efficiency and cost, improved collections, and customer experience. Champion/challenger strategies can also be a great way to test newly devised strategies within a single channel without exposing the entire addressable market, and your business as a whole, to risk.

3. Performance monitoring. It is critical that key metrics are established early in the risk-based authentication implementation process. Key metrics may include, but should not be limited to:

- actual vs. expected score distributions;
- actual vs. expected characteristic distributions;
- actual vs. expected question performance;
- volumes and exclusions;
- repeats and mean scores;
- actual vs. expected pass rates;
- accept vs. referral score distributions;
- trends in decision code distributions; and
- trends in decision matrix distributions.

Performance monitoring provides an opportunity to manage referral volumes, decision threshold changes, strategy configuration changes, auto-decisioning criteria, and pricing for risk-based authentication.

4. Reporting. It likely goes without saying, but in order to apply the three best practices above, accurate, timely, and detailed reporting must be established around your authentication tools and results. Regardless of frequency, you should work with internal resources and your third-party service provider(s) early in your implementation process to ensure relevant reports are established and delivered.

In my next posting, I will discuss some thoughts on the future state of risk-based authentication.
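As one concrete example of the first monitoring metric above (actual vs. expected score distributions), a population stability index (PSI) is a common way to quantify how far the scores you are seeing in production have drifted from the validation baseline. The sketch below is a minimal illustration; the score bands, shares, and rule-of-thumb thresholds are generic assumptions, not prescriptions from any specific authentication product.

```python
# Minimal PSI sketch for monitoring actual vs. expected score distributions.
# Bands and shares are illustrative only.
import math

def psi(expected_pct, actual_pct):
    """Sum of (actual - expected) * ln(actual / expected) over score bands."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

# Share of applicants per score band at validation (expected) vs. this month.
expected = [0.10, 0.20, 0.40, 0.20, 0.10]
actual   = [0.08, 0.18, 0.38, 0.24, 0.12]

shift = psi(expected, actual)
print(f"PSI = {shift:.3f}")

# Common rule of thumb: < 0.10 stable, 0.10-0.25 watch closely,
# > 0.25 investigate thresholds and strategy configuration.
if shift > 0.25:
    print("Score distribution has shifted; review referral volumes and thresholds.")
```

The same comparison pattern applies to the other metrics in the list, such as characteristic distributions and pass rates, making PSI-style monitoring a convenient common denominator for the reporting described in item 4.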


