At A Glance
Paragraph Block
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.


Heading 2
Heading 3
Heading 4
Heading 5
- This is a list
- Item 1
- Item 2
  - Sub list
  - Sub list 2
  - Sub list 3
- More list
- More list 2
- More list 3
  - More more
  - More more
This is the pull quote block: Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s.
Experian: This is the citation

| Table element | Table element | Table element |
| --- | --- | --- |
| my table | my table | my table |
| Table element | Table element | Table element |

Media Text Block
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
My Small H5 Title


By: Kennis Wong

In this blog, we have repeatedly emphasized the importance of a risk-based approach when it comes to fraud detection, and scoring and analytics are essentially the heart of that approach. However, unlike the rule-based approach, where users can easily understand the results (e.g. Was the S.S.N. reported deceased? Yes/No. Is the application address the same as the best address on the credit bureau? Yes/No), scores are generated in a black box, where the reason for the eventual score is not always apparent, even in a fraud database. Hence, more homework needs to be done when selecting and using a generic fraud score to make sure it satisfies your needs. Here are some basic questions you may want to ask yourself.

What do I want the score to predict? This may seem like a very basic question, but it warrants your consideration. Which kinds of fraud are you trying to detect in your fraud database: first-party fraud, third-party fraud, bust-out fraud, first payment default, never-pay, or a combination of these? These questions are particularly important when you are validating a fraud model. For example, if you only have third-party fraud tagged in your test file, a bust-out fraud model would not perform well against it, and the validation would be a waste of your time.

What data was used for model development? Was the score built on sub-prime credit card data, auto loan data, retail card data, or another fraud database? If you have a retail card portfolio, a score built with credit card data is not a definite deal breaker; it may still perform well for you. If the development data is too far off from your portfolio, though, you may not get good results. You also want to understand how many different portfolios were used for model development: if only one creditor's data was used, the score may not generalize to other portfolios.
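To make the validation point above concrete, here is a minimal sketch of checking a generic score against a tagged test file, broken out by fraud type. The DataFrame columns, the score threshold, and the data are illustrative assumptions, not part of any Experian product.

```python
import pandas as pd

def detection_rate_by_type(df: pd.DataFrame, threshold: float) -> pd.Series:
    """Share of records at or above the score threshold, per tagged fraud type."""
    flagged = df["score"] >= threshold
    return flagged.groupby(df["fraud_type"]).mean()

# Hypothetical test file. If only third-party fraud were tagged, a bust-out
# model would look weak here even if it is a strong bust-out detector,
# because the cases it is built to catch are simply missing from the file.
test = pd.DataFrame({
    "score":      [920, 880, 450, 300, 700, 210],
    "fraud_type": ["third_party", "third_party", "third_party",
                   "bust_out", "bust_out", "good"],
})
print(detection_rate_by_type(test, threshold=600))
```

Breaking performance out by fraud tag, rather than reporting one blended detection rate, is what reveals whether the score predicts the fraud types you actually care about.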

In my previous two blog postings, I've tried to briefly articulate some key elements of, and value propositions associated with, risk-based authentication. In this entry, I'd like to suggest some best practices to consider as you incorporate and maintain a risk-based authentication program.

1. Analytics. Since an authentication score is likely the primary decisioning element in any risk-based authentication strategy, it is critical that a best-in-class scoring model is chosen and validated to establish performance expectations. This initial analysis allows decisioning thresholds to be established and accept and referral volumes to be planned for operationally. Furthermore, it establishes benchmarks against which follow-on performance monitoring can be compared.

2. Targeted decisioning strategies. Applying unique, tailored decisioning strategies (incorporating scores and other high-risk or positive authentication results) to the various access channels to your business simply makes sense. Each access channel (call center, Web, face-to-face, etc.) comes with unique risks, available data, and varied opportunity to apply an authentication strategy that balances risk management, operational effectiveness, efficiency and cost, improved collections, and customer experience. Champion/challenger strategies can also be a great way to test newly devised strategies within a single channel without exposing your entire addressable market, and your business as a whole, to the risk.

3. Performance monitoring. It is critical that key metrics are established early in the risk-based authentication implementation process. Key metrics may include, but should not be limited to:

- actual vs. expected score distributions;
- actual vs. expected characteristic distributions;
- actual vs. expected question performance;
- volumes and exclusions;
- repeats and mean scores;
- actual vs. expected pass rates;
- accept vs. referral score distributions;
- trends in decision code distributions; and
- trends in decision matrix distributions.

Performance monitoring provides an opportunity to manage referral volumes, decision threshold changes, strategy configuration changes, auto-decisioning criteria, and pricing for risk-based authentication. (The first of these metrics is sketched in code after this post.)

4. Reporting. It likely goes without saying, but in order to apply the three best practices above, accurate, timely, and detailed reporting must be established around your authentication tools and results. Regardless of frequency, you should work with internal resources and your third-party service provider(s) early in your implementation process to ensure relevant reports are established and delivered.

In my next posting, I will discuss some thoughts about the future state of risk-based authentication.
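As promised above, here is a minimal sketch of monitoring the first metric, actual vs. expected score distributions. It uses the Population Stability Index (PSI), a common industry drift measure; the PSI itself, the bin count, and the rule-of-thumb thresholds are conventions I am assuming, not anything specified in this post or by Experian.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and current score samples."""
    # Bin edges from the baseline's percentiles, so each bin holds ~1/bins
    # of the expected population.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    expected_counts, _ = np.histogram(expected, edges)
    # Clip current scores into the baseline range so every record is counted.
    actual_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)
    expected_pct = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    actual_pct = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(650, 80, 10_000)  # expected (validation-time) scores
current = rng.normal(630, 80, 10_000)   # this month's scored population
print(f"PSI = {psi(baseline, current):.3f}")
# common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 investigate
```

A drifting PSI does not say the model is wrong, only that the scored population no longer looks like the one used to set thresholds, which is exactly the trigger for revisiting referral volumes and decision thresholds.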

In my last blog posting, I presented the foundational elements that enable risk-based authentication: data, detailed and granular results, analytics, and decisioning. The inherent value of risk-based authentication can be summarized as delivering a holistic assessment of a consumer and/or transaction, with the end goal of applying the right authentication and decisioning treatment at the right time. The opportunity to minimize fraud losses by using fraud analytics as part of your assessment is especially significant. What are some residual values of risk-based authentication?

1. Minimized fraud losses. Using fraud analytics and a more comprehensive view of a consumer identity (the good and the bad), in combination with consistent decisioning over time, will outperform simple binary rules and more subjective decisioning.

2. Improved consumer experience. By applying the right authentication and treatment at the right time, consumers are subjected to processes that are proportional to the risk associated with their identity profile. This means that lower-risk consumers are less likely to be put through more arduous courses of action, preserving a streamlined and often purely "behind the scenes" authentication process for the majority of consumers and potential consumers. In other words, you are saving the pain for the bad guys, and that can be a good thing.

3. Operational efficiencies. With a well-designed program, much of the decisioning can be done without human intervention or subjective contemplation. Score-driven policies afford businesses the opportunity to use automated authentication processes for the majority of their applicants or account management cases (a sketch of such a policy follows this post). Fewer human resources are required, which usually means lower costs; alternatively, it can mean the human resources you possess are focused on the applications or transactions that truly warrant such attention.

4. Measurable performance. Understanding the past and current performance of risk-based authentication policies is critical because it allows those policies to be adjusted over time, based on evolving fraud risks, resource constraints, approval rate pressures, and compliance requirements, to name a few. Given its importance, Experian recommends performance monitoring to clients using our authentication products.

In my next posting, I'll discuss some best practices associated with implementing and managing a risk-based authentication program.
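Here is the sketch referenced in item 3: a score-driven policy expressed as a banded decision table, so most consumers pass automatically and only the riskiest cases consume manual-review resources. The score bands, treatment names, and the convention that higher scores mean lower risk are all illustrative assumptions, not an Experian decisioning API.

```python
from dataclasses import dataclass

@dataclass
class AuthDecision:
    treatment: str   # what the consumer experiences
    automated: bool  # True if no human intervention is needed

def decide(auth_score: int) -> AuthDecision:
    """Map an authentication score (higher = lower risk) to a treatment."""
    if auth_score >= 700:
        return AuthDecision("pass_behind_the_scenes", automated=True)
    if auth_score >= 500:
        return AuthDecision("step_up_knowledge_questions", automated=True)
    if auth_score >= 300:
        return AuthDecision("document_verification", automated=True)
    return AuthDecision("manual_review_referral", automated=False)

for score in (810, 640, 410, 220):
    d = decide(score)
    print(score, d.treatment, "automated" if d.automated else "referred")
```

Keeping the bands in one table is what makes the policy measurable and adjustable: as monitoring shows drift in score distributions or referral volumes, the thresholds can be retuned without redesigning the process.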
In this article…
Lorem Ipsum has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.


