Tag: Decisioning

By: Andrew Gulledge

I hate this question. There are several reasons why the concept of an “average fraud rate” is elusive at best, and meaningless or misleading at worst.

Natural fraud rate versus strategy fraud rate

The natural fraud rate is the number of fraudulent attempts divided by overall attempts in a given period. Many companies don’t know their natural fraud rate, simply because measuring it accurately requires letting every single customer pass authentication regardless of fraud risk, and most folks aren’t willing to take that kind of fraud exposure for the sake of empirical purity. What most people do see, however, is their strategy fraud rate: the fraud rate of approved customers after some fraud prevention strategy has been applied. Obviously, if your fraud model offers any fraud detection at all, your strategy fraud rate will be somewhat lower than your natural fraud rate. And since there are as many fraud prevention strategies as the day is long, the concept of an “average fraud rate” breaks down somewhat.

How do you count frauds?

You can count frauds in terms of dollar loss or raw units. A dollar-based approach might be more appropriate when estimating the ROI of your overall authentication strategy. A unit-based approach might be more appropriate when considering the impact on victimized consumers, and the subsequent impact on your brand. If you use the unit-based approach, you can count frauds in terms of raw transactions or unique consumers. If one fraudster gets through your risk management strategy by coming through the system five times, a consumer-based fraud rate might be more appropriate; a transaction-based fraud rate would overrepresent this fraudster by a factor of five. Any fraud model based solely on transactional fraud tags would thus be biased toward fraudsters who game the system through repeat usage. Clearly, however, different folks count frauds differently, so the concept of an “average fraud rate” breaks down further, simply based on what makes up the numerator and the denominator.

Different industries. Different populations. Different uses.

Our authentication tools are used by companies from various industries. Would you expect the fraud rate of a utility company to be comparable to that of a money transfer business? What about online lending versus DDA account opening? Furthermore, different companies use different fraud prevention strategies with different risk buckets within their own portfolios. One company might put every customer at account opening through a knowledge based authentication session, while another might only bother asking the riskier customers a set of out of wallet questions. Some companies use authentication tools in the middle of the customer lifecycle, while others employ fraud detection strategies at account opening only. All of these permutations further complicate the notion of an “average fraud rate.”

Different decisioning strategies

Companies use an array of basic strategies governing their overall approach to fraud prevention. Some hard decline while others refer to a manual review queue. Some use a behind-the-scenes fraud risk score; others use knowledge based authentication questions; plenty use both. Some use decision overrides that will auto-fail a transaction when certain conditions are met. Some use question weighting, use limits, and session timeout thresholds. Some use all of the out of wallet questions; others use only a handful. There is a near-infinite range of configuration settings even for the same authentication tools from the same vendors, which further muddies the waters with regard to an “average fraud rate.” My next post will beat this thing to death a bit more.
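To make these distinctions concrete, here is a rough sketch in Python, using made-up attempt records, of how the same activity yields different “fraud rates” depending on whether you measure before or after a prevention strategy and whether you count dollars, transactions, or unique consumers:

```python
from collections import namedtuple

# Hypothetical attempt records: (consumer_id, amount, is_fraud, approved)
Attempt = namedtuple("Attempt", "consumer_id amount is_fraud approved")

attempts = [
    Attempt("c1", 500, False, True),
    Attempt("c2", 250, True,  False),  # fraud caught by the strategy
    Attempt("c3", 900, True,  True),   # fraud that slipped through
    Attempt("c3", 900, True,  True),   # same fraudster, repeat transaction
    Attempt("c4", 120, False, True),
]

# Natural fraud rate: frauds / all attempts, as if everyone were let through.
natural_rate = sum(a.is_fraud for a in attempts) / len(attempts)

# Strategy fraud rate: frauds among approved attempts only.
approved = [a for a in attempts if a.approved]
strategy_rate = sum(a.is_fraud for a in approved) / len(approved)

# Dollar-based vs transaction-based vs consumer-based counting of approved fraud.
dollar_rate = sum(a.amount for a in approved if a.is_fraud) / sum(a.amount for a in approved)
transaction_rate = strategy_rate
consumer_rate = (len({a.consumer_id for a in approved if a.is_fraud})
                 / len({a.consumer_id for a in approved}))

print(f"natural={natural_rate:.0%} strategy={strategy_rate:.0%} "
      f"dollar={dollar_rate:.0%} transaction={transaction_rate:.0%} "
      f"consumer={consumer_rate:.0%}")
```

With these fabricated numbers the repeat fraudster pushes the transaction-based rate well above the consumer-based rate, which is exactly the bias described above.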

Published: December 10, 2010 by Guest Contributor

By: Margarita Lim

Recently, the Social Security Administration (SSA) announced that it will change how Social Security numbers (SSNs) are issued, moving toward a random method of assigning SSNs. Social Security numbers are historically nine digits in length, composed of a three-digit number that represents a geographic area, a two-digit number referred to as a Group number and a four-digit serial number. You can go to http://www.ssa.gov/employer/randomization.html to learn more about this procedural change, but in summary, the random assignment of SSNs will affect:

• The geographic significance of the first three digits of the SSN, because they will no longer uniquely represent specific states
• The correlation of the Group number (the fourth and fifth digits of the SSN) to an issuance date range

What does this mean? It means that if you’re a business or agency that uses any type of authentication product in order to minimize fraud losses, one of the components used to verify a consumer’s identity – the Social Security number – will no longer be validated with respect to state and date. However, one of the main advantages of a risk-based approach to authentication is reduced over-reliance on any one identity element validation result. Validation of SSN issuance date and state, while useful in determining certain levels of risk, is but one of many attributes and conditions utilized in detailed results, robust analytics, and risk-based decisioning. It can also be argued that the randomization of SSN issuance, while somewhat reducing the intelligence we can glean from a specific number, may also prove beneficial to consumer protection and to overall confidence in the SSN issuance process.
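As a hypothetical sketch of what changes: under the legacy scheme the area and Group digits carried geographic and issuance-date signal that could be checked, and after randomization those checks stop being meaningful for newly issued SSNs. The area-to-state mapping below is a placeholder for illustration only, not the actual SSA table:

```python
def split_ssn(ssn: str):
    """Split a nine-digit SSN into its area, group, and serial components."""
    digits = ssn.replace("-", "")
    assert len(digits) == 9 and digits.isdigit()
    return digits[:3], digits[3:5], digits[5:]

# Placeholder mapping; the real pre-randomization area-number table is
# published by SSA, and these entries are illustrative only.
LEGACY_AREA_TO_STATE = {"001": "NH", "545": "CA"}

def legacy_geo_check(ssn: str, claimed_state: str) -> bool:
    """Legacy-style check: does the area number match the claimed state?
    Under randomized issuance this check no longer applies to new SSNs."""
    area, group, serial = split_ssn(ssn)
    return LEGACY_AREA_TO_STATE.get(area) == claimed_state

print(split_ssn("545-12-3456"))               # ('545', '12', '3456')
print(legacy_geo_check("545-12-3456", "CA"))  # True only under the legacy scheme
```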

Published: December 3, 2010 by Guest Contributor

By: Wendy Greenawalt

Large financial institutions have acknowledged for some time that taking a consumer-centric rather than product-centric approach can be a successful strategy for an organization. However, implementing such a strategy can be difficult, because organizations inherently want to promote a specific product for one reason or another. With the current economic unrest, organizations are looking for ways to improve customer loyalty among their most profitable and lowest-risk customers. They are also looking for ways to improve offers to consumers to provide segment-of-one decisioning while satisfying organizational goals.

Customer management, and specifically cross-sell or up-sell strategy, is a great example of where organizations can implement what I call “segment of one decisioning.” In essence, this refers to identifying the best possible decision or outcome for a specific consumer when given multiple offers, scenarios and objectives. Marketers strive to identify the strategies that maximize decision quality while minimizing costs. For many, this takes the form of models and complex strategy trees or spreadsheets to identify the ideal offering for a segment of consumers. While this approach is effective, algorithm-based decisioning processes exist that can help organizations identify the optimal strategies while considering all possible options at the consumer level. By leveraging an optimization tool, organizations can expand the decision process by considering all variables and all alternatives to find the most cost-effective, most-likely-to-be-successful strategies. By optimizing decisions, marketers can determine the ideal offer while quantifying the ROI and adhering to budgetary or other campaign constraints.

Many organizations are once again focusing on account growth and building strategies to implement in the near future. With a limited pool of qualified candidates and increased competition, it is more important than ever that each consumer offer be the best one available to increase response rates, achieve portfolio growth goals and build a profitable portfolio.
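One way to picture segment-of-one decisioning is as a constrained assignment problem: each consumer has several candidate offers, each with an expected value and a cost, and the goal is to pick one offer per consumer so that total expected value is maximized within a campaign budget. The sketch below is a brute-force toy with made-up offers and numbers, not the algorithm a real optimization engine uses:

```python
from itertools import product

# Hypothetical candidate offers per consumer: (offer_name, expected_value, cost)
candidates = {
    "consumer_a": [("gold_card", 120, 40), ("basic_card", 60, 10)],
    "consumer_b": [("auto_loan", 200, 70), ("basic_card", 50, 10)],
    "consumer_c": [("heloc", 300, 90), ("no_offer", 0, 0)],
}
budget = 120  # total campaign spend allowed

def best_assignment(candidates, budget):
    """Try every combination of one offer per consumer and keep the combination
    that maximizes total expected value while staying within budget."""
    consumers = list(candidates)
    best, best_value = None, float("-inf")
    for combo in product(*(candidates[c] for c in consumers)):
        cost = sum(offer[2] for offer in combo)
        value = sum(offer[1] for offer in combo)
        if cost <= budget and value > best_value:
            best, best_value = dict(zip(consumers, combo)), value
    return best, best_value

assignment, total_value = best_assignment(candidates, budget)
for consumer, (offer, value, cost) in assignment.items():
    print(consumer, "->", offer)
print("total expected value:", total_value)
```

Even in this toy, the best plan gives up two locally attractive offers to fund the single most valuable one, which is the kind of trade-off a spreadsheet built segment by segment tends to miss.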

Published: November 2, 2010 by Guest Contributor

The overarching ‘business driver’ in adopting a risk-based authentication strategy, particularly one founded in analytics and proven scores, is the predictive ‘lift’ associated with using scoring in place of a more binary rule set. While basic identity element verification checks, such as name, address, Social Security number, date of birth, and phone number, are important identity proofing treatments, they are not nearly as effective at predicting actual fraud risk when viewed in isolation. In other words, the presence of positive verification across multiple identity elements does not, alone, provide sufficient predictive value in determining fraud risk. Positive verification of identity elements may be achieved in customer access requests that are, in fact, fraudulent. Conversely, negative identity element verification results may be associated with both ‘true’ or ‘good’ customers as well as fraudulent ones. These false positive and false negative conditions lead to a lack of predictive value and confidence, as well as inefficient and unnecessary referral and out-sort volumes.

The most predictive authentication and fraud models are those that incorporate multiple data assets spanning traditionally used customer information categories, such as public records and demographic data, but that also utilize, when possible, credit history attributes and historic application and inquiry records. A risk-based fraud detection system allows institutions to make customer relationship and transactional decisions based not on a handful of rules or conditions in isolation, but on a holistic view of a customer’s identity and the predicted likelihood of associated identity theft, application fraud, or other fraud risk. To implement efficient and appropriate risk-based authentication procedures, comprehensive and broadly categorized data assets must be combined with targeted analytics and consistent decisioning policies to achieve a measurably effective balance between fraud detection and positive identity proofing results. The inherent value of a risk-based approach to authentication lies in the ability to strike such a balance not only in the current environment, but also as that environment and its underlying forces shift.
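A toy contrast between the two approaches described above: a binary all-elements-must-verify rule versus a weighted score compared against a threshold. The weights, extra signal, and cutoff are invented for illustration; production fraud scores come from statistical models over far more data sources:

```python
# Verification results for one access request (True = element verified).
checks = {"name": True, "address": True, "ssn": True, "dob": False, "phone": False}

# Binary rule set: pass only if every element verifies.
binary_pass = all(checks.values())

# Risk-based sketch: weight each element, add other (hypothetical) signals such
# as inquiry or application history, and compare against a score threshold.
weights = {"name": 10, "address": 15, "ssn": 30, "dob": 20, "phone": 10}
other_signals = 12  # made-up contribution from credit history / inquiry attributes
score = sum(w for element, w in weights.items() if checks[element]) + other_signals
risk_based_pass = score >= 60

print("binary:", binary_pass, "| score:", score, "| risk-based:", risk_based_pass)
```

Here the binary rule would out-sort a request that the weighted view still treats as low risk, which is the referral-volume problem the paragraph describes.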

Published: August 23, 2010 by Keir Breitenfeld

I recently attended a conference where Credit Union managers spoke of the many changes facing their industry in the wake of the real estate crisis and the economic decline that has impacted the US economy over the past couple of years. As these managers weighed in on the issues facing their businesses today, several themes began to emerge – tighter lending standards & risk management practices, increased regulatory scrutiny, and increased competition resulting in tighter margins for their portfolios. Across these issues, another major development was discussed – increased Credit Union mergers and acquisitions. As I considered the challenges facing these lenders, and the increase in M&A activity, it occurred to me that they might have a common bond with an unexpected group – American family farms. Overall, Credit Unions are facing the challenge of adding significant fixed costs (more sophisticated lending platforms & risk management processes) while dealing with increased competition from lenders like large banks and captive automotive lenders. This is not unlike the challenge faced by the family farm over the past few decades – small-volume operators having to absorb significant fixed costs from innovation & increased corporate competition, without the benefit of scale to spread those costs and maintain healthy margins. Without the benefit of scale, the family farm largely disappeared as large commercial operators acquired less-efficient (and less profitable) operators. Are Credit Unions entering a similar period of competitive disadvantage? It appears that the Credit Union model will have to adjust in the very near future to remain viable. With high infrastructure expectations, many credit unions will have to develop improved decisioning strategies, become more proficient in assessing credit risk – implementing risk-based pricing models – and execute more efficient operational processes in order to sustain themselves when the challenges of regulation and infrastructure favor economies of scale. Otherwise, they face the same uphill challenge the family farm did (and does): to compete and survive in a market that favors the high-volume lender.

Published: June 8, 2010 by Kelly Kent

By: Wendy Greenawalt

The auto industry has been hit hard by this Great Recession. Recently, however, some good news has emerged from the captive lenders, and the industry is beginning to rebound from the business challenges it has faced in the last few years. As such, many lenders are looking for ways to improve risk management and strategically grow their portfolios as the US economy begins to recover. Due to the economic decline, the pool of qualified consumers has shrunk, and competition for the best consumers has significantly increased. As a result, approval terms at the consumer level need to be more robust to increase loan origination and booking rates for new consumers. Leveraging optimized decisions is one way lenders can address regional pricing pressure and improve conversion rates within specific geographies. Specifically, lenders can perform a deep analysis of specific competitors such as captives, credit unions and banks to determine whether approved loans are being lost to specific competitor segments. Once the analysis is complete, auto lenders can leverage optimization software to create robust pricing, loan amount and term account strategies to compete effectively within specific geographic regions and grow profitable portfolio segments. Optimization software utilizes a mathematical decisioning approach to identify the ideal consumer-level decision that maximizes organizational goals while considering defined constraints. The consumer-level decisions can then be converted into a decision tree that can be deployed into current decisioning strategies to improve profitability and meet key business objectives over time.

Published: May 10, 2010 by Guest Contributor

By: Wendy Greenawalt

Optimization has become somewhat of a buzzword lately, used to solve all sorts of problems. This got me thinking: what does optimizing decisions really mean to me? In pondering the question, I decided to start at the beginning and think about what optimization stands for. For me, it is an unbiased, mathematical way to determine the most advantageous solution to a problem given all the options and variables. In its simplest form, optimization is a tool that synthesizes data and can be applied to everyday problems, such as determining the best route to take when running errands. Everyone is pressed for time these days, and finding a few extra minutes or dollars left in our bank account at the end of the month is appealing. The first step in determining my ideal route was to identify the different route options, including toll roads, factoring in the total miles driven, travel time and cost associated with each option. In addition, I incorporated limitations: required stops, avoid Main Street, don’t visit the grocery store before lunch, and be back home as quickly as possible. Optimization is a way to take all of these limitations and objectives and simultaneously compare all possible combinations and outcomes to determine the option that best meets the goal, which in this case was to be home as quickly as possible. While this is by its nature a very simple example, optimizing decisions can be applied at home and in business in very imaginative and effective ways. Business is catching on, and optimization is finding its way into more and more organizations to save time and money and provide a competitive advantage. I encourage all of you to think about optimization in a new way and explore the opportunities where it can be applied to provide improvements over business as usual, as well as to improve your quality of life.
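The errand-route example can be written down almost literally: enumerate the candidate routes, discard those that violate the stated constraints, and keep the one with the lowest total travel time. The locations, travel times, and the way the “no grocery store first” rule is encoded below are all invented for illustration:

```python
from itertools import permutations

# Hypothetical travel times in minutes between locations (symmetric, simplified).
minutes = {
    ("home", "bank"): 10, ("home", "grocery"): 15, ("home", "post_office"): 12,
    ("bank", "grocery"): 8, ("bank", "post_office"): 6, ("grocery", "post_office"): 9,
}

def leg(a, b):
    return minutes.get((a, b)) or minutes[(b, a)]

stops = ["bank", "grocery", "post_office"]  # required stops

def route_time(order):
    path = ["home", *order, "home"]  # must start and end at home
    return sum(leg(a, b) for a, b in zip(path, path[1:]))

def allowed(order):
    # Stand-in for "don't visit the grocery store before lunch": not the first stop.
    return order[0] != "grocery"

best = min((o for o in permutations(stops) if allowed(o)), key=route_time)
print("best route:", " -> ".join(["home", *best, "home"]), "|", route_time(best), "minutes")
```

Real decisioning problems swap the handful of stops for millions of accounts and the brute-force search for mathematical optimization, but the shape of the problem is the same: objectives, constraints, and a search over the alternatives.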

Published: April 20, 2010 by Guest Contributor

By: Wendy Greenawalt

In my last few blogs, I have discussed how optimization can be leveraged to make improved decisions across an organization while considering the impact that optimizing decisions has on organizational profits, costs or other business metrics. In this entry, I would like to discuss how optimization can be used to improve decisions at the point of acquisition while minimizing costs. Determining the right account terms at inception is increasingly important due to recent regulatory legislation such as the Credit CARD Act. Doing so plays a role in assessing credit risk, managing the relationship, and increasing share of wallet. These regulations have established guidelines specific to consumer age, verification of income, teaser rates and interest rate increases. Complying with them will require changes to existing processes and the creation of new toolsets to ensure organizations adhere to the guidelines. The new regulations will not only increase the costs associated with acquiring new customers, but also affect long-term revenue and value, as changes in account terms will have to be carefully considered. The cost of on-boarding and servicing individual accounts continues to escalate while internal resources remain flat. Because of this, organizations of all sizes are looking for ways to improve efficiency and decisions while minimizing costs. Optimizing decisions is an ideal solution to this problem.

Optimized strategy trees (trees that encode optimized decisioning strategies) can be easily implemented into current processes to ensure lending decisions adhere to organizational revenue, growth or cost objectives as well as regulatory requirements. Optimized strategy trees enable organizations to create executable strategies that provide ongoing decisions based upon optimization conducted at the consumer level. They outperform manually created trees because they are built using sophisticated mathematical analysis and ensure organizational objectives are adhered to. In addition, an organization can quantify the expected ROI of its decisioning strategies and validate them before implementation. This type of insight is not available without a sophisticated optimization software application. By implementing optimized strategy trees, organizations can minimize the volume of accounts that must be manually reviewed, which results in lower resource costs. In addition, account terms are determined based on organizational priorities, leading to increased revenue, retention and profitability.
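One way to read “optimized strategy trees”: the optimization run produces an ideal decision for each consumer, and a tree is then fit to those consumer-level decisions so the policy can be deployed as a small set of executable rules. A minimal sketch on fabricated data, using scikit-learn as a stand-in (the post does not name any particular tool):

```python
# pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier, export_text

# Fabricated consumer attributes: [credit_score, income_in_thousands]
X = [[720, 95], [640, 40], [690, 60], [580, 35], [760, 120], [610, 45]]
# Hypothetical optimal account-terms decision per consumer, as an optimization
# run might produce (simply made up here): which pricing tier to offer.
y = ["low_apr", "standard_apr", "standard_apr", "decline", "low_apr", "decline"]

# Fit a shallow tree to the consumer-level optimal decisions so the strategy
# can be executed as a few readable rules inside existing decisioning systems.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["credit_score", "income_k"]))
```

The printed rules are what gets deployed; the consumer-level optimization that generated the labels is where the mathematical heavy lifting happens.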

Published: April 5, 2010 by Guest Contributor

By: Wendy Greenawalt

The economy has changed drastically in the last few years, and most organizations have had to reduce costs across their businesses to retain profits. Determining the appropriate cost-cutting measures requires careful consideration of trade-offs while weighing short- and long-term organizational priorities. Too often, cost reduction decisions are driven by dynamic market conditions that demand quick decision-making. As a result, decisions are made without a sound understanding of their true impact on organizational objectives. Optimization can be used for virtually any business problem and provides decisions based on complex mathematics. Whether you are making decisions about outsourcing versus staffing, internal versus external project development, or cost-saving opportunities within a specific business unit, optimization can be applied. While some analytical requirements exist to obtain the highest business metric improvements, most organizations already have the data required to take full advantage of optimization technology. If you are using predictive models and credit attributes, and have multiple actions that can be taken on an individual consumer, then your organization can most likely benefit from optimizing decisions. In my next few blogs, I will discuss how optimizing decisions can be used to create better strategies across an organization, whether your focus is marketing, risk, customer management or collections.

Published: February 19, 2010 by Guest Contributor

Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance?

If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics like how many records I have in my database with name and address populated. And how many addresses do I typically have for consumers? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do.

With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and Social Security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me.

Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old; what are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the more sexy aspects of fraud detection (such as predictive scoring, out of wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
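Coverage, pin rate, and match rate all boil down to simple ratios over your reference data. Here is a rough sketch, with fabricated records rather than any real database layout, of how they might be measured:

```python
# Fabricated reference database keyed by SSN, with known names and addresses.
database = {
    "111-11-1111": {"name": "Ann Smith", "addresses": {"12 Oak St", "9 Elm Ave"}},
    "222-22-2222": {"name": "Bob Jones", "addresses": {"4 Pine Rd"}},
}

# Incoming identity requests to be authenticated.
requests = [
    {"ssn": "111-11-1111", "name": "Ann Smith", "address": "9 Elm Ave"},
    {"ssn": "222-22-2222", "name": "Bob Jones", "address": "77 Maple Ct"},  # stale address on file
    {"ssn": "333-33-3333", "name": "Cal Reyes", "address": "1 Main St"},    # not in the database
]

# Pin rate: how often the input data locates a consumer at all.
pinned = [r for r in requests if r["ssn"] in database]
pin_rate = len(pinned) / len(requests)

# Match rate: of the pinned consumers, how often name and address both match.
matched = [r for r in pinned
           if r["name"] == database[r["ssn"]]["name"]
           and r["address"] in database[r["ssn"]]["addresses"]]
match_rate = len(matched) / len(pinned)

print(f"pin rate: {pin_rate:.0%}, name/address match rate: {match_rate:.0%}")
```

The second request fails only because the reference address is out of date, which is exactly how stale data ends up punishing a good consumer.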

Published: January 20, 2010 by Guest Contributor

To calculate the expected business benefits of making an improvement to your decisioning strategies, you must first identify and prioritize the key metrics you are trying to positively impact. For example, if one of your key business objectives is improved enterprise risk management, then some of the key metrics you seek to impact, in order to effectively address changes in credit score trends, could include reducing net credit losses through improved credit risk modeling and scorecard monitoring. Assessing credit risk is a key element of enterprise risk management and can be addressed as part of your application risk management processes as well as other decisioning strategies applied at different points in the customer lifecycle. In working with our clients, Experian has identified 15 key metrics that can be positively impacted through optimizing decisions. As you review the list of metrics below, identify those that are most important to your organization.

• Approval rates
• Booking or activation rates
• Revenue
• Customer net present value
• 30/60/90-day delinquencies
• Average charge-off amount
• Average recovery amount
• Manual review rates
• Annual application volume
• Charge-offs (bad debt & fraud)
• Avg. cost per dollar collected
• Average amount collected
• Annual recoveries
• Regulatory compliance
• Churn or attrition

Based on Experian’s extensive experience working with clients around the world to achieve positive business results through optimizing decisions, you can expect between a 10 percent and 15 percent improvement in any of these metrics through the improved use of data, analytics and decision management software. The initial high-level business benefit calculation, therefore, is quite important and straightforward. As an example, assume your current approval rate for vehicle loans is 65 percent, the average value of an approved application is $200 and your volume is 75,000 applications per year. Keeping all else equal, a 10 percent relative improvement in your approval rate (from 65 percent to roughly 72 percent) would lift the total annual value of approved applications from roughly $9.75 million to roughly $10.7 million ($200 x 75,000 x 0.65 x 1.1), an incremental benefit of roughly $975,000 per year. To prioritize your business improvement efforts, you’ll want to calculate expected business benefits across a number of key metrics and then focus on those that will deliver the greatest value to your organization.
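The arithmetic in that example can be reproduced directly. A minimal sketch in Python, using only the figures quoted above, that separates the post-improvement total from the incremental gain:

```python
approval_rate = 0.65       # current approval rate
value_per_approval = 200   # average value of an approved application, in dollars
annual_volume = 75_000     # applications per year
improvement = 0.10         # 10 percent relative improvement (0.65 -> 0.715, ~72%)

current_value = value_per_approval * annual_volume * approval_rate
improved_value = current_value * (1 + improvement)   # ~$10.7M total per year
incremental_value = improved_value - current_value   # ~$975K incremental per year

print(f"current: ${current_value:,.0f}  improved: ${improved_value:,.0f}  "
      f"incremental: ${incremental_value:,.0f}")
```

The same three-line calculation can be repeated for each metric on the list to build the prioritized view described above.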

Published: January 14, 2010 by Roger Ahern

By: Wendy Greenawalt

Given the current volatile market conditions and rising unemployment rates, no industry is immune from delinquent accounts. However, recent reports have shown a shift in consumer trends and attitudes related to cellular phones. For many consumers, a cell phone is an essential tool for business and personal use, and staying connected is a very high priority. Given this, many consumers pay their cellular bill before other obligations, even if they represent a poor bank credit risk. Even with this trend, cellular providers are not immune from delinquent accounts or from having to determine the right course of action to improve collection rates. By applying optimization technology to account collection decisions, cellular providers can ensure that all variables are considered across the multiple contact options available. Unlike other types of services, cellular providers have numerous options available when attempting to collect on outstanding accounts. This, however, poses its own challenges, because collectors must determine the ideal method and timing for collection attempts while retaining the consumers who will be profitable in the long term. Optimized decisions can consider all contact methods, such as text, inbound/outbound calls, disconnect, service limitation, timing and diversion of calls, while at the same time accounting for constraints such as likelihood of curing, historical consumer behavior (including credit score trends) and resource costs and limitations. Since the cellular industry is one of the most competitive businesses, it is imperative that providers take advantage of every tool that can improve collection decisions to drive revenue and retention. An optimized strategy tree can be easily implemented into current collection processes and provide significant improvement over existing approaches.
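As a rough sketch of the trade-off described above: for each delinquent account, compare the expected net recovery of the available treatments, and upgrade accounts to the more expensive treatment only where the incremental gain is largest and capacity allows. The cure probabilities, costs, and call-capacity limit below are all invented, and a real engine would consider many more treatments and constraints:

```python
# Hypothetical delinquent accounts: balance and estimated cure probability
# under each available treatment (text reminder vs. outbound call).
accounts = {
    "acct_1": {"balance": 400, "cure": {"text": 0.20, "call": 0.45}},
    "acct_2": {"balance": 900, "cure": {"text": 0.10, "call": 0.35}},
    "acct_3": {"balance": 150, "cure": {"text": 0.30, "call": 0.40}},
}
treatment_cost = {"text": 1, "call": 12}
call_capacity = 1  # limited collector resources: only one outbound call available

def net_recovery(acct, treatment):
    info = accounts[acct]
    return info["cure"][treatment] * info["balance"] - treatment_cost[treatment]

# Start every account on the cheap treatment, then upgrade to a call where the
# incremental expected recovery is largest, until call capacity runs out.
plan = {acct: "text" for acct in accounts}
by_uplift = sorted(accounts,
                   key=lambda a: net_recovery(a, "call") - net_recovery(a, "text"),
                   reverse=True)
for acct in by_uplift[:call_capacity]:
    if net_recovery(acct, "call") > net_recovery(acct, "text"):
        plan[acct] = "call"

expected = sum(net_recovery(a, t) for a, t in plan.items())
print(plan, f"expected net recovery: ${expected:.0f}")
```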

Published: January 7, 2010 by Guest Contributor

In my previous two blogs, I introduced the definition of strategic default and compared and contrasted the population to other types of consumers with mortgage delinquency. I also reviewed a few key characteristics that distinguish strategic defaulters as a distinct population. Although I’ve mentioned that segmenting this group is important, I would like to specifically discuss the value of segmentation as it applies to loan modification programs and the selection of candidates for modification. How should loan modification strategies be differentiated based on this population? By definition, strategic defaulters are more likely to take advantage of loan modification programs. They are committed to making the most personally lucrative financial decisions, so the opportunity to have their loan modified – extending their ‘free’ occupancy – can be highly appealing. Given the adverse selection issue at play with these consumers, lenders need to design loan modification programs that limit abuse and essentially screen out strategic defaulters from the population. The objective for lenders when creating loan modification programs should be to identify consumers who show the characteristics of the cash-flow managers within our study. These consumers often show signs of distress similar to those of strategic defaulters, but differentiate themselves by exhibiting a willingness to pay that the strategic defaulter, by definition, does not. So, how can a lender make this identification? Although these groups share similar characteristics at times, it is recommended that lenders reconsider their loan modification decisioning algorithms and adjust their modification offers to screen out strategic defaulters. In fact, they could even develop programs such as equity-sharing arrangements whereby the strategic defaulter could be persuaded to remain committed to the mortgage. In the end, strategic defaulters will not self-identify through lower credit score trends, by standing out as a bank credit risk, or by having prior bankruptcies on file, so lenders must create processes to identify them among their peers. For more detailed analyses, lenders could also extend the Experian-Oliver Wyman study further and integrate additional attributes, such as current LTV and product type, to expand their segmentation and identify strategic defaulters within their individual portfolios.

Published: December 14, 2009 by Kelly Kent

By: Wendy Greenawalt

In my last blog on optimization we discussed how optimized strategies can improve collections. In this blog, I would like to discuss how optimization can bring value to decisions related to mortgage delinquency and loan modification. Over the last few years mortgage lenders have seen a sharp increase in the number of mortgage account delinquencies and a dramatic change in consumer mortgage payment trends. Specifically, lenders have seen a shift away from consumers paying their mortgage obligation first while allowing other debts to go delinquent. This shift in borrower behavior appears unlikely to change anytime soon, and therefore lenders must make smarter account management decisions for mortgage accounts. Adding to this issue, property values continue to decline in many areas, and lenders must now identify whether a consumer is a strategic defaulter, a candidate for loan modification, or a consumer affected by the economic downturn. Many loans that were modified at the beginning of the mortgage crisis have since become delinquent and have ultimately been foreclosed upon by the lender. Making collection decisions for mortgage accounts is increasingly complex, but optimization can assist lenders in identifying the ideal consumer collection treatment while weighing organizational goals such as minimizing losses, making the most of internal resources and retaining the most valuable consumers. Optimization can assist with these difficult decisions by utilizing a mathematical algorithm that assesses all available options and selects the ideal consumer decision based on organizational goals and constraints. This technology can be implemented into current decisioning processes, whether in real time or batch, and can provide substantial lift in prediction over business-as-usual techniques.

Published: December 7, 2009 by Guest Contributor

In my last post I discussed the problem with confusing what I would call “real” Knowledge Based Authentication (KBA) with secret questions. However, I don’t think that’s where the market focus should be. Instead of looking at Knowledge Based Authentication (KBA) today, we should be looking toward the future, and the future starts with risk-based authentication.

If you’re like most people, right about now you are wondering exactly what I mean by risk-based authentication, how it differs from Knowledge Based Authentication, and how we get from point A to point B. It is actually pretty simple. Knowledge Based Authentication is one factor of a risk-based authentication fraud prevention strategy. A risk-based authentication approach doesn’t rely on questions and answers alone, but instead utilizes fraud models that include Knowledge Based Authentication performance as part of the fraud analytics to improve fraud detection performance. With a risk-based authentication approach, decisioning strategies are more robust and should include many factors, including the results from scoring models.

That isn’t to say that Knowledge Based Authentication isn’t an important part of a risk-based approach. It is. Knowledge Based Authentication is a necessity because it has gained consumer acceptance. Without some form of Knowledge Based Authentication, consumers question an organization’s commitment to security and data protection. Most importantly, consumers now view Knowledge Based Authentication as a tool for their protection; it has become a bellwether to consumers. As the bellwether, Knowledge Based Authentication has been the perfect vehicle to introduce new and more complex authentication methods to consumers, without them even knowing it. KBA has allowed us to familiarize consumers with out-of-band authentication and IVR, and I have little doubt that it will be one of the tools that plays a part in the introduction of voice biometrics to help prevent consumer fraud.

Is it always appropriate to present questions to every consumer? No, but that’s where a true risk-based approach comes into play. Is Knowledge Based Authentication always a valuable component of a risk-based authentication tool to minimize fraud losses as part of an overall approach to fraud best practices? Absolutely; always. DING!

Published: November 23, 2009 by Guest Contributor
