Key drivers in auto financial services are speed and precision. What model year is your decisioning system? In the auto world the twin engineering goals are performance and durability. All of that complex engineering has produced some memorable quotes, along with some not-so-complex observations, and the world of racing has offered some of the best examples of the latter. Here’s a memorable one:

“There’s no secret. You just press the accelerator to the floor and steer left.” – Bill Vukovich

When considering an effective auto financial services relationship, one quickly concludes that the two key drivers of an improved booking rate are the speed of the decision delivered to the consumer/dealer and the precision of that decision – both the ‘yes/no’ and the ‘at what rate.’

In the ‘good old days,’ a lender relied on dealer relationships and a crew of experienced underwriters to respond quickly to sales opportunities. These days, dealers will jump to the service provider that delivers the most happy customers. Yet for all too many lenders, some automated decisioning is leveraged, but it is not uncommon to still see a significantly large ‘grey area’ of decisions that falls to the experienced underwriter. That service model fails on both speed and precision. You may make the decision to approve, but your competition came in with a better price at the same time. Their application got booked. Your decision, and the cost incurred to make it, was left in the dust bin.

High on the list of solutions to this business issue is improved use of available data and decisioning solutions. Too many lenders still underutilize the analytics and automated decisions that could deliver an improved booking rate. Is your system last year’s model? Does your current underwriting system fully leverage available third-party data to reduce delays due to fraud flags? Is your ability-to-pay component reliant upon a complex application or follow-up requests to the consumer for additional information? Does your management information reporting provide detail on the incidence and disposition of all exception processes? Are you able to implement newer analytics and/or policy modifications in hours or days, rather than sitting in the IT queue for weeks or months? Can you modify policies to align with new dealer demographics and risk factors?

The new model is in, and Experian® is ready to help you give it a ride. Purchase auto credit data now.
As Big Data becomes the norm in the credit industry and others, the seemingly non-stop effort to accumulate more and more data leads me to ask: when is Big Data too much data? The answer doesn’t lie in the quantity of data itself, but in the application of it. Big Data is too much data when you can’t use it to make better decisions.

So what do I mean by a better decision? The answer varies by perspective. For a marketer, the decision may be whether new data will produce better response rates through improved segmentation. For a lender, it might be whether a borrower will repay a loan, or the right interest rate to charge that borrower. That is one of the points of the hype around Big Data – it is helping companies and individuals in all sorts of situations make better decisions. But regardless of the application, the science of Big Data must not rest on the assumption that more data will always lead to better decisions; rather, more data can lead to better decisions, if it is also the “right data.”

How, then, does one know whether another new data source is helping? It is rarely obvious up front whether additional data will improve a decision. It takes an expert to understand not only the data employed, but ultimately the use of the data in the decision-making process, and that expertise is not found just anywhere. At Experian, one of our core capabilities is the ability to distinguish between data that is predictive and can help our clients make better decisions, and data that is noise and is not helpful to our clients. Our scores and models, whether used for prospecting new customers, measuring risk in offering new credit, or determining how best to collect on an outstanding receivable, are all designed to optimize the decision-making process. Learn more about our big data capabilities
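As a concrete illustration of the “predictive versus noise” distinction, the sketch below (in Python, with entirely synthetic data and made-up attribute names) compares holdout AUC for a simple model with and without a candidate attribute. This is a generic evaluation pattern, not a description of Experian’s actual methodology.

```python
# Hypothetical sketch: testing whether a candidate data source adds lift,
# or is just noise, by comparing holdout AUC with and without it.
# All data and feature roles are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

baseline = rng.normal(size=(n, 3))      # stand-in for existing attributes
candidate = rng.normal(size=(n, 1))     # the new data source under evaluation

# Simulated outcome: here the candidate carries no real signal (pure noise).
logit = baseline @ np.array([1.2, -0.8, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_base = baseline
X_full = np.hstack([baseline, candidate])

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression().fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])

print(f"AUC without candidate data: {auc_base:.3f}")
print(f"AUC with candidate data:    {auc_full:.3f}")
# If the lift is negligible (as expected here), the extra data is noise, not signal.
```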
In today's data-driven world, decisioning strategies can no longer be one-dimensional and focused only on risk. By employing a multidimensional decisioning approach, companies can deliver the products and services customers need and want.
When we think about fraud prevention, we naturally think about minimizing fraud at application. We want to ensure that the identity used in an application truly belongs to the person applying for credit, and is not a stolen identity. But the reality is that some fraudsters do get through the defenses at application. In fact, according to Javelin’s 2011 Identity Fraud Survey Report, 2.5 million accounts were opened fraudulently using stolen identities in 2010, costing lenders and consumers $17 billion. And these numbers do not even include other existing-account fraud such as account takeover and impersonation (limited misuse of an account, such as credit/debit card fraud and balance transfers). This type of existing-account fraud affected 5.5 million accounts in 2010, costing another $20 billion.

So although it may seem like a no-brainer, it’s worth emphasizing that we need to continue to detect fraud on both new and established accounts. Existing-account fraud is unlikely to go away any time soon. Lending activity has changed significantly in the last couple of years: origination volume in 2010 was still less than half of 2008 volume, and booked accounts have become riskier. In this environment, when ordinary consumers are having a hard time getting new credit, fraudsters are having a hard time getting credit too, so naturally they shift their focus to something more profitable, like account takeover. Does your organization have the appropriate tools and decisioning strategy to fight existing-account fraud?
By: Andrew Gulledge

I hate this question. There are several reasons why the concept of an “average fraud rate” is elusive at best, and meaningless or misleading at worst.

Natural fraud rate versus strategy fraud rate

The natural fraud rate is the number of fraudulent attempts divided by overall attempts in a given period. Many companies don’t know their natural fraud rate, simply because in order to measure it accurately, you need to let every single customer pass authentication regardless of fraud risk. And most folks aren’t willing to take that kind of fraud exposure for the sake of empirical purity. What most people do see, however, is their strategy fraud rate—that is, the fraud rate of approved customers after using some fraud prevention strategy. Obviously, if your fraud model offers any fraud detection at all, then your strategy fraud rate will be somewhat lower than your natural fraud rate. And since there are as many fraud prevention strategies as the day is long, the concept of an “average fraud rate” breaks down somewhat.

How do you count frauds?

You can count frauds in terms of dollar loss or raw units. A dollar-based approach might be more appropriate when estimating the ROI of your overall authentication strategy. A unit-based approach might be more appropriate when considering the impact on victimized consumers, and the subsequent impact on your brand. If using the unit-based approach, you can count frauds in terms of raw transactions or unique consumers. If one fraudster is able to get through your risk management strategy by coming through the system five times, then the consumer-based fraud rate might be more appropriate; a transaction-based fraud rate would overrepresent this fraudster by a factor of five. Any fraud model based solely on transactional fraud tags would thus be biased toward the fraudsters that game the system through repeat usage. Clearly, however, different folks count frauds differently. Therefore, the concept of an “average fraud rate” breaks down further, simply based on what makes up the numerator and the denominator.

Different industries. Different populations. Different uses.

Our authentication tools are used by companies from various industries. Would you expect the fraud rate of a utility company to be comparable to that of a money transfer business? What about online lending versus DDA account opening? Furthermore, different companies use different fraud prevention strategies with different risk buckets within their own portfolios. One company might put every customer at account opening through a knowledge based authentication session, while another might only bother asking the riskier customers a set of out of wallet questions. Some companies use authentication tools in the middle of the customer lifecycle, while others employ fraud detection strategies at account opening only. All of these permutations further complicate the notion of an “average fraud rate.”

Different decisioning strategies

Companies use an array of basic strategies governing their overall approach to fraud prevention. Some people hard decline while others refer to a manual review queue. Some people use a behind-the-scenes fraud risk score; others use knowledge based authentication questions; plenty of people use both. Some people use decision overrides that will auto-fail a transaction when certain conditions are met. Some people use question weighting, use limits, and session timeout thresholds. Some people use all of the out of wallet questions; others use only a handful. There are nearly infinite possible configuration settings, even for the same authentication tools from the same vendors, which further muddies the waters with regard to an “average fraud rate.” My next post will beat this thing to death a bit more.
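To make the earlier definitions concrete, here is a small Python sketch with invented attempts showing how the same activity yields different numbers depending on whether you compute a natural or strategy fraud rate, and whether you count frauds by transaction, by unique consumer, or by dollars.

```python
# Illustrative sketch (made-up data): the same attempts produce different
# "fraud rates" depending on the numerator and denominator you choose.
from dataclasses import dataclass

@dataclass
class Attempt:
    consumer_id: str
    is_fraud: bool
    approved: bool   # outcome of the fraud prevention strategy
    loss: float      # dollar loss if fraudulent and approved

attempts = [
    Attempt("c1", False, True, 0.0),
    Attempt("c2", True,  False, 0.0),      # caught by the strategy
    Attempt("c3", True,  True, 1200.0),    # slipped through
    Attempt("c3", True,  True, 800.0),     # same fraudster, repeat usage
    Attempt("c4", False, True, 0.0),
]

# Natural fraud rate: fraudulent attempts / all attempts (pre-strategy).
natural = sum(a.is_fraud for a in attempts) / len(attempts)

# Strategy fraud rate: fraud among approved attempts only.
approved = [a for a in attempts if a.approved]
strategy = sum(a.is_fraud for a in approved) / len(approved)

# Unit-based vs consumer-based vs dollar-based counting.
fraud_txns = sum(a.is_fraud and a.approved for a in attempts)
fraud_consumers = len({a.consumer_id for a in attempts if a.is_fraud and a.approved})
fraud_dollars = sum(a.loss for a in attempts)

print(f"natural rate={natural:.0%}, strategy rate={strategy:.0%}")
print(f"fraud txns={fraud_txns}, unique fraud consumers={fraud_consumers}, losses=${fraud_dollars:,.0f}")
```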
By: Margarita Lim

Recently, the Social Security Administration (SSA) announced that it will change how Social Security numbers (SSNs) are issued, moving toward a random method of assigning SSNs. Social Security numbers are nine digits in length and have historically been composed of a three-digit number representing a geographic area, a two-digit Group number and a four-digit serial number. You can go to http://www.ssa.gov/employer/randomization.html to learn more about this procedural change, but in summary, the random assignment of SSNs will affect:
• The geographic significance of the first three digits of the SSN, because they will no longer uniquely represent specific states
• The correlation of the Group number (the fourth and fifth digits of the SSN) to an issuance date range

What does this mean? It means that if you’re a business or agency that uses any type of authentication product to minimize fraud losses, one of the components used to verify a consumer’s identity, the Social Security number, will no longer be validated with respect to state and date. However, one of the main advantages of a risk-based approach to authentication is reduced over-reliance on any one identity element validation result. Validation of SSN issuance date and state, while useful in determining certain levels of risk, is but one of many attributes and conditions used in detailed results, robust analytics and risk-based decisioning. It can also be argued that the randomization of SSN issuance, while somewhat limiting the intelligence we can glean from a specific number, may prove beneficial to consumer protection and to overall confidence in the SSN issuance process.
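For readers unfamiliar with the historical layout, the short Python sketch below splits an SSN into its traditional area, group and serial components. The example number is fabricated, and under randomization the first two components no longer carry the geographic or date meaning noted in the comments.

```python
# Illustrative sketch of the historical (pre-randomization) SSN layout.
def split_ssn(ssn: str) -> dict:
    digits = ssn.replace("-", "")
    if len(digits) != 9 or not digits.isdigit():
        raise ValueError("SSN must be nine digits")
    return {
        "area": digits[0:3],    # historically tied to a geographic area / state
        "group": digits[3:5],   # historically correlated with issuance date ranges
        "serial": digits[5:9],  # sequential serial number
    }

print(split_ssn("123-45-6789"))   # fabricated example number
# {'area': '123', 'group': '45', 'serial': '6789'}
```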
By: Wendy Greenawalt

Large financial institutions have acknowledged for some time that taking a consumer-centric rather than product-centric approach can be a successful strategy. Implementing such a strategy can be difficult, however, because organizations inherently want to promote a specific product for one reason or another. With the current economic unrest, organizations are looking for ways to improve loyalty among their most profitable and lowest-risk customers. They are also looking for ways to improve the offers they make to consumers, providing segment-of-one decisioning while still satisfying organizational goals.

Customer management, and specifically cross-sell and up-sell strategies, is a great example of where organizations can implement what I call “segment of one decisioning.” In essence, this means identifying the best possible decision or outcome for a specific consumer given multiple offers, scenarios and objectives. Marketers strive to identify the strategies that maximize the value of each decision while minimizing costs. For many, this takes the form of models and complex strategy trees or spreadsheets that identify the ideal offering for a segment of consumers. While this approach is effective, algorithm-based decisioning processes exist that can help organizations identify optimal strategies while considering all possible options at the consumer level. By leveraging an optimization tool, organizations can expand the decision process to consider every variable and every alternative and find the most cost-effective, most-likely-to-succeed strategies. By optimizing decisions, marketers can determine the ideal offer while quantifying the ROI and adhering to budgetary or other campaign constraints.

Many organizations are once again focusing on account growth and building strategies to implement in the near future. With a limited pool of qualified candidates and increased competition, it is more important than ever that each consumer offer be the best one, to increase response rates, achieve portfolio growth goals and build a profitable portfolio.
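As a toy illustration of segment-of-one decisioning under a campaign constraint, the Python sketch below (all offers, profits and costs invented) brute-forces the choice of one offer per consumer to maximize expected profit within a budget. Real optimization engines solve much larger versions of this with mathematical programming rather than enumeration.

```python
# Toy "segment of one" sketch: pick one offer per consumer to maximize total
# expected profit subject to a campaign budget, by exhaustive search.
from itertools import product

offers = ["card_upgrade", "balance_transfer", "no_offer"]
expected_profit = {   # invented consumer-level estimates
    "alice": {"card_upgrade": 40, "balance_transfer": 55, "no_offer": 0},
    "bob":   {"card_upgrade": 25, "balance_transfer": 10, "no_offer": 0},
    "carol": {"card_upgrade": 60, "balance_transfer": 45, "no_offer": 0},
}
cost = {c: {"card_upgrade": 12, "balance_transfer": 20, "no_offer": 0}
        for c in expected_profit}
budget = 35

consumers = list(expected_profit)
best_plan, best_profit = None, float("-inf")
for plan in product(offers, repeat=len(consumers)):
    assignment = dict(zip(consumers, plan))
    total_cost = sum(cost[c][o] for c, o in assignment.items())
    total_profit = sum(expected_profit[c][o] for c, o in assignment.items())
    if total_cost <= budget and total_profit > best_profit:
        best_plan, best_profit = assignment, total_profit

print(best_plan, best_profit)
# -> one offer per consumer, chosen jointly rather than segment by segment
```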
The overarching business driver in adopting a risk-based authentication strategy, particularly one founded in analytics and proven scores, is the predictive lift associated with using scoring in place of a more binary rule set. Basic identity element verification checks, such as name, address, Social Security number, date of birth and phone number, are important identity proofing treatments, but viewed in isolation they are not nearly as effective in predicting actual fraud risk. In other words, positive verification across multiple identity elements does not, on its own, provide sufficient predictive value in determining fraud risk. Positive verification of identity elements may be achieved in customer access requests that are, in fact, fraudulent. Conversely, negative verification results may be associated with true, good customers as well as fraudulent ones. These false positives and false negatives lead to a lack of predictive value and confidence, as well as inefficient and unnecessary referral and out-sort volumes.

The most predictive authentication and fraud models are those that incorporate multiple data assets, spanning traditionally used customer information categories such as public records and demographic data, but also drawing, when possible, on credit history attributes and historic application and inquiry records. A risk-based fraud detection system allows institutions to make customer relationship and transactional decisions based not on a handful of rules or conditions in isolation, but on a holistic view of a customer’s identity and the predicted likelihood of identity theft, application fraud or other fraud risk. Implementing efficient and appropriate risk-based authentication procedures requires combining comprehensive, broadly categorized data assets with targeted analytics and consistent decisioning policies, to achieve a measurably effective balance between fraud detection and positive identity proofing. The inherent value of a risk-based approach to authentication lies in the ability to strike that balance not only in the current environment, but as the environment and its underlying forces shift.
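The contrast between a binary rule set and a risk-based score can be sketched as follows. The weights are invented for illustration and are not a real fraud model, but they show how positively verified identity elements can still coexist with elevated risk once additional signals are blended into a single score.

```python
# Hedged illustration (invented weights): a binary rule treats each identity
# element check in isolation, while a risk-based score blends many signals
# into one predicted likelihood of fraud.
def binary_rule(checks: dict) -> str:
    # e.g. {"name_address_match": True, "ssn_valid": True, "phone_match": True}
    return "pass" if all(checks.values()) else "refer"

def risk_score(signals: dict) -> float:
    # Illustrative weights; a real model would be trained on historical outcomes.
    weights = {
        "name_address_match": -0.8,
        "ssn_valid": -0.5,
        "phone_match": -0.3,
        "recent_address_change": 0.9,
        "high_inquiry_velocity": 1.2,
    }
    # Higher = riskier; compared to a tuned threshold, not a hard pass/fail.
    return sum(weights[k] * float(v) for k, v in signals.items())

signals = {
    "name_address_match": True,       # identity elements verify, yet...
    "ssn_valid": True,
    "phone_match": True,
    "recent_address_change": True,    # ...other signals still point to risk
    "high_inquiry_velocity": True,
}
print(binary_rule({k: signals[k] for k in ("name_address_match", "ssn_valid", "phone_match")}))
print(risk_score(signals))   # positive despite all verifications passing
```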
I recently attended a conference where credit union managers spoke about the many changes facing their industry in the wake of the real estate crisis and the economic decline that has impacted the US economy over the past couple of years. As these managers weighed in on the issues facing their businesses today, several themes emerged: tighter lending standards and risk management practices, increased regulatory scrutiny, and increased competition resulting in tighter margins for their portfolios. Across these issues, another major development was discussed: increased credit union mergers and acquisitions.

As I considered the challenges facing these lenders and the increase in M&A activity, it occurred to me that they might have a common bond with an unexpected group: American family farms. Credit unions face the challenge of adding significant fixed costs (more sophisticated lending platforms and risk management processes) while dealing with increased competition from lenders such as large banks and captive automotive lenders. This is not unlike the challenge faced by the family farm over the past few decades: small-volume operators having to absorb significant fixed costs from innovation and increased corporate competition, without the benefit of scale to spread those costs and maintain healthy margins. Without that scale, many family farms disappeared as large commercial operators acquired less efficient (and less profitable) ones.

Are credit unions entering a similar period of competitive disadvantage? It appears that the credit union model will have to adjust in the very near future to remain viable. With high infrastructure expectations, many credit unions will have to develop improved decisioning strategies, become more proficient at assessing credit risk by implementing risk-based pricing models, and execute more efficient operational processes in order to sustain themselves when the challenges of regulation and infrastructure favor economies of scale. Otherwise, they face an uphill challenge, just as the family farm did (and does), to compete and survive in a market that favors the high-volume lender.
By: Wendy Greenawalt

Recently the Federal Reserve Board and the Federal Trade Commission issued a new rule requiring any lender that uses a credit report or score when making a credit decision to provide consumers with a risk-based pricing notice. The new regulation goes into effect on January 1, 2011, but lenders must begin planning now, as compliance will require potential changes to their current lending practices. The regulation is another step in the effort to give consumers more visibility into their credit history and the impact a blemished record may have on their finances. The ruling is good for consumers, but it will require lenders to modify existing lending processes, add another consumer disclosure and absorb additional costs in the lending process.

The risk-based pricing rule provides lenders with two compliance options: the risk-based pricing notice or the credit score disclosure exception. In this blog, I will discuss the primary compliance option, the risk-based pricing notice. The risk-based pricing notice is a document informing consumers that the terms of their new credit account are materially less favorable than the most favorable terms. The notice will not be provided to all consumers, but only to those who receive account terms that are worse than what is offered to the most creditworthy consumers. The rule outlines how to determine who receives the notice, and lenders can use one of several methods, including direct comparison, the credit score proxy or tiered pricing.

For lenders that perform regular validation of their portfolios, determining which consumers should receive a notice should not be difficult. For lenders that do not perform regular scorecard performance monitoring, this is another reminder of the importance of ongoing validation and monitoring. As the economy continues to recover and lenders re-enter the market, it will be more important than ever to validate that scores are performing as expected in order to manage risk and revenue goals. In my next blog, I will discuss the credit score disclosure exception.
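A rough sketch of the credit score proxy idea appears below, assuming a simplified reading of the method in which consumers scoring below the point where roughly 40 percent of the portfolio scores higher would receive the notice. The scores are simulated, and the actual rule text governs the details.

```python
# Simplified sketch of a credit score proxy style cutoff (assumption: the
# cutoff is set so that about 40% of scored consumers fall above it, and
# consumers below the cutoff receive the risk-based pricing notice).
import numpy as np

rng = np.random.default_rng(1)
portfolio_scores = rng.normal(loc=680, scale=60, size=5000).round()  # made-up portfolio

cutoff = np.percentile(portfolio_scores, 60)  # ~40% of consumers score above this point

def needs_notice(score: float) -> bool:
    return score < cutoff

print(f"cutoff score: {cutoff:.0f}")
print(needs_notice(640), needs_notice(730))
```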
By: Wendy Greenawalt

The auto industry has been hit hard by the Great Recession. Recently, some good news has emerged from the captive lenders, and the industry is beginning to rebound from the business challenges of the last few years. As a result, many lenders are looking for ways to improve risk management and strategically grow their portfolios as the US economy recovers. Because of the economic decline, the pool of qualified consumers has shrunk and competition for the best consumers has increased significantly. Approval terms at the consumer level therefore need to be more competitive to increase loan origination and booking rates for new consumers.

Leveraging optimized decisions is one way lenders can address regional pricing pressure and improve conversion rates within specific geographies. Specifically, lenders can perform a deep analysis of specific competitors, such as captives, credit unions and banks, to determine whether approved loans are being lost to specific competitor segments. Once the analysis is complete, auto lenders can use optimization software to create robust pricing, loan amount and term strategies to compete effectively within specific geographic regions and grow profitable portfolio segments. Optimization software applies a mathematical decisioning approach to identify the ideal consumer-level decision that maximizes organizational goals while respecting defined constraints. The consumer-level decisions can then be converted into a decision tree that can be deployed into current decisioning strategies to improve profitability and meet key business objectives over time.
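The following Python sketch is a deliberately tiny version of that kind of analysis, with invented regions, take rates and margins: it enumerates candidate APRs for two regions and picks the combination that maximizes expected profit subject to a minimum booking volume, standing in for what a real optimization engine would do at the consumer level.

```python
# Toy regional pricing sketch (all curves invented): lower APRs win more deals
# from competitors but earn less margin per loan; choose one APR per region to
# maximize expected profit subject to a minimum booking-volume constraint.
candidate_aprs = [0.039, 0.049, 0.059, 0.069]

take_rate = {   # assumed share of approved apps actually booked, by region and APR
    "west":  {0.039: 0.55, 0.049: 0.45, 0.059: 0.30, 0.069: 0.18},
    "south": {0.039: 0.60, 0.049: 0.52, 0.059: 0.41, 0.069: 0.30},
}
approved_apps = {"west": 1000, "south": 800}

def margin_per_loan(apr: float) -> float:
    return apr * 15000 * 0.6   # rough illustrative margin proxy

min_total_bookings = 700
best = None
for apr_w in candidate_aprs:
    for apr_s in candidate_aprs:
        bookings_w = approved_apps["west"] * take_rate["west"][apr_w]
        bookings_s = approved_apps["south"] * take_rate["south"][apr_s]
        bookings = bookings_w + bookings_s
        profit = bookings_w * margin_per_loan(apr_w) + bookings_s * margin_per_loan(apr_s)
        if bookings >= min_total_bookings and (best is None or profit > best[0]):
            best = (profit, {"west": apr_w, "south": apr_s}, bookings)

print(best)   # (expected profit, APR by region, expected bookings)
```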
By: Wendy Greenawalt

Optimization has become somewhat of a buzzword lately, used to solve all sorts of problems, which got me thinking about what optimizing decisions really means to me. In pondering the question, I decided to start at the beginning and think about what optimization stands for. For me, it is an unbiased mathematical way to determine the most advantageous solution to a problem, given all the options and variables. At its simplest, optimization is a tool that synthesizes data and can be applied to everyday problems, such as determining the best route to take when running errands.

Everyone is pressed for time these days, and finding a few extra minutes, or a few extra dollars left in the bank account at the end of the month, is appealing. The first step in determining my ideal route was to identify the route options, including toll roads, and the total miles driven, travel time and cost associated with each. In addition, I incorporated constraints: required stops, avoid Main Street, don’t visit the grocery store before lunch, and be back home as quickly as possible. Optimization takes all of these limitations and objectives, simultaneously compares all possible combinations and outcomes, and determines the option that best maximizes the goal, which in this case was to be home as quickly as possible (the sketch below works through a brute-force version of exactly this problem).

While this is by nature a very simple example, optimizing decisions can be applied at home and in business in imaginative and effective ways. Business is catching on, and optimization is finding its way into more and more organizations to save time and money and provide a competitive advantage. I encourage all of you to think about optimization in a new way and explore the opportunities where it can be applied, both to improve on business-as-usual and to improve your quality of life.
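Here is the errand-route example worked out literally in Python, with invented travel times: it enumerates every ordering of the stops, enforces the no-grocery-store-before-lunch constraint, and returns the ordering that gets me home fastest.

```python
# Brute-force version of the errand-route example (travel times invented).
from itertools import permutations

stops = ["post_office", "grocery_store", "pharmacy", "bank"]
minutes = {  # symmetric travel times, including "home"
    ("home", "post_office"): 10, ("home", "grocery_store"): 15,
    ("home", "pharmacy"): 8,     ("home", "bank"): 12,
    ("post_office", "grocery_store"): 7, ("post_office", "pharmacy"): 9,
    ("post_office", "bank"): 5,  ("grocery_store", "pharmacy"): 6,
    ("grocery_store", "bank"): 11, ("pharmacy", "bank"): 4,
}

def travel(a: str, b: str) -> int:
    return minutes.get((a, b)) or minutes[(b, a)]

best_route, best_time = None, float("inf")
for order in permutations(stops):
    # Constraint: the grocery store cannot be one of the first two stops ("before lunch").
    if "grocery_store" in order[:2]:
        continue
    legs = ["home", *order, "home"]
    total = sum(travel(a, b) for a, b in zip(legs, legs[1:]))
    if total < best_time:
        best_route, best_time = order, total

print(best_route, best_time)
```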
By: Wendy Greenawalt

In my last few blogs, I have discussed how optimization can be leveraged to make improved decisions across an organization while considering the impact that optimized decisions have on organizational profits, costs or other business metrics. In this entry, I would like to discuss how optimization can be used to improve decisions at the point of acquisition while minimizing costs.

Determining the right account terms at inception is increasingly important due to recent regulatory legislation such as the Credit CARD Act, and it plays a role in assessing credit risk, managing the relationship and increasing share of wallet. These regulations establish guidelines specific to consumer age, verification of income, teaser rates and interest rate increases. Complying with them will require changes to existing processes and the creation of new toolsets to ensure organizations adhere to the guidelines. The new regulations will increase the costs associated with acquiring new customers and will also affect long-term revenue and value, as changes in account terms will have to be carefully considered. Meanwhile, the cost of onboarding and servicing individual accounts continues to escalate while internal resources remain flat. For these reasons, organizations of all sizes are looking for ways to improve efficiency and decisions while minimizing costs.

Optimizing decisions is an ideal solution to this problem. Optimized strategy trees, trees that encode optimized decisioning strategies, can be easily implemented into current processes to ensure lending decisions adhere to organizational revenue, growth or cost objectives as well as regulatory requirements. Optimized strategy trees enable organizations to create executable strategies that provide ongoing decisions based on optimization conducted at the consumer level. They outperform manually created trees because they are built using sophisticated mathematical analysis and ensure that organizational objectives are respected. In addition, an organization can quantify the expected ROI of its decisioning strategies and validate them before implementation, information that is not available without a sophisticated optimization software application. By implementing optimized strategy trees, organizations can minimize the volume of accounts that must be manually reviewed, which lowers resource costs, while account terms are determined based on organizational priorities, leading to increased revenue, retention and profitability.
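One way to picture an optimized strategy tree is the sketch below, which uses synthetic data: a stand-in for consumer-level optimized decisions is approximated with a shallow decision tree so the strategy can be expressed as deployable rules. In practice the consumer-level decisions would come from an optimization run, not the hand-written rule used here for illustration.

```python
# Conceptual sketch (synthetic data): approximate consumer-level "optimal"
# decisions with a small decision tree so the strategy can be deployed as
# simple, reviewable rules inside an existing decision engine.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 2000
score = rng.integers(550, 820, size=n)          # made-up risk score
income = rng.integers(20_000, 150_000, size=n)  # made-up verified income

# Stand-in for consumer-level optimized decisions (normally the output of an
# optimization run, not a hand-written rule like this one).
optimal_terms = np.where(score > 720, "low_apr",
                np.where((score > 660) & (income > 60_000), "mid_apr", "high_apr"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(np.column_stack([score, income]), optimal_terms)

# Print the resulting rules, which resemble a deployable strategy tree.
print(export_text(tree, feature_names=["score", "income"]))
```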
By: Wendy Greenawalt

The economy has changed drastically in the last few years, and most organizations have had to reduce costs across their businesses to retain profits. Determining the appropriate cost-cutting measures requires careful consideration of trade-offs and a clear view of short- and long-term organizational priorities. Too often, cost reduction decisions are driven by dynamic market conditions that demand quick decision-making, so decisions are made without a sound understanding of their true impact on organizational objectives.

Optimization can be applied to virtually any business problem and provides decisions grounded in complex mathematics. Whether you are making decisions about outsourcing versus staffing, internal versus external project development, or cost-savings opportunities within a specific business unit, optimization can be applied. While some analytical requirements must be met to obtain the greatest improvement in business metrics, most organizations already have the data required to take full advantage of optimization technology. If you use predictive models and credit attributes, and have multiple actions that can be taken on an individual consumer, then your organization can most likely benefit from optimized decisioning strategies. In my next few blogs, I will discuss how optimization can be used to create better strategies across an organization, whether your focus is marketing, risk, customer management or collections.
Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance?

If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics, like how many records I have in my database with name and address populated. And how many addresses do I typically have for each consumer? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do.

With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and Social Security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me.

Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old. What are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the sexier aspects of fraud detection (such as predictive scoring, out of wallet questions, and red flag rules) while ignoring the foundation upon which all of these strategies are built.
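A minimal sketch of the metrics described above, using a handful of invented records, might look like the Python below: field coverage, pin rate, and name/address match rate computed against a toy reference database.

```python
# Small illustration (records invented): basic data-quality metrics of the
# kind described above: field coverage, pin rate, and name/address match rate.
records = [  # stand-in for the reference database
    {"name": "Ann Lee", "address": "12 Oak St", "ssn": "111223333"},
    {"name": "Bo Chan", "address": None,        "ssn": "222334444"},
    {"name": "Cy Diaz", "address": "9 Elm Ave", "ssn": None},
]
inquiries = [  # incoming applications to pin against the database
    {"name": "Ann Lee", "address": "12 Oak St",  "ssn": "111223333"},
    {"name": "Bo Chan", "address": "77 Pine Rd", "ssn": "222334444"},
    {"name": "Zed Q",   "address": "1 Main St",  "ssn": "999887777"},
]

# Coverage: how many reference records have an address populated.
coverage = sum(r["address"] is not None for r in records) / len(records)

# Pin rate: how often we can locate the consumer at all (here, by SSN).
by_ssn = {r["ssn"]: r for r in records if r["ssn"]}
pinned = [q for q in inquiries if q["ssn"] in by_ssn]
pin_rate = len(pinned) / len(inquiries)

# Match rate: of the pinned consumers, how often name and address both agree.
matches = sum(1 for q in pinned
              if by_ssn[q["ssn"]]["name"] == q["name"]
              and by_ssn[q["ssn"]]["address"] == q["address"])
match_rate = matches / len(pinned) if pinned else 0.0

print(f"address coverage={coverage:.0%}, pin rate={pin_rate:.0%}, match rate={match_rate:.0%}")
```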