Credit Lending

A recent New York Times(1) article outlined the latest release of consumer credit data by the Federal Reserve, indicating that Americans borrowed less for the ninth straight month in October. Nested within the statistics released by the Federal Reserve were metrics showing reduced revolving credit demand and comments about how "Americans are borrowing less as they try to replenish depleted investments." While this may be true, I tend to believe that macro-level statements do not fully explain the differences between consumer experiences that influence relationship management choices in the current economic environment. A closer look at consumers at opposite ends of the credit risk spectrum tells a very interesting story. In fact, recent bank card usage and delinquency data suggest at least two distinct patterns within the overall trend of reduced revolving credit demand:

• First, although overall revolving credit balances are decreasing, this macro-level trend is not consistent with the detail we see at the consumer level. Despite a reduction in open credit card accounts and overall industry balances, individual balances are up – that is to say, although there are fewer cards out there, the consumers who do have them are carrying higher balances.

• Second, there are significant differences between the most and least risky consumers when it comes to changes in balances. Consumers in the least risky VantageScore® tiers, A and B, show only 12 percent and 4 percent year-over-year balance increases in Q3 2009, respectively. Contrast that with VantageScore F consumers, the most risky, whose average balances increased more than 28 percent over the same period.
So, although the industry-level trend holds true, the challenges facing the "average" consumer in America are not average at all – they are unique and specific to each consumer, and they continue to illustrate the difficulty of assessing consumers' credit card risk in the current credit environment.

(1) http://www.nytimes.com/2009/12/08/business/economy/08econ.html

Published: December 10, 2009 by Kelly Kent

In my last blog, I discussed the presence of strategic defaulters and outlined the definitions used to identify these consumers, as well as other pools of consumers within the mortgage population that are currently showing some measure of mortgage repayment distress. In this section, I will focus on the characteristics of strategic defaulters, drilling deeper into the details behind the population and learning how one might begin to recognize them within it.

What characteristics differentiate strategic defaulters? Early in the mortgage delinquency stage, strategic defaulters and cash flow managers look quite similar – both are delinquent on their mortgage but are not going bad on any other trades. Despite their similarities, it is important to segment these groups, since strategic defaulters are far more likely to charge off and far less likely to cure than cash flow managers. So, given the need to distinguish between these two segments, here are a few key measures that can be used to define each population:

Origination VantageScore®
• Despite lower overall default rates, prime and super-prime consumers are more likely to be strategic defaulters.

Origination mortgage balance
• Consumers with higher mortgage balances at origination are more likely to be strategic defaulters; we conclude this is a result of being further underwater on their real estate than lower-balance consumers.

Number of mortgages
• Consumers with multiple first mortgages show a higher incidence of strategic default.
This trend represents consumers with investment properties making strategic repayment decisions on those investments (although the majority of defaults still occur where the consumer has only one first mortgage).

Home equity line performance
• Strategic defaulters are more likely to remain current on home equity lines until mortgage delinquency occurs, potentially a result of drawing down the HELOC as much as possible before becoming delinquent on the mortgage.

Clearly, there are several attributes that identify strategic defaulters and can assist in differentiating them from cash flow managers. The ability to distinguish between these two populations is extremely valuable in account management, collections and loan modification, which is my next topic.

Source: Experian-Oliver Wyman Market Intelligence Reports; Understanding strategic default in mortgage topical study/webinar, August 2009.
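The measures above can be combined into a simple segmentation rule. The sketch below is purely illustrative – the field names and thresholds are hypothetical, not taken from the Experian-Oliver Wyman study – but it shows how the four characteristics might separate strategic defaulters from cash flow managers once early-stage delinquency is observed:

```python
def classify_mortgage_distress(account):
    """Toy segmentation of mortgage-delinquent consumers using the
    characteristics discussed above. All field names and thresholds
    are hypothetical."""
    if not account["mortgage_delinquent"]:
        return "current"
    if account["other_trades_delinquent"]:
        return "general_distress"
    # Both remaining segments are delinquent on the mortgage only;
    # count how many strategic-defaulter markers are present.
    markers = 0
    if account["origination_vantagescore"] >= 800:   # prime/super-prime at origination
        markers += 1
    if account["origination_balance"] >= 300_000:    # high origination balance
        markers += 1
    if account["num_first_mortgages"] > 1:           # likely investment properties
        markers += 1
    if account["heloc_current"]:                     # HELOC kept current (drawn down?)
        markers += 1
    return "strategic_defaulter" if markers >= 3 else "cash_flow_manager"
```

In practice these cut-offs would be set empirically from vintage performance data rather than chosen by hand.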

Published: December 10, 2009 by Kelly Kent

By: Amanda Roth

During the past few months, we have been hearing from our clients that there is a renewed focus from regulators/examiners on risk-based pricing strategies. Many are requesting a validation of these strategies to ensure acceptable management of risk through proper loan pricing and profitability. The question we often receive is, "What exactly are they requiring?" In some cases, a simple validation of the scoring models used in the strategies will be sufficient. However, many require a deeper dive into where the risk bands are set and how pricing is determined. They are looking to see whether applicants of the same risk level are being priced the same and, when the price is increased from tier A to tier B, for example, whether the change in rate is in line with the change in risk. Some are even requiring a profitability analysis to show the expected impact of delinquency, loss and operating expense on net revenue for the product, tier and total portfolio. We'll address each of these analyses in more detail over the next few weeks. In the meantime, what are you hearing from your regulators/examiners?
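One of the checks examiners describe – that the rate step from one tier to the next tracks the step in risk – is easy to sketch. The tier figures below are made up for illustration; in practice the loss rates would come from a portfolio validation:

```python
# Hypothetical pricing tiers: expected loss rate and the APR charged.
tiers = [
    {"tier": "A", "loss_rate": 0.010, "apr": 0.059},
    {"tier": "B", "loss_rate": 0.025, "apr": 0.074},
    {"tier": "C", "loss_rate": 0.050, "apr": 0.089},
]

def rate_risk_deltas(tiers):
    """For each adjacent pair of tiers, compare the step-up in price
    to the step-up in expected loss."""
    out = []
    for lo, hi in zip(tiers, tiers[1:]):
        out.append({
            "pair": f'{lo["tier"]}->{hi["tier"]}',
            "risk_delta": round(hi["loss_rate"] - lo["loss_rate"], 4),
            "rate_delta": round(hi["apr"] - lo["apr"], 4),
        })
    return out
```

In this toy data the A-to-B step is aligned (1.5 points of price for 1.5 points of expected loss), while B-to-C adds 2.5 points of loss for only 1.5 points of price – exactly the kind of gap a pricing validation should surface.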

Published: December 8, 2009 by Guest Contributor

For the past couple of years, the deterioration of the real estate market and the economy as a whole has been widely reported as a national and international crisis. Several significant events have contributed to this situation: 401(k) plans have fallen, homeowners have simply abandoned their now undervalued properties, and the federal government has raced to save the banking and automotive sectors. While most perceive this as a national decline, this is clearly a situation where the real story is in the details. A closer look reveals that while some places have experienced serious real estate and employment issues (California, Florida, Michigan, etc.), other areas (Texas) did not experience the same deterioration. Flash forward to November 2009 – with signs of recovery seemingly beginning to appear on the horizon – there appears to be a great deal of variability between areas that seem poised for recovery and those that continue down the slope of decline. Interestingly, though, this time the list of usual suspects is changing. In a recent article posted to CNN.com(i), Julianne Pepitone observes that many cities that topped the foreclosure list a year ago have since shown stabilization, while other cities have regressed. A related article(ii) outlines a growing list of cities that, not long ago, considered themselves immune to the problems being experienced in other parts of the country. Previous economic success stories are now identified as economic laggards, experiencing the same pains a year or two later. So – is there a lesson to be taken from this? From a business intelligence perspective, the lesson is that generalized reporting and forecasting capabilities are not going to be successful in managing risk. Risk management and forecasting techniques will need to be developed around specific macro- and micro-economic changes.
They will also need to incorporate a number of economic scenarios to properly reflect the range of possible future outcomes. Moving forward, it will be vital to understand the differences in unemployment between Dallas and Houston, and between regions that rely on automotive manufacturing and those with high-tech jobs. These differences will directly impact the performance of lenders' specific footprints, as this year's "Best Place to Live" according to Money.CNN.com can quickly become next year's foreclosure capital.

(i) http://money.cnn.com/2009/10/28/real_estate/foreclosures_worst_cities/index.htm?postversion=2009102811
(ii) http://money.cnn.com/galleries/2009/real_estate/0910/gallery.foreclosures_worst_cities/2.html

Published: November 30, 2009 by Kelly Kent

By: Wendy Greenawalt

Optimization has become a "buzzword" in the financial services marketplace, but some organizations still fail to realize all of its possible business applications. As credit card lenders scramble to comply with the pending credit card legislation, optimization can be a quick and easily implemented solution that fits into current processes to ensure compliance with the new regulations.

Optimizing decisions

Specifically, lenders will now be under strict guidelines about when an APR can be changed on an existing account, and the specific circumstances under which the account must return to its original terms. Optimization can easily handle these constraints and identify which accounts should be modified based on historical account information and existing organizational policies. APR account changes can require a great deal of internal resources to implement and monitor for ongoing performance. Implementing an optimized strategy tree within an existing account management strategy will allow an organization to easily identify consumer-level decisions while monitoring accounts through ongoing batch processing. New delivery options are now available for lenders to receive optimized strategies for decisions related to:

• Account acquisition
• Customer management
• Collections

Organizations that are not currently utilizing this technology within their processes should investigate the new delivery options. Recent research suggests optimizing decisions can provide an improvement of 7-to-16 percent over current processes.
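As a highly simplified illustration of constraint-aware account decisioning – a real optimization engine solves this as a mathematical program over all accounts and constraints simultaneously, not greedily – consider selecting APR changes while honoring hard policy rules. All field names and rules here are invented:

```python
def select_apr_increases(accounts, max_changes):
    """Greedy stand-in for an optimized APR strategy: rank accounts by
    expected revenue lift, but honor policy constraints. Here, accounts
    flagged as protected under the new rules are never repriced, accounts
    under 12 months on book are left alone, and at most `max_changes`
    accounts may be changed. Hypothetical fields and rules."""
    eligible = [a for a in accounts
                if not a["protected_by_regulation"] and a["months_on_book"] >= 12]
    eligible.sort(key=lambda a: a["expected_lift"], reverse=True)
    return [a["id"] for a in eligible[:max_changes]]

# Hypothetical batch of accounts from an ongoing account management run.
accounts = [
    {"id": 1, "protected_by_regulation": False, "months_on_book": 24, "expected_lift": 50},
    {"id": 2, "protected_by_regulation": True,  "months_on_book": 36, "expected_lift": 90},
    {"id": 3, "protected_by_regulation": False, "months_on_book": 6,  "expected_lift": 80},
    {"id": 4, "protected_by_regulation": False, "months_on_book": 18, "expected_lift": 70},
]
```

Note that the highest-lift account (id 2) is excluded outright by the compliance constraint – the point of embedding the constraints in the strategy rather than applying them after the fact.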

Published: November 30, 2009 by Guest Contributor

In my last blog, I discussed the basic concept of a maturation curve, as illustrated below:

Exhibit 1

In Exhibit 1, we examine different vintages, beginning with loans originated from Q2 2002 through Q2 2008. The purpose of the vintage analysis is to identify those vintages that have a steeper slope toward delinquency – the delinquency maturation curve. The X-axis represents a timeline in months from month of origination, and the Y-axis represents the 90+ delinquency rate expressed as a percentage of balances in the portfolio. Vintages with a steeper slope have reached a normalized level of delinquency sooner and could, in fact, have a trend line suggesting that they will overshoot the expected delinquency rate for the portfolio based upon credit quality standards.

So how can you use a maturation curve as a portfolio management tool? As a consultant, I spend a lot of time with clients trying to understand issues such as why their charge-offs are higher than plan (budget). I also investigate whether excess credit costs are related to collections effectiveness, collections strategy, collections efficiency, credit quality or a poorly conceived budget. I recall one such engagement where different functional teams within the client's organization were pointing fingers at each other because their budget had evaporated. One look at their maturation curves and I had the answers I needed. I noticed that two vintages per year had maturation curves that pointed due north, with a much steeper slope than all other months of the year. Why would only two months of originations each year perform so differently from all other vintages? I went back to my career experiences in banking, where I worked for a large regional bank that ran marketing solicitations several times a year.
Each of these programs was targeted to prospects that, in most instances, were out-of-market – in other words, outside of the bank's branch footprint. Bingo! I got it! The client was soliciting new customers outside its market and was likely getting adverse selection. While it targeted the "right" customers – those with credit scores and credit attributes within an acceptable range – the best of that targeted group was not interested in accepting the offer, because they did not do business with my client and would prefer to do business with an in-market player. Meanwhile, the lower-grade prospects were accepting the offers, because they were a better deal than they could get in-market. The result was adverse selection... and what I was staring at was the "smoking gun" I'd been looking for with these twice-a-year vintages that reached the moon in terms of delinquency. That's the value of building a maturation curve analysis – it identifies specific vintages with characteristics more adverse than others. I also use the information to target those adverse populations and track the performance of specific treatment strategies aimed at containing losses on those segments. You might use this to identify which origination vintages of your home equity portfolio are most likely to migrate to higher levels of delinquency, then use credit bureau attributes to identify specific borrowers for an early lifecycle treatment strategy. As that beer commercial says – "brilliant!"
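Mechanically, a maturation curve is just the 90+ delinquency rate computed by vintage and month on book. A minimal sketch, with an illustrative (invented) input layout of loan-month observations:

```python
from collections import defaultdict

def maturation_curves(loans):
    """Build 90+ delinquency maturation curves: for each origination
    vintage, the share of balances 90+ days past due at each month on
    book. `loans` is an iterable of (vintage, month_on_book, balance,
    is_90_plus) observations; the layout is hypothetical."""
    bal = defaultdict(float)   # total balance per (vintage, month on book)
    dpd = defaultdict(float)   # 90+ balance per (vintage, month on book)
    for vintage, mob, balance, is_90_plus in loans:
        bal[(vintage, mob)] += balance
        if is_90_plus:
            dpd[(vintage, mob)] += balance
    return {key: dpd[key] / bal[key] for key in bal}

# Tiny made-up sample: a recent vintage running hot at 12 months on book,
# an older vintage performing well.
loans = [
    ("2007Q2", 12, 100.0, True), ("2007Q2", 12, 300.0, False),
    ("2003Q2", 12, 100.0, True), ("2003Q2", 12, 900.0, False),
]
curves = maturation_curves(loans)
```

Plotting each vintage's rates against month on book gives the curves described above; the steep-sloped vintages stand out immediately.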

Published: November 25, 2009 by Jeff Bernstein

By: Tom Hannagan

Understanding RORAC and RAROC

I was hoping someone would ask about these risk management terms... and someone did. The obvious answer is that the "A" and the "O" are reversed. But there's more to it than that. First, let's see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point towards both RORAC and RAROC. ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets. This used to be the most popular way of comparing banks to each other and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although this tends to be those at smaller banks. ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have "assets" for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, let's just say it's too late to read about this general topic.
The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other, and may even allow banks to be compared to other types of businesses. The asset-independence of ROE can also allow a bank to compare internal product lines to each other. Perhaps most importantly, this permits looking at the comparative profitability of lines of business that are almost complete opposites, like lending versus deposit services. This includes risk-based pricing considerations. It would be difficult, if at all possible, using ROA. ROE also tells us how effectively a bank (or any business) is using shareholders' equity. Many observers prefer ROE, since equity represents the owners' interest in the business. As we have all learned anew in the past two years, their equity investment is fully at-risk. Equity holders are paid last, compared to other sources of funds supporting the bank. Shareholders are the last in line if the going gets rough. So, equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization. So, why bother with RORAC or RAROC? In short, it is to bring risk more fully into the process of performance measurement within the institution. ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (loan loss provision).
The big risk management elements missing in general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL is covered by the Equity account, or the solvency of the bank becomes an issue. RORAC is Net Income divided by Allocated Capital. RORAC doesn't add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this by moving beyond just book or average Equity, allocating capital, or equity, differentially to various lines of business and even specific products and clients. This, in turn, makes it possible to move towards risk-based pricing at the relationship management level as well as portfolio risk management. This equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So, it's a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And many of us do. RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. RAROC, like RORAC, also takes into account the risk of unexpected loss by allocating capital, or equity, differentially to various lines of business and even specific products and clients. So, RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric or ratio of profitability and is an ultimate goal of modern risk management.
So, RORAC is a big step in the right direction, and RAROC is the full step; RORAC can be a useful stage on the way to RAROC. RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level and can also be broken down for any and all lines of business within the organization. From there, it can be further broken down to the product level and the client relationship level, and summarized by lender portfolio or various market segments. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as it is on operational or marketing prowess.
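The relationships among the four ratios reduce to a few lines of arithmetic. In this sketch the figures are invented, and the single `risk_adjustments` charge in RAROC stands in for the unmitigated market, downturn-credit and operational risk charges described above:

```python
def roa(net_income, avg_assets):
    """Return on Assets: GL net income over average assets."""
    return net_income / avg_assets

def roe(net_income, equity):
    """Return on Equity: GL net income over book equity."""
    return net_income / equity

def rorac(net_income, allocated_capital):
    """Return on Risk-Adjusted Capital: GL net income over the capital
    allocated for unexpected loss (the denominator is risk-adjusted)."""
    return net_income / allocated_capital

def raroc(net_income, risk_adjustments, allocated_capital):
    """Risk-Adjusted Return on Capital: net income is first charged for
    unmitigated risks, then divided by allocated capital (both the
    numerator and the denominator are risk-adjusted)."""
    return (net_income - risk_adjustments) / allocated_capital
```

With hypothetical figures – net income of 12, average assets of 1,000, book equity of 100, allocated capital of 80 and a risk charge of 2 – the four ratios come out to 1.2%, 12%, 15% and 12.5% respectively, making the direction of each adjustment easy to see.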

Published: November 19, 2009 by Guest Contributor

A good decision can generate $150 or more in customer net present value, while a bad decision can cost you $1,000 or more. For example, acquiring a new and profitable customer through good prospecting, approval and pricing decisions may generate $150 or much more in customer net present value and help you increase net interest margin and other key metrics, while a bad decision (such as approving a fraudulent applicant or inappropriately extending credit that ultimately results in a charge-off) can cost you $1,000 or more.

Why is risk management decisioning important? This issue is critical because an average-sized financial institution or telecom carrier makes as many as eight million customer decisions each year (more than 20,000 per day!), and very large financial institutions make as many as 50 billion customer decisions annually. Even a small 10-to-15 percent improvement in the quality of these customer life cycle decisions can generate substantial business benefit. Experian recommends that clients examine the types of decisioning strategies they leverage across the customer life cycle, from prospecting and acquisition to customer management and collections. By examining each type of decision, you can identify the opportunities for improvement that will deliver the greatest return on investment by leveraging credit risk attributes, credit risk modeling, predictive analytics and decision-management software.
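The arithmetic behind those figures is simple expected value. In the sketch below, the $150 and $1,000 per-decision values come from the discussion above, while the decision volume and outcome rates are invented for illustration:

```python
def annual_decision_value(n_decisions, good_rate, bad_rate,
                          value_good=150.0, cost_bad=1000.0):
    """Back-of-the-envelope expected annual value of a decision portfolio.
    `good_rate`/`bad_rate` are the shares of decisions that turn out well
    (profitable booked customer) or badly (fraud/charge-off)."""
    return n_decisions * (good_rate * value_good - bad_rate * cost_bad)

# Hypothetical: 8M decisions a year, then a modest improvement in decision
# quality (slightly more good outcomes, slightly fewer bad ones).
baseline = annual_decision_value(8_000_000, good_rate=0.20, bad_rate=0.020)
improved = annual_decision_value(8_000_000, good_rate=0.22, bad_rate=0.018)
```

With these made-up rates the improvement is worth roughly $40M a year, which is why even single-digit percentage gains in decision quality matter at this volume.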

Published: November 13, 2009 by Roger Ahern

By: Kari Michel

Most lenders use a credit scoring model in their decision process for opening new accounts; however, between 35 and 50 million adults in the US may be considered unscoreable with traditional credit scoring models. That is equivalent to 18-to-25 percent of the adult population. Due to recent market conditions and a shrinking pool of qualified candidates, lenders have taken a renewed interest in assessing the risk of this underserved population. Unscoreable consumers could be a pocket of missed opportunity for many lenders. To assess these consumers, lenders must be able to distinguish between consumers with a clear track record of unfavorable credit behaviors and those who are just beginning to develop their credit history. Unscoreable consumers can be divided into three populations:

• Infrequent credit users: consumers who have not been active on their accounts for the past six months and who prefer to use non-traditional credit tools for their financial needs.
• New entrants: consumers who do not have at least one account with more than six months of activity, including young adults just entering the workforce, recently divorced or widowed individuals with little or no credit history in their name, newly arrived immigrants, and people who avoid the traditional system by choice.
• Thin-file consumers: consumers who have fewer than three accounts and rarely utilize traditional credit, likely preferring alternative credit tools.

A study done by VantageScore® Solutions, LLC shows that a large percentage of the unscoreable population can be scored with VantageScore*, and that a portion of these consumers are creditworthy (defined as having a cumulative likelihood of becoming 90 or more days delinquent of less than 5 percent).
The following is a high-level summary of the findings for consumers who had at least one trade: Lenders can review their credit decisioning process to determine whether they have the tools in place to assess the risk of unscoreable consumers, as this population presents an opportunity for portfolio expansion, as demonstrated by the VantageScore study.

*VantageScore is a generic credit scoring model introduced to meet market demands for a highly predictive consumer score. It was developed as a joint venture among the three major credit reporting companies (CRCs) – Equifax, Experian and TransUnion.
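The three populations above translate naturally into a segmentation rule. This sketch uses invented field names together with the six-month and three-trade cut-offs from the definitions above:

```python
def unscoreable_segment(consumer):
    """Bucket a consumer with limited traditional credit data into one of
    the three unscoreable populations described above. Field names are
    illustrative, not an actual bureau layout."""
    if consumer["num_trades"] == 0:
        return "no_file"
    if not consumer["has_trade_older_than_6m"]:
        return "new_entrant"          # no account with >6 months of activity
    if consumer["months_since_last_activity"] > 6:
        return "infrequent_user"      # inactive for the past six months
    if consumer["num_trades"] < 3:
        return "thin_file"            # fewer than three accounts
    return "scoreable"
```

Each bucket would then be assessed with different tools – for example, alternative data for new entrants versus archived history for infrequent users.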

Published: November 4, 2009 by Guest Contributor

Recent findings on vintage analysis

Source: Experian-Oliver Wyman Market Intelligence Reports

Analyzing recent trends from vintages published in the Experian-Oliver Wyman Market Intelligence Reports, there are numerous insights that can be gleaned from even a cursory review of the results.

Mortgage vintage analysis trends

As noted in an earlier posting, recent mortgage vintage analyses show a broad range of behaviors between more recent vintages and older, more established vintages that were originated before the significant run-up of housing prices seen in the middle of the decade. The 30+ delinquency levels for the 2005, 2006 and 2007 mortgage vintages approach – and in two cases exceed – 10 percent of trades in the last 12 months of performance, and have spiked above historical trends beginning almost immediately after origination. On the other end of the spectrum, the 2002 and 2003 vintages have barely approached 5 percent over the last six or seven years.

Bankcard vintage analysis trends

As one would expect, the 30+ delinquency trends within bankcard vintages are vastly different from those of mortgage vintages. First, card delinquencies show a clear seasonal trend, with a consistent yearly pattern evident in all vintages, resulting from the revolving structure of the product. The most interesting trends within the card vintages show that the more recent vintages, 2005 to 2008, display higher 30+ delinquency levels, especially the Q2 2007 vintage, which is far and away the underperformer of the group. Within each vintage pool, an analysis can extend into the risk distribution and details of the portfolio and further segment the pool by credit score, specifically VantageScore – for example, isolating the loans made only to the most creditworthy customers at the time of origination.
The noticeable trend is that while these consumers were largely resistant to deteriorating economic conditions, each vintage segment has seen a spike in delinquency in the most recent 9-12 months. Given that these consumers tend to have the highest limits and lowest utilization of any VantageScore band, this trend encourages further account management consideration and raises flags about overall bankcard performance in coming months. Even a basic review of vintage pools, and the analysis opportunities that result from this data, can be extremely useful – adding a new perspective to risk management, supplementing more established analysis techniques, and further enhancing the ability to see the risk within the risk. Purchase a complete picture of consumer credit trends from Experian's database of over 230 million consumers with the Market Intelligence Brief.

Published: November 2, 2009 by Kelly Kent

By: Wendy Greenawalt

In the last installment of my three-part series dispelling credit attribute myths, we'll discuss the myth that the lift achieved by utilizing new attributes is minimal, so it is not worth the effort of evaluating and/or implementing a new attribute set. Evaluating the accuracy and efficiency of credit attributes is hard to do well; Experian's data experts are some of the best in the business, and in this edition we will discuss some of the methods Experian uses to evaluate attribute performance. When considering any new attributes, the first method we use to validate statistical performance is a head-to-head comparison, incorporating the KS (Kolmogorov-Smirnov) statistic, Gini coefficient, worst-scoring capture rate or odds ratio when comparing two samples. Once completed, we implement an established standard process to measure value from different outcomes in an automated and consistent format. While this process may be time- and labor-intensive, the reward can be found in the financial savings obtained by identifying the right segments, including:

• Risk models that better identify "bad" accounts, minimizing losses
• Marketing models that improve targeting while maximizing campaign dollars spent
• Collections models that enhance identification of recoverable accounts, leading to more recovered dollars at lower fixed cost

Recently, Experian conducted a similar exercise and found that an improvement of 2-to-22 percent in risk prediction can be achieved through the implementation of new attributes. When these metrics are applied to a portfolio where several hundred additional bad accounts are now captured, the resulting savings can add up quickly (500 accounts with an average loss of $3,000 each = $1.5M in potential savings). These savings over time more than justify the cost of evaluating and implementing new credit attributes.
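Of the comparison statistics mentioned, the KS is the easiest to sketch: it is the maximum gap between the empirical score distributions of "good" and "bad" accounts. A minimal two-sample implementation (real validations run on large samples and typically compute the Gini coefficient and odds ratios alongside):

```python
def ks_statistic(goods, bads):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of scores for good and bad accounts. A larger KS
    means the attribute or score separates the two groups better."""
    cutpoints = sorted(set(goods) | set(bads))
    ks = 0.0
    for c in cutpoints:
        cdf_goods = sum(1 for g in goods if g <= c) / len(goods)
        cdf_bads = sum(1 for b in bads if b <= c) / len(bads)
        ks = max(ks, abs(cdf_goods - cdf_bads))
    return ks
```

Running a candidate attribute set and the incumbent set through the same KS (and Gini) comparison on a common sample is exactly the head-to-head validation described above.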

Published: October 23, 2009 by Guest Contributor

By: Wendy Greenawalt

In the second installment of my three-part series dispelling credit attribute myths, we will discuss why attributes with similar descriptions are not always the same. The U.S. credit reporting bureaus are the most comprehensive in the world, and creating meaningful attributes requires extensive knowledge of the three credit bureaus' data. Ensuring that attributes are up-to-date, created by informed data experts and built on complete bureau data is essential to long-term strategic success. To illustrate why attributes with similar names may not be the same, consider a basic attribute such as "number of accounts paid satisfactory." While the definition may at first seem straightforward, once the analysis begins there are many variables that must be considered before finalizing it, including:

• Should the attribute include trades currently satisfactory, or ever satisfactory?
• Do we include paid charge-offs, paid collections, etc.?
• Are there any date parameters?
• Are there any trades that should be excluded?
• Should accounts with a final status of "paid" be included?

These types of questions, and many others, must be carefully identified and assessed to ensure the desired behavior is captured when creating credit attributes. Without careful attention to detail, a simple attribute definition could include behavior that was not intended, which could negatively impact the risk level associated with an organization's portfolio. Our recommendation is to complete a detailed analysis up front and always validate the results to ensure the desired outcome is achieved. Incorporating this best practice will help guarantee that the credit attributes created capture the behavior intended.
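Those design questions can be made concrete by exposing each one as a parameter. The trade layout below is invented – real bureau data is far richer – but it shows how two "number of accounts paid satisfactory" attributes with the same name can count different things:

```python
def num_accounts_paid_satisfactory(trades, ever=True,
                                   include_paid_chargeoffs=False,
                                   lookback_months=None):
    """One possible definition of "number of accounts paid satisfactory",
    with the design questions above exposed as explicit parameters.
    Trade fields are illustrative, not an actual bureau layout."""
    count = 0
    for t in trades:
        if not include_paid_chargeoffs and t["was_chargeoff"]:
            continue  # excluded unless the definition admits paid charge-offs
        if lookback_months is not None and t["months_since_reported"] > lookback_months:
            continue  # outside the date parameter
        satisfactory = t["ever_satisfactory"] if ever else t["currently_satisfactory"]
        if satisfactory:
            count += 1
    return count

# Three hypothetical trades that each answer the questions differently.
trades = [
    {"was_chargeoff": False, "months_since_reported": 2,
     "ever_satisfactory": True, "currently_satisfactory": True},
    {"was_chargeoff": True, "months_since_reported": 5,
     "ever_satisfactory": True, "currently_satisfactory": False},
    {"was_chargeoff": False, "months_since_reported": 30,
     "ever_satisfactory": True, "currently_satisfactory": False},
]
```

On this sample the "same" attribute returns 2, 3, 1 or 1 depending on how the four questions are answered – which is exactly why similar names do not imply identical definitions.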

Published: October 21, 2009 by Guest Contributor

By: Wendy Greenawalt

This blog kicks off a three-part series exploring some common myths regarding credit attributes. Since Experian has relationships with thousands of organizations spanning multiple industries, we often get asked the same types of questions from clients of all sizes. One question we hear frequently is: we already have credit attributes in place, so is there any benefit to implementing a new attribute set? Our response is that while existing credit attributes may continue to be predictive, changes to the type of data available from the credit bureaus can provide benefits when evaluating consumer behavior. To illustrate this point, let's discuss a common problem that most lenders are facing today: collections. Delinquency and charge-off continue to increase, and many organizations are having difficulty determining the appropriate action to take on an account because consumer behavior has drastically changed. New codes and fields are now reported to the credit bureaus and can be effectively used to improve collection-related activities. Specifically, attributes can now be created to help identify consumers who are rebounding from previous account delinquencies. In addition, lenders can evaluate the number and outstanding balances of collection or other types of trades, while considering the percentage of accounts that are delinquent and the specific types of accounts affected. Utilizing this type of data helps an organization make collection decisions based on very granular account data, while considering new consumer trends such as strategic default. Understanding all of the consumer variables will enable an organization to decide whether the account should be allowed to self-cure, whether immediate action should be taken, or whether the account terms should be modified.
Incorporating new data sources and updating attributes on a regular basis allows lenders to react quickly to market trends and manage their strategies proactively.
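The collection attributes described above can be illustrated with a minimal sketch. The record layout and field names below are hypothetical, not an actual credit bureau format; the point is simply how granular tradeline data rolls up into account-level attributes such as collection balances, delinquency rates, and a "rebounding" flag.

```python
# Hypothetical sketch: deriving collection-related attributes from one
# consumer's tradeline records. Field names are illustrative only.

def collection_attributes(tradelines):
    """Summarize collection activity and delinquency for one consumer."""
    collections = [t for t in tradelines if t["type"] == "collection"]
    delinquent = [t for t in tradelines if t["days_past_due"] > 0]
    # A consumer is "rebounding" if at least one trade was delinquent
    # in the past but is current now.
    rebounding = any(
        t["worst_historical_dpd"] > 0 and t["days_past_due"] == 0
        for t in tradelines
    )
    return {
        "num_collection_trades": len(collections),
        "collection_balance": sum(t["balance"] for t in collections),
        "pct_delinquent": len(delinquent) / len(tradelines) if tradelines else 0.0,
        "rebounding": rebounding,
    }

sample = [
    {"type": "bankcard", "balance": 1200,
     "days_past_due": 0, "worst_historical_dpd": 60},
    {"type": "collection", "balance": 450,
     "days_past_due": 90, "worst_historical_dpd": 90},
]
attrs = collection_attributes(sample)
```

In this example the consumer has one open collection trade but is current on a previously delinquent bankcard, so the attributes would support letting the account self-cure rather than taking immediate action.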

Published: October 20, 2009 by Guest Contributor

When reviewing offers for prospective clients, lenders often face a significant amount of missing information when assessing the outcomes of their lending decisions: Why did a consumer accept an offer with a competitor? What differentiated the other offers from mine? What happened to the consumers we declined? Did they perform as expected, or better than anticipated? While lenders can readily understand the implications of the loans they have offered and booked, they often have little information about two important groups of consumers:

1. Lost leads: consumers to whom they made an offer but did not book
2. Proxy performance: consumers to whom financing was not offered, but who found financing elsewhere

Performing a lost lead analysis on approved and declined applications can provide considerable insight into the outcomes and credit performance of consumers who were not added to the lender's portfolio. A lost lead analysis can also help answer key questions for each of these groups: How many of these consumers accepted credit elsewhere? What were their credit attributes? What are the credit characteristics of the consumers we're not booking? Were these loans booked by one of my peers or by another type of lender? What were the terms and conditions of those offers? How did the loans booked elsewhere perform?

Within each group, further analysis can give lenders actionable feedback on the implications of their lending policies and may identify opportunities for changes that better fulfill lending objectives. Some key questions can be answered with this information: Are competitors offering longer repayment terms? Are peers offering lower interest rates to the same consumers? Are peers accepting lower-scoring consumers to increase market share?

The results of a lost lead analysis can either confirm that the competitive marketplace is behaving in a manner that matches the lender's perspective, or shine a light on aspects of the market where policy changes may lead to superior results. In either case, the information is invaluable for making sound decisions in today's highly sensitive lending environment.
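A lost lead comparison like the one described above can be sketched as a simple side-by-side summary of the lender's offers versus the terms the same consumers accepted elsewhere. All field names and figures below are illustrative assumptions, not real market data.

```python
# Hypothetical sketch of a lost lead analysis: the lender's unbooked
# offers compared with the competing offers those consumers accepted.

def summarize(offers):
    """Average the key terms across a set of offers."""
    n = len(offers)
    return {
        "avg_apr": sum(o["apr"] for o in offers) / n,
        "avg_term_months": sum(o["term_months"] for o in offers) / n,
        "avg_score": sum(o["score"] for o in offers) / n,
    }

my_offers = [
    {"apr": 7.9, "term_months": 60, "score": 720},
    {"apr": 8.5, "term_months": 60, "score": 690},
]
competitor_bookings = [  # where the same consumers financed instead
    {"apr": 6.9, "term_months": 72, "score": 720},
    {"apr": 7.4, "term_months": 72, "score": 690},
]

mine = summarize(my_offers)
lost = summarize(competitor_bookings)

# Are peers offering longer terms or lower rates to the same consumers?
longer_terms = lost["avg_term_months"] > mine["avg_term_months"]
lower_rates = lost["avg_apr"] < mine["avg_apr"]
```

In this illustrative case the lost leads were booked elsewhere at lower rates and longer terms, the kind of finding that would prompt a review of pricing and term policies.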

Published: October 11, 2009 by Kelly Kent

By: Kristan Keelan

What do you think of when you hear the word "fraud"? Someone stealing your personal identity? Perhaps the recent news story of the five individuals indicted for gaining more than $4 million from 95,000 stolen credit card numbers? It's unlikely that small business fraud was at the top of your mind. Yet, just like consumers, businesses face a broad range of first- and third-party fraud behaviors that vary significantly in frequency, severity and complexity. These business-related fraud trends call for new fraud best practices to minimize losses.

First, let's look at first-party fraud. A first-party, or victimless, fraud profile is characterized by some form of material misrepresentation (for example, misstating revenue figures on the application) by a business owner who has no intent or immediate capacity to repay the resulting obligation. Historically, this type of fraud is more common during periods of economic downturn or misfortune. That intuitively makes sense: individuals under extreme financial pressure are more likely to resort to desperate measures, such as misstating financial information on an application to obtain credit.

Third-party commercial fraud occurs when a third party steals the identification details of a known business or business owner in order to open credit in the business victim's name. With creditors becoming more stringent in their credit-granting policies on new accounts, we're seeing seasoned fraudsters shift their focus to taking over existing business or business owner identities.

Overall, fraudsters seem to be migrating from consumer to commercial fraud. I think one of the most common reasons is that commercial fraud doesn't receive the same amount of attention as consumer fraud, so it has become easier for fraudsters to slip under the radar by perpetrating their crimes through the commercial channel. Also, keep in mind that businesses are often not seen as victims in the same way consumers are. For example, victimized businesses aren't afforded the protections that consumers receive under identity theft laws, such as access to credit information. These factors, coupled with the fact that business-to-business fraud is approximately three to ten times more "profitable" per occurrence than consumer fraud, are leading fraudsters increasingly toward commercial fraud.

Published: September 24, 2009 by Guest Contributor
