Financial Services


Contributed by: David Daukus As the economy finally starts to turn around, albeit with hiccups, and demand for new credit picks up, creditors are loosening their lending criteria to grab market share. However, it is important for lenders to keep the lessons of the past in mind to avoid repeating the same mistakes. With multiple government agencies such as the CFPB, OCC, FDIC and NCUA, and new regulations, banking compliance is more complex than ever. That said, there are certain foundational elements that hold true. One such important aspect is keeping a consistent and well-balanced risk management approach. Another key aspect is concentration risk, where a significant amount of risk is focused in certain portfolios across specific regions, risk tiers, etc. (Think back to 2007/2008, when some financial institutions focused on making stated-income mortgages and other riskier loans.) In 2011, the Federal Reserve Board of Governors released a study outlining the key reasons for bank failures. This review focused mainly on 20 bank failures from June 29, 2009 through June 30, 2011, where more in-depth reporting and analysis had been completed after each failure. According to the Federal Reserve Board of Governors, here are the four key reasons for the failed banks: (1) Management pursuing robust growth objectives and making strategic choices that proved to be poor decisions; (2) Rapid loan portfolio growth exceeding the bank’s risk management capabilities and/or internal controls; (3) Asset concentrations tied to commercial real estate or construction, land, and land development (CLD) loans; (4) Management failing to have sufficient capital to cushion mounting losses. So, what should be done? Besides adherence to the new regulations that have been sprouting up to save us all from another financial catastrophe, diversification of risk may be the name of the game. A successful risk management approach requires the right mix of the following steps: (1) analyze portfolios and needs; (2) predict high-risk accounts; (3) create comprehensive credit policies; (4) decision for risk and retention; and (5) refresh scores/attributes and policies. So, now is a great time to renew your focus. Source: Federal Reserve Board of Governors, Summary Analysis of Failed Bank Reviews (9/2011)
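To make the concentration point concrete, here is a rough sketch in Python that tallies how much of a portfolio sits in each segment and flags anything above an assumed policy limit. The segments, balances and the 25% limit are purely illustrative and are not drawn from the Fed study.

# Illustrative sketch: flag portfolio concentrations by segment.
# Segment labels, balances, and the 25% threshold are hypothetical.
from collections import defaultdict

loans = [
    {"segment": "CRE", "balance": 2_500_000},
    {"segment": "CLD", "balance": 1_800_000},
    {"segment": "C&I", "balance": 900_000},
    {"segment": "Consumer", "balance": 600_000},
]

CONCENTRATION_LIMIT = 0.25  # assumed policy limit per segment

total = sum(loan["balance"] for loan in loans)
by_segment = defaultdict(float)
for loan in loans:
    by_segment[loan["segment"]] += loan["balance"]

for segment, balance in sorted(by_segment.items(), key=lambda kv: -kv[1]):
    share = balance / total
    flag = "REVIEW" if share > CONCENTRATION_LIMIT else "ok"
    print(f"{segment:10s} {share:6.1%}  {flag}")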

Published: July 26, 2012 by

By: Stacy Schulman Earlier this week the CFPB announced a final rule addressing its role in supervising certain credit reporting agencies, including Experian and others that are large market participants in the industry. To view the original content, see Experian and the CFPB - Both Committed to Helping Consumers. During a field hearing in Detroit, CFPB Director Richard Cordray spoke about a new regulatory focus on the accuracy of the information received by the credit reporting companies, the role they play in assembling and maintaining that information, and the process available to consumers for correcting errors. We look forward to working with the CFPB on these important priorities. To read more about how Experian prioritizes these information essentials for consumers, clients and shareholders, visit the Experian News blog. Learn more about Experian’s view of the Consumer Financial Protection Bureau. ___________________ Original content provided by: Tony Hadley, Senior Vice President of Government Affairs and Public Policy About Tony: Tony Hadley is Senior Vice President of Government Affairs and Public Policy for Experian. He leads the corporation’s legislative, regulatory and policy programs relating to consumer reporting, consumer finance, direct and digital marketing, e-commerce, financial education and data protection. Hadley leads Experian’s legislative and regulatory efforts with a number of trade groups and alliances, including the American Financial Services Association, the Direct Marketing Association, the Consumer Data Industry Association, the U.S. Chamber of Commerce and the Interactive Advertising Bureau. Hadley is Chairman of the National Business Coalition on E-commerce and Privacy.

Published: July 18, 2012 by Guest Contributor

With the constant (and improving!) changes in the consumer credit landscape, understanding the latest trends is vital for institutions to validate current business strategies or make adjustments to shifts in the marketplace. For example, a recent article in American Banker described how a couple of housing advocates who foretold the housing crisis in 2005 are now promoting a return to subprime lending. Good story lead-in, but does it make sense for “my” business? How do you profile this segment of the market and its recent performance? Are there differences by geography? What other products are attracting this risk segment that could raise concerns for meeting a new mortgage obligation? There is a proliferation of consumer loan and credit information online from various associations and organizations, but in a static format that still makes it challenging to address these types of questions. Fortunately, new web-based solutions are being made available that allow users to access and interrogate consumer trade information 24x7 and keep abreast of constantly changing market conditions. The ability to manipulate and tailor data by geography, VantageScore risk segments and institution type is just a mouse click away. More importantly, these tools allow users to customize the data to meet specific business objectives, so the next subprime lending headline is not just a story, but a real business opportunity based on objective, real-time analysis.

Published: July 15, 2012 by Alan Ikemura

As a scoring manager, this question has always stumped me because there was never a clear answer. It simply meant less than prime – but how much less? What does the term actually mean? How do you quantify something so subjective? Do you assign it a credit score? Which one? There were definitely more questions than answers. But a new proposed ruling from the FDIC could change all that – at least when it comes to large bank pricing assessments. The proposed ruling does a couple of things to bring clarity to the murky waters of the subprime definition. First, it replaces the term “subprime” with “high-risk consumer loans”. Then it goes one better: it quantifies high-risk as having a 20% probability of default or higher. Finally, something we can calculate! The arbitrary 3-digit credit score that has been used in the past to define the line between prime and subprime has several flaws. First of all, if a subprime loan is defined as having any particular credit score, it has to be for a specific version of a specific model at a specific time. That’s because the default rate associated with any given score is relative to the model used to calculate it. There are hundreds of custom-built and generic scoring models in use by lenders today – does that single score represent the same level of risk to all of them? Absolutely not. And even if all risk models were calibrated exactly the same, just assigning credit risk a number has no real meaning over time. We all know that scores shift and that consumer credit behavior is not the same today as it was just 6 years ago. In 2006, if a score of X represented a 15% likelihood of default, that same score today could represent 20% or more. It is far better to align a definition of risk with its probability of default to begin with! While it currently applies only to large bank pricing assessments with the FDIC, this proposed ruling is a great step in the right direction. As this new approach catches on, we may see it start to move into other policies and be adopted by various organizations as they assess risk throughout the lending cycle.
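To see how the PD-based definition plays out, here is a minimal sketch. The 20% threshold comes from the proposed ruling; the score-to-PD table is hypothetical, since the odds attached to any score depend on the specific model and point in time, which is exactly the point above.

# Illustrative sketch of the PD-based definition: a loan is "high-risk"
# when its estimated probability of default is 20% or more, regardless of
# which scoring model produced the estimate. The score-to-PD table below
# is hypothetical; real odds come from each model's own validation.
HIGH_RISK_PD = 0.20

# Hypothetical calibration for one particular model version.
score_to_pd = {500: 0.35, 550: 0.28, 600: 0.21, 650: 0.15, 700: 0.08, 750: 0.04}

def is_high_risk(pd_estimate: float) -> bool:
    """Apply the proposed definition: PD >= 20% is high-risk."""
    return pd_estimate >= HIGH_RISK_PD

for score, pd in sorted(score_to_pd.items()):
    label = "high-risk" if is_high_risk(pd) else "not high-risk"
    print(f"score {score}: PD {pd:.0%} -> {label}")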

Published: July 13, 2012 by Veronica Herrera

By: Mike Horrocks This week, several key financial institutions will be submitting their “living wills” to Washington as part of the Dodd-Frank legislation. I have some empathy for how those institutions will feel as they submit these living wills. I don’t think that anyone would say writing a living will is fun. I remember when my wife and I felt compelled to have one in place, as we realized that we did not want to leave any questions unanswered for our family. For those not familiar with the concept of the living will, I thought I would first look at the more widely known medical description. The Mayo Clinic describes living wills as follows: “Living wills and other advance directives describe your preferences regarding treatment if you're faced with a serious accident or illness. These legal documents speak for you when you're not able to speak for yourself — for instance, if you're in a coma.” Now imagine a bank in a coma. I appreciate the fact that these living wills are taking place, but pulling out my business law books, I seem to recall that one of the benefits of a corporation versus, say, a sole proprietorship is that the corporation can basically be immortal or even eternal. In fact, the Dictionary.com definition calls out that a corporation has “a continuous existence independent of the existences of its members”. So now imagine a bank eternally in a coma. Now, I cannot avoid all of those unexpected risks that will come up in my personal life, like an act of God, that may put me into a coma and invoke my living will, but I can do things voluntarily to make sure that I don’t visit the Emergency Room any time soon. I can exercise, eat right, control my stress and take other healthy steps, and in fact I meet with a health coach to monitor and track these things. Banks can take those same steps too. They can stay operationally fit, lend right, and monitor the stress in their portfolios. They can have their health plans in place and have a personal trainer to help them stay fit (and maybe even push them to levels of fitness they did not think they could reach). Now imagine a fit, strong bank. So as printers churn, inboxes fill, and regulators read through thousands of pages of bank living wills, let’s think of the gym coach or personal trainer who pushed us to improve, and think about how we can be healthy and fit and avoid the not-so-pleasant alternative of addressing a financial coma.

Published: July 2, 2012 by Guest Contributor

By: Joel Pruis From a score perspective we have established the high-level standards/reporting that will be needed to stay on top of the resulting decisions. But there is a lot of further detail that should be considered and further segmentation that must be developed or maintained. Auto Decisioning A common misperception around auto-decisioning and the use of scorecards is that it is an all or nothing proposition. More specifically, that if you use scorecards, you have to make the decision entirely based upon the score. That is simply not the case. I have done consulting work after a decisioning strategy was built on this misperception, and the results are not pretty. Overall, the highest percentage for auto-decisioning that I have witnessed has been in the 25 – 30% range, and the emphasis is on the “segment”. The segment is typically the lower-dollar requests, say $50,000 or less; the percentage does not apply across the entire application population. This leads into the discussion around the various segments and the decisioning strategy around each segment. One other comment around auto-decisioning. The definition used in this blog is a systematic decision made without human intervention. I have heard comments such as “competitors are auto-decisioning up to $1,000,000”. The reality behind such comments is that the institution is granting loan authority to an individual to approve an application should it meet the particular financial ratios and other criteria. The human intervention comes from verifying that the information has been captured correctly and that the financial ratios make sense relative to the final result. That last statement is the key to the disqualification of “auto-decisioning”. The individual is given the responsibility to ensure data quality and to ensure nothing else is odd or might disqualify the application from approval or declination. Once a human eye is looking at an application, judgment comes into the picture and we introduce the potential for inconsistencies and/or extension of time to render the decision. Auto-decisioning is just that: automatic. It is a yes/no decision based upon objective factors that, if met, allow the decision to be made. Other factors, if not included in the decision strategy, are not considered. So, my fellow credit professionals, should you hear someone say they are auto-decisioning a high percentage of their applications or a high dollar amount per application, challenge, question and dig deeper. Treat it like the fishing story “I caught a fish THIS BIG”. No financials segment This is the highest-volume, lowest-total-dollar production area of any business banking/small business product set. We discussed the use of financials in the prior blog around application requirements, so I will not repeat that discussion here. Our focus will be on the decisioning of these applications. Using score and application characteristics as the primary data source, this segment is the optimal segment for auto-decisioning. It speeds the decision process and provides the greatest amount of consistency in the decisions rendered. Two key areas for this segment are risk premiums and scorecard validations. The risk premium is important, as you are going to accept a higher level of losses for the sake of efficiencies in the underwriting/processing of the application. The end result is lower operational costs and relatively higher credit losses, but the end yield on this segment meets the required, yet practical, thresholds for return.
The one thing that I will repeat from a prior blog is that you may request financials after the initial review, but the frequency should be low and should also be monitored. The request of financials should not be a “belt and suspenders” approach. If you know what the financials are likely to show, then don’t request them. They are unnecessary. You are probably right, and the collection of the financials will only serve to elongate the response time, frustrate everyone involved in the process and not change the expected results. Financials segment This is the relatively lower unit volume but higher dollar volume segment. Likely this segment will have no auto-decisioning, as the review of financials typically mandates a judgmental review. From an operational perspective, these are high-dollar requests and thus the manual review does not push this segment into a losing proposition. From a potential operational lift perspective, the ability to drive a higher volume of applications into auto-decisioning is simply not available, as we are talking about probably less than 40% (if not less) of all applications in this segment. In this segment, consistency becomes more difficult, as the underwriter tends to want to put his/her own approach on the deal. Standardization of the analysis approach (at least initially) is critical for this segment. Consistency in the underwriting and the various criteria allows for greater analysis to determine where issues are developing or where we are realizing the greatest success. My recommended approach is to standardize (via automation in the origination platform) the various calculations in a manner that will generate the most conservative result. Bluntly put, my approach was to attempt to make the deal look as ugly as possible; if it still passed the various criteria, no additional work was needed, nor was there any need for a detailed explanation of how I justified the deal/request. Only if it did not meet the criteria using the most conservative approach would I need to do any work, and only if it was truly going to make a difference. Basic characteristics in this segment include business cash flow, personal debt-to-income, global cash flow and leverage. Others may be added, but on a case-by-case basis. What about the score? If I am doing so much judgmental underwriting, why calculate the score in this segment? In a nutshell, to act as the risk rating methodology for the portfolio approach. Even with the judgmental approach, we do not want to fall into the trap of thinking we are going to be able to adequately monitor this segment in a proactive fashion to justify the risk rating at any point in time after the loan is booked. We have been focusing on the origination process in this blog series, but I need to point out that since we are not going to be doing a significant amount of financial statement monitoring in the small business segment, we need to begin to move away from the 1 – 8 (or 9 or 10 or whatever) risk rating method for the small business segment. We cannot be granular enough with this rating system, nor can we constantly stay on top of what may be changing risk levels related to the individual clients. But I am going to save the portfolio management area for a future blog. Regardless of the segment, please keep in mind that we need to be able to access the full detail of the information that is being captured during the origination process along with the subsequent payment performance.
As you are capturing the data, keep in mind the ability to: (1) access this data for purposes of analysis; (2) connect the data from origination to the payment performance data to effectively validate the scorecard and the underwriting/decisioning strategies; and (3) dive into the details to find the root cause of a performance problem or success. The topic of decisioning strategies is broad, so please let me know if you have any specific topics that you would like addressed or questions that we might be able to post for responses from the industry.
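As a rough sketch of the segmentation logic described in this post, the snippet below auto-decisions only the small-dollar, no-financials segment on score and routes everything else to manual review. The $50,000 boundary follows the post; the scorecard cutoffs and field names are assumptions for illustration only.

# Minimal sketch of the segmentation described above. The $50,000 boundary
# follows the post; the score cutoffs and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    amount: float
    score: int
    financials_provided: bool

APPROVE_CUTOFF = 220   # assumed scorecard cutoffs
DECLINE_CUTOFF = 180

def decide(app: Application) -> str:
    # Only the small-dollar, no-financials segment is auto-decisioned.
    if app.amount <= 50_000 and not app.financials_provided:
        if app.score >= APPROVE_CUTOFF:
            return "auto-approve"
        if app.score < DECLINE_CUTOFF:
            return "auto-decline"
        return "manual review"        # gray zone goes to an underwriter
    return "manual review"            # financials segment is judgmental

print(decide(Application(amount=35_000, score=240, financials_provided=False)))
print(decide(Application(amount=250_000, score=240, financials_provided=True)))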

Published: June 29, 2012 by Guest Contributor

Recently we released a white paper that emphasizes the need for better, more granular indicators of local home-market conditions and borrower home equity, with a very interesting new finding on leading indicators in local-area credit statistics. Click here to download the white paper: Home-equity indicators with new credit data methods for improved mortgage risk analytics (Experian white paper, April 2012). In the run-up to the U.S. housing downturn and financial crisis, perhaps the greatest single risk-management shortfall was poorly predicted home prices and borrower home equity. This paper describes new improvements in housing market indicators derived from local-area credit and real-estate information. True housing markets are very local, and until recently, local real-estate data have not been systematically available and interpreted for broad use in modeling and analytics. Local-area credit data, similarly, is relatively new, and its potential for new indicators of housing market conditions is studied here in Experian’s Premier Aggregated Credit StatisticsSM. Several examples provide insights into home-equity indicators for improved mortgage models, predictions, strategies, and combined LTV measurement. The paper finds that for existing mortgages evaluated with current combined LTV and borrower credit score, local-area credit statistics are an even stronger add-on default predictor than borrower credit attributes. Click here to download the white paper. Authors: John Straka and Chuck Robida, Experian; Michael Sklarz, Collateral Analytics

Published: June 22, 2012 by Guest Contributor

Previously, we looked at the various ways a dual score strategy could help you focus in on an appropriate lending population. Find your mail-to population with a prospecting score on top of a risk score; locate the riskiest of all consumers by layering a bankruptcy score with your risk model. But other than multiple scores, what other tools can be used to improve credit scoring effectiveness? Credit attributes add additional layers of insight from a risk perspective. Not everyone who scores an 850 represents the same level of risk once you start interrogating their broader profile. How much total debt are they carrying? What is the nature of it - is it mortgage or mostly revolving? A credit score may not fully articulate a consumer as high risk, but if their debt obligations are high, they may represent a very different type of risk than another consumer with the same 850 score. Think of attribute overlays in terms of tuning the final score valuation of an individual consumer by making the credit profile more transparent, allowing a lender to see more than just the risk odds associated with the initial score. Attributes can also help you refine offers. A consumer may be right for you in terms of risk, but are you right for them? If they have 4 credit cards with $20K limits each, they’re likely going to toss your $5K card offer in the trash. Attributes can tell us these things, and more. For example, while a risk score can tell us what the risk of a consumer is within a set window, certain credit attributes can tell us something about the stability of that consumer to remain within that risk band. Recent trends in score migration – the change in the level of creditworthiness of a consumer subsequent to generation of a current credit score – can undermine the most conservative of risk management policies. At the height of the recession, VantageScore LLC studied the migration of scores across all risk bands and was able to identify certain financial management behaviors found within consumers’ credit files. These behaviors (signaling, credit footprint, and utility) assess the consumer’s likelihood of improving, significantly deteriorating, or maintaining a stable score over the next 12 months. Knowing which subgroup of your low-risk population is deteriorating, or which high-risk groups are improving, can help you make better decisions today.
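The overlay idea can be illustrated with a small sketch: two consumers with the same 850 score are routed to different offers once total debt, debt mix and existing card limits are considered. The attribute names and cutoffs are hypothetical and not tied to any particular attribute set.

# A minimal sketch, with hypothetical attributes and cutoffs, of layering credit
# attributes on top of a risk score: two consumers with the same score can be
# routed differently once total debt, debt mix, and existing limits are considered.
def overlay_decision(score: int, total_debt: float, revolving_share: float,
                     highest_card_limit: float) -> str:
    if score < 700:
        return "decline"
    # Same-score consumers diverge on attribute overlays.
    if total_debt > 150_000 and revolving_share > 0.6:
        return "approve - reduced line"      # heavy revolving debt load
    if highest_card_limit >= 20_000:
        return "approve - premium offer"     # a $5K card offer would be ignored
    return "approve - standard offer"

print(overlay_decision(score=850, total_debt=40_000, revolving_share=0.2, highest_card_limit=22_000))
print(overlay_decision(score=850, total_debt=180_000, revolving_share=0.7, highest_card_limit=8_000))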

Published: June 12, 2012 by Veronica Herrera

One of the most successful best practices for improving agency performance is the use of scorecards for assessing and rank-ordering the performance of agencies in competition with each other. Much like people, agencies thrive when they understand how they are evaluated, how to influence the factors that contribute to success, and the recognition and reward for top-tier performance. Rather than a simple view of performance based upon a recovery rate as a percentage of total inventory, best practice suggests that performance is more accurately reflected in vintage batch liquidation and peer group comparisons to the liquidation curve. Why? In a nutshell, differences in inventory aging and the liquidation curve. Let’s explain this in greater detail. Historically, collection agencies would provide their clients with better performance reporting than their clients had available to them. Clients would know how much business was placed in aggregate, but not by specific vintage relating to the month or year of placement. Thus, when a monthly remittance was received, the client would be incapable of understanding whether this month’s recoveries were from accounts placed last month, this year, or three years ago. This made forecasting of future cash flows from recoveries difficult, in that you would have no insight into where the funds were coming from. We know that as a charged-off debt ages, its future liquidation rate is generally downward sloping (the exception is auto finance debt, as there is a delay between the time of charge-off and rehabilitation of the debtor, so future flows are higher beyond the 12-24 month timeframe). How would you know how to predict future cash flows and liquidation rates without understanding the different vintages in the overall charged-off population available for recovery? This lack of visibility into liquidation performance created another issue. How do you compare the performance of two different agencies without understanding the age of the inventory and how it is liquidating? As an example, let’s assume that Agency A has been handling your recovery placements for a few years, and has an inventory of $10,000,000 that spans 3+ years, of which $1,500,000 has been placed this year. We know from experience that placements from 3 years ago experienced their highest liquidation rate earlier in their lifecycle, and the remaining inventory from those early vintages is uncollectible or almost fully liquidated. Agency A remits $130,000 this month, for a recovery rate of 1.3%. Agency B is a new agency just signed on this year, and has an inventory of $2,000,000 assigned to them. Agency B remits $150,000 this month, for a recovery rate of 7.5%. So, you might assume that Agency B outperformed Agency A by a whopping 6.2%. Right? Er … no. Here’s why. If we had better visibility of Agency A’s inventory, and of where their remittance of $130,000 was derived, we would have known that only a couple of small, insignificant payments came from the older vintages of the $10,000,000 inventory, and that of the $130,000 remitted, over $120,000 came from current-year inventory (the $1,500,000 in current-year placements). Thus, when analyzed on a vintage batch liquidation basis, Agency A collected $120,000 against inventory placed in the current year, for a liquidation rate of 8.0%. The remaining remittance of $10,000 was derived from prior years’ inventory.
So, when we compare Agency A, with current-year placements inventory of $1,500,000 and a recovery rate against those placements of 8.0% ($120,000), versus Agency B, with current-year placements inventory of $2,000,000 and a recovery rate of 7.5% ($150,000), it’s clear that Agency A outperformed Agency B. This is why the vintage batch liquidation model is the clear-cut best practice for analysis and MI. By using a vintage batch liquidation model and analyzing performance against monthly batches, you can begin to interpret and define the liquidation curve. A liquidation curve plots monthly liquidation rates against a specific vintage, usually by month, and typically looks like the one in Exhibit 1 (Liquidation Curve Analysis). Note that in Exhibit 1, the monthly liquidation rate as a percentage of the total vintage batch inventory appears on the y-axis, and the month of funds received appears on the x-axis. Thus, for each of the three vintage batches, we can track the monthly liquidation rates for each batch from its initial placement throughout the recovery lifecycle. Future monthly cash flow for each discrete vintage can be forecasted based upon past performance, and then aggregated to create a future recovery projection. The most sophisticated and up-to-date collections technology platforms, including Experian’s Tallyman™ and Tallyman Agency Management™ solutions, provide vintage batch or laddered reporting. These reports can then be used to create scorecards for comparing and weighing performance results of competing agencies for market share competition and performance management. Scorecards As we develop an understanding of liquidation rates using the vintage batch liquidation curve example, we see the obvious opportunity to reward performance based upon targeted liquidation performance in time series from the initial placement batch. Agencies have different strategies for managing client placements and balancing clients’ liquidation goals with agency profitability. The more aggressive the collections process aimed at creating cash flow, the greater the costs. Agencies understand the concept of unit yield and profitability; they seek to maximize the collection result at the lowest possible cost to create profitability. Thus, agencies will “job slope” clients’ projects, reducing effort and cost where the collectability of the placement is lower (driven by balance size, customer credit score, date of last payment, phone number availability, type of receivable, etc.). For utility companies and other credit grantors with smaller-balance receivables, this presents a greater problem, as smaller balances create smaller unit yields. Job sloping involves reducing the frequency of collection efforts, employing lower-cost collectors to perform some of the collection efforts, and, where applicable, engaging offshore resources at lower cost to perform collection efforts. You can often see the impact of various collection strategies by comparing agency performance in monthly intervals from batch placement. Again, using a vintage batch placement analysis, we track performance of monthly batch placements assigned to competing agencies. We compare the liquidation results on these specific batches in monthly intervals, up until the receivables are recalled. Typical patterns emerge from this analysis that inform you of the collection strategy differences.
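Before turning to the strategy example below, here is a quick sketch of the arithmetic behind the Agency A and Agency B comparison above, contrasting a blended recovery rate (remittance over total inventory) with a vintage batch liquidation rate (remittance attributable to a vintage over that vintage's placements). The figures are the ones used in the example.

# Sketch of the point above using the Agency A / Agency B numbers from the post:
# a blended recovery rate (remittance / total inventory) versus a vintage batch
# liquidation rate (remittance from a vintage / that vintage's placements).

def recovery_rate(remitted: float, total_inventory: float) -> float:
    return remitted / total_inventory

def vintage_liquidation_rate(remitted_from_vintage: float, vintage_placements: float) -> float:
    return remitted_from_vintage / vintage_placements

# Agency A: $10M aged inventory, $1.5M placed this year, $130K remitted
# ($120K of it from current-year placements).
print(f"A blended:  {recovery_rate(130_000, 10_000_000):.1%}")            # 1.3%
print(f"A vintage:  {vintage_liquidation_rate(120_000, 1_500_000):.1%}")  # 8.0%

# Agency B: new agency, $2M placed this year, $150K remitted.
print(f"B blended:  {recovery_rate(150_000, 2_000_000):.1%}")             # 7.5%
print(f"B vintage:  {vintage_liquidation_rate(150_000, 2_000_000):.1%}")  # 7.5%
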
Let’s look at an example of differences across agencies and how these strategy differences can have an impact on liquidation. As we examine the results across both the first and second 30-day phases, we are likely to find that Agency Y performed the highest of the three agencies, with the highest collection costs and the corresponding impact on profitability. Their collection effort was the most uniform over the two 30-day segments, using the dialer at 3-day intervals in the first 30-day segment, and then using a balance segmentation scheme to differentiate treatment at 2-day or 4-day intervals throughout the second 30-day phase. Their liquidation results would be the strongest in that liquidation rates would be sustained into the second 30-day interval. Agency X would likely come in third place in the first 30-day phase, due to a 14-day delay strategy followed by two outbound dialer calls at 5-day intervals. They would have better performance in the second 30-day phase due to the tighter 4-day intervals for dialing, likely moving into second place in that phase, albeit at higher collection costs for them. Agency Z would come out of the gates in the first 30-day phase in first place, due to an aggressive daily dialing strategy, and their takeoff and early liquidation rate would seem to suggest top-tier performance. However, in the second 30-day phase, their liquidation rate would fall off significantly due to the use of a less expensive IVR strategy, negating the gains from the first phase, and potentially reducing their overall position over the two 30-day segments versus their peers. The point is that with a vintage batch liquidation analysis, we can isolate performance of a specific placement across multiple phases/months of collection efforts, without having that performance insight obscured by new business blended into the analysis. Had we used the more traditional current-month remittance over inventory value, Agency Z might be put in a more favorable light, as each month they collect new paper aggressively and generate strong liquidation results competitively, but then virtually stop collecting against non-responders, thus “creaming” the paper in the first phase and leaving a lot on the table. That said, how do we ensure that an Agency Z is not rewarded with market share? Using the vintage batch liquidation analysis, we develop a scorecard that weights the placement across the entire placement batch lifecycle, and summarizes points in each 30-day phase. To read Jeff’s related posts on the topic of agency management, check out: Vendor auditing best practices that will help your organization succeed; Agency management, vendor scorecards, auditing and quality monitoring

Published: April 25, 2012 by Jeff Bernstein

Up to this point, I’ve been writing about loan originations and the prospects and challenges facing bankcard, auto and real estate lending this year. While things are off to a good start, I’ll use my next few posts to discuss the other side of the loan equation: performance. If there’s one thing we learned during the post-recession era, it’s that growth can have consequences if not managed properly. Obviously real estate is the poster child for this phenomenon, but bankcards also saw significant and costly performance deterioration following the rapid growth generated by relaxed lending standards. Today, bankcard portfolios are in expansion mode once again, but with delinquency rates at their lowest point in years. In fact, loan performance has improved nearly 50% in the past three years through a combination of tighter lending requirements and consumers’ self-imposed deleveraging. Lessons learned by issuers and consumers have created a unique climate in which growth is now balanced with performance. Even areas with greater signs of payment stress have realized significant improvements. For example, the South Atlantic region’s 4.2% 30+ DPD performance is 11% higher than the national average, but down 27% from a year ago. Localized economic factors definitely play a part in performance, but the region’s higher-than-average origination growth from a broader range of VantageScore consumers could also explain some of the delinquency stress here. And that is the challenge going forward: maintaining bankcard’s recent growth while keeping performance in check. As the economy and consumer confidence improve, this balancing act will become more difficult as issuers will want to meet the consumer’s appetite for spending and credit. Increased volume and utilization are always good for business, but it won’t be until the performance of these loans materializes that we’ll know whether it was worth it.

Published: April 13, 2012 by Alan Ikemura

Last month, I wrote about seeking ways to ensure growth without increasing risk. This month, I’ll present a few approaches that use multiple scores to give a more complete view into a consumer’s true profile. Let’s start with bankruptcy scores. You use a risk score to capture traditional risk, but bankruptcy behavior is significantly different from a consumer profile perspective. We’ve seen a tremendous amount of bankruptcy activity in the market. Despite the fact that filings were slightly lower than 2010 volume, bankruptcies remain a serious threat, with over 1.3 million consumer filings in 2011 and a similar number projected for 2012. Factoring in a bankruptcy score over a traditional risk score allows better visibility into consumers who may be “balance loading”, but not necessarily going delinquent, on their accounts. By looking at both aspects of risk, layering scores can identify consumers who may look good from a traditional credit score perspective, but are poised to file for bankruptcy. This way, a lender can keep approval rates up and lower the risk of overall dollar losses. Layering scores can be used in other areas of the customer life cycle as well. For example, as new lending starts to heat up in markets like Auto and Bankcard, adding a next-generation response score to a risk score in your prospecting campaigns can translate into a very clear definition of the population you want to target. By combining a prospecting score with a risk score to find creditworthy consumers who are most likely to open, you help mitigate the traditional inverse relationship between open rates and creditworthiness. Target the population that is worth your precious prospecting resources. Next time, we’ll look at other analytics that help complete our view of consumer risk. In the meantime, let me know what scoring topics are on your mind.
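Here is a minimal sketch of the layering idea, with assumed cutoffs and score scales that are not tied to any specific scoring product: the risk score gates the approval, and the bankruptcy score screens out applicants who look fine on risk alone.

# Hedged sketch of the dual-score idea: a traditional risk score gates approval,
# while a separate bankruptcy score screens out consumers who look fine on risk
# alone but resemble pre-bankruptcy "balance loading" profiles. Cutoffs are assumed.
RISK_CUTOFF = 700        # assumed risk score cutoff (higher = less risky)
BANKRUPTCY_CUTOFF = 600  # assumed bankruptcy score cutoff (lower = more likely to file)

def dual_score_decision(risk_score: int, bankruptcy_score: int) -> str:
    if risk_score < RISK_CUTOFF:
        return "decline"
    if bankruptcy_score < BANKRUPTCY_CUTOFF:
        return "decline - elevated bankruptcy risk"
    return "approve"

# Looks good on risk alone, but the bankruptcy score catches the exposure.
print(dual_score_decision(risk_score=760, bankruptcy_score=540))
print(dual_score_decision(risk_score=760, bankruptcy_score=720))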

Published: April 3, 2012 by Veronica Herrera

By: Mike Horrocks Henry Ford is credited with saying, “Coming together is a beginning. Keeping together is progress. Working together is success.” This is so true with risk management, as you consider bringing different business units, policies, etc., into a culture of enterprise risk management. Institutions that understand the concept of strength from unity are able to minimize risks at all levels and avoid being exposed in unfamiliar areas. So how can this apply in your organization? Is your risk management process united across all the different business lines, or are there potential chinks in your armor? Are you using different guidelines to manage risk as it comes in the door versus how you look at it once it is part of the portfolio, or are they closely unified in purpose? Now don’t get me wrong, I am not saying that blind cohesion is right for every risk management issue, but gaining efficiencies and consistencies can do wonders for your overall risk management process. Here are some great questions to help you evaluate where you are: Is there a well-understood risk management approach in place across the institution? How confident are you that risk management is a core competence of your institution? Does risk management run through the veins of the institution, or is it regarded as the domain of auditors and compliance? A review of these questions may bring you closer to being one in purpose when it comes to your risk management processes. And while that oneness may not bring you Zen-like inner peace, it will bring your portfolio managers at least a little less stress.

Published: March 27, 2012 by Guest Contributor

By: Joel Pruis Some of you may be thinking that we are finally getting to the meat of the matter. Yes, the decision strategies are extremely important when we talk about small business/business banking. Just remember how we got here, though: we first had to define: Who are we going to pursue in this market segment? How are we going to pursue this market segment (part 1 & part 2)? What are we going to require of the applicants to request the funds? Without the above, we can create all the decision strategies we want, but their ultimate effectiveness will be severely limited, as they will not have a foundation based upon a successful execution. First we are going to lay the foundation for how we are going to create the decision strategy. The next blog post (yes, there is one more!) will get into some more specifics. With that said, it is still important that we go through the basics of establishing the decision strategy. Decision strategies based upon scorecards These are not the same as investments. We will not post the same disclosure as does the financial reporting of public corporations or investment solicitations. That is the standard disclosure of “past performance is not an indication of future results”. On the contrary, for scorecards, past performance is an indication of future results. Scorecards are saying that if all conditions remain the same, future results should follow past performance. This is the key. We need to fully understand what the expected results are to be for the portfolio originated using the scorecard. Therefore we need to understand the population of applications used to develop the scorecards, basically the information that we had available to generate the scorecard. This will tie directly to the information that we required with the applications submitted. As we understand the types of applications that we are taking from our client base, we can start to understand some expected results. By analyzing what we have processed in the past, we can start to build a model for the expected results going forward. Learn from the past and try not to repeat the mistakes we made. First we take a look at what we did approve and analyze the resulting performance of the portfolio. It is important to remember that we are not looking for the ultimate crystal ball, but rather a model that can work well to predict performance over the next 12 to 18 months. Those delinquencies and losses that take place 24, 36 or 48 months later should not and cannot be tied back to the information that was available at the time we originated the credit. We will talk about how to refresh the score and risk assessment in a later blog on portfolio management. As we see what was approved and demonstrated acceptable performance, we can now look back at those applications we processed and see if any applications that fit the acceptable profile were actually declined. If so, what were the reasons for the declinations? Do these reasons conflict with our findings based upon portfolio performance? If so, we may have found some additional volume of acceptable loans. I say “may” because statistics by themselves do not tell the whole story, so be cautious of blindly following the statistical data. My statistics professor in college drilled into us the principle of “correlation does not mean causation”. Remember that the next time a study is featured on the news. The correlation may be interesting, but it does not necessarily mean that those factors “caused” the result.
Just as important, challenge the results, but don’t use outliers to disprove the results or the effectiveness of the models. Once we have created the model and applied it to our typical application population, we can come up with the key metrics that we need to manage our decision strategies: (1) expected score distributions of the applications; (2) expected approval percentage; (3) expected override percentage; and (4) expected performance over the next 12-18 months. Expected score distributions We build the models based upon what we expect to be the population of applications we process going forward. While we may target market certain segments, we cannot control the walk-in traffic, the referral volume or the businesses that will ultimately respond to our marketing efforts. Therefore we consider the normal application distribution and its characteristics, such as 1) score; 2) industry; 3) length of time in business; 4) sales size; etc. The importance of understanding and measuring the application/score distributions is demonstrated in the next few items. Expected approval percentages First we need to consider the approval percentage as an indication of what percentage of the business market we are extending credit to. Assuming we have a good representative sample of the business population in the applications we are processing, we need to determine what percentile of businesses will be our targeted market. Did our analysis show that we can accept the top 40%? 50%? Whatever the percentage, it is important that we continue to monitor our approval percentage to determine if we are starting to get too conservative or too liberal in our decisioning. I typically counsel my clients that “just because your approval percentage is going up is not necessarily an improvement!” By itself an increase in approval percentage is not good. I’m not saying that it is bad, just that when it goes up (or down!) you need to explain why. Was there a targeted marketing effort? Did you run into a short-term lucky streak? Or is it time to reassess the decision model and tighten up a bit? Think about what happens in an economic expansion. More businesses are surviving (note I said surviving, not succeeding). Are more businesses meeting your minimum criteria? Has the overall population shifted up? If more businesses are qualifying but there has been no change in the industries targeted, we may need to increase our thresholds to maintain our targeted 50% of the market. Just because they met the standard criteria in the expansion does not mean they will survive in a recession. “But Joel, the recession might be more than 18 months away, so we have a good client for at least 18 months, don’t we?” I agree, but we have to remember that we built the model assuming all things remain constant. Therefore, if we are confident that the expansion will continue at the same pace ad infinitum, then go ahead and live with the increased approval percentage. I will challenge you that it is those applicants that “squeaked by” during the expansion that will be the largest portion of the losses when the recession comes. I will also look to investigate the approval percentages when they go down. Yes, you can make the same claim that the scorecard is saying that the risk is too great over the next 12-18 months, but again I will challenge that if we continue to provide credit to the top 40-50% of all businesses, we are likely doing business with those clients that will survive and succeed when the expansion returns.
Again, do the analysis of “why” the approval percentage declined. Expected override percentage While the approval percentage may fluctuate or stay the same, another area to be reviewed is that of overrides. Overrides can be score overrides or decision overrides. A score override contradicts the decision that was recommended based upon the score and/or overall decision strategy. A decision override is when the market/field has approval authority and overturns the decision made by the central underwriting group. Consequently you can have a score override, a decision override or both. Overrides can be an explanation for the change in approval percentages. While we anticipate a certain degree of overrides (say around 5%), should the overrides become too significant we start to lose control of the expected outcomes of the portfolio performance. As such, we need to determine why the overrides have increased (or potentially decreased) and the overrides’ impact on the approval percentage. We will address some specifics around override management in a later blog. Suffice it to say, overrides will always be present, but we need to keep the amount of overrides within tolerances to be sure we can accurately assess future performance. Expected performance over next 12-18 months The measure of expected performance is, at minimum, the expected probability/propensity of repayment. This may be labeled the bad rate or the probability of default (PD). In a nutshell, it is the probability that the credit facility will reach a certain level of delinquency over the next 12-18 months. Note that the base-level expected performance based upon score is not the expected “loss” on the account. That is a combination of the probability of default and the expected loss given the event of default. For the purpose of this post we are talking about the probability of default and not the loss given the event of default. For reinforcement, we are simply talking about the percentage of accounts that go 30 or 60 or 90 days past due during the 12 – 18 months after origination. So, bottom line: if we monitor the score distribution of the applications processed by the financial institution and maintain the approval percentage as well as the override percentages, we should be able to accurately assess the future performance of the newly originated portfolio (a simple illustration of tracking these metrics is sketched below). Coming up next… A more tactical discussion of the decision strategy
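As a simple illustration of tracking these metrics, the sketch below computes approval and override percentages from a log of decisions and compares them against tolerance bands. The field names, sample records and bands are hypothetical.

# Illustrative monitoring of the key metrics above from a log of decisions.
# Field names, tolerance bands, and the sample records are hypothetical.
decisions = [
    {"score": 231, "system_decision": "approve", "final_decision": "approve"},
    {"score": 204, "system_decision": "decline", "final_decision": "approve"},  # override
    {"score": 188, "system_decision": "decline", "final_decision": "decline"},
    {"score": 246, "system_decision": "approve", "final_decision": "approve"},
]

EXPECTED_APPROVAL = (0.40, 0.50)   # target: top 40-50% of applicants
OVERRIDE_TOLERANCE = 0.05          # roughly 5% overrides anticipated

approvals = sum(d["final_decision"] == "approve" for d in decisions)
overrides = sum(d["final_decision"] != d["system_decision"] for d in decisions)

approval_rate = approvals / len(decisions)
override_rate = overrides / len(decisions)

print(f"approval rate: {approval_rate:.0%} (target {EXPECTED_APPROVAL[0]:.0%}-{EXPECTED_APPROVAL[1]:.0%})")
print(f"override rate: {override_rate:.0%} "
      f"({'within' if override_rate <= OVERRIDE_TOLERANCE else 'above'} tolerance)")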

Published: March 23, 2012 by Guest Contributor

In my last two posts on bankcard and auto originations, I provided evidence as to why lenders have reason to feel optimistic about their growth prospects in 2012. With real estate lending, however, the recovery, or lack thereof, looks like it may continue to struggle throughout the year. At first glance, it would appear that the stars have aligned for a real estate turnaround. Interest rates are at or near all-time lows, housing prices are at post-bubble lows and people are going back to work, with the unemployment rate at a 3-year low just above 8%. However, mortgage originations and HELOC limits were at $327B and $20B for Q3 2011, respectively. Admittedly not all-time quarterly lows, but well off the levels of just a couple of years ago. And according to the Mortgage Bankers Association, 65% of the mortgage volume was from refinance activity. So why the lull in real estate originations? Ironically, the same reasons I just mentioned that should drive a recovery. Low interest rates – That is, for those that qualify. The most creditworthy, VantageScore A and B consumers made up nearly 77% of the $327B mortgage volume and 87% of the $20B HELOC volume in Q3 2011. While continuing to clean up their portfolios, lenders are adjusting their risk exposure accordingly. Housing prices at multi-year lows – According to the S&P Case-Shiller index, housing prices were 4% lower at the end of 2011 when compared to the end of 2010 and at the lowest level since the real estate bubble. Prior to this report, many thought housing prices had stabilized, but the excess inventory of distressed properties continues to drive down prices, keeping potential buyers on the sidelines. Unemployment rate at 3-year low – Sure, 8.3% sounds good now when you consider we were near 10% throughout 2010. But this is a far cry from the 4-5% rate we experienced just five years ago. Many consumers continue to struggle, affecting their ability to make good on their debt obligations, including their mortgage (see “Housing prices at multi-year lows” above), in turn affecting their credit status (see “Low interest rates” above)… you get the picture. Ironic or not, the good news is that these forces will be the same ones to drive the turnaround in real estate originations. Interest rates are projected to remain low for the foreseeable future, foreclosures and distressed inventory will eventually clear out and the unemployment rate is headed in the right direction. The only missing ingredient needed to transform these variables from hurdles into growth factors is time.

Published: March 16, 2012 by Alan Ikemura

Organizations approach agency management from three perspectives: (1) the need to audit vendors to ensure that they are meeting contractual, financial and legal compliance requirements; (2) ensuring that the organization’s clients are being treated fairly and ethically in order to limit brand reputation risk and maintain a customer-centric commitment; and (3) maximizing revenue opportunities through collection of write-offs via successful performance management of the vendor. Larger organizations often manage this process by embedding an agency manager into the vendor’s site, notably on early-out / pre charge-off outsourcing projects. As many utilities leverage the services of outsourcers for managing pre-final-bill collections, this becomes an important tool in managing quality and driving performance. The objective is to build a brand presence in the outsourcer’s site, and to focus its employees and management team on your customers and on daily performance metrics and outcomes. This is particularly useful in vendor locations in which there are a number of high-profile client projects with larger resource pools competing for attention and performance, as an embedded manager can ensure that the brand gets the right level of attention and focus. For post write-off recovery collections in utility companies, embedding an agency manager becomes cost-prohibitive and less of an opportunity from an ROI perspective, due to the smaller inventories of receivables at any agency. We urge that clients not spread their placements across many vendors where each project is potentially small, as the vendors will more likely focus on larger client projects and dilute the performance on your receivables. Still, creating a smaller pool of agency partners often does not provide a resource pool of >50-100 collectors at a vendor location to warrant an embedded agency management approach. Even without an embedded agency manager, organizations can use some of the techniques that are often used by onsite managers to ensure that the focus is on their projects, and maintain an ongoing quality review and performance management process. The tools are fairly common in today’s environment: remote monitoring and quality reviews of customer contacts (i.e., digital logging), monthly publishing of competitive liquidation results to a competitive agency process with market share incentives, weekly updates of month-to-date competitive results to each vendor to promote competition, periodic “special” promotions/contests tied to performance where results are below target MTD, and monthly performance “kickers” for exceeding monthly liquidation targets at certain pre-determined levels. Agencies have selective memory, so it’s vital to keep your projects on their radar. Remember, they have many more clients, all of whom want the same thing – performance. Some clients are less vocal and focused on results than others. Those that are always providing competitive feedback, quality reviews, contests, and market share opportunities are top of mind, and generally get the better selection of collectors, team/project managers, and overall vendor attention. The key is to maintain constant visibility and a competitive atmosphere. Over the next several weeks, we'll dive into more detail for each of these areas: auditing and monitoring, onsite and remote; best practices for improving agency performance; scorecards and strategies; and market share competition and scorecards.
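As a rough illustration of the month-to-date competitive comparison described above, the sketch below computes each agency's MTD liquidation rate on its placements and ranks the agencies for market share review. The agency names and figures are hypothetical.

# Sketch of the month-to-date competitive comparison described above: compute each
# agency's liquidation rate on its placements and rank them for market share review.
# Agency names, balances, and remittances are hypothetical.
placements = {"Agency X": 1_200_000, "Agency Y": 1_150_000, "Agency Z": 1_300_000}
mtd_remitted = {"Agency X": 54_000, "Agency Y": 61_000, "Agency Z": 48_000}

ranked = sorted(placements, key=lambda a: mtd_remitted[a] / placements[a], reverse=True)

for rank, agency in enumerate(ranked, start=1):
    rate = mtd_remitted[agency] / placements[agency]
    print(f"{rank}. {agency}: MTD liquidation {rate:.1%}")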

Published: March 6, 2012 by Jeff Bernstein
