
By: Amanda Roth

To refine your risk-based pricing to the next level, it is important to analyze where your tiers are set and determine whether they are set appropriately. (We find many regulators and examiners are looking for this next level of analysis.)

This analysis begins with the results of the scoring model validation. Not only will the distributions from that analysis determine whether the score can separate good accounts from delinquent ones, they will also highlight which score ranges have similar delinquency rates, allowing you to group your tiers appropriately. After all, you do not want applicants with a 1 percent chance of delinquency priced the same as applicants with an 8 percent chance of delinquency. By reviewing the interval delinquency rates as well as the odds ratios, you should be able to determine where a large enough difference occurs to warrant different pricing.

This analysis increases the opportunity for portfolio profitability, as it reduces the likelihood that higher-risk applicants receive lower pricing. As expected, the overall risk management of the portfolio improves when a proper risk-based pricing program is developed. In my next post we will look at the final level of validation, which does provide insight into pricing for profitability.
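A minimal sketch of how tiers might be grouped from interval delinquency rates like those described above; the score ranges, rates, and grouping threshold are all hypothetical, and an actual program would also weigh odds ratios and business constraints:

```python
# Group adjacent score intervals with similar delinquency rates into pricing
# tiers. Intervals and rates below are illustrative, not real validation output.
intervals = [
    (740, 850, 0.010),  # (score low, score high, interval delinquency rate)
    (700, 739, 0.013),
    (660, 699, 0.028),
    (620, 659, 0.045),
    (580, 619, 0.080),
]
THRESHOLD = 0.015  # assumed minimum rate gap that warrants different pricing

tiers, current = [], [intervals[0]]
for interval in intervals[1:]:
    # Compare each interval's rate with the first interval of the open tier.
    if abs(interval[2] - current[0][2]) < THRESHOLD:
        current.append(interval)
    else:
        tiers.append(current)
        current = [interval]
tiers.append(current)

for i, tier in enumerate(tiers, 1):
    lo, hi = min(t[0] for t in tier), max(t[1] for t in tier)
    rates = [t[2] for t in tier]
    print(f"Tier {i}: scores {lo}-{hi}, delinquency {min(rates):.1%}-{max(rates):.1%}")
```

On this toy data, the 740-850 and 700-739 intervals collapse into one tier, while each riskier interval warrants its own pricing.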

Published: December 18, 2009 by Guest Contributor

By: Amanda Roth

As discussed earlier, the validation of a risk-based pricing program can mean several different things. Let’s break these options down.

The first option is to complete a validation of the scoring model being used to set the pricing for your program. This is the most basic validation of the program, and it does not guarantee any insight into loan profitability expectations. A validation of this nature will help you determine whether the score being used is actually helping to assess the risk level of an applicant.

This analysis is completed using a snapshot of new booked loans received during a period of time, usually 18 to 24 months prior to the current period. It is extremely important to view only the new booked loans taken during that time period and the score they received at the time of application. By maintaining this specific population only, you will ensure the analysis is truly indicative of the predictive nature of your score at the time you make the decision and apply the recommended risk-based pricing.

By analyzing the distribution of good accounts versus delinquent accounts, you can determine whether the score being used is truly able to separate these groups. Without acceptable separation, it would be difficult to make any decisions based on the score models, especially risk-based pricing.

Although beneficial in determining whether you are using the appropriate scoring models for pricing, this analysis does not provide insight into whether your risk-based pricing program is set up correctly. Please join me next time to take a look at another option for this analysis.
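To make the separation idea concrete, here is a minimal sketch using the Kolmogorov-Smirnov (KS) statistic, one common measure of score separation (the post does not prescribe a particular statistic, and the scores below are made up):

```python
# Measure how well a score separates good accounts from delinquent ones.
# The KS statistic is the largest gap between the two groups' cumulative
# score distributions; a larger gap means better separation.
goods = [710, 680, 745, 695, 720, 760, 655, 700, 730, 690]  # good accounts
bads  = [600, 640, 615, 660, 590, 625, 670, 610, 635, 645]  # delinquent accounts

def ks_statistic(goods, bads):
    best = 0.0
    for cutoff in sorted(set(goods + bads)):
        # Fraction of each group at or below this score cutoff.
        cdf_good = sum(s <= cutoff for s in goods) / len(goods)
        cdf_bad  = sum(s <= cutoff for s in bads) / len(bads)
        best = max(best, abs(cdf_bad - cdf_good))
    return best

print(f"KS separation: {ks_statistic(goods, bads):.2f}")  # 0.90 on this toy data
```

A KS near zero would mean the score barely distinguishes the groups, making it a poor foundation for risk-based pricing.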

Published: December 18, 2009 by Guest Contributor

By: Roger Ahern

It’s been proven in practice many times that by optimizing decisions (through improved decisioning strategies, credit risk modeling, risk-based pricing, enhanced scoring models, etc.) you will realize significant business benefits in key metrics, such as net interest margin, collections efficiency, fraud referral rates and many more. However, given that a typical company may make more than eight million decisions per year, which decisions should one focus on to deliver the greatest business benefit?

In working with our clients, Experian has compiled the following list of relevant types of decisions that can be improved through decision analytics. As you review the list below, identify those decisions that are relevant to your organization, and then determine which decision types would present the greatest opportunity for improvement.

• Cross-sell determination
• Prospect determination
• Prescreen decision
• Offer/treatment determination
• Fraud determination
• Approve/decline decision
• Initial credit line/limit/usage amount
• Initial pricing determination
• Risk-based pricing
• NSF pay/no-pay decision
• Over-limit/shadow limit authorization
• Credit line/limit/usage management
• Retention decisions
• Loan/payment modification
• Repricing determination
• Predelinquency treatment
• Early/late-stage delinquency treatment
• Collections agency placement
• Collection/recovery treatment

Published: December 14, 2009 by Roger Ahern

I have already commented on “secret questions” as the root of all evil when considering tools to reduce identity theft and minimize fraud losses. No, I’m not quite ready to jump off that soapbox…not just yet, not when we’re deep into the season of holiday deals, steals and fraud. The answers to secret questions are easily guessed, easily researched, or easily forgotten. Is this the kind of security you want standing between your account and a fraudster during the busiest shopping time of the year?

There is plenty of research demonstrating that fraud rates spike during the holiday season. There is also plenty of research demonstrating that fraudsters perpetrate account takeover by changing the PIN, address, or e-mail address of an account – activities that could be considered risky behavior in decisioning strategies. So, what is the best approach to identity theft red flags and fraud account management? A risk-based authentication approach, of course!

Knowledge Based Authentication (KBA) provides strong authentication and can be part of a multifactor authentication environment without a negative impact on the consumer experience, if the purpose is explained to the consumer. Let’s say a fraudster is trying to change the PIN or e-mail address of an account. When one of these risky behaviors is initiated, a Knowledge Based Authentication session begins. To help minimize fraud, the action is prevented if the KBA session is failed. Using this same logic, it is possible to apply a risk-based authentication approach to account management at many points of the lifecycle:

• Account funding
• Account information change (PIN, e-mail, address, etc.)
• Transfers or wires
• Requests for line/limit increase
• Payments
• Unusual account activity
• Authentication before engaging with a fraud alert representative

Depending on the risk management strategy, additional methods may be combined with KBA, such as IVR or out-of-band authentication, and follow-up contact via e-mail, telephone or postal mail. Of course, all of this ties in with what we would consider to be a comprehensive Red Flag Rules program.

Risk-based authentication, as part of a fraud account management strategy, is one of the best ways we know to ensure that customers aren’t left singing, “On the first day of Christmas, the fraudster stole from me…”
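As a rough illustration of that flow, here is a minimal sketch of triggering KBA on risky account actions; the action names and the KBA call are hypothetical placeholders, not a real authentication API:

```python
# Risky account actions trigger a KBA session; the action is blocked if the
# session fails. All names here are illustrative.
RISKY_ACTIONS = {
    "account_funding", "pin_change", "email_change", "address_change",
    "transfer_or_wire", "line_increase_request", "payment",
    "unusual_activity", "fraud_alert_contact",
}

def run_kba_session(customer_id: str) -> bool:
    """Stand-in for a Knowledge Based Authentication session; a real
    implementation would call an authentication service."""
    return True  # assume a pass for this sketch

def handle_account_action(customer_id: str, action: str) -> bool:
    if action not in RISKY_ACTIONS:
        return True   # low-risk action: no step-up authentication required
    if run_kba_session(customer_id):
        return True   # KBA passed: allow the action
    return False      # KBA failed: block the action to minimize fraud

print(handle_account_action("cust-123", "pin_change"))
```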

Published: December 7, 2009 by Monica Pearson

For the past couple of years, the deterioration of the real estate market and the economy as a whole has been widely reported as a national and international crisis. Several significant events have contributed to this situation: 401(k) plans have fallen, homeowners have simply abandoned their now under-valued properties, and the federal government has raced to save the banking and automotive sectors. While the perspective of most is that this is a national decline, this is clearly a situation where the real story is in the details.

A closer look reveals that while there are places that have experienced serious real estate and employment issues (California, Florida, Michigan, etc.), there are also areas (Texas) that did not experience the same deterioration. Flash forward to November 2009 – with signs of recovery seemingly beginning to appear on the horizon – there appears to be a great deal of variability between areas that seem poised for recovery and those that are continuing down the slope of decline. Interestingly, though, this time the list of usual suspects is changing. In a recent article posted to CNN.com, Julianne Pepitone observes that many cities that were tops in foreclosure a year ago have since shown stabilization, while at the same time other cities have regressed.[i] A related article outlines a growing list of cities that, not long ago, considered themselves immune from the problems being experienced in other parts of the country.[ii] Previous economic success stories are now being identified as economic laggards, experiencing the same pains only a year or two later.

So – is there a lesson to be taken from this? From a business intelligence perspective, the lesson is that generalized reporting and forecasting capabilities are not going to be successful in managing risk. Risk management and forecasting techniques will need to be developed around specific macro- and micro-economic changes. They will also need to incorporate a number of economic scenarios to properly reflect the range of possible future outcomes. Moving forward, it will be vital to understand the differences in unemployment between Dallas and Houston, and between regions that rely on automotive manufacturing and those with hi-tech jobs. These differences will directly impact the performance of lenders’ specific footprints, as this year’s “Best Place to Live” according to Money.CNN.com can quickly become next year’s foreclosure capital.

[i] http://money.cnn.com/2009/10/28/real_estate/foreclosures_worst_cities/index.htm?postversion=2009102811
[ii] http://money.cnn.com/galleries/2009/real_estate/0910/gallery.foreclosures_worst_cities/2.html

Published: November 30, 2009 by Kelly Kent

By: Tom Hannagan

Understanding RORAC and RAROC

I was hoping someone would ask about these risk management terms…and someone did. The obvious answer is that the “A” and the “O” are reversed. But there’s more to it than that.

First, let’s see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point towards both RORAC and RAROC.

ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets. This used to be the most popular way of comparing banks to each other and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although this tends to be those at smaller banks.

ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have “assets” for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, let’s just say it’s too late to read about this general topic.

The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other. It may even allow banks to be compared to other types of businesses. The asset-independence of ROE also allows a bank to compare internal product lines to each other. Perhaps most importantly, this permits looking at the comparative profitability of lines of business that are almost complete opposites, like lending versus deposit services. This includes risk-based pricing considerations. This would be difficult, if even possible, using ROA.

ROE also tells us how effectively a bank (or any business) is using shareholders’ equity. Many observers prefer ROE, since equity represents the owners’ interest in the business. As we have all learned anew in the past two years, their equity investment is fully at risk. Equity holders are paid last, compared to other sources of funds supporting the bank. Shareholders are the last in line if the going gets rough. So, equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization.

So, why bother with RORAC or RAROC? In short, it is to take risk more fully into account in the performance measurement process within the institution.
ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (the loan loss provision). The big risk elements missing from general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL is covered by the Equity account, or the solvency of the bank becomes an issue.

RORAC is Net Income divided by Allocated Capital. RORAC doesn’t add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this by moving beyond book or average Equity and allocating capital, or equity, differentially to various lines of business and even specific products and clients. This, in turn, makes it possible to move towards risk-based pricing at the relationship management level as well as portfolio risk management. This equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So, it’s a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And many of us do.

RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. RAROC, like RORAC, also takes into account the risk of unexpected loss by allocating capital, or equity, differentially to various lines of business and even specific products and clients. So, RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric of profitability and an ultimate goal of modern risk management.

So, RORAC is a big step in the right direction, and RAROC would be the full step. RORAC can be a useful step towards RAROC. RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level. This can also be broken down for any and all lines of business within the organization. Thence, it can be further broken down to the product level and the client relationship level, and summarized by lender portfolio or various market segments. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as on operational or marketing prowess.
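For concreteness, here is a minimal sketch of the four ratios with illustrative numbers; the allocated capital and risk-adjustment figures are assumptions, since in practice they come from the bank’s unexpected-loss modeling:

```python
# ROA and ROE use general ledger figures; RORAC risk-adjusts the denominator
# (allocated capital); RAROC risk-adjusts both numerator and denominator.
net_income        = 12.0     # general ledger net income ($MM), illustrative
average_assets    = 1_000.0  # ($MM)
equity            = 100.0    # book equity ($MM)
allocated_capital = 80.0     # capital allocated against unexpected loss ($MM), assumed
risk_adjustment   = 3.0      # cost of unmitigated market/credit/operational risk ($MM), assumed

roa   = net_income / average_assets
roe   = net_income / equity
rorac = net_income / allocated_capital
raroc = (net_income - risk_adjustment) / allocated_capital

print(f"ROA {roa:.2%}  ROE {roe:.2%}  RORAC {rorac:.2%}  RAROC {raroc:.2%}")
```

Note how a line of business that looks attractive on ROE can look very different once its share of unexpected loss drives allocated capital up, or its unmitigated risk drives the adjusted income down.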

Published: November 19, 2009 by Guest Contributor

By: Kari Michel

The U.S. government and mortgage lenders have developed various loan modification programs to help homeowners better manage their mortgage debt so that they can meet their monthly payment obligations. Given these new programs, what is the impact to the consumer’s score? Do consumers’ scores drop more if they work with their lenders to get their mortgage loans restructured, or if they file for bankruptcy?

The findings from a study conducted by VantageScore® Solutions* reveal that a delinquency on a mortgage has a greater impact on the consumer’s score than a loan modification. Bankruptcy, short sale, and foreclosure have the greatest impact on a score. A bankruptcy can negatively impact a consumer for a minimum of seven years, with a potential score decrease of 365 points. With a loan modification, however, consumers can rehabilitate their scores to an acceptable risk level within nine months, provided they bring all their delinquent accounts to current status. Loan modifications themselves have little impact on the consumer’s credit score: the influence can range from a 20-point decrease to a 30-point increase.

Lenders should proactively seek out mortgage loan modifications before consumers experience severe delinquency in their credit files. The restructured mortgage should leave the consumer with sufficient cash availability to bring any other delinquent debts to current status. Whenever possible, bankruptcy should be avoided, because it has the greatest consequences for the lender and the consumer.

*For more detailed information on this study, “Credit Scoring and Mortgage Modifications: What lenders need to know,” please use this link to access an archived file of a recent webinar: http://register.sourcemediaconferences.com/click/clickReg.cfm?URLID=5258

Published: November 16, 2009 by Guest Contributor

A good decision can generate $150 or more in customer net present value, while a bad decision can cost you $1,000 or more. For example, acquiring a new and profitable customer through good prospecting, approval and pricing decisions and decisioning strategies may generate $150 or much more in customer net present value and help you increase net interest margin and other key metrics, while a bad decision (such as approving a fraudulent applicant or inappropriately extending credit that ultimately results in a charge-off) can cost you $1,000 or more.

Why is risk management decisioning important? Because an average-sized financial institution or telecom carrier makes as many as eight million customer decisions each year (more than 20,000 per day!), and very large financial institutions make as many as 50 billion customer decisions annually. By optimizing decisions, even a small 10 to 15 percent improvement in the quality of these customer life cycle decisions can generate substantial business benefit.

Experian recommends that clients examine the types of decisioning strategies they leverage across the customer life cycle, from prospecting and acquisition to customer management and collections. By examining each type of decision, you can identify the opportunities for improvement that will deliver the greatest return on investment by leveraging credit risk attributes, credit risk modeling, predictive analytics and decision-management software.
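A back-of-the-envelope sketch of that arithmetic, using the figures above plus two assumed inputs (the bad-decision rate, and the assumption that improvement converts bad decisions into good ones):

```python
# Rough economics of improving decision quality. The $150 / -$1,000 values,
# 8 million decisions, and 10 percent improvement come from the post; the
# 2 percent bad-decision rate is an assumption for illustration.
decisions_per_year  = 8_000_000
good_decision_value = 150      # customer NPV of a good decision ($)
bad_decision_cost   = -1_000   # cost of a bad decision ($)
bad_decision_rate   = 0.02     # assumed share of decisions that go bad
improvement         = 0.10     # low end of the 10-15% improvement cited

bad_decisions = decisions_per_year * bad_decision_rate
converted     = bad_decisions * improvement               # bad decisions made good
swing         = good_decision_value - bad_decision_cost   # $1,150 per conversion
print(f"Annual benefit: ${converted * swing:,.0f}")       # $18,400,000
```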

Published: November 13, 2009 by Roger Ahern

Well, here we are nearly at the beginning of November. The Red Flags Rule has been with us for nearly two years, and the FTC’s November 1, 2009 enforcement date is upon us as well (I know I’ve said that before). There is little value in me chatting about the core requirements of the Red Flags Rule at this point. Instead, I’d like to shed some light on what we are seeing and hearing these days from our clients and industry experts related to this initiative.

How clients are responding to the Red Flags Rule:

1. Most clients have a solid written and operational Identity Theft Prevention Program in place that arguably meets their interpretation of the Red Flags Rule requirements.
2. Most clients have a solid written and operational Identity Theft Prevention Program in place that creates a boat-load of referrals, due to the address mismatches generated in their processes and the requirement to do something with them.
3. Most clients are now focusing on ways to reduce the number of referrals generated, and on procedures to clear the remaining referrals in a cost-effective and automated manner…of course, while preventing fraud and staying compliant with the Red Flags Rule.

In 2008, a key focus at Experian was to help educate the market on the Red Flags Rule concepts and requirements. In 2009, the concentration has nearly fully shifted to helping the market create risk-based authentication programs that leverage a holistic view of the consumer, flexible tools applied to each consumer based on that person’s authentication and risk profile, and an overall decisioning strategy that balances risk, compliance, and resource constraints.

The spirit of the Red Flags Rule

The spirit of the Red Flags Rule is to ensure all covered institutions are employing basic identity theft prevention procedures (a pretty good idea). I believe most of these institutions (even those that had very robust programs in place years before the rule was introduced) can appreciate a requirement that brings all institutions up to speed. It is now, however, a matter of managing process within the realities of, and costs associated with, manpower, IT resources, and customer experience sensitivities.

Published: November 2, 2009 by Keir Breitenfeld

Recent findings on vintage analysis (Source: Experian-Oliver Wyman Market Intelligence Reports)

Analyzing recent trends from vintages published in the Experian-Oliver Wyman Market Intelligence Reports, numerous insights can be gleaned from even a cursory review of the results.

Mortgage vintage analysis trends

As noted in an earlier posting, recent mortgage vintage analyses show a broad range of behaviors between more recent vintages and older, more established vintages that were originated before the significant run-up of housing prices seen in the middle of the decade. The 30+ delinquency levels for mortgage vintages from 2005, 2006, and 2007 approach – and in two cases exceed – 10 percent of trades in the last 12 months of performance, and have spiked above historical trends beginning almost immediately after origination. On the other end of the spectrum, the vintages from 2002 and 2003 have barely approached or exceeded 5 percent over the last six or seven years.

Bankcard vintage analysis trends

As one would expect, the 30+ delinquency trends within bankcard vintages are vastly different from those of mortgage vintages. Firstly, card delinquencies show a clear seasonal trend, with a consistent yearly pattern evident in all vintages, resulting from the revolving structure of the product. The most interesting trends within the card vintages show that the more recent vintages, 2005 to 2008, display higher 30+ delinquency levels, especially the Q2 2007 vintage, which is far and away the underperformer of the group.

Within each vintage pool, the analysis can extend into the risk distribution and details of the portfolio, further segmenting the pool by credit score, specifically VantageScore. Consider, for instance, the segment in the top VantageScore tier – loans made only to the most creditworthy customers at the time of origination. The noticeable trend is that while these consumers were largely resistant to deteriorating economic conditions, each vintage segment has seen a spike in the most recent 9 to 12 months. Given that these consumers tend to have the highest limits and lowest utilization of any VantageScore band, this trend encourages further account management consideration and raises flags about overall bankcard performance in coming months.

Even a basic review of vintage analysis pools, and the analysis opportunities that result from this data, can be extremely useful. Vintage analysis can add a new perspective to risk management, supplementing more established analysis techniques and further enhancing the ability to see the risk within the risk.

Purchase a complete picture of consumer credit trends from Experian’s database of over 230 million consumers with the Market Intelligence Brief.
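For readers who want to reproduce this kind of view, here is a minimal sketch of computing a vintage curve from loan-level data; the column names and tiny sample are hypothetical, and the published reports are built on far richer data:

```python
# Compute the 30+ DPD rate by origination vintage and months on book.
import pandas as pd

loans = pd.DataFrame({
    "vintage":        ["2006Q2", "2006Q2", "2006Q2", "2003Q1", "2003Q1", "2003Q1"],
    "months_on_book": [12, 12, 12, 12, 12, 12],
    "dpd30_plus":     [1, 0, 1, 0, 0, 1],  # 1 = 30+ days past due at that point
})

curves = (loans.groupby(["vintage", "months_on_book"])["dpd30_plus"]
               .mean()
               .rename("dpd30_rate"))
print(curves)
# Plotting dpd30_rate against months_on_book, one line per vintage, produces
# the vintage curves discussed above; adding a VantageScore band column to the
# groupby yields the within-vintage risk segmentation.
```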

Published: November 2, 2009 by Kelly Kent

By: Kennis Wong

In Part 1 of Generic fraud score, we emphasized the importance of a risk-based approach when it comes to fraud detection. Here are some further questions you may want to consider.

What is the performance window? When a model is built, it has a defined performance window. That means the score is predicting a certain outcome within that time period. For example, a traditional risk score may be predicting accounts that deteriorate within twenty-four months. That score may not perform well if your population typically worsens in two months. This question is particularly important when it relates to scoring your population. For example, if a bust-out score has a performance window of three months, and you score your accounts at the time of acquisition, it will only catch accounts that bust out within the next three months. As a result, you should score your accounts during periodic account reviews, in addition to the time of acquisition, to ensure you catch all bust-outs.

Which accounts should I score? While it’s typical for creditors to use a fraud score on every applicant at the time of acquisition, they may not score all their accounts during review. For example, they may exclude inactive or older accounts, assuming a long history means less likelihood of fraud. This mistake may be expensive. Typical bust-out behavior is for fraudsters to apply for cards well before they intend to bust out – forty-eight months or more. So just when you think they are good and profitable customers, they can strike and leave you with serious injury. The recommended approach is to score your entire portfolio during account review.

How often do I validate the score? Very often – monthly or quarterly. You want to understand whether the score is working for you: do your actual results match the volume and risk projections? Shifts in your score distribution will almost certainly occur over time. To meet your objectives over the long run, continue to monitor and adjust cutoffs.
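On monitoring distribution shifts, here is a minimal sketch using the population stability index (PSI); the post does not prescribe a statistic, and PSI is just one common choice, shown with illustrative bucket shares:

```python
# Compare the score distribution at model build (baseline) with the current
# review period. Larger PSI = larger shift; > 0.25 is a common rule of thumb
# for a major shift that may warrant cutoff adjustments.
from math import log

baseline = [0.10, 0.20, 0.40, 0.20, 0.10]  # share of accounts per score bucket
current  = [0.15, 0.25, 0.35, 0.15, 0.10]  # share per bucket this quarter

psi = sum((c - b) * log(c / b)
          for b, c in zip(baseline, current) if b > 0 and c > 0)
print(f"PSI: {psi:.3f}")
```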

Published: October 12, 2009 by Guest Contributor

When reviewing offers for prospective clients, lenders often deal with a significant amount of missing information in assessing the outcomes of lending decisions. Why did a consumer accept an offer with a competitor? What were the differentiating factors between the other offers and my offer? What happened to the consumers we declined – did they perform as expected, or better than anticipated?

While lenders can easily understand the implications of the loans they have offered and booked with consumers, they often have little information about two important groups of consumers:

1. Lost leads: consumers to whom they made an offer but did not book.
2. Proxy performance: consumers to whom financing was not offered, but who found financing elsewhere.

Performing a lost lead analysis on the applications approved and declined can provide considerable insight into the outcomes and credit performance of consumers who were not added to the lender’s portfolio. Lost lead analysis can also help answer key questions for each of these groups:

• How many of these consumers accepted credit elsewhere, and what were their credit attributes?
• What are the credit characteristics of the consumers we’re not booking?
• Were these loans booked by one of my peers or another type of lender?
• Who did they choose for loan origination?
• What were the terms and conditions of these offers?
• What was the performance of the loans booked elsewhere?

Within each of these groups, further analysis can provide lenders with actionable feedback on the implications of their lending policies, possibly identifying opportunities for changes that better fulfill lending objectives. Some key questions can be answered with this information: Are competitors offering longer repayment terms? Are peers offering lower interest rates to the same consumers? Are peers accepting lower-scoring consumers to increase market share?

The results of a lost lead analysis can either confirm that the competitive marketplace is behaving in a manner that matches a lender’s perspective, or shine a light on aspects of the market where policy changes may lead to superior results. In both circumstances, the information provided is invaluable in making the best decisions in today’s highly sensitive lending environment.
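A minimal sketch of one slice of such an analysis: comparing the terms a lender offered on lost leads with the terms those consumers accepted elsewhere. The field names and records are hypothetical stand-ins for matched data:

```python
import pandas as pd

# Each row is a lost lead: our offer vs. the competitor offer the consumer took.
lost_leads = pd.DataFrame({
    "our_apr":            [7.9, 8.5, 9.2, 8.1],
    "competitor_apr":     [6.9, 8.7, 7.5, 7.2],
    "our_term_mo":        [60, 60, 48, 60],
    "competitor_term_mo": [72, 60, 72, 72],
})

apr_gap = (lost_leads.our_apr - lost_leads.competitor_apr).mean()
longer_terms = (lost_leads.competitor_term_mo > lost_leads.our_term_mo).mean()
print(f"Avg APR gap (ours - theirs): {apr_gap:.2f} pts")
print(f"Share lost to longer repayment terms: {longer_terms:.0%}")
```

Results like these feed directly into the policy questions above, such as whether peers are winning business with longer terms or lower rates.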

Published: October 11, 2009 by Kelly Kent

By: Heather Grover

I’m often asked in various industry forums to give talks about, or opinions on, the latest fraud trends and fraud best practices. Let’s face it – fraudsters are students of their craft and continue to study the latest defenses and adapt to the controls that may be in place. You may be surprised, then, to learn that our clients’ top-of-mind issues are not only how to fight the latest fraud trends, but how to do so while maximizing automation, managing operational costs, and preserving the customer experience – all while meeting compliance requirements.

Many times, clients view these as separate goals that do not affect one another. Not only can they be accomplished simultaneously, but, in my opinion, they can be considered causal. Let me explain.

When fraud detection is viewed as its own goal, automation is not considered as a potential way to improve it. By applying analytics, or basic fraud risk scores, clients can easily incorporate many different potential risk factors into a single calculation without combing through various data elements and reports. This calculation or score can predict multiple fraud types and risks with less effort than a human manually and subjectively reviewing specific results. Through an analytic score, good customers can be positively verified in an automated fashion, while only those with the riskiest attributes are routed for manual review. This allows expensive human resources and expertise to be reserved for only the riskiest consumers.

Compliance requirements can also mandate specific procedures, resulting in arduous manual review processes. Many requirements (Patriot Act, Red Flags Rule, eSignature) mandate verification of identity through match results. Automated decisioning based on these results (or an analytic score) can automate this process – in turn reducing operational expense.

While the above may seem an oversimplification, I encourage you to consider how well you are addressing financial risk management. How are you managing automation, operational costs, and compliance – while addressing fraud?
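A minimal sketch of that routing logic; the score scale and cutoffs are assumed purely for illustration:

```python
# Route applicants by an analytic fraud score: auto-verify good customers,
# send only the riskiest to manual review. Higher score = higher fraud risk.
def route_application(fraud_score: int) -> str:
    if fraud_score >= 800:
        return "decline"        # highest risk: automated decline
    if fraud_score >= 600:
        return "manual_review"  # risky attributes: route to an analyst
    return "auto_verify"        # good customers pass without friction

for score in (250, 650, 900):
    print(score, "->", route_application(score))
```

The economics follow directly: the large auto-verified majority costs almost nothing to process, so analyst time concentrates on the small slice where it matters.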

Published: August 30, 2009 by Guest Contributor

By: Kari Michel

This blog completes my discussion of monitoring new account decisions, with a final focus: scorecard monitoring and performance. It is imperative to validate acquisition scorecards regularly to measure how well a model is able to distinguish good accounts from bad accounts. With a sufficient number of aged accounts, performance charts can be used to:

• Validate the predictive power of a credit scoring model;
• Determine if the model effectively ranks risk; and
• Identify the delinquency rate of recently booked accounts at various intervals above and below the primary cutoff score.

To summarize, successful lenders maximize their scoring investment by incorporating a number of best practices into their account acquisition processes:

1. They keep a close watch on their scores, policies, and strategies to improve portfolio strength.
2. They create monthly reports to look at population stability, decision management, scoring models and scorecard performance.
3. They update their strategies to meet their organization’s profitability goals through sound acquisition strategies, scorecard monitoring and scorecard management.
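As a rough illustration of the third bullet above, here is a sketch of a performance chart built from booked-account outcomes; the cutoff, band width, and account records are all hypothetical:

```python
# Delinquency ("bad") rate by score interval above and below a primary cutoff.
from collections import defaultdict

accounts = [  # (application score, 1 = went bad)
    (580, 1), (605, 1), (612, 1), (633, 0), (648, 0), (655, 1),
    (671, 0), (688, 0), (702, 0), (715, 0), (733, 0), (760, 0),
]
CUTOFF, BAND = 640, 40  # assumed cutoff score and interval width

bands = defaultdict(lambda: [0, 0])  # band start -> [bads, total]
for score, bad in accounts:
    start = CUTOFF + ((score - CUTOFF) // BAND) * BAND
    bands[start][0] += bad
    bands[start][1] += 1

for start in sorted(bands):
    bads, total = bands[start]
    side = "above" if start >= CUTOFF else "below"
    print(f"{start}-{start + BAND - 1} ({side} cutoff): {bads}/{total} bad, "
          f"rate {bads / total:.0%}")
```

A well-ranking model shows bad rates falling steadily as scores rise past the cutoff, as in this toy output.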

Published: August 18, 2009 by Guest Contributor

There are a lot of areas covered in your comment: efficiency; credit quality (the human side, or character, in an impersonal environment); and policy adherence. We define efficiency and effectiveness using these metrics:

• Turnaround time from application submission to decision;
• Resulting delinquencies based upon type of underwriting (centralized vs. decentralized);
• Production levels between centralized and decentralized; and
• Performance of the portfolio based upon type of underwriting.

Due to the nature of Experian’s technology, we are able to capture start and stop times for the typical activities related to loan origination. After analyzing the data from 160+ financial institutions of all sizes, Experian publishes an annual small business benchmark report that documents loan origination process efficiencies and inefficiencies, benchmarking these as industry standards.

Turnaround time

From the benchmark report, we’ve seen that institutions that are centralized consistently have a turnaround time that is half that of decentralized environments. Interestingly, turnaround time is also much faster at larger institutions than at smaller ones. This is puzzling, because smaller community banks tend to promote the close relationships they have with their clients and their communities; yet when it comes to actually making a loan decision, it tends to take longer. In addition to speed, another aspect of turnaround is consistency. We can all think of situations where we were able to beat the stated turnaround times of the larger or centralized institutions. Unfortunately, these tend to be isolated instances versus the consistent performance delivered in a centralized environment.

Resulting delinquencies and portfolio performance by type of underwriting

Again referring to the annual small business lending benchmark report, delinquencies in a centralized environment are 50 percent of those in a decentralized environment. I have worked with a number of institutions that allow the loan officer/relationship manager to “reverse the decision” made by a centralized underwriting group. The thinking is that the human aspect is otherwise missing from centralized underwriting. When the data is collected, though, the incremental business that is approved by the loan officer (who is close to the client and knows the human side) is not profitable from a credit quality perspective. Specifically, this incremental portfolio typically has a net charge-off rate that exceeds the net interest margin – and this is before we even consider the non-interest expense incurred. Your choice: is the incremental business critical to your success, or could you more fruitfully direct your relationship officers’ attention elsewhere?

Production levels between centralized and decentralized

Not to beat a dead horse, but the multiple of two comes into play here too. Looking at the throughput of each role (data entry, underwriter, relationship manager/lender), the production levels of a centralized environment are typically double those of a decentralized one.

It’s clear that the data point to the efficiency and effectiveness of a centralized environment.
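Since turnaround time is central to the comparison above, here is a minimal sketch of computing it from captured start/stop timestamps; the records are hypothetical:

```python
# Compare application turnaround time by underwriting model.
from datetime import datetime
from statistics import mean

apps = [  # (underwriting model, submitted, decided)
    ("centralized",   datetime(2009, 6, 1, 9),  datetime(2009, 6, 1, 15)),
    ("centralized",   datetime(2009, 6, 2, 10), datetime(2009, 6, 2, 14)),
    ("decentralized", datetime(2009, 6, 1, 9),  datetime(2009, 6, 2, 9)),
    ("decentralized", datetime(2009, 6, 3, 8),  datetime(2009, 6, 3, 20)),
]

for model in ("centralized", "decentralized"):
    hours = [(d - s).total_seconds() / 3600 for m, s, d in apps if m == model]
    print(f"{model}: mean {mean(hours):.1f}h, "
          f"range {min(hours):.1f}-{max(hours):.1f}h")
```

Tracking the spread as well as the mean captures the consistency point: a centralized shop’s narrow range can matter as much as its faster average.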

Published: August 7, 2009 by Guest Contributor
