All posts by Guest Contributor


By: Heather Grover

In my previous blog, I covered the top-of-mind issues our clients are challenged with in their risk-based authentication and fraud account management efforts. My goal in this blog is to share the specific fraud trends we have seen in recent months, as well as those that you, our clients and the industry as a whole, are experiencing. Management of risk, and strategies to minimize fraud, are on your mind.

1. Migration of fraud from the Internet to call centers, and back again. Channel-specific fraud is nothing new. Criminals prefer non-face-to-face channels because they can preserve anonymity while increasing their number of attempts. The Internet has long been considered a risky channel because many organizations have built defenses around transaction velocity checks, IP address matching and other tools. Once fraudsters were unable to pass through this channel, the call center became the new target and path of least resistance. Not surprisingly, once the industry began to address the call center, fraud began to migrate yet again: increasingly, we hear that the interception and compromise of online credentials via keystroke loggers and other malware is on the rise.

2. Small business fraud on the rise. As the industry has built defenses around its consumer business, fraudsters have again migrated, this time to commercial products. Historically, small business has not been a target for fraud, but that is changing. We see and hear that, while similar to consumer fraud in many ways, small business fraud is often more difficult to detect, frequently because of the "shell businesses" that fraudsters establish.

3. Synthetic ID becoming less of an issue. As lenders tighten their criteria, not only are they turning down those less likely to pay, but their higher standards are likely suppressing synthetic ID fraud, which often creates identities with characteristics that mirror "thin-file" consumers.

4. Family fraud continues. We have seen consumers use the identities of family members in an attempt to obtain and draw down credit. These occurrences are nothing new, but sadly they continue in the current economic environment. Desperate parents use their children's identities to apply for new credit, or other relatives use an elderly person's dormant accounts, hoping to find a short-term lifeline in a bad credit situation.

5. Fraud increasing from specific geographic regions. Some areas are notorious for perpetrating fraud; not long ago it was Nigeria and Russia. We have seen, and are hearing, that the new hot spots are Vietnam and the Eastern European countries neighboring Russia.

6. Falsely claiming fraud. There has been an increase in consumers who claim fraud to avoid an account going into delinquency. Given the poor credit status of many consumers, this pattern is not unexpected. The challenge many clients face is their limited ability to detect this behavior; as a result, many are seeing an increase in fraud rates. This misclassification masks what should be bad debt.

Published: August 30, 2009 by Guest Contributor

By: Heather Grover

I'm often asked in various industry forums to give talks about, or opinions on, the latest fraud trends and fraud best practices. Let's face it: fraudsters are students of their craft, and they continue to study the latest defenses and adapt to the controls that may be in place. You may be surprised, then, to learn that our clients' top-of-mind issues are not only how to fight the latest fraud trends, but how to do so while maximizing automation, managing operational costs and preserving the customer experience, all while meeting compliance requirements. Many times, clients view these as separate goals that do not affect one another. Not only can they be accomplished simultaneously but, in my opinion, they can be considered causal. Let me explain.

When fraud detection is viewed as a goal in itself, automation is not considered as a potential way to improve this metric. By applying analytics, or basic fraud risk scores, clients can easily incorporate many different potential risk factors into a single calculation without combing through various data elements and reports. This calculation or score can predict multiple fraud types and risks with less effort than a human manually and subjectively reviewing specific results. Through an analytic score, good customers can be positively verified in an automated fashion, while only those with the riskiest attributes are routed for manual review. This reserves expensive human resources and expertise for the riskiest consumers.

Compliance requirements can also mandate specific procedures, resulting in arduous manual review processes. Many requirements (Patriot Act, Red Flags, eSignature) mandate verification of identity through match results. Automated decisioning based on these results (or on an analytic score) can streamline this process, in turn reducing operational expense.

While the above may seem an oversimplification, I encourage you to consider how well you are addressing financial risk management. How are you managing automation, operational costs and compliance, all while addressing fraud?
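To make the routing idea concrete, here is a minimal sketch of score-driven decisioning. The score scale, thresholds and field names are hypothetical, not a reference to any particular product:

```python
# Hypothetical score-based routing: thresholds, score scale and names
# are illustrative only, not a real product's API.
def route_applicant(fraud_score, auto_verify_cutoff=0.90, review_cutoff=0.60):
    """Route an applicant based on a fraud risk score in [0, 1],
    where a higher score means lower fraud risk."""
    if fraud_score >= auto_verify_cutoff:
        return "auto_verify"      # positively verified, no human touch
    elif fraud_score >= review_cutoff:
        return "manual_review"    # risky attributes: route to an analyst
    else:
        return "decline"          # riskiest applicants

applicants = {"A-1001": 0.95, "A-1002": 0.72, "A-1003": 0.41}
for app_id, score in applicants.items():
    print(app_id, route_applicant(score))
```

In practice the cutoffs would be tuned against observed fraud rates and review capacity; the point is that only the riskiest slice of applicants consumes analyst time.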

Published: August 30, 2009 by Guest Contributor

By: Kari Michel

Bankruptcies continue to rise and are expected to exceed 1.4 million by the end of this year, according to American Bankruptcy Institute Executive Director Samuel J. Gerdano. Although the overall bankruptcy rate in a lender's portfolio is small (about 1 percent), bankruptcies result in high dollar losses for lenders. Bankruptcy losses as a percentage of total dollar losses are estimated to range from 45 percent for bankcard portfolios to 82 percent for credit unions. Additionally, collection activity is restricted by bankruptcy legislation. As a result, many lenders are using a bankruptcy score in conjunction with their new-applicant risk score to make better acquisition decisions. This concept is a dual score strategy, and it is key to managing risk, minimizing fraud, and controlling the cost of credit.

Traditional risk scores are designed to predict risk (typically, the likelihood of going 90 or more days past due). Although bankruptcies are included within this category, their actual count is relatively small, which makes it harder to distinguish the characteristics typical of a bankruptcy. In addition, a consumer who files bankruptcy was often in good standing beforehand and is not necessarily reflective of a typical risky consumer. By separating out bankrupt consumers, you can more accurately identify characteristics specific to bankruptcy. As mentioned previously, this is important because bankruptcies account for a significant portion of losses.

Bankruptcy scores provide added value when used with a risk score. A matrix approach is used to evaluate both scores and determine effective cutoff strategies. Evaluating applicants with both a risk score and a bankruptcy score can identify more potentially profitable applicants and more high-risk accounts.
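To illustrate the matrix approach, here is a minimal sketch of a dual-score cutoff matrix. The score bands, cutoffs and cell decisions are hypothetical, for illustration only, not a recommended policy:

```python
# Hypothetical dual-score cutoff matrix. On both toy scales a higher
# score means lower risk; bands and decisions are illustrative only.
BANDS = [(0, 619, "low"), (620, 679, "mid"), (680, 850, "high")]

def band(score, bands=BANDS):
    for lo, hi, name in bands:
        if lo <= score <= hi:
            return name
    raise ValueError("score out of range")

# decision[(risk_band, bankruptcy_band)] -> action for that matrix cell
DECISION = {
    ("high", "high"): "approve",
    ("high", "mid"):  "approve",
    ("high", "low"):  "review",   # good risk score, elevated bankruptcy risk
    ("mid",  "high"): "approve",
    ("mid",  "mid"):  "review",
    ("mid",  "low"):  "decline",
    ("low",  "high"): "review",
    ("low",  "mid"):  "decline",
    ("low",  "low"):  "decline",
}

def decide(risk_score, bk_score):
    return DECISION[(band(risk_score), band(bk_score))]

print(decide(risk_score=700, bk_score=600))  # -> "review"
```

In a real strategy, each cell's decision would be set by analyzing historical good/bad and bankruptcy odds at those score combinations.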

Published: August 28, 2009 by Guest Contributor

By: Wendy Greenawalt

In my last blog post I discussed the value of leveraging optimization within your collections strategy. Next, I would like to discuss in detail the use of optimized decisions within the account management of an existing portfolio. Account management decisions range from determining which consumers to target with cross-sell or up-sell campaigns, to line management decisions where an organization is considering line increases or decreases. As in collections, optimization is a key tool in this work stream.

Let's first look at lines of credit and decisions related to credit line management. Uncollectible debt, delinquencies and charge-offs continue to rise across all line-of-credit products. In response, credit card and home equity lenders have begun aggressively reducing outstanding lines of credit. One analyst predicts that the credit card industry will reduce credit limits by $2 trillion by 2010. If that materializes, it would represent a 45 percent reduction in the credit currently available to consumers. This estimate illustrates the immediate reaction many lenders have taken to minimize loss exposure. However, lenders should also consider the long-term impacts on customer retention, brand loyalty and portfolio profitability before making any account management decision.

Optimization is a fundamental tool that can help lenders easily identify accounts that are high risk versus those that are profit drivers. In addition, optimization provides the precise action that should be taken at the individual consumer level. For example, optimization can provide recommendations for:

• when to contact a consumer;
• how to contact a consumer; and
• to what level a credit line could be reduced or increased...

...while considering organizational/business objectives such as:

• profits/revenue/bad debt;
• retention of desirable consumers; and
• product limitations (volume/regional).

In my next few blogs I will discuss each of these variables in detail and the complexities that optimization can consider.
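To show the shape of such a problem, here is a minimal sketch of line management as a constrained optimization: choosing which accounts receive a line increase to maximize expected profit without exceeding an exposure budget. It assumes the open-source PuLP solver, and all account data and coefficients are hypothetical:

```python
# Toy line-management optimization (assumes the open-source PuLP
# library); account data and coefficients are hypothetical.
import pulp

accounts = {
    # account: (expected_profit_if_increased, expected_loss_if_increased,
    #           size_of_line_increase)
    "C-01": (120.0, 40.0, 2000),
    "C-02": (300.0, 260.0, 5000),
    "C-03": (80.0, 10.0, 1000),
}
MAX_NEW_EXPOSURE = 6000  # portfolio-level cap on added credit line

prob = pulp.LpProblem("line_increase_selection", pulp.LpMaximize)
x = {a: pulp.LpVariable(f"inc_{a}", cat="Binary") for a in accounts}

# Objective: maximize expected profit net of expected loss.
prob += pulp.lpSum(x[a] * (p - l) for a, (p, l, _) in accounts.items())
# Constraint: total added credit line stays within the exposure budget.
prob += pulp.lpSum(x[a] * inc for a, (_, _, inc) in accounts.items()) <= MAX_NEW_EXPOSURE

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in accounts:
    print(a, "increase" if x[a].value() == 1 else "hold")
```

A production formulation would cover decreases as well as increases, additional constraints (retention, regional volume) and contact decisions, but the structure, an objective plus business constraints, is the same.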

Published: August 23, 2009 by Guest Contributor

By: Kari Michel

This blog completes my discussion on monitoring new account decisions with a final focus: scorecard monitoring and performance. It is imperative to validate acquisition scorecards regularly to measure how well a model is able to distinguish good accounts from bad accounts. With a sufficient number of aged accounts, performance charts can be used to:

• validate the predictive power of a credit scoring model;
• determine if the model effectively ranks risk; and
• identify the delinquency rate of recently booked accounts at various intervals above and below the primary cutoff score.

To summarize, successful lenders maximize their scoring investment by incorporating a number of best practices into their account acquisition processes:

1. They keep a close watch on their scores, policies and strategies to improve portfolio strength.
2. They create monthly reports to look at population stability, decision management, scoring models and scorecard performance.
3. They update their strategies to meet their organization's profitability goals through sound acquisition strategies, scorecard monitoring and scorecard management.
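As a small illustration of what sits behind a performance chart, the sketch below groups aged, booked accounts into score bands, reports the bad rate per band, and computes a KS statistic as one common measure of rank-ordering. All data and bands are hypothetical:

```python
# Toy scorecard validation: accounts, bands and scores are hypothetical.
# Each row: (score, is_bad) for an aged, booked account.
accounts = [(540, 1), (560, 1), (590, 0), (610, 1), (640, 0),
            (660, 0), (680, 1), (700, 0), (730, 0), (760, 0)]

bands = [(500, 599), (600, 699), (700, 799)]
for lo, hi in bands:
    in_band = [bad for score, bad in accounts if lo <= score <= hi]
    rate = sum(in_band) / len(in_band)
    print(f"{lo}-{hi}: n={len(in_band)} bad_rate={rate:.0%}")

# KS: the max gap between the cumulative distributions of bads and goods
# across score thresholds; a larger gap means better separation.
scores = sorted(set(s for s, _ in accounts))
n_bad = sum(b for _, b in accounts)
n_good = len(accounts) - n_bad
ks = max(
    abs(sum(b for s, b in accounts if s <= t) / n_bad
        - sum(1 - b for s, b in accounts if s <= t) / n_good)
    for t in scores
)
print(f"KS = {ks:.2f}")
```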

Published: August 18, 2009 by Guest Contributor

By: Wendy Greenawalt

The combined impact of rising unemployment, increasing consumer debt burdens and decreasing home values has caused lenders to shift resources away from prospecting and acquisitions to collection and recovery activities. As delinquency and charge-off rates continue to increase, the likelihood of collecting on delinquent accounts decreases, because outstanding debts mount and consumers' ability to pay declines.

Integrating optimized decisions into a collections strategy enables lenders to assign appropriate collection treatments by assessing the level of risk associated with a consumer while considering the customer's responsiveness to particular treatment options. Specifically, collections optimization uses mathematical algorithms to maximize organizational goals while applying constraints such as budget and call center capacity, providing explicit treatment strategies at the consumer level and producing the highest probability of collecting outstanding dollars.

Optimization can be integrated into a real-time call center environment by targeting the right consumers for outbound calls and assigning resources to the consumers most likely to pay. It can also be integrated into traditional lettering campaigns to determine the number and frequency of letters, and the tone of each correspondence. The options for account treatment are virtually limitless and, unlike other techniques, optimization will determine the most profitable strategy while meeting operational and business constraints without simplifying the problem.

By incorporating optimization into a collections strategy that includes a predictive model or score and advanced segmentation, an organization can maximize collected dollars, minimize the costs of collection efforts, improve collections efficiency and determine which accounts to sell off, all while maximizing organizational profits.
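As a toy version of the call-center capacity constraint described above: with one scarce resource and two possible treatments, assigning the scarce calls to the accounts with the highest expected uplift over a letter maximizes expected collections. All figures are hypothetical; a real engine would optimize many treatments and constraints jointly:

```python
# Toy collections treatment assignment: all numbers are hypothetical.
# Each account has an expected recovery under a call vs. a letter.
accounts = {
    "D-01": {"call": 520.0, "letter": 410.0},
    "D-02": {"call": 300.0, "letter": 290.0},
    "D-03": {"call": 660.0, "letter": 380.0},
    "D-04": {"call": 150.0, "letter": 140.0},
}
CALL_CAPACITY = 2  # outbound call slots available today

# With a single capacity constraint, giving the calls to the accounts
# with the largest uplift (call minus letter) is the optimal assignment.
by_uplift = sorted(accounts,
                   key=lambda a: accounts[a]["call"] - accounts[a]["letter"],
                   reverse=True)
plan = {a: ("call" if i < CALL_CAPACITY else "letter")
        for i, a in enumerate(by_uplift)}

expected = sum(accounts[a][t] for a, t in plan.items())
print(plan)
print(f"expected collections: ${expected:,.0f}")
```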

Published: August 18, 2009 by Guest Contributor

There are a lot of areas covered in your comment: efficiency; credit quality (the human side, or character, in an impersonal environment); and policy adherence. We define efficiency and effectiveness using these metrics:

• Turnaround time from application submission to decision;
• Resulting delinquencies based upon type of underwriting (centralized vs. decentralized);
• Production levels between centralized and decentralized; and
• Performance of the portfolio based upon type of underwriting.

Due to the nature of Experian's technology, we are able to capture start and stop times of the typical activities related to loan origination. After analyzing the data from more than 160 financial institutions of all sizes, Experian publishes an annual small business benchmark report that documents loan origination process efficiencies and inefficiencies, benchmarking these as industry standards.

Turnaround time
From the benchmark report, we've seen that centralized institutions consistently have a turnaround time half that of decentralized environments. Interestingly, turnaround time is also much faster for larger institutions than for smaller ones. This is counterintuitive, because smaller community banks tend to promote the close relationships they have with their clients and communities; yet, when it comes to actually making a loan decision, they tend to take longer. In addition to speed, another aspect of turnaround is consistency. We can all think of situations where we were able to beat the stated turnaround times of the larger or centralized institutions. Unfortunately, these tend to be isolated instances rather than the consistent performance delivered in a centralized environment.

Resulting delinquencies and portfolio performance by type of underwriting
Again referring to the annual small business lending benchmark report, delinquencies in a centralized environment are 50 percent of those in a decentralized environment. I have worked with a number of institutions that allow the loan officer/relationship manager to "reverse the decision" made by a centralized underwriting group, the thinking being that the human aspect is otherwise missing in centralized underwriting. When the data is collected, though, the incremental business/portfolio approved by the loan officer (who is close to the client and knows the human side) is not profitable from a credit quality perspective. Specifically, this incremental portfolio typically has a net charge-off rate that exceeds the net interest margin, and that is before we even consider the non-interest expense incurred. Your choice: is the incremental business critical to your success, or could you more fruitfully direct your relationship officers' attention elsewhere?

Production levels between centralized and decentralized
Not to beat a dead horse, but the multiple of two comes into play here too. Looking at the throughput of each role (data entry, underwriter, relationship manager/lender), the production levels of a centralized environment are typically double those of a decentralized one.

It's clear that the data point to the efficiency and effectiveness of a centralized environment.
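To see why that incremental business destroys value, a back-of-the-envelope calculation helps: if the net charge-off rate exceeds the net interest margin, the portfolio loses money even before non-interest expense. The numbers below are purely illustrative:

```python
# Hypothetical incremental-portfolio economics (illustrative numbers only).
balance = 10_000_000          # loans approved via decision reversals
net_interest_margin = 0.040   # 4.0% of balance earned as net interest
charge_off_rate = 0.055       # 5.5% of balance lost to charge-offs
non_interest_expense = 0.020  # 2.0% of balance in servicing/origination cost

profit = balance * (net_interest_margin - charge_off_rate - non_interest_expense)
print(f"pre-tax contribution: ${profit:,.0f}")  # negative: a loss
```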

Published: August 7, 2009 by Guest Contributor

By: Kari Michel

This blog is a continuation of my previous discussion about monitoring your new account acquisition decisions, with a focus on decision management. Decision management reports provide the insight to make more targeted decisions that are sound and profitable. These reports are used to identify: which lending decisions are consistent with scorecard recommendations; the effectiveness of overrides; and whether cutoffs should be adjusted. Decision management reports include:

• accept versus decline score distributions;
• override rates;
• override reason report;
• overrides by loan officer; and
• decisions by loan officer.

Successful lending organizations review this type of information regularly to make better lending policy decisions. Proactive monitoring provides feedback on existing strategies and helps you evaluate whether you are making the most effective use of your score(s). It helps to identify areas of opportunity to improve portfolio profitability. In my next blog, I will discuss the last set of monitoring reports: scorecard performance.
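At its core, an override report compares the scorecard's recommendation with the final decision. Here is a minimal sketch of an override-by-loan-officer tally; the records and field names are hypothetical:

```python
# Toy override-rate report: records and field names are hypothetical.
from collections import defaultdict

# Each application: (loan_officer, scorecard_decision, final_decision)
apps = [
    ("Smith", "decline", "approve"),   # low-side override
    ("Smith", "approve", "approve"),
    ("Jones", "approve", "decline"),   # high-side override
    ("Jones", "decline", "decline"),
    ("Jones", "decline", "approve"),   # low-side override
]

totals = defaultdict(int)
overrides = defaultdict(int)
for officer, score_dec, final_dec in apps:
    totals[officer] += 1
    if score_dec != final_dec:
        overrides[officer] += 1

for officer in totals:
    rate = overrides[officer] / totals[officer]
    print(f"{officer}: {overrides[officer]}/{totals[officer]} overridden ({rate:.0%})")
```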

Published: August 6, 2009 by Guest Contributor

By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with the "baking" stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include: whether the model will be binned (dividing predictive attributes into intervals) or continuous (each variable modeled in its entirety); how to account for missing values (or "false zeros"); how to evaluate the validation sample (hold-out sample vs. an out-of-time sample); how to avoid over-fitting the model; and, finally, what statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines the timeliness and complexity of putting it into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model is critical for successful implementation and model audits.

Scorecard monitoring needs to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance and decision management to ensure that the model performs as expected over time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies.

With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out "just right!"
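As a small illustration of the binned-versus-continuous decision, the sketch below scores one attribute via bins, with missing values given their own bin so that a "false zero" is not treated as a true zero. The attribute, breakpoints and point weights are hypothetical:

```python
# Toy attribute binning: breakpoints and point weights are hypothetical.
BINS = [  # (lower, upper, points) for a "months on file" attribute
    (0, 11, 10),
    (12, 35, 25),
    (36, 120, 40),
    (121, float("inf"), 55),
]
MISSING_POINTS = 20  # a separate bin, so missing is not scored as zero

def score_months_on_file(value):
    if value is None:            # missing / "false zero"
        return MISSING_POINTS
    for lo, hi, points in BINS:
        if lo <= value <= hi:
            return points
    raise ValueError("value out of range")

for v in (3, 48, None):
    print(v, "->", score_months_on_file(v))
```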

Published: August 4, 2009 by Guest Contributor


By: Wendy Greenawalt

When consulting with lenders, we are frequently asked which credit attributes are most predictive and valuable when developing models and scorecards. Because we receive this request often, we recently decided to perform the arduous analysis required to determine whether there are material differences in the attribute makeup of a credit risk model based on the portfolio to which it is applied.

The process we used to identify the most predictive attributes was a combination of art and science, for which our data experts drew upon their extensive credit bureau data experience and the knowledge obtained through engagements with clients from all types of industries. In addition, they applied an empirical process that provided statistical analysis and validation of the credit attributes included. Next, we built credit risk models for a variety of portfolios, including bankcard, mortgage and auto, and compared the credit attributes included in each.

What we found is that some attributes are inherently predictive regardless of the portfolio for which the model is developed. However, when we took the analysis one step further, we identified that there can be significant differences in the account-level data when comparing different portfolio models. This discovery pointed to differences not just in the behavior captured by the attributes, but in the mix of account designations included in the model. For example, an auto risk model might draw on a mix of attributes from all trades, auto, installment and personal finance, while a bankcard risk model may be composed mainly of bankcard, mortgage, student loan and all-trades attributes. Additionally, the attribute granularity included in the models may be quite different, ranging from specific derogatory and public record data to high-level account balance or utilization characteristics.

What we concluded is that it is a valuable exercise to carefully analyze available data and consider all possible credit attribute options in the model-building process, since substantial incremental lift in model performance can be gained from accounts and behaviors that may not have been previously considered when assessing credit risk.
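As a rough sketch of how one might compare the attribute makeup of models across portfolios (synthetic data with hypothetical attribute names, not Experian's actual methodology), fit a model per portfolio and compare which attributes carry the weight:

```python
# Sketch: compare which attributes a risk model leans on per portfolio.
# Synthetic data and attribute names; not an actual bureau methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
attrs = ["auto_trades", "bankcard_util", "all_trades_dq", "student_loan_bal"]

def fit_portfolio_model(true_weights):
    X = rng.normal(size=(5000, len(attrs)))
    logits = X @ np.array(true_weights)
    y = rng.random(5000) < 1 / (1 + np.exp(-logits))  # simulated bads
    model = LogisticRegression().fit(X, y)
    return dict(zip(attrs, model.coef_[0].round(2)))

# Simulate an auto portfolio vs. a bankcard portfolio with different drivers.
print("auto:    ", fit_portfolio_model([1.2, 0.2, 0.8, 0.1]))
print("bankcard:", fit_portfolio_model([0.1, 1.3, 0.7, 0.6]))
```

On data like this, the auto model leans on auto-trade behavior while the bankcard model leans on bankcard utilization, mirroring the mix differences described above.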

Published: July 30, 2009 by Guest Contributor

By: Tracy Bremmer

Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour the mixture into the pan and bake for 35 minutes. Cool before serving.

Model development, whether for a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitoring)! This blog will cover the first three steps in creating your model.

Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud?

Data collection and preparation evaluates what data sources are available and how best to incorporate these data elements within the model build process. The dependent variable (what you are trying to predict) and the types of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection.

Segmentation analysis provides the analytical basis to determine the optimal population splits for a suite of models, maximizing the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores built on individual sub-populations can provide lift over a single score.

Join us for our next blog, where we will cover the next three stages of model development: scorecard development; implementation/documentation; and scorecard monitoring.
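As a toy illustration of the segmentation question (synthetic data; the segment definition and any lift shown are artifacts of the simulation, not real results), one can compare the rank-ordering of a single pooled model against segment-specific models:

```python
# Sketch of a segmentation lift test: does a two-segment scorecard suite
# out-rank a single pooled model? Synthetic data; segment is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 6000
thin_file = rng.random(n) < 0.3            # segment flag: thin vs. thick file
X = rng.normal(size=(n, 3))
# Simulate different risk drivers per segment.
logits = np.where(thin_file, 1.5 * X[:, 0], 1.5 * X[:, 1])
y = rng.random(n) < 1 / (1 + np.exp(-logits))

single = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
seg_pred = np.empty(n)
for seg in (True, False):
    m = thin_file == seg
    model = LogisticRegression().fit(X[m], y[m])
    seg_pred[m] = model.predict_proba(X[m])[:, 1]

print("single model AUC:   ", round(roc_auc_score(y, single), 3))
print("segmented suite AUC:", round(roc_auc_score(y, seg_pred), 3))
```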

Published: July 30, 2009 by Guest Contributor

By: Kari Michel

In my last blog I gave an overview of monitoring reports for new account acquisition decisions, listing the three main categories that reports typically fall into: (1) population stability; (2) decision management; and (3) scorecard performance. Today, I want to focus on population stability.

Applicant pools may change over time as a result of new marketing strategies, changes in product mix, pricing updates, competition, economic changes or a combination of these. Population stability reports identify acquisition trends and the degree to which the applicant pool has shifted over time, including, for custom credit scoring models, the scorecard components driving the shift. Population stability reports include:

• actual versus expected score distribution;
• actual versus expected scorecard characteristic distributions (available with custom models);
• mean applicant scores; and
• volumes, approval and booking rates.

These types of reports provide information to help monitor trends over time, rather than spikes from month to month. Understanding the trends allows one to be proactive in determining whether the shifts warrant changes to lending policies or cutoff scores. Population stability is only one area that needs to be monitored; in my next blog I will discuss decision management reports.
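The actual-versus-expected score distribution comparison is commonly summarized with a population stability index (PSI). The post doesn't prescribe a specific statistic, so treat this as one common convention; the score bands and shares below are hypothetical:

```python
# Population stability index (PSI) sketch; bands and shares are
# hypothetical. A rule of thumb often cited: < 0.10 stable, 0.10-0.25
# moderate shift, > 0.25 significant shift.
import math

expected = [0.10, 0.20, 0.40, 0.20, 0.10]  # development-sample share per band
actual   = [0.06, 0.16, 0.38, 0.25, 0.15]  # current applicants per band

psi = sum((a - e) * math.log(a / e) for a, e in zip(actual, expected))
print(f"PSI = {psi:.3f}")
```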

Published: July 30, 2009 by Guest Contributor

By: Wendy Greenawalt

On any given day, US credit bureaus contain consumer trade data on approximately four billion trades. How that data is interpreted, and how accounts are categorized to build attributes, models and decisioning tools, can and does change over time, because the data reported to the bureaus by lenders and servicers also changes. Over the last few years, new data elements have enabled organizations to create attributes that identify very specific consumer behavior. The challenge for organizations is identifying what reporting changes have occurred and the value the new consumer data can bring to decisioning.

For example, a new reporting standard introduced nearly a decade ago enabled lenders to report whether a trade was secured by money or real property. Before the change, lenders reported these accounts simply as secured trades, making it nearly impossible to determine whether an account was a home equity line of credit or a secured credit card. Since then, lender reporting practices have changed, and reports now clearly state that home equity lines of credit are secured by property, making it much easier to distinguish the two types of accounts.

By taking advantage of the most current credit bureau account data, lenders can create attributes to capture new account types. They can also capture information (such as past-due amounts; utilization; closed accounts; and derogatory information including foreclosure, charge-off and/or collection data) to make informed decisions across the customer life cycle.
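A toy example of the kind of attribute this reporting change enables: once collateral type is reported, HELOC-specific attributes (counts, balances, utilization) can be split out from other secured revolving trades. The records and field names below are hypothetical, not an actual bureau reporting format:

```python
# Toy attribute built from trade-level collateral reporting; field names
# and records are hypothetical, not an actual bureau reporting format.
trades = [
    {"type": "revolving", "secured_by": "property", "balance": 42000, "limit": 50000},
    {"type": "revolving", "secured_by": "deposit",  "balance": 300,   "limit": 500},
    {"type": "revolving", "secured_by": None,       "balance": 1200,  "limit": 4000},
]

# With collateral type reported, a HELOC can be separated from a secured card.
helocs = [t for t in trades
          if t["type"] == "revolving" and t["secured_by"] == "property"]
heloc_util = sum(t["balance"] for t in helocs) / sum(t["limit"] for t in helocs)
print(f"HELOC trades: {len(helocs)}, utilization: {heloc_util:.0%}")
```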

Published: July 14, 2009 by Guest Contributor

By: Tom Hannagan

Some articles I've come across recently have puzzled me. Their authors use the terms "monetary base" and "money supply" synonymously, but those terms are actually very different. The monetary base (currency plus Fed deposits) is a much smaller number than the money supply (M1). The huge change in the base, which the Fed effected by adding $1T or so to infuse quick liquidity into the financial system in late 2007/early 2008, does not necessarily impact M1 (which includes the base plus all bank demand deposits) all that much in the short term, and may impact it even less in the intermediate term if the Fed reduces its holdings of securities. Some are correct, of course, in positing that a rotation out of securities by the Fed will tend to put pressure on market rates.

Some are equating the 2007 liquidity moves of the Fed with a major monetary policy change. When the capital markets froze due to liquidity and credit risks in August/September of 2007, monetary policy was not the immediate risk, or even a consideration. Without the liquidity injections in that timeframe, monetary policy would have become less than an academic consideration.

As for tying "constrained" bank lending (which was actually a slowdown in its growth) to bank reserves on account at the Fed: I don't think banks' reserve balances were ever an issue for lending. Banks slowed down lending because the level of credit risk increased. Borrowers were defaulting. Bank deposit balances were actually increasing through the financial crisis. [See my Feb 26 and March 5 blogs.] So loan funding, at least from deposit sources, was not the problem for most banks. Of course, for a small number of banks that had major securities losses, capital was being lost and was therefore not available to back increased lending. But demand deposit balances were growing. Some authors link bank reserves to the ability of banks to raise liabilities, which makes little sense. Banks' ability to gather demand deposits (insured by the FDIC, at no small expense to the banks) was always wide open, and their ability to borrow funds is much more a function of asset quality (or net asset value) than of their relatively small reserve balances at the Fed.

These actions may result in high inflation and high interest rates, but if so it will be because of poor Fed decisions in the future, not because of the Fed's actions of last year. It will also depend on whether the fiscal (deficit) actions of the government are: 1) economically productive and 2) tempered to a recovery, or not. I think that is a bigger macro-economic risk than Fed monetary policy.

In fact, the only way bank executives can wisely manage the entity over an extended timeframe is to be able to direct resources across all possibilities on a risk-adjusted basis. The question isn't whether risk-based pricing is appropriate for all lines of business, but rather how it might or should be applied. For commercial lending into the middle and corporate markets, there is enough money at stake to warrant evaluating each loan and deposit, as well as the status of the client relationship, on an individual basis. This means some form of simulation modeling by relationship managers on new sales opportunities (including renewals), with the model having ready access to current data on all existing pieces of business within each relationship. [See my April 24 blog entry.]

This process also implies the ability to easily aggregate the risk-return status of a group of related clients and to show lenders how their portfolio of accounts is performing on a risk-adjusted basis. This type of model-based analysis needs to be flexible enough to handle differing loan structures, easy for a lender to use, and quick: the better models can perform such analysis in minutes. I've discussed the elements of such models in earlier posts. But with small business and consumer lending, other considerations come into play. The principles of risk-based pricing are consistent across any loan or deposit. With small business lending, the process of selling, negotiating, underwriting and origination is significantly more streamlined and under some form of workflow control. With consumer lending, there are more regulations to take into account, and mass marketing considerations drive the "sales" process.

The agreement covers what the new owner wants now and may decide it wants in the future. This is a form of strategic business risk that comes with accepting the capital infusion from this particular source.
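Returning to the pricing discussion: to make "evaluating each loan on a risk-adjusted basis" concrete, here is a simplified, generic RAROC-style calculation of the sort a pricing simulation model performs for a single loan. The inputs are hypothetical, and this is not any particular vendor's model:

```python
# Simplified risk-adjusted return calculation for a single loan.
# A generic RAROC-style sketch with hypothetical inputs.
loan_balance = 1_000_000
interest_rate = 0.065          # rate charged to the borrower
funding_cost = 0.030           # cost of funds (transfer-priced)
operating_cost = 0.008         # servicing/origination, as % of balance
expected_loss = 0.010          # default probability x loss given default
allocated_capital = 0.080      # economic capital held against the loan

net_income = loan_balance * (
    interest_rate - funding_cost - operating_cost - expected_loss
)
capital = loan_balance * allocated_capital
raroc = net_income / capital
print(f"risk-adjusted return on capital: {raroc:.1%}")  # compare to hurdle
```

A relationship manager's simulation tool would run this arithmetic across proposed structures and the client's existing business, comparing the resulting return to the bank's hurdle rate.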

Published: June 30, 2009 by Guest Contributor
