All posts by Guest Contributor


By: Mike Horrocks

In the 1950s, Alice Stewart, a British medical professor, embarked on a study to identify what was causing so many cases of cancer in children. Her broad study covered many aspects of the lives of both child and mother, and the final result was clear: a large spike in childhood cancer among children whose mothers had been x-rayed during pregnancy. The data was statistically beyond reproach, and yet for nearly 25 more years the practice of x-raying pregnant women continued.

Why didn't doctors stop using x-rays? They clearly thought the benefits outweighed the risk, and they also had a hard time accepting Dr. Stewart's study. So how did Dr. Stewart gain more acceptance for the study? She had a colleague, George Kneale, whose sole job was to disprove her findings. Only by challenging her theories could she gain the confidence to prove them right. I believe that discipline of challenging the outcome carries over to the practice of risk management as well, as we look to avoid or exploit the next risk around the corner.

So how can we as risk managers find the next trends in risk management? I don't pretend to have all the answers, but here are some great ideas:

- Analyze your analysis. Are you drawing conclusions from obvious data sources or a rather simplified hypothesis? If you are, you can bet your competitors are too. Look for data, tools and trends that can enrich your analysis. In a recent discussion with a lending institution that has a relationship with a logistics firm, they said that the insights they get from the logistics experts have been spot-on in terms of regional business indicators and lending risks.
- Stop thinking about the next 90 days and start thinking about the next 9 quarters. Don't get me wrong, the next 90 days are vital, but what is coming in the next 2+ years is critical.
- Expand the discussion around risk with a holistic risk team. Seek out people with different backgrounds, different ways of thinking and different experiences as part of your risk management team. The broader the coverage of disciplines, the more likely opportunities will be uncovered.

Taking these steps may introduce some interesting discussions, even to the point of conflict in some meetings. However, when we look back at Dr. Stewart and Mr. Kneale, their conflicts brought great results and allowed for some of the best thinking of the time. So go ahead, open yourself and your organization to a little conflict, and let's discover the best thinking in risk management.

Published: August 15, 2012 by Guest Contributor

By: Teri Tassara

The intense focus and competition among lenders for the super prime and prime prospect population has saturated that market, requiring lenders to look outside of their safety net for profitable growth. This leads to the question, "Where are the growth opportunities in a post-recession world?"

Interestingly, the most active and positive movement in consumer credit is in what we are terming "emerging prime" consumers, represented by a VantageScore® of 701-800, or letter grade "C". We've seen that of those consumers classified as VantageScore C in 3Q 2006, 32% had migrated to a VantageScore B and another 4% to an A grade over a 5-year window. And as more of the emerging prime consumers rebuild credit and recover from the economic downturn, demand for credit is increasing once again. Case in point: auto lending to the subprime population is expected to increase the most, fueled by consumer demand. Lenders striving for market advantage are looking to find the next sweet spot ahead of the competition.

Fortunately, lenders can apply sophisticated and advanced analytical methods to confidently segment the emerging prime consumers into the appropriate risk classification and predict their responsiveness for a variety of consumer loans. Here are some recommended steps to identifying consumers most likely to add significant value to a lender's portfolio (a simple banding and migration sketch follows at the end of this post):

- Identify emerging prime consumers
- Understand how prospects are using credit
- Apply the most predictive credit attributes and scores for risk assessment
- Understand responsiveness level

The stops and starts that have shaped this recovery have contributed to years of slow growth and increased competition for the same super prime consumers. However, these post-recession market conditions are gradually paving the way to profitable growth. With advanced science, lenders can pair caution with a profitable growth strategy, applying greater rigor and discipline in their decision-making.
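To make the first step concrete, here is a minimal sketch of banding consumers by VantageScore letter grade and measuring migration between two snapshots. The post confirms "C" = 701-800 on the 501-990 scale; the other band boundaries are the conventional letter-grade ranges, and all data and names below are purely illustrative:

```python
# Hypothetical sketch: band consumers by VantageScore letter grade and
# measure band migration between two archive snapshots. Band boundaries
# other than C (701-800, confirmed above) are assumed; data is invented.

def score_band(score: int) -> str:
    """Map a VantageScore (501-990 scale) to its letter grade."""
    if score >= 901: return "A"
    if score >= 801: return "B"
    if score >= 701: return "C"   # "emerging prime"
    if score >= 601: return "D"
    return "F"

def migration_rates(snapshot_t0: dict, snapshot_t1: dict, from_band: str = "C") -> dict:
    """Share of consumers starting in `from_band` at t0 that land in each band at t1."""
    starters = [cid for cid, s in snapshot_t0.items() if score_band(s) == from_band]
    counts: dict = {}
    for cid in starters:
        if cid in snapshot_t1:
            band = score_band(snapshot_t1[cid])
            counts[band] = counts.get(band, 0) + 1
    total = sum(counts.values())
    return {band: n / total for band, n in sorted(counts.items())}

# Example: consumer_id -> score at two points in time (invented figures)
t0 = {1: 720, 2: 760, 3: 705, 4: 790}
t1 = {1: 820, 2: 915, 3: 698, 4: 765}
print(migration_rates(t0, t1))  # {'A': 0.25, 'B': 0.25, 'C': 0.25, 'D': 0.25}
```

Run against two real archive snapshots five years apart, the same kind of calculation would reproduce migration figures like the 32%-to-B and 4%-to-A movement cited above.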

Published: August 10, 2012 by Guest Contributor

By: Shannon Lois

These are challenging times for large financial institutions. Still feeling the impact of the financial crisis of 2007, the banking industry must endure increased oversight, declining margins and fierce competition, all in a lackluster economy.

Financial institutions are especially subject to closer regulatory scrutiny. As part of this stepped-up oversight, the Federal Reserve Board (FRB) conducts annual assessments, including "stress tests," of the capital planning processes and capital adequacy of bank holding companies (BHCs) to ensure that these institutions can continue operations in the event of economic distress. The Fed expects banks to have credible plans, evaluated across a range of criteria, showing that they have adequate capital to continue to lend even under adverse economic conditions. Minimum capital standards are governed both by the FRB and under Basel III. The Basel Committee established the Basel accords to provide revised safeguards following the financial crisis, an effort to ensure that banks met capital requirements and were not overly leveraged.

Using input data provided by the BHCs themselves, FRB analysts have developed stress scenario methodologies for banks to follow. These models generate loss estimates and post-stress capital ratios. The Comprehensive Capital Analysis and Review (CCAR) includes a somewhat unnerving hypothetical scenario that depicts a severe recession in the U.S. economy, with an unemployment rate of 13%, a 50% drop in equity prices and a 21% decline in housing prices. Stress testing is intended to measure how well a bank could endure this gloomy picture.

Between meeting the compliance requirements of both Basel III and CCAR, financial institutions commit sizeable time and resources to administrative tasks that offer few easily quantifiable returns. Nevertheless, in addition to ensuring they don't suddenly discover themselves in a trillion-dollar hole, these audit responsibilities do offer some other benefits and considerations.
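The Fed's supervisory models are far more detailed than anything that fits in a blog post, but the basic arithmetic of a post-stress capital check can be sketched in a few lines. Everything below, the figures, the field names, and the 5 percent tier 1 common floor used as the pass line, is illustrative only:

```python
# Illustrative sketch of the arithmetic behind a post-stress capital check.
# All figures and the 5% floor are assumptions for illustration; supervisory
# models estimate losses and revenues in far greater detail.

def post_stress_ratio(tier1_common: float,
                      projected_losses: float,
                      projected_ppnr: float,
                      risk_weighted_assets: float) -> float:
    """Post-stress tier 1 common ratio: starting capital, less projected
    stress losses, plus pre-provision net revenue (PPNR), over RWA."""
    stressed_capital = tier1_common - projected_losses + projected_ppnr
    return stressed_capital / risk_weighted_assets

# Hypothetical BHC, figures in $billions
ratio = post_stress_ratio(tier1_common=60.0,
                          projected_losses=35.0,
                          projected_ppnr=20.0,
                          risk_weighted_assets=700.0)
print(f"Post-stress tier 1 common ratio: {ratio:.1%}")   # 6.4%
print("Clears 5% floor" if ratio >= 0.05 else "Capital plan at risk")
```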

Published: August 1, 2012 by Guest Contributor

The CFPB, the FTC and other regulatory authorities have been building up their presence in debt collections. Are you in the line of fire, or are you already prepared to effectively manage your riskiest accounts? This year's collections headlines show an increased need to manage account risk. Consumers have been filing suits for improper collections under the Fair Debt Collection Practices Act (FDCPA), the Servicemembers Civil Relief Act (SCRA) and the Telephone Consumer Protection Act (TCPA), to name a few. Collection agencies have already paid millions in fines under this increased scrutiny. One collections mistake could cost your business thousands or even millions, a cost any collector would hate to face.

So, what can you do to better manage your regulatory risk?

1. First, understand and follow the collection regulations associated with your accounts.
2. Second, follow the headlines and pay close attention to your regulatory authorities.
3. Last, leverage data filtering tools to identify accounts in a protected status.

The best solution is a streamlined tool that includes filters to identify multiple types of regulatory risk in one place. At a minimum, you should be able to identify the following types of risk associated with your accounts (a filtering sketch follows below):

- Bankruptcy status and details
- Deceased indicator and dates
- Military indicator
- Cell phone type indicator
- Fraud indicators
- Litigious consumers

Why wait? Start identifying and mitigating your risk as early in your collections efforts as possible.
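As a rough illustration, here is a minimal sketch of that kind of protected-status filter. The flag names mirror the list above; the data structure and the routing rule are hypothetical, not any particular vendor's tool:

```python
# Hypothetical sketch: flag accounts whose protected status requires special
# handling before collections outreach. Field names mirror the list above.

from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    account_id: str
    bankruptcy: bool = False       # bankruptcy stay in effect
    deceased: bool = False
    active_military: bool = False  # SCRA protections
    phone_is_cell: bool = False    # TCPA consent rules for autodialers
    fraud_indicator: bool = False
    known_litigious: bool = False

def regulatory_flags(acct: Account) -> List[str]:
    """Return the list of protected statuses attached to an account."""
    checks = [
        ("bankruptcy", acct.bankruptcy),
        ("deceased", acct.deceased),
        ("SCRA/military", acct.active_military),
        ("TCPA/cell phone", acct.phone_is_cell),
        ("fraud", acct.fraud_indicator),
        ("litigious consumer", acct.known_litigious),
    ]
    return [name for name, flagged in checks if flagged]

accounts = [
    Account("A-100"),
    Account("A-101", active_military=True, phone_is_cell=True),
]
for acct in accounts:
    flags = regulatory_flags(acct)
    route = "review queue" if flags else "standard workflow"
    print(acct.account_id, "->", route, flags)
```

Running the filter as early as possible, before the first dialer attempt, is what keeps a single mis-routed account from becoming an FDCPA, SCRA or TCPA headline.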

Published: July 31, 2012 by Guest Contributor

By: Stacy Schulman

Earlier this week the CFPB announced a final rule addressing its role in supervising certain credit reporting agencies, including Experian and others that are large market participants in the industry. To view the original content, see Experian and the CFPB - Both Committed to Helping Consumers.

During a field hearing in Detroit, CFPB Director Richard Cordray spoke about a new regulatory focus on the accuracy of the information received by the credit reporting companies, the role they play in assembling and maintaining that information, and the process available to consumers for correcting errors. We look forward to working with the CFPB on these important priorities. To read more about how Experian prioritizes these information essentials for consumers, clients and shareholders, read more on the Experian News blog. Learn more about Experian's view of the Consumer Financial Protection Bureau.

___________________

Original content provided by: Tony Hadley, Senior Vice President of Government Affairs and Public Policy

About Tony: Tony Hadley is Senior Vice President of Government Affairs and Public Policy for Experian. He leads the corporation's legislative, regulatory and policy programs relating to consumer reporting, consumer finance, direct and digital marketing, e-commerce, financial education and data protection. Hadley leads Experian's legislative and regulatory efforts with a number of trade groups and alliances, including the American Financial Services Association, the Direct Marketing Association, the Consumer Data Industry Association, the U.S. Chamber of Commerce and the Interactive Advertising Bureau. Hadley is Chairman of the National Business Coalition on E-commerce and Privacy.

Published: July 18, 2012 by Guest Contributor

By: Mike Horrocks

This week, several key financial institutions will be submitting their "living wills" to Washington as part of the Dodd-Frank legislation. I have some empathy for how those institutions must feel as they submit them. I don't think anyone would say writing a living will is fun. I remember when my wife and I felt compelled to have one in place, as we realized that we did not want to leave any questions unanswered for our family.

For those not familiar with the concept of the living will, consider the more widely known medical description. The Mayo Clinic describes living wills as follows: "Living wills and other advance directives describe your preferences regarding treatment if you're faced with a serious accident or illness. These legal documents speak for you when you're not able to speak for yourself — for instance, if you're in a coma." Now imagine a bank in a coma.

I appreciate the fact that these living wills are taking place, but pulling back my business law books, I seem to recall that one of the benefits of a corporation versus, say, a sole proprietorship is that the corporation can be effectively immortal, even eternal. In fact, the Dictionary.com reference calls out that a corporation has "a continuous existence independent of the existences of its members". So now imagine a bank eternally in a coma.

Now, I cannot avoid all of the unexpected risks that may come up in my personal life, like an act of God, that might put me into a coma and invoke my living will, but I can do things voluntarily to make sure that I don't visit the emergency room any time soon. I can exercise, eat right, control my stress and take other healthy steps, and in fact I meet with a health coach to monitor and track these things.

Banks can take those same steps too. They can stay operationally fit, lend right and monitor the stress in their portfolios. They can have their health plans in place and a personal trainer to help them stay fit (and maybe even push them to levels of fitness they did not think they could reach). Now imagine a fit, strong bank.

So as printers churn, inboxes fill, and regulators read through thousands of pages of bank living wills, let's think of the gym coach or personal trainer who pushed us to improve, and think about how we can be healthy and fit and avoid the not-so-pleasant alternative of a financial coma.

Published: July 2, 2012 by Guest Contributor

By: Joel Pruis

From a score perspective, we have established the high-level standards/reporting that will be needed to stay on top of the resulting decisions. But there is a lot of further detail that should be considered, and further segmentation that must be developed or maintained.

Auto Decisioning

A common misperception around auto-decisioning and the use of scorecards is that it is an all-or-nothing proposition. More specifically, that if you use scorecards, you have to make the decision entirely based upon the score. That is simply not the case. I have done consulting engagements cleaning up after decisioning strategies built on this misperception, and the results are not pretty. Overall, the highest percentage of auto-decisioning that I have witnessed has been in the 25-30% range, and the emphasis is on the "segment". The segment is typically the lower-dollar requests, say $50,000 or less, and the percentage is not measured across the entire application population. This leads into the discussion around the various segments and the decisioning strategy for each one. (A minimal routing sketch follows below.)

One other comment around auto-decisioning. The definition used in this blog is the systematic decision without human intervention. I have heard comments such as "competitors are auto-decisioning up to $1,000,000". The reality behind such comments is that the institution is granting loan authority to an individual to approve an application should it meet particular financial ratios and other criteria. The human intervention comes from verifying that the information has been captured correctly and that the financial ratios make sense relative to the final result. That last point is what disqualifies it as "auto-decisioning": the individual is given the responsibility to ensure data quality and to ensure nothing else is odd or might disqualify the application from approval or declination. Once a human eye is looking at an application, judgment comes into the picture and we introduce the potential for inconsistencies and/or extension of the time to render the decision. Auto-decisioning is just that: automatic. It is a yes/no decision based upon objective factors that, if met, allow the decision to be made. Other factors, if not included in the decision strategy, are not included. So, my fellow credit professionals, should you hear someone say they are auto-decisioning a high percentage of their applications or a high dollar amount per application, challenge, question and dig deeper. Treat it like the fishing story "I caught a fish THIS BIG".

No financials segment

This is the highest-volume and lowest total-dollar production segment of any business banking/small business product set. We discussed the use of financials in the prior blog on application requirements, so I will not repeat that discussion here. Our focus will be on the decisioning of these applications. Using score and application characteristics as the primary data source, this segment is the optimal segment for auto-decisioning. It speeds the decision process and provides the greatest amount of consistency in the decisions rendered. Two key areas for this segment are risk premiums and scorecard validations. The risk premium is important, as you are going to accept a higher level of losses for the sake of efficiencies in the underwriting/processing of the application. The end result is lower operational costs and relatively higher credit losses, but the end yield on this segment meets the required, yet practical, thresholds for return.
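To make the segmentation concrete, here is a minimal sketch of segment-based routing: only the low-dollar, no-financials segment is decisioned systematically, and everything else goes to a human underwriter. The $50,000 segment boundary comes from the discussion above; the score cutoffs and function shape are purely hypothetical:

```python
# A minimal sketch of segment-based auto-decisioning. The $50,000 segment
# cut comes from the post; the scorecard cutoffs are hypothetical.

AUTO_DECISION_LIMIT = 50_000   # low-dollar, no-financials segment boundary
APPROVE_SCORE = 220            # hypothetical scorecard cutoffs
DECLINE_SCORE = 180

def decision(amount: float, score: int, data_complete: bool) -> str:
    """Yes/no on objective factors only; no human judgment in this path."""
    if amount > AUTO_DECISION_LIMIT or not data_complete:
        return "MANUAL REVIEW"            # financials segment / bad data
    if score >= APPROVE_SCORE:
        return "AUTO-APPROVE"
    if score < DECLINE_SCORE:
        return "AUTO-DECLINE"
    return "MANUAL REVIEW"                # gray zone between cutoffs

print(decision(25_000, 240, True))   # AUTO-APPROVE
print(decision(25_000, 150, True))   # AUTO-DECLINE
print(decision(250_000, 240, True))  # MANUAL REVIEW: financials segment
```

Note that the function never approves or declines outside the low-dollar segment; that is the point. The moment an application needs a human eye, it is no longer auto-decisioned, whatever the sales pitch says.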
The one thing that I will repeat from a prior blog is that you may request financials after the initial review, but the frequency should be low and should also be monitored. The request for financials should not be a "belt and suspenders" approach. If you know what the financials are likely to show, then don't request them; they are unnecessary. You are probably right, and the collection of the financials will only serve to elongate the response time, frustrate everyone involved in the process and not change the expected results.

Financials segment

This is the relatively lower unit-volume but higher dollar-volume segment. Likely this segment will have no auto-decisioning, as the review of financials typically mandates a judgmental review. From an operational perspective, these are high-dollar requests, and thus the manual review does not push this segment into a losing proposition. From a potential operational-lift perspective, the ability to drive a higher volume of applications into auto-decisioning is simply not available, as we are talking about probably less than 40% (if not fewer) of all applications in this segment.

In this segment, consistency becomes more difficult, as the underwriter tends to want to put his/her own approach on the deal. Standardization of the analysis approach (at least initially) is critical for this segment. Consistency in the underwriting and the various criteria allows for greater analysis to determine where issues are developing or where we are realizing the greatest success. My recommended approach is to standardize (via automation in the origination platform) the various calculations in a manner that will generate the most conservative result. Bluntly put, my approach was to make the deal look as ugly as possible; if it still passed the various criteria, no additional work was needed, nor was there any need for a detailed explanation of how I justified the deal/request. Only if it did not meet the criteria using the most conservative approach would I need to do any work, and only if it was truly going to make a difference. Basic characteristics in this segment include business cash flow, personal debt-to-income, global cash flow and leverage. Others may be added, but on a case-by-case basis. (A sketch of standardizing these calculations follows below.)

What about the score? If I am doing so much judgmental underwriting, why calculate the score in this segment? In a nutshell, to act as the risk rating methodology for the portfolio approach. Even with the judgmental approach, we do not want to fall into the trap of thinking we are going to be able to adequately monitor this segment proactively enough to justify the risk rating at any point in time after the loan is booked. We have been focusing on the origination process in this blog series, but I need to point out that since we are not going to be doing a significant amount of financial statement monitoring in the small business segment, we need to begin to move away from the 1-8 (or 9 or 10 or whatever) risk rating method for the small business segment. We cannot be granular enough with this rating system, nor can we constantly stay on top of the changing risk levels of individual clients. But I am going to save the portfolio management area for a future blog.

Regardless of the segment, please keep in mind that we need to be able to access the full detail of the information that is being captured during the origination process, along with the subsequent payment performance.
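Here is a minimal sketch of standardizing the "most conservative" calculations named above (business cash flow, personal debt-to-income, global cash flow, leverage). The ratio definitions are conventional and the thresholds are invented for illustration; they are not the author's actual criteria:

```python
# Hypothetical sketch of standardized, conservative underwriting ratios.
# Ratio definitions are conventional; thresholds are illustrative only.

def business_dscr(business_cash_flow: float, business_debt_service: float) -> float:
    return business_cash_flow / business_debt_service

def personal_dti(personal_debt_payments: float, personal_income: float) -> float:
    return personal_debt_payments / personal_income

def global_dscr(business_cash_flow: float, personal_cash_flow: float,
                business_debt_service: float, personal_debt_service: float) -> float:
    """Global cash flow: combined sources over combined obligations."""
    return ((business_cash_flow + personal_cash_flow) /
            (business_debt_service + personal_debt_service))

def passes_conservative_screen(app: dict) -> bool:
    """If the deal clears the ugliest version of the numbers, no extra work."""
    return (business_dscr(app["biz_cf"], app["biz_ds"]) >= 1.25 and
            personal_dti(app["pers_debt"], app["pers_inc"]) <= 0.40 and
            global_dscr(app["biz_cf"], app["pers_cf"],
                        app["biz_ds"], app["pers_ds"]) >= 1.20 and
            app["liabilities"] / app["net_worth"] <= 4.0)

app = {"biz_cf": 150_000, "biz_ds": 100_000, "pers_debt": 30_000,
       "pers_inc": 120_000, "pers_cf": 60_000, "pers_ds": 40_000,
       "liabilities": 800_000, "net_worth": 400_000}
print(passes_conservative_screen(app))  # True
```

Automating the calculation in the origination platform, rather than leaving it to each underwriter's spreadsheet, is what delivers the consistency the post argues for.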
As you are capturing the data, keep in mind the ability to:

- Access this data for purposes of analysis
- Connect the data from origination to the payment performance data to effectively validate the scorecard and the underwriting/decisioning strategies
- Dive into the details to find the root cause of a performance problem or success

The topic of decisioning strategies is broad, so please let me know if you have any specific topics that you would like addressed or questions that we might be able to post for responses from the industry.

Published: June 29, 2012 by Guest Contributor

Recently we released a white paper that emphasizes the need for better, more granular indicators of local home-market conditions and borrower home equity, with a very interesting new finding on leading indicators in local-area credit statistics.

Home-equity indicators with new credit data methods for improved mortgage risk analytics (Experian white paper, April 2012). Click here to download the white paper.

In the run-up to the U.S. housing downturn and financial crisis, perhaps the greatest single risk-management shortfall was poorly predicted home prices and borrower home equity. This paper describes new improvements in housing market indicators derived from local-area credit and real-estate information. True housing markets are very local, and until recently, local real-estate data have not been systematically available and interpreted for broad use in modeling and analytics. Local-area credit data, similarly, are relatively new, and their potential for new indicators of housing market conditions is studied here in Experian's Premier Aggregated Credit Statistics℠. Several examples provide insights into home-equity indicators for improved mortgage models, predictions, strategies and combined LTV measurement.

The paper finds that for existing mortgages evaluated with current combined LTV and borrower credit score, local-area credit statistics are an even stronger add-on default predictor than borrower credit attributes.

Authors: John Straka and Chuck Robida, Experian; Michael Sklarz, Collateral Analytics

Published: June 22, 2012 by Guest Contributor

Outstanding automotive loan balances stood at $708 billion in Q1 2012, a figure last seen two years ago. Banks and captive auto lenders hold two-thirds of the outstanding balances (34 percent and 33 percent, respectively), while credit unions hold 21 percent. Learn about the latest automotive credit trends by attending our upcoming Webinar.

Source: Experian-Oliver Wyman Market Intelligence Reports.

Published: May 30, 2012 by Guest Contributor

The average turnaround time to make a lending decision varies materially between financial institutions. Institutions with low levels of automation are typically less competitive on price due to the higher cost of manual reviews. For customers, slow decisions lead to high levels of dissatisfaction, complaints and switching between institutions. For more practical insights and best practices for key areas of business banking, and a look at the features of a leading-edge approach to customer management, download the full white paper.

Source: Strategic customer management for business banking portfolios, by Experian's Global Consulting Practice.

Published: May 18, 2012 by Guest Contributor

As part of its expanded guidance, the Office of the Comptroller of the Currency explicitly recommends that financial services firms utilizing predictive models and decision analytics run regular validations to gauge model efficacy. The VantageScore® credit score model was recently measured against the best credit score models from each of the three largest credit reporting companies (CRCs). Comparing KS values (the Kolmogorov-Smirnov statistic, a standard measure of a score's ability to separate good accounts from bad), the VantageScore® credit score model shows exceptionally strong performance for mortgage originations, outperforming the CRC models by 8 percent to 12 percent. The average outperformance is 3 percent to 4 percent across the board for most of the key industries. View the VantageScore® Webinar: Executing Effective Validations in 2011 and Beyond.

Source: Executing Effective Validations, American Banker. VantageScore® is owned by VantageScore Solutions, LLC.
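For readers who want to see the mechanics, here is a minimal sketch of a KS calculation: the maximum gap between the cumulative score distributions of good and bad accounts. The scores below are invented; an actual validation would pair origination scores with observed performance:

```python
# Sketch of a KS (Kolmogorov-Smirnov) calculation: the maximum vertical
# distance between the empirical CDFs of good and bad accounts' scores.
# Data is invented for illustration.

def ks_statistic(good_scores, bad_scores):
    """Max |CDF_bad(t) - CDF_good(t)| over all observed score thresholds t."""
    goods = sorted(good_scores)
    bads = sorted(bad_scores)
    thresholds = sorted(set(goods + bads))
    ks = 0.0
    for t in thresholds:
        cdf_good = sum(s <= t for s in goods) / len(goods)
        cdf_bad = sum(s <= t for s in bads) / len(bads)
        ks = max(ks, abs(cdf_bad - cdf_good))
    return ks

goods = [640, 700, 720, 750, 780, 800, 830, 870]
bads = [540, 580, 610, 640, 660, 700, 730]
print(f"KS = {ks_statistic(goods, bads):.3f}")
```

A higher KS means the score pushes bad accounts toward low scores and good accounts toward high scores more cleanly, which is exactly the separation a validation exercise is testing for.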

Published: May 15, 2012 by Guest Contributor

A vintage analysis comparing 60 or more days past due (DPD) delinquency performance at the one-year mark for mortgages originated between 2002 and 2010 shows that 2010 outperformed previous years, with a delinquency rate of 0.37 percent. The worst-performing vintage was 2006, with a 60 or more DPD delinquency rate of 3.84 percent, more than 10 times the delinquency rate of 2010. Listen to our recorded Webinar for a detailed look at the current state of strategic default in mortgage and an update on consumer credit trends.

Source: Experian-Oliver Wyman Market Intelligence Reports
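The underlying calculation is straightforward to sketch: group loans by origination year and compute the 60+ DPD rate at 12 months on book. The records below are invented stand-ins for a full origination file:

```python
# Sketch of a vintage delinquency calculation: 60+ DPD rate at the one-year
# mark, grouped by origination year. Records are invented for illustration.

from collections import defaultdict

# (origination_year, was_60plus_dpd_at_12_months)
loans = [(2006, True), (2006, False), (2006, True),
         (2010, False), (2010, False), (2010, True)]

totals, bads = defaultdict(int), defaultdict(int)
for vintage, delinquent in loans:
    totals[vintage] += 1
    bads[vintage] += delinquent

for vintage in sorted(totals):
    rate = bads[vintage] / totals[vintage]
    print(f"{vintage} vintage: {rate:.2%} 60+ DPD at 12 months on book")
```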

Published: May 10, 2012 by Guest Contributor

After increasing for the first time in nearly two years, the 30 and 60 days past due (DPD) mortgage delinquencies as a percentage of balances returned to their downward trend, with Q4 delinquency rates of 2.18 percent and 1.06 percent, respectively. This represents a decline of 3.5 percent for the 30 DPD category and a 2.8 percent decline for 60 DPD. Listen to our recorded Webinar for a detailed look at the current state of mortgage strategic default and an update on consumer credit trends from the Q4 2011 Experian-Oliver Wyman Market Intelligence Reports. Source: Experian-Oliver Wyman Market Intelligence Reports.

Published: April 26, 2012 by Guest Contributor

One of the most successful best practices for improving agency performance is the use of scorecards for assessing and rank-ordering the performance of agencies in competition with each other. Much like people, agencies thrive when they understand how they are evaluated, how to influence the factors that contribute to success, and the recognition and reward for top-tier performance. Rather than a simple view of performance based upon a recovery rate as a percentage of total inventory, best practice suggests that performance is more accurately reflected in vintage batch liquidation and peer group comparisons to the liquidation curve. Why? In a nutshell, differences in inventory aging and the liquidation curve. Let's explain this in greater detail.

Historically, collection agencies would provide their clients with better performance reporting than their clients had available to them. Clients would know how much business was placed in aggregate, but not by the specific vintage relating to the month or year of placement. Thus, when a monthly remittance was received, the client would be incapable of understanding whether this month's recoveries were from accounts placed last month, this year, or three years ago. This made forecasting of future cash flows from recoveries difficult, in that you would have no insight into where the funds were coming from. We know that as a charged-off debt ages, its future liquidation rate is generally downward sloping (the exception is auto finance debt, as there is a delay between the time of charge-off and rehabilitation of the debtor, thus future flows are higher beyond the 12-24 month timeframe). How would you predict future cash flows and liquidation rates without understanding the different vintages in the overall charged-off population available for recovery?

This lack of visibility into liquidation performance created another issue: how do you compare the performance of two different agencies without understanding the age of the inventory and how it is liquidating? As an example, let's assume that Agency A has been handling your recovery placements for a few years and has an inventory of $10,000,000 that spans 3+ years, of which $1,500,000 has been placed this year. We know from experience that placements from 3 years ago experienced their highest liquidation rate earlier in their lifecycle, and the remaining inventory from those early vintages is uncollectible or almost fully liquidated. Agency A remits $130,000 this month, for a recovery rate of 1.3%. Agency B is a new agency just signed on this year and has an inventory of $2,000,000 assigned to them. Agency B remits $150,000 this month, for a recovery rate of 7.5%. So, you might assume that Agency B outperformed Agency A by a whopping 6.2%. Right? Er … no. Here's why.

If we had better visibility into Agency A's inventory, and from where their remittance of $130,000 was derived, we would have known that only a couple of small, insignificant payments came from the older vintages of the $10,000,000 inventory, and that of the $130,000 remitted, over $120,000 came from current-year inventory (the $1,500,000 in current-year placements). Thus, when analyzed on a vintage batch liquidation basis, Agency A collected $120,000 against inventory placed in the current year, for a liquidation rate of 8.0%. The remaining remittance of $10,000 was derived from prior years' inventory.
So, when we compare Agency A, with current-year placements inventory of $1,500,000 and a recovery rate against those placements of 8.0% ($120,000), versus Agency B, with current-year placements inventory of $2,000,000 and a recovery rate of 7.5% ($150,000), it's clear that Agency A outperformed Agency B. This is why the vintage batch liquidation model is the clear-cut best practice for analysis and MI. (A short sketch of this calculation follows below.)

By using a vintage batch liquidation model and analyzing performance against monthly batches, you can begin to interpret and define the liquidation curve. A liquidation curve plots monthly liquidation rates against a specific vintage, usually by month.

Exhibit 1: Liquidation Curve Analysis

Note that in Exhibit 1, the monthly liquidation rate as a percentage of the total vintage batch inventory appears on the y-axis, and the month of funds received appears on the x-axis. Thus, for each of the three vintage batches, we can track the monthly liquidation rates for each batch from its initial placement throughout the recovery lifecycle. Future monthly cash flow for each discrete vintage can be forecasted based upon past performance, and then aggregated to create a future recovery projection. The most sophisticated and up-to-date collections technology platforms, including Experian's Tallyman™ and Tallyman Agency Management™ solutions, provide vintage batch or laddered reporting. These reports can then be used to create scorecards for comparing and weighing the performance results of competing agencies for market-share competition and performance management.

Scorecards

As we develop an understanding of liquidation rates using the vintage batch liquidation curve example, we see the obvious opportunity to reward performance based upon targeted liquidation performance in time series from the initial placement batch. Agencies have different strategies for managing client placements and balancing clients' liquidation goals with agency profitability. The more aggressive the collections process aimed at creating cash flow, the greater the costs. Agencies understand the concept of unit yield and profitability; they seek to maximize the collection result at the lowest possible cost to create profitability. Thus, agencies will "job slope" clients' projects so that as the collectability of a placement declines (driven by balance size, customer credit score, date of last payment, phone number availability, type of receivable, etc.), the cost of the collection effort declines with it. For utility companies and other credit grantors with smaller-balance receivables, this presents a greater problem, as smaller balances create smaller unit yields. Job sloping involves reducing the frequency of collection efforts, employing lower-cost collectors to perform some of the collection efforts, and, where applicable, engaging offshore resources at lower cost to perform collection efforts.

You can often see the impact of various collection strategies by comparing agency performance in monthly intervals from batch placement. Again, using a vintage batch placement analysis, we track the performance of monthly batch placements assigned to competing agencies. We compare the liquidation results on these specific batches in monthly intervals, up until the receivables are recalled. Typical patterns emerge from this analysis that inform you of the collection strategy differences.
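Here is the Agency A / Agency B comparison as a minimal sketch, using the figures from the example above. The function and variable names are illustrative; the point is that remittances are credited to the vintage they came from, not to total inventory:

```python
# Sketch of the vintage batch liquidation calculation worked through above.
# Figures reproduce the Agency A / Agency B example; names are illustrative.

def liquidation_rate(remitted: float, vintage_inventory: float) -> float:
    return remitted / vintage_inventory

# Agency A: $10MM total inventory, but $120K of this month's $130K remit
# came from the $1.5MM of current-year placements.
naive_a = liquidation_rate(130_000, 10_000_000)     # 1.3% - misleading
vintage_a = liquidation_rate(120_000, 1_500_000)    # 8.0% on current vintage

# Agency B: new this year, $2MM inventory, $150K remitted.
vintage_b = liquidation_rate(150_000, 2_000_000)    # 7.5%

print(f"Agency A, naive:   {naive_a:.1%}")
print(f"Agency A, vintage: {vintage_a:.1%}")
print(f"Agency B, vintage: {vintage_b:.1%}")
# On a like-for-like vintage basis, Agency A (8.0%) beats Agency B (7.5%).
```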
Let's look at an example of differences across agencies and how these strategy differences can have an impact on liquidation. As we examine the results across both the first and second 30-day phases, we are likely to find that Agency Y performed the highest of the three agencies, with the highest collection costs and the corresponding impact on profitability. Their collection effort was the most uniform over the two 30-day segments, using the dialer at 3-day intervals in the first 30-day segment, and then using a balance segmentation scheme to differentiate treatment at 2-day or 4-day intervals throughout the second 30-day phase. Their liquidation results would be the strongest, in that liquidation rates would be sustained into the second 30-day interval.

Agency X would likely come in third place in the first 30-day phase, due to a 14-day delay strategy followed by two outbound dialer calls at 5-day intervals. They would perform better in the second 30-day phase due to the tighter 4-day intervals for dialing, likely moving into second place in that phase, albeit at higher collection costs for them.

Agency Z would come out of the gates in the first 30-day phase in first place, due to an aggressive daily dialing strategy, and their takeoff and early liquidation rate would seem to suggest top-tier performance. However, in the second 30-day phase, their liquidation rate would fall off significantly due to the use of a less expensive IVR strategy, negating the gains from the first phase and potentially reducing their overall position across the two 30-day segments versus their peers.

The point is that with a vintage batch liquidation analysis, we can isolate the performance of a specific placement across multiple phases/months of collection efforts, without having that performance insight obscured by new business blended into the analysis. Had we used the more traditional current-month-remittance-over-inventory measure, Agency Z might be put in a more favorable light: each month they collect new paper aggressively and generate strong liquidation results competitively, but then virtually stop collecting against non-responders, thus "creaming" the paper in the first phase and leaving a lot on the table. That said, how do we ensure that an Agency Z is not rewarded with market share? Using the vintage batch liquidation analysis, we develop a scorecard that weights performance across the entire placement batch lifecycle and summarizes points earned in each 30-day phase.

To read Jeff's related posts on the topic of agency management, check out:

- Vendor auditing best practices that will help your organization succeed
- Agency management, vendor scorecards, auditing and quality monitoring

Published: April 25, 2012 by Guest Contributor

A recent study compiled by VantageScore® Solutions found that the default risk associated with mortgage originations has improved. The likelihood that a borrower will become 90 or more days past due after a mortgage has been originated was 2.5 percent in 2011, far lower than in 2009, when it hovered at 7 percent. Get your VantageScore® credit score.

Source: View the complete VantageScore Solutions 2011 Annual Validation study. VantageScore® is owned by VantageScore Solutions, LLC.

Published: April 23, 2012 by Guest Contributor
