All posts by Guest Contributor


By: Kyle Aiman Let’s face it, debt collectors often get a bad rap. Sure, some of it is deserved, but the majority of the nation’s estimated 157,000 collectors strive to do their job in a way that will satisfy both their employer and the debtor. One way to improve collector/debtor interaction is for the collector to be trained in consumer credit and counseling. In a recent article published on Collectionsandcreditrisk.com, Trevor Carone, Vice President of Portfolio and Collection Solutions at Experian, explored the concept of using credit education to help debt collectors function more like advisors than accusers. If collectors gain a better understanding of consumer credit – how to read a credit report, how items may affect a credit score, how a score is compiled and what factors influence it – perhaps they can offer suggestions for improvement. Will providing past-due consumers with a plan to help improve their credit increase payments? Read the article and let us know what you think!

Published: October 10, 2012 by Guest Contributor

By: Mike Horrocks It has been over a year since the Occupy Wall Street crowd made its voice heard in Zuccotti Park. At the anniversary of that movement, there has been a lot of debate about whether the protest has fizzled away or is still alive and planning its next step. Either way, it cannot be ignored that it raised a voice in how consumers view their financial institutions and what actions they are willing to take, e.g., “Bank Transfer Day.” In today’s market, customer risk management must be balanced with retention strategies. For example, here at Experian we value the voice of our clients and prospects, and I personally lead our win/loss analysis efforts. The feedback we get from our customers is priceless. In a recent American Banker article, some great examples were given of how tuning into the voice of the consumer can turn into new business and an expanded market footprint. Some consumers, however, will do their talking by looking at other financial institutions or by slowly (or maybe rapidly) using your institution’s services less and less. Technology Credit Union saw great results when it used retention triggers based on credit data to get back out in front of its members with meaningful offers. Maximizing the impact of internal data and spotting the customer-focused trends that can help with retention is an even better approach, since that data is captured at the “account on-us” level and can help stop risks before the customer starts to walk out the door. Phil Knight, the founder of Nike, once said, “My job is to listen to ideas.” Your customers have some of the best ideas on how they can be retained and not lost to competitors. So think about how you can listen better to the voice and the actions of your customers before they leave and take a walk in the park.

Published: October 4, 2012 by Guest Contributor

By: Maria Moynihan State and local governments responsible for growth may be missing out on an immediate and sizeable revenue opportunity if their data and processes for collections are not up to par. The Experian Public Sector team recently partnered with Governing Magazine to conduct a nationwide survey of state and local government professionals to better understand how their debt collections efforts are helping to address current revenue gaps. Interestingly enough, 81% stated that the economic climate has negatively impacted their collections efforts, either through reduced staff or reduced budgets, while 30% of respondents are actively looking for new technologies to aid their debt collections processes. New technologies are always a worthwhile investment, and operational efficiencies will ultimately ensue, but those government organizations that couple this investment with improved data and analytics are even better positioned to optimize collections processes and benefit from growing revenue streams. No longer does the public sector need to lag behind the private sector in debt recovery. With the total outstanding debt among the 50 states reaching approximately $631 billion, why delay? Check out Experian's guide to improving debt collections efforts in the public sector. What is your agency doing to capitalize on revenue from overdue obligations?

Published: October 3, 2012 by Guest Contributor

By: Teri Tassara Negative equity, or owing more on your home than it is worth, has become an all too common theme in the past few years. According to CoreLogic, 11 million consumers are underwater, representing 1 out of 4 homeowners in the nation. The irony is that with mortgage rates remaining at historic lows, the consumers who could benefit the most from refinancing can’t qualify because of their negative equity. The Mortgage Bankers Association recently reported that approximately 74% of home loan volume was mortgage refinances in 2Q 2012. Consumers who have been able to refinance to take advantage of the low interest rates already have, some even several times over. But there is a segment of underwater consumers who are paying more than their scheduled amount in order to qualify for refinancing, which translates to growth opportunity in mortgage loan volume. In an Experian analysis of mortgages, actual payment amount (the amount the consumer paid the prior month) was reported on about 65% of open mortgages. Where the actual payment was reported, the study found that 82% of consumers pay within $100 of their scheduled payment and 18% pay more than their scheduled amount. Actual payment amount information as reported on the credit file, used in combination with other analytics, can be a powerful tool to identify viable candidates for a mortgage refinance versus those who may benefit from a loan modification offer. A consumer methodically paying more than the scheduled payment amount may be trying to qualify for refinancing. Conversely, a consumer who is not able to pay the scheduled payment amount may be an ideal candidate for a loan modification program. Either way, actual payment amount can provide insight that creates a favorable situation for both the consumer and the lender, mitigating additional and unnecessary risk while providing growth opportunity. Find other related blog posts on credit and housing market trends.
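The segmentation logic described above is simple enough to sketch. Below is a minimal, hypothetical Python example of how a lender might bucket accounts using reported actual versus scheduled payment amounts; the field names and the $100 tolerance are assumptions drawn from the figures in this post, not a description of Experian’s actual tooling.

```python
# Hypothetical sketch: segment mortgage accounts by reported actual vs. scheduled payment.
# Field names and thresholds are illustrative assumptions, not Experian product logic.

def classify_account(actual_payment, scheduled_payment, tolerance=100):
    """Return a candidate segment for one mortgage account."""
    if actual_payment is None:
        return "actual payment not reported"          # ~35% of open mortgages in the study
    if actual_payment > scheduled_payment + tolerance:
        return "possible refinance candidate"          # methodically paying more than scheduled
    if actual_payment < scheduled_payment - tolerance:
        return "possible loan modification candidate"  # not keeping up with the scheduled amount
    return "paying as scheduled"

accounts = [
    {"id": 1, "actual": 1650, "scheduled": 1400},
    {"id": 2, "actual": 1150, "scheduled": 1400},
    {"id": 3, "actual": 1425, "scheduled": 1400},
    {"id": 4, "actual": None, "scheduled": 1400},
]

for acct in accounts:
    print(acct["id"], classify_account(acct["actual"], acct["scheduled"]))
```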

Published: September 20, 2012 by Guest Contributor

By: Kyle Aiman For more than 20 years, creditors have been using scores in their lending operations. They use risk models such as the VantageScore credit score, FICO or others to predict what kind of risk to expect before making credit-granting decisions. Risk models like these do a great job of separating the “goods” from the “bads.” Debt recovery models are built differently: their job is to predict who is likely to pay once they have already become delinquent. While recovery models have not been around as long as risk models, recent improvements in analytics are producing great results. In fact, the latest generation of recovery models can even predict who will pay the most. Hopefully, you are not using a risk model in your debt collection operations. If you are, or if you are not using a model at all, here are five reasons to start using a recovery model:
1. Increase debt recovery rates – Segmenting and prioritizing your portfolios will help increase recovery rates by allowing you to place emphasis on those accounts most likely to pay.
2. Manage and reduce debt recovery costs – Develop treatment strategies of varying costs and apply them appropriately. Do not waste time and money on uncollectible accounts.
3. Outsource accounts to third-party collection agencies – If you use outside agencies, use recovery scoring to identify accounts best suited for assignment; take the cream off the top to keep in house.
4. Send accounts to legal – Identify accounts that would be better served by a legal strategy versus spending time and money on traditional treatments.
5. Price accounts appropriately for sale – If you are in a position to sell accounts, recovery scoring can help you develop a pricing strategy based on expected collectibility.
What recovery scoring tools are you using to optimize your company's debt collection efforts? Feel free to ask questions or share your thoughts below. VantageScore is a registered trademark of VantageScore Solutions, LLC.
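To make the segmentation idea concrete, here is a minimal, hypothetical Python sketch of routing collection accounts to treatment strategies by recovery score. The score bands, strategies and costs are invented for illustration; an actual recovery model and strategy would supply their own cutoffs.

```python
# Hypothetical sketch: route collection accounts to treatments by recovery score.
# Score bands and strategies are illustrative only.

def assign_treatment(recovery_score):
    """Map a recovery score (higher = more likely to pay) to a treatment strategy."""
    if recovery_score >= 700:
        return "in-house priority calling"       # keep the 'cream' in house
    if recovery_score >= 500:
        return "outsource to collection agency"  # best suited for third-party assignment
    if recovery_score >= 300:
        return "legal strategy review"           # traditional treatments unlikely to pay off
    return "candidate for sale / minimal effort"

portfolio = [
    {"account": "A-101", "balance": 2400, "recovery_score": 745},
    {"account": "A-102", "balance": 900,  "recovery_score": 610},
    {"account": "A-103", "balance": 5200, "recovery_score": 180},
]

# Work the most collectible accounts first.
for acct in sorted(portfolio, key=lambda a: a["recovery_score"], reverse=True):
    print(acct["account"], assign_treatment(acct["recovery_score"]))
```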

Published: September 10, 2012 by Guest Contributor

By: Uzma Aziz They say “a bird in the hand is better than two in the bush,” and the same can be said about customers in a portfolio. Studies have shown time and again that the cost of acquiring a new financial services customer is many times higher than the cost of keeping an existing one. Retention has always been an integral part of portfolio management, and with the market finally on an upward trajectory, there is all the more need to hold on to profitable customers. Experts at CEB TowerGroup are forecasting a compound annual growth rate of over 12% for new credit cards alone through 2015. Combine that with a growing market of better-informed and savvy customers, and you have a very good reason to be diligent about retaining your best ones. Different-sized institutions also have varying degrees of success. According to a study by J.D. Power & Associates, in 2011, 9.6% of customers overall indicated they switched their primary bank account during the past year, up from 8.7% a year earlier. Smaller banks and credit unions did see drastically lower attrition than in prior years: just 0.9% on average, down from 8.8% a year earlier. For large, midsize and regional banks, unfortunately, it was a different story, with attrition rates of 10 to 11.3%. It gets even more complex when you drill down to a specific type of financial product, such as a credit card. Experian’s own analysis of credit card customer retention shows that while the majority of customers are loyal, a good percentage attrite actively, that is, close their accounts and open new ones, while a bigger percentage are silent attriters, those who do not close accounts but pay down balances and move their spend elsewhere. Obviously, attrition is a continual topic that needs to be addressed, but to minimize it you first need to understand the root cause. Poor service seems to be the leading factor: one study* showed that 31% of consumers who switched banks did so because of poor service, followed by product features and finding a better offer elsewhere. So what are financial institutions doing to retain their profitable customers? There are many tools, ranging from simple to complex: fee and interest waivers, line increases, rewards and call center priority, to name a few. But the key to successful customer retention is to look within the portfolio, combining both internal and external information. This encompasses both proactive and reactive strategies. Proactive strategies involve identifying customer behaviors that lead to balance or account attrition and taking action before the customer does. This includes monitoring changes over time and identifying thresholds for action, as well as segmentation and modeling to identify problem accounts. Reactive strategies, as the name suggests, respond after a customer has already taken an action that will lead to attrition; these include monitoring portfolios for new inquiries and account openings, or responding to customer complaints. In some cases this may be too little, too late, but in others a reactive response may be what saves the customer relationship.
Whichever strategy or combination you choose, the key points to remember to retain customers and keep them happy are:
- Understand your current customers’ perceptions about credit, as they may have changed; customers are likely to be more educated, and the most profitable ones expect only the best customer service experience
- Be approachable and personal – meet customer needs, or better yet, anticipate those needs, focusing on loyalty and customer experience
- You don’t need to “give away the farm” – sometimes a partial fee waiver works
* Global Consumer Banking Survey 2011, by Ernst & Young
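As a rough illustration of the proactive side described above, the sketch below flags potential “silent attriters” when balances and spend both decline sharply over consecutive months. The thresholds and field names are assumptions for illustration, not Experian trigger definitions.

```python
# Hypothetical sketch: flag possible silent attrition from month-over-month card activity.
# Thresholds are illustrative; a real strategy would come from segmentation and modeling.

def is_silent_attrition_risk(monthly_balances, monthly_spend,
                             balance_drop=0.30, spend_drop=0.50, window=3):
    """Flag accounts whose balance and spend both fall sharply over the last `window` months."""
    if len(monthly_balances) < window + 1 or len(monthly_spend) < window + 1:
        return False
    start_bal, end_bal = monthly_balances[-window - 1], monthly_balances[-1]
    start_spend, end_spend = monthly_spend[-window - 1], monthly_spend[-1]
    if start_bal == 0 or start_spend == 0:
        return False
    balance_decline = 1 - end_bal / start_bal
    spend_decline = 1 - end_spend / start_spend
    return balance_decline >= balance_drop and spend_decline >= spend_drop

# Example: balance paid down and spend migrating elsewhere -> retention offer candidate.
balances = [5200, 4800, 3900, 2600, 1100]
spend = [1400, 1350, 900, 500, 300]
print(is_silent_attrition_risk(balances, spend))  # True
```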

Published: August 20, 2012 by Guest Contributor

By: Ken Pruett The great thing about being in front of customers is that you learn something from every meeting. Over the years I have figured out that there is typically no “right” or “wrong” way to do something. Even in the world of fraud and compliance, I find that each client's approach varies greatly. It typically comes down to the business need in combination with meeting some sort of compliance obligation, such as the Red Flags Rule or the Patriot Act. For example, the trend we see in the prepaid space is that basic verification of common identity elements is really the only need. The one exception might be the use of a few key fraud indicators, like a deceased SSN. The thought process here is that the fraud risk is relatively low compared with someone opening a credit card account. So in this space, pass rates drive the business objective of getting customers through the application process as quickly and easily as possible while meeting basic compliance obligations. In the world of credit, fraud prevention is front and center and plays a key role in the application process. Our most conservative customers often use the traditional bureau alerts to drive fraud prevention. This typically creates high manual review rates, but they want to be very customer focused and are therefore willing to take on the cost of these reviews to maintain that focus. The feedback we get is that these alerts often lead to a high number of false positives; examples of messages they may key off of are things like the SSN not being issued or the on-file inquiry address not matching. The trend in this space, however, is toward fraud scoring, where target review rates drive the score cut-offs and reviews typically run 5% or less. Compliance issues are often resolved by using some combination of the score and data matching. For example, a name and address mismatch does not necessarily mean the application will kick out for review. If the name, SSN and date of birth match and the score shows very little chance of fraud, the application can be passed through in an automated fashion. This risk-based approach is what we typically feel is a best practice. It moves institutions away from looking at the binary results of individual messages, like the SSN alerts mentioned above. The bottom line is that everyone seems to do things differently, but the key is that each company takes compliance and fraud prevention seriously. That is why meeting with our customers is such an enjoyable part of my job.
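The risk-based approach described above can be sketched as a simple decision function. The match flags, score scale and cutoff below are hypothetical; in practice the cutoff would be tuned to the target review rate (for example, roughly 5% of applications).

```python
# Hypothetical sketch of a risk-based identity/fraud decision.
# Score scale, cutoff and match flags are illustrative only.

def fraud_decision(name_match, ssn_match, dob_match, address_match,
                   fraud_score, review_cutoff=0.85):
    """Return 'pass' or 'manual review' for an application.

    fraud_score: 0.0 (low fraud risk) to 1.0 (high fraud risk), hypothetical scale.
    """
    # A single mismatched element (e.g., address) does not force a review
    # if the core identity elements match and the score shows little fraud risk.
    core_identity_ok = name_match and ssn_match and dob_match
    if core_identity_ok and fraud_score < review_cutoff:
        return "pass"           # automated pass-through
    return "manual review"      # risk-based referral instead of binary alert kick-outs

# Name/SSN/DOB match, address mismatch, low score -> passes automatically.
print(fraud_decision(True, True, True, False, fraud_score=0.12))
# High score forces review regardless of matches.
print(fraud_decision(True, True, True, True, fraud_score=0.93))
```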

Published: August 19, 2012 by Guest Contributor

Join us Sept. 12-13 in New York City for the Finovate conference to check out the best new innovations in financial and banking technology from a mixture of leading established companies and startups. As part of Finovate's signature demo-only format, Steve Wagner, President, Consumer Information Services, and Michele Pearson, Vice President of Marketing, Consumer Information Services, from Experian will demonstrate how providers and lead generators can access a powerful new marketing tool to:
- Drive new traffic
- Lower online customer acquisition costs
- Generate high-quality, credit-qualified leads
- Proactively utilize individual consumer credit data online in real time
Networking sessions will follow the company demos each day, giving attendees the chance to speak directly with the Experian innovators they saw on stage. Finovate 2011 had more than 1,000 financial institution executives, venture capitalists, members of the press and entrepreneurs in attendance, and the organizers are expecting an even larger audience at the 2012 event. We look forward to seeing you at Finovate!

Published: August 16, 2012 by Guest Contributor

By: Mike Horrocks In 1950 Alice Stewart, a British medical professor, embarked on a study to identify what was causing so many cases of cancer in children. Her broad study covered many aspects of the lives of both child and mother, and the final result was that a large spike in the number of children struck with cancer came from mothers who were x-rayed during pregnancy. The data was clear and statistically beyond reproach, and yet for nearly 25 more years the practice of using x-rays during pregnancy continued. Why didn't doctors stop using x-rays? They clearly thought the benefits outweighed the risk, and they also had a hard time accepting Dr. Stewart’s study. So how did Dr. Stewart gain more acceptance of the study? She had a colleague, George Kneale, whose sole job was to disprove her study. Only by challenging her theories could she gain the confidence to prove them right. I believe that approach of challenging the outcome carries over to the practice of risk management as well, as we look to avoid or exploit the next risk around the corner. So how can we as risk managers find the next trends in risk management? I don’t pretend to have all the answers, but here are some great ideas:
- Analyze your analysis. Are you drawing conclusions from obvious data sources or a rather simplified hypothesis? If you are, you can bet your competitors are too. Look for data, tools and trends that can enrich your analysis. In a recent discussion with a lending institution that has a relationship with a logistics firm, they said that the insights they get from the logistics experts have been spot-on in terms of regional business indicators and lending risks.
- Stop thinking about the next 90 days and start thinking about the next 9 quarters. Don’t get me wrong, the next 90 days are vital, but what is coming in the next 2+ years is critical.
- Expand the discussion around risk with a holistic risk team. Seek out people with different backgrounds, different ways of thinking and different experiences as part of your risk management team. The broader the coverage of disciplines, the more likely opportunities will be uncovered.
Taking these steps may introduce some interesting discussions, even to the point of conflict in some meetings. However, when we look back at Dr. Stewart and Mr. Kneale, their conflicts brought great results and allowed for some of the best thinking at the time. So go ahead, open yourself and your organization to a little conflict and let’s discover the best thinking in risk management.

Published: August 15, 2012 by Guest Contributor

By: Teri Tassara The super-prime and prime prospect population has become saturated by intense focus and competition among lenders, requiring lenders to look outside their safety net for profitable growth. This leads to the question, “Where are the growth opportunities in a post-recession world?” Interestingly, the most active and positive movement in consumer credit is among what we are terming “emerging prime” consumers, represented by a VantageScore® of 701-800, or letter grade “C.” We’ve seen that of those consumers classified as VantageScore C in 3Q 2006, 32% had migrated to a VantageScore B and another 4% to an A grade over a five-year window. And as more of the emerging prime consumers rebuild credit and recover from the economic downturn, demand for credit is increasing once again. Case in point: auto lending to the “subprime” population is expected to increase the most, fueled by consumer demand. Lenders striving for market advantage are looking to find the next sweet spot ahead of the competition. Fortunately, lenders can apply sophisticated and advanced analytical methods to confidently segment emerging prime consumers into the appropriate risk classification and predict their responsiveness to a variety of consumer loans. Here are some recommended steps to identifying consumers most likely to add significant value to a lender’s portfolio:
- Identify emerging prime consumers
- Understand how prospects are using credit
- Apply the most predictive credit attributes and scores for risk assessment
- Understand responsiveness level
The stops and starts that have shaped this recovery have contributed to years of slow growth and increased competition for the same “super prime” consumers. However, these post-recession market conditions are gradually paving the way to profitable growth opportunities. With advanced science, lenders can pair caution with a profitable growth strategy, applying greater rigor and discipline in their decision-making.
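A simple way to see this kind of score migration is a transition matrix between score bands at two points in time. The sketch below uses pandas with invented data; the post cites VantageScore grade “C” as 701-800, and the other band edges shown are assumptions for illustration.

```python
# Hypothetical sketch: score-band migration between two snapshots (e.g., 3Q 2006 vs. 3Q 2011).
# Scores are invented; band edges other than C = 701-800 are assumptions.
import pandas as pd

def to_band(score):
    """Bucket a score (501-990 scale assumed) into a letter grade."""
    if score >= 901: return "A"
    if score >= 801: return "B"
    if score >= 701: return "C"
    if score >= 601: return "D"
    return "F"

snapshots = pd.DataFrame({
    "consumer_id": [1, 2, 3, 4, 5, 6],
    "score_t0":    [720, 760, 705, 790, 745, 710],
    "score_t1":    [812, 905, 690, 830, 755, 718],
})

snapshots["band_t0"] = snapshots["score_t0"].apply(to_band)
snapshots["band_t1"] = snapshots["score_t1"].apply(to_band)

# Row-normalized transition matrix: share of each starting band ending in each band.
migration = pd.crosstab(snapshots["band_t0"], snapshots["band_t1"], normalize="index")
print(migration.round(2))
```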

Published: August 10, 2012 by Guest Contributor

By: Shannon Lois These are challenging times for large financial institutions. Still feeling the impact of the financial crisis of 2007, the banking industry must endure increased oversight, declining margins and fierce competition, all in a lackluster economy. Financial institutions are especially subject to closer regulatory scrutiny. As part of this stepped-up oversight, the Federal Reserve Board (FRB) conducts annual assessments, including “stress tests,” of the capital planning processes and capital adequacy of bank holding companies (BHCs) to ensure that these institutions can continue operations in the event of economic distress. The Fed expects banks to have credible plans, evaluated across a range of criteria, showing that they have adequate capital to continue to lend even under adverse economic conditions. Minimum capital standards are governed both by the FRB and under Basel III. The Basel Committee established the Basel accords to provide revised safeguards following the financial crisis, in an effort to ensure that banks met capital requirements and were not overly leveraged. Using input data provided by the BHCs themselves, FRB analysts have developed a stress scenario methodology for banks to follow. These models generate loss estimates and post-stress capital ratios. The Comprehensive Capital Analysis and Review (CCAR) includes a somewhat unnerving hypothetical scenario that depicts a severe recession in the U.S. economy, with an unemployment rate of 13%, a 50% drop in equity prices and a 21% decline in housing prices. Stress testing is intended to measure how well a bank could endure this gloomy picture. Between meeting the compliance requirements of both Basel III and CCAR, financial institutions commit sizeable time and resources to administrative tasks that offer few easily quantifiable returns. Nevertheless, in addition to ensuring they don’t suddenly discover themselves in a trillion-dollar hole, these audit responsibilities do offer some other benefits and considerations.
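At a very high level, the arithmetic behind a post-stress capital ratio can be sketched as below: projected scenario losses and revenues are applied to the starting capital position and divided by risk-weighted assets. This is a simplified, hypothetical illustration; the actual CCAR models, inputs and regulatory capital definitions are far more detailed.

```python
# Hypothetical, simplified sketch of a post-stress capital ratio calculation.
# Figures are invented; actual CCAR/Basel III definitions are far more detailed.

def post_stress_ratio(tier1_common, rwa, projected_losses, ppnr, taxes=0.0):
    """Post-stress Tier 1 common ratio after applying scenario losses and revenue.

    tier1_common     : starting Tier 1 common capital
    rwa              : projected risk-weighted assets under the scenario
    projected_losses : cumulative loan and trading losses over the stress horizon
    ppnr             : pre-provision net revenue earned over the same horizon
    """
    ending_capital = tier1_common + ppnr - projected_losses - taxes
    return ending_capital / rwa

# Example (all figures in $ billions, hypothetical):
ratio = post_stress_ratio(tier1_common=120.0, rwa=1100.0,
                          projected_losses=45.0, ppnr=18.0)
print(f"Post-stress Tier 1 common ratio: {ratio:.1%}")  # ~8.5%
```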

Published: August 1, 2012 by Guest Contributor

By: Stacy Schulman Earlier this week the CFPB announced a final rule addressing its role in supervising certain credit reporting agencies, including Experian and others that are large market participants in the industry. To view the original content, see Experian and the CFPB - Both Committed to Helping Consumers. During a field hearing in Detroit, CFPB Director Richard Cordray spoke about a new regulatory focus on the accuracy of the information received by the credit reporting companies, the role they play in assembling and maintaining that information, and the process available to consumers for correcting errors. We look forward to working with the CFPB on these important priorities. To read more about how Experian prioritizes these information essentials for consumers, clients and shareholders, visit the Experian News blog. Learn more about Experian's view of the Consumer Financial Protection Bureau.
___________________
Original content provided by: Tony Hadley, Senior Vice President of Government Affairs and Public Policy
About Tony: Tony Hadley is Senior Vice President of Government Affairs and Public Policy for Experian. He leads the corporation’s legislative, regulatory and policy programs relating to consumer reporting, consumer finance, direct and digital marketing, e-commerce, financial education and data protection. Hadley leads Experian’s legislative and regulatory efforts with a number of trade groups and alliances, including the American Financial Services Association, the Direct Marketing Association, the Consumer Data Industry Association, the U.S. Chamber of Commerce and the Interactive Advertising Bureau. Hadley is Chairman of the National Business Coalition on E-commerce and Privacy.

Published: July 18, 2012 by Guest Contributor

By: Mike Horrocks This week, several key financial institutions will be submitting their “living wills” to Washington as part of the Dodd-Frank legislation. I have some empathy for how those institutions will feel as they submit these living wills. I don't think that anyone would say writing a living will is fun. I remember when my wife and I felt compelled to have one in place as we realized that we did not want to have any questions unanswered for our family. For those not familiar with the concept of the living will, I thought I would first look at the more widely known medical description. The Mayo Clinic describes living wills as follows: “Living wills and other advance directives describe your preferences regarding treatment if you're faced with a serious accident or illness. These legal documents speak for you when you're not able to speak for yourself — for instance, if you're in a coma.” Now imagine a bank in a coma. I appreciate the fact that these living wills are taking place, but pulling back my business law books, I seem to recall that one of the benefits of a corporation versus, say, a sole proprietorship is that the corporation can basically be immortal or even eternal. In fact, the Dictionary.com reference calls out that a corporation has “a continuous existence independent of the existences of its members.” So now imagine a bank eternally in a coma. Now, I cannot avoid all of those unexpected risks that will come up in my personal life, like an act of God, that may put me into a coma and invoke my living will, but I can do things voluntarily to make sure that I don't visit the emergency room any time soon. I can exercise, eat right, control my stress and take other healthy steps, and in fact I meet with a health coach to monitor and track these things. Banks can take those same steps too. They can stay operationally fit, lend right, and monitor the stress in their portfolios. They can have their health plans in place and have a personal trainer to help them stay fit (and maybe even push them to levels of fitness they did not think they could reach). Now imagine a fit, strong bank. So as printers churn, inboxes get filled, and regulators read through thousands of pages of bank living wills, let's think of the gym coach or personal trainer that pushed us to improve, and think about how we can be healthy and fit and avoid the not so pleasant alternatives of addressing a financial coma.

Published: July 2, 2012 by Guest Contributor

By: Joel Pruis From a score perspective, we have established the high-level standards and reporting needed to stay on top of the resulting decisions. But there is a lot of further detail that should be considered and further segmentation that must be developed or maintained.
Auto Decisioning
A common misperception around auto-decisioning and the use of scorecards is that it is an all-or-nothing proposition. More specifically, that if you use scorecards, you have to make the decision entirely based upon the score. That is simply not the case. I have done consulting work in the aftermath of decisioning strategies built on this misperception, and the results are not pretty. Overall, the highest percentage of auto-decisioning that I have witnessed has been in the 25 – 30% range, and the emphasis is on the segment: that segment is typically the lower-dollar requests, say $50,000 or less, and the percentage is not measured across the entire application population. This leads into the discussion of the various segments and the decisioning strategy for each.
One other comment around auto-decisioning. The definition used in this blog is a systematic decision made without human intervention. I have heard comments such as “competitors are auto-decisioning up to $1,000,000.” The reality behind such comments is that the institution is granting loan authority to an individual to approve an application should it meet particular financial ratios and other criteria. The human intervention comes in verifying that the information has been captured correctly and that the financial ratios make sense relative to the final result. That last step is what disqualifies the process as “auto-decisioning.” The individual is given the responsibility to ensure data quality and to ensure nothing else is odd or might disqualify the application from approval or declination. Once a human eye is looking at an application, judgment comes into the picture and we introduce the potential for inconsistencies and/or a longer time to render the decision. Auto-decisioning is just that: automatic. It is a yes/no decision based upon objective factors that, if met, allow the decision to be made. Other factors, if not included in the decision strategy, are not considered.
So, my fellow credit professionals, should you hear someone say they are auto-decisioning a high percentage of their applications or a high dollar amount per application, challenge, question and dig deeper. Treat it like the fishing story: “I caught a fish THIS BIG.”
No financials segment
This is the highest-volume, lowest total dollar production segment of any business banking/small business product set. We discussed the use of financials in the prior blog on application requirements, so I will not repeat that discussion here; our focus is the decisioning of these applications. Using score and application characteristics as the primary data source, this segment is the optimal segment for auto-decisioning: it speeds the decision process and provides the greatest amount of consistency in the decisions rendered. Two key areas for this segment are risk premiums and scorecard validations. The risk premium is important because you are going to accept a higher level of losses for the sake of efficiencies in the underwriting and processing of the application. The end result is lower operational costs and relatively higher credit losses, but the end yield on this segment still meets the required, yet practical, thresholds for return.
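As a concrete illustration of what a “systematic decision without human intervention” can look like for this segment, here is a minimal, hypothetical Python sketch. The $50,000 boundary echoes the example above; the score scale, cutoffs and policy checks are invented for illustration and are not a recommended strategy.

```python
# Hypothetical sketch: systematic auto-decision for a low-dollar, no-financials segment.
# Score scale, cutoffs and policy rules are illustrative only.

AUTO_DECISION_LIMIT = 50_000   # segment boundary echoed from the post
APPROVE_CUTOFF = 220           # hypothetical custom-scorecard scale
DECLINE_CUTOFF = 180

def auto_decision(requested_amount, score, policy_flags):
    """Return 'approve', 'decline' or 'manual review' with no human intervention."""
    if requested_amount > AUTO_DECISION_LIMIT:
        return "manual review"          # outside the auto-decision segment
    if policy_flags:                    # e.g., fraud alert, prior charge-off
        return "manual review"
    if score >= APPROVE_CUTOFF:
        return "approve"
    if score < DECLINE_CUTOFF:
        return "decline"
    return "manual review"              # grey zone between cutoffs

print(auto_decision(35_000, 240, policy_flags=[]))   # approve
print(auto_decision(35_000, 200, policy_flags=[]))   # manual review (grey zone)
print(auto_decision(75_000, 260, policy_flags=[]))   # manual review (over the limit)
```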
The one thing that I will repeat from a prior blog is that you may request financials after the initial review, but the frequency should be low and should also be monitored. The request for financials should not be a “belt and suspenders” approach. If you know what the financials are likely to show, don’t request them; they are unnecessary. You are probably right, and collecting the financials will only serve to lengthen the response time, frustrate everyone involved in the process and not change the expected result.
Financials segment
This is the relatively lower unit volume but higher dollar volume segment. It will likely have no auto-decisioning, as the review of financials typically mandates a judgmental review. From an operational perspective, these are high-dollar requests, so the manual review does not push this segment into a losing proposition. From a potential operational-lift perspective, the ability to drive a higher volume of applications into auto-decisioning simply is not available, as we are talking about probably less than 40% (if not fewer) of all applications in this segment.
In this segment, consistency becomes more difficult, as the underwriter tends to want to put his or her own approach on the deal. Standardization of the analysis approach (at least initially) is critical for this segment. Consistency in the underwriting and the various criteria allows for greater analysis to determine where issues are developing or where we are realizing the greatest success. My recommended approach is to standardize (via automation in the origination platform) the various calculations in a manner that generates the most conservative result. Bluntly put, my approach was to make the deal look as ugly as possible; if it still passed the various criteria, no additional work was needed, nor was any detailed explanation required of how I justified the deal. Only if it did not meet the criteria using the most conservative approach would I need to do any work, and only if it was truly going to make a difference. Basic characteristics in this segment include business cash flow, personal debt-to-income, global cash flow and leverage; others may be added on a case-by-case basis (a simple sketch of this conservative screen appears below).
What about the score? If I am doing so much judgmental underwriting, why calculate the score in this segment? In a nutshell, to act as the risk rating methodology for the portfolio approach. Even with the judgmental approach, we do not want to fall into the trap of thinking we will be able to monitor this segment proactively enough to justify the risk rating at any point after the loan is booked. We have been focusing on the origination process in this blog series, but I need to point out that since we are not going to do a significant amount of financial statement monitoring in the small business segment, we need to begin to move away from the 1 – 8 (or 9 or 10 or whatever) risk rating method for this segment. We cannot be granular enough with that rating system, nor can we constantly stay on top of the changing risk levels of individual clients. But I am going to save the portfolio management area for a future blog.
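Here is that conservative screen expressed as a small, hypothetical Python sketch. The haircut, ratio definitions and thresholds are invented for illustration only and are not underwriting guidance.

```python
# Hypothetical sketch: a deliberately conservative financials screen.
# Haircuts, ratios and thresholds are illustrative only, not underwriting guidance.

def conservative_screen(business_cash_flow, personal_income, personal_debt_service,
                        proposed_debt_service, total_liabilities, tangible_net_worth,
                        cash_flow_haircut=0.25):
    """Apply conservative assumptions; pass only if every criterion is met."""
    # Haircut business cash flow to make the deal look as 'ugly' as possible.
    stressed_cash_flow = business_cash_flow * (1 - cash_flow_haircut)

    global_cash_flow = stressed_cash_flow + personal_income
    global_debt_service = proposed_debt_service + personal_debt_service

    global_dscr = global_cash_flow / global_debt_service
    personal_dti = personal_debt_service / personal_income
    leverage = total_liabilities / tangible_net_worth

    checks = {
        "global DSCR >= 1.25": global_dscr >= 1.25,
        "personal DTI <= 0.43": personal_dti <= 0.43,
        "leverage <= 4.0": leverage <= 4.0,
    }
    return all(checks.values()), checks

passed, detail = conservative_screen(
    business_cash_flow=180_000, personal_income=95_000, personal_debt_service=30_000,
    proposed_debt_service=70_000, total_liabilities=600_000, tangible_net_worth=200_000)
print(passed, detail)
```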
Regardless of the segment, please keep in mind that we need to be able to access the full detail of the information captured during the origination process, along with the subsequent payment performance. As you are capturing the data, keep in mind the ability to:
- Access this data for purposes of analysis
- Connect the data from origination to the payment performance data to effectively validate the scorecard and the underwriting/decisioning strategies (a small validation sketch follows at the end of this post)
- Dive into the details to find the root cause of a performance problem or success
The topic of decisioning strategies is broad, so please let me know if you have any specific topics you would like addressed or questions that we might post for responses from the industry.
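As referenced in the list above, here is a small, hypothetical sketch of that origination-to-performance connection: joining booked applications to later payment performance and computing a bad rate by score band, the basic building block of a scorecard validation. Field names, bands and the “bad” definition are assumptions for illustration.

```python
# Hypothetical sketch: connect origination data to payment performance
# and compute bad rates by score band (a basic scorecard validation view).
import pandas as pd

originations = pd.DataFrame({
    "app_id": [1, 2, 3, 4, 5, 6],
    "score":  [215, 242, 188, 230, 201, 195],
})

performance = pd.DataFrame({
    "app_id":     [1, 2, 3, 4, 5, 6],
    "ever_90dpd": [0, 0, 1, 0, 1, 0],   # 1 = went 90+ days past due ("bad")
})

df = originations.merge(performance, on="app_id", how="inner")
df["score_band"] = pd.cut(df["score"], bins=[0, 190, 210, 230, 999],
                          labels=["<=190", "191-210", "211-230", "231+"])

validation = (df.groupby("score_band", observed=False)["ever_90dpd"]
                .agg(accounts="count", bad_rate="mean"))
print(validation)
```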

Published: June 29, 2012 by Guest Contributor

Recently we released a white paper that emphasizes the need for better, more granular indicators of local home-market conditions and borrower home equity, with a very interesting new finding on leading indicators in local-area credit statistics.
Click here to download the white paper: “Home-equity indicators with new credit data methods for improved mortgage risk analytics,” an Experian white paper, April 2012.
In the run-up to the U.S. housing downturn and financial crisis, perhaps the greatest single risk-management shortfall was poorly predicted home prices and borrower home equity. This paper describes new improvements in housing market indicators derived from local-area credit and real-estate information. True housing markets are very local, and until recently, local real-estate data have not been systematically available and interpreted for broad use in modeling and analytics. Local-area credit data, similarly, are relatively new, and their potential for new indicators of housing market conditions is studied here using Experian’s Premier Aggregated Credit Statistics℠. Several examples provide insights into home-equity indicators for improved mortgage models, predictions, strategies and combined LTV measurement. The paper finds that for existing mortgages evaluated with current combined LTV and borrower credit score, local-area credit statistics are an even stronger add-on default predictor than borrower credit attributes. Click here to download the white paper.
Authors: John Straka and Chuck Robida, Experian; Michael Sklarz, Collateral Analytics
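For readers unfamiliar with the combined LTV measure the paper discusses, here is a minimal, hypothetical sketch: outstanding first- and junior-lien balances divided by an estimate of current value, where the value is updated from origination using a local home price index. The figures and the simple indexing approach are illustrative only, not the paper’s methodology.

```python
# Hypothetical sketch: updated combined loan-to-value (CLTV) using a local price index.
# Figures and the simple indexing approach are illustrative, not the paper's methodology.

def updated_cltv(first_lien_balance, junior_lien_balances,
                 value_at_origination, hpi_at_origination, hpi_current):
    """Estimate current CLTV by indexing the origination value with a local HPI."""
    estimated_value = value_at_origination * (hpi_current / hpi_at_origination)
    total_debt = first_lien_balance + sum(junior_lien_balances)
    return total_debt / estimated_value

cltv = updated_cltv(first_lien_balance=260_000, junior_lien_balances=[40_000],
                    value_at_origination=350_000, hpi_at_origination=180.0,
                    hpi_current=135.0)
print(f"Estimated combined LTV: {cltv:.0%}")  # >100% indicates negative equity
```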

Published: June 22, 2012 by Guest Contributor
