By: Joel Pruis

From a score perspective, we have established the high-level standards and reporting needed to stay on top of the resulting decisions. But there is considerable further detail to be considered, and further segmentation that must be developed or maintained.

Auto-decisioning

A common misperception about auto-decisioning and the use of scorecards is that it is an all-or-nothing proposition: if you use scorecards, you must base the decision entirely on the score. That is simply not the case. I have done consulting work after decisioning strategies built on this misperception were deployed, and the results are not pretty. Overall, the highest percentage of auto-decisioning I have witnessed has been in the 25–30% range, and the emphasis is on the "segment." The segment is typically the lower-dollar requests, say $50,000 or less; it is not a percentage across the entire application population. This leads into the discussion of the various segments and the decisioning strategy for each.

One other comment on auto-decisioning. As used in this blog, the term means a systematic decision made without human intervention. I have heard comments such as "competitors are auto-decisioning up to $1,000,000." The reality behind such comments is that the institution has granted loan authority to an individual to approve an application should it meet particular financial ratios and other criteria. The human intervention comes from verifying that the information has been captured correctly and that the financial ratios make sense relative to the final result. That last point is what disqualifies it as "auto-decisioning": the individual is given the responsibility to ensure data quality and to ensure nothing else is odd or might disqualify the application from approval.
Once a human eye is on an application, judgment enters the picture, and we introduce the potential for inconsistency and longer decision times. Auto-decisioning is just that: automatic. It is a yes/no decision based on objective factors; if they are met, the decision is made. Factors not included in the decision strategy play no part. So, my fellow credit professionals, should you hear someone say they are auto-decisioning a high percentage of their applications, or applications of a high dollar amount, challenge, question, and dig deeper. Treat it like the fishing story: "I caught a fish THIS BIG."

No-financials segment

This is the highest-volume, lowest-total-dollar segment of any business banking/small business product set. We discussed the use of financials in the prior blog on application requirements, so I will not repeat that discussion here; our focus is the decisioning of these applications. Using score and application characteristics as the primary data sources, this segment is the optimal segment for auto-decisioning: it speeds the decision process and provides the greatest consistency in the decisions rendered. Two key areas for this segment are risk premiums and scorecard validation. The risk premium is important because you are accepting a higher level of losses for the sake of efficiency in underwriting and processing. The end result is lower operational cost and relatively higher credit losses, but the yield on this segment still meets the required, yet practical, thresholds for return. One thing I will repeat from a prior blog: you may request financials after the initial review, but the frequency should be low and should be monitored. Requesting financials should not be a "belt and suspenders" exercise. If you know what the financials are likely to show, don't request them; they are unnecessary.
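The yes/no mechanics described above can be sketched in a few lines. This is a minimal illustration, not any institution's actual model: the dollar limit, score cutoffs, and field names are all assumptions chosen for the example, and the only real point is structural, which is that high-dollar requests never auto-decision and mid-band scores fall out to a human.

```python
# Hypothetical sketch of a segmented auto-decision rule. All field names and
# cutoff values are illustrative assumptions, not a real scorecard policy.

AUTO_DECISION_LIMIT = 50_000   # low-dollar segment boundary (assumed)
APPROVE_SCORE = 220            # illustrative scorecard cutoffs
DECLINE_SCORE = 180

def decide(application: dict) -> str:
    """Return 'approve', 'decline', or 'refer' (manual review)."""
    # High-dollar requests always go to judgmental (manual) review.
    if application["amount"] > AUTO_DECISION_LIMIT:
        return "refer"
    # Within the segment, decide on objective factors only.
    if application["score"] >= APPROVE_SCORE:
        return "approve"
    if application["score"] < DECLINE_SCORE:
        return "decline"
    # Mid-band scores fall out of auto-decisioning to a human.
    return "refer"
```

Note that the function never returns a "maybe" inside the segment's approve/decline bands; anything the objective factors cannot settle is referred, which is exactly what keeps the auto-decisioned portion consistent.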
You are probably right, and collecting the financials will only lengthen the response time, frustrate everyone involved in the process, and not change the expected result.

Financials segment

This is the lower-unit-volume but higher-dollar-volume segment. It will likely have no auto-decisioning, as the review of financials typically mandates judgmental review. From an operational perspective, these are high-dollar deals, so manual review does not push this segment into a losing proposition. From a potential operational-lift perspective, the ability to drive a higher volume of applications into auto-decisioning simply is not available: we are talking about probably less than 40% of all applications in this segment. Here, consistency becomes more difficult, as each underwriter tends to put his or her own stamp on the deal. Standardizing the analysis approach (at least initially) is critical for this segment. Consistency in the underwriting criteria allows for greater analysis of where issues are developing or where we are realizing the greatest success. My recommended approach is to standardize (via automation in the origination platform) the various calculations in the manner that generates the most conservative result. Bluntly put, my approach was to make the deal look as ugly as possible; if it still passed the criteria, no additional work was needed, nor was any detailed explanation required to justify the request. Only if it failed the criteria under the most conservative approach would I need to do further work, and only if that work would truly make a difference. Basic characteristics in this segment include business cash flow, personal debt-to-income, global cash flow, and leverage; others may be added on a case-by-case basis.

What about the score?
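The "make it as ugly as possible" standardization above can be illustrated with a short sketch. The specific haircut, the record layout, and the policy thresholds are all assumptions invented for this example; the idea being demonstrated is only that every ratio is computed the conservative way once, in one place, so a deal that still passes needs no further justification.

```python
# Illustrative "most conservative" standardized calculations for the
# financials segment. Haircuts, field names, and thresholds are assumed
# values for illustration only.

def conservative_ratios(deal: dict) -> dict:
    # Haircut business cash flow (e.g., disallow questionable add-backs).
    cash_flow = deal["business_cash_flow"] * 0.90  # assumed 10% haircut
    # Global cash flow: business cash flow plus personal income,
    # measured against combined business and personal debt service.
    global_cf = cash_flow + deal["personal_income"]
    global_debt = deal["business_debt_service"] + deal["personal_debt_service"]
    return {
        "debt_service_coverage": global_cf / global_debt,
        "personal_dti": deal["personal_debt_service"] / deal["personal_income"],
        "leverage": deal["total_liabilities"] / deal["net_worth"],
    }

def passes_criteria(r: dict) -> bool:
    # Illustrative policy thresholds, not a recommendation.
    return (r["debt_service_coverage"] >= 1.25
            and r["personal_dti"] <= 0.40
            and r["leverage"] <= 3.0)
```

If `passes_criteria` returns True under these deliberately harsh inputs, the underwriter is done; only a failing deal earns a closer, judgmental look.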
If I am doing so much judgmental underwriting, why calculate the score in this segment? In a nutshell: to act as the risk-rating methodology for the portfolio approach. Even with the judgmental approach, we do not want to fall into the trap of thinking we can monitor this segment proactively enough to justify the risk rating at any point after the loan is booked. This blog series has focused on the origination process, but I need to point out that since we will not be doing a significant amount of financial statement monitoring in the small business segment, we need to begin to move away from the 1–8 (or 9, or 10, or whatever) risk-rating method for small business. We cannot be granular enough with that rating system, nor can we constantly stay on top of changing risk levels for individual clients. But I am going to save the portfolio management area for a future blog.

Regardless of the segment, please keep in mind that we need to be able to access the full detail of the information captured during the origination process, along with the subsequent payment performance. As you capture the data, preserve the ability to:

- Access this data for purposes of analysis
- Connect the origination data to the payment performance data to effectively validate the scorecard and the underwriting/decisioning strategies
- Dive into the details to find the root cause of a performance problem or success

The topic of decisioning strategies is broad, so please let me know if you have any specific topics you would like addressed, or questions that we might be able to post for responses from the industry.
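The second bullet, connecting origination data to performance data to validate the scorecard, reduces to a join plus a summary. Here is a minimal sketch under assumed record layouts (origination keyed by application ID with a score; performance keyed by the same ID with a bad/good flag); a real validation would use richer data and formal metrics, but the shape is the same.

```python
# Minimal scorecard-validation sketch: join origination records to
# subsequent payment performance and summarize the bad rate by score band.
# Record layouts are assumptions for illustration.

from collections import defaultdict

def bad_rate_by_band(originations, performance, band_size=20):
    """originations: {app_id: score}; performance: {app_id: is_bad (bool)}."""
    totals = defaultdict(int)
    bads = defaultdict(int)
    for app_id, score in originations.items():
        if app_id not in performance:
            continue  # no performance observed yet; exclude from validation
        band = (score // band_size) * band_size
        totals[band] += 1
        bads[band] += performance[app_id]
    # A scorecard that rank-orders well shows bad rates falling as bands rise.
    return {band: bads[band] / totals[band] for band in sorted(totals)}
```

If the bad rate does not decline monotonically as the score band rises, that is the signal to dive into the details and find the root cause.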

Recently we released a white paper that emphasizes the need for better, more granular indicators of local home-market conditions and borrower home equity, with a very interesting new finding on leading indicators in local-area credit statistics. Click here to download the white paper.

Home-equity indicators with new credit data methods for improved mortgage risk analytics
Experian white paper, April 2012

In the run-up to the U.S. housing downturn and financial crisis, perhaps the greatest single risk-management shortfall was poorly predicted home prices and borrower home equity. This paper describes new improvements in housing-market indicators derived from local-area credit and real-estate information. True housing markets are very local, and until recently, local real-estate data have not been systematically available and interpreted for broad use in modeling and analytics. Local-area credit data are similarly new, and their potential for new indicators of housing-market conditions is studied here in Experian's Premier Aggregated Credit Statistics(SM). Several examples provide insights into home-equity indicators for improved mortgage models, predictions, strategies, and combined-LTV measurement. The paper finds that for existing mortgages evaluated with current combined LTV and borrower credit score, local-area credit statistics are an even stronger add-on default predictor than borrower credit attributes.

Authors: John Straka and Chuck Robida, Experian; Michael Sklarz, Collateral Analytics
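For readers new to the combined-LTV measurement the paper refers to, the underlying arithmetic is standard: total outstanding lien balances (first mortgage plus any home-equity liens) divided by the current property value. A tiny illustration, with made-up numbers:

```python
# Combined loan-to-value (CLTV): all lien balances over current property value.
# The figures below are invented for illustration.

def combined_ltv(lien_balances, property_value):
    return sum(lien_balances) / property_value

# A $200,000 first mortgage and a $50,000 home-equity line against a home
# now worth $250,000 gives a CLTV of 1.0 -- the borrower has no equity left.
```

The paper's point is that poorly predicted property values make this ratio, and therefore borrower equity, poorly predicted as well.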

As the need for password management increases, consumers' options have expanded, and today's services can please even the strictest cybersecurity aficionado.


