All posts by Guest Contributor


By: Tom Hannagan Apparently my last post on the role of risk management in the pricing of deposit services hit some nerve ends. That’s good. The industry needs its “nerve ends” tweaked after the dearth of effective risk management that contributed to the financial malaise of the last couple of years. No bank, or any business, can prosper by simply following its competitors’ marketing strategies and meeting or slightly undercutting their prices. The actions of competitors are an important piece of intelligence to consider, but not necessarily optimal for your bank to copy. One question regards the “how-to” behind risk-based pricing (RBP) of deposits. The answer has four parts. Let’s see.

First, because of the importance and size of the deposit business (yes, it’s a line of business) as a funding source, one needs to isolate the interest rate risk. This is done by transfer pricing, or, in a sense, crediting the deposit balances for their marginal value as an offset to borrowing funds. This transfer price has nothing to do with the earnings credit rate used in account analysis – that is a merchandising issue used to generate fee income. Fees resulting from account analysis, when not waived, affect the profitability of deposit services, but are not a risk element. Two things are critical to the transfer of funding credit: 1) the assumptions regarding the duration, or reliability, of the deposit balances and 2) the rate curve used to match the duration. Different types of deposits behave differently based on changes in rates paid. Checking account deposit funds tend to be very loyal or “sticky” - they don’t move around a lot (or easily) because of the rate paid, if any. At the other extreme, time deposits tend to be very rate-sensitive and can move (in or out) for small incremental gains. Savings, money market and NOW accounts are in between.
Since deposits are an offset (ultimately) to marginal borrowing, just as loans might (ultimately) require marginal borrowing, we recommend using the same rate curve for both asset and liability transfer pricing. The money is the same on both sides of the balance sheet, and the rate curve used to fund a loan or credit a deposit should be the same. We believe this will help, greatly, to isolate IRR. It also seems fairer when explaining the concept to line management.

Secondly, although there is essentially no credit risk associated with deposits, there is operational risk. Deposits make up most of the liability side of the balance sheet and therefore the lion’s share of institutional funding. Deposits are also a major source of operational expense. The mitigated operational risks, such as physical security, backup processing arrangements, various kinds of insurance and catastrophe plans, are normal expenses of doing business and are included in a bank’s financial statements. These costs need to be broken down by deposit category to get a picture of the risk-adjusted operating expenses.

The third major consideration for analyzing risk-adjusted deposit profitability is its revenue contribution. Deposit-related fee income can be a very significant number and needs to be allocated to the particular deposit category that generates it. This is an important aspect of the return, along with the risk-adjusted funding value of the balances. It will vary substantially across deposit types: time deposits have essentially zero fee income, whereas checking accounts can produce significant revenues.

The fourth major consideration is capital. There are unexpected losses associated with deposits that must be covered by risk-based capital – or equity. The unexpected losses include: unmitigated operational risks, any error in transfer pricing the market risk, and business or strategic risk.
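The four parts above can be pulled together in a back-of-the-envelope calculation. The sketch below is illustrative only, not a prescribed methodology; every balance, rate, and capital percentage is a hypothetical assumption.

```python
# Illustrative four-part risk-adjusted deposit profitability calculation:
# 1) transfer-priced funding credit, 2) operating cost, 3) fee income,
# 4) a charge for allocated risk-based capital. All figures are hypothetical.

def deposit_profit(balance, transfer_rate, interest_paid_rate,
                   fee_income, operating_cost, capital_pct, hurdle_rate):
    """Annual risk-adjusted profit contribution of one deposit category."""
    funding_credit = balance * (transfer_rate - interest_paid_rate)  # part 1
    pre_capital = funding_credit + fee_income - operating_cost       # parts 2-3
    capital_charge = balance * capital_pct * hurdle_rate             # part 4
    return pre_capital - capital_charge

# Checking: sticky balances credited at a longer-duration point on the curve,
# higher operating cost, meaningful fee income.
checking = deposit_profit(balance=10_000_000, transfer_rate=0.030,
                          interest_paid_rate=0.001, fee_income=120_000,
                          operating_cost=180_000, capital_pct=0.005,
                          hurdle_rate=0.12)

# Time deposits: rate-sensitive, matched to a shorter point on the curve,
# low operating cost and essentially no fee income.
time_dep = deposit_profit(balance=10_000_000, transfer_rate=0.020,
                          interest_paid_rate=0.015, fee_income=0,
                          operating_cost=20_000, capital_pct=0.003,
                          hurdle_rate=0.12)

print(f"checking: ${checking:,.0f}  time deposits: ${time_dep:,.0f}")
```

Even with identical balances, the two categories contribute very differently once the matched-duration funding credit, fees, costs, and capital charge are all counted.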
Although the unexpected losses associated with deposit products are substantially less than those found in lending products, they need to be taken into account to have a fully risk-adjusted view. It is also necessary to be able to compare the risk-adjusted profit and profitability of services as diverse as those found within banking. Enterprise risk management needs to consider all of the lines of business, and all of the products of the organization, on a risk-adjusted performance basis. Otherwise it is impossible to decide on the allocation of resources, including precious capital. Without this risk management view of deposits (just as with loans), it is impossible to price the services in a completely knowledgeable fashion. Good entity governance, asset and liability posturing, and competent line of business management all require more and better risk-based profit considerations to be an important part of the intelligence used to optimally price deposits.

Published: January 20, 2010 by Guest Contributor

By: Ken Pruett The use of Knowledge Based Authentication (KBA), or out of wallet questions, continues to grow. For many companies, this solution is one of their primary means of fraud prevention. Selecting the proper tool often involves a fairly significant due diligence process to evaluate various offerings before choosing the right partner and solution – companies want to make sure they make the right choice. I am often surprised that a large percentage of customers just turn these tools on and never evaluate or even validate ongoing performance. Performance monitoring is a way to make sure you are getting the most out of the product you are using for fraud prevention. This exercise is really designed to take an analytical look at what you are doing today when it comes to Knowledge Based Authentication. There are a variety of benefits that most customers experience after undergoing this fraud analytics exercise. The first is simply validating that the tool is working properly. Some questions to ponder include: Are enough frauds being identified? Is the manual review rate in line with what was expected? In almost every one of these engagements I have worked on, there were areas that were not in line with what the customer was hoping to achieve. Many had no idea that they were not getting the expected results. Taking this one step further, changes can also be made to improve upon what is already in place. For example, you can evaluate how well each question is performing. The analysis can show you which questions are doing the best job of predicting fraud. Using the better-performing questions can allow you to find more fraud while referring fewer applications for manual review. This is a great way to optimize how you use the tool. In most organizations there is increased pressure to make sure that every dollar spent is bringing value to the organization.
Performance monitoring is a great way to show the value that your KBA tool is bringing to the organization. The exercise can also be used to show how you are proactively managing your fraud prevention process. You accomplish this by showing how well you are optimizing your use of the tool today while addressing emerging fraud trends. The key message is to continuously measure the performance of the KBA tool you are using. An exercise like performance monitoring can provide you with great insight on a quarterly basis. This will allow you to get the most out of your product and help you keep up with a variety of emerging fraud trends. Doing nothing is really not an option in today’s ever-changing environment.
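As a concrete illustration of the question-level evaluation described above, the sketch below tallies, for each KBA question, how often known frauds and known good applicants answered it incorrectly. A strong question is failed by frauds often and by good customers rarely. The question names and outcome records are entirely hypothetical.

```python
# Toy question-level KBA performance check with fabricated outcome data.
from collections import defaultdict

# (question_id, answered_correctly, was_fraud) per applicant-question pair
results = [
    ("prior_address", False, True), ("prior_address", True, False),
    ("prior_address", False, True), ("prior_address", True, False),
    ("auto_lender",   False, True), ("auto_lender",   False, False),
    ("auto_lender",   True,  True), ("auto_lender",   True, False),
]

stats = defaultdict(lambda: {"fraud_miss": 0, "fraud_n": 0,
                             "good_miss": 0, "good_n": 0})
for qid, correct, fraud in results:
    bucket = stats[qid]
    key = "fraud" if fraud else "good"
    bucket[key + "_n"] += 1
    if not correct:
        bucket[key + "_miss"] += 1  # an incorrect answer

for qid, s in sorted(stats.items()):
    fraud_fail = s["fraud_miss"] / s["fraud_n"]  # want this high
    good_fail = s["good_miss"] / s["good_n"]     # want this low
    print(f"{qid}: fraud fail {fraud_fail:.0%}, good fail {good_fail:.0%}")
```

In this fabricated sample, the prior-address question separates frauds from good applicants cleanly, while the auto-lender question fails both groups at the same rate, making it a candidate for replacement.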

Published: January 18, 2010 by Guest Contributor

By: Amanda Roth The reality of risk-based pricing is that there is not one “end all be all” way of determining what pricing should be applied to your applicants. The truth is that statistics will only get you so far. They may get you 80 percent of the final answer, but to whom is 80 percent acceptable? The other 20 percent must also be addressed. I am specifically referring to those factors that are outside of your control. For example, does your competition’s pricing impact your ability to price loans? Have you thought about how loyal-customer discounts or incentives may contribute to the success or demise of your program? Do you have a sensitive population that may have a significant reaction to any risk-based pricing changes? These questions must be addressed for sound pricing and risk management. Over the next few weeks, we will look at each of these questions in more detail, along with tips on how to apply them in your organization. As the new year is often a time of reflection and change, I would encourage you to let me know what experiences you may be having in your own programs. I would love to include your thoughts and ideas in this blog.

Published: January 18, 2010 by Guest Contributor

By: Tom Hannagan This blog has often discussed many aspects of risk-adjusted pricing for loans. Loans, with their inherent credit risk, certainly deserve a lot of attention when it comes to risk management in banking. But that doesn’t mean you should ignore the risk management implications found in the other product lines. Enterprise risk management needs to consider all of the lines of business, and all of the products, of the organization. This includes the deposit services arena. Deposits make up roughly 65 percent to 75 percent of the liability side of the balance sheet for most financial institutions, representing the lion’s share of their funding source. This is a major source of operational expense and also represents most of the bank’s interest expense. The deposit activity has operational risk, and this large funding source plays a huge role in market risk – including both interest rate risk and liquidity risk. It stands to reason that such risks would be considered when pricing deposit services. Unfortunately, that is not always the case. Okay, to be honest, it’s too rarely the case. This raises serious entity governance questions. How can such a large operational undertaking, notwithstanding the criticality of the funding implications, not be subjected to risk-based pricing considerations? We have already seen warnings that the current low interest rate environment will not last forever. When the economy improves and rates head upward, banks need to understand the bottom-line profit implications. Deposit rate sensitivity across the various deposit types accounts for a huge portion of the impact on net interest income. Risk-based pricing of these services should be considered before committing to provide them. Even without the credit risk implications found on the loan side of the balance sheet, there is still plenty of operational and market risk impact that needs to be taken into account from the liability side.
When risk management is not considered and mitigated as part of the day-to-day management of the deposit line of business, the bank is leaving these risks completely to chance. This unmitigated risk increases the portion of overall risk that is then considered to be “unexpected” in nature and thereby increases the equity capital required to support the bank.

Published: January 12, 2010 by Guest Contributor

By: Wendy Greenawalt Given the current volatile market conditions and rising unemployment rates, no industry is immune from delinquent accounts. However, recent reports have shown a shift in consumer trends and attitudes related to cellular phones. For many consumers, a cell phone is an essential tool for business and personal use, and staying connected is a very high priority. Given this, many consumers pay their cellular bill before other obligations, even if they represent a poor bank credit risk. Even with this trend, cellular providers are not immune from delinquent accounts or from having to determine the right course of action to improve collection rates. By applying optimization technology to account collection decisions, cellular providers can ensure that all variables are considered given the multiple contact options available. Unlike other types of services, cellular providers have numerous options available in an attempt to collect on outstanding accounts. This, however, poses other challenges, because collectors must determine the ideal method and timing to attempt to collect while retaining the consumers who will be profitable in the long term. Optimized decisions can consider all contact methods, such as text, inbound/outbound calls, disconnect, service limitation, timing and diversion of calls. At the same time, providers must consider constraints such as likelihood of curing, historical consumer behavior (such as credit score trends), and resource costs and limitations. Since the cellular industry is one of the most competitive businesses, it is imperative that providers take advantage of every tool that can improve decisions to drive revenue and retention. An optimized strategy tree can be easily implemented into current collection processes and provide significant improvement over current processes.

Published: January 7, 2010 by Guest Contributor

By: Heather Grover In my previous entry, I covered how fraud prevention affects the operational side of new DDA account opening. To give a complete picture, we need to consider fraud best practices and their impact on the customer experience. As mentioned earlier, the branch continues to be a highly utilized channel and is the place for “customized service.” In addition, for retail banks where the branch continues to be the consumer's first point of contact, fraud detection is paramount in deciding IF we should initiate a relationship with the consumer. Traditional thinking has been that DDA accounts are secured by deposits, so little risk management policy is applied. The reality is that the DDA account can be a fraud portal into the organization’s many products. Bank consolidations and lower application volumes are driving increased competition at the branch – increased demand exists to cross-sell consumers at the point of new account opening. As a result, banks are moving many fraud checks to the front end of the process: know your customer and Red Flag guideline checks are done sooner, in a consolidated and streamlined fashion. This minimizes fraud losses and meets compliance in a single step, so that new account holders are processed through the system as quickly as possible. Another recent trend is streamlining a two-day batch fraud check process to provide account holders with an immediate and final decision. The casualty of a longer process could be a consumer who walks out of your branch with a checkbook in hand – only to be contacted the next day and told that his/her account has been shut down. By addressing this process, not only will the customer experience be improved, with increased retention, but operational costs will also be reduced. Finally, relying on documentary evidence for ID verification can be viewed by some consumers as onerous and lengthy.
Use of knowledge based authentication can provide more robust authentication while giving assurance of the consumer’s identity. The key is to use a solution that can authenticate “thin file” consumers opening DDA accounts. This means your out of wallet questions need to rely on multiple data sources – not just credit. Interactive questions can give your account holders peace of mind that you are doing everything possible to protect their identity – which builds the customer relationship…and your brand.  

Published: January 4, 2010 by Guest Contributor

By: Heather Grover In past client and industry talks, I’ve discussed the increasing importance of retail branches to the growth strategy of the bank. Branches are the most utilized channel of the bank and they tend to be the primary tool for relationship expansion. Given the face-to-face nature, the branch historically has been viewed as a relatively low-risk channel needing little (if any) identity verification – there is less use of robust risk-based authentication or out of wallet questions. However, a now well-established fraud best practice is doing proper identity verification and fraud prevention at the point of DDA account opening. In the current environment of declining credit application volumes and approvals across the enterprise, there is an increased focus on organic growth through deposits. Doing proper vetting during DDA account openings helps bring your retail process closer in line with the rest of your organization’s identity theft prevention program. It also provides assurance and confidence that the customer can now be cross-sold and up-sold to other products. A key industry challenge is that many of the current tools used in DDA are less mature than in other areas of the organization. We see few clients in retail that are using advanced fraud analytics or fraud models to minimize fraud – and even fewer are using them to automate manual processes – even though more than 90 percent of DDA accounts are opened manually. A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud? Many providers offer solutions that can do both, which can help minimize the number of steps required to process a new account.

2. Is the solution real-time? The more quickly you can provide your new account holders with an immediate and final decision, the less time and effort you’ll spend finalizing the decision after they leave the branch.

3. Does the solution provide detailed data for manual review? This can help save valuable analyst time and provider costs by limiting the need to do additional searches.

In my next post, we’ll discuss how fraud prevention in DDA impacts the customer experience.

Published: December 30, 2009 by Guest Contributor

By: Amanda Roth The final level of validation for your risk-based pricing program is to validate for profitability. Not only will this analysis build on the two previous analyses, but it will also factor in the cost of making a loan based on the risk associated with that applicant. Many organizations do not complete this crucial step. Therefore, they may have the applicants grouped together correctly, but still find themselves unprofitable. The premise of risk-based pricing is that we are pricing to cover the cost associated with an applicant. If an applicant has a higher probability of delinquency, we can assume there will be additional collection costs, reporting costs, and servicing costs associated with keeping this applicant in good standing. We must understand what these costs may be, though, before we can price accordingly. Information of this type can be difficult to determine based on the resources available to your organization. If you aren’t able to determine the exact amount of time and costs associated with the different loans at different risk levels, there are industry best practices that can be applied. Of primary importance is to factor in the cost to originate, service and terminate a loan based on varying risk levels. This is the only true way to validate that your pricing program is working to provide profitability to your loan portfolio.
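The cost-coverage premise above can be made concrete with a simple break-even calculation: the rate for a tier must recover funding, origination/servicing/termination expense, expected loss, and a return on allocated capital. This is a hedged sketch under hypothetical assumptions, not an industry-standard formula; every input below is invented for illustration.

```python
# Break-even loan rate per risk tier, covering the risk-based costs
# discussed above. All rates and percentages are hypothetical.

def break_even_rate(funding_rate, annual_expense_pct, expected_loss_pct,
                    capital_pct, hurdle_rate):
    """Rate needed to cover funding, expenses, expected loss, and capital."""
    return (funding_rate + annual_expense_pct + expected_loss_pct
            + capital_pct * hurdle_rate)

tiers = {
    # expected loss and expense rise, and more capital is allocated,
    # as tier risk increases
    "A": {"expected_loss_pct": 0.002, "annual_expense_pct": 0.010,
          "capital_pct": 0.04},
    "C": {"expected_loss_pct": 0.030, "annual_expense_pct": 0.018,
          "capital_pct": 0.08},
}
for name, t in tiers.items():
    rate = break_even_rate(funding_rate=0.025, hurdle_rate=0.15, **t)
    print(f"Tier {name}: break-even rate {rate:.2%}")
```

If the tier-C price in your program is not several points above tier A's, as this kind of calculation implies it should be, the pricing is subsidizing higher-risk borrowers.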

Published: December 28, 2009 by Guest Contributor

By: Amanda Roth To refine your risk-based pricing another level, it is important to analyze where your tiers are set and determine if they are set appropriately. (We find many of the regulators/examiners are looking for this next level of analysis.) This analysis begins with the results of the scoring model validation. Not only will the distributions from that analysis determine if the score can predict between good and delinquent accounts, but they will also highlight which score ranges have similar delinquency rates, allowing you to group your tiers together appropriately. After all, you do not want applicants with a 1 percent chance of delinquency priced the same as someone with an 8 percent chance of delinquency. By reviewing the interval delinquency rates as well as the odds ratios, you should be able to determine where a significant enough difference occurs to warrant different pricing. You will increase the opportunity for portfolio profitability through this analysis, as you are reducing the likelihood that higher-risk applicants receive lower pricing. As expected, the overall risk management of the portfolio will improve when a proper risk-based pricing program is developed. In my next post we will look at the final level of validation, which provides insight into pricing for profitability.
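The interval-rate and odds-ratio review described above can be sketched in a few lines: compute the delinquency rate and good:bad odds for each score band, then group adjacent bands whose rates are close. The band counts below are hypothetical.

```python
# Interval delinquency rates and odds per score band, from fabricated
# validation-sample counts, to decide where tier breaks belong.

bands = [  # (score_range, good_accounts, delinquent_accounts)
    ("660-679", 1800, 144),
    ("680-699", 2100, 147),
    ("700-719", 2600, 52),
    ("720+",    3100, 31),
]

summary = []
for label, goods, bads in bands:
    bad_rate = bads / (goods + bads)   # interval delinquency rate
    odds = goods / bads                # good:bad odds ratio
    summary.append((label, bad_rate, odds))
    print(f"{label}: delinquency {bad_rate:.1%}, odds {odds:.1f}:1")
```

In this invented sample, the 660-679 and 680-699 bands show similar delinquency (about 7 percent) and could share a tier, while 700-719 and 720+ are far cleaner and warrant distinctly lower pricing.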

Published: December 18, 2009 by Guest Contributor

By: Amanda Roth As discussed earlier, the validation of a risk-based pricing program can mean several different things. Let’s break these options down. The first option is to complete a validation of the scoring model being used to set the pricing for your program. This is the most basic validation of the program, and it does not guarantee any insight on loan profitability expectations. A validation of this nature will help you determine if the score being used is actually helping to determine the risk level of an applicant. This analysis is completed by using a snapshot of new booked loans received during a period of time, usually 18 to 24 months prior to the current period. It is extremely important to view only the new booked loans taken during the time period and the score they received at the time of application. By maintaining this specific population only, you will ensure the analysis is truly indicative of the predictive nature of your score at the time you make the decision and apply the recommended risk-based pricing. By analyzing the distribution of good accounts vs. delinquent accounts, you can determine if the score being used is truly able to separate these groups. Without acceptable separation, it would be difficult to make any decisions based on the score models, especially risk-based pricing. Although beneficial in determining whether you are using the appropriate scoring models for pricing, this analysis does not provide insight into whether your risk-based pricing program is set up correctly. Please join me next time to take a look at another option for this analysis.
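One common way to quantify the good/delinquent separation described above (not necessarily the author's exact method) is the Kolmogorov-Smirnov (KS) statistic: the maximum gap between the cumulative distributions of good and delinquent accounts across score bands. The band counts below are hypothetical.

```python
# KS separation check on a fabricated snapshot of booked loans,
# ordered by ascending score band: (good_accounts, delinquent_accounts).
bands = [
    (100, 60), (200, 70), (400, 40), (800, 20), (1000, 10),
]
total_good = sum(g for g, b in bands)
total_bad = sum(b for g, b in bands)

cum_good = cum_bad = 0.0
ks = 0.0
for goods, bads in bands:
    cum_good += goods / total_good
    cum_bad += bads / total_bad
    ks = max(ks, abs(cum_bad - cum_good))  # widest gap so far

print(f"KS = {ks:.2f}")  # higher means better separation
```

A KS near zero would mean the score barely distinguishes the two groups, and any tiers built on it (let alone the pricing attached to them) would rest on sand.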

Published: December 18, 2009 by Guest Contributor

By: Kari Michel Lenders are looking for ways to improve their collections strategies as they continue to deal with unprecedented consumer debt; significant increases in delinquency, charge-off rates and unemployment; and declining collectability on accounts. To maximize recovered dollars while minimizing collections costs and resources, new collections strategies are a must. The standard assembly-line “bucket” approach to collection treatment no longer works, because lenders cannot afford the inefficiencies and costs of working each account equally without any intelligence around likelihood of recovery. Using a segmentation approach helps control spend and reduces labor costs to maximize the dollars collected. Credit-based data can be utilized in decision trees to create segments that can be used with or without collection models. For example, consider a portion of a full decision tree that shows the separation in liquidation rates achieved by applying an attribute to a recovery score. The entire segment has an average liquidation rate of 21.91 percent. The attribute applied to this score segment is the aggregated available credit on open bank card trades updated within 12 months. By using just this one attribute for this score band, we can see that the liquidation rates range from 11 to 35 percent. Additional attributes can be applied to grow the tree to isolate additional pockets of customers that are more recoverable, and to identify segments that are not likely to be recovered. From a fully developed segmentation analysis, appropriate collections strategies can be determined to prioritize those accounts that are most likely to pay, creating new efficiencies within existing collection strategies.
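The single-attribute split described above can be sketched directly: within one recovery-score band, divide accounts on available bank card credit and compare liquidation rates. All account records below are fabricated, chosen only to loosely echo the kind of spread discussed in the text.

```python
# Liquidation rate by attribute split within one recovery-score band.
# Each record: (available_credit, dollars_collected, dollars_owed). Fabricated.
accounts = [
    (500,    110, 1000), (800,   120, 1000), (1200,  300, 1000),
    (5000,   340, 1000), (9000,  360, 1000), (15000, 350, 1000),
]

def liquidation_rate(rows):
    """Dollars collected as a share of dollars owed for a segment."""
    collected = sum(c for _, c, _ in rows)
    owed = sum(o for _, _, o in rows)
    return collected / owed

low = [r for r in accounts if r[0] < 1000]    # little available credit
high = [r for r in accounts if r[0] >= 1000]  # more available credit

print(f"low available credit:  {liquidation_rate(low):.1%}")
print(f"high available credit: {liquidation_rate(high):.1%}")
```

Two segments with the same score but very different recovery prospects emerge from one attribute; a full tree repeats this splitting until each leaf maps cleanly to a treatment intensity.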

Published: December 17, 2009 by Guest Contributor

By: Amanda Roth During the past few months, we have been hearing from our clients that there is a renewed focus from the regulators/examiners on risk-based pricing strategies. Many are requesting a validation of the strategies to ensure acceptable management of risk through proper loan pricing and profitability. The question we often receive is “what exactly are they requiring?” In some cases, a simple validation of the scoring models used in the strategies will be sufficient. However, many require a deeper dive into where the risk bands are set and how pricing is determined. They are looking to see whether applicants of the same risk level are being priced the same and, when the price is increased from tier A to B, for example, whether the change in rate is in line with the change in risk. Some are even requiring a profitability analysis to show the expected impact of delinquency, loss and operating expense on net revenue for the product, tier and total portfolio. We'll address each of these analyses in more detail over the next few weeks. In the meantime, what are you hearing from your regulators/examiners?

Published: December 8, 2009 by Guest Contributor

By: Wendy Greenawalt In my last blog on optimization we discussed how optimized strategies can improve collection strategies. In this blog, I would like to discuss how optimization can bring value to decisions related to mortgage delinquency and modification. Over the last few years mortgage lenders have seen a sharp increase in the number of mortgage account delinquencies and a dramatic change in consumer mortgage payment trends. Specifically, lenders have seen a shift away from consumers’ historical willingness to pay their mortgage obligation first; many now allow the mortgage to go delinquent while keeping other debts current. This shift in borrower behavior appears unlikely to change anytime soon, and therefore lenders must make smarter account management decisions for mortgage accounts. Adding to this issue, property values continue to decline in many areas, and lenders must now identify whether a consumer is a strategic defaulter, a candidate for loan modification, or a consumer affected by the economic downturn. Many loans that were modified at the beginning of the mortgage crisis have since become delinquent and have ultimately been foreclosed upon by the lender. Making decisions about collection actions for mortgage accounts is increasingly complex, but optimization can assist lenders in identifying the ideal consumer collection treatment, while weighing organizational goals such as minimizing losses, maximizing internal resources, and retaining the most valuable consumers. Optimization can assist with these difficult decisions by utilizing a mathematical algorithm that assesses all possible options available and selects the ideal consumer decision based on organizational goals and constraints. This technology can be implemented into current decisioning processes, whether in real time or batch processing, and can provide substantial lift in prediction over business-as-usual techniques.
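The "assess all possible options under goals and constraints" idea above can be made tangible with a toy version. Production optimizers use mathematical programming; the brute-force search below, over a fabricated three-account portfolio, is only a sketch of the trade-off: maximize expected recovery subject to a limited number of outbound calls.

```python
# Toy collection-treatment optimization: pick one treatment per account to
# maximize expected recovery, with at most one outbound call available.
# All expected-recovery figures are fabricated.
from itertools import product

treatments = {  # expected recovery ($) per account under each treatment
    "text": {"acct1": 40, "acct2": 10, "acct3": 25},
    "call": {"acct1": 90, "acct2": 80, "acct3": 30},
}
accounts = ["acct1", "acct2", "acct3"]
MAX_CALLS = 1  # resource constraint

best_value, best_plan = -1, None
for plan in product(treatments, repeat=len(accounts)):
    if plan.count("call") > MAX_CALLS:
        continue  # violates the resource constraint
    value = sum(treatments[t][a] for t, a in zip(plan, accounts))
    if value > best_value:
        best_value, best_plan = value, dict(zip(accounts, plan))

print(best_plan, best_value)
```

Note the answer is not obvious from the biggest single number: acct1 gains the most from a call in absolute terms, but the optimizer spends the one call on acct2, where the lift over the cheap treatment is largest.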

Published: December 7, 2009 by Guest Contributor

By: Wendy Greenawalt Optimization has become a “buzz word” in the financial services marketplace, but some organizations still fail to realize all the possible business applications for optimization. As credit card lenders scramble to comply with the pending credit card legislation, optimization can be a quick and easily implemented solution that fits into current processes to ensure compliance with the new regulations. Specifically, lenders will now be under strict guidelines regarding when an APR can be changed on an existing account, and the specific circumstances under which the account must return to its original terms. Optimization can easily handle these constraints and identify which accounts should be modified based on historical account information and existing organizational policies. APR account changes can require a great deal of internal resources to implement and monitor for ongoing performance. Implementing an optimized strategy tree within an existing account management strategy will allow an organization to easily identify consumer-level decisions. This can be accomplished while monitoring accounts through ongoing batch processing. New delivery options are now available for lenders to receive optimized strategies for decisions related to account acquisition, customer management, and collections. Organizations that are not currently utilizing this technology within their processes should investigate the new delivery options. Recent research suggests optimizing decisions can provide an improvement of 7 to 16 percent over current processes.

Published: November 30, 2009 by Guest Contributor

By: Tom Hannagan Understanding RORAC and RAROC I was hoping someone would ask about these risk management terms…and someone did. The obvious answer is that the “A” and the “O” are reversed. But there’s more to it than that. First, let’s see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point towards both RORAC and RAROC. ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets. This used to be the most popular way of comparing banks to each other and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although these tend to be at smaller banks. ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have “assets” for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, let’s just say it’s too late to read about this general topic.
The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other, and may even allow banks to be compared to other types of businesses. The asset-independency of ROE can also allow a bank to compare internal product lines to each other. Perhaps most importantly, this permits looking at the comparative profitability of lines of business that are almost complete opposites, like lending versus deposit services. This includes risk-based pricing considerations. This would be difficult, if even possible, using ROA. ROE also tells us how effectively a bank (or any business) is using shareholders’ equity. Many observers prefer ROE, since equity represents the owners’ interest in the business. As we have all learned anew in the past two years, their equity investment is fully at risk. Equity holders are paid last, compared to other sources of funds supporting the bank. Shareholders are the last in line if the going gets rough. So, equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization. So, why bother with RORAC or RAROC? In short, it is to take risks more fully into account in the management of the institution. ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (loan loss provision).
The big risk management elements missing from general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL must be covered by the Equity account, or the solvency of the bank becomes an issue.

RORAC is Net Income divided by Allocated Capital. RORAC doesn't add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this by moving beyond book or average Equity and allocating capital, or equity, differentially to various lines of business and even to specific products and clients. This, in turn, makes it possible to move toward risk-based pricing at the relationship management level as well as portfolio risk management. The equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So it's a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And many of us do.

RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. Like RORAC, RAROC also takes into account the risk of unexpected loss by allocating capital differentially to various lines of business and even to specific products and clients. So RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric of profitability and an ultimate goal of modern risk management.
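To see how the three ratios relate, here is a minimal sketch in Python. The amounts, including the allocated capital and the charge for unmitigated risk, are hypothetical numbers chosen for illustration, not figures from the post:

```python
# Hypothetical line-of-business figures, in $ millions -- illustrative only.
net_income = 120.0            # general ledger net income
book_equity = 900.0           # average book equity (ROE denominator)
allocated_capital = 750.0     # capital allocated against Unexpected Loss (UL)
unmitigated_risk_cost = 15.0  # e.g., a charge for unhedged market risk embedded in the book

roe = net_income / book_equity
rorac = net_income / allocated_capital                             # risk-adjusts the denominator only
raroc = (net_income - unmitigated_risk_cost) / allocated_capital   # risk-adjusts numerator AND denominator

print(f"ROE:   {roe:.2%}")    # → ROE:   13.33%
print(f"RORAC: {rorac:.2%}")  # → RORAC: 16.00%
print(f"RAROC: {raroc:.2%}")  # → RAROC: 14.00%
```

The progression shows the point of the acronyms: RORAC swaps book equity for risk-based allocated capital, and RAROC then also nets the unmitigated risk charge out of the numerator, giving a fully risk-adjusted ratio.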
So, RORAC is a big step in the right direction, and RAROC would be the full step in the management of risk. RORAC can be a useful stage on the way to RAROC. RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level. It can also be broken down for any and all lines of business within the organization, and from there further broken down to the product level and the client relationship level, or summarized by lender portfolio or by various market segments. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as on operational or marketing prowess.

Published: November 19, 2009 by Guest Contributor
