If your company is like many financial institutions, the discussion around big data and financial analytics has likely been ongoing. For many financial institutions, data isn’t the problem; the question is what could or should be done with it. Research has shown that only about 30% of financial institutions are successfully leveraging their data to generate actionable insights, and customers are noticing. According to a recent study from Capgemini, only 30% of US customers and 26% of UK customers feel their financial institutions understand their needs. No matter how much data you have, it’s essentially just ones and zeroes if you’re not using it.

So how do banks, credit unions and other financial institutions that capture and consume vast amounts of data use that data to innovate, improve the customer experience and stay competitive? The answer, you could say, is written in the sand. The most forward-thinking financial institutions are turning to analytical environments, also known as sandboxes, to solve the business problem of big data. As the name suggests, a sandbox is an environment that contains all the materials and tools one might need to create, build and collaborate around data.

A sandbox gives data-savvy banks, credit unions and FinTechs access to depersonalized credit data from across the country. Using custom dashboards and data visualization tools, they can manipulate the data with predictive models for different micro- and macro-level scenarios. The added value of a sandbox is that it becomes a one-stop-shop data tool for the entire enterprise, saving the time normally lost in the back and forth of acquiring data specific to a project or particular data sets. The best systems utilize the latest open source technology in artificial intelligence and machine learning to deliver intelligence that can inform regional trends, surface consumer insights and highlight market opportunities.
From industry benchmarking to market entry and expansion research, and from campaign performance to vintage analysis, reject inferencing and much more, an analytical sandbox gives you the data to create actionable analytics and insights across the enterprise right when you need them, not months later. The result is the ability to empower your customers to make financial decisions when, where and how they want. Keeping them happy keeps your financial institution relevant and competitive. Isn’t it time to put your data to work for you? Learn more about how Experian can solve your big data problems. >> Interested to see a live demo of the Ascend Sandbox? Register today for our webinar “Big Data Can Lead to Even Bigger ROI with the Ascend Sandbox.”
Big Data is no longer a new concept. Once thought to be an overhyped buzzword, it now underpins and drives billions of dollars in revenue across nearly every industry. But there are still companies that are not fully leveraging the value of their big data, and that’s a big problem. In a recent study, Experian and Forrester surveyed nearly 600 business executives in charge of enterprise risk, analytics, customer data and fraud management. The results were surprising: while 78% of organizations said they have made recent investments in advanced analytics, only 29% felt they were successfully using these investments to combine data sources and gather more insights. Like the proverbial strategic plan sitting in a binder on a shelf, much of this investment goes unused. Moreover, 40% of respondents said they still rely on instinct and subjectivity when making decisions. While gut feeling and industry experience should be a part of your decision-making process, without data and models to verify or challenge your assumptions, you’re taking a big risk with bigger operations budgets and revenue targets. Meanwhile, customer habits and demands are evolving quickly and at a fundamental level. The proliferation of mobile and online environments is driving a paradigm shift to omnichannel banking in the financial sector and, with it, an expectation for a customer experience that is both customized and digitized. Financial institutions have to be ready to respond to and anticipate these changes to not only gain new customers but also retain current customers. Moreover, you can bet that your competition is already thinking about how to respond to this shift and better leverage their data and analytics for increased customer acquisition and engagement, share of wallet and overall reach. According to a recent Accenture study, 79% of enterprise executives agree that companies that fail to embrace big data will lose their competitive position and could face extinction.
What are you doing to help solve the business problem around big data and stay competitive in your company?
Machine learning (ML), the newest buzzword, has swept into the lexicon and captured the interest of us all. Its recent, widespread popularity has stemmed mainly from the consumer perspective. Whether it’s virtual assistants, self-driving cars or romantic matchmaking, ML has rapidly positioned itself in the mainstream. Though ML may appear to be a new technology, its use in commercial applications has been around for some time. In fact, many of the data scientists and statisticians at Experian are considered pioneers in the field of ML, going back decades. Our team has developed numerous products and processes leveraging ML, from our world-class consumer fraud and ID protection to credit data products like our Trended 3D™ attributes. In fact, we were just highlighted in the Wall Street Journal for how we’re using machine learning to improve our internal IT performance. ML’s ability to consume vast amounts of data to uncover patterns and deliver results that are otherwise not humanly possible is what makes it unique and applicable to so many fields. This predictive power has now sparked interest in the credit risk industry. Unlike fraud detection, where ML is well-established and used extensively, credit risk modeling has until recently taken a cautious approach to adopting newer ML algorithms. Because of regulatory scrutiny and a perceived lack of transparency, ML hasn’t enjoyed the broad acceptance of some of credit risk modeling’s more established methods. When it comes to credit risk models, delivering the most predictive score is not the only consideration for a model’s viability. Modelers must be able to explain and detail the model’s logic, or its “thought process,” for calculating the final score. This means taking steps to ensure the model’s compliance with the Equal Credit Opportunity Act, which forbids discriminatory lending practices.
Federal laws also require lenders to send adverse action responses if a consumer’s credit application has been declined. This requires that the model be able to highlight the top reasons for a less-than-optimal score. And so, while ML may be able to deliver the best predictive accuracy, its ability to explain how the results are generated has always been a concern. ML has been stigmatized as a “black box,” where data mysteriously gets transformed into the final predictions without a clear explanation of how. However, this is changing. Depending on the ML algorithm applied to credit risk modeling, we’ve found risk models can offer the same transparency as more traditional methods such as logistic regression. For example, gradient boosting machines (GBMs) are designed as a predictive model built from a sequence of several decision tree submodels. The very nature of GBMs’ decision tree design allows statisticians to explain the logic behind the model’s predictive behavior. We believe model governance teams and regulators in the United States may become comfortable with this approach more quickly than with deep learning or neural network algorithms, since GBMs are represented as sets of decision trees that can be explained, while neural networks are represented as long sets of cryptic numbers that are much harder to document, manage and understand. In future blog posts, we’ll discuss the GBM algorithm in more detail and how we’re using its predictive power and transparency to maximize credit risk decisioning for our clients.
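As a simple illustration of the kind of transparency described above, the sketch below trains a small GBM on synthetic data and inspects its global feature importances, one basic explainability artifact derived from the underlying decision trees. This is not Experian’s modeling stack; the data, feature names and scikit-learn usage are all illustrative assumptions.

```python
# Illustrative sketch only: a gradient boosting model on synthetic "credit"
# data using scikit-learn. Feature names and the outcome are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 1000

# Hypothetical applicant features (names are assumptions for the example)
X = np.column_stack([
    rng.integers(300, 850, n),   # credit_score
    rng.uniform(0, 1, n),        # utilization
    rng.integers(0, 10, n),      # recent_inquiries
]).astype(float)
feature_names = ["credit_score", "utilization", "recent_inquiries"]

# Synthetic "bad payment" outcome loosely tied to the features
signal = -0.01 * X[:, 0] + 3.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, n)
y = (signal > np.median(signal)).astype(int)

# A GBM is a sequence of shallow decision tree submodels
model = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Because each submodel is an inspectable tree, the model's logic can be
# documented; normalized feature importances are one simple starting point.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```

In practice, adverse action reason codes require per-applicant explanations rather than global importances, but the same tree structure is what makes those deeper techniques tractable.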
The August 2018 LinkedIn Workforce Report states some interesting facts about data science and the current workforce in the United States. Demand for data scientists is off the charts, but there is a data science skills shortage in almost every U.S. city — particularly in the New York, San Francisco and Los Angeles areas. Nationally, there is a shortage of more than 150,000 people with data science skills. One way companies in financial services and other industries have coped with the skills gap in analytics is by using outside vendors. A 2017 Dun & Bradstreet and Forbes survey reported that 27 percent of respondents cited a skills gap as a major obstacle to their data and analytics efforts. Outsourcing data science work makes it easier to scale up and scale down as needs arise. But surprisingly, more than half of respondents said the third-party work was superior to their in-house analytics. At Experian, we have participated in quite a few outsourced analytics projects. Here are a few of the lessons we’ve learned along the way:

Manage expectations: Everyone has their own management style, but to be successful, you must be proactively involved in managing the partnership with your provider. Doing so will keep them aligned with your objectives and prevent quality degradation or cost increases as you become more tied to them.

Communication: Creating open and honest communication between executive management and your resource partner is key. You need to be able to discuss what is working well and what isn’t. This will help to ensure your partner has a thorough understanding of your goals and objectives and will properly manage any bumps in the road.

Help external resources feel like a part of the team: When you’re working with external resources, either offshore or onshore, they are typically in an alternate location. This can make them feel like they aren’t a part of the team and therefore not directly tied to the business goals of the project.
To help bridge the gap, holding regular status meetings via video conference can help everyone feel like a part of the team. Within these meetings, providing information on the goals and objectives of the project is key. This way, they can hear the message directly from you, which will make them feel more involved and provide a clear understanding of what they need to do to be successful. Being able to put faces to names, as well as having direct communication with you, will help external employees feel included.

Drive engagement through recognition programs: Research has shown that employees are more engaged in their work when they receive recognition for their efforts. While you may not be able to provide a monetary award, recognition is still a big driver for engagement. It can be as simple as recognizing a job well done during your video conference meetings, providing certificates of excellence or sending a simple thank-you card to those who are performing well. Either way, taking the extra time to make your external workforce feel appreciated will produce engaged resources that will help drive your business goals forward.

Industry training: Your external resources may have the necessary skills to perform the job successfully, but they may not have specific industry knowledge geared toward your business. Work with your partner to determine where they have expertise and where you can work together to provide training. Ensure your external workforce has a solid understanding of the business line they will be supporting.

If you’ve decided to augment your staff for your next big project, Experian® can help. Our Analytics on Demand™ service provides senior-level analysts, either onshore or offshore, who can help with analytical data science and modeling work for your organization.
As more financial institutions express interest in alternative credit data sources and leverage them to assess consumers and make credit decisions, lenders want to be assured of how they can best utilize this data source and maintain compliance. Experian recently interviewed Philip Bohi, Vice President for Compliance Education for the American Financial Services Association (AFSA), to learn more about his perspective on this topic, as well as to gain insights on what lenders should consider as they dive into the world of alternative credit data.

Alternative data continues to be a hot topic in the financial services space. How have you seen it evolve over the past few years?

It’s hard to pinpoint where it began, but it has been interesting to observe how technology firms and people have changed our perceptions of the value and use of data in recent years. Earlier, a company’s data was just the information needed to conduct business. It seems like people are waking up to the realization that their business data can be useful internally, as well as to others. And we have come to understand how previously disregarded data can be profoundly valuable. These insights provide a lot of new opportunities, but also new questions. I would also say that the scope of alternative credit data use has changed. A few years ago, alternative credit data was largely a tool to address the thin- and no-file consumer. More recently, we’ve seen it can provide a lift across the credit spectrum.

We recently conducted a survey with lenders, and 23% of respondents cited “complying with laws and regulations” as the top barrier to utilizing alternative data. Why do you think this is the case? What are the top concerns you hear from lenders as it relates to compliance on this topic?

The consumer finance industry is very focused on compliance, because failure to maintain compliance can kill a business, either directly through fines and expenses, or through reputation damage.
Concerns about alternative data come from a lack of familiarity. There is uncertainty about acquiring the data, using the data, safeguarding the data, selling the data, etc. Companies want to feel confident that they know where the limits are in creating, acquiring, using, storing and selling data.

Alternative data is a broad term. When it comes to utilizing it for making a credit decision, what types of alternative data can actually be used?

Currently the scope is somewhat limited. I would describe the alternative data elements as being analogous to traditional credit data. Alternative data includes rent payments, utility payments, cell phone payments, bank deposits, and similar records. These provide important insights into whether a given consumer is keeping up with financial obligations. And most importantly, we are seeing that the particular types of obligations reflected in alternative data reflect the spending habits of people whose traditional credit files are thin or non-existent. This is a good thing, as alternative data captures consumers who are paying their bills consistently earlier than traditional data does. Serving those customers is a great opportunity.

If a lender wants to begin utilizing alternative credit data, what must they know from a compliance standpoint?

I would begin with considering what the lender’s goal is and letting that guide how it will explore using alternative data. For some companies, accessing credit scores that include some degree of alternative data along with traditional data elements is enough. Just doing that provides a good business benefit without introducing a lot of additional risk as compared to using traditional credit score information. If the company wants to start leveraging its own customer data for its own purposes, or making it available to third parties, that becomes complex very quickly. A company can find itself subject to all the regulatory burdens of a credit-reporting agency very quickly.
In any case, the entire lifecycle of the data has to be considered, along with how the data will be protected when the data is “at rest,” “in use,” or “in transit.” Alternative data used for credit assessment should additionally be FCRA-compliant.

How do you see alternative credit data evolving in the future?

I cannot predict where it will go, but the unfettered potential is dizzying. Think about how DNA-based genealogy has taken off, telling folks they have family members they did not know and providing information to solve old crimes. I think we need to carefully balance personal privacy and prudent uses of customer data. There is also another issue with wide-ranging uses of new data. I contend it takes time to discern whether an element of data is accurately predictive. Consider for a moment a person’s utility bills. If electricity usage in a household goes down when the bills in the neighborhood are going up, what does that tell us? Does it mean the family is under some financial strain and using the air conditioning less? Or does it tell us they had solar panels installed? Or they’ve been on vacation? Figuring out what a particular piece of data means about someone’s circumstances can be difficult.

About Philip Bohi

Philip joined AFSA in 2017 as Vice President, Compliance Education. He is responsible for providing strategic direction and leadership for the Association’s compliance activities, including AFSA University, and is the staff liaison to the Operations and Regulatory Compliance Committee and Technology Task Forces. He brings significant consumer finance legal and compliance experience to AFSA, having served as in-house counsel at Toyota Motor Credit Corporation and Fannie Mae. At those companies, Philip worked closely with compliance staff supporting technology projects, legislative tracking, and vendor management.
His private practice included work on manufactured housing, residential mortgage compliance, and consumer finance matters at McGlinchey Stafford, PLLC and Lotstein Buckman, LLP. He is a member of the Virginia State Bar and the District of Columbia Bar. Learn more about the array of alternative credit data sources available to financial institutions.
As I mentioned in my previous blog, model validation is an essential step in evaluating a recently developed predictive model’s performance before finalizing and proceeding with implementation. An in-time validation sample is created by setting aside a portion of the total model development sample so the predictive accuracy can be measured on a data sample not used to develop the model. However, if few records in the target performance group are available, splitting the total model development sample into development and in-time validation samples will leave too few records in the target group for use during model development. An alternative approach to generating a validation sample is to use a resampling technique. There are many different types and variations of resampling methods. This blog will address a few common techniques.

Jackknife technique — An iterative process whereby one observation is removed from each subsequent sample generation. So if there are N observations in the data, jackknifing calculates the model estimates on N different samples, each having N − 1 observations. The model is then applied to each sample, and an average of the model predictions across all samples is derived to generate an overall measure of model performance and prediction accuracy. The jackknife technique can be broadened to remove a group of observations from each subsequent sample generation, while giving each observation in the data set an equal opportunity for inclusion and exclusion.

K-fold cross-validation — Generates multiple validation data sets by splitting the available data into K subsets. During the iterative process, each subset is held out in turn as the validation set while the model is fit on the remaining K − 1 subsets.
Again, an average of the predictions across the multiple validation samples is used to create an overall measure of model performance and prediction accuracy.

Bootstrap technique — Generates subsets from the full model development data sample by drawing observations with replacement, producing multiple samples generally of equal size. Thus, with a total sample size of N, this technique generates random samples of size N such that a single observation can be present in multiple subsets while another observation may not be present in any of the generated subsets. The generated samples can be combined into a simulated larger data sample that then can be split into a development and an in-time, or holdout, validation sample.

Before selecting a resampling technique, it’s important to check and verify the data assumptions for each technique against the data sample selected for your model development, as some resampling techniques are more sensitive than others to violations of data assumptions. Learn more about how Experian Decision Analytics can help you with your custom model development.
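Under the assumption of a generic scoring model and synthetic data, the three resampling schemes above can be sketched with standard scikit-learn utilities. Everything here (the data, the logistic regression model, the fold count) is an illustrative assumption, not Experian’s actual process.

```python
# A minimal sketch of jackknife, k-fold and bootstrap resampling
# using scikit-learn on a synthetic sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.utils import resample

rng = np.random.default_rng(0)
N = 200
X = rng.normal(size=(N, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=N) > 0).astype(int)
model = LogisticRegression()

# Jackknife (leave-one-out): N fits, each on N - 1 observations,
# averaged into an overall accuracy measure
jackknife_acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

# K-fold: each of K subsets is held out once as the validation set
# while the model is fit on the remaining K - 1 subsets
kfold_acc = cross_val_score(
    model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)
).mean()

# Bootstrap: draw N observations with replacement; rows never drawn
# ("out-of-bag") can serve as a holdout validation sample
boot_idx = resample(np.arange(N), replace=True, n_samples=N, random_state=0)
oob_mask = ~np.isin(np.arange(N), boot_idx)
model.fit(X[boot_idx], y[boot_idx])
boot_acc = model.score(X[oob_mask], y[oob_mask])

print(jackknife_acc, kfold_acc, boot_acc)
```

Note the bootstrap variant shown here scores the out-of-bag rows rather than recombining samples as described above; both are common ways to turn bootstrap draws into a validation measure.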
An introduction to the different types of validation samples

Model validation is an essential step in evaluating and verifying a model’s performance during development before finalizing the design and proceeding with implementation. More specifically, during a predictive model’s development, the objective of a model validation is to measure the model’s accuracy in predicting the expected outcome. For a credit risk model, this may be predicting the likelihood of good or bad payment behavior, depending on the predefined outcome. Two general types of data samples can be used to complete a model validation. The first is known as the in-time, or holdout, validation sample and the second is known as the out-of-time validation sample.

So, what’s the difference between an in-time and an out-of-time validation sample?

An in-time validation sample sets aside part of the total sample made available for the model development. Random partitioning of the total sample is completed upfront, generally separating the data into a portion used for development and the remaining portion used for validation. For instance, the data may be randomly split, with 70 percent used for development and the other 30 percent used for validation. Other common data subset schemes include an 80/20, a 60/40 or even a 50/50 partitioning of the data, depending on the quantity of records available within each segment of your performance definition. Before selecting a data subset scheme to be used for model development, you should evaluate the number of records available in your target performance group, such as the number of bad accounts. If you have too few records in your target performance group, a 50/50 split can leave you with insufficient performance data for use during model development. A separate blog post will present a few common options for creating alternative validation samples through a technique known as resampling.
Once the data has been partitioned, the model is created using the development sample. The model is then applied to the holdout validation sample to determine the model’s predictive accuracy on data that wasn’t used to develop the model. The model’s predictive strength and accuracy can be measured in various ways by comparing the known and predefined performance outcome to the model’s predicted performance outcome. The out-of-time validation sample contains data from an entirely different time period or customer campaign than what was used for model development. Validating model performance on a different time period is beneficial to further evaluate the model’s robustness. Selecting a data sample from a more recent time period having a fully mature set of performance data allows the modeler to evaluate model performance on a data set that may more closely align with the current environment in which the model will be used. In this case, a more recent time period can be used to establish expectations and set baseline parameters for model performance, such as population stability indices and performance monitoring. Learn more about how Experian Decision Analytics can help you with your custom model development needs.
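As a rough sketch of the in-time validation workflow described above, the example below makes a random 70/30 development/holdout split on synthetic data and measures predictive accuracy on the holdout. The data, model and metric are illustrative assumptions, not a prescribed setup.

```python
# Sketch of an in-time (holdout) validation: 70/30 random partition,
# fit on the development portion, measure accuracy on the holdout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

# Random 70/30 partition: development vs. in-time validation sample
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.30, random_state=1, stratify=y
)

model = LogisticRegression().fit(X_dev, y_dev)

# Predictive strength measured on data not used to develop the model;
# AUC compares predicted scores against the known performance outcome
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```

An out-of-time validation would follow the same scoring step, but with `X_val`/`y_val` drawn from a different time period rather than a random partition of the development sample.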
Data is a part of a lot of conversations in both my professional and personal life. Everything around us is creating data – whether it’s usable or not is a business case for opportunity. Think about how many times a day you access the television, your phone, iPad or computer. Have a smart fridge? More data. Drive a car? More data. It’s all around us and can help us make more informed decisions. What is exciting to me are the new techniques and technologies, like machine learning, artificial intelligence and SaaS-based applications, that are becoming more accessible to lenders for use in managing their relationships with customers. This means lenders – whether a multi-national bank, online lender, regional bank or credit union – can make better use of the data they have about their customers. Let’s look at two groups – Gen X and Millennials – who tend to be more transient than past generations. They rent, not buy. They are brand loyal but will flip quickly if the experience or their expectations aren’t met. They live out their lives on social media yet know the value of their information. We’re just now starting to get to know the next generation, Gen Z. Can you imagine making individual customer decisions at a large scale on a population with so many characteristics to consider? With machine learning and new technologies available, alternative data – such as social media, visual and video data – can become an important input to knowing when, where and what financial product to offer. And make the offer quickly! This is a stark change from the days when decisions were based on binary inputs, or rather, simple yes/no answers, and it took one to three days (or sometimes weeks) to make an offer. More and more consumers are considering nontraditional banks because they offer the personalization and speed to which consumers have become accustomed. We can thank the Amazons of the world for setting the bar high.
The reality is that lenders must evolve their systems and processes to better utilize big data and the insights that machine learning and artificial intelligence can offer at the speed of cloud-based applications. Digitization threatens to lower profits in the finance industry unless traditional banks undertake innovation initiatives centered on better serving the customer. In plain speak – banks need to innovate like a FinTech – simplify the products and create superior customer experiences. Machine learning and artificial intelligence can be a way to use data to make more informed decisions faster, deliver better experiences and distinguish your business from the next. Prior to Experian, I spent some time at a start-up before it was acquired by one of the large multi-national payment processors. Energizing is a word that comes to mind when I think back to those days. And it’s a feeling I have today at Experian. We’re taking innovation to heart – investing a lot in revolutionary technology and visionary people. The energy is buzzing and it’s an exciting place to be. As a former customer of 20 years turned employee, I’ve started to think Experian will transform the way we think about cool tech companies!
According to our recent research for the State of Alternative Credit Data, more lenders are using alternative credit data to determine if a consumer is a good or bad credit risk. In fact, when it comes to making decisions:

- More than 50% of lenders verify income, employment and assets, as well as check public records, before making a credit decision.
- 78% of lenders believe factoring in alternative data allows them to extend credit to consumers who otherwise would be declined.
- 70% of consumers are willing to provide additional financial information to a lender if it increases their chance for approval or improves their interest rate.

The alternative financial services space continues to grow with products like payday loans, rent-to-own products, short-term loans and more. By including alternative financial data, all types of lenders can explore both universe expansion and risk mitigation.

State of Alternative Credit Data
Alternative credit data. Enhanced digital credit marketing. Faster, integrated decisioning. Fraud and identity protections. The latest in technology innovation. These were the themes Craig Boundy, Experian’s CEO of North America, imparted to an audience of 800-plus Vision guests on Monday morning. “Technology, innovation and new sources of data are fusing to create an unprecedented number of new ways to solve pressing business challenges,” said Boundy. “We’re leveraging the power of data to help people and businesses thrive in the digital economy.” Main stage product demos took the shape of dark web scans, data visualization, and the latest in biometric fraud scanning. Additionally, a diverse group of breakout sessions showcased all-new technology solutions and telling stats about how the economy is faring in 2018, as well as consumer credit trends and preferences. A few interesting storylines of the day …

Regulatory

Under the Trump administration, everyone is talking about deregulation, but how far will the pendulum swing? Experian Sr. Director of Regulatory Affairs Liz Oesterle told audience members that Congress will likely pass a bill within the next few days, offering relief to small and mid-sized banks and credit unions. Under the new regulations, these smaller players will no longer have to hold as much capital to cover losses on their balance sheets, nor will they be required to have plans in place to be safely dismantled if they fail. That trigger, now set at $50 billion in assets, is expected to rise to $250 billion.

Fraud

Alex Lintner, Experian’s President of Consumer Information Services, reported there were 16.7 million identity theft victims in 2017, resulting in $16.8 billion in losses. Need more to fear? There are also a reported 323,000 new malware samples found each day. Multiple sessions touched on evolving best practices in authentication, which are quickly shifting to biometrics-based solutions.
Personally identifiable information (PII) must be strengthened. Driver’s licenses, Social Security numbers, dates of birth – these formats are no longer enough. Get ready for eye scans, as well as voice and photo recognition.

Emerging Consumers

The quest to understand the up-and-coming Millennials continues. Several noteworthy stats: 42% of Millennials said they would conduct more online transactions if there weren’t so many security hurdles to overcome. So, while businesses and lenders are trying to do more to authenticate and strengthen security, it’s a delicate balance for Millennials who still expect an easy and turnkey customer experience. Gen Z, also known as Centennials, is now the largest generation, at 28% of the population. While they are just coming onto the credit scene, these digital natives will shape it for decades to come. More than ever, think mobile-first. And consider this … it’s estimated that 25% of shopping malls will be closed within five years. Gen Z isn’t shopping the mall scene. Retail is changing rapidly!

Economy

Mortgage originations are trending up. Consumer confidence, investor confidence, interest rates and home sales are all positive. Unemployment remains low. Bankcard originations have now surpassed the 2007 peak. Experian’s Vice President of Analytics Michele Raneri had glowing remarks on the U.S. economy, with all signs pointing to a positive 2018 across the board. Small business loan volumes are also up 10% year-to-date versus the same time last year. Keynote presenters speculate there could be three to four rate hikes within the year, but after years of no hikes, it’s time.

Data

There are 2.5 quintillion pieces of data created daily. And 80% of what we know about a consumer today is the result of data generated within the past year. While there is no denying there is a LOT of data, presenters throughout the day talked about the importance of access and speed.
Value comes from more APIs to connect seamlessly, as well as data visualization solutions like Tableau that make the data easier to understand. More Vision news to come. Gain insights and news throughout the day by following #ExperianVision on Twitter.
The traditional credit score has ruled the financial services space for decades, but it’s clear the way in which consumers manage their money and credit has evolved. Today’s consumers are utilizing different types of credit via various channels. Think fintech. Think short-term loans. Think check-cashing services and payday loans. So, how do lenders gain more visibility into a consumer’s creditworthiness in 2018? Alternative credit data has surfaced to provide a more holistic view of all consumers – those on the traditional file as well as those who are credit invisible or emerging. In an all-new report, “The State of Alternative Credit Data,” Experian provides in-depth coverage of how alternative credit data is defined, regulatory implications, consumer personas attached to the alternative financial services industry, and how this data complements traditional credit data files. “Alternative credit data can take the shape of alternative finance data, rental, utility and telecom payments, and various other data sources,” said Paul DeSaulniers, Experian’s Senior Director of Risk Scoring and Trended/Alternative Data and Attributes. “What we’ve seen is that when this data becomes visible to a lender, suddenly a much more comprehensive consumer profile is formed. In some instances, this helps them offer consumers new credit opportunities, and in other cases it might illuminate risk.” In a national Experian survey, 53% of consumers said they believe some of these alternative sources – like utility bill payment history, savings and checking account transactions, and mobile phone payments – would have a positive effect on their credit score. Of the lenders surveyed, 80% said they rely on a credit report plus additional information when making a lending decision. They cited assessing a consumer’s ability to pay, underwriting insights and being able to expand their lending universe as the top three benefits of using alternative credit data.
The paper goes on to show how layering in alternative finance data could allow lenders to identify the consumers they would like to target, as well as suppress those that are higher risk. “Additional data fields prove to deliver a more complete view of today’s credit consumer,” said DeSaulniers. “For the credit invisible, the data can show lenders should take a chance on them. They may suddenly see a steady payment behavior that indicates they are worthy of expanded credit opportunities.” An “unscoreable” individual is not necessarily a high credit risk — rather they are an unknown credit risk. Many of these individuals pay rent on time and in full each month and could be great candidates for traditional credit. They just don’t have a credit history yet. The in-depth report also explores the future of alternative credit data. With more than 90 percent of the data in the world having been generated in just the past five years, there is no doubt more data sources will emerge in the coming years. Not all will make sense in assessing credit decisions, but there will definitely be new ways to capture consumer-permissioned data to benefit both consumer and lender. Read Full Report
Marketers are keenly aware of how important it is to “Know thy customer.” Yet customer knowledge isn’t restricted to the marketing-savvy. It’s also essential to credit risk managers and model developers. Identifying and separating customers into distinct groups based on various types of behavior is foundational to building effective custom models. This integral part of custom model development is known as segmentation analysis. Segmentation is the process of dividing customers or prospects into groupings based on similar behaviors, such as length of time as a customer or payment patterns like credit card revolvers versus transactors. The more homogeneous the customer grouping, the less behavioral variation each segment’s custom model must account for. So how many scorecards are needed to aptly score and mitigate credit risk? There are several general principles we’ve learned over the course of developing hundreds of models that help determine whether multiple scorecards are warranted and, if so, how many. A robust segmentation analysis contains two components. The first is the generation of potential segments, and the second is the evaluation of those segments. Here I’ll discuss the generation of potential segments within a segmentation scheme. A second blog post will continue with a discussion of the evaluation of those segments. When generating a customer segmentation scheme, several approaches are worth considering: heuristic, empirical and combined. A heuristic approach considers business learnings obtained through trial and error or experimental design. Portfolio managers will have insight into how segments of their portfolio behave differently that can, and often should, be included within a segmentation analysis. An empirical approach is data-driven and involves the use of quantitative techniques to evaluate potential customer segmentation splits.
During this approach, statistical analysis is performed to identify distinct forms of behavior across the customer population. When different segments of the overall population interact differently with the candidate predictor variables, those variables will exhibit different predictive patterns across segments, signifying that separate segment scorecards will be beneficial. Finally, a combined approach considers both the business needs and the data-driven results. Once the set of potential customer segments has been identified, the next step in a segmentation analysis is the evaluation of those segments. Stay tuned as we look further into this topic. Learn more about how Experian Decision Analytics can help you with your segmentation or custom model development needs.
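To make the empirical approach concrete, here is a minimal sketch in Python. The data, the revolver/transactor split and the 10-point gap threshold are all invented for illustration and are not Experian's methodology; the idea is simply that a candidate split whose segments show materially different bad rates is worth carrying forward into the evaluation phase.

```python
# Hypothetical data: each record is (is_revolver, defaulted). A "revolver"
# carries a balance month to month; a "transactor" pays in full.
customers = [
    (True, 1), (True, 0), (True, 1), (True, 1), (True, 0),
    (False, 0), (False, 0), (False, 1), (False, 0), (False, 0),
]

def bad_rate(records):
    """Share of customers in the group that defaulted."""
    return sum(defaulted for _, defaulted in records) / len(records)

revolvers = [r for r in customers if r[0]]
transactors = [r for r in customers if not r[0]]

# A large gap in bad rates between the candidate segments suggests the
# split captures genuinely different behavior. The 0.10 threshold is an
# invented heuristic, not a standard.
rate_gap = abs(bad_rate(revolvers) - bad_rate(transactors))
is_candidate_segment = rate_gap > 0.10
```

In a real analysis this comparison would be run over many candidate splits and predictor variables, which is exactly where the combined heuristic-plus-empirical approach earns its keep.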
In the credit game, the space is deep and diverse. From super prime to prime to subprime consumers, there is much to be learned about how different segments are utilizing credit and navigating the financial services arena. With 78 percent of full-time workers saying they live paycheck-to-paycheck and 71 percent of U.S. workers responding that they live in debt, it is not surprising a sudden life event can plunge a solid credit consumer from prime to subprime within months. Think lost job, divorce or unexpected medical bill. This population is not going away, and they are seeking ways to make ends meet and obtain financing for needs big and small. In many instances, alternative credit data can shed light on new opportunities for traditional lenders, fintech players and those in the alternative financial space when servicing this specific consumer segment. In a new study, Clarity analyzed the trends and financial behavior of subprime loan users by looking at application and loan data in Clarity’s database, as well as overlaying VantageScore insights from Experian from 2013 to 2017. Clarity conducted this subprime trends report last year, but this is the first time it factored in VantageScore data, providing a different lens on where consumers fall within the credit score tiers. Among the study highlights: Storefront single-pay loan customers are becoming more comfortable applying for online loans, with a growing percentage seeking installment products. For the first time in five years, online single-pay lending (payday) saw a reduction in total credit utilization per customer. Online installment, on the other hand, saw an increase. While the number of online installment loans increased by 12 percent and the number of borrowers by only 9 percent, the dollar value grew by 30 percent. Online installment lenders had the greatest percentage increase in average loan amount.
California and Texas remain the most significant markets for online lenders, ranking first and second for five years in a row due to population size. There has also been growth in the Midwest. The in-depth report additionally delves into demographics, indicators of financial stability among the subprime market, and comparisons between storefront and online product use and performance. “Every year, there are more financial lenders and products emerging to serve this population,” said Andy Sheehan, president of Clarity Services. “It’s important to understand the trends and data associated with these individuals and how they are maneuvering throughout the credit spectrum. As we know, it is often not a linear journey.” The inclusion of VantageScore data showcased additional findings around prime versus subprime financial behaviors and generational trends. Access Full Report
Traditional credit attributes provide immense value for lenders when making decisions, but when used alone, they are limited to capturing credit behavior at a single moment in time. To add a deeper layer of insight, Experian today unveiled new trended attributes, aimed at giving lenders a wider view into consumer credit behavior and patterns over time. Ultimately, this helps them expand into new risk segments and better tailor credit offers to meet consumer needs. An Experian analysis shows that custom models developed using Trended 3D™ attributes provide up to a 7 percent lift in predictive performance when compared with models developed using traditional attributes only. “While trended data has been shown to provide additional insight into a consumer’s credit behavior, lack of standardization across different providers has made it a challenge to gain those insights,” said Steve Platt, Experian’s Group President of Decision Analytics and Data Quality. “Trended 3D makes it easy for our clients to get value from trended data in a consistent manner, so they can make more informed decisions across the credit life cycle and, more importantly, give consumers better access to lending options.” Experian’s Trended 3D attributes help lenders unlock valuable insights hidden within credit reports. For example, two people may have similar balances, utilization and risk scores, but their paths to that point may be substantially different. The solution synthesizes a 24-month history of five key credit report fields — balance, credit limit or original loan amount, scheduled payment amount, actual payment amount and last payment date.
Lenders can gain insight into:

- Changes in balances over time
- Migration patterns from one tradeline, or multiple tradelines, to another
- Variations in utilization and credit limits
- Changes in payment activity and collections
- Balance transfer and debt consolidation behavior
- Behavior patterns of revolving trades versus transactional trades

Additionally, Trended 3D leverages machine learning techniques to evaluate behavioral data and recognize patterns that previously may have gone undetected. To learn more about Experian’s Trended 3D attributes, click here.
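A toy calculation illustrates why a payment history reveals more than a snapshot. This is a minimal sketch with invented values and attribute definitions, not the actual Trended 3D attribute formulas: from 24 months of balance and payment fields, two simple statistics distinguish a consumer steadily paying down debt from one whose current balance alone would look identical.

```python
# Hypothetical 24-month tradeline history, oldest month first:
# a balance being paid down by $40 per month.
balances = [1200 - 40 * m for m in range(24)]
scheduled_payments = [100] * 24
actual_payments = [150] * 24

# Trend in balance: average month-over-month change across the window.
avg_balance_change = (balances[-1] - balances[0]) / (len(balances) - 1)

# Payment behavior: how actual payments compare with scheduled payments,
# averaged over the 24 months.
payment_ratio = sum(a / s for a, s in
                    zip(actual_payments, scheduled_payments)) / len(balances)

# A falling balance plus above-scheduled payments describes a consumer
# steadily paying down debt -- a pattern a single month's balance and
# utilization figures cannot reveal.
is_paying_down = avg_balance_change < 0 and payment_ratio >= 1.0
```

The same ending balance reached by running the balance up while paying only the minimum would produce a positive balance trend and a payment ratio below 1.0, flagging a very different risk profile.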
Expert offers insights into turnkey big data access

The data is out there – and there is a lot of it. In the world of credit, there are more than 220 million credit-active consumers. Bolt on insights from the alternative financial services space and that number climbs even higher. So, what can analysts do with this information? With technology and the rise of data scientists, there are certainly opportunities to dig in and explore. To learn more, we chatted with Chris Fricks, the data and product expert responsible for Experian’s Analytical Sandbox™.

1. With the launch of Experian’s all-new Ascend platform, one of the key benefits is full-file access to our Sandbox environment. What exactly can clients access, and are there specific tools they need to dig into the data?

Clients will have access to monthly snapshots of 12-plus years of the full suite of Experian scores, attributes and raw credit data covering the full national consumer base. Along with the data access, clients can interact with and manipulate the data using the analytic tools they prefer. For example, a client can log into the environment through a standard Citrix portal and land on a Windows desktop. From there, they can access applications like SAS, R, Python or Tableau to interrogate the data assets and derive the necessary value.

2. How are clients benefiting from this access? What are the top use cases you are seeing?

Clients are now able to speed analytic findings to market and iterate through the analytics lifecycle much faster. We are seeing clients engage in new model development, reject inferencing and industry/peer benchmarking. One of the more advanced use cases is related to machine learning – think of artificial intelligence for data analytics. In this instance, we have tools like H2O, a robust source of data for users to draw on, and a platform that is optimized to bring it all together in a cohesive, easy-to-use manner.

3. Our Experian database has details on 220 million credit-active consumers. Is this data anonymized, and how are we ensuring sensitive details are secure?

We use the data from our credit database, but we’ve assigned unique consumer-level and trade-level encrypted PINs to ensure security. Once the encrypted PINs are assigned, they remain the same over time. Then all PII is scrubbed and everything is rendered de-identified from an individual consumer and lender perspective. Our pinning technique allows users to accurately track individual trades and consumers through time, but also prevents any match back to individual consumers and lenders.

4. I imagine having access to so much data could be overwhelming for clients. Is more necessarily better?

You’re right. Access to our full credit file can be a lot to handle. While general users will not “actively” use the full file daily, statisticians and data scientists will see an advantage in having access to the larger universe. For example, if a statistician only has access to 10% of the Sandbox and wants to look at a specific region of the country, they may find themselves with so little data that it is no longer statistically significant. By accessing the full file, they can sample down from the full population of the region they want to analyze.

5. Who are the best-suited individuals to dig into the Sandbox environment and assess trends and findings?

The environment is designed to serve the front-line analysts responsible for the coding and analytics that get reported out to various levels of leadership. It also enables the socialization of those findings with leadership, helping them interact and give feedback on what they are seeing.

Learn more about Experian’s Analytical Sandbox and request a demo.
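The pinning behavior described above — the same consumer always mapping to the same opaque PIN, with no way to match back — can be illustrated with keyed hashing, a common de-identification technique. This is a hypothetical sketch only; the interview does not disclose Experian's actual mechanism, and the key, truncation length and IDs here are invented.

```python
import hashlib
import hmac

SECRET_KEY = b"example-secret"  # hypothetical; in practice a closely held secret

def encrypted_pin(consumer_id: str) -> str:
    """Map a consumer ID to a stable, opaque PIN.

    The same ID always yields the same PIN, so trades and consumers can
    be tracked across monthly snapshots; without SECRET_KEY the PIN
    cannot feasibly be matched back to the underlying ID.
    """
    digest = hmac.new(SECRET_KEY, consumer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

A keyed construction (HMAC) rather than a plain hash matters here: with a bare hash, anyone holding a list of candidate Is could recompute and match PINs, whereas the secret key confines that ability to the key holder.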