The Recipe to a Strong Model Development (Part 2)

By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether you are developing a custom or a generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (predictive attributes divided into intervals) or continuous (each variable modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (a hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally which statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines how quickly and how easily it can be put into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model is critical for successful implementation and for model audits.

Scorecard monitoring needs to be put into place once the model is developed, implemented, and in use. It evaluates population stability, scorecard performance, and decision management to ensure that the model performs as expected over time. If results deviate from initial expectations, scorecard monitoring allows for immediate adjustments to strategies.

With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
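To make the “taste test” a little more concrete, here is a minimal, illustrative sketch of three of the quantities mentioned above: the KS statistic and Gini coefficient for measuring scorecard performance on a validation sample, and a Population Stability Index (PSI) of the kind used in scorecard monitoring. This is not part of the original post and not any vendor’s actual implementation; the function names, the higher-score-means-lower-risk convention, and the decile binning are assumptions made for the example.

```python
# Illustrative sketch only (assumed conventions, not a production implementation):
# KS and Gini summarize how well scores separate goods from bads; PSI compares a
# recent score distribution against the development (expected) distribution.
import numpy as np
from scipy.stats import rankdata


def ks_statistic(scores, bad_flag):
    """Maximum gap between the cumulative score distributions of bads and goods."""
    order = np.argsort(scores)
    bads = np.asarray(bad_flag)[order].astype(float)
    goods = 1.0 - bads
    cum_bads = np.cumsum(bads) / bads.sum()
    cum_goods = np.cumsum(goods) / goods.sum()
    return float(np.max(np.abs(cum_bads - cum_goods)))


def gini_coefficient(scores, bad_flag):
    """Gini = 2 * AUC - 1, with AUC from score ranks (higher score assumed to mean lower risk)."""
    ranks = rankdata(scores)                      # average ranks handle tied scores
    bads = np.asarray(bad_flag).astype(bool)
    n_good, n_bad = (~bads).sum(), bads.sum()
    auc = (ranks[~bads].sum() - n_good * (n_good + 1) / 2) / (n_good * n_bad)
    return float(2 * auc - 1)


def psi(expected_scores, actual_scores, n_bins=10):
    """Population Stability Index across decile bins defined on the development sample."""
    cuts = np.quantile(expected_scores, np.linspace(0, 1, n_bins + 1))[1:-1]
    e_pct = np.bincount(np.digitize(expected_scores, cuts), minlength=n_bins) / len(expected_scores)
    a_pct = np.bincount(np.digitize(actual_scores, cuts), minlength=n_bins) / len(actual_scores)
    e_pct = np.clip(e_pct, 1e-6, None)            # guard against log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In practice, `ks_statistic` and `gini_coefficient` would be run on the hold-out or out-of-time validation sample to gauge rank-ordering power, while `psi(development_scores, recent_scores)` would be tracked over time as part of monitoring; a common (though not universal) rule of thumb treats a PSI above roughly 0.25 as a population shift worth investigating.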

Published: Aug 04, 2009 by

FTC extends Red Flags Rule enforcement deadline…again.

There were always questions about whether the August 1, 2009 deadline would stick. Well, the FTC has pushed the Red Flags Rule compliance deadline out to November 1, 2009 (from the previously extended August 1, 2009 deadline). This extension is in response to pressure from Congress and, likely, from "lower risk" businesses questioning why they are covered under the Red Flags Rule in the first place (businesses in healthcare, retail, small business, etc.). Keep in mind that the FTC's extension of Red Flags Rule enforcement does not apply to address discrepancies on credit profiles; those discrepancies are expected to be worked TODAY. Risk management strategies are key to your success. To view the entire press release, visit: http://www.ftc.gov/opa/2009/07/redflag.shtm

Published: Jul 30, 2009 by


