
On the positive side, it may drive people to improve their payment and other credit habits to get better scores. On the negative side, this may also lead to manipulation. The usage of robust bureau data will mitigate some of the risk, while the usage of unreliable social media or demographics data may not.

      The ever-present discussion on newer, better algorithms will continue. Our quest to explain data better, and to differentiate useful information from noise, has been going on for decades and will likely go on for decades more. The current hot topic is machine learning. Whether it or other, more complex algorithms replace the simpler ones in use in credit scoring will depend on many factors (this topic will also be dealt with in the later chapter on vendor model validation). Banks overwhelmingly select logistic regression, scorecards, and other such methods for credit scoring based on their openness, simplicity, and ease of compliance. Complex algorithms will become more popular for nonlending and nonregulatory modeling, but there will need to be a change in regulatory and model validation mind-sets before they become widely acceptable for regulatory models.

      The credit crisis of 2008 has been widely discussed and dissected by many others. Let us first recognize that it was a complex event with many causes. Access to cheap money, a housing bubble in many places, teaser rates to subprime borrowers, lack of transparency around models, distorted incentives for frontline staff, unrealistic ratings for mortgage-backed securities, greed, fraud, and the use of self-declared (i.e., unconfirmed) incomes have all been cited.11 Generally, I consider it a failure both of bankers to exercise the basic rules of banking and of risk management to manage risks. Some have even suggested that models and scorecards are to blame. This is not quite accurate and reflects a failure to understand the nature of models. As we will cover in this book, models are built on many underlying assumptions, and their use involves just as many caveats. Models are not perfect, nor are they 100 percent accurate for all times. All models describe historical data – hence the critical need to adjust expectations based on future economic cycles. The amount of confidence in any model or scorecard must be based on both the quality and quantity of the underlying data, and decision-making strategies adjusted accordingly. Models are very useful when used judiciously, along with policy rules and judgment, recognizing both their strengths and weaknesses. The most accurate model in the world will not help if a bank chooses not to confirm any information from credit applicants or to verify identities. As such, one needs to be very realistic when it comes to using scorecards/models, and not place an unjustified level of trust in them.

      “… too many financial institutions and investors simply outsourced their risk management. Rather than undertake their own analysis, they relied on the rating agencies to do the essential work of risk analysis for them.”

– Lloyd Blankfein, CEO Goldman Sachs (Financial Times, February 8, 2009)

      Chapter 2

      Scorecard Development: The People and the Process

      “Talent wins games, but teamwork and intelligence wins championships.”

– Michael Jordan

      Many years ago, I developed a set of scorecards for the risk management department of a bank. The data sent to us by the risk folks was great, and we built a good scorecard with about 14 reasonable variables. About two weeks after delivering the scorecard, we got a call from the customer. Apparently, two of the variables that they had sent to us in the data set were not usable, and we needed to take them out. I have had bankers tell me stories of changing scorecard variables because information technology (IT) gave them estimates of three to four months to code up a new derived variable. IT folks, however, tell me they hate to be surprised by last-minute requests to implement new scorecards or new derived variables that cannot be handled by their systems. Almost every bank I’ve advised has had occasions where the variables desired or expected by the risk manager could not be included in the models, where models built could not be used because other stakeholders would not agree to them, or where other surprises lay waiting months after the actual work was done.

      These are some of the things that cause problems during scorecard development and implementation projects. In order to prevent such problems, the process of scorecard development needs to be a collaborative one between IT, risk management (strategy and policy), modeling, validation, and operational staff. This collaboration not only creates better scorecards, it also ensures that the solutions are consistent with business direction, prevents surprises, and enables education and knowledge transfer during the development process. Scorecard development is not a “black box” process and should not be treated as such. Experience has shown that developing scorecards in isolation can lead to problems such as the inclusion of characteristics that are no longer collected, are legally suspect, or are difficult to collect operationally; the exclusion of operationally critical variables; and the devising of strategies that result in “surprises” or are unimplementable. In fact, since the credit crisis of 2007–2008, the tolerance at most banks for complex/black box models and processes is gone. The business user expects a model that can be understood, justified, and, where necessary, tweaked based on business considerations, as well as an open and transparent process that can be controlled.

      In this chapter, we will look at the various personas that should be involved in a scorecard development and implementation project. The level of involvement of staff members varies, and different staff members are required at various key stages of the process. By understanding the types of resources required for a successful scorecard development and implementation project, one will also start to appreciate the business and operational considerations that go into such projects.

      Scorecard Development Roles

      At a minimum, the following main participants are required.

      Scorecard Developer

      The scorecard developer is the person who performs the statistical analyses needed to develop scorecards. This person usually has:

      ● Some business knowledge of the products/tasks for which models are being developed. For example, if someone is responsible for building models for an auto loan product or a mobile phone account, they should be familiar with the car-selling business or the cell phone/telco business. Similarly, a person building scorecards for collections needs to understand the collections process. This is to make sure that they understand the data and can interpret it properly in the context of each subject. This would include knowing which types of variables are generally considered important for each product, how decisions and data collection at source impact quality, and how the model will be used for decision making.

      ● An in-depth knowledge of the various databases in the company and the data sets being used. The single most important factor in determining the quality of the model is the quality of the data. When the users understand the quirks in the data, where and how the data was generated, deficiencies, biases, and interpretation of the data, they will be able to conduct intelligent analysis of that data. Otherwise, their analysis will be devoid of context. This task may also be covered by someone other than the scorecard developer – for example, a data scientist playing an advisory role.

      ● An in-depth understanding of statistical principles, in particular those related to predictive modeling. For example, knowledge of logistic regression, fit statistics, multicollinearity, decision trees, and so on.

      ● A good understanding of the legal and regulatory requirements of models and of the model development process. This includes documentation requirements, transparency, and any laws that control the usage of certain information. For example, in many countries the use of gender, marital status, race, ethnicity, nationality, and the like is prohibited. They would also need to know the requirements expected by internal model validation teams so that minimum standards of model governance are met. Detailed knowledge of this subject usually resides with model validation groups.

      ● Business experience in the implementation and usage of risk models. This is related to the business knowledge of the product. If analysts understand the end use of the model, it enables them to develop the model best suited for that task. The analyst will not develop a model that merely meets statistical acceptance tests.
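To make the statistical side of the role concrete, here is a minimal sketch of the kind of work described above: fitting a logistic regression on synthetic applicant data and converting the log-odds into scorecard points. Everything here is an illustrative assumption, not taken from this book's methodology: the two input variables, the coefficients used to generate labels, and the scaling choices (600 points at 50:1 good/bad odds, 20 points to double the odds) are all placeholders.

```python
import math
import random

random.seed(42)

# Synthetic applicant data: two numeric characteristics (imagine binned,
# weight-of-evidence-transformed inputs) and a 0/1 default flag.
n = 2000
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]

def true_logit(x):
    # Assumed relationship, used only to generate synthetic labels.
    return -1.0 + 1.5 * x[0] - 0.8 * x[1]

y = [1 if random.random() < 1 / (1 + math.exp(-true_logit(x))) else 0
     for x in X]

# Plain batch gradient descent for logistic regression (no libraries),
# predicting the probability of default (the "bad" outcome).
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(b + w[0] * xi[0] + w[1] * xi[1])))
        err = p - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

# Scale log-odds into points: 600 points at 50:1 odds of being good,
# 20 points to double the odds (PDO). These anchors are arbitrary.
pdo = 20
factor = pdo / math.log(2)
offset = 600 - factor * math.log(50)

def score(x):
    # Log-odds of being good is the negative of the default log-odds.
    good_log_odds = -(b + w[0] * x[0] + w[1] * x[1])
    return offset + factor * good_log_odds

print(round(score([0.0, 0.0])))
```

The sketch recovers the signs of the generating coefficients and produces a familiar three-digit score; in practice this fitting step sits inside the much larger collaborative process the rest of this chapter describes.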

      This person ensures that data is collected according to specifications,



11. www.forbes.com/sites/stevedenning/2011/11/22/5086/#c333bf95b560