Answering the Right Question
The first and simplest test of a risk management method is whether it answers the relevant question: “Where and how much do we reduce risk, and at what cost?” A method that answers this explicitly and specifically passes the test. A method that leaves the question open does not, and many will not.
For example, simply providing a list of a firm's top ten risks, or classifying risks as high, medium, or low, doesn't close the loop. Certainly, this is a necessary early step of any risk management method. I have sometimes heard that such a method is useful if only because it helps to start the conversation. Yes, that may be useful, but if it stops there, the heavy lifting is still left undone. Consider an architectural firm that provides a list of important features for a new building, such as “large boardroom” and “nice open entryway with a fountain,” and then walks away without producing detailed plans, much less constructing the building. Such a list is a starting point, but it falls far short of a usable plan, let alone detailed blueprints or a finished building.
Relevant risk management should be based on risk assessment that follows through to explicit recommendations on decisions. Should an organization spend $2 million to reduce its second-largest risk by half, or spend the same amount to eliminate three risks that aren't among its five biggest? Ideally, risk mitigation can be evaluated as a kind of “return on mitigation” so that mitigation strategies with different costs can be prioritized explicitly. Merely knowing that some risks are high and others low is not as useful as knowing that one mitigation has a 230 percent return on investment (ROI) while another has only a 5 percent ROI, or whether the total risks are within our risk tolerance.
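To make the “return on mitigation” arithmetic concrete, here is a minimal sketch in Python. The mitigation names, costs, and loss figures are hypothetical illustrations, not figures from any actual analysis, and the formula shown, (expected loss reduction − cost) / cost, is one simple way to express such a return, not the only one.

```python
# A minimal sketch of a "return on mitigation" calculation.
# All names and figures are hypothetical illustrations, not data
# from any actual analysis.

mitigations = [
    # (name, cost, expected annual loss before, expected annual loss after)
    ("Halve the second-largest risk", 2_000_000, 13_200_000, 6_600_000),
    ("Eliminate three smaller risks", 2_000_000, 5_200_000, 0),
    ("Minor control upgrade", 400_000, 500_000, 80_000),
]

def return_on_mitigation(cost, loss_before, loss_after):
    """Net expected loss reduction per dollar spent."""
    return (loss_before - loss_after - cost) / cost

# Rank mitigations by their return, highest first.
ranked = sorted(
    mitigations,
    key=lambda m: return_on_mitigation(m[1], m[2], m[3]),
    reverse=True,
)

for name, cost, before, after in ranked:
    print(f"{name}: ROI = {return_on_mitigation(cost, before, after):.0%}")
```

Run as written, this ranks the three hypothetical mitigations at 230 percent, 160 percent, and 5 percent ROI, which is exactly the kind of explicit comparison the text argues for.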
WHAT WE MAY FIND
We will spend some time on several of the previously mentioned methods of assessing performance, but we will spend a greater share of our time on component testing. This is due, in part, to the extensive research on the performance of individual components, such as methods for improving subjective estimates, quantitative modeling techniques, simulations, the aggregation of expert opinion, and more.
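As a small illustration of what a testable component looks like, here is a minimal sketch of one such component: mathematical aggregation of expert probability estimates by simple averaging, scored with a Brier score. The experts, probabilities, and outcome are hypothetical; the point is only that this kind of component can be measured against real outcomes, unlike an unaided discussion.

```python
# A minimal sketch of one testable component: mathematical aggregation
# of expert probability estimates by simple averaging, scored with a
# Brier score. The experts, probabilities, and outcome are hypothetical.

experts = {"A": 0.70, "B": 0.55, "C": 0.80}  # each expert's P(event)

# Simple mathematical aggregation: the unweighted average.
aggregate = sum(experts.values()) / len(experts)

def brier(prob, outcome):
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (prob - outcome) ** 2

outcome = 1  # suppose the event occurred
print(f"Aggregated probability: {aggregate:.2f}")
print(f"Brier score of aggregate: {brier(aggregate, outcome):.3f}")
for name, p in experts.items():
    print(f"Brier score of expert {name}: {brier(p, outcome):.3f}")
```

Repeated over many forecasts, a lower average Brier score for the aggregate than for the individual experts would be objective evidence that the aggregation component works.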
Still, even if risk managers apply only component testing to their risk management process, many are likely to find serious shortcomings in their current approach. Many components of popular risk management methods have no evidence that they work, and some show clear evidence of adding error. Other components, though not widely used, can be shown to produce convincing improvements over the alternatives.
Lacking real evidence of effectiveness, some practitioners will fall back on the previously mentioned defenses. We will address at least some of those arguments in subsequent chapters and show how the same arguments could have been used to make the case for the “validity” of astrology, numerology, or crystal healing. When managers can begin to differentiate the astrology from the astronomy, they can begin to adopt methods that work.
RISK MANAGEMENT SUCCESS-FAILURE SPECTRUM
1. Best. The firm builds quantitative models to run simulations; all inputs are validated with proven statistical methods, additional empirical measurements are used when optimal, and portfolio analysis of risk and return is used. Always skeptical of any model, the modelers check it against reality and continue to improve the risk models with objective measures of risk. Efforts are made to systematically identify all of the firm's risks. (A minimal simulation sketch appears at the end of this section.)
2. Better. Quantitative models are built using at least some proven components; the scope of risk management expands to include more of the firm's risks.
3. Baseline. The intuition of management drives assessment and mitigation strategies. No formal risk management is attempted.
4. Worse (the merely useless). Detailed soft or scoring methods are used, or perhaps quantitative methods are misapplied, but at least management does not rely on them. This may be no worse than the baseline, except that time and money were wasted on it.
5. Worst (the worse than useless). Ineffective methods are used with great confidence even though they add error to the evaluation. Much effort may be spent on seemingly sophisticated methods, but there is still no objective, measurable evidence that they improve on intuition. These “sophisticated” methods are far worse than doing nothing or merely wasting money on ineffectual methods: they cause erroneous decisions that would not otherwise have been made.
A firm that conducts an honest evaluation of itself using the prescribed methods will find it falls somewhere along a spectrum of success and failure. Based on the standards I've described for the success of risk management, the reader has probably already figured out that I believe the solution to be based on the more sophisticated, quantitative methods. You may not yet be convinced that such methods are best or that they are even practical. We'll get to that later. For now, let's look at the proposed success/failure spectrum. (See the Risk Management Success-Failure Spectrum box above.)
Note that in this spectrum, doing nothing about risk management is not actually the worst case; it sits in the middle of the list. Firms invoking the infamous “at least I am doing something” defense of their risk management process are likely to fare worse. The worst thing to do is to adopt an unproven method, however sophisticated it seems, and act on it with high confidence.
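To illustrate the kind of quantitative model the “Best” tier of the spectrum describes, here is a minimal Monte Carlo sketch of total annual loss across a small portfolio of risks. The probabilities, loss ranges, and uniform loss model are hypothetical placeholders; a real model would use validated, calibrated inputs and richer distributions.

```python
# A minimal Monte Carlo sketch of total annual loss across a small
# portfolio of risks. Probabilities, loss ranges, and the uniform loss
# model are hypothetical placeholders; a real model would use validated,
# calibrated inputs and richer distributions.

import random

RISKS = [
    # (annual probability of occurrence, (low, high) loss if it occurs)
    (0.10, (1_000_000, 8_000_000)),
    (0.25, (200_000, 2_000_000)),
    (0.05, (5_000_000, 20_000_000)),
]

def simulate_year():
    """One simulated year: sum the losses of whichever risks occur."""
    total = 0.0
    for prob, (low, high) in RISKS:
        if random.random() < prob:
            total += random.uniform(low, high)
    return total

trials = sorted(simulate_year() for _ in range(100_000))
mean_loss = sum(trials) / len(trials)
p95 = trials[int(0.95 * len(trials))]

print(f"Expected annual loss: ${mean_loss:,.0f}")
print(f"95th-percentile annual loss: ${p95:,.0f}")
# Comparing this distribution to an explicit risk tolerance is what
# turns the model into decisions about which mitigations to fund.
```

Comparing the simulated loss distribution to an explicit risk tolerance, and rerunning it with and without a proposed mitigation, is what connects this kind of model to the “return on mitigation” question posed earlier.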