These are examples of the Dunning-Kruger effect, the tendency for the least competent in a given area to also be the most overconfident in their own capabilities. I first mentioned this phenomenon in the first edition of this book, but it has become more widely recognized (if a bit overused) since then. It comes from the work of Cornell psychologists Justin Kruger and David Dunning, who published their research on self-assessments in the harshly titled article, “Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments.”14 They show that about two-thirds of the entire population will rate themselves as better than most in reasoning skills, humor, and grammar. Although the studies I just mentioned are not focused on management, if you think that C-level managers and trained business professionals are more realistic and their confidence is more justified, wait until you read part 2 of the book.
There is no reason to believe risk management avoids the same problems regarding self-assessment. As we saw in the surveys, any attempt to measure risk management at all is rare. Without measurements, self-assessments of the effectiveness of risk management are unreliable, given the effect of the analysis placebo, the low-validity problem described by Kahneman and Klein, and the Dunning-Kruger effect.
There is an old management adage that says, “You can't manage what you can't measure.” (This is often misattributed to W. E. Deming, but it is a truism nonetheless.) Management guru Peter Drucker considered measurement to be the “fourth basic element in the work of the manager.” Because the key objective of risk management (risk reduction, or at least minimized risk for a given opportunity) may not be obvious to the naked eye, only deliberate measurement can detect it. The only way organizations could be justified in believing they are “very effective” at risk management is if they have measured it.
Risk professionals from Protiviti and Aon (two of the firms that conducted the surveys in chapter 2) also have their suspicions about the self-assessments in surveys. Jim DeLoach, a Protiviti managing director, states that “the number of organizations saying they were ‘very effective’ at managing risks was much higher than we expected.” Recall that in the Protiviti survey 57 percent of respondents said they quantify risks “to the fullest extent possible” (it was slightly higher, 63 percent, for those that rated themselves “very effective” at risk management). Yet this is not what DeLoach observes first-hand when he examines risk management in various organizations: “Our experience is that most firms aren't quantifying risks … I just have a hard time believing they are quantifying risks as they reported.”
Christopher (Kip) Bohn, an actuary, fellow of the Casualty Actuarial Society, and formerly a director at Aon Global Risk Consulting, is equally cautious about survey findings on how common quantitative methods are in risk management. Bohn observes, “For most organizations, the state of the art is a qualitative analysis. They do surveys and workshops and get a list of risks on the board. They come up with a ranking system with a frequency and impact, each valued on a scale of, for example, one to five.” This is not exactly what an actuary like Bohn considers to be quantitative analysis of risk.
My own experience also agrees more with the personal observations of DeLoach and Bohn than with the results of the self-assessment surveys. I treat the results of the HDR/KPMG survey as perhaps an upper bound on the adoption of quantitative methods. Whenever I give a speech about risk management to a large group of managers, I ask those who have a defined approach for managing risks to raise their hands. A lot of hands go up, maybe half on average. I then ask them to keep their hands up only if they measure risks. Many of the hands go down. Then I ask them to keep their hands up only if probabilities are used in their measurements of risks (note how essential this is, given the definition of risk we stated). More hands go down and maybe one or two remain up. Then I ask them to keep their hands up if they think their measures of probabilities and losses are in any way based on statistical analysis or methods used in actuarial science. After that, all the hands are down. It's not that the methods I'm proposing are impractical. I have used them routinely on a variety of problems. (I'll argue in more detail later against the myth that such methods aren't practical.)
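To make the contrast concrete, here is a minimal sketch of what a probability-based measurement of risk might look like, as opposed to a one-to-five scoring exercise. The risk events, probabilities, and loss ranges below are purely hypothetical, and the simple Monte Carlo simulation shown is only one illustrative way to set this up, not the specific method of any firm mentioned in this chapter:

```python
import math
import random

# Hypothetical risk register: each entry has an annual probability of
# occurrence and a 90% confidence interval for the loss if it occurs.
# The events, probabilities, and dollar ranges are invented for illustration.
risks = [
    {"name": "major data breach",    "p": 0.10, "low": 200_000, "high": 5_000_000},
    {"name": "key supplier failure", "p": 0.25, "low": 50_000,  "high": 1_000_000},
    {"name": "regulatory fine",      "p": 0.05, "low": 100_000, "high": 2_000_000},
]

def sample_loss(low, high):
    """Draw one loss from a lognormal distribution fit to a 90% CI (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans ~3.29 sigma
    return random.lognormvariate(mu, sigma)

def simulate_year():
    """Total loss across all risks for one simulated year."""
    total = 0.0
    for r in risks:
        if random.random() < r["p"]:  # does this event occur in this year?
            total += sample_loss(r["low"], r["high"])
    return total

random.seed(42)
trials = 10_000
annual_losses = sorted(simulate_year() for _ in range(trials))

threshold = 1_000_000
prob_exceed = sum(1 for x in annual_losses if x > threshold) / trials
p95 = annual_losses[int(0.95 * trials)]

print(f"Chance annual losses exceed ${threshold:,}: {prob_exceed:.1%}")
print(f"95th percentile annual loss: ${p95:,.0f}")
```

The output of a simulation like this is a statement such as “there is an X percent chance that total losses will exceed $1 million next year,” which is the kind of result a frequency-and-impact score of one to five simply cannot express.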
Of course, some managers have argued that the standards I suggest for evaluating risk management are unfair, and they will still insist that their risk management program was a success. When asked for specifics about the evidence of that success, they produce an interesting array of defenses for whatever method they currently use. However, among these defenses are quite a few things that do not constitute evidence that a particular method is working. I have reason to believe these defenses are common, not only because I've heard them frequently but also because many were cited as benefits of risk management in the surveys by Aon, The Economist, and Protiviti.
The following are some common, but invalid, claims given as evidence that a risk management process is successful:
When asked, the managers will say that the other stakeholders involved in the process consider the effort a success. They may even have conducted a formal internal survey. But, as the previous studies show, self-assessments are not reliable. Furthermore, without an independent, objective measure of risk management, the perception of success may merely be a kind of placebo effect. That is, people might feel better about their situation simply because they perceive they are doing something about it.
The proponents of the method will point out that the method was “structured.” There are a lot of structured methods that are proven not to work. (Astrology, for example, is structured.)
Often, a “change in culture” is cited as a key benefit of risk management. By itself, this is not an objective of risk management, even though some of the risk management surveys show that risk managers considered it one of the main benefits of the effort. And surely the type of change matters: what good is a new culture if it doesn't actually lead to reduced risks or measurably better decisions?
The proponents will argue that the method “helped to build consensus.” This is a curiously common response, as if the consensus itself were the goal and not actually better analysis and management of risks. An exercise that builds consensus to go down a completely disastrous path probably ensures only that the organization goes down the wrong path even faster.
The proponents will claim that the underlying theory is mathematically proven. I find that, most of the time, the person making this claim cannot actually produce or explain the mathematical proof, nor could the person he or she heard it from. In many cases, it appears to be something passed on without question. Even if the method is based on a widely recognized theory, such as options theory (for which the creators were awarded the Nobel Prize in 1997) or modern portfolio theory (the Nobel Prize in 1990), it is very common for mathematically sound methods to be misapplied. (And those famous methods themselves have some important shortcomings that all risk managers should know about.)
The vendor of the method will claim that the mere fact that other organizations bought it is proof that it worked, and those organizations may themselves resort to one or more of the preceding arguments. I call this the testimonial proof. But if the previous users of the method evaluated it using criteria no better than those listed previously, then the testimonial is not evidence of effectiveness.
The final and most desperate defense is the claim, “But at least we are doing something.” I'm amazed at how often I hear this, as if it were irrelevant whether the “something” makes things better or worse. Imagine a patient complains of an earache and a doctor, unable to solve the problem, begins to saw off the patient's foot. “At least I am doing something,” the doctor says in defense.
With some exceptions (e.g., insurance, some financial management), risk management is not an evolved profession with standardized certification