POTENTIAL OBJECTIVE EVALUATIONS OF RISK MANAGEMENT
If self-assessments don't suffice, then what objective measures are possible for risk management? At its root, the objective measure of risk management should be based on whether and how much risk was actually reduced, or whether the risk taken was acceptable for a given payoff. To do that, the risk management method needs a way to properly assess the risks in the first place. In short, to measure the effectiveness of risk management, we have to measure risk itself.
Recall from chapter 1 that risk can be measured by the probability of an event and its severity. If we can observe an event over a long enough period of time, we can say something about how frequent it is and about the range of possible impacts. If a large retailer is trying to reduce the risk of loss due to shoplifting (an event that may occur more than a hundred times per month per store), then one inventory before the improved security efforts and another a month after would suffice to detect a change. But a risk manager isn't usually concerned with very high-frequency, low-cost events such as shoplifting.
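To see why a single month of data is enough for an event that frequent, consider a quick sketch. The theft counts and the test below are my own illustrative assumptions, not figures from the book, but they show how easily a before/after comparison on counts this large reveals a real change.

```python
# Hypothetical illustration: detecting a change in a high-frequency event.
# The counts below are assumptions, not data from the book.
from scipy.stats import norm

before = 120  # thefts per store in the month before the improved security
after = 90    # thefts per store in the month after

# Simple two-sample Poisson comparison (normal approximation on the counts):
# under "no change," the difference has variance of roughly before + after.
z = (before - after) / (before + after) ** 0.5
p_value = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # about 0.04: one month already shows the shift
```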
In a retailer such as Target or Walmart, theft is so common that it is more of a fully anticipated cost than a risk. Similarly, the “risks” of running out of 60W incandescent bulbs or of mislabeling the price on a single item are, rightly, not the types of risks foremost in the minds of risk managers. The biggest risks tend to be events that are rarer but potentially disastrous—perhaps even events that have not yet occurred in this organization.
If it is a rare event (such as many of the more serious risks organizations would hope to model), then we need a very long period of time to observe how frequent and how impactful the event may be—assuming we can even survive observing enough of these events. Suppose, for example, a major initiative is undertaken by the retailer's IT department to make point-of-sale and inventory management systems more reliable. If the chance of these systems being down for an hour or more were reduced from 10 percent per year to 5 percent per year, how would they know just by looking at the first year? And if they did happen to observe one event, and the estimated cost of that event was $5 million, how would we use that single data point to estimate the range of possible losses?
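A back-of-the-envelope power calculation makes the point. The sketch below is my own illustration, using conventional one-sided test settings rather than anything from the book, and it estimates how many years of observing a single organization it would take to tell a 10 percent annual outage probability apart from a 5 percent one.

```python
# A minimal sketch, assuming conventional test settings (alpha = 0.05, 80% power);
# these settings are my assumptions, not the book's.
from math import sqrt
from scipy.stats import norm

p0, p1 = 0.10, 0.05        # annual probability of a long outage before and after the initiative
alpha, power = 0.05, 0.80

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_years = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p0 - p1)) ** 2
print(f"years of observation needed: {n_years:.0f}")  # roughly 180 years

# In any single year, the most likely outcome under either rate is "no outage,"
# so one year of history says almost nothing about whether the risk was cut in half.
```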
Fortunately, there are ways of determining the effectiveness of risk management without simply waiting for the events you are trying to mitigate to occur so that you can measure their risks. Here are six potential measurement methods that should work even if the risks being managed are rare:
The big experiment
Direct evidence of cause and effect
Component testing
Formal errors
A check of completeness
Answering the right question
The Big Experiment
The most convincing way—and the hardest way—to measure the effectiveness of risk management is with a large-scale experiment over a long period tracking dozens or hundreds of organizations. This is still time-consuming—just as waiting for the risk events to occur in your own organization is—but it has the advantage of looking at a larger population of firms in a formal study. If risk management is supposed to, for example, reduce the risk of events that are so rare that actual results alone would be insufficient to draw conclusions, then we can't just use the short-term history of one organization. Even if improved risk management has a significant effect on reducing losses from various risks, it may take a large number of samples to be confident that the risk management is working.
To build on the previous pharmaceutical outsourcing example, imagine applying a method that pharmaceutical companies would already be very familiar with in the clinical testing of drugs. Suppose that nearly all of the major health products companies (this includes drugs, medical instruments, hospital supplies, etc.) are recruited for a major risk management experiment. Let's say, in total, that a hundred different product lines that will be outsourced to China are given one particular risk management method to use. Another hundred product lines, again from various companies, implement a different risk management method. For a period of five years, each product line uses its new method to assess risks of various outsourcing strategies. Over this period of time, the first group experiences a total of twelve events resulting in adverse health effects traced to problems related to the overseas source. During the same period, the second group has only four such events without showing a substantial increase in manufacturing costs.
Of course, it would seem unethical to subject consumers to an experiment with potentially dangerous health effects just to test different risk management methods. (Patients in drug trials are at least volunteers.) But if you could conduct a study similar to what was just described, the results would be fairly good evidence that one risk management method was much better than the other. If we did the math (which I will describe later on, along with an example on the website www.howtomeasureanything.com/riskmanagement), we would find that it would be unlikely for this result to be pure chance if, in fact, the probabilities of the events were not actually different. In both groups, there were companies that experienced unfortunate events and companies that did not, so we can infer something about the performance of the methods only by looking at the aggregation of all their experiences.
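One simple way to do that math (my own sketch; the worked example on the book's companion website may differ) is to note that, if the two methods were equally risky, each of the sixteen events would be equally likely to come from either group of one hundred product lines. The chance of a split at least as lopsided as twelve versus four is then a straightforward binomial calculation.

```python
# Hedged illustration: a conditional binomial test of the 12-vs-4 split.
# This is one reasonable way to check the result, not necessarily the book's method.
from scipy.stats import binom

events_a, events_b = 12, 4           # adverse events observed in each group
total = events_a + events_b

# Under "equal risk," each of the 16 events lands in group A with probability 0.5.
p_one_sided = binom.sf(events_a - 1, total, 0.5)   # P(12 or more of 16 in group A)
print(f"one-sided p-value: {p_one_sided:.3f}")     # about 0.04, so chance alone is unlikely
```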
Although this particular study might be unethical, there have been some large studies like it that investigated business practices. For example, in July 2003, Harvard Business Review published the results of a study involving 160 organizations to measure the effectiveness of more than two hundred popular management tools, such as TQM, ERP, and so on.15 Independent external reviews of the degree of implementation of the various management tools were then compared to shareholder return over a five-year period. In an article titled “What Really Works,” the researchers concluded, to their surprise, that “most of the management tools and techniques we studied had no direct causal relationship to superior business performance.” That would be good to know if your organization were about to make a major investment in one of these methods.
Another study, which was based on older but more relevant data, did look at alternative methods of risk management among insurance companies: a detailed analysis of the performance of insurers in mid-nineteenth-century Great Britain, when actuarial science was just emerging. Between 1844 and 1853, insurance companies were starting up and failing at a rate more familiar to Silicon Valley than to the insurance industry. During this period 149 insurance companies formed, and after it just 59 survived. The study determined that the insurance companies that were using statistical methods were more likely to stay in business (more on this study later).16 Actuarial methods that were at first considered a competitive advantage became the norm.
Again, this is the hard way to measure risk management methods. The best case for organizations would be to rely on research done by others instead of conducting their own studies—assuming they can find a relevant study. Or, as with the insurance industry study, the data may all be historical and available if you have the will to dig it all up. Fortunately, there are alternative methods of measurement.
Direct Evidence of Cause and Effect