
to classify situations into previously observed patterns, which provides a mechanism to make rapid decisions (mostly correctly) in complex or important situations. These include:

      • Optimism. The trait of optimism is regarded by many experts as an important human survival instinct, and it is generally inherent in many individual and group processes.

      • Bias to action. Management rewards (both explicit and implicit) are often based on the ability to solve problems that arise; it is much rarer for rewards to be created around inaction or the taking of preventive measures. This bias to action rather than prevention (in many management cultures) can lead to insufficient consideration of risks, which are, after all, only potential and not yet tangibly present.

      • Influence and overconfidence. This refers to a belief that we have the ability to influence events that are actually beyond our control (i.e. that are essentially random). This can lead to an overestimation of one's ability to predict the future and explain the past, or to an insufficient consideration of the consequences and side effects. A poor outcome will be blamed on bad luck, whereas a favourable one will be attributed to skill:

      • A simple example is shaking dice extra hard in an attempt to achieve certain numbers.

      • People may make rapid decisions about apparently familiar situations, whereas in fact some aspect may be new and pose significant risks.

      • Arguably, humans are reasonable at assessing the effects of, and managing, individual risks, but much less effective at assessing the combined effects when there are multiple risks or interdependencies between them, or where the behaviour of a system's (or model's) output is non-linear as its input values are altered.

      • Anchoring and confirmation. Anchoring means that the first piece of information given to someone (however misleading) tends to serve as a reference point, with future attitudes biased towards that point. New information is selectively filtered to reinforce the anchor, and is ignored or misinterpreted if it does not match the pre-existing anchor. One may also surmise that many educational systems (especially in the earlier and middle years) emphasise the development of students' ability to create a hypothesis and then defend it with logic and facts, with at best only a secondary focus on developing an enquiring mind that asks why an analysis or hypothesis may be wrong. Confirmation bias describes the tendency to focus more on finding data that confirm a view than on finding data to disprove or question it.

      • Framing. This refers to making different decisions based on the same information, depending on how the situation is presented. Typically, behaviour differs when one is faced with a gain versus a loss (one is more often risk seeking when trying to avoid losses, and risk averse when concerned with possible gains):

      • A consumer is generally more likely to purchase an item that is reduced in price from $500 to $400 (that is, to “save” $100), than to purchase the same item if it had always been listed at $400.

      • An investor may decide to retain (rather than sell) some shares after a large fall in their value, hoping that the share price will recover. However, when given a separate choice as to whether to buy additional such shares, the investor would often not do so.

      • Faced with a decision whether to continue or abandon a risky project (after some significant investment has already been made), a different decision may result depending on whether the choice is presented as: “Let's continue to invest, with the possibility of having no payback” (which is more likely to result in the project being rejected) or “We must avoid getting into a situation where the original investment was wasted” (which is more likely to result in a decision to continue).

      • Framing effects also apply in relation to the units that are used to present a problem. For example, due to a tendency to think or negotiate in round terms, a different result may be achieved if one changes the currency or units of analysis (say from $ to £ or €, or from absolute numbers to percentages).

      • Incompleteness. Historical data are inherently incomplete, as they reflect only one possible outcome of a range of possibilities that could have occurred. The consequence is that (having not observed the complete set of possible outcomes) one assumes that variability (or risk) is lower than it really is. A special case of this (a form of sampling bias) is survivorship bias (i.e. “winners” are observed but “losers” are not). For example:

      • For stock indices, where poorly performing stocks are removed from the index to be replaced by stocks that have performed well, the performance of the index is overstated compared to the true performance (which should use the original basket of stocks that made up the index, some of which may now be worthless).

      • Similarly, truly catastrophic events that could have wiped out humanity have not yet occurred. In general, there can be a failure to consider possible extremes or situations that have never occurred (but could do so in reality), specifically those associated with low-probability, large-impact events. Having said that (as discussed in Chapter 2), truly rare events, especially those that are in principle present in any project context (such as an asteroid destroying life on the planet), are generally not relevant to include in project and business risk assessments, or in management decision-making.

      • Groupthink. A well-functioning group should, in principle, be able to use its diversity of skills and experience to create a better outcome than most individuals could alone. However, very often the combination of dominant characters, hierarchical structures, an unwillingness to create conflict, or a lack of incentive to dissent or raise objections can instead lead to poorer outcomes than if decisions had been left to a reasonably competent individual. The fact that individual failure is often punished, whereas collective failure is typically not, provides a major incentive for individuals to “go with the pack” or resort to “safety in numbers” (some argue that this provides part of the explanation for “bubbles” and over-/underpricing in financial markets, even over quite long time periods).
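The survivorship bias described above (under Incompleteness) can be illustrated with a small simulation. This is a hypothetical sketch only: the fund count, the return distribution and the "survival" rule are illustrative assumptions, not taken from the text.

```python
import random

random.seed(1)

N_FUNDS, YEARS = 10_000, 10

# Every fund draws its annual return from the same distribution (mean 5%,
# standard deviation 20%), so all funds have identical true expected performance.
def cumulative_growth():
    total = 1.0
    for _ in range(YEARS):
        total *= 1 + random.gauss(0.05, 0.20)
    return total

results = [cumulative_growth() for _ in range(N_FUNDS)]

# Illustrative "survival" rule: a database that deletes closed (loss-making)
# funds reports only those that did not lose money over the period.
survivors = [r for r in results if r >= 1.0]

all_mean = sum(results) / len(results)
surv_mean = sum(survivors) / len(survivors)

print(f"Mean growth, all funds:      {all_mean:.2f}x")
print(f"Mean growth, survivors only: {surv_mean:.2f}x")
```

The survivor-only mean exceeds the all-fund mean even though every fund had the same underlying return process, mirroring the overstated index performance described above.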

      Structural biases arise where particular types of approach inherently create bias in the results, independently of psychological or motivational factors. An important example is that a static model populated with most likely input values will, in general, not show the most likely value of the true output range (the “fallacy of the most likely”, as discussed in Chapter 4). Key driving factors for this include non-symmetric distributions of uncertainty, non-linear model logic and the presence of underlying event risks that are excluded from a base assumption. The existence of such biases is an especially important reason to use risk modelling; paraphrasing Einstein, “a problem cannot be solved within the framework that created it”, and the use of probabilistic risk techniques is a key tool to overcome some of these limitations.
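The "fallacy of the most likely" can be shown numerically. The sketch below uses an illustrative multiplicative model with right-skewed triangular inputs (both assumptions chosen for demonstration, not taken from the text): evaluating the model at the most likely input values gives a result well below the simulated mean output.

```python
import random

random.seed(7)

# Two uncertain cost drivers, each right-skewed triangular:
# minimum 80, most likely 100, maximum 200.
LOW, MODE, HIGH = 80, 100, 200

def model(x, y):
    # Simple non-linear (multiplicative) model of total cost.
    return x * y / 100

# Static model: plug in the most likely value of each input.
static = model(MODE, MODE)  # 100.0

# Risk model: simulate the inputs and examine the output distribution.
sims = [model(random.triangular(LOW, HIGH, MODE),
              random.triangular(LOW, HIGH, MODE))
        for _ in range(100_000)]

mean = sum(sims) / len(sims)
p_above = sum(s > static for s in sims) / len(sims)

print(f"Static output (modal inputs): {static:.1f}")
print(f"Simulated mean output:        {mean:.1f}")
print(f"P(output > static case):      {p_above:.0%}")
```

Because the input distributions are skewed and the model is non-linear, the static case is not representative of the output's centre; most simulated outcomes exceed it.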

      1.3 Key Drivers of the Need for Formalised Risk Assessment in Business Contexts

      Generally, risk assessment will be useful where there is a significant level of investment (i.e. non-reversible commitments in money, time, resources or reputation), and where there is inherent uncertainty (as there usually is in any future situation). More specifically, the key drivers of the need for formalised risk assessment in business contexts include:

      • The complexity of typical projects.

      • The size and scale of the decisions, in terms of financial and other resource commitments.

      • The need to support the procedures required to identify and authorise mitigation actions or changes to project structures, and to assign responsibilities for executing the required measures.

      • Corporate governance requirements, both in the formal sense of specific guidelines or regulations, and in the sense of optimising executive management and decision-making, i.e. making decisions that are the best that can be made (not merely adequate) and that create some competitive advantage.

      • The frequent need to support decisions with quantified analysis.

      • The need to be able to reflect risk tolerances in decision-making and in business portfolio design, and to be able to compare projects of different risk profiles.

      These are discussed individually below.

1.3.1 Complexity

      Clearly,
