It’s a simplistic example, but it shows that increasing one thing by 10% doesn’t mean that everything else increases by 10% as a result.7
EXPONENTIAL GROWTH
There are some situations when a small change in the value assigned to one of the ‘inputs’ has an effect that grows dramatically as time elapses.
Take chickenpox, for example. It’s an unpleasant disease but rarely a dangerous one so long as you get it when you are young. Most children catch chickenpox at some point unless they have been vaccinated against it, because it is highly infectious. A child infected with chickenpox might typically pass it on to 10 other children during the contagious phase, and those newly infected children might themselves infect 10 more children, meaning there are now 100 cases. If those hundred infected children pass it on to 10 children each, within weeks the original child has infected 1,000 others.
In their early stages, infections spread ‘exponentially’. There is some sophisticated maths that is used to model this, but to illustrate the point let’s pretend that in its early stages, chickenpox just spreads in discrete batches of 10 infections passed on at the end of each week. In other words:
N = 10^T,
where N is the number of people infected and T is the number of infection periods (weeks) so far.
After one week: N = 10^1 = 10.
After two weeks: N = 10^2 = 100.
After three weeks: N = 10^3 = 1,000,
and so on.
What if we increase the rate of infection by 20%, so that each child now infects 12 others instead of 10 and the formula becomes N = 12^T? (Such an increase might happen if children are in bigger classes in school or have more playdates, for example.)
After one week, the number of children infected is 12 rather than 10, just a 20% increase. However, after three weeks, N = 12^3 = 1,728, which is heading towards double what it was with a rate of 10 at this stage. And this margin continues to grow as time goes on.
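To make the comparison concrete, here is a minimal back-of-envelope sketch in Python (not from the book) that tabulates the simplified model week by week for both rates; the function name and the five-week horizon are illustrative choices only.

```python
# A minimal sketch of the simplified model N = rate ** weeks,
# comparing an infection rate of 10 with a rate of 12 (20% higher).

def infections(rate: int, weeks: int) -> int:
    """New infections passed on in a given week under the toy model."""
    return rate ** weeks

for week in range(1, 6):
    slow = infections(10, week)
    fast = infections(12, week)
    print(f"Week {week}: rate 10 -> {slow:>9,}  rate 12 -> {fast:>9,}  "
          f"ratio {fast / slow:.2f}x")
```

The ratio column starts at 1.20 after one week and reaches roughly 2.49 by week five: a 20% change in the input compounds into an ever-widening gap in the output.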
CLIMATE CHANGE AND COMPLEXITY
Sometimes the relationship between the numbers you feed into a model and the forecasts that come out is not so direct. There are many situations where the factors involved are interconnected and extremely complex.
Climate change is perhaps the most important of these. Across the world, there are scientists attempting to model the impact that rising temperatures will have on sea levels, climate, harvests and animal populations. There is an overwhelming consensus that (unless human behaviour changes) global temperatures will rise, but the mathematical models produce a wide range of possible outcomes depending on how you set the assumptions. Despite overall warming, winters in some countries might become colder. Harvests may increase or decrease. The overall impact could be relatively benign or catastrophic. We can guess, we can use our judgement, but we can’t be certain.
In 1952, the science-fiction author Ray Bradbury wrote a short story called ‘A Sound of Thunder’ in which a time-traveller transported back to the time of the dinosaurs accidentally kills a tiny butterfly, and this apparently innocuous incident has knock-on effects that turn out to have changed the modern world they return to. A couple of decades later, the mathematician Edward Lorenz is thought to have been referencing this story when he coined the phrase ‘the butterfly effect’ to describe the unpredictable and potentially massive impact that small changes in the starting situation can have on what follows.
These butterfly effects are everywhere, and they make confident long-term predictions of any kind of climate change (including political and economic climate) extremely difficult.
MAD COWS AND MAD FORECASTS
In 1995, Stephen Churchill, a 19-year-old from Wiltshire, became the first person to die from Variant Creutzfeldt–Jakob disease (or vCJD). This horrific illness, a rapidly progressing degeneration of the brain, was related to BSE, more commonly known as ‘Mad Cow Disease’, and caused by eating contaminated beef.
As more victims of vCJD emerged over the following months, health scientists began to make forecasts about how big this epidemic would become. At a minimum, they reckoned, there would be 100 victims. But, at worst, they predicted that as many as 500,000 might die – a number of truly nightmare proportions.8
Nearly 25 years on, we are now able to see how the forecasters did. The good news is that their prediction was right – the number of victims was indeed between 100 and 500,000. But this is hardly surprising, given how far apart the goalposts were.
The actual number believed to have died from vCJD is about 250, towards the very bottom end of the forecasts, and about 2,000 times smaller than the upper bound of the prediction.
But why was the predicted range so massive? The reason is that, when the disease was first identified, scientists could make a reasonable guess as to how many people might have eaten contaminated burgers, but they had no idea what proportion of the public was vulnerable to the damaged proteins (known as prions). Nor did they know how long the incubation period was. The worst-case scenario was that the disease would ultimately affect everyone exposed to it – and that we hadn’t seen the full effect because it might be 10 years before the first symptoms appeared. The reality turned out to be that most people were resistant, even if they were carrying the damaged prion.
It’s an interesting case study in how statistical forecasts are only as good as their weakest input. You might know certain details precisely (such as the number of cows diagnosed with BSE), but if the rate of infection could be anywhere between 0.01% and 100%, your predictions will be no more accurate than that factor of 10,000.
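As a rough illustration of that point, here is a minimal sketch with made-up round numbers (not the scientists' actual figures): the number of people exposed might be pinned down fairly well, but an infection rate anywhere between 0.01% and 100% swamps that precision.

```python
# Illustrative only: the exposed population is a hypothetical round number,
# but the uncertain susceptibility rate spans 0.01% to 100%, as in the text.

exposed = 1_000_000                  # hypothetical number who ate contaminated beef
rate_low, rate_high = 0.0001, 1.0    # susceptibility: 0.01% up to 100%

low_estimate = exposed * rate_low    # 100 victims
high_estimate = exposed * rate_high  # 1,000,000 victims

print(f"Forecast range: {low_estimate:,.0f} to {high_estimate:,.0f}")
print(f"Spread: a factor of {high_estimate / low_estimate:,.0f}")  # 10,000
```

However carefully the other inputs are measured, the final forecast can never be narrower than that factor of 10,000.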
At least nobody (that I’m aware of) attempted to predict a number of victims to more than one significant figure. Even a prediction of ‘370,000’ would have implied a degree of accuracy that was wholly unjustified by the data.
DOES THIS NUMBER MAKE SENSE?
One of the most important skills that back-of-envelope maths can give you is the ability to answer the question: ‘Does this number make sense?’ In this case, the back of the envelope and the calculator can operate in harmony: the calculator does the donkey work in producing a numerical answer, and the back of the envelope is used to check that the number makes logical sense, and wasn’t the result of, say, a slip of the finger and pressing the wrong button.
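As a minimal illustration (with invented numbers, not an example from the book), a quick rounding check is often enough to flag a mis-keyed calculation:

```python
# Hypothetical example of a back-of-envelope check on a calculator result.
exact = 17.9 * 48     # the calculator says 859.2
rough = 20 * 50       # round both numbers: about 1,000

print(exact, rough)   # 859.2 vs 1000 -- same order of magnitude, so the
                      # answer is plausible; 8,592 would signal a slipped digit
```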
We are inundated with numbers all the time; in particular, financial calculations, offers, and statistics that are being used to influence our opinions or decisions. The assumption is that we will take these figures at face value, and to a large extent we have to. A politician arguing the case for closing a hospital isn’t going to pause while a journalist works through the numbers, though I would be pleased if more journalists were prepared to do this.
Often it is only after the event that the spurious nature of a statistic emerges.
In 2010, the Conservative Party were in opposition, and wanted to highlight social inequalities that had been created by the policies of the Labour government then in power. In a report called ‘Labour’s Two Nations’, they claimed that in Britain’s most deprived areas ‘54% of girls are likely to fall pregnant before the age of 18’. Perhaps this figure was allowed to slip through because the Conservative policy makers wanted it to be true: if half of the girls on these housing estates really were getting pregnant before leaving school, it painted what they felt was a shocking picture of social breakdown in inner-city Britain.
The truth turned out