Humble Pi, by Matt Parker
Matt Parker is a ginormous* math nerd who has compiled an entertaining collection of accounts of Math Gone Wrong. Stories of mathematical mistakes range from fuel-depleted airplanes to computer memory leaks, and for the most part, nobody gets hurt. But, to be fair, it’s pretty much impossible not to suffer a few losses when bridges and buildings collapse, so a few tragedies slip in among the more sombre cautionary tales about the dangers of treading close to the limits of our current body of knowledge.
What follows are a few of my favourite anecdotes for your personal enjoyment!
Y2K, The Sequel
Calendars create all sorts of opportunities for mathematical anomalies and mistakes. When Britain adopted the Gregorian calendar in 1752, it had to move forward 11 days to align with the other Gregorian countries (British calendars that year jumped from Sept 2 to Sept 14 overnight!). A more contemporary calendar concern was the “Y2K Bug”, which was really a computer problem at heart. Simply put, before the year 2000 it was common to shortcut year numbers to their last two digits: we went to the moon in ’69, I was born in ’65, the Challenger disaster was in ’86, and so on (Prince broke the pattern when he partied like it was 1999). Computers were similarly programmed to shortcut years as two-digit numbers, either because storage space was precious in those days or because programmers are lazy. The concern was that when the date clicked over on New Year’s Eve of 1999, computer clocks would roll over from ’99 to ’00 (a year many systems would read as 1900 rather than 2000) and world-wide disasters would unfold. Of course, this never happened, mostly due to the extremely under-appreciated effort of thousands of programmers hired to fix the problem before it occurred.
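Here’s a toy Python sketch of what the two-digit shortcut does to some very simple arithmetic (the function and the numbers are made up purely to show the shape of the bug, not how any real system was written):

```python
def age_in_2000(birth_year_two_digit):
    """Compute an age the way a system storing only two-digit years would on Jan 1, 2000."""
    current_year_two_digit = 0          # the year 2000, stored as '00'
    return current_year_two_digit - birth_year_two_digit

print(age_in_2000(65))   # -65: someone born in '65 suddenly has a negative age
print(age_in_2000(99))   # -99: a one-year-old fares no better
```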
However, a lesser-known but equally disastrous computer deficiency still lurks out there. This deficiency is due to detonate on January 19, 2038 at exactly 3:14:07 am (UTC). This computer bomb lives in Unix operating systems and has to do with how these systems keep track of time. For unknown reasons (and I say unknown because I’m too lazy to look them up – see the above reference to programmer apathy), Unix programmers decided to keep track of time by “counting” the number of seconds that have passed since a common start time, and they initiated this counter on January 1, 1970. Seems arbitrary, yet safe enough! However, many Unix systems store this counter as a 32-“bit” binary number, a string of 32 ones and zeros, and at 3:14:07 on Jan 19, 2038, exactly 2,147,483,647 seconds will have passed. This very big number looks like this in binary: 1111111111111111111111111111111.** The bomb goes off when the computer tries to add one more second, at which point the number changes to 10000000000000000000000000000000. In a 32-bit slot, that leading 1 is the “sign” bit, so the computer suddenly reads the time as a huge negative number of seconds, throwing the clock all the way back to December 13, 1901. Having every clock silently jump back 137 years is, generally speaking, something you want to avoid, and we really have no idea what the knock-on effects will be. The bad news is that a lot of transportation and communication systems use a Unix operating system. The good news is that there is lots of time to make the necessary fixes. Let’s hope we do!
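If you want to see the rollover for yourself, here’s a minimal Python sketch (the wrap-around arithmetic is my own illustration, not how any particular Unix system is coded) that mimics a signed 32-bit seconds counter:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)   # the arbitrary Unix start time
MAX_32BIT = 2**31 - 1                                # 2,147,483,647: 31 ones in binary

def time_as_seen_by_32bit_clock(seconds_since_1970):
    """Wrap a seconds count the way a signed 32-bit counter would, then convert to a date."""
    wrapped = (seconds_since_1970 + 2**31) % 2**32 - 2**31   # force into [-2^31, 2^31 - 1]
    return EPOCH + timedelta(seconds=wrapped)

print(time_as_seen_by_32bit_clock(MAX_32BIT))       # 2038-01-19 03:14:07+00:00, the last safe second
print(time_as_seen_by_32bit_clock(MAX_32BIT + 1))   # 1901-12-13 20:45:52+00:00, one second later
```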
The Wobbly Bridge
Bridge mistakes have been a favourite of mine since my first-year engineering professor treated us to this video of the Tacoma Narrows bridge collapsing (skip ahead to 1:38). The bridge was built across the (surprise) Tacoma Narrows, and on the fateful day, wind blowing down the narrows caused the bridge deck to flutter; a feedback loop between the wind and the flutter then amplified the motion until the bridge collapsed spectacularly.
Another bridge in London, known as the Millennium Bridge, was built with cutting-edge, side-suspension technology. Unfortunately, this design had a resonant frequency of about 1 Hz, very close to the walking gait of an average person. The result was that, with enough pedestrians walking in step (which happened more often than you might think), the bridge would start to sway from side to side, earning it the illustrious nickname of “The Wobbly Bridge”. The wobble has since been fixed by adding dampers, but the nickname has stuck to this day.
A Tale of Two Monties
Coincidentally, two fascinating mathematical mistakes in statistics and probability have nearly the same name: the Monte Carlo fallacy and the Monty Hall Problem. The first, the Monte Carlo fallacy, is the mistaken belief that a series of independent events somehow influences the outcome of the next event in the sequence. Coin tossing provides a common example. If a coin is tossed 9 times in a row and lands heads every time, it is a common mistake to believe that the 10th toss is more likely to come up tails, because it is very unlikely that heads would turn up 10 times in a row (in other words, tails is “due”). In fact, the 10th toss is unaffected by any prior tosses, and the chance of the coin landing tails is 50%, as it is with every other toss. (Note that it is possible to calculate the probability of tossing heads 10 times in a row, which works out to (1/2)^10, or about 0.1%, and this probability is entirely different from the probability of the tenth toss being heads or tails. Not at all confusing!)
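You don’t have to take my word for it; a quick simulation (a sketch of my own, with made-up trial counts) shows that even after nine heads in a row, the tenth toss is still a plain coin flip:

```python
import random

random.seed(1)
TRIALS = 1_000_000

streaks = 0        # sequences that open with 9 heads in a row
tails_after = 0    # ...and then land tails on the 10th toss

for _ in range(TRIALS):
    tosses = [random.random() < 0.5 for _ in range(10)]   # True means heads
    if all(tosses[:9]):
        streaks += 1
        if not tosses[9]:
            tails_after += 1

print(f"Nine-head streaks seen: {streaks} (expected roughly {TRIALS / 512:.0f})")
print(f"Tails on toss #10:      {tails_after / streaks:.1%} (no 'due' effect; still about 50%)")
```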
The Monty Hall problem is named after the host of the popular 1960s game show Let’s Make a Deal. In this situation, you are asked to pick between three doors, one of which hides a grand prize. Spoiler alert: at this point you have a 1 in 3, or 33.3%, chance of picking the correct door. Things change, however, when the host opens one of the remaining two doors to show you that it does not contain the prize, and then gives you the option to stick with your original selection or to switch to the other remaining closed door. Should you stick, or should you switch? In fact, switching doors increases your probability of winning from 33.3% to 66.7%: switching wins every time your original pick was wrong, which happens two times out of three. The internet lost its collective mind when presented with this mathematical nugget, but it’s absolutely true, and there are explanations galore if you care to google it. Math works in mysterious ways!
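If googling feels like too much work, here’s a short simulation sketch (the function and trial count are mine, just for illustration) that plays the game both ways:

```python
import random

TRIALS = 100_000

def play(switch):
    """Play one round of the Monty Hall game; return True if you win the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that is neither your pick nor the prize.
    opened = random.choice([d for d in doors if d != first_pick and d != prize])
    if switch:
        final_pick = [d for d in doors if d != first_pick and d != opened][0]
    else:
        final_pick = first_pick
    return final_pick == prize

stick_wins = sum(play(switch=False) for _ in range(TRIALS)) / TRIALS
switch_wins = sum(play(switch=True) for _ in range(TRIALS)) / TRIALS
print(f"Stick:  {stick_wins:.1%}")    # about 33.3%
print(f"Switch: {switch_wins:.1%}")   # about 66.7%
```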
The Office Space Gambit
Rounding is a mathematical trick that allows you to get rid of excess decimals in a number. For example, if you and your friend want to split a treat that costs $2.25, you would each have to pay exactly $1.125. But since there is no such thing as half a cent, you round the $1.125 up to $1.13 or down to $1.12 (so when you and your friend share the cost of this treat, one of you is getting hosed for a penny). A fun idea is figuring out how to exploit rounding to your financial benefit. Consider the treat-splitting calculation another way: if I were the one selling the treat, I could tell you that half the cost was $1.13 (rounded up from $1.125) and then tell your friend that her share was also $1.13 (rounded up from $1.125). Unless you and your friend compared notes, neither of you has any real reason to suspect a problem with my math, and I can collect $2.26 and conveniently pocket $0.01. If I did this to 100 people in a day, I would make a cool one dollar. Not exactly enough to retire to that villa in Tuscany, but what if I could apply this trick to the millions of transactions that a bank or an investment firm makes every day? Would that be enough? In fact, this is the plot of the delightful 1999 (’99) movie Office Space, where a group of cubicle-hating employees exploit this very loophole hoping to make a few bucks off the company they despise, and end up miscalculating their proceeds by a factor of about 100,000. Math is important in crime!
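Here’s the scheme sketched in Python using the treat example (the helper function is hypothetical; real banks round far more carefully than this):

```python
from decimal import Decimal, ROUND_UP

def skim(price, people=2):
    """Charge each person their share rounded UP to the next cent; return what's left over."""
    price = Decimal(price)
    share = price / people
    charged = share.quantize(Decimal("0.01"), rounding=ROUND_UP)   # each person's rounded-up bill
    return charged * people - price                                # the fraction I pocket

print(skim("2.25"))                              # 0.01, one whole penny pocketed
print(sum(skim("2.25") for _ in range(100)))     # 1.00, a cool dollar from 100 customers
```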
Conversion Matters
The Gimli Glider is one of my favourite stories of all time. It’s a Canadian bad-math story where crazy good luck wins out in the end. In 1983 (’83), Canada was making the switch from the Imperial system of measurement (yuck) to Metric (yay!). The pilots and maintenance crew of an Air Canada 767 made a series of mistakes that included botching the conversion between pounds and kilograms of fuel, and they inadvertently took off with about half the fuel they needed to fly from Montreal to Edmonton. As a result, the plane ran out of fuel over Manitoba, just about halfway to its destination. At this point, a series of good-luck events intervened and the pilot was able to miraculously glide the plane to a safe landing at a decommissioned air base in Gimli, Manitoba. This story is so magnificent that I strongly recommend you read about it in full here. It turns out you can buy luggage tags made from the fuselage of the airplane, for good luck. It’s up to you what kind of luck you choose to believe running out of fuel at 41,000 feet counts as!
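Here’s a rough back-of-the-envelope of the fuel arithmetic, using approximate figures from public accounts of the incident (the numbers and the helper function are mine, and are only illustrative):

```python
LITRES_ALREADY_IN_TANKS = 7_682     # drip-stick measurement before refuelling (approx.)
KG_NEEDED_FOR_FLIGHT = 22_300       # fuel required for the Montreal-Edmonton leg (approx.)

POUNDS_PER_LITRE = 1.77             # the conversion factor the crew used
KG_PER_LITRE = 0.803                # the conversion factor they actually needed

def litres_to_add(mass_per_litre):
    """How many litres to pump, given a 'mass per litre' conversion factor."""
    mass_on_board = LITRES_ALREADY_IN_TANKS * mass_per_litre
    return (KG_NEEDED_FOR_FLIGHT - mass_on_board) / mass_per_litre

print(f"Litres added using pounds/litre: {litres_to_add(POUNDS_PER_LITRE):,.0f}")   # roughly 4,900
print(f"Litres actually needed:          {litres_to_add(KG_PER_LITRE):,.0f}")       # roughly 20,100
```

Add in what was already in the tanks and the plane left with roughly half the fuel mass it needed, which is how it ended up gliding into Gimli.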
The Statistics of Large Numbers
The last thing I’ll say in reference to this book is that if you have enough data points, even the most unlikely-seeming event is likely to happen. So the next time someone tells you how amazing it is that just as they were thinking of a specific person, that person suddenly called them out of the blue, remember this. In a world of 7.8 billion people, even if the odds were a million to one that the person you are thinking of suddenly calls you at that exact moment, it would still happen to about 7,800 people!
Rating: Buy this book, it’s sooo good!
Adapting the Monte Carlo example above….
So let’s say that every single time you see your friend, she mentions how much she loves a book she received for Christmas and then tells you an entertaining story from it. Although the probability of her mentioning this highly satisfying book doesn’t change from one get-together to the next, the probability of me actually buying or borrowing the book drops. Why? Because I would rather just hear her version of the stories than read them myself!
Thanks for more stories Risa!
hahaha love it, thanks Chrystal!!