Do we really know what we think we know? How can we know?

Prediction is a big, big business these days, and even those of us who aren’t explicitly in the prediction business probably do all we can to make sense of the future. Does your company do marketing research? Do you track the financial pages? Do you keep abreast of the latest innovations in your industry (or any industry, for that matter)? If so – and most of you probably answered yes to at least one of these questions – then that’s all part of what I’m calling the prediction business. In a nutshell, the more we know about the future, the more likely we are to make decisions that succeed in the present and the future, and we all want that.

So, how good are we at predicting? How much of what we think we know is accurate, and how reliable are our techniques for predicting? Perhaps not as good as we’d hope. Consider a recent BBC story on efforts to detect terrorists. It starts out with a promising premise: what if you had a method that was 90% effective? Not bad, right? But then the analysis takes a nasty left turn.

You’re in the Houses of Parliament demonstrating the device to MPs when you receive urgent information from MI5 that a potential attacker is in the building. Security teams seal every exit and all 3,000 people inside are rounded up to be tested.

The first 30 pass. Then, dramatically, a man in a mac fails. Police pounce, guns point.

How sure are you that this person is a terrorist?
A. 90%
B. 10%
C. 0.3%

The answer is C, about 0.3%.

Huh?

The article goes on to explain the math:

If 3,000 people are tested, and the test is 90% accurate, it is also 10% wrong. So it will probably identify 301 terrorists – about 300 by mistake and 1 correctly. You won’t know from the test which is the real terrorist. So the chance that our man in the mac is the real thing is 1 in 301.

My guess is that very few readers guessed C – I know I didn’t – and the fact that most of us aren’t in the terrorist-hunting business is no solace. The problem is that unless we’re serious math types, we probably rely, at least occasionally, on techniques that are actually less effective than we think they are.
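This error has a name – the base rate fallacy – and the arithmetic is easy to check with Bayes’ theorem. Here’s a minimal sketch (in Python, my choice of language, not the BBC’s) using the article’s numbers: one real attacker among 3,000 people and a test that’s 90% accurate both ways.

```python
# Base rate check for the BBC scenario: 1 terrorist among 3,000 people,
# and a detector that is 90% accurate (and therefore 10% wrong) both ways.

prior = 1 / 3000          # P(terrorist): one real attacker in the building
sensitivity = 0.90        # P(test flags | terrorist)
false_positive = 0.10     # P(test flags | innocent)

# Bayes' theorem:
# P(terrorist | flagged) = P(flagged | terrorist) * P(terrorist) / P(flagged)
p_flagged = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_flagged

print(f"P(terrorist | flagged) = {posterior:.4f}")  # ~0.0030, i.e. about 0.3%
```

The posterior comes out just under 0.3%, in line with the article’s back-of-the-envelope 1 in 301: roughly 300 false alarms for every genuine hit.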

One of the hottest business books out there right now is Nassim Nicholas Taleb’s The Black Swan. Taleb, who is equal parts philosopher, math whiz and trading savant, wreaks havoc with the world of financial analysis, and in light of our current economic condition and the factors that helped us get here, you can imagine why a book of this nature would strike a nerve.

Taleb’s central thesis is that a small number of unexpected events – the black swans – explain much of what matters in the world. We need to understand, he argues, just how much we will never understand. ‘The world we live in,’ he likes to say, ‘is vastly different from the world we think we live in.’

When it comes to finance, collective wisdom has shown itself to be close to astrology – based on nothing. And, according to Taleb, unpredictable events – 9/11, the dotcom bubble, the current financial implosion – are much more common than we think.

He spends a lot of time, for obvious reasons, on finance, but Taleb’s thesis is much broader: our need to know blinds us and leads us to rely on tools that can’t be trusted.

Toward the end of the book we discover that Taleb was a disciple of Benoit Mandelbrot, the father of fractal geometry and the man who introduced him to the principle of sensitivity to initial conditions – better known as the “butterfly effect.” Stated simply, this principle says that even very small changes in a system can lead to huge changes in the results, and the implications for most kinds of research and modeling are profound. So much research assumes that we can control for non-relevant factors, but Mandelbrot calls that assumption into question. It is far, far harder to predict than we might suspect, and this goes for those in the business of selling prediction, as well.
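To make “sensitivity to initial conditions” concrete, here’s a minimal sketch (again in Python, and again my illustration, not Taleb’s or Mandelbrot’s) using the logistic map, a standard toy model of chaotic behavior. Two starting points that differ by one part in a billion – far below any realistic measurement error – end up in completely different places within fifty steps.

```python
# Sensitivity to initial conditions, illustrated with the logistic map
# x_next = r * x * (1 - x), a standard toy model of chaotic dynamics.

def logistic_trajectory(x, r=4.0, steps=50):
    """Iterate the logistic map from x and return the full trajectory."""
    path = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        path.append(x)
    return path

a = logistic_trajectory(0.200000000)   # baseline starting point
b = logistic_trajectory(0.200000001)   # differs by one part in a billion

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# The tiny initial gap roughly doubles at every step; by step 50 the two
# trajectories bear no resemblance to each other.
```

If a system this simple defeats prediction, it’s easy to see why models of markets and economies – with thousands of coupled, imperfectly measured variables – struggle as much as they do.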

So, if we can’t know or predict anything, what can we do? Pack it up and go home?

Not at all. The task isn’t hopeless, but it does call for a few strategies. Taleb offers some very useful advice – again, read the book. In addition, we have a few ideas of our own.

First, there’s some value in diversifying your sources. If you rely on one tool, one model, one expert, one information source, well, that’s like being an investor with all his or her eggs in one basket. Second, there’s value in diversifying the type of source. We’re a culture with a rage for quantification – we believe that numbers don’t lie and that the only way to measure and evaluate is with statistics. To be sure, stats can tell us a lot, but there’s also a lot they can’t tell us. The most effective research programs in my world (marketing) also rely on qualitative methods – focus groups, interviews, observation, case histories, etc.

Finally, there’s no substitute for a critical mind. Never accept any claim or data point at face value, and be as rigorous in your assessments of methodology as you are of results. Above all, go in fear of people who are married to one method. All too often, as Taleb demonstrates, these are ideologues who value the beauty and symmetry of theory above the messiness of reality.

So we probably don’t know as much as we think we do, but if we approach the task of learning and predicting critically, we have a lot better shot.
