Monday, March 7, 2011

A Dangerous Narrative

In my research I work mostly with raw statistics. That means data (generally secondary) based on observations of hundreds, if not thousands, of people. Thanks to desktop computing I have a myriad of ways of analyzing, reducing, organizing, and presenting this data, all in the name of trying to find consistent relationships between variables. All of these fancy techniques are designed to make sure I identify relationships that actually exist and can be used to predict what will emerge in a different data set. Get this right and, ideally, we can predict what is going to happen before we collect the data. That is the whole point.

When I’m getting ready to lecture in a class, I’m engaged in an entirely different exercise. Most of the students in the lower level classes that I run aren’t ready or able to deal with the kind of statistical arguments that the theories require. The symbolic logic that underpins the theories does no better. Instead, I go looking for stories – narratives. Students at lower levels (i.e., before they’re indoctrin… I mean properly educated) tend to find narratives more convincing anyway.
So what’s the problem? Students become convinced of the “right” things, and I don’t have to figure out how to explain a probit or fixed effects panel model to first year students. Wins all around, right?

Well, things get a little more complicated when you start to think about how people form expectations. From experimental work in both economics and behavioural psychology we can identify some of the consistent mistakes people make when forming their beliefs about how likely something is. Let’s start with why narratives work.
The effectiveness of a narrative rests on the fact that most people can project themselves into someone else’s position if they invest a little energy. This is easier the more similar the person seems to be to you. As a result, the best narratives feature people as close as possible to how the audience sees itself. For those who want to explore this further, consult Adam Smith’s Theory of Moral Sentiments.

OK, so we can make a story more compelling by choosing someone as like the audience as possible – big deal. This introduces proximity: the idea that events you can relate to are seen as more likely than they actually are. If your friend’s house is robbed, you’re more likely to worry about your own house, even if you live across the city. If it happens to somebody you don’t know in the next neighborhood, there’s not much effect. The more like you the person in the story seems, the bigger the impact on your perceived probability.

Here’s another way things can get weird: some of the same things that make for an interesting story also screw with our perceptions. For example, more extreme events are more interesting and a lot easier to recall. This taps into what is called availability: the easier it is for you to recall an event, the more likely you believe it to be. A good narrative will increase availability in both these ways, even when the event is incredibly unlikely.

We also have to worry about issues like representativeness and conservatism. Representativeness means we assign probabilities based on how well new data represents a category, while neglecting the base rate. Suppose only 15% of cars are blue, and a witness who is wrong 20% of the time tells you a car was blue. Most people will go with an 80% chance the car was blue. This, for those who have studied stats, is dead wrong; the correct Bayesian answer is about 41%. Conservatism means you start with a prior and resist updating your beliefs, giving too little weight to new data. So a good narrative can entrench an incorrect belief very easily.
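For readers who want to see where the 41% comes from, here is a minimal sketch of the Bayes' rule calculation, assuming the classic setup (15% of cars are blue, and the witness identifies colors correctly 80% of the time):

```python
# Bayes' rule for the witness problem: P(car is blue | witness says blue).
def posterior_blue(prior_blue=0.15, witness_accuracy=0.80):
    # P(says blue | blue) * P(blue)
    true_positive = witness_accuracy * prior_blue
    # P(says blue | not blue) * P(not blue) -- the witness errs 20% of the time
    false_positive = (1 - witness_accuracy) * (1 - prior_blue)
    # Posterior probability the car really was blue
    return true_positive / (true_positive + false_positive)

print(round(posterior_blue(), 2))  # 0.41
```

People anchor on the witness's 80% accuracy and ignore the 15% base rate; Bayes' rule weighs both.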

We also have to deal with the so-called law of small numbers: the belief that a remarkably small sample should be representative of the population that generated it. This really comes out when you ask people to generate a small set of random numbers. The numbers they come up with tend to have negative autocorrelation (a big number is followed by a small one), which means the series isn’t truly random. In the same way, a small number of narratives is often assumed to be representative of an entire population, particularly if those few narratives agree. This just isn’t the case.
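The negative autocorrelation claim is easy to check. Below is a sketch of a lag-1 autocorrelation measure applied to a hypothetical sequence of the kind people produce when asked for "random" digits (the sequence itself is invented for illustration):

```python
# Lag-1 autocorrelation: negative values mean big numbers tend to be
# followed by small ones -- the pattern typical of human-made "random" lists.
def lag1_autocorrelation(xs):
    n = len(xs)
    mean = sum(xs) / n
    # Total variance around the mean
    var = sum((x - mean) ** 2 for x in xs)
    # Covariance between each value and the one that follows it
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A hypothetical human-generated "random" sequence: note how it
# alternates high and low, over-correcting toward balance.
human = [8, 2, 9, 1, 7, 3, 8, 2, 6, 1]
print(lag1_autocorrelation(human))  # strongly negative
```

A truly random sequence would show an autocorrelation near zero; the alternating pattern people produce drives it well below that.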

Don’t get me wrong, I’m not saying there is no place for narrative in research or teaching. Narratives are a great place to start, but a horrible place to stop. If all we consider are narratives, we’re going to get it wrong.


Anonymous said...

How much of our teaching at an undergraduate level is science and how much of it is preaching?

Most undergrad courses do not teach methodology, tools, or measurements. We concentrate on the least painful way of describing (superficially) how the world works. Descriptions are never precise. Is that a dangerous thing? I am not sure. I think it leaves a lasting (warm) impression on students, but I highly doubt it affects their perception of the world.


economistatlarge said...

@KS - you've got a point. That's another post I'm working on: choice of major and confirmation bias.

Anonymous said...

I always enjoyed your stories. They're a good start, like you said, but the student needs to dig deeper later on and ask questions... challenge the theory. The beautiful thing about economics is that there is no right answer; it depends.

I knew Bayesian stats would come in handy one day.


Anonymous said...

Makes you wonder how rational people are when forming expectations. Steven Levitt had a great example of this in the latest version of Freakonomics.

Is making the best guess possible adequate? Making life altering decisions based on a best guess or hunch - I don't know. Is the information behind that "guess" adequate, and if so, is it accurate and relevant? There are a lot of variables that come into play and I don't think many people consider them. Of course, it takes a certain kind of individual to be so analytical and detailed. Then again, maybe that's not so great an idea. But I think you get my point.

I'm coming to believe that most people don't consider the extremes when making decisions; that is, they discount outliers too much.

This current economic morass just proves how "deadly" it can be to ignore, give no credence to, or heavily discount the possibility that certain things/events will happen.

Don't even start me on models. But that too would be a great topic for a later time.


economistatlarge said...


People aren't "rational" as we normally think of it when forming their expectations. They might be rational in the sense of latching onto heuristics that seem to work, so long as analysis is costly. The catch is that the heuristics don't look like rational thought.

I'm working up a post about models - I'll likely post two this week. (I'm not promising).