Friday, March 25, 2005

Statistical fallacies

I've just finished reading How to Think Straight about Psychology by Keith Stanovich. It's a wonderful book, and, to be honest, really about critical, scientific thinking and not so much about psychology. Most of its examples are from the field of medicine, in fact.

The best parts of the book, to my mind, are the ones that discuss how humans deal with probability and statistics. Everyone knows that statistics are dangerous, but the danger doesn't wholly come from deliberate misuse. Some of the danger comes from the way people intuitively interpret statistics - or, rather, misinterpret them. Not to mention the way people dismiss statistics when they should be taking them seriously.

To summarise the relevant chapters, as much for my sake as anything else, the ways people mistreat and misuse statistics are:

(1) "person-who" arguments (Stanovich's terminology)

People treat a statistical finding or law as invalid because they know of an exception to it, despite "knowing" that the law was probabilistic in the first place and that there would be exceptions. A lot of this is due to "vividness" effects: a probabilistic law is not concrete to most people, but a living, breathing counter-example is. What has a greater effect on their thinking? The counter-example, of course, leading them to believe the law is inaccurate.

(2) discounting base rates

This topic is treated in many statistics classes (at least the ones I've been in), but people often seem to forget about it. The classic example goes like this: suppose there's a rare disease that occurs in 1 out of 1000 people (ok, so that's not so rare). Further suppose there's a test for the disease that has a zero false-negative rate (if someone has the disease, the test always gets it right) BUT has a 5% false-positive rate (if a person doesn't have the disease, there's a 5% chance it'll say that they do).

So you pluck a random person off the street and administer the test to them, and it says yes, they have the disease. What's the chance that they do have the disease?

Well, even physicians get this wrong and say 95%. The true answer, if you do the math, is about 2%. Why is the intuitive answer so off-base? Because it ignores the huge effect of the low base rate - the unlikelihood that that random person would have had the disease in the first place. This is also why security systems that are "99% accurate" give you absolutely no boost in security: the probability that a randomly chosen person is a terrorist is so vanishingly small that virtually every alarm will be a false one [I'm pretty sure Bruce Schneier discussed this at least once on his blog, but I'm unable to find the exact URL].
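
If you want to check that 2% figure, Bayes' theorem gets you there in a few lines. A minimal sketch in Python, using just the numbers from the example above:

```python
# Base-rate example from above: disease in 1/1000 people, no false
# negatives, 5% false positives.
base_rate = 1 / 1000   # P(disease)
sensitivity = 1.0      # P(positive | disease): zero false negatives
fp_rate = 0.05         # P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(pos | disease) P(disease) / P(pos)
p_positive = sensitivity * base_rate + fp_rate * (1 - base_rate)
print(sensitivity * base_rate / p_positive)  # ~0.0196, i.e. about 2%
```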

(3) failure to use sample size information

To put it simply, people forget (or don't realise) the effect of the law of large numbers - that "a larger sample size always more accurately estimates a population value".
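
A quick simulation makes the point concrete (my own sketch, not an example from the book): estimate the mean of a fair six-sided die, which is exactly 3.5, from samples of increasing size.

```python
# Law of large numbers: bigger samples estimate the true mean (3.5 for a
# fair die) more accurately; tiny samples swing all over the place.
import random

random.seed(1)
for n in (10, 100, 10_000, 1_000_000):
    sample_mean = sum(random.randint(1, 6) for _ in range(n)) / n
    print(f"n = {n:>9,}  sample mean = {sample_mean:.3f}")
```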

(4) the gambler's fallacy

Say you're flipping a coin, and you've had 5 heads come up. Ask someone whether they think the sixth will come up heads, and they will say it's unlikely, despite the fact that the coin flips are independent. They operate on the basis of a "law of averages" - but in reality, there's no such thing as a law of averages!
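
A simulation shows the independence directly (again my sketch, not the book's): look at every flip that follows a run of five heads and see how often it comes up heads.

```python
# Gambler's fallacy check: is the flip after 5 heads in a row any less
# likely to be heads? The coin has no memory, so it shouldn't be.
import random

random.seed(0)
streak = trials = heads_after = 0
for _ in range(1_000_000):
    heads = random.random() < 0.5
    if streak >= 5:          # the previous 5+ flips were all heads
        trials += 1
        heads_after += heads
    streak = streak + 1 if heads else 0
print(heads_after / trials)  # ~0.5: tails is not "due"
```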

(5) thinking coincidences are more "miraculous" than they are

Skeptics often point out that if something is a "one-in-a-million" occurrence then, depending on how you count a single event, at least 300 of them should happen every day in the U.S. (population approx. 300M). Another classic example is asking people in a class of 30 their birthdays and seeing if any coincide. Students tend to regard two people in the class sharing a birthday as a low-probability occurrence, but it's actually more probable than no two students sharing a birthday!
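
The exact figure is easy to compute: the chance that all 30 birthdays are distinct is 365/365 × 364/365 × ... × 336/365, and a shared birthday is the complement. A sketch:

```python
# Birthday problem: P(at least two of n people share a birthday),
# assuming 365 equally likely birthdays and ignoring leap years.
def p_shared_birthday(n: int) -> float:
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

print(f"{p_shared_birthday(30):.1%}")  # about 70.6% for a class of 30
```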

(6) discounting incidences and only seeing coincidences

This is common to all of us. Coincidences are vivid - you think of old Uncle Al and suddenly he rings up on the phone. Hey, ESP! But what about all the times you thought of him and he didn't ring up? Oh, you forgot about those, did you?

(7) trying to get it right every time - even when it's better to be wrong sometimes

Stanovich describes an interesting experiment here (Fantino & Esfandiari, 2002 [Pubmed abstract]; Gal & Baron, 1996 [abstract]). Subjects are sat down and told to predict which of two lights, red or blue, will blink. Often there'll be some money paid for correct predictions. The sequence of red and blue lights is random, except that red flashes 70% of the time and blue 30%. Analysis of the predictions made afterwards shows that subjects pick up on the 70-30 spread pretty well, and guess red 70% of the time and blue 30% of the time. But if they'd just guessed red 100% of the time, they'd have done better! Matching their guesses to the 70-30 spread gives them, on average, only about 58% accuracy.

The thing is, guessing red all the time guarantees you'll be wrong 30% of the time - while matching the spread still leaves open the possibility that you'll be right all the time, by some miracle. Hope springs eternal in the human heart.
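
The expected accuracies fall straight out of the probabilities. If your guesses are independent of the lights, you're right when you guess red and red flashes, or guess blue and blue flashes:

```python
# Red/blue light experiment: red flashes 70% of the time.
p_red = 0.7

# Probability matching: guess red 70% of the time, blue 30%.
matching = p_red * p_red + (1 - p_red) * (1 - p_red)   # 0.49 + 0.09 = 0.58
always_red = p_red                                      # 0.70

print(f"matching the spread: {matching:.0%}")   # 58%
print(f"always guessing red: {always_red:.0%}") # 70%
```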

Stanovich further explains how this carries over to clinical vs actuarial prediction. Actuarial prediction is based on historical statistical data. Clinical prediction is based on familiarity with individual circumstances. It seems to people that clinical prediction should be better - (1) you have more information to go on (actuarial + individual), and (2) doesn't actually knowing a person and his circumstances tell you more than a bunch of numbers?

Well, it doesn't: in many, many replicated studies, it's been shown that adding clinical prediction to actuarial always *decreases* the accuracy of the prediction. As unlikely as it seems, restricting yourself to judging based on past statistical trends is always better in the long run. You have to accept the error inherent in relying only on general, statistical, historical data in order to decrease error overall.

(8) trying to see patterns where there are none - or the "conspiracy theory" effect

Stanovich uses the stock market as an example. Much of the variability in stock market prices is due simply to random fluctuations. But people try to read patterns into, and explain, every single fluctuation. What about those people who are always correct? Well, take 100 monkeys and ask them to throw darts at a board. Use the positions of the darts to determine how to place bets. Do this for a year, and about half of them will have beaten the Standard and Poor's 500 Index. Want to hire them?
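
And the "always correct" pundits are just what chance predicts. A sketch with made-up but plausible assumptions (mine, not Stanovich's): give each random portfolio a 50-50 chance of beating the index in any given year, independently, and see how many of 100 beat it five years running.

```python
# Dart-throwing monkeys: with a coin-flip chance of beating the index
# each year, a few of 100 will look like geniuses over 5 straight years.
import random

random.seed(42)
monkeys, years = 100, 5
streaks = sum(
    all(random.random() < 0.5 for _ in range(years)) for _ in range(monkeys)
)
print(streaks)  # expected value 100 / 2**5, i.e. about 3 "genius" monkeys
```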

This is made even worse when people think they should be seeing a pattern, seeing structure. Take the Rorschach test, for example: clinicians using it see relationships in how people respond because they believe they are there. If they believe the theory behind the test, they think there'll be a relationship between what people see in the random inkblots and the makeup of their psychology. But there is no evidence for this whatsoever.

(9) the illusion of control

This is when people believe that personal skill and action can affect situations which are actually way beyond their control. I believe the classic example here (not cited in Stanovich's book - he has more interesting ones, actually) is the sports fan who believes that by performing certain actions, he can affect the outcome of a match.

All that's from the book, and I hope I haven't misreported or misrepresented anything. It's very pithy, straight to the point, and a joy to read. The explanations he gives are a good deal better than the ones I've given above, so go check it out from the library or buy it - whatever you do, I encourage you to read it. Stanovich also has a bunch of papers online that look interesting.

In my next post, I'll discuss some of my thoughts on people's statistical abilities and their relation to learning, especially learning language.

Friday, March 18, 2005

UPSID online

I posted before about Nexidia's curious idea of what a phoneme is. Especially ludicrous was their claim that

All utterances made in the entire world have been catalogued within a 400 phoneme range. The majority of languages use a 40 phoneme range, and the most widely spoken languages fall within an 80 phoneme range.

which makes absolutely zero sense. Anyway, I was wondering what indeed was the phoneme count for the average language. One source is UPSID, and I've just found out that it's available online here on Langmaker, a site for conlang creators. There is a phoneme inventory index of UPSID languages with the phoneme count for individual languages. Lowest are Mura and Rotokas with 9 phonemes, and highest is Ga with 241. The mean is 30.26 phonemes, if my calculations are correct, with the mode being 26 phonemes, represented by 23 out of the 317 languages included.

Related:
a list of resources for conlang creators, also by Langmaker. Lots of materials about natural languages, though.
The Language Creation Kit, which has lots of good stuff about how languages, natural and constructed, work.

Thursday, March 17, 2005

"Linguistics" in book covers

Check this out: the word linguistics spelt out in the covers of books whose titles contain the word "linguistics". Brought to you by amaztype, using Amazon Web Services. You put in a keyword (in either the title or the author) and tell it to generate, and book covers will start appearing, gradually forming the shape of the keyword you input. (Unfortunately, it doesn't seem to stop generating, so don't wait for it to finish - it'll just paste the same book covers in over and over again. Also, it makes an annoying sound when you click on the link.)

You can play a geeky (dorky? nerdy?) game with it, trying to identify the publishers of the books (or, if you're feeling eagle-eyed enough, the books themselves). I saw a few Blackwell Handbooks, MIT Press and CUP ones, some from the CILT series, and spotted a few books like Linguistics at Work and the Oxford Dictionary of Linguistics.

Monday, March 14, 2005

Zeugma

I learnt a new word a couple days ago from Language Log: zeugma, which is "[a] construction in which a single word, especially a verb or an adjective, is applied to two or more nouns when its sense is appropriate to only one of them or to both in different ways, as in He took my advice and my wallet."

That immediately recalled the Flanders and Swann song Have some madeira, m'dear. The lyrics are here, and these are the relevant lines:

...
And he said as he hastened to put out the cat,
The wine, his cigar and the lamps:
...
When he asked, "What in heaven?" she made no reply,
Up her mind, and a dash for the door.
...

I highly recommend all the Flanders and Swann songs; there's a set of CDs, the first of which is At the drop of a hat, the second At the drop of another hat and the famous Bestiary.

In particular, there's the Gnu Song, which is the song that got a lot of people pronouncing pre-nasal velars (that doesn't seem quite the right term, somehow...), at least in the word gnu. Here's the chorus:

I'm a g-nu, I'm a g-nu
The g-nicest work of g-nature in the zoo
I'm a g-nu, how d'you do
You really ought to k-now w-ho's w-ho.
I'm a g-nu, spelt G-N-U,
I'm g-not a camel or a kangaroo.
So let me introduce,
I'm g-neither man nor moose,
So g-nu g-nu g-nu, I'm a g-nu!

Quite a case of hypercorrection there. It's the only song I can think of off-hand that mocks the English orthographic practice of silent letters, though possibly readers will think of others. Read the rest of Have some madeira, too: it's wicked.

Wednesday, March 09, 2005

Linguistics olympiad problems

I heard many years ago about the linguistics olympiad, which seems to be a sort of tradition in Russia and the former Soviet bloc but hasn't really caught on elsewhere (though the Netherlands seems to be something of a powerhouse in the area).

The typical problem consists of raw data and translations: you have to figure out which morpheme corresponds to which meaning, and perhaps translate a few sentences - though there are quirkier problems (like the one where you figure out how pieces get promoted in Japanese chess; that's in the first set of problems listed below). Problems are supposed not to require any previous knowledge of the language - and they don't. They're basically exercises in pattern-finding, though I did employ some heuristics learnt in linguistics class to speed up the process (e.g. singular nouns tend to be less marked than plurals). Most of all, they're fun! A few of them are hard, but many will take no longer than 5-10 minutes to solve. And they really expand your mind as to the variety of linguistic structures available to the world's languages.

Anyway, there is now an international linguistics olympiad, and there were attempts to set up a U.S. linguistics olympiad (though it doesn't seem to be running now), which means there are problems available in English to solve! Thomas Payne (of "Describing morphosyntax" fame) maintained the U.S. olympiad website; it included a description and history people may be interested in. Most of the problems seem to be on sites that are down, but thanks to the genius of Brewster Kahle and the Internet Archive, we can rescue them from oblivion. I've listed a few of them here:

Problems from the first international olympiad in linguistics (2003), incl. some sample problems
Puzzles from the 1998-2001 U.S. olympiad
Problems offered at the 27th and 28th (Russian) olympiad, sorted by "stages"

Russian-speaking readers can apparently find problems here and here, but I can't tell because I don't read Russian.

Wouldn't it be fun if there were introductory linguistics textbooks chock-full of problems integrated into the text? For example, while talking about morphology you could have a problem with Arabic words and their meanings and the task would be to figure out the principles of Arabic morphology - i.e. its root-and-pattern structure and non-concatenative nature. It would sure make more of an impression than passively reading about Semitic morphology. It would be a little like Lyle Campbell's Historical Linguistics, but synchronic.