Monday, August 30, 2004

Things linguists (well, some of them) often forget

Seems to me that linguistics today has too many frameworks, too many ways of looking at things. When the ways of looking at language are as diverse as Government and Binding, Head-Driven Phrase Structure Grammar, and the Minimalist Program, they can't all be right. I think linguists really have to go back to the biology and the neuroscience and ask: what really is "natural", and what really is "real"? Are trees natural? Is movement real?

Here are some things that I think linguists all know about but tend to overlook when they're constructing theories:

- Language isn't perfect. It's riddled with errors. People quite often don't finish their sentences, for example. Do we nevertheless understand them because we can fill in the blanks from context, or because we don't need all the words to understand language in the first place?

- The distinction between competence and performance: often noted, but do any of the frameworks explain, or in any way provide for, this distinction? Computational grammars (I think; I haven't done nearly enough work on them to know for sure) find parsing harder than generation, because their grammars tend to over-generate. But most people can't even find one way of saying what they want, let alone several (wrong) ways. Is this merely a problem of word choice, or does syntax play a part?

- We begin parsing even before we come to the end of a sentence, whether we're listening or reading. The most dramatic evidence for this probably comes from garden-path sentences like "The horse raced past the barn fell." It seems to me that most computational grammars take the whole sentence and only then parse it. Is any provision made for this undeniable fact of language? If so, what predisposes us to reading "raced past the barn" as the predicate of the sentence rather than as a reduced relative clause? There must be some sort of minimality effect going on here. I think insights from computational neuroscience, particularly in the field of vision, may prove useful here.
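To make the garden-path point concrete, here's a toy sketch of word-by-word analysis tracking. This isn't any real parser or grammar; the labels MAIN and RRC and the trigger words are invented purely to illustrate how the main-verb reading of "raced" survives all the way to "fell" before dying:

```python
def incremental_analyses(words):
    """Track candidate analyses of 'The horse raced past the barn fell'
    word by word (a toy illustration, not a real grammar).

    At 'raced', two hypotheses open up:
      MAIN - 'raced' is the main verb ("the horse raced ...")
      RRC  - 'raced' opens a reduced relative clause
             ("the horse [that was] raced past the barn ...")
    """
    candidates = set()
    history = []
    for w in words:
        if w == "raced":
            candidates = {"MAIN", "RRC"}
        if w == "fell":
            # 'fell' needs an open subject NP; only the
            # reduced-relative analysis still provides one.
            candidates.discard("MAIN")
        history.append((w, sorted(candidates)))
    return history

for word, live in incremental_analyses(
        "The horse raced past the barn fell".split()):
    print(f"{word:6} -> {live}")
```

The interesting bit is that both analyses are live right up to "barn"; a human parser that has already committed to MAIN by then has to backtrack, which is exactly the garden-path effect.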

- A lot of these frameworks are way too powerful. I was working in LFG this summer, and you could do just about anything with it just by adding another feature. Is such power really a good thing? Is there a cap on the number of features a language should have? A big problem is that we have no idea where the boundary lies between what's unattested and what's impossible. I know there's been some work done in phonotactics on this, but what of syntax? Is there any way to design an experiment to test the boundary in the realm of syntax? Good thing to think about.
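The "just add another feature" worry can be shown with a minimal sketch of attribute-value unification, the kind of mechanism behind LFG-style f-structures. The feature names below (NUM, PERS) are standard illustrative choices, not drawn from any particular grammar; the point is that nothing in the mechanism itself limits how many features you introduce or what they encode:

```python
def unify(f, g):
    """Unify two flat feature structures; return None on a clash."""
    out = dict(f)
    for key, val in g.items():
        if key in out and out[key] != val:
            return None          # feature clash: unification fails
        out[key] = val
    return out

subj = {"NUM": "sg", "PERS": 3}
print(unify(subj, {"NUM": "sg"}))   # compatible: features merge
print(unify(subj, {"NUM": "pl"}))   # clash: None
```

Since any distinction at all can be smuggled in as a new feature, the formalism happily licenses analyses no human language attests, which is exactly the over-powerful-framework problem.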

