
Monday, November 13, 2017

Noam Chomsky - short summary and explanation

   In the early 1960s, Noam Chomsky developed the idea that each sentence in a language has two levels of representation: a Deep Structure and a Surface Structure.

   The Deep Structure is (more or less) a direct representation of the basic semantic relations underlying a sentence, and is mapped onto the Surface Structure (which follows the phonological form of the sentence very closely) via ''transformations''.
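   To make the mapping idea concrete, here is a minimal sketch in Python (not Chomsky's formalism; the dictionary keys and the sentence pair are invented purely for illustration). One underlying representation of "the dog bit the man" is spelled out as two different surface strings, one active and one passive, by two toy "transformations":

    # A toy "deep structure": who did what to whom, plus the verb's forms.
    deep = {"agent": "the dog", "verb": ("bit", "bitten"), "patient": "the man"}

    def active_surface(ds):
        # Toy transformation: agent first, then the past-tense verb, then the patient.
        past, _ = ds["verb"]
        return f"{ds['agent']} {past} {ds['patient']}"

    def passive_surface(ds):
        # Toy transformation: promote the patient and demote the agent.
        _, participle = ds["verb"]
        return f"{ds['patient']} was {participle} by {ds['agent']}"

    print(active_surface(deep))   # the dog bit the man
    print(passive_surface(deep))  # the man was bitten by the dog

   The only point of the sketch is that a single underlying representation can correspond to more than one surface form.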

  Chomsky believed that there would be considerable similarities between the Deep Structures of different languages, and that these structures would reveal properties, common to all languages, which were concealed by their Surface Structures.

   However, this was perhaps not the central motivation for introducing Deep Structure.

   Deep Structure was devised largely for narrow technical reasons relating to early semantic theory. Chomsky emphasized the importance of modern formal mathematical devices in the development of grammatical theory.

   Though transformations continue to be important in Chomsky's current theories, he has now abandoned the original notion of Deep Structure and Surface Structure.

   Initially, two additional levels of representation were introduced (Logical Form, or LF, and Phonetic Form, or PF); then, in the 1990s, Chomsky sketched out a new program of research known as ''Minimalism'', in which Deep Structure and Surface Structure no longer appeared and LF and PF remained as the only levels of representation.

    Terms such as "transformation" can give the impression that theories of transformational generative grammar are intended as a model of the processes through which the human mind constructs and understands sentences. Chomsky is clear that this is not the case: a generative grammar models only the knowledge that underlies the human ability to speak and understand.

   One of Chomsky's most important ideas is that most of this knowledge is innate, with the result that a baby can have a large body of prior knowledge about the structure of language in general, and need only actually ''learn'' the idiosyncratic features of the language(s) it is exposed to.

   Chomsky was not the first person to suggest that all languages had certain fundamental things in common, but he helped to make the innateness theory respectable after a period dominated by behaviourist attitudes towards language.

   He goes so far as to suggest that a baby need not learn any actual ''rules'' specific to a particular language. All languages are presumed to follow the same set of rules, but the effects of these rules and the interactions between them vary depending on the values of certain universal linguistic ''parameters''. 
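   As a rough illustration of the parameter idea (a sketch, not any actual proposal from the literature; the function and the examples are invented), consider a single "rule" that combines a head with its complement, where one binary parameter fixes the surface order. Setting it one way gives English-like head-initial order, the other way gives Japanese-like head-final order:

    def build_phrase(head, complement, head_initial=True):
        # One universal rule; the parameter decides the linear order.
        return f"{head} {complement}" if head_initial else f"{complement} {head}"

    print(build_phrase("read", "books", head_initial=True))   # English-like: "read books"
    print(build_phrase("yomu", "hon o", head_initial=False))  # Japanese-like: "hon o yomu" (book-ACC read)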

   This is a very strong assumption, and is one of the reasons why Chomsky's current theory of language differs from most others.

   In the 1960s, Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories. The first was the distinction between ''competence'' and ''performance''.

   Chomsky noted the obvious fact that people, when speaking in the real world, often make linguistic errors. He argued that these errors in linguistic performance were irrelevant to the study of linguistic competence (the knowledge that allows people to construct and understand grammatical sentences).

    Consequently, the linguist can study an idealised version of language, greatly simplifying linguistic analysis. 

   The second idea related directly to the evaluation of theories of grammar. Chomsky made a distinction between grammars that achieved ''descriptive adequacy'' and those that went further and achieved ''explanatory adequacy''.

   A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety.
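   The phrase "(infinite) set of grammatical sentences" can be made concrete with a toy grammar (invented for illustration): a single recursive rule, joining sentences with "and", is enough for a finite grammar to license unboundedly many sentences.

    def sentences(depth):
        # Toy grammar: S -> "the dog barks" | "the cat sleeps" | S "and" S.
        base = ["the dog barks", "the cat sleeps"]
        if depth == 0:
            return base
        shorter = sentences(depth - 1)
        # Each extra level of recursion licenses longer sentences.
        return shorter + [f"{left} and {right}" for left in base for right in shorter]

    print(len(sentences(0)))  # 2
    print(len(sentences(1)))  # 6
    print(len(sentences(2)))  # 18 -- and so on, without bound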

   A grammar that achieves explanatory adequacy has the additional property that it gives an insight into the underlying linguistic structures in the human mind; it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented.

   In the 1980s, Chomsky proposed a distinction between ''I-Language'' and ''E-Language'', similar but not identical to the competence/performance distinction.

 I-Language is the object of study in syntactic theory; it is the mentally represented linguistic knowledge that a native speaker of a language has, and is therefore a mental object.

   E-Language includes all other notions of what a language is, for example that it is a body of knowledge or behavioural habits shared by a community. Chomsky argues that such notions of language are not useful in the study of innate linguistic knowledge, i.e. competence.

   Chomsky argued that the notions "grammatical" and "ungrammatical" can be defined in a useful way: the intuition of a native speaker is enough to determine whether a sentence is grammatical.

   This is entirely distinct from the question of whether a sentence is meaningful. It is possible for a sentence to be both grammatical and meaningless, as in Chomsky's famous example "colourless green ideas sleep furiously".

     But such sentences manifest a linguistic problem distinct from that posed by meaningful but ungrammatical (non-)sentences such as "man the bit sandwich the", the meaning of which is fairly clear, but which no native speaker would accept as well formed.
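   The contrast can be mimicked with a toy recogniser (a sketch under invented assumptions: the lexicon and the pattern below are made up and far too small for real English). Grammaticality is treated as a purely formal matter of word categories, so the meaningless example passes while the scrambled one fails:

    import re

    # Toy lexicon: each word gets a category (D = determiner, A = adjective,
    # N = noun, V = verb, Adv = adverb).
    LEXICON = {
        "the": "D", "colourless": "A", "green": "A",
        "ideas": "N", "man": "N", "sandwich": "N",
        "sleep": "V", "bit": "V", "furiously": "Adv",
    }

    def grammatical(sentence):
        # Map words to categories, then check a toy pattern:
        # a noun phrase, a verb, then optionally another noun phrase or an adverb.
        cats = " ".join(LEXICON.get(word, "?") for word in sentence.split())
        np = r"(D )?(A )*N"
        return re.fullmatch(rf"{np} V( {np}| Adv)?", cats) is not None

    print(grammatical("colourless green ideas sleep furiously"))  # True: grammatical but meaningless
    print(grammatical("man the bit sandwich the"))                # False: interpretable but ill-formed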

   Much current research in transformational grammar is inspired by Chomsky's minimalism, outlined in his book The Minimalist Program (1995). The new research direction involves the further development of ideas such as ''economy of derivation'' and ''economy of representation''.

   Economy of derivation is a principle stating that transformations only occur in order to match ''interpretable features'' with ''uninterpretable features''. An example of an interpretable feature is the plural inflection on regular English nouns. The word ''dogs'' can only be used to refer to several dogs, not a single dog, and so this inflection contributes to meaning, making it ''interpretable''.

   English verbs are inflected according to the grammatical number of their subject (e.g. "Dogs bite" vs "A dog bites"), but in most sentences this inflection just duplicates the information about number that the subject noun already has, and it is therefore ''uninterpretable''.
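   A very rough sketch of the feature-matching idea (not Chomsky's formal system; the lexicon and the notion of "converging" below are simplified inventions): the noun's number feature is interpretable, the verb's is not, and the toy derivation succeeds only when the two match.

    # Interpretable number feature on nouns: it contributes to meaning.
    NOUNS = {"dog": "sg", "dogs": "pl"}
    # Uninterpretable number feature on verbs: it must simply agree.
    VERBS = {"bites": "sg", "bite": "pl"}

    def converges(subject, verb):
        # The derivation "converges" only if the verb's feature is checked
        # against a matching feature on the subject.
        return NOUNS[subject] == VERBS[verb]

    print(converges("dogs", "bite"))  # True:  "Dogs bite"
    print(converges("dog", "bite"))   # False: "*The dog bite" fails agreement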

   Economy of representation is the principle that grammatical structures must exist for a purpose, i.e. the structure of a sentence should be no larger or more complex than required to satisfy constraints on grammaticality (note that this does not rule out complex sentences in general, only sentences that have superfluous elements in a narrow syntactic sense).

   Both notions are somewhat vague, and indeed the precise formulation of these principles is a major area of controversy in current research.