Artificial Language Learning in Adults and Children
Language Learning 60:Suppl. 2, December 2010, pp. 188–220

This article briefly reviews some recent work on artificial language learning in children and adults. The final part of the article is devoted to a theoretical formulation of the language learning problem from a mechanistic neurobiological viewpoint, and we show that it is logically possible to combine the notion of innate language constraints with, for example, the notion of domain-general learning mechanisms. A growing body of empirical evidence suggests that the mechanisms involved in artificial language learning and in structured sequence processing are shared with those of natural language acquisition and natural language processing. Finally, by theoretically analyzing a formal learning model, we highlight Fodor’s insight that it is logically possible to combine innate, domain-specific constraints with domain-general learning mechanisms.

Human languages are characterized by the “design features of language” (Hockett, 1963, 1987): discreteness, arbitrariness, productivity, and the duality of patterning (i.e., elements at one level are combined to construct elements at another). Somehow these properties arise from how the human brain works, develops, and learns in interaction with its environment. For example, a study characterizing the emergence of Nicaraguan Sign Language (NSL) (Senghas, Kita, & Özyürek, 2004) showed how two of Hockett’s design features (segmentation/discretization and combinatoriality) rapidly emerged in the population of NSL signers. Similarly, Aronoff, Meir, Padden, and Sandler (2008) documented the development of morphology and syntax in Al-Sayyid Bedouin signers and described the emergence of recursive syntax from the first generation onward (see also Sandler, Meir, Padden, & Aronoff, 2005). One way to interpret these findings is that humans are equipped with learning mechanisms that shape the language acquired into a discrete and hierarchically organized system when the relevant communicative context is present.

The human capacity for language and communication is subserved by an intricate network of brain regions that collectively instantiate the semantic, syntactic, phonological, and pragmatic operations necessary for adequate comprehension and production. How are these skills acquired? Despite much progress, it is still not well understood how humans acquire language skills. The acquisition of language is a complex learning task that is governed by constraints deriving from the properties of the developing human brain. These constraints, to the extent that they are innate (genetic or nongenetic), need not be acquired. The mainstream generative position has for a long time been that there are interesting language acquisition constraints that are linguistic in nature and that language is acquired by means of a language-specific acquisition device (Chomsky, 1965, 1986; Jackendoff, 2002).

During the last decade, this position has become increasingly challenged on processing grounds (e.g., Christiansen & Chater, 1999; Reali & Christiansen, 2009), on evolutionary and acquisition grounds (e.g., Chater & Christiansen, 2009; Christiansen & Chater, 2008; Pullum & Scholz, 2002; Scholz & Pullum, 2002), as well as on grounds of language diversity (Evans & Levinson, 2009). The alternative proposal suggests that children make use of domain-general learning mechanisms (e.g., Chater & Christiansen, 2009; Christiansen & Chater, 2008). However, this latter position is not incompatible with the notion that language acquisition has an innate basis (Chater & Christiansen, 2009; Chater, Reali, & Christiansen, 2009). Rather, it is suggested that this basis is in large part non-language-specific, or prelinguistic, in nature (Hornstein, 2009).

During the past decade, artificial language learning (ALL) paradigms have revitalized the study of language acquisition and language evolution (e.g., Bahlmann, Schubotz, & Friederici, 2008; Christiansen & Chater, 2008; Christiansen & Kirby, 2003; de Vries, Monaghan, Knecht, & Zwitserlood, 2008; Fitch & Hauser, 2004; Forkstam, Hagoort, Fernandez, Ingvar, & Petersson, 2006; Forkstam, Jansson, Ingvar, & Petersson, 2009; Friederici, Bahlmann, Heim, Schubotz, & Anwander, 2006; Friederici, Fiebach, Schlesewsky, Bornkessel, & von Cramon, 2006; Friederici, Steinhauer, & Pfeifer, 2002; Gentner, Fenn, Margoliash, & Nusbaum, 2006; Gómez & Gerken, 1999, 2000; Hauser, Chomsky, & Fitch, 2002; Makuuchi, Bahlmann, Anwander, & Friederici, 2009; Marcus, Vijayan, Bandi Rao, & Vishton, 1999; Misyak, Christiansen, & Tomblin, 2009; Perruchet & Rey, 2005; Petersson, Forkstam, & Ingvar, 2004; Poletiek, 2002; Saffran, Aslin, & Newport, 1996; Uddén et al., 2009). The complexity of natural languages makes it exceedingly difficult to isolate the factors responsible for language learning. For example, in natural language processing, semantics, syntax, and phonology operate in parallel, in close spatial and temporal contiguity. Because of this, artificial language learning paradigms have been developed with the objective of controlling the influence of the various elements of natural language. Language researchers have thus turned to artificial languages as a means of obtaining better experimental control over the input to which learners are exposed. For example, the use of artificial languages makes it possible to control for prior learning. Moreover, it is crucial to know what children can learn in order to specify possible language acquisition mechanisms. More importantly, the identification of such learning mechanisms will allow researchers to evaluate their degree of domain-specificity as well as possible inherent constraints. The basic assumption in artificial language learning research is that some of the learning mechanisms are shared between artificial and natural language acquisition (Gómez & Gerken, 2000; Petersson et al., 2004; Reber, 1967). In addition, artificial grammar learning (AGL) experiments, a version of ALL experiments that focuses on syntax (i.e., sequential structure), have been used in cross-species comparisons to establish which, if any, are the uniquely human components or properties of the language faculty (Fitch & Hauser, 2004; Gentner et al., 2006; Hauser, Chomsky et al., 2002; Newport & Aslin, 2004; Newport, Hauser, Spaepen, & Aslin, 2004; O’Donnell, Hauser, & Fitch, 2005; Saffran et al., 2008).
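
To make the AGL paradigm concrete, the sketch below shows how stimuli for such an experiment might be generated and scored: strings are produced from a small finite-state grammar for an acquisition phase, and novel test strings are classified as grammatical or ungrammatical. The grammar, symbol inventory, and helper functions are illustrative assumptions for this sketch; they are not taken from the article or from any particular study.

```python
import random

# Hypothetical Reber-style finite-state grammar, used here only to
# illustrate the kind of stimulus material an AGL experiment relies on;
# the transition table is an assumption, not taken from the article.
# Each state maps to (symbol, next_state) transitions; None = accept/stop.
GRAMMAR = {
    0: [("M", 1), ("V", 3)],
    1: [("S", 1), ("X", 2)],
    2: [("X", 3), ("R", None)],
    3: [("V", 2), ("R", None)],
}

def generate_string(grammar, start=0, rng=random):
    """Walk the grammar from the start state, emitting one symbol per
    transition, until a terminating transition is taken."""
    state, symbols = start, []
    while state is not None:
        symbol, state = rng.choice(grammar[state])
        symbols.append(symbol)
    return "".join(symbols)

def is_grammatical(grammar, string, state=0):
    """True if some path through the grammar spells out the string exactly."""
    if not string:
        return state is None   # all symbols consumed at an end point
    if state is None:
        return False           # grammar terminated but symbols remain
    return any(symbol == string[0] and is_grammatical(grammar, string[1:], nxt)
               for symbol, nxt in grammar[state])

# Acquisition phase: expose the learner to grammatical exemplars only.
training_items = {generate_string(GRAMMAR) for _ in range(20)}
print("sample exemplars:", sorted(training_items)[:5])

# Test phase: novel strings, some grammatical and some containing violations.
for item in ["MXXR", "MSSXR", "VVR", "VXR", "MXRV"]:
    print(item, "grammatical" if is_grammatical(GRAMMAR, item) else "ungrammatical")
```

In an actual experiment the classification at test is of course made by the participant; a check like is_grammatical above only defines the ground truth against which those grammaticality judgments are scored.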

Artificial Syntax Learning in Children

The current lack of knowledge concerning the actual learning mechanisms involved during infancy makes it difficult to determine the relative contributions of innate and acquired knowledge in language acquisition. One approach to these issues exposes infants to artificial languages, and it has resulted in a number of discoveries regarding the learning mechanisms available during infancy (Gómez & Gerken, 2000).

The difficulty of acquiring a language is related to the fact that the internal mental structures that represent linguistic information are not directly expressed in the surface form of a language (e.g., the utterance). The question of if and how these structures are acquired is the question of how a learner transforms the language input (“primary linguistic data”) into phonological, syntactic, and semantic knowledge (Chomsky, 1980b). Under the traditional Chomskyan view, the input underdetermines the linguistic knowledge of the adult grammar. The dilemma of generalizing beyond the stimuli encountered without overgeneralizing, in combination with the absence of certain generalization errors during child language acquisition, suggests that the learning mechanisms involved are constrained by prior knowledge or constraints. For example, it appears that children never consider rules based solely on linear order in sentences (Gómez & Gerken, 2000). This and similar observations were among the fundamental reasons that led Chomsky to propose the existence of a specific language acquisition device (Chomsky, 1965, 1986, 2005). Thus, the acquisition of a grammar is not only based on an analysis of the linguistic input but also depends on an innate structure that guides the process of language acquisition (Jackendoff, 2002).

Recently, Lidz, Waxman, and Freedman (2003) investigated the syntactic structures required to determine the antecedent for the pronoun “one.” Based on corpus analyses of child-directed speech in the CHILDES database (MacWhinney, 2000), they concluded that the anaphoric uses of “one” that are syntactically uninformative vastly outstrip the informative uses in the input.