Language Learning & Technology
Vol. 1, No. 1, July 1997, pp 82-93




PROCESSES AND OUTCOMES IN NETWORKED CLASSROOM INTERACTION: DEFINING THE RESEARCH AGENDA FOR L2 COMPUTER-ASSISTED CLASSROOM DISCUSSION


Lourdes Ortega
University of Hawai'i at Manoa

ABSTRACT

The present paper focuses on the use of one networked technology, namely synchronous computer-mediated interaction, in the second language (L2) classroom. The scope is intentionally limited to research concerned with evaluating the potential benefits of computer-assisted classroom discussion (CACD) in terms of second language acquisition (SLA) theory. The findings stemming from the existing body of L2 research on CACD are critically examined and a number of methodological suggestions are offered for future research on CACD. It is suggested that in addition to analyzing language outcomes by means of well-motivated measures of L2 use and L2 acquisition, a multiplicity of data sources be used in CACD research, so as to be able to document the processes learners actually engage in when interpreting and carrying out CACD tasks. A process- and task-driven research agenda for L2 CACD is proposed with the ultimate goal of describing the nature of language, learning, and interaction fostered in networked synchronous communication and of ascertaining which features of CACD may or may not be relevant to the processes involved in second language acquisition.

NETWORKED CLASSROOM INTERACTION AND SECOND LANGUAGE LEARNING

From drill-and-practice software, to word-processing programs, to network and hypertext software, the gradual integration of technology in classrooms over the last twenty years has tended to mirror the technological developments and limitations of each computer era as well as, more importantly, the theories of learning and instruction developed by scholars and construed in teachers' actual practices. Thus, the introduction of networked technologies in education coincided with a shift in education from an interest in cognitive and developmental theories of learning to a social and collaborative view of learning (cf. Hawisher, 1994).

Since the early 1990s, national and international networks, on the one hand, and local area networks (LANs), on the other, have been widely used for instructional purposes within social and critical education approaches. The use of electronic mail, bulletin boards, or discussion lists on worldwide networks such as the Internet enables learners and teachers to access and share information in a time- and space-independent fashion. By contrast, the instructional use of LANs, which link computers in a laboratory or a classroom to each other, has introduced the possibility of real-time, synchronous, many-to-many written discussion by a whole class or by smaller groups within the class (Warschauer, 1996b). Both technologies underscore a view of learning as a collaborative act that happens in a social and political context, with learners and teacher working together in the new medium of networked interaction.

Some scholars have suggested that the era of hypertext and networked communication that started burgeoning in the mid-1990s signals the need for an expanded view of literacy: Computers can no longer be seen as a surrogate of the teacher or an intelligent tool in the hands of the student, but as a new medium that has changed the ways in which we write, read, and possibly think (Selfe, 1989). Without committing to such a radical analysis of the role of technology on literacy practices, I would agree with Herring (1996), Selfe and Hilligoss (1994), and others that we need research on computers and education that not only extols the pedagogical and social virtues of computer technology but also determines exactly in which ways language, learning, and interaction have been transformed by the use of networked and hypertext technologies in our classrooms. In the case of L2 classrooms in which CACD has started to be used, the crucial question from an SLA perspective is in what specific ways CACD may or may not be relevant to the processes involved in second language learning.

Computer-Mediated Discussion in the Language Classroom

The use of networked computers for the purpose of large group discussions in language educational contexts began with hearing-impaired students learning L1 composition at Gallaudet University (Batson, 1988). The software application for computer-assisted classroom discussion (CACD) which is most widely used in foreign language classrooms is the Daedalus Integrated Writing Environment (Daedalus Inc., 1989) and its application InterChange. This software was developed in the 1980s in the English Department at the University of Texas at Austin by Fred Kemp, a scholar in composition studies, and colleagues. Social theories of writing instruction that emphasize the collaborative nature of meaning and writing were at the core of the Daedalus software as it was intended to be used in composition classes (see Barker & Kemp, 1990; and for a discussion of the concomitant social epistemic theory of writing, see Berlin, 1987). In foreign language classes, Daedalus also began to be used at the University of Texas at Austin in the early 1990s, but the orientation was more on target-language practice than on the development of writing skills. In the last six years, a small number of FL studies (and most recently ESL studies) have reported on the use of InterChange/Daedalus in CACD in various FL classes in universities in the United States, typically for general classroom discussion purposes rather than in connection with L2 writing instruction.

How does CACD work? During a typical Daedalus/InterChange session in the computer lab, each student sits in front of a computer terminal and is free to type in messages that can be sent by clicking on the "send" button on the screen. Sent messages appear on the upper half of all individual screens, displayed in the order in which they were sent and automatically identified with the name of the sender. All class members can read each other's comments at their own pace by scrolling up and down the sent-messages window, and they can write messages at their own leisure without interfering effects (freezing, etc.) from incoming messages.
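
The interaction model just described can be sketched in a few lines of code. The following is only an illustrative toy under stated assumptions, not the Daedalus/InterChange software itself; the class and method names (DiscussionSession, send, read) and the sample messages are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Message:
    """One contribution to the discussion, labeled with its sender."""
    sender: str
    text: str
    sent_at: datetime = field(default_factory=datetime.now)

class DiscussionSession:
    """Toy model of a synchronous classroom discussion session.

    Messages are stored in the order they are sent, and every participant
    reads from the same shared transcript, much like the read-only upper
    half of the screen described above.
    """

    def __init__(self) -> None:
        self._transcript: List[Message] = []

    def send(self, sender: str, text: str) -> None:
        # Composing happens privately; only "sending" publishes the message.
        self._transcript.append(Message(sender, text))

    def read(self, already_seen: int = 0) -> List[Message]:
        # Each student scrolls and reads at their own pace; `already_seen`
        # is how many messages that student has read so far.
        return self._transcript[already_seen:]

if __name__ == "__main__":
    session = DiscussionSession()
    session.send("Maria", "Je pense que l'auteur a raison.")
    session.send("Ken", "Je ne suis pas d'accord, parce que...")
    for message in session.read():
        print(f"[{message.sender}] {message.text}")
```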

Among the many different types of CALL activities available for second language instruction, CACD stands as a promising area for research in second language learning and teaching for several reasons. For one, conducting class discussions on a computer network entails meaningful use of the target language and moves teachers and students away from treating language as an object and toward using it as a medium of communication (e.g., Colomb & Simutis, 1996). Not only is CACD a communicative CALL activity in Underwood's (1984) sense, but it can promote a task- and interaction-driven approach to L2 learning and teaching which is the backdrop to concrete proposals for curriculum design superseding traditional communicative approaches (e.g., analytic and Type B syllabi as outlined in Wilkins, 1976, and White, 1988, respectively; see also detailed discussion of procedural, process, and task syllabi in Long & Crookes, 1993). The communicative investment and the meaningfulness and relevance achieved in many CACD discussions appear to provide for a context in which opportunities for language development are enhanced, since students are motivated to stretch their linguistic resources in order to meet the demands of real communication in a social context. In brief, the CACD environment appears to be optimal for devising CALL activities that facilitate and promote comprehensible output (Swain, 1985) within a holistic, process- and task-oriented approach to the L2 curriculum (Long & Crookes, 1993). Other benefits of CACD associated with language learning that have been repeatedly singled out in the FL literature are:

Learners are able to contribute as much as they want at their own pace and leisure; consequently, they tend to perceive CACD as less threatening and inhibiting than oral interactions and produce a high amount of writing, with all students participating to a high degree and all producing several turns/messages per session.
Because of the interactive nature of the writing, learners are expected to engage in a variety of interactive moves on the computer and to take control of managing the discussion.
Learners make use of the available opportunity to take time to plan their messages and edit them. In this way they engage in productive L2 strategies and processes.
Learners have exposure to a substantial amount of comprehensible input which is produced by peers of a similar level and shared background.
Learners get a considerable amount of reading practice in addition to writing practice. Because of at least two tasks (writing and reading) competing for the learner's investment, reading skills practiced may tend to be holistic (reading for the gist) and meaning-driven. In addition, learners are expected to be motivated to read because of an authentic sense of interactive audience provided by CACD.
These putative advantages of CACD have been anecdotally illustrated in the FL pedagogical literature (e.g., Chavez, 1997; Cononelos & Oliva, 1993; Nicholas & Toporski, 1993; Oliva & Pollastrani, 1995). Impressionistic documentations of the first attempts at using CACD in the L2 classroom (e.g., Beauvois, 1992, and Kelm, 1992) have opened the way to a small body of more recent research that suggests computer-mediated synchronous discussion affords a novel medium for L2 interaction that is perhaps inherently competence-expanding (Chun, 1994; Kern, 1995; Warschauer, 1996a).

From a more general pedagogic viewpoint, certain features of CACD can also have a great impact on subverting the traditional roles enacted by teachers and students in classrooms. This may foster critical changes in well-established pedagogical practices (e.g., Cooper & Selfe, 1990; Cummins & Sayers, 1990; Warschauer, 1996b) which may also affect the amount and quality of the language used and, hence, of the learning processes in the L2 classroom (Kern, 1995). However, more and more voices in the education and technology literature are acknowledging that it is not computers per se that can be beneficial or harmful, but the use we put them to (Barton, 1994). Indeed, the newest technologies can be made to serve the most traditional pedagogies, and the philosophies of language teachers can shape the uses of technology within the language curriculum so as to preserve a rather behavioristic view of language learning, as Warschauer (1997) has recently documented.

CACD IN THE L2 CLASSROOM: WHAT DO WE KNOW SO FAR?

Three claims about electronic synchronous communication have been the focus of most CACD research to date: (a) that CACD has an equalizing effect on participation; (b) that it increases learner productivity in terms of overall amount of language and/or ideas produced; and (c) that the language produced in electronic synchronous discussions can be expected to be more complex and formal than in face-to-face discussions, without losing the interactive nature of oral language. These three dimensions of CACD are the object of much anecdotal discussion and enthusiastic advocacy in the FL literature. However, empirical evidence in support of these characterizations of CACD needs to be drawn from a sound theoretical motivation within our field (see Doughty, 1987, for an early call in this area, and Chapelle, 1996): I would like to argue here that the benefits of CACD in the L2 classroom need to be evaluated not only from a pedagogical standpoint but also in light of our most current knowledge about how languages are learned. This can be done by examining how second language performance and, tentatively, second language learning are shaped by particular features of synchronous electronic discussion as a new environment for L2 interaction.

CACD as an Equalizer of Participation Structures

Accounts of the use of CACD in L1 and L2 classrooms identify equality of participation as one of the most pervasive and beneficial effects of using electronic synchronous discussion in L1 writing instruction (Hartman, Neuwirth, Kiesler, Sproull, Cochran, Palmquist, & Zubrow, 1991), FL instruction (Beauvois, 1992; Kelm, 1992; Kern, 1995), and ESL instruction (Sullivan & Pratt, 1996; Warschauer, 1996a).

The more equal participation pattern in electronic discussions may be attributed partly to the reduction of static and dynamic social context cues in computer-mediated communication (Hartman et al., 1991; Warschauer, 1996b), and partly to the absence of oral interaction constraints such as fear of interrupting or of being interrupted, the need to manage the floor and the transfer of speakership, and the need for interlocutors to co-orient to the production of sequentially relevant discourse (e.g., Schenkein, 1978). Additionally, in L2 CACD learners need not be concerned with pronunciation issues, which often require a high degree of attention and monitoring in the oral mode and may inhibit efforts at oral communication in the target language. The consequences are: (a) interactants are less apprehensive about being evaluated by interlocutors, and thus more willing to participate at their leisure; and (b) they are less affected by wait time, turn-taking, and other elements of traditional interaction, enabling them to participate as much as they want, whenever they want, with opportunities for contribution being more equally distributed among participants.

The effect of this equalizing power of synchronous electronic discussion is threefold. First, the traditional figure of the teacher as authority source and expert is subverted in that the role of the teacher during the electronic discussion is that of a mere participant (see Kern, 1995, and Warschauer, 1997). Hence, the teacher cannot dominate the floor and do most of the talking, and he or she cannot direct and redirect the development of the topic, pose display questions, nominate students as next speakers, or evaluate individual students' contributions, all of which are the norm in traditional teacher-fronted L1 and L2 classrooms (e.g., Cazden, 1988; Chaudron, 1988; McHoul, 1978; Sinclair & Coulthard, 1975). Second, and as a result of this change in teacher role, control of and responsibility for the electronic discussion are arguably incumbent on the students, who are afforded the opportunity to engage in self-generated, personally relevant communication involving a wide range of moves, functions, and meanings that may be facilitative in the development of communicative competence and overall language proficiency (Chun, 1994).

Third, all speakers share the floor more equally, and students that do not normally participate much in traditional classroom discussion seem to dramatically increase their participation in the electronic mode. Naturally, a more democratic and equitable participation in electronic discussions can in turn foster counterhegemonic pedagogies (Cooper & Selfe, 1990; Cummins & Sayers, 1990; but see Barton, 1994, Olson, 1987, and Warschauer, 1997, for caveats). This last aspect of the equalizing effect of electronic discussions has been the focus of many studies in L1 education literature, which present consistent evidence of increased participation on the part of so-called poorer performing students (Hartman et al., 1991), female students (Flores, 1990; Selfe, 1990), and shyer students (Bump, 1990).

In the FL literature, the small body of studies concerned with CACD seems to provide support for an equalizing effect of electronic discussion on participation patterns. Kern (1995) conducted a within-groups quasi-experimental comparison of a traditional whole-class discussion and an electronic discussion in two intact French classes, concluding that electronic discussion effects a radical change in the proportion of teacher versus student language production and that teacher discourse becomes less authoritative and less dominant. Chun (1994) undertook a descriptive study of the language produced in regular CACD sessions in a German class over a two-semester period. She provides information concerning the ratio of teacher output versus student output, the proportion of student-initiated and teacher-initiated messages, and the directionality of contributions produced in electronic synchronous discussions. Chun's descriptive approach is important in that she not only substantiates in her analysis an increase in learner production coupled with a decrease in teacher-centered discourse, but she also identifies concrete advantages of more democratic and equitable participation in terms of potential learner development in discoursal, interactional, and functional competence.

In their impressionistic accounts of electronic synchronous discussions involving Portuguese and French learners, Beauvois (1992) and Kelm (1992) report increases in the participation of shy students and low-motivated, unsuccessful language learners, even though the same students were perceived by their instructors as less willing to participate in oral discussions led by the teacher. Most of the cited studies also elicited information on students' impressions and evaluations of CACD and found that students themselves identified an increase in participation (and production) as one of the benefits of engaging in electronic discussions in the target language.

Although these studies provide some useful descriptive information, a methodological and conceptual problem within their comparisons obscures the interpretation of the findings. Namely, evidence of an equalizing effect in participation seems to be sought solely by focusing on comparisons between traditional teacher-led classroom discussions and whole-class electronic discussions, whether the comparison is experimental (Kern, 1995) or assumed as a frame of reference (Beauvois, 1992; Chun, 1994; Kelm, 1992). However, it is justified to hypothesize that group size and equality of participation are negatively related in traditional oral interactions and positively related in computer-assisted interactions, and that the benefits of electronic over non-electronic interactions will increase with the size of groups (Gallupe, Bastianutti, & Cooper, 1991). In other words, the positive equalizing effect of the electronic mode will be accentuated when comparing larger groups, as in comparisons of teacher-fronted, whole-class discussion with whole-class electronic discussion.

On the other hand, face-to-face discussions in communicative L2 classrooms are often conducted in small groups rather than as a whole class. Thus, in addition to whole-class comparisons, a more informative approach to investigating the equalizing effects of CACD in L2 classrooms would be to compare electronic versus non-electronic small group interactions of equal size, and to compare the relative effects in participation patterns when group size is equally reduced or increased in both modes. Two studies of electronic-assisted communication in ESL classes (Sullivan & Pratt, 1996; Warschauer, 1996a) are the only ones to date that have adopted such an approach and included comparisons of small group interactions in the oral and electronic modes. Both studies provide further evidence of a clear change in participation structures and a substantial increase in the amount of participation afforded to individuals in electronic classroom discussions.

Sullivan and Pratt (1996) studied both whole-class discussions and four-member group interactions in two ESL writing classes: one traditional (oral) class and one class that used the software InterChange, both of which were taught by the same teacher over the course of a semester. In the large-group-discussion comparison between groups, as in previous FL studies, Sullivan and Pratt focused on the proportion of teacher "talk" and student language production and confirmed the radical change in student/teacher participation structures in CACD already documented in the literature. Of particular interest, however, is the between-groups comparison of electronic and non-electronic peer response in four-member groups. Sullivan and Pratt's analysis of the quality of interactions here suggests that face-to-face oral discussions were dominated by the author of the essay discussed, whereas there was no one individual dominating the floor in the same type of discussions on the computer. As a result, the researchers claim, the quality and efficacy of peer suggestions for revision increased in the electronic mode.

Warschauer (1996a) undertook a within-groups, counterbalanced comparison of open-ended discussions in face-to-face and electronic discussions by four small groups of four students each. He calculated the ratio of total words produced by each speaker to the total number of words produced by the group and concluded that three out of the four groups showed greater equality of participation in the electronic discussion. More research is needed that compares participation patterns of face-to-face versus electronic small group discussions, carefully controlling group format as well as other task features so as to increase the validity of the experimental comparisons.
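
The participation measure Warschauer reports, each speaker's share of the group's total words, is straightforward to compute. The sketch below uses invented word counts purely for illustration; it is not Warschauer's data or code.

```python
from typing import Dict

def participation_shares(word_counts: Dict[str, int]) -> Dict[str, float]:
    """Each speaker's share of the group's total words (1.0 = all the talk)."""
    total = sum(word_counts.values())
    return {speaker: count / total for speaker, count in word_counts.items()}

# Invented word counts for one four-member group in each mode.
# With perfectly equal participation, every share would be 0.25.
face_to_face = {"S1": 310, "S2": 40, "S3": 95, "S4": 55}
electronic = {"S1": 180, "S2": 120, "S3": 150, "S4": 140}

for label, counts in (("face-to-face", face_to_face), ("electronic", electronic)):
    shares = participation_shares(counts)
    print(label, {speaker: round(share, 2) for speaker, share in shares.items()})
```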

CACD and Increases in Language Output and Learner Productivity

In close parallel to changes in teacher/learner participation patterns and equitable sharing of the floor among individual participants, another possible advantage of electronic synchronous discussions is an increase in the amount of participation per individual, that is, an increase in learner productivity. In the L1 literature on electronic interaction, productivity of ideas (e.g., Gallupe, Bastianutti, & Cooper, 1991) and provision of practice (e.g., DiMatteo, 1990, 1991) have been general concerns, especially in relation to uses of synchronous electronic discussion for the teaching of composition. More specifically, collaborative and process-oriented approaches to writing point to the value of discerning whether electronic discussions are an efficient way of increasing the volume of



The Internet TESL Journal
English Teachers' Barriers to the Use of Computer-assisted Language Learning
Kuang-wu Lee
Johnny [at] hcu.edu.tw
Hsuan Chuang University (Hsinchu, Taiwan)
Computers have been used for language teaching ever since the 1960's. This 40-year period can be divided into three main stages: behaviorist CALL, communicative CALL, and integrative CALL. Each stage corresponds to a certain level of technology and certain pedagogical theories. The reasons for using Computer-assisted Language Learning include: (a) experiential learning, (b) motivation, (c) enhanced student achievement, (d) authentic materials for study, (e) greater interaction, (f) individualization, (g) independence from a single source of information, and (h) global understanding. The barriers inhibiting the practice of Computer-assisted Language Learning can be classified into the following common categories: (a) financial barriers, (b) availability of computer hardware and software, (c) technical and theoretical knowledge, and (d) acceptance of the technology.
Introduction
In the last few years the number of teachers using Computer-assisted Language Learning (CALL) has increased markedly and numerous articles have been written about the role of technology in education in the 21st century. Although the potential of the Internet for educational use has not been fully explored yet and the average school still makes limited use of computers, it is obvious that we have entered a new information age in which the links between technology and TEFL have already been established.
In the early 90's education started being affected by the introduction of word processors in schools, colleges and universities. This mainly had to do with written assignments. The development of the Internet brought about a revolution in the teachers' perspective, as the teaching tools offered through the Internet were gradually becoming more reliable. Nowadays, the Internet is gaining immense popularity in foreign language teaching and more and more educators and learners are embracing it.

The History of CALL
Computers have been used for language teaching ever since the 1960's. According to Warschauer & Healey (1998), this 40-year period can be divided into three main stages: behaviorist CALL, communicative CALL, and integrative CALL. Each stage corresponds to a certain level of technology and certain pedagogical theories.
Behaviorist CALL
In the 1960's and 1970's, the first form of Computer-assisted Language Learning featured repetitive language drills, the so-called drill-and-practice method. It was based on the behaviorist learning model and as such the computer was viewed as little more than a mechanical tutor that never grew tired. Behaviorist CALL was first designed and implemented in the era of the mainframe, and the best-known tutorial system, PLATO, ran on its own special hardware. It was mainly used for extensive drills, explicit grammar instruction, and translation tests (Ahmad, et al., 1985).
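
The drill-and-practice pattern can be illustrated with a short sketch: present an item, give immediate right/wrong feedback, and recycle missed items until they are answered correctly. The vocabulary pairs and function names below are invented for illustration; this is not the PLATO system or any particular CALL package.

```python
# Invented vocabulary pairs; any item set would do.
drill_items = {
    "dog": "der Hund",
    "house": "das Haus",
    "book": "das Buch",
}

def run_drill(items, get_answer):
    """Prompt for each item, give immediate feedback, recycle missed items."""
    remaining = list(items)
    while remaining:
        word = remaining.pop(0)
        answer = get_answer(f"Translate '{word}': ")
        if answer.strip() == items[word]:
            print("Correct.")
        else:
            print(f"Wrong. The answer is '{items[word]}'.")
            remaining.append(word)  # the item comes back until it is right

if __name__ == "__main__":
    # Canned answers stand in for a learner; replace the lambda with `input`
    # to run the drill interactively.
    canned = iter(["der Hund", "das Haus", "ein Buch", "das Buch"])
    run_drill(drill_items, lambda prompt: (print(prompt), next(canned))[1])
```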
Communicative CALL
Communicative CALL emerged in the 1970's and 1980's as a reaction to the behaviorist approach to language learning. Proponents of communicative CALL rejected behaviorist approaches at both the theoretical and pedagogical level. They stressed that CALL should focus more on using forms than on the forms themselves. Grammar should be taught implicitly and students should be encouraged to generate original utterances instead of manipulating prefabricated forms (Jones & Fortescue, 1987; Philips, 1987). This form of computer-based instruction corresponded to cognitive theories which recognized that learning was a creative process of discovery, expression, and development. The mainframe was replaced by personal computers that allowed greater possibilities for individual work. Popular CALL software in this era included text reconstruction programs and simulations.
Integrative CALL
The last stage of Computer-assisted Language Learning is integrative CALL. Communicative CALL was criticized for using the computer in an ad hoc and disconnected fashion, so that the computer made 'a greater contribution to marginal rather than central elements' of language learning (Kenning & Kenning, 1990: 90). Teachers have moved away from a cognitive view of communicative language teaching to a socio-cognitive view that emphasizes real language use in a meaningful, authentic context. Integrative CALL seeks both to integrate the various skills of language learning (listening, speaking, writing, and reading) and to integrate technology more fully into language teaching (Warschauer & Healey, 1998). To this end the multimedia-networked computer provides a range of informational, communicative, and publishing tools that are potentially available to every student.
Why Use CALL?
Research and practice suggest that, appropriately implemented, network-based technology can contribute significantly to:

Experiential Learning
The World Wide Web makes it possible for students to tap into a huge range of human experience. In this way, they can learn by doing things themselves. They become the creators, not just the receivers, of knowledge. Because information is not presented linearly, users develop thinking skills as they choose what to explore.

Motivation
Computers are most popular among students either because they are associated with fun and games or because they are considered to be fashionable. Student motivation is therefore increased, especially whenever a variety of activities are offered, which make them feel more independent.

Enhanced Student Achievement
Network-based instruction can help pupils strengthen their linguistic skills by positively affecting their learning attitude and by helping them build self-instruction strategies and promote their self-confidence.

Authentic Materials for Study
All students can use various resources of authentic reading materials either at school or from their home. Those materials can be accessed 24 hours a day at a relatively low cost.

Greater Interaction
Random access to Web pages breaks the linear flow of instruction. By sending E-mail and joining newsgroups, EFL students can communicate with people they have never met. They can also interact with their own classmates. Furthermore, some Internet activities give students positive and negative feedback by automatically correcting their on-line exercises.

Individualization
Shy or inhibited students can benefit greatly from individualized, student-centered collaborative learning. High fliers can also realize their full potential without preventing their peers from working at their own pace.

Independence from a Single Source of Information
Although students can still use their books, they are given the chance to escape from canned knowledge and discover thousands of information sources. As a result, their education fulfils the need for interdisciplinary learning in a multicultural world.

Global Understanding
A foreign language is studied in a cultural context. In a world where the use of the Internet becomes more and more widespread, an English Language teacher's duty is to facilitate students' access to the web and make them feel citizens of a global classroom, practicing communication on a global level.
What Can We Do With CALL?
There is a wide range of on-line applications which are already available for use in the foreign language class. These include dictionaries and encyclopedias, links for teachers, chat-rooms, pronunciation tutors, grammar and vocabulary quizzes, games and puzzles, literary extracts. The World Wide Web (WWW) is a virtual library of information that can be accessed by any user around the clock. If someone wants to read or listen to the news, for example, there are a number of sources offering the latest news either printed or recorded. The most important newspapers and magazines in the world are available on-line and the same is the case with radio and TV channels.
Another example is communicating with electronic pen friends, something that most students would enjoy. Teachers should explain how it all works and help students find their keypals. Two EFL classes from different countries can arrange to send E-mail regularly to one another. This can be done quite easily thanks to the web sites providing lists of students looking for communication. It is also possible for two or more students to join a chat-room and talk on-line in real time.

Another network-based EFL activity could be project writing. By working on a project, a pupil can construct knowledge rather than only receive it. Students can work on their own, in pairs, or in larger teams, in order to write an assignment, the size of which may vary according to the objectives set by the instructor. A variety of sources can be used besides the Internet, such as school libraries, encyclopedias, reference books, etc. The Internet itself can provide a lot of food for thought. The final outcome of their research can be typed using a word processor. A word processor can be used in writing compositions, in preparing a class newsletter, or in producing a school home page. In such a Web page students can publish their project work so that it can reach a wider audience. That makes them feel more responsible for the final product and consequently makes them work harder.

The Internet and the rise of computer-mediated communication in particular have reshaped the uses of computers for language learning. The recent shift to global information-based economies means that students will need to learn how to deal with large amounts of information and have to be able to communicate across languages and cultures. At the same time, the role of the teacher has changed as well. Teachers are not the only source of information any more, but act as facilitators so that students can actively interpret and organize the information they are given, fitting it into prior knowledge (Dole, et al., 1991). Students have become active participants in learning and are encouraged to be explorers and creators of language rather than passive recipients of it (Brown, 1991). Integrative CALL stresses these issues and additionally lets learners of a language communicate inexpensively with other learners or native speakers. As such, it combines information processing, communication, use of authentic language, and learner autonomy, all of which are of major importance in current language learning theories.

Teachers' Barriers to the Use of Computer-assisted Language Learning
The barriers inhibiting the practice of Computer-assisted Language Learning can be classified into the following common categories: (a) financial barriers, (b) availability of computer hardware and software, (c) technical and theoretical knowledge, and (d) acceptance of the technology.
Financial Barriers
Financial barriers are mentioned most frequently in the literature by language education practitioners. They include the cost of hardware, software, maintenance (particularly of the most advanced equipment), and extend to some staff development. Froke (1994) said, "concerning the money, the challenge was unique because of the nature of the technology." Existing university policies and procedures for budgeting and accounting were well advanced for classroom instruction, and the costs of media were accounted for in the university as a part of the cost of instruction. The initial investment in hardware is high, which inhibits institutions from introducing advanced technologies, but Hooper (1995) predicts that the cost of computers will become so low that they will be available in most schools and homes in the future.
Lewis et al. (1994) indicate conditions under which Computer-assisted Learning and other technologies can be cost-effective: Computer-assisted Learning costs the same as conventional instruction but produces higher achievement in the same amount of instructional time, or it results in students achieving the same level but in less time. These authors indicate that in examples where the costs of using technologies in education are calculated, they are usually underestimated because the value of factors, such as faculty time and cost of equipment utilization, is ignored (McClelland, 1996).

Herschbach (1994) argues firmly that new technologies are add-on expenses and will not, in many cases, lower the cost of providing educational services. He stated that the new technologies probably will not replace teachers but will supplement their efforts, as has been the pattern with other technologies. As currently used, the technologies will not decrease educational costs or increase teacher productivity. Low usage causes the cost barrier: computers, interactive instructional TV, and other devices are used very few hours of the day, week, or month. Either the number of learners or the amount of time learners apply the technology must be increased substantially to approach cost-effectiveness. There are other, quicker, and less expensive ways of reducing costs, no matter how inexpensive the technology being used (Kincaid, McEachron, & McKinney, 1994).
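
The low-usage point can be made concrete with a back-of-the-envelope calculation: the cost per learner hour falls only as utilization rises, and it looks artificially low if factors such as faculty time are left out, as McClelland notes. All figures and the function name below are invented for illustration; they are not taken from any of the cited studies.

```python
def cost_per_learner_hour(hardware_cost, faculty_time_cost, learners,
                          hours_used_per_year, years_of_service=5.0):
    """Annualized cost divided by learner contact hours.

    Including faculty time reflects the point that leaving such factors
    out makes the technology look cheaper than it really is.
    """
    annual_cost = hardware_cost / years_of_service + faculty_time_cost
    learner_hours = learners * hours_used_per_year
    return annual_cost / learner_hours

# Invented figures: the same 25-seat lab used 2 vs. 20 hours a week, 40 weeks a year.
for weekly_hours in (2, 20):
    cost = cost_per_learner_hour(hardware_cost=30000,
                                 faculty_time_cost=5000,
                                 learners=25,
                                 hours_used_per_year=weekly_hours * 40)
    print(f"{weekly_hours:>2} hours/week -> ${cost:.2f} per learner hour")
```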

Availability of Computer Hardware and Software
The most significant aspects of computer technology are hardware and software. Availability of high-quality software is the most pressing challenge in applying the new technologies in education (Herschbach, 1994; Miller, 1997; Office of Technology Assessment, 1995; Noreburg & Lundblad, 1997). Underlying this problem is a lack of knowledge of what elements in software will promote different kinds of learning. Few educators are skilled in designing such software, and its development is costly and time-consuming (McClelland, 1996).
McClelland (1996) indicated that having sufficient hardware in locations where learners have access to it is problematic and is, of course, partly a financial problem. Computer hardware and software compatibility continues to be a significant problem. Choosing hardware is difficult because of the many choices of systems to be used in delivering education, the delivery of equipment, and the rapid changes in technology.

Technical and Theoretical Knowledge
A lack of technical and theoretical knowledge is another barrier to the use of Computer-assisted Language Learning technology. Not only is there a shortage of knowledge about developing software to promote learning, as shown above, but many instructors do not understand how to use the new technologies. Furthermore, little is known about integrating these new means of learning into an overall plan. In a communication between McClelland and C. Dede (1995), Dede indicated that the more powerful technologies, such as artificial intelligence in computers, might promote learning of higher-order cognitive skills that are difficult to assess with today's evaluation procedures and that, therefore, the resulting pedagogical gains may be under-valued. Improper use of technologies can affect both the teacher and learner negatively (Office of Technology Assessment, 1995).
Acceptance of Technologies
We live in a time of change. Gelatt (1995) stated that change itself has changed. Change has become so rapid, so turbulent, and so unpredictable that it is now called "white water" change (p. 10). Murphy and Terry (1998a) indicated that the currents of change move so quickly that they destroy what was considered the norm in the past and, by doing so, create new opportunities. But there is a natural tendency for organizations to resist change. Wrong conceptions about the use of technology limit innovation and threaten teachers' jobs and security (Zuber-Skerritt, 1994). Instructors tend not to use technologies that require substantially more preparation time, and it is difficult to provide instructors and learners with access to technologies that are easy to use (Herschbach, 1994).
Engaging in Computer-assisted Language Learning is a continuing challenge that requires time and commitment. As we approach the 21st century, we realize that technology as such is not the answer to all our problems. What really matters is how we use technology. Computers can never substitute for teachers, but they offer new opportunities for better language practice. They may actually make the process of language learning significantly richer and play a key role in the reform of a country's educational system. The next generation of students will feel a lot more confident with information technology than we do. As a result, they will also be able to use the Internet to communicate more effectively, practice language skills more thoroughly, and solve language learning problems more easily.

References
Belisle, R. (1996). E-mail activities in the ESL writing class. The Internet TESL Journal, II(12). http://iteslj.org/Articles/Belisle-Email.html
Benson, G. M., Jr. (1996). Combining Computer Assisted Instruction (CAI) and a live TV teacher to extend learning opportunities into the home. A learning productivity research and developmental project of the research foundation of the State University of New York and Instructional Systems Inc. Albany, NY: Instructional Systems Inc., State University of New York. (ERIC Doc. ED359936)
Boswood, T. (Ed.). (1997). New ways of using computers in language teaching. TESOL.
Bush, M. D., & Terry, R. M. (Eds.). (1996). Technology-enhanced language learning.
Dean, J. (1993). Alternative instructional delivery system: Implications for vocational education. The Visitor, 4, 2-4.
Froke, M. (1994). A vision and promise: Distance education at Penn State, Part 1: Toward an experience-based definition. The Journal of Continuing Higher Education, 42(2), 16-22.
Gelatt, H. B. (1995). Future sense: Creating the future. The Futurist, 3(2), 35-43.
Hahn, H. A. (1995). Distributed training for the reserve component: Course conversion and implementation guidelines for computer conferencing. (ERIC Doc. ED359916)
Herschbach, D. (1994). Addressing vocational training and retraining through educational technology: Policy alternatives (Information Series No. 276). Columbus, OH: The National Center for Research in Vocational Education.
Hill, M. (1995). What is new in telecommunication? Electronic Learning, (6), 16.
Kasper, L. F. (1998). ESL and the Internet: Content, rhetoric and research. Proceedings of Rhetoric and Technology in the New Millennium. http://members.aol.com/Drlfk/rhetoric.html
Kincaid, H., McEachron, N. B., & McKinney, D. (1994). Technology in public elementary and secondary education: A policy analysis perspective. Menlo Park, CA: Stanford Research Institute.
Miller, J. V. (1997). Questions about communications technologies for educators: An introduction. In N. M. Singer (Ed.), Communications technologies: Their effect on adult, career, and vocational education (Information Series No. 244, pp. 1-4). Columbus, OH: The National Center for Research in Vocational Education.
Mor, N. (1995). Computers in the ESL classroom: The switch from "why" to "how". http://ietn.snunit.k12.il/nili1.htm
Murphy, T. H., & Terry, R., Jr. (1998a). Adoption of CALL technologies in education: A national delphi. Proceedings of the Forty-Fourth Annual Southern Agricultural Education Research Meeting, 112-123.
Office of Technology Assessment. (1995). Information technology and its impact on American education. Washington, DC: U.S. Government Printing Office.
Ortega, L. (1997). Processes and outcomes in networked classroom interaction. Language Learning & Technology, 1(1), 82-93. http://polyglot.cal.msu.edu/llt/vol1num1/ortega/
Pickering, J. Teaching on the Internet is learning. Active Learning. http://www.cti.ac.uk/publ/actlea/issue2/pickering/
Power, M. A. (1996). Interactive ESL in-service teacher training via distance education. Paper presented at the Annual Conference of Teachers of English to Speakers of Other Languages.
Purdy, L. N. (Ed.). (1996). Reaching new students through new technologies: A reader. Dubuque: Kendall/Hunt Publishing Company.
Renner, C. E. (1998-1999). Learning to surf the net in the EFL classroom: Background information on the Internet. TESOL Greece Newsletter, 60 (Dec. 1998), 9-11, & 61 (Jan. 1999), 11-14.
Singhal, M. (1997). The Internet and foreign language education: Benefits and challenges. The Internet TESL Journal, III(6). http://iteslj.org/Articles/Singhal-Internet.html
Sperling, D. (1998). The Internet guide for English language teachers. Prentice-Hall Regents.
Spotts, T. H., & Bowman, M. A. (1995). Faculty use of instructional technologies in higher education. Educational Technologies, 35(2), 56-64.
Tanguay, E. English teachers, prepare yourselves for the digital age. http://userpage.fu-berlin.de/~tanguay/english-teachers.htm
Wilkenson, T. W., & Sherman, T. M. (1996). Telecommunications-based distance education: Who's doing what? Educational Technology, 21(11), 54-59.
Zuboff, S. (1988). In the age of the smart machine. New York: Basic Books.


The Internet TESL Journal, Vol. VI, No. 12, December 2000
http://iteslj.org/Articles/Lee-CALLbarriers.html


Linguistics and Second Language Acquisition


Vivian Cook (1993)
New York: St. Martin's Press
Pp. x + 313.
ISBN 0-312-10355-7 (paper); 0-312-10100-7 (cloth)
US $22.61 (paper); $45.00 (cloth)
Cook's Linguistics and Second Language Acquisition (hereafter LASLA) is a welcome addition to the recent spate of second language acquisition (SLA) texts which have flooded the market. It is unique in its focus on linguistics and SLA, and represents the tremendous growth in SLA research within a UG framework that has occurred in recent years. To my knowledge (as with Goodluck (1991) for first language acquisition), LASLA represents the first such text.
Chapter 1 begins with a disclaimer of sorts, that is, Cook makes clear that his focus is on linguistics and SLA, not psycholinguistics, sociolinguistics, or language teaching. He does, however, make specific reference to work in those areas. To my mind, this is one of the real strengths of this text: Cook is up-front about his domain of coverage, but by citing work outside that domain he evidences a breadth that is all too rare in a field obsessed with borders. Having issued this disclaimer, Cook goes on to provide a rather complete assessment of early work of relevance to SLA research, including reference to the work of Weinreich on up through interlanguage and approximative systems.

Chapter 2 covers sequences in SLA, and discusses both the morpheme studies and later studies of negation. The treatment of morpheme studies is excellent, especially the discussion of their many problems. Cook makes an important methodological point in noting that most of the morpheme studies were not truly cross-sectional, but rather what he calls single-moment studies. One of the helpful features introduced in this chapter and used throughout is boxed research summaries of major articles. Cook summarizes some of the major studies discussed in detail in each chapter, giving the aim, subjects, focus, type of data, method of analysis, and results, in a concise, readable form.

Chapter 3 examines in detail the theory of Stephen Krashen. While noting that Krashen's agenda does seem to be in line with that of linguistics (through, for example, appeal to a LAD), Cook also points out that Krashen's use of terms and concepts from linguistics is quite different from that intended by linguists. Nonetheless, LASLA provides a judicious overview of the arguments advanced for Krashen's theory, as well as a sound criticism of them. Overall, this is a very balanced and insightful critical summary of Krashen.

Despite the earlier disclaimer, Chapter 4 takes up the more prominent social/sociolinguistic approaches to SLA, that is, Acculturation, Pidginization, Creolization, and Variation Theory. The discussion of the latter is once again balanced and thorough, with reference to the work of Labov, Ellis, Tarone, Huebner, and Young. Cook concludes that variation represents a "rich new area for SLA research, perhaps showing promise for the future rather than concrete results in the present" (p. 89). Coming back to his focus on generative linguistics, he notes the fundamental differences between work on variation and Chomsky's idealized speaker-hearer, pointing out that this idealization is an acute problem for SLA research and that "syntactic variation may have to be reconciled with a description of competence in some way" (p. 91).

The work of Pienemann et al. is the focus of chapter 5. Cook presents a lucid explanation of Pienemann's rather complex Multidimensional Model/Teachability Hypothesis, which attempts to account both for a similar acquisition order across learners and for individual variation among learners. According to the research of Pienemann et al., the former is not amenable to instruction, but the latter may be. Cook criticizes the work of Pienemann et al. primarily on two counts: the reliance on word order and movement makes it unclear just how the model is to apply to languages without movement (such as Japanese), and the conception of movement is analogous to that of earlier transformational generative grammar and so does not fit well with current theory.

Chapter 6 looks at learning and communication strategies. The discussion of learner strategies focuses almost exclusively on the work of O'Malley et al. Cook provides a complete listing of the various strategies uncovered in this work, but he also takes O'Malley et al. to task for some serious methodological problems, such as the use of the native language (L1) for one group of subjects and the use of the second language (L2) for another during interviews. Cook also covers the work of Faerch and Kasper, Tarone, and the Nijmegen Project on communication strategies, but the main focus is on the Nijmegen Project, which, among other things, takes an approach based not only on linguistic realizations but processes as well. In keeping with discussions of methodology earlier on, Cook cites some of the methodological problems in strategies research, including use of introspective data and a taxonomic approach.

Relative clauses are dealt with in chapter 7. Much of the important research on SLA and relative clauses is discussed in light of the Keenan-Comrie hierarchy, which also receives a brief summary. Again, there is an insightful discussion of research methods (which by now is clearly one of the strengths of LASLA) in relative clause research. Cook notes that while the use of experimental data (e.g., comprehension tests, acceptability judgments, sentence combining) provides different perspectives on knowledge of language than observation of speech performance, it also introduces certain problems. Among these are a lack of comparability among data types, and, perhaps most significant, the fact that "most Second Language Acquisition researchers are not trained psychologists and so blithely undertake experiments that psychologists can easily find fault with" (p. 154).

In chapter 8, the topic turns more to the focus of LASLA by taking up Principles and Parameters syntax. Cook provides a good thumbnail sketch of important terms and concepts, but also refers the reader to his own in-depth treatment (Cook, 1988) and the more weighty introduction by Haegeman (1991). Given the usual lucid explanations, though, reference to those texts is not really necessary for understanding the discussion. Topics covered include X-bar syntax, the pro-drop parameter, binding, and the head-direction parameter. For each, the syntax is explained, followed by discussion of the major research in both L1 and L2 acquisition. This is a real strength because it underscores what is perhaps the most exciting aspect of current work in a GB framework: a partial convergence of L1 and L2 acquisition research and the potential for such work to inform linguistic theory. As before, there are insightful comments on research methods, and perhaps most striking, Cook ends the chapter with a few admonitions, namely, that "research that is specific to one particular syntactic analysis has a short shelf-life" (p. 198), and that there is the "danger that second language researchers may forget that their purpose is to discover how people learn L2s, not see if the latest fashion in linguistics can be applied to L2 research" (p. 199).

Chapter 9 is the second of two chapters dealing directly with linguistics and SLA, this time covering Universal Grammar (UG) and SLA. Cook provides an overview of some of the main tenets of UG, such as the poverty-of-stimulus argument, and then reviews the SLA research related to access to UG in SLA. Most of this has to do with subjacency and German word order, and Cook concludes that the evidence against access is rather murky. Also included is an excellent discussion of the use of grammaticality judgments in SLA research. Overall, LASLA adopts a rather balanced view of the limits of UG for SLA research. Cook notes that the "homogeneous" L1 competence that underlies L1 acquisition research is lacking for L2 learners, and that being concerned with core grammar, UG has "a part to play [in SLA research] but that part should not be exaggerated; much, or even most, of the totality of L2 learning lies outside the core" (p. 241). The chapter closes with a discussion of what Cook calls "multi-competence," that is, the "multilingual nature of most people's knowledge of language" (p. 245), and Cook rightly points out that this conflicts with one of the basic tenets of UG. He asserts that monolingual competence should not be the model for SLA research.

The final chapter of LASLA takes up research based on an assumption that runs directly counter to that of generative linguistics, that is, that knowledge of language is not acquired differently than other types of knowledge. Work discussed includes Anderson's ACT* (Adaptive Control of Thought, with the "*" indicating the most current version), McLaughlin's information processing, and MacWhinney's Competition Model. While noting that such cognitive approaches represent an alternative to a linguistics-based approach, Cook also points out several objections linguists have to such research, such as the reliance on negative evidence, the association between frequency and learning, and little or no consideration of grammatical structure. Before ending chapter 10, Cook reminds the reader that "linguistics is only one of the disciplines that SLA research can draw on" (p. 269), then closes by noting once again that SLA research has to come to terms with the fact that a monolingual model is not appropriate for the study of L2 competence.

Also included between chapter 10 and the references are activities for each chapter. Activities include problems involving the analysis of L2 data as well as questions designed to stimulate further thought and critical analysis of SLA research discussed in the text.

It is difficult to say anything bad about LASLA. Cook has accomplished well what he set out to do, and has done perhaps even more. Particularly welcome is the frequent discussion of research methodology, an area that needs considerable attention in SLA (and other applied linguistics) research. It is also refreshing to read a text written by someone who shows such a balanced view of his domain of coverage and its place in the larger scheme of things, and also evidences an awareness of (and more importantly a respect for) work going on outside that domain. While it is not a comprehensive SLA text and would not be ideal for all purposes (e.g., teacher training), LASLA cannot be faulted for not being something it was not intended to be. However, despite the initial disclaimer, LASLA does cover much of the ground of SLA research. Besides, Cook (1991) already covers much of what is omitted from LASLA, which is written mainly as a text for students in linguistics.

For that audience, it is an outstanding text.

References

Cook, V. (1988). Chomsky's Universal Grammar: An introduction. Oxford: Basil Blackwell.

Cook, V. (1991). Second language learning and language teaching. London: Edward Arnold.

Goodluck, H. (1991). Language acquisition: A linguistic introduction. Oxford: Basil Blackwell.

Haegeman, L. (1991). Introduction to Government and Binding Theory. Oxford: Basil Blackwell.

Kenneth R. Rose
Hong Kong Baptist College


Linguistics
the scientific study of language. The word was first used in the middle of the 19th century to emphasize the difference between a newer approach to the study of language that was then developing and the more traditional approach of philology. The differences were and are largely matters of attitude, emphasis, and purpose. The philologist is concerned primarily with the historical development of languages as it is manifest in written texts and in the context of the associated literature and culture. The linguist, though he may be interested in written texts and in the development of languages through time, tends to give priority to spoken languages and to the problems of analyzing them as they operate at a given point in time.


The field of linguistics may be divided in terms of three dichotomies: synchronic versus diachronic, theoretical versus applied, microlinguistics versus macrolinguistics. A synchronic description of a language describes the language as it is at a given time; a diachronic description is concerned with the historical development of the language and the structural changes that have taken place in it. The goal of theoretical linguistics is the construction of a general theory of the structure of language or of a general theoretical framework for the description of languages; the aim of applied linguistics is the application of the findings and techniques of the scientific study of language to practical tasks, especially to the elaboration of improved methods of language teaching. The terms microlinguistics and macrolinguistics are not yet well established, and they are, in fact, used here purely for convenience. The former refers to a narrower and the latter to a much broader view of the scope of linguistics. According to the microlinguistic view, languages should be analyzed for their own sake and without reference to their social function, to the manner in which they are acquired by children, to the psychological mechanisms that underlie the production and reception of speech, to the literary and the aesthetic or communicative function of language, and so on. In contrast, macrolinguistics embraces all of these aspects of language. Various areas within macrolinguistics have been given terminological recognition: psycholinguistics, sociolinguistics, anthropological linguistics, dialectology, mathematical and computational linguistics, and stylistics. Macrolinguistics should not be identified with applied linguistics. The application of linguistic methods and concepts to language teaching may well involve other disciplines in a way that microlinguistics does not. But there is, in principle, a theoretical aspect to every part of macrolinguistics, no less than to microlinguistics.

A large portion of this article is devoted to theoretical, synchronic microlinguistics, which is generally acknowledged as the central part of the subject; it will be abbreviated henceforth as theoretical linguistics.


History of linguistics » Earlier history » Non-Western traditions
Linguistic speculation and investigation, insofar as is known, have gone on in only a small number of societies. To the extent that Mesopotamian, Chinese, and Arabic learning dealt with grammar, their treatments were so enmeshed in the particularities of those languages and so little known to the European world until recently that they have had virtually no impact on Western linguistic tradition. Chinese linguistic and philological scholarship stretches back for more than two millennia, but the interest of those scholars was concentrated largely on phonetics, writing, and lexicography; their consideration of grammatical problems was bound up closely with the study of logic.

Certainly the most interesting non-Western grammatical tradition—and the most original and independent—is that of India, which dates back at least two and one-half millennia and which culminates with the grammar of Pāṇini, of the 5th century bc. There are three major ways in which the Sanskrit tradition has had an impact on modern linguistic scholarship. First, as soon as Sanskrit became known to the Western learned world, the unravelling of comparative Indo-European grammar ensued and the foundations were laid for the whole 19th-century edifice of comparative philology and historical linguistics. But, for this, Sanskrit was simply a part of the data; Indian grammatical learning played almost no direct part. Second, nineteenth-century workers recognized that the native tradition of phonetics in ancient India was vastly superior to Western knowledge; and this had important consequences for the growth of the science of phonetics in the West. Third, there is in the rules or definitions (sutras) of Pāṇini a remarkably subtle and penetrating account of Sanskrit grammar. The construction of sentences, compound nouns, and the like is explained through ordered rules operating on underlying structures in a manner strikingly similar in part to modes of contemporary theory. As might be imagined, this perceptive Indian grammatical work has held great fascination for 20th-century theoretical linguists. A study of Indian logic in relation to Pāṇinian grammar alongside Aristotelian and Western logic in relation to Greek grammar and its successors could bring illuminating insights.

Whereas in ancient Chinese learning a separate field of study that might be called grammar scarcely took root, in ancient India a sophisticated version of this discipline developed early alongside the other sciences. Even though the study of Sanskrit grammar may originally have had the practical aim of keeping the sacred Vedic texts and their commentaries pure and intact, the study of grammar in India in the 1st millennium bc had already become an intellectual end in itself.


History of linguistics » Earlier history » Greek and Roman antiquity
The emergence of grammatical learning in Greece is less clearly known than is sometimes implied, and the subject is more complex than is often supposed; here only the main strands can be sampled. The term hē grammatikē technē (“the art of letters”) had two senses. It meant the study of the values of the letters and of accentuation and prosody and, in this sense, was an abstract intellectual discipline; and it also meant the skill of literacy and thus embraced applied pedagogy. This side of what was to become “grammatical” learning was distinctly applied, particular, and less exalted by comparison with other pursuits. Most of the developments associated with theoretical grammar grew out of philosophy and criticism; and in these developments a repeated duality of themes crosses and intertwines.

Much of Greek philosophy was occupied with the distinction between that which exists “by nature” and that which exists “by convention.” So in language it was natural to account for words and forms as ordained by nature (by onomatopoeia—i.e., by imitation of natural sounds) or as arrived at arbitrarily by a social convention. This dispute regarding the origin of language and meanings paved the way for the development of divergences between the views of the “analogists,” who looked on language as possessing an essential regularity as a result of the symmetries that convention can provide, and the views of the “anomalists,” who pointed to language’s lack of regularity as one facet of the inescapable irregularities of nature. The situation was more complex, however, than this statement would suggest. For example, it seems that the anomalists among the Stoics credited the irrational quality of language precisely to the claim that language did not exactly mirror nature. In any event, the anomalist tradition in the hands of the Stoics brought grammar the benefit of their work in logic and rhetoric. This led to the distinction that, in modern theory, is made with the terms signifiant (“what signifies”) and signifié (“what is signified”) or, somewhat differently and more elaborately, with “expression” and “content”; and it laid the groundwork of modern theories of inflection, though by no means with the exhaustiveness and fine-grained analysis reached by the Sanskrit grammarians.

The Alexandrians, who were analogists working largely on literary criticism and text philology, completed the development of the classical Greek grammatical tradition. Dionysius Thrax, in the 2nd century bc, produced the first systematic grammar of Western tradition; it dealt only with word morphology. The study of sentence syntax was to wait for Apollonius Dyscolus, of the 2nd century ad. Dionysius called grammar “the acquaintance with [or observation of] what is uttered by poets and writers,” using a word meaning a less general form of knowledge than what might be called “science.” His typically Alexandrian literary goal is suggested by the headings in his work: pronunciation, poetic figurative language, difficult words, true and inner meanings of words, exposition of form-classes, literary criticism. Dionysius defined a sentence as a unit of sense or thought, but it is difficult to be sure of his precise meaning.

The Romans, who largely took over, with mild adaptations to their highly similar language, the total work of the Greeks, are important not as originators but as transmitters. Aelius Donatus, of the 4th century ad, and Priscian, an African of the 6th century, and their colleagues were slightly more systematic than their Greek models but were essentially retrospective rather than original. Up to this point a field that was at times called ars grammatica was a congeries of investigations, both theoretical and practical, drawn from the work and interests of literacy, scribeship, logic, epistemology, rhetoric, textual philology, poetics, and literary criticism. Yet modern specialists in the field still share their concerns and interests. The anomalists, who concentrated on surface irregularity and who then looked for regularities deeper down (as the Stoics sought them in logic), bear a resemblance to contemporary scholars of the transformationalist school. And the philological analogists with their regularizing surface segmentation show striking kinship of spirit with the modern school of structural (or taxonomic or glossematic) grammatical theorists.


History of linguistics » Earlier history » The European Middle Ages
It is possible that developments in grammar during the Middle Ages constitute one of the most misunderstood areas of the field of linguistics. It is difficult to relate this period coherently to other periods and to modern concerns because surprisingly little is accessible and certain, let alone analyzed with sophistication. In the early 1970s the majority of the known grammatical treatises had not yet been made available in full to modern scholarship, so that not even their true extent could be assessed with confidence. These works must be analyzed and studied in the light of medieval learning, especially the learning of the schools of philosophy then current, in order to understand their true value and place.

The field of linguistics has almost completely neglected the achievements of this period. Students of grammar have tended to see as high points in their field the achievements of the Greeks, the Renaissance growth and “rediscovery” of learning (which led directly to modern school traditions), the contemporary flowering of theoretical study (men usually find their own age important and fascinating), and, in recent decades, the astonishing monument of Pāṇini. Many linguists have found uncongenial the combination of medieval Latin learning and premodern philosophy. Yet medieval scholars might reasonably be expected to have bequeathed to modern scholarship the fruits of more than ordinarily refined perceptions of a certain order. These scholars used, wrote in, and studied Latin, a language that, though not their native tongue, was one in which they were very much at home; such scholars in groups must often have represented a highly varied linguistic background.

Some of the medieval treatises continue the tradition of grammars of late antiquity; so there are versions based on Donatus and Priscian, often with less incorporation of the classical poets and writers. Another genre of writing involves simultaneous consideration of grammatical distinctions and scholastic logic; modern linguists are probably inadequately trained to deal with these writings.

Certainly the most obviously interesting theorizing to be found in this period is contained in the “speculative grammar” of the modistae, who were so called because the titles of their works were often phrased De modis significandi tractatus (“Treatise Concerning the Modes of Signifying”). For the development of the Western grammatical tradition, work of this genre was the second great milestone after the crystallization of Greek thought with the Stoics and Alexandrians. The scholastic philosophers were occupied with relating words and things—i.e., the structure of sentences with the nature of the real world—hence their preoccupation with signification. The aim of the grammarians was to explore how a word (an element of language) matched things apprehended by the mind and how it signified reality. Since a word cannot signify the nature of reality directly, it must stand for the thing signified in one of its modes or properties; it is this discrimination of modes that the study of categories and parts of speech is all about. Thus the study of sentences should lead one to the nature of reality by way of the modes of signifying.

The modistae did not innovate in discriminating categories and parts of speech; they accepted those that had come down from the Greeks through Donatus and Priscian. The great contribution of these grammarians, who flourished between the mid-13th and mid-14th century, was their insistence on a grammar to explicate the distinctions found by their forerunners in the languages known to them. Whether they made the best choice in selecting logic, metaphysics, and epistemology (as they knew them) as the fields to be included with grammar as a basis for the grand account of universal knowledge is less important than the breadth of their conception of the place of grammar. Before the modistae, grammar had not been viewed as a separate discipline but had been considered in conjunction with other studies or skills (such as criticism, preservation of valued texts, foreign-language learning). The Greek view of grammar was rather narrow and fragmented; the Roman view was largely technical. The speculative medieval grammarians (who dealt with language as a speculum, “mirror” of reality) inquired into the fundamentals underlying language and grammar. They wondered whether grammarians or philosophers discovered grammar, whether grammar was the same for all languages, what the fundamental topic of grammar was, and what the basic and irreducible grammatical primes are. Signification was reached by imposition of words on things; i.e., the sign was arbitrary. Those questions sound remarkably like current issues of linguistics, which serves to illustrate how slow and repetitious progress in the field is. While the modistae accepted, by modern standards, a restrictive set of categories, the acumen and sweep they brought to their task resulted in numerous subtle and fresh syntactic observations. A thorough study of the medieval period would greatly enrich the discussion of current questions.


History of linguistics » Earlier history » The Renaissance
It is customary to think of the Renaissance as a time of great flowering. There is no doubt that linguistic and philological developments of this period are interesting and significant. Two new sets of data that modern linguists tend to take for granted became available to grammarians during this period: (1) the newly recognized vernacular languages of Europe, for the protection and cultivation of which there subsequently arose national academies and learned institutions that live down to the present day; and (2) the exotic languages of Africa, the Orient, the New World, and, later, of Siberia, Inner Asia, Papua, Oceania, the Arctic, and Australia, which the voyages of discovery opened up. Earlier, the only non-Indo-European grammar at all widely accessible was that of the Hebrews (and to some extent Arabic); and Semitic in fact shares many categories with Indo-European in its grammar. Indeed, for many of the exotic languages scholarship barely passed beyond the most rudimentary initial collection of word lists; grammatical analysis was scarcely approached.

In the field of grammar, the Renaissance did not produce notable innovation or advance. Generally speaking, there was a strong rejection of speculative grammar and a relatively uncritical resumption of late Roman views (as stated by Priscian). This was somewhat understandable in the case of Latin or Greek grammars, since here the task was less evidently that of intellectual inquiry and more that of the schools, with the practical aim of gaining access to the newly discovered ancients. But, aside from the fact that, beginning in the 15th century, serious grammars of European vernaculars were actually written, it is only in particular cases and for specific details (e.g., a mild alteration in the number of parts of speech or cases of nouns) that real departures from Roman grammar can be noted. Likewise, until the end of the 19th century, grammars of the exotic languages, written largely by missionaries and traders, were cast almost entirely in the Roman model, to which the Renaissance had added a limited medieval syntactic ingredient.

From time to time a degree of boldness may be seen in France: Petrus Ramus, a 16th-century logician, worked within a taxonomic framework of the surface shapes of words and inflections, such work entailing some of the attendant trivialities that modern linguistics has experienced (e.g., by dividing up Latin nouns on the basis of equivalence of syllable count among their case forms). In the 17th century a group of Jansenists (followers of the Flemish Roman Catholic reformer Cornelius Otto Jansen) associated with the abbey of Port-Royal in France produced a grammar that has exerted noteworthy continuing influence, even in contemporary theoretical discussion. Drawing their basic view from scholastic logic as modified by rationalism, these people aimed to produce a philosophical grammar that would capture what was common to the grammars of languages—a general grammar, but not aprioristically universalist. This grammar attracted attention from the mid-20th century because it employs certain syntactic formulations that resemble rules of modern transformational grammar.

Roughly from the 15th century to World War II, however, the version of grammar available to the Western public (together with its colonial expansion) remained basically that of Priscian with only occasional and subsidiary modifications, and the knowledge of new languages brought only minor adjustments to the serious study of grammar. As education has become more broadly disseminated throughout society by the schools, attention has shifted from theoretical or technical grammar as an intellectual preoccupation to prescriptive grammar suited to pedagogical purposes, a shift that began with Renaissance vernacular nationalism. Grammar increasingly parted company with its older fellow disciplines within philosophy as they moved over to the domain known as natural science, and technical academic grammatical study has increasingly become involved with issues represented by empiricism versus rationalism and their successor manifestations on the academic scene.

Nearly down to the present day, the grammar of the schools has had only tangential connections with the studies pursued by professional linguists; for most people prescriptive grammar has become synonymous with “grammar,” and the prevailing view held by educated people regards grammar as an item of folk knowledge open to speculation by all, and in nowise a formal science requiring adequate preparation such as is assumed for chemistry.

Eric P. Hamp
Ed.


History of linguistics » The 19th century » Development of the comparative method
It is generally agreed that the most outstanding achievement of linguistic scholarship in the 19th century was the development of the comparative method, which comprised a set of principles whereby languages could be systematically compared with respect to their sound systems, grammatical structure, and vocabulary and shown to be “genealogically” related. As French, Italian, Portuguese, Romanian, Spanish, and the other Romance languages had evolved from Latin, so Latin, Greek, and Sanskrit as well as the Celtic, Germanic, and Slavic languages and many other languages of Europe and Asia had evolved from some earlier language, to which the name Indo-European or Proto-Indo-European is now customarily applied. That all the Romance languages were descended from Latin and thus constituted one “family” had been known for centuries; but the existence of the Indo-European family of languages and the nature of their genealogical relationship was first demonstrated by the 19th-century comparative philologists. (The term philology in this context is not restricted to the study of literary languages.)

The main impetus for the development of comparative philology came toward the end of the 18th century, when it was discovered that Sanskrit bore a number of striking resemblances to Greek and Latin. An English orientalist, Sir William Jones, though he was not the first to observe these resemblances, is generally given the credit for bringing them to the attention of the scholarly world and putting forward the hypothesis, in 1786, that all three languages must have “sprung from some common source, which perhaps no longer exists.” By this time, a number of texts and glossaries of the older Germanic languages (Gothic, Old High German, and Old Norse) had been published, and Jones realized that Germanic as well as Old Persian and perhaps Celtic had evolved from the same “common source.” The next important step came in 1822, when the German scholar Jacob Grimm, following the Danish linguist Rasmus Rask (whose work, being written in Danish, was less accessible to most European scholars), pointed out in the second edition of his comparative grammar of Germanic that there were a number of systematic correspondences between the sounds of Germanic and the sounds of Greek, Latin, and Sanskrit in related words. Grimm noted, for example, that where Gothic (the oldest surviving Germanic language) had an f, Latin, Greek, and Sanskrit frequently had a p (e.g., Gothic fotus, Latin pedis, Greek podós, Sanskrit padás, all meaning “foot”); when Gothic had a p, the non-Germanic languages had a b; when Gothic had a b, the non-Germanic languages had what Grimm called an “aspirate” (Latin f, Greek ph, Sanskrit bh). In order to account for these correspondences he postulated a cyclical “soundshift” (Lautverschiebung) in the prehistory of Germanic, in which the original “aspirates” became voiced unaspirated stops (bh became b, etc.), the original voiced unaspirated stops became voiceless (b became p, etc.), and the original voiceless (unaspirated) stops became “aspirates” (p became f). Grimm’s term, “aspirate,” it will be noted, covered such phonetically distinct categories as aspirated stops (bh, ph), produced with an accompanying audible puff of breath, and fricatives (f ), produced with audible friction as a result of incomplete closure in the vocal tract.
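
As a rough illustration of the regularity of these correspondences, the shift can be expressed as a simple substitution table applied to a romanized form. The Python sketch below is purely schematic: the transliteration, the choice of "th" and "h" for the Germanic reflexes of t and k, and the neglect of vowels and of all conditioning factors are simplifying assumptions made for the example.

    # A toy rendering of Grimm's cyclical "soundshift" as a substitution
    # table over a romanized proto-form.  Aspirates become voiced stops,
    # voiced stops become voiceless stops, and voiceless stops become
    # "aspirates" (written here as fricatives).  Vowels and everything
    # else pass through unchanged.
    GRIMM_SHIFT = {
        "bh": "b", "dh": "d", "gh": "g",   # aspirates -> voiced stops
        "b": "p", "d": "t", "g": "k",      # voiced stops -> voiceless stops
        "p": "f", "t": "th", "k": "h",     # voiceless stops -> fricatives
    }

    def apply_grimm(form: str) -> str:
        """Apply the shift left to right, matching two-letter symbols first."""
        result, i = [], 0
        while i < len(form):
            two = form[i:i + 2]
            if len(two) == 2 and two in GRIMM_SHIFT:
                result.append(GRIMM_SHIFT[two])
                i += 2
            elif form[i] in GRIMM_SHIFT:
                result.append(GRIMM_SHIFT[form[i]])
                i += 1
            else:
                result.append(form[i])    # vowels and other sounds unchanged
                i += 1
        return "".join(result)

    print(apply_grimm("pod"))   # -> "fot"  (compare Latin ped-, Gothic fotus)
    print(apply_grimm("dekm"))  # -> "tehm" (compare Latin decem, Gothic taihun)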

In the work of the next 50 years the idea of sound change was made more precise, and, in the 1870s, a group of scholars known collectively as the Junggrammatiker (“young grammarians,” or Neogrammarians) put forward the thesis that all changes in the sound system of a language as it developed through time were subject to the operation of regular sound laws. Though the thesis that sound laws were absolutely regular in their operation (unless they were inhibited in particular instances by the influence of analogy) was at first regarded as most controversial, by the end of the 19th century it was quite generally accepted and had become the cornerstone of the comparative method. Using the principle of regular sound change, scholars were able to reconstruct “ancestral” common forms from which the later forms found in particular languages could be derived. By convention, such reconstructed forms are marked in the literature with an asterisk. Thus, from the reconstructed Proto-Indo-European word for “ten,” *dekm, it was possible to derive Sanskrit daśa, Greek déka, Latin decem, and Gothic taihun by postulating a number of different sound laws that operated independently in the different branches of the Indo-European family. The question of sound change is dealt with in greater detail in the section entitled Historical (diachronic) linguistics.


History of linguistics » The 19th century » The role of analogy
Analogy has been mentioned in connection with its inhibition of the regular operation of sound laws in particular word forms. This was how the Neogrammarians thought of it. In the course of the 20th century, however, it has come to be recognized that analogy, taken in its most general sense, plays a far more important role in the development of languages than simply that of sporadically preventing what would otherwise be a completely regular transformation of the sound system of a language. When a child learns to speak he tends to regularize the anomalous, or irregular, forms by analogy with the more regular and productive patterns of formation in the language; e.g., he will tend to say “comed” rather than “came,” “dived” rather than “dove,” and so on, just as he will say “talked,” “loved,” and so forth. The fact that the child does this is evidence that he has learned or is learning the regularities or rules of his language. He will go on to “unlearn” some of the analogical forms and substitute for them the anomalous forms current in the speech of the previous generation. But in some cases, he will keep a “new” analogical form (e.g., “dived” rather than “dove”), and this may then become the recognized and accepted form.


History of linguistics » The 19th century » Other 19th-century theories and development » Inner and outer form
One of the most original, if not one of the most immediately influential, linguists of the 19th century was the learned Prussian statesman, Wilhelm von Humboldt (died 1835). His interests, unlike those of most of his contemporaries, were not exclusively historical. Following the German philosopher Johann Gottfried von Herder (1744–1803), he stressed the connection between national languages and national character: this was but a commonplace of romanticism. More original was Humboldt’s theory of “inner” and “outer” form in language. The outer form of language was the raw material (the sounds) from which different languages were fashioned; the inner form was the pattern, or structure, of grammar and meaning that was imposed upon this raw material and differentiated one language from another. This “structural” conception of language was to become dominant, for a time at least, in many of the major centres of linguistics by the middle of the 20th century. Another of Humboldt’s ideas was that language was something dynamic, rather than static, and was an activity itself rather than the product of activity. A language was not a set of actual utterances produced by speakers but the underlying principles or rules that made it possible for speakers to produce such utterances and, moreover, an unlimited number of them. This idea was taken up by a German philologist, Heymann Steinthal, and, what is more important, by the physiologist and psychologist Wilhelm Wundt, and thus influenced late 19th- and early 20th-century theories of the psychology of language. Its influence, like that of the distinction of inner and outer form, can also be seen in the thought of Ferdinand de Saussure, a Swiss linguist. But its full implications were probably not perceived and made precise until the middle of the 20th century, when the U.S. linguist Noam Chomsky re-emphasized it and made it one of the basic notions of generative grammar (see below Transformational-generative grammar).

History of linguistics » The 19th century » Other 19th-century theories and development » Phonetics and dialectology
Many other interesting and important developments occurred in 19th-century linguistic research, among them work in the areas of phonetics and dialectology. Research in both these fields was promoted by the Neogrammarians’ concern with sound change and by their insistence that prehistoric developments in languages were of the same kind as developments taking place in the languages and dialects currently spoken. The development of phonetics in the West was also strongly influenced at this period, as were many of the details of the more philological analysis of the Indo-European languages, by the discovery of the works of the Indian grammarians who, from the time of the Sanskrit grammarian Pāṇini (5th or 6th century bc), if not before, had arrived at a much more comprehensive and scientific theory of phonetics, phonology, and morphology than anything achieved in the West until the modern period.


History of linguistics » The 20th century » Structuralism
The term structuralism has been used as a slogan and rallying cry by a number of different schools of linguistics, and it is necessary to realize that it has somewhat different implications according to the context in which it is employed. It is convenient to draw first a broad distinction between European and American structuralism and, then, to treat them separately.

History of linguistics » The 20th century » Structuralism » Structural linguistics in Europe
Structural linguistics in Europe is generally said to have begun in 1916 with the posthumous publication of the Cours de Linguistique Générale (Course in General Linguistics) of Ferdinand de Saussure. Much of what is now considered as Saussurean can be seen, though less clearly, in the earlier work of Humboldt, and the general structural principles that Saussure was to develop with respect to synchronic linguistics in the Cours had been applied almost 40 years before (1879) by Saussure himself in a reconstruction of the Indo-European vowel system. The full significance of the work was not appreciated at the time. Saussure’s structuralism can be summed up in two dichotomies (which jointly cover what Humboldt referred to in terms of his own distinction of inner and outer form): (1) langue versus parole and (2) form versus substance. By langue, best translated in its technical Saussurean sense as language system, is meant the totality of regularities and patterns of formation that underlie the utterances of a language; by parole, which can be translated as language behaviour, is meant the actual utterances themselves. Just as two performances of a piece of music given by different orchestras on different occasions will differ in a variety of details and yet be identifiable as performances of the same piece, so two utterances may differ in various ways and yet be recognized as instances, in some sense, of the same utterance. What the two musical performances and the two utterances have in common is an identity of form, and this form, or structure, or pattern, is in principle independent of the substance, or “raw material,” upon which it is imposed. “Structuralism,” in the European sense then, refers to the view that there is an abstract relational structure that underlies and is to be distinguished from actual utterances—a system underlying actual behaviour—and that this is the primary object of study for the linguist.

Two important points arise here: first, that the structural approach is not in principle restricted to synchronic linguistics; second, that the study of meaning, as well as the study of phonology and grammar, can be structural in orientation. In both cases “structuralism” is opposed to “atomism” in the European literature. It was Saussure who drew the terminological distinction between synchronic and diachronic linguistics in the Cours; despite the undoubtedly structural orientation of his own early work in the historical and comparative field, he maintained that, whereas synchronic linguistics should deal with the structure of a language system at a given point in time, diachronic linguistics should be concerned with the historical development of isolated elements—it should be atomistic. Whatever the reasons that led Saussure to take this rather paradoxical view, his teaching on this point was not generally accepted, and scholars soon began to apply structural concepts to the diachronic study of languages. The most important of the various schools of structural linguistics to be found in Europe in the first half of the 20th century have included the Prague school, most notably represented by Nikolay Sergeyevich Trubetskoy (died 1938) and Roman Jakobson (died 1982), both Russian émigrés, and the Copenhagen (or glossematic) school, centred around Louis Hjelmslev (died 1965). John Rupert Firth (died 1960) and his followers, sometimes referred to as the London school, were less Saussurean in their approach, but, in a general sense of the term, their approach may also be described appropriately as structural linguistics.

History of linguistics » The 20th century » Structuralism » Structural linguistics in America
American and European structuralism shared a number of features. In insisting upon the necessity of treating each language as a more or less coherent and integrated system, both European and American linguists of this period tended to emphasize, if not to exaggerate, the structural uniqueness of individual languages. There was especially good reason to take this point of view given the conditions in which American linguistics developed from the end of the 19th century. There were hundreds of indigenous American Indian languages that had never been previously described. Many of these were spoken by only a handful of speakers and, if they were not recorded before they became extinct, would be permanently inaccessible. Under these circumstances, such linguists as Franz Boas (died 1942) were less concerned with the construction of a general theory of the structure of human language than they were with prescribing sound methodological principles for the analysis of unfamiliar languages. They were also fearful that the description of these languages would be distorted by analyzing them in terms of categories derived from the analysis of the more familiar Indo-European languages.

After Boas, the two most influential American linguists were Edward Sapir (died 1939) and Leonard Bloomfield (died 1949). Like his teacher Boas, Sapir was equally at home in anthropology and linguistics, the alliance of which disciplines has endured to the present day in many American universities. Boas and Sapir were both attracted by the Humboldtian view of the relationship between language and thought, but it was left to one of Sapir’s pupils, Benjamin Lee Whorf, to present it in a sufficiently challenging form to attract widespread scholarly attention. Since the republication of Whorf’s more important papers in 1956, the thesis that language determines perception and thought has come to be known as the Sapir-Whorf hypothesis, or the theory of linguistic relativity.

Sapir’s work has always held an attraction for the more anthropologically inclined American linguists. But it was Bloomfield who prepared the way for the later phase of what is now thought of as the most distinctive manifestation of American “structuralism.” When he published his first book in 1914, Bloomfield was strongly influenced by Wundt’s psychology of language. In 1933, however, he published a drastically revised and expanded version with the new title Language; this book dominated the field for the next 30 years. In it Bloomfield explicitly adopted a behaviouristic approach to the study of language, eschewing in the name of scientific objectivity all reference to mental or conceptual categories. Of particular consequence was his adoption of the behaviouristic theory of semantics according to which meaning is simply the relationship between a stimulus and a verbal response. Because science was still a long way from being able to give a comprehensive account of most stimuli, no significant or interesting results could be expected from the study of meaning for some considerable time, and it was preferable, as far as possible, to avoid basing the grammatical analysis of a language on semantic considerations. Bloomfield’s followers pushed even further the attempt to develop methods of linguistic analysis that were not based on meaning. One of the most characteristic features of “post-Bloomfieldian” American structuralism, then, was its almost complete neglect of semantics.

Another characteristic feature, one that was to be much criticized by Chomsky, was its attempt to formulate a set of “discovery procedures”—procedures that could be applied more or less mechanically to texts and could be guaranteed to yield an appropriate phonological and grammatical description of the language of the texts. Structuralism, in this narrower sense of the term, is represented, with differences of emphasis or detail, in the major American textbooks published during the 1950s.


History of linguistics » The 20th century » Transformational-generative grammar
The most significant development in linguistic theory and research in the 20th century was the rise of generative grammar, and, more especially, of transformational-generative grammar, or transformational grammar, as it came to be known. Two versions of transformational grammar were put forward in the mid-1950s, the first by Zellig S. Harris and the second by Noam Chomsky, his pupil. It is Chomsky’s system that has attracted the most attention so far. As first presented by Chomsky in Syntactic Structures (1957), transformational grammar can be seen partly as a reaction against post-Bloomfieldian structuralism and partly as a continuation of it. What Chomsky reacted against most strongly was the post-Bloomfieldian concern with discovery procedures. In his opinion, linguistics should set itself the more modest and more realistic goal of formulating criteria for evaluating alternative descriptions of a language without regard to the question of how these descriptions had been arrived at. The statements made by linguists in describing a language should, however, be cast within the framework of a far more precise theory of grammar than had hitherto been the case, and this theory should be formalized in terms of modern mathematical notions. Within a few years, Chomsky had broken with the post-Bloomfieldians on a number of other points also. He had adopted what he called a “mentalistic” theory of language, by which term he implied that the linguist should be concerned with the speaker’s creative linguistic competence and not his performance, the actual utterances produced. He had challenged the post-Bloomfieldian concept of the phoneme (see below), which many scholars regarded as the most solid and enduring result of the previous generation’s work. And he had challenged the structuralists’ insistence upon the uniqueness of every language, claiming instead that all languages were, to a considerable degree, cut to the same pattern—they shared a certain number of formal and substantive universals.


History of linguistics » The 20th century » Tagmemic, stratificational, and other approaches
The effect of Chomsky’s ideas has been phenomenal. It is hardly an exaggeration to say that there is no major theoretical issue in linguistics today that is debated in terms other than those in which he has chosen to define it, and every school of linguistics tends to define its position in relation to his. Among the rival schools are tagmemics, stratificational grammar, and the Prague school. Tagmemics is the system of linguistic analysis developed by the U.S. linguist Kenneth L. Pike and his associates in connection with their work as Bible translators. Its foundations were laid during the 1950s, when Pike differed from the post-Bloomfieldian structuralists on a number of principles, and it has been further elaborated since then. Tagmemic analysis has been used for analyzing a great many previously unrecorded languages, especially in Central and South America and in West Africa. Stratificational grammar, developed by a U.S. linguist, Sydney M. Lamb, was seen by some linguists in the 1960s and ’70s as an alternative to transformational grammar. Not yet fully expounded or widely exemplified in the analysis of different languages, stratificational grammar is perhaps best characterized as a radical modification of post-Bloomfieldian linguistics, but it has many features that link it with European structuralism. The Prague school has been mentioned above for its importance in the period immediately following the publication of Saussure’s Cours. Many of its characteristic ideas (in particular, the notion of distinctive features in phonology) have been taken up by other schools. But there has been further development in Prague of the functional approach to syntax (see below). The work of M.A.K. Halliday derived much of its original inspiration from Firth (above), but Halliday provided a more systematic and comprehensive theory of the structure of language than Firth had, and it has been quite extensively illustrated.


Methods of synchronic linguistic analysis » Structural linguistics
This section is concerned mainly with a version of structuralism (which may also be called descriptive linguistics) developed by scholars working in a post-Bloomfieldian tradition.



Methods of synchronic linguistic analysis » Structural linguistics » Phonology
With the great progress made in phonetics in the late 19th century, it had become clear that the question whether two speech sounds were the same or not was more complex than might appear at first sight. Two utterances of what was taken to be the same word might differ quite perceptibly from one occasion of utterance to the next. Some of this variation could be attributed to a difference of dialect or accent and is of no concern here. But even two utterances of the same word by the same speaker might vary from one occasion to the next. Variation of this kind, though it is generally less obvious and would normally pass unnoticed, is often clear enough to the trained phonetician and is measurable instrumentally. It is known that the “same” word is being uttered, even if the physical signal produced is variable, in part, because the different pronunciations of the same word will cluster around some acoustically identifiable norm. But this is not the whole answer, because it is actually impossible to determine norms of pronunciation in purely acoustic terms. Once it has been decided what counts as “sameness” of sound from the linguistic point of view, the permissible range of variation for particular sounds in particular contexts can be measured, and, within certain limits, the acoustic cues for the identification of utterances as “the same” can be determined.

What is at issue is the difference between phonetic and phonological (or phonemic) identity, and for these purposes it will be sufficient to define phonetic identity in terms solely of acoustic “sameness.” Absolute phonetic identity is a theoretical ideal never fully realized. From a purely phonetic point of view, sounds are more or less similar, rather than absolutely the same or absolutely different. Speech sounds considered as units of phonetic analysis in this article are called phones, and, following the normal convention, are represented by enclosing the appropriate alphabetic symbol in square brackets. Thus [p] will refer to a p sound (i.e., what is described more technically as a voiceless, bilabial stop); and [pit] will refer to a complex of three phones—a p sound, followed by an i sound, followed by a t sound. A phonetic transcription may be relatively broad (omitting much of the acoustic detail) or relatively narrow (putting in rather more of the detail), according to the purpose for which it is intended. A very broad transcription will be used in this article except when finer phonetic differences must be shown.

Phonological, or phonemic, identity was referred to above as “sameness of sound from the linguistic point of view.” Considered as phonological units—i.e., from the point of view of their function in the language—sounds are described as phonemes and are distinguished from phones by enclosing their appropriate symbol (normally, but not necessarily, an alphabetic one) between two slash marks. Thus /p/ refers to a phoneme that may be realized on different occasions of utterance or in different contexts by a variety of more or less different phones. Phonological identity, unlike phonetic similarity, is absolute: two phonemes are either the same or different, they cannot be more or less similar. For example, the English words “bit” and “pit” differ phonemically in that the first has the phoneme /b/ and the second has the phoneme /p/ in initial position. As the words are normally pronounced, the phonetic realization of /b/ will differ from the phonetic realization of /p/ in a number of different ways: it will be at least partially voiced (i.e., there will be some vibration of the vocal cords), it will be without aspiration (i.e., there will be no accompanying slight puff of air, as there will be in the case of the phone realizing /p/), and it will be pronounced with less muscular tension. It is possible to vary any one or all of these contributory differences, making the phones in question more or less similar, and it is possible to reduce the phonetic differences to the point that the hearer cannot be certain which word, “bit” or “pit,” has been uttered. But it must be either one or the other; there is no word with an initial sound formed in the same manner as /p/ or /b/ that is halfway between the two. This is what is meant by saying that phonemes are absolutely distinct from one another—they are discrete rather than continuously variable.

How it is known whether two phones realize the same phoneme or not is dealt with differently by different schools of linguists. The “orthodox” post-Bloomfieldian school regards the first criterion to be phonetic similarity. Two phones are not said to realize the same phoneme unless they are sufficiently similar. What is meant by “sufficiently similar” is rather vague, but it must be granted that for every phoneme there is a permissible range of variation in the phones that realize it. As far as occurrence in the same context goes, there are no serious problems. More critical is the question of whether two phones occurring in different contexts can be said to realize the same phoneme or not. To take a standard example from English: the phone that occurs at the beginning of the word “pit” differs from the phone that occurs after the initial /s/ of “spit.” The “p sound” occurring after the /s/ is unaspirated (i.e., it is pronounced without any accompanying slight puff of air). The aspirated and unaspirated “p sounds” may be symbolized rather more narrowly as [ph] and [p] respectively. The question then is whether [ph] and [p] realize the same phoneme /p/ or whether each realizes a different phoneme. They satisfy the criterion of phonetic similarity, but this, though a necessary condition of phonemic identity, is not a sufficient one.

The next question is whether there is any pair of words in which the two phones are in minimal contrast (or opposition); that is, whether there is any context in English in which the occurrence of the one rather than the other has the effect of distinguishing two or more words (in the way that [ph] versus [b] distinguishes the so-called minimal pairs “pit” and “bit,” “pan” and “ban,” and so on). If there is, it can be said that, despite their phonetic similarity, the two phones realize (or “belong to”) different phonemes—that the difference between them is phonemic. If there is no context in which the two phones are in contrast (or opposition) in this sense, it can be said that they are variants of the same phoneme—that the difference between them is nonphonemic. Thus, the difference between [ph] and [p] in English is nonphonemic; the two sounds realize, or belong to, the same phoneme, namely /p/. In several other languages—e.g., Hindi—the contrast between such sounds as [ph] and [p] is phonemic, however. The question is rather more complicated than it has been represented here. In particular, it should be noted that [p] is phonetically similar to [b] as well as to [ph] and that, although [ph] and [b] are in contrast, [p] and [b] are not. It would thus be possible to regard [p] and [b] as variants of the same phoneme. Most linguists, however, have taken the alternative view, assigning [p] to the same phoneme as [ph]. Here it will suffice to note that the criteria of phonetic similarity and lack of contrast do not always uniquely determine the assignment of phones to phonemes. Various supplementary criteria may then be invoked.
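
The logic of the minimal-pair test can be made concrete with a small sketch. In the fragment below, words are represented as tuples of phone symbols over a toy lexicon; the transcriptions and the lexicon itself are invented for the illustration and make no claim to phonetic accuracy.

    from itertools import combinations

    # A toy lexicon in a narrow-ish transcription, with "ph" standing for
    # the aspirated stop.  Both the words and their transcriptions are
    # assumptions made purely for the example.
    LEXICON = {
        "pit":  ("ph", "i", "t"),
        "bit":  ("b", "i", "t"),
        "spit": ("s", "p", "i", "t"),
        "pan":  ("ph", "a", "n"),
        "ban":  ("b", "a", "n"),
    }

    def differ_in_one_position(a, b):
        """True if two equal-length transcriptions differ in exactly one slot."""
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

    def contrasts(phone1, phone2, lexicon):
        """Do the two phones distinguish at least one pair of words?"""
        for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
            if differ_in_one_position(t1, t2):
                for x, y in zip(t1, t2):
                    if x != y and {x, y} == {phone1, phone2}:
                        return True, (w1, w2)
        return False, None

    print(contrasts("ph", "b", LEXICON))  # (True, ('pit', 'bit')): phonemic contrast
    print(contrasts("ph", "p", LEXICON))  # (False, None): no minimal pair in this toy data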

Phones that can occur and do not contrast in the same context are said to be in free variation in that context, and, as has been shown, there is a permissible range of variation for the phonetic realization of all phonemes. More important than free variation in the same context, however, is systematically determined variation according to the context in which a given phoneme occurs. To return to the example used above: [p] and [ph], though they do not contrast, are not in free variation either. Each of them has its own characteristic positions of occurrence, and neither occurs, in normal English pronunciation, in any context characteristic for the other (e.g., only [ph] occurs at the beginning of a word, and only [p] occurs after s). This is expressed by saying that they are in complementary distribution. (The distribution of an element is the whole range of contexts in which it can occur.) Granted that [p] and [ph] are variants of the same phoneme /p/, it can be said that they are contextually, or positionally, determined variants of it. To use the technical term, they are allophones of /p/. The allophones of a phoneme, then, are its contextually determined variants and they are in complementary distribution.
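
Complementary distribution, in this narrow sense, lends itself to a simple check over recorded occurrences. The sketch below assumes that each observation has been reduced to a (phone, context) pair with hand-assigned context labels such as "word-initial" and "after s"; both the data and the labels are invented for the example, and a real analysis would define contexts far more carefully.

    # Observed occurrences, each reduced to a (phone, context) pair.
    observations = [
        ("ph", "word-initial"),   # as in "pit"
        ("ph", "word-initial"),   # as in "pan"
        ("p",  "after s"),        # as in "spit"
        ("p",  "after s"),        # as in "span"
    ]

    def distribution(phone, data):
        """The set of contexts in which a phone has been observed."""
        return {ctx for ph, ctx in data if ph == phone}

    def in_complementary_distribution(phone1, phone2, data):
        """True if the two phones never occur in the same context."""
        return distribution(phone1, data).isdisjoint(distribution(phone2, data))

    print(in_complementary_distribution("ph", "p", observations))  # True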

The post-Bloomfieldians made the assignment of phones to phonemes subject to what is now generally referred to as the principle of bi-uniqueness. The phonemic specification of a word or utterance was held to determine uniquely its phonetic realization (except for free variation), and, conversely, the phonetic description of a word or utterance was held to determine uniquely its phonemic analysis. Thus, if two words or utterances are pronounced alike, then they must receive the same phonemic description; conversely, two words or utterances that have been given the same phonemic analysis must be pronounced alike. The principle of bi-uniqueness was also held to imply that, if a given phone was assigned to a particular phoneme in one position of occurrence, then it must be assigned to the same phoneme in all its other positions of occurrence; it could not be the allophone of one phoneme in one context and of another phoneme in other contexts.
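
The last clause of the principle, that a phone may not be assigned to one phoneme in one context and to another phoneme elsewhere, amounts to requiring that the assignment depend on the phone alone. A minimal sketch of such a check, over an invented table of assignments, might look as follows.

    # An invented table of (phone, context) -> phoneme assignments, used
    # only to show the consistency check; no claim is made about any
    # particular analysis of English.
    assignments = {
        ("ph", "word-initial"): "/p/",
        ("p",  "after s"):      "/p/",
        ("b",  "word-initial"): "/b/",
    }

    def respects_biuniqueness(table):
        """False if some phone is assigned to different phonemes in different contexts."""
        seen = {}
        for (phone, _context), phoneme in table.items():
            if phone in seen and seen[phone] != phoneme:
                return False
            seen[phone] = phoneme
        return True

    print(respects_biuniqueness(assignments))  # True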

A second important principle of the post-Bloomfieldian approach was its insistence that phonemic analysis should be carried out prior to and independently of grammatical analysis. Neither this principle nor that of bi-uniqueness was at all widely accepted outside the post-Bloomfieldian school, and they have been abandoned by the generative phonologists (see below).

Phonemes of the kind referred to so far are segmental; they are realized by consonantal or vocalic (vowel) segments of words, and they can be said to occur in a certain order relative to one another. For example, in the phonemic representation of the word “bit,” the phoneme /b/ precedes /i/, which precedes /t/. But nonsegmental, or suprasegmental, aspects of the phonemic realization of words and utterances may also be functional in a language. In English, for example, the noun “import” differs from the verb “import” in that the former is accented on the first and the latter on the second syllable. This is called a stress accent: the accented syllable is pronounced with greater force or intensity. Many other languages distinguish words suprasegmentally by tone. For example, in Mandarin Chinese the words hào “day” and hǎo “good” are distinguished from one another in that the first has a falling tone and the second a falling-rising tone; these are realized, respectively, as (1) a fall in the pitch of the syllable from high to low and (2) a change in the pitch of the syllable from medium to low and back to medium. Stress and tone are suprasegmental in the sense that they are “superimposed” upon the sequence of segmental phonemes. The term tone is conventionally restricted by linguists to phonologically relevant variations of pitch at the level of words. Intonation, which is found in all languages, is the variation in the pitch contour or pitch pattern of whole utterances, of the kind that distinguishes (either of itself or in combination with some other difference) statements from questions or indicates the mood or attitude of the speaker (as hesitant, surprised, angry, and so forth). Stress, tone, and intonation do not exhaust the phonologically relevant suprasegmental features found in various languages, but they are among the most important.

A complete phonological description of a language includes all the segmental phonemes and specifies which allophones occur in which contexts. It also indicates which sequences of phonemes are possible in the language and which are not: it will indicate, for example, that the sequences /bl/ and /br/ are possible at the beginning of English words but not /bn/ or /bm/. A phonological description also identifies and states the distribution of the suprasegmental features. Just how this is to be done, however, has been rather more controversial in the post-Bloomfieldian tradition. Differences between the post-Bloomfieldian approach to phonology and approaches characteristic of other schools of structural linguistics will be treated below.
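
The phonotactic part of such a description can likewise be sketched as a lookup against a list of permitted sequences. The list of onsets below is deliberately tiny and invented for the illustration; a real description would state the constraints far more generally.

    # A (very incomplete, invented) list of permitted word-initial
    # clusters, used to accept or reject phoneme sequences.
    PERMITTED_INITIAL_CLUSTERS = {"bl", "br", "pl", "pr", "sp", "st", "str"}

    def possible_english_onset(cluster: str) -> bool:
        """Single consonants are accepted; clusters must be listed."""
        return len(cluster) == 1 or cluster in PERMITTED_INITIAL_CLUSTERS

    for onset in ("bl", "br", "bn", "bm"):
        print(onset, possible_english_onset(onset))
    # bl True, br True, bn False, bm False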


Methods of synchronic linguistic analysis » Structural linguistics » Morphology
The grammatical description of many, if not all, languages is conveniently divided into two complementary sections: morphology and syntax. The relationship between them, as generally stated, is as follows: morphology accounts for the internal structure of words, and syntax describes how words are combined to form phrases, clauses, and sentences.

There are many words in English that are fairly obviously analyzable into smaller grammatical units. For example, the word “unacceptability” can be divided into un-, accept, abil-, and -ity (abil- being a variant of -able). Of these, at least three are minimal grammatical units, in the sense that they cannot be analyzed into yet smaller grammatical units—un-, abil-, and ity. The status of accept, from this point of view, is somewhat uncertain. Given the existence of such forms as accede and accuse, on the one hand, and of except, exceed, and excuse, on the other, one might be inclined to analyze accept into ac- (which might subsequently be recognized as a variant of ad-) and -cept. The question is left open. Minimal grammatical units like un-, abil-, and -ity are what Bloomfield called morphemes; he defined them in terms of the “partial phonetic-semantic resemblance” holding within sets of words. For example, “unacceptable,” “untrue,” and “ungracious” are phonetically (or, phonologically) similar as far as the first syllable is concerned and are similar in meaning in that each of them is negative by contrast with a corresponding positive adjective (“acceptable,” “true,” “gracious”). This “partial phonetic-semantic resemblance” is accounted for by noting that the words in question contain the same morpheme (namely, un-) and that this morpheme has a certain phonological form and a certain meaning.

Bloomfield’s definition of the morpheme in terms of “partial phonetic-semantic resemblance” was considerably modified and, eventually, abandoned entirely by some of his followers. Whereas Bloomfield took the morpheme to be an actual segment of a word, others defined it as being a purely abstract unit, and the term morph was introduced to refer to the actual word segments. The distinction between morpheme and morph (which is, in certain respects, parallel to the distinction between phoneme and phone) may be explained by means of an example. If a morpheme in English is posited with the function of accounting for the grammatical difference between singular and plural nouns, it may be symbolized by enclosing the term plural within brace brackets. Now the morpheme {plural} is represented in a number of different ways. Most plural nouns in English differ from the corresponding singular forms in that they have an additional final segment. In the written forms of these words, it is either -s or -es (e.g., “cat” : “cats”; “dog” : “dogs”; “fish” : “fishes”). The word segments written -s or -es are morphs. So also is the word segment written -en in “oxen.” All these morphs represent the same morpheme. But there are other plural nouns in English that differ from the corresponding singular forms in other ways (e.g., “mouse” : “mice”; “criterion” : “criteria”; and so on) or not at all (e.g., “this sheep” : “these sheep”). Within the post-Bloomfieldian framework no very satisfactory account of the formation of these nouns could be given. But it was clear that they contained (in some sense) the same morpheme as the more regular plurals.

Morphs that are in complementary distribution and represent the same morpheme are said to be allomorphs of that morpheme. For example, the regular plurals of English nouns are formed by adding one of three morphs on to the form of the singular: /s/, /z/, or /iz/ (in the corresponding written forms both /s/ and /z/ are written -s and /iz/ is written -es). Their distribution is determined by the following principle: if the morph to which they are to be added ends in a “sibilant” sound (e.g., s, z, sh, ch), then the syllabic allomorph /iz/ is selected (e.g., fish-es /fiš-iz/, match-es /mač-iz/); otherwise the nonsyllabic allomorphs are selected, the voiceless allomorph /s/ with morphs ending in a voiceless consonant (e.g., cat-s /kat-s/) and the voiced allomorph /z/ with morphs ending in a vowel or voiced consonant (e.g., flea-s /fli-z/, dog-s /dog-z/). These three allomorphs, it will be evident, are in complementary distribution, and the alternation between them is determined by the phonological structure of the preceding morph. Thus the choice is phonologically conditioned.
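
Because the choice among /s/, /z/, and /iz/ depends only on the final segment of the preceding morph, the rule can be written as a short procedure. The sketch below uses the broad phonemic symbols of the examples above; the membership of the sibilant and voiceless sets is a simplifying assumption made for the illustration.

    # Simplified symbol sets assumed for the example.
    SIBILANTS = {"s", "z", "š", "ž", "č", "ǰ"}
    VOICELESS = {"p", "t", "k", "f", "θ", "s", "š", "č"}

    def plural_allomorph(stem_phonemes):
        """Select /iz/, /s/, or /z/ from the stem's final phoneme."""
        final = stem_phonemes[-1]
        if final in SIBILANTS:
            return "iz"          # fish-es /fiš-iz/, match-es /mač-iz/
        if final in VOICELESS:
            return "s"           # cat-s /kat-s/
        return "z"               # dog-s /dog-z/, flea-s /fli-z/

    print(plural_allomorph(["f", "i", "š"]))   # 'iz'
    print(plural_allomorph(["k", "a", "t"]))   # 's'
    print(plural_allomorph(["d", "o", "g"]))   # 'z'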

Very similar is the alternation between the three principal allomorphs of the past participle ending, /id/, /t/, and /d/, all of which correspond to the -ed of the written forms. If the preceding morph ends with /t/ or /d/, then the syllabic allomorph /id/ is selected (e.g., wait-ed /weit-id/). Otherwise, one of the nonsyllabic allomorphs is selected—the voiceless allomorph /t/ when the preceding morph ends with a voiceless consonant (e.g., pack-ed /pak-t/) and the voiced allomorph /d/ when the preceding morph ends with a vowel or voiced consonant (e.g., row-ed /rou-d/; tame-d /teim-d/). This is another instance of phonological conditioning. Phonological conditioning may be contrasted with the principle that determines the selection of yet another allomorph of the past participle morpheme. The final /n/ of show-n or see-n (which marks them as past participles) is not determined by the phonological structure of the morphs show and see. For each English word that is similar to “show” and “see” in this respect, it must be stated as a synchronically inexplicable fact that it selects the /n/ allomorph. This is called grammatical conditioning. There are various kinds of grammatical conditioning.

Alternation of the kind illustrated above for the allomorphs of the plural morpheme and the /id/, /d/, and /t/ allomorphs of the past participle is frequently referred to as morphophonemic. Some linguists have suggested that it should be accounted for not by setting up three allomorphs each with a distinct phonemic form but by setting up a single morph in an intermediate morphophonemic representation. Thus, the regular plural morph might be said to be composed of the morphophoneme /Z/ and the most common past-participle morph of the morphophoneme /D/. General rules of morphophonemic interpretation would then convert /Z/ and /D/ to their appropriate phonetic form according to context. This treatment of the question foreshadows, on the one hand, the stratificational treatment and, on the other, the generative approach, though they differ considerably in other respects.
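
On the morphophonemic treatment, the phonemically distinct allomorphs are replaced by a single intermediate representation, and general rules of interpretation convert it to phonemic form according to context. The sketch below assumes the symbols /Z/ and /D/ used above and the same simplified sound classes as before; it is offered only as an illustration of how such interpretation rules might be stated, not as anyone's actual formulation.

SIBILANTS = {"s", "z", "š", "ž", "č", "ǰ"}
VOICELESS = {"p", "t", "k", "f", "θ", "s", "š", "č"}

def interpret(form):
    """Convert a morphophonemic representation such as 'dog+Z' or 'weit+D' to phonemic form."""
    stem, morphophoneme = form.split("+")
    final = stem[-1]
    if morphophoneme == "Z":     # the regular plural morph
        suffix = "iz" if final in SIBILANTS else "s" if final in VOICELESS else "z"
    elif morphophoneme == "D":   # the most common past-participle morph
        suffix = "id" if final in {"t", "d"} else "t" if final in VOICELESS else "d"
    else:
        raise ValueError("unknown morphophoneme: " + morphophoneme)
    return stem + "-" + suffix

for form in ("fiš+Z", "kat+Z", "dog+Z", "weit+D", "pak+D", "rou+D"):
    print(form, "->", interpret(form))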

An important concept in grammar and, more particularly, in morphology is that of free and bound forms. A bound form is one that cannot occur alone as a complete utterance (in some normal context of use). For example, -ing is bound in this sense, whereas wait is not, nor is waiting. Any form that is not bound is free. Bloomfield based his definition of the word on this distinction between bound and free forms. Any free form consisting entirely of two or more smaller free forms was said to be a phrase (e.g., “poor John” or “ran away”), and phrases were to be handled within syntax. Any free form that was not a phrase was defined to be a word and to fall within the scope of morphology. One of the consequences of Bloomfield’s definition of the word was that morphology became the study of constructions involving bound forms. The so-called isolating languages, which make no use of bound forms (e.g., Vietnamese), would have no morphology.

The principal division within morphology is between inflection and derivation (or word formation). Roughly speaking, inflectional constructions can be defined as yielding sets of forms that are all grammatically distinct forms of single vocabulary items, whereas derivational constructions yield distinct vocabulary items. For example, “sings,” “singing,” “sang,” and “sung” are all inflectional forms of the vocabulary item traditionally referred to as “the verb to sing”; but “singer,” which is formed from “sing” by the addition of the morph -er (just as “singing” is formed by the addition of -ing), is one of the forms of a different vocabulary item. When this rough distinction between derivation and inflection is made more precise, problems occur. The principal consideration, undoubtedly, is that inflection is more closely integrated with and determined by syntax. But the various formal criteria that have been proposed to give effect to this general principle are not uncommonly in conflict in particular instances, and it probably must be admitted that the distinction between derivation and inflection, though clear enough in most cases, is in the last resort somewhat arbitrary.

Bloomfield and most linguists have discussed morphological constructions in terms of processes. Of these, the most widespread throughout the languages of the world is affixation; i.e., the attachment of an affix to a base. For example, the word “singing” can be described as resulting from the affixation of -ing to the base sing. (If the affix is put in front of the base, it is a prefix; if it is put after the base, it is a suffix; and if it is inserted within the base, splitting it into two discontinuous parts, it is an infix.) Other morphological processes recognized by linguists need not be mentioned here, but reference may be made to the fact that many of Bloomfield’s followers from the mid-1940s were dissatisfied with the whole notion of morphological processes. Instead of saying that -ing was affixed to sing they preferred to say that sing and -ing co-occurred in a particular pattern or arrangement, thereby avoiding the implication that sing is in some sense prior to or more basic than -ing. The distinction of morpheme and morph (and the notion of allomorphs) was developed in order to make possible the description of the morphology and syntax of a language in terms of “arrangements” of items rather than in terms of “processes” operating upon more basic items. Nowadays, the opposition to “processes” is, except among the stratificationalists, almost extinct. It has proved to be cumbersome, if not impossible, to describe the relationship between certain linguistic forms without deriving one from the other or both from some common underlying form, and most linguists no longer feel that this is in any way reprehensible.
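
Affixation viewed as a process lends itself to a very simple formulation. The sketch below is purely illustrative: the treatment of the infix position as a numerical index is an arbitrary assumption, and English has no productive infixation of this kind.

def affixation(base, affix, kind, position=None):
    """Attach an affix to a base as a prefix, suffix, or infix."""
    if kind == "prefix":
        return affix + base                      # un- + tie -> untie
    if kind == "suffix":
        return base + affix                      # sing + -ing -> singing
    if kind == "infix":                          # insert within the base, splitting it in two
        return base[:position] + affix + base[position:]
    raise ValueError("unknown kind of affix: " + kind)

print(affixation("sing", "ing", "suffix"))       # singing
print(affixation("tie", "un", "prefix"))         # untie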


Methods of synchronic linguistic analysis » Structural linguistics » Syntax
Syntax, for Bloomfield, was the study of free forms that were composed entirely of free forms. Central to his theory of syntax were the notions of form classes and constituent structure. (These notions were also relevant, though less central, in the theory of morphology.) Bloomfield defined form classes, rather imprecisely, in terms of some common “recognizable phonetic or grammatical feature” shared by all the members. He gave as examples the form class consisting of “personal substantive expressions” in English (defined as “the forms that, when spoken with exclamatory final pitch, are calls for a person’s presence or attention”—e.g., “John,” “Boy,” “Mr. Smith”); the form class consisting of “infinitive expressions” (defined as “forms which, when spoken with exclamatory final pitch, have the meaning of a command”—e.g., “run,” “jump,” “come here”); the form class of “nominative substantive expressions” (e.g., “John,” “the boys”); and so on. It should be clear from these examples that form classes are similar to, though not identical with, the traditional parts of speech and that one and the same form can belong to more than one form class.

What Bloomfield had in mind as the criterion for form class membership (and therefore of syntactic equivalence) may best be expressed in terms of substitutability. Form classes are sets of forms (whether simple or complex, free or bound), any one of which may be substituted for any other in a given construction or set of constructions throughout the sentences of the language.

The smaller forms into which a larger form may be analyzed are its constituents, and the larger form is a construction. For example, the phrase “poor John” is a construction analyzable into, or composed of, the constituents “poor” and “John.” Because there is no intermediate unit of which “poor” and “John” are constituents that is itself a constituent of the construction “poor John,” the forms “poor” and “John” may be described not only as constituents but also as immediate constituents of “poor John.” Similarly, the phrase “lost his watch” is composed of three word forms—“lost,” “his,” and “watch”—all of which may be described as constituents of the construction. Not all of them, however, are its immediate constituents. The forms “his” and “watch” combine to make the intermediate construction “his watch”; it is this intermediate unit that combines with “lost” to form the larger phrase “lost his watch.” The immediate constituents of “lost his watch” are “lost” and “his watch”; the immediate constituents of “his watch” are the forms “his” and “watch.” By the constituent structure of a phrase or sentence is meant the hierarchical organization of the smallest forms of which it is composed (its ultimate constituents) into layers of successively more inclusive units. Viewed in this way, the sentence “Poor John lost his watch” is more than simply a sequence of five word forms associated with a particular intonation pattern. It is analyzable into the immediate constituents “poor John” and “lost his watch,” and each of these phrases is analyzable into its own immediate constituents and so on, until, at the last stage of the analysis, the ultimate constituents of the sentence are reached. The constituent structure of the whole sentence is represented by means of a tree diagram in Figure 1.
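
The layered organization just described can be made explicit by encoding the bracketing as a nested structure. In the following sketch the nested-tuple encoding is simply an assumed mirror of the bracketing shown in Figure 1; the two helper functions recover the immediate constituents of any node and the ultimate constituents of the whole sentence.

sentence = (("poor", "John"), ("lost", ("his", "watch")))

def immediate_constituents(node):
    """The forms of which a construction is directly composed."""
    return list(node) if isinstance(node, tuple) else []

def ultimate_constituents(node):
    """The smallest forms of which a construction is composed."""
    if isinstance(node, str):
        return [node]
    result = []
    for child in node:
        result.extend(ultimate_constituents(child))
    return result

print(immediate_constituents(sentence))        # [('poor', 'John'), ('lost', ('his', 'watch'))]
print(immediate_constituents(sentence[1]))     # ['lost', ('his', 'watch')]
print(ultimate_constituents(sentence))         # ['poor', 'John', 'lost', 'his', 'watch']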

Each form, whether it is simple or composite, belongs to a certain form class. Using arbitrarily selected letters to denote the form classes of English, “poor” may be a member of the form class A, “John” of the class B, “lost” of the class C, “his” of the class D, and “watch” of the class E. Because “poor John” is syntactically equivalent to (i.e., substitutable for) “John,” it is to be classified as a member of B. So too, it can be assumed, is “his watch.” In the case of “lost his watch” there is a problem. There are very many forms—including “lost,” “ate,” and “stole”—that can occur, as here, in constructions with a member of B and can also occur alone; for example, “lost” alone is substitutable for “stole the money,” just as “stole” alone is substitutable for “lost his watch.” This being so, one might decide to classify constructions like “lost his watch” as members of C. On the other hand, there are forms that—though they are substitutable for “lost,” “ate,” “stole,” and so on when these forms occur alone—cannot be used in combination with a following member of B (cf. “died,” “existed”); and there are forms that, though they may be used in combination with a following member of B, cannot occur alone (cf. “enjoyed”). The question is whether the traditional distinction between transitive and intransitive verb forms should be respected. It may be decided, then, that “lost,” “stole,” “ate” and so forth belong to one class, C (the class to which “enjoyed” belongs), when they occur “transitively” (i.e., with a following member of B as their object) but to a different class, F (the class to which “died” belongs), when they occur “intransitively.” Finally, it can be said that the whole sentence “Poor John lost his watch” is a member of the form class G. Thus the constituent structure not only of “Poor John lost his watch” but of a whole set of English sentences can be represented by means of the tree diagram given in Figure 2. New sentences of the same type can be constructed by substituting actual forms for the class labels.
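
The last sentence can itself be illustrated: once the form classes have been set up, new sentences of the same constituent structure are obtained by substituting actual members for the class labels. In the sketch below the word lists are invented for the purpose; the class labels and the bracketing follow the analysis in the text.

import random

FORMS = {
    "A": ["poor", "old"],
    "B": ["John", "Harry"],
    "C": ["lost", "stole", "enjoyed"],           # "transitive" occurrences, class C
    "D": ["his", "the"],
    "E": ["watch", "money"],
}

def subject_phrase():                            # A + B, itself classified as B
    return random.choice(FORMS["A"]) + " " + random.choice(FORMS["B"])

def object_phrase():                             # D + E, also classified as B
    return random.choice(FORMS["D"]) + " " + random.choice(FORMS["E"])

def sentence():                                  # the whole construction, class G
    return subject_phrase() + " " + random.choice(FORMS["C"]) + " " + object_phrase()

random.seed(0)
for _ in range(3):
    s = sentence()
    print(s[0].upper() + s[1:])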

Any construction that belongs to the same form class as at least one of its immediate constituents is described as endocentric; the only endocentric construction in the model sentence above is “poor John.” All the other constructions, according to the analysis, are exocentric. This is clear from the fact that the letters at the nodes above every phrase other than the phrase A + B (i.e., “poor John,” “old Harry,” and so on) are different from any of the letters at the ends of the lower branches connected directly to these nodes. For example, the phrase D + E (i.e., “his watch,” “the money,” and so forth) has immediately above it a node labelled B, rather than either D or E. Endocentric constructions fall into two types: subordinating and coordinating. If attention is confined, for simplicity, to constructions composed of no more than two immediate constituents, it can be said that subordinating constructions are those in which only one immediate constituent is of the same form class as the whole construction, whereas coordinating constructions are those in which both constituents are of the same form class as the whole construction. In a subordinating construction (e.g., “poor John”), the constituent that is syntactically equivalent to the whole construction is described as the head, and its partner is described as the modifier: thus, in “poor John,” the form “John” is the head, and “poor” is its modifier. An example of a coordinating construction is “men and women,” in which, it may be assumed, the immediate constituents are the word “men” and the word “women,” each of which is syntactically equivalent to “men and women.” (It is here implied that the conjunction “and” is not a constituent, properly so called, but an element that, like the relative order of the constituents, indicates the nature of the construction involved. Not all linguists have held this view.)

One reason for giving theoretical recognition to the notion of constituent is that it helps to account for the ambiguity of certain constructions. A classic example is the phrase “old men and women,” which may be interpreted in two different ways according to whether one associates “old” with “men and women” or just with “men.” Under the first of the two interpretations, the immediate constituents are “old” and “men and women”; under the second, they are “old men” and “women.” The difference in meaning cannot be attributed to any one of the ultimate constituents but results from a difference in the way in which they are associated with one another. Ambiguity of this kind is referred to as syntactic ambiguity. Not all syntactic ambiguity is satisfactorily accounted for in terms of constituent structure.
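
The two readings of “old men and women” correspond to two different bracketings of the same ultimate constituents, which can be set side by side as follows. The conjunction is treated here, as suggested above for “and,” as a marker of the construction rather than as a constituent; this simplification is made only for the purpose of the illustration.

reading_1 = ("old", ("men", "women"))     # old [men and women]: both the men and the women are old
reading_2 = (("old", "men"), "women")     # [old men] and women: only the men are old

def bracketing(node):
    """Render the constituent structure as bracketed strings."""
    if isinstance(node, str):
        return node
    return "[" + " ".join(bracketing(child) for child in node) + "]"

print(bracketing(reading_1))              # [old [men women]]
print(bracketing(reading_2))              # [[old men] women]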


Methods of synchronic linguistic analysis » Structural linguistics » Semantics
Bloomfield thought that semantics, or the study of meaning, was the weak point in the scientific investigation of language and would necessarily remain so until the other sciences whose task it was to describe the universe and man’s place in it had advanced beyond their present state. In his textbook Language (1933), he had himself adopted a behaviouristic theory of meaning, defining the meaning of a linguistic form as “the situation in which the speaker utters it and the response which it calls forth in the hearer.” Furthermore, he subscribed, in principle at least, to a physicalist thesis, according to which all science should be modelled upon the so-called exact sciences and all scientific knowledge should be reducible, ultimately, to statements made about the properties of the physical world. The reason for his pessimism concerning the prospects for the study of meaning was his feeling that it would be a long time before a complete scientific description of the situations in which utterances were produced and the responses they called forth in their hearers would be available. At the time that Bloomfield was writing, physicalism was more widely held than it is today, and it was perhaps reasonable for him to believe that linguistics should eschew mentalism and concentrate upon the directly observable. As a result, for some 30 years after the publication of Bloomfield’s textbook, the study of meaning was almost wholly neglected by his followers; most American linguists who received their training during this period had no knowledge of, still less any interest in, the work being done elsewhere in semantics.

Two groups of scholars may be seen to have constituted an exception to this generalization: anthropologically minded linguists and linguists concerned with Bible translation. Much of the description of the indigenous languages of America has been carried out since the days of Boas and his most notable pupil Sapir by scholars who were equally proficient both in anthropology and in descriptive linguistics; such scholars have frequently added to their grammatical analyses of languages some discussion of the meaning of the grammatical categories and of the correlations between the structure of the vocabularies and the cultures in which the languages operated. It has already been pointed out that Boas and Sapir and, following them, Whorf were attracted by Humboldt’s view of the interdependence of language and culture and of language and thought. This view was quite widely held by American anthropological linguists (although many of them would not go as far as Whorf in asserting the dependence of thought and conceptualization upon language).

Also of considerable importance in the description of the indigenous languages of America has been the work of linguists trained by the American Bible Society and the Summer Institute of Linguistics, a group of Protestant missionary linguists. Because their principal aim is to produce translations of the Bible, they have necessarily been concerned with meaning as well as with grammar and phonology. This has tempered the otherwise fairly orthodox Bloomfieldian approach characteristic of the group.

The two most important developments evident in recent work in semantics are, first, the application of the structural approach to the study of meaning and, second, a better appreciation of the relationship between grammar and semantics. The second of these developments will be treated in the following section on Transformational-generative grammar. The first, structural semantics, goes back to the period preceding World War II and is exemplified in a large number of publications, mainly by German scholars—Jost Trier, Leo Weisgerber, and their collaborators.

The structural approach to semantics is best explained by contrasting it with the more traditional “atomistic” approach, according to which the meaning of each word in the language is described, in principle, independently of the meaning of all other words. The structuralist takes the view that the meaning of a word is a function of the relationships it contracts with other words in a particular lexical field, or subsystem, and that it cannot be adequately described except in terms of these relationships. For example, the colour terms in particular languages constitute a lexical field, and the meaning of each term depends upon the place it occupies in the field. Although the denotation of each of the words “green,” “blue,” and “yellow” in English is somewhat imprecise at the boundaries, the position that each of them occupies relative to the other terms in the system is fixed: “green” is between “blue” and “yellow,” so that the phrases “greenish yellow” or “yellowish green” and “bluish green” or “greenish blue” are used to refer to the boundary areas. Knowing the meaning of the word “green” implies knowing what cannot as well as what can be properly described as green (and knowing of the borderline cases that they are borderline cases). Languages differ considerably as to the number of basic colour terms that they recognize, and they draw boundaries within the psychophysical continuum of colour at different places. Blue, green, yellow, and so on do not exist as distinct colours in nature, waiting to be labelled differently, as it were, by different languages; they come into existence, for the speakers of particular languages, by virtue of the fact that those languages impose structure upon the continuum of colour and assign to three of the areas thus recognized the words “blue,” “green,” “yellow.”

The language of any society is an integral part of the culture of that society, and the meanings recognized within the vocabulary of the language are learned by the child as part of the process of acquiring the culture of the society in which he is brought up. Many of the structural differences found in the vocabularies of different languages are to be accounted for in terms of cultural differences. This is especially clear in the vocabulary of kinship (to which a considerable amount of attention has been given by anthropologists and linguists), but it holds true of many other semantic fields also. A consequence of the structural differences that exist between the vocabularies of different languages is that, in many instances, it is in principle impossible to translate a sentence “literally” from one language to another.

It is important, nevertheless, not to overemphasize the semantic incommensurability of languages. Presumably, there are many physiological and psychological constraints that, in part at least, determine one’s perception and categorization of the world. It may be assumed that, when one is learning the denotation of the more basic words in the vocabulary of one’s native language, attention is drawn first to what might be called the naturally salient features of the environment and that one is, to this degree at least, predisposed to identify and group objects in one way rather than another. It may also be that human beings are genetically endowed with rather more specific and linguistically relevant principles of categorization. It is possible that, although languages differ in the number of basic colour categories that they distinguish, there is a limited number of hierarchically ordered basic colour categories from which each language makes its selection and that what counts as a typical instance, or focus, of these universal colour categories is fixed and does not vary from one language to another. If this hypothesis is correct, then it is false to say, as many structural semanticists have said, that languages divide the continuum of colour in a quite arbitrary manner. But the general thesis of structuralism is unaffected, for it still remains true that each language has its own unique semantic structure even though the total structure is, in each case, built upon a substructure of universal distinctions.


Methods of synchronic linguistic analysis » Transformational-generative grammar
A generative grammar, in the sense in which Noam Chomsky uses the term, is a system of rules, formalized with mathematical precision, that generates, without need of any information that is not represented explicitly in the system, the grammatical sentences of the language that it describes, or characterizes, and assigns to each sentence a structural description, or grammatical analysis. All the concepts introduced in this definition of “generative” grammar will be explained and exemplified in the course of this section. Generative grammars fall into several types; this exposition is concerned mainly with the type known as transformational (or, more fully, transformational-generative). Transformational grammar was initiated by Zellig S. Harris in the course of work on what he called discourse analysis (the formal analysis of the structure of continuous text). It was further developed and given a somewhat different theoretical basis by Chomsky.



Methods of synchronic linguistic analysis » Transformational-generative grammar » Harris’s grammar
Harris distinguished within the total set of grammatical sentences in a particular language (for example, English) two complementary subsets: kernel sentences (the set of kernel sentences being described as the kernel of the grammar) and nonkernel sentences. The difference between the two subsets is that nonkernel sentences are derived from kernel sentences by means of transformational rules. For example, “The workers rejected the ultimatum” is a kernel sentence that may be transformed into the nonkernel sentences “The ultimatum was rejected by the workers” or “Did the workers reject the ultimatum?” Each of these may be described as a transform of the kernel sentence from which it is derived. The transformational relationship between corresponding active and passive sentences (e.g., “The workers rejected the ultimatum” and “The ultimatum was rejected by the workers”) is conventionally symbolized by the rule N1 V N2 → N2 be V + en by N1, in which N stands for any noun or noun phrase, V for any transitive verb, en for the past participle morpheme, and the arrow (→) instructs one to rewrite the construction to its left as the construction to the right. (There has been some simplification of the rule as it was formulated by Harris.) This rule may be taken as typical of the whole class of transformational rules in Harris’s system: it rearranges constituents (what was the first nominal, or noun, N1, in the kernel sentence is moved to the end of the transform, and what was the second nominal, N2, in the kernel sentence is moved to initial position in the transform), and it adds various elements in specified positions (be, en, and by). Other operations carried out by transformational rules include the deletion of constituents; e.g., the entire phrase “by the workers” is removed from the sentence “The ultimatum was rejected by the workers” by a rule symbolized as N2 be V + en by N1 → N2 be V + en. This transforms the construction on the left side of the arrow (which resulted from the passive transformation) by dropping the by-phrase, thus producing “The ultimatum was rejected.”
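
The operation of the two rules just cited can be illustrated schematically. In the sketch below a kernel sentence is represented as a triple (N1, V, N2); the handling of tense and of the past participle is deliberately crude (a small lookup table), and the whole is an assumed simplification for the purpose of the example rather than a rendering of Harris’s system.

PAST_PARTICIPLE = {"rejected": "rejected", "saw": "seen"}    # V + en, listed here for illustration

def passive(kernel):
    """N1 V N2 -> N2 be V + en by N1 (tense crudely carried by 'was')."""
    n1, v, n2 = kernel
    return n2 + " was " + PAST_PARTICIPLE[v] + " by " + n1

def agent_deletion(transform):
    """N2 be V + en by N1 -> N2 be V + en."""
    return transform.split(" by ")[0]

kernel = ("the workers", "rejected", "the ultimatum")
t = passive(kernel)
print(t[0].upper() + t[1:])                      # The ultimatum was rejected by the workers
d = agent_deletion(t)
print(d[0].upper() + d[1:])                      # The ultimatum was rejected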



Methods of synchronic linguistic analysis » Transformational-generative grammar » Modifications in Chomsky’s grammar
Chomsky’s system of transformational grammar was substantially modified in 1965. Perhaps the most important modification was the incorporation, within the system, of a semantic component, in addition to the syntactic component and phonological component. (The phonological component may be thought of as replacing the morphophonemic component of Syntactic Structures.) The rules of the syntactic component generate the sentences of the language and assign to each not one but two structural analyses: a deep structure analysis as represented by the underlying phrase marker, and a surface structure analysis, as represented by the final derived phrase marker. The underlying phrase marker is assigned by rules of the base (roughly equivalent to the PS [Phrase-Structure] rules of the earlier system); the derived phrase marker is assigned by the transformational rules. The interrelationship of the four sets of rules is shown diagrammatically in Figure 7. The meaning of the sentence is derived (mainly, if not wholly) from the deep structure by means of the rules of semantic interpretation; the phonetic realization of the sentence is derived from its surface structure by means of the rules of the phonological component. The grammar (“grammar” is now to be understood as covering semantics and phonology, as well as syntax) is thus an integrated system of rules for relating the pronunciation of a sentence to its meaning. The syntax, and more particularly the base, is at the “heart” of the system, as it were: it is the base component (as the arrows in the diagram indicate) that generates the infinite class of structures underlying the well-formed sentences of a language. These structures are then given a semantic and phonetic “interpretation” by the other components.

The base consists of two parts: a set of categorial rules and a lexicon. Taken together, they fulfill a similar function to that fulfilled by the phrase-structure rules of the earlier system. But there are many differences of detail. Among the most important is that the lexicon (which may be thought of as a dictionary of the language cast in a particular form) lists, in principle, all the vocabulary words in the language and associates with each all the syntactic, semantic, and phonological information required for the correct operation of the rules. This information is represented in terms of what are called features. For example, the entry for “boy” might say that it has the syntactic features: [+ Noun], [+ Count], [+ Common], [+ Animate], and [+ Human]. The categorial rules generate a set of phrase markers that have in them, as it were, a number of “slots” to be filled with items from the lexicon. With each such “slot” there is associated a set of features that define the kind of item that can fill the “slot.” If a phrase marker is generated with a “slot” for the head of a noun phrase specified as requiring an animate noun (i.e., a noun having the feature [+ Animate]), the item “boy” would be recognized as being compatible with this specification and could be inserted in the “slot” by the rule of lexical substitution. Similarly, it could be inserted in “slots” specified as requiring a common noun, a human noun, or a countable noun, but it would be excluded from positions that require an abstract noun (e.g., “sincerity”) or an uncountable noun (e.g., “water”). By drawing upon the syntactic information coded in feature notation in the lexicon, the categorial rules might permit such sentences as “The boy died,” while excluding (and thereby defining as ungrammatical) such nonsentences as “The boy elapsed.”
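
Lexical insertion of this kind can be pictured as a compatibility check between the feature specification attached to a “slot” and the features listed in a lexical entry. The entries and feature values below follow the examples given in the text but are otherwise invented for illustration.

LEXICON = {
    "boy":       {"Noun": "+", "Count": "+", "Common": "+", "Animate": "+", "Human": "+"},
    "sincerity": {"Noun": "+", "Count": "-", "Common": "+", "Animate": "-", "Abstract": "+"},
    "water":     {"Noun": "+", "Count": "-", "Common": "+", "Animate": "-"},
}

def fits(slot_specification, entry):
    """An item may fill a slot only if its entry matches every feature the slot requires."""
    return all(entry.get(feature) == value for feature, value in slot_specification.items())

animate_noun_slot = {"Noun": "+", "Animate": "+"}            # e.g., the subject slot of "died"
print([word for word, entry in LEXICON.items() if fits(animate_noun_slot, entry)])   # ['boy']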

One of the most controversial topics in the development of transformational grammar is the relationship between syntax and semantics. Scholars working in the field are now agreed that there is a considerable degree of interdependence between the two, and the problem is how to formalize this interdependence. One school of linguists, called generative semanticists, accept the general principles of transformational grammar but have challenged Chomsky’s conception of deep structure as a separate and identifiable level of syntactic representation. In their opinion, the basic component of the grammar should consist of a set of rules for the generation of well-formed semantic representations. These would then be converted by a succession of transformational rules into strings of words with an assigned surface-structure syntactic analysis, there being no place in the passage from semantic representation to surface structure identifiable as Chomsky’s deep structure. Chomsky himself has denied that there is any real difference between the two points of view and has maintained that the issue is purely one of notation. That this argument can be put forward by one party to the controversy and rejected by the other is perhaps a sufficient indication of the uncertainty of the evidence. Of greater importance than the overt issues, in so far as they are clear, is the fact that linguists are now studying much more intensively than they have in the past the complexities of the interdependence of syntax, on the one hand, and semantics and logic, on the other. Whether it will prove possible to handle all these complexities within a comprehensive generative grammar remains to be seen.

The role of the phonological component of a generative grammar of the type outlined by Chomsky is to assign a phonetic “interpretation” to the strings of words generated by the syntactic component. These strings of words are represented in a phonological notation (taken from the lexicon) and have been provided with a surface-structure analysis by the transformational rules. The phonological elements out of which the word forms are composed are segments consisting of what are referred to technically as distinctive features (following the usage of the Prague school, see below The Prague school). For example, the word form “man,” represented phonologically, is composed of three segments: the first consists of the features [+ consonantal], [+ bilabial], [+ nasal], etc.; the second of the features [+ vocalic], [+ front], [+ open], etc.; and the third of the features [+ consonantal], [+ alveolar], [+ nasal], etc. (These features should be taken as purely illustrative; there is some doubt about the definitive list of distinctive features.) Although these segments may be referred to as the “phonemes” /m/, /a/, and /n/, they should not be identified theoretically with units of the kind discussed in the section on Phonology under Structural linguistics. They are closer to what many American structural linguists called “morphophonemes” or the Prague school linguists labelled “archiphonemes,” being unspecified for any feature that is contextually redundant or predictable. For instance, the first segment of the phonological representation of “man” will not include the feature [+ voice]; because nasal consonants are always phonetically voiced in this position in English, the feature [+ voice] can be added to the phonetic specification by a rule of the phonological component.
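
A segment left unspecified for a predictable feature, together with a rule that fills that feature in, can be sketched as follows. The feature names follow the illustration above, and the claim encoded in the rule (that nasal consonants are voiced in this position in English) is the one made in the text; everything else is an assumption made for the example.

man = [
    {"consonantal": "+", "bilabial": "+", "nasal": "+"},     # /m/: [voice] left unspecified
    {"vocalic": "+", "front": "+", "open": "+"},             # /a/
    {"consonantal": "+", "alveolar": "+", "nasal": "+"},     # /n/: [voice] left unspecified
]

def add_predictable_voicing(segments):
    """Fill in [+ voice] on nasal consonants, where the feature is contextually predictable."""
    for segment in segments:
        if segment.get("nasal") == "+" and "voice" not in segment:
            segment["voice"] = "+"
    return segments

for segment in add_predictable_voicing(man):
    print(segment)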

One further important aspect of generative phonology (i.e., phonology carried out within the framework of an integrated generative grammar) should be mentioned: its dependence upon syntax. Most American structural phonologists made it a point of principle that the phonemic analysis of an utterance should be carried out without regard to its grammatical structure. This principle was controversial among American linguists and was not generally accepted outside America. Not only has the principle been rejected by the generative grammarians, but they have made the phonological description of a language much more dependent upon its syntactic analysis than has any other school of linguists. They have claimed, for example, that the phonological rules that assign different degrees of stress to the vowels in English words and phrases and alter the quality of the relatively unstressed vowel concomitantly must make reference to the derived constituent structure of sentences and not merely to the form class of the individual words or the places in which the word boundaries occur.


Methods of synchronic linguistic analysis » Tagmemics
The system of tagmemic analysis, as presented by Kenneth L. Pike, was developed for the analysis not only of language but of all human behaviour that manifests the property of patterning. In the following treatment, only language will be discussed.




Methods of synchronic linguistic analysis » The Prague school » Theory of markedness
The notion of markedness was first developed in Prague school phonology but was subsequently extended to morphology and syntax. When two phonemes are distinguished by the presence or absence of a single distinctive feature, one of them is said to be marked and the other unmarked for the feature in question. For example, /b/ is marked and /p/ unmarked with respect to voicing. Similarly, in morphology, the regular English verb can be said to be marked for past tense (by the suffixation of -ed) but to be unmarked in the present (cf. “jumped” versus “jump”). It is often the case that a morphologically unmarked form has a wider range of occurrences and a less definite meaning than a morphologically marked form. It can be argued, for example, that, whereas the past tense form in English (in simple sentences or the main clause of complex sentences) definitely refers to the past, the so-called present tense form is more neutral with respect to temporal reference: it is nonpast in the sense that it fails to mark the time as past, but it does not mark it as present. There is also a more abstract sense of markedness, which is independent of the presence or absence of an overt feature or affix. The words “dog” and “bitch” provide examples of markedness of this kind on the level of vocabulary. Whereas the use of the word “bitch” is restricted to females of the species, “dog” is applicable to both males and females. “Bitch” is the marked and “dog” the unmarked term, and, as is commonly the case, the unmarked term can be neutral or negative according to context (cf. “That dog over there is a bitch” versus “It’s not a dog, it’s a bitch”). The principle of markedness, understood in this more general or more abstract sense, is now quite widely accepted by linguists of many different schools, and it is applied at all levels of linguistic analysis.


Methods of synchronic linguistic analysis » The Prague school » Recent contributions
Current Prague school work is still characteristically functional in the sense in which this term was interpreted in the pre-World War II period. The most valuable contribution made by the postwar Prague school is probably the distinction of theme and rheme and the notion of “functional sentence perspective” or “communicative dynamism.” By the theme of a sentence is meant that part that refers to what is already known or given in the context (sometimes called, by other scholars, the topic or psychological subject); by the rheme, the part that conveys new information (the comment or psychological predicate). It has been pointed out that, in languages with a free word order (such as Czech or Latin), the theme tends to precede the rheme, regardless of whether the theme or the rheme is the grammatical subject and that this principle may still operate, in a more limited way, in languages, like English, with a relatively fixed word order (cf. “That book I haven’t seen before”). But other devices may also be used to distinguish theme and rheme. The rheme may be stressed (“Jóhn saw Mary”) or made the complement of the verb “to be” in the main clause of what is now commonly called a cleft sentence (“It’s Jóhn who saw Mary”).

The general principle that has guided research in “functional sentence perspective” is that the syntactic structure of a sentence is in part determined by the communicative function of its various constituents and the way in which they relate to the context of utterance. A somewhat different but related aspect of functionalism in syntax is seen in current work in what is called case grammar. Case grammar is based upon a small set of syntactic functions (agentive, locative, benefactive, instrumental, and so on) that are variously expressed in different languages but that are held to determine the grammatical structure of sentences. Although case grammar does not derive directly from the work of the Prague school, it is very similar in inspiration.



Historical (diachronic) linguistics » The comparative method » Criticisms of the comparative method
One of the criticisms directed against the comparative method is that it is based upon a misleading genealogical metaphor. In the mid-19th century, the German linguist August Schleicher introduced into comparative linguistics the model of the “family tree,” which treats related languages as descendants of a common parent, much as children are descended from their parents. There is obviously no point in time at which it can be said that new languages are “born” of a common parent language. Nor is it normally the case that the parent language “lives on” for a while, relatively unchanged, and then “dies.” It is easy enough to recognize the inappropriateness of these biological expressions. No less misleading, however, is the assumption that languages descended from the same parent language will necessarily diverge, never to converge again, through time. This assumption is built into the comparative method as it is traditionally applied. And yet there are many clear cases of convergence in the development of well-documented languages. The dialects of England are fast disappearing and are far more similar in grammar and vocabulary today than they were even a generation ago; they have been strongly influenced by the standard language. The same phenomenon, the replacement of nonstandard or less prestigious forms with forms borrowed from the standard language or dialect, has taken place in many different places at many different times. It would seem, therefore, that one must reckon with both divergence and convergence in the diachronic development of languages: divergence when contact between two speech communities is reduced or broken and convergence when the two speech communities remain in contact and when one is politically or culturally dominant.

The comparative method presupposes linguistically uniform speech communities and independent development after sudden, sharp cleavage. Critics of the comparative method have pointed out that this situation does not generally hold. In 1872 a German scholar, Johannes Schmidt, criticized the family-tree theory and proposed instead what is referred to as the wave theory, according to which different linguistic changes will spread, like waves, from a politically, commercially, or culturally important centre along the main lines of communication, but successive innovations will not necessarily cover exactly the same area. Consequently, there will be no sharp distinction between contiguous dialects, but, in general, the further apart two speech communities are, the more linguistic features there will be that distinguish them.


Historical (diachronic) linguistics » The comparative method » Internal reconstruction
The comparative method is used to reconstruct earlier forms of a language by drawing upon the evidence provided by other related languages. It may be supplemented by what is called the method of internal reconstruction. This is based upon the existence of anomalous or irregular patterns of formation and the assumption that they must have developed, usually by sound change, from earlier regular patterns. For example, the existence of such patterns in early Latin as honos : honoris (“honor” : “of honor”) and others in contrast with orator : oratoris (“orator” : “of the orator”) and others might lead to the supposition that honoris developed from an earlier *honosis. In this case, the evidence of other languages shows that *s became r between vowels in an earlier period of Latin. But it would have been possible to reconstruct the earlier intervocalic *s with a fair degree of confidence on the basis of the internal evidence alone. Clearly, internal reconstruction depends upon the structural approach to linguistics.
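
The sound change appealed to in this reconstruction (intervocalic *s becoming r, so that *honosis yields honoris) is regular enough to be stated as a rule. The sketch below uses ordinary spelling in place of phonemic notation and ignores vowel length and other complications; it is intended only as an illustration of the rule's effect.

VOWELS = set("aeiou")

def rhotacism(form):
    """Apply the change *s > r between vowels."""
    segments = list(form)
    for i in range(1, len(segments) - 1):
        if segments[i] == "s" and segments[i - 1] in VOWELS and segments[i + 1] in VOWELS:
            segments[i] = "r"
    return "".join(segments)

print(rhotacism("honosis"))    # honoris
print(rhotacism("honos"))      # honos: the final s is not intervocalic, so it does not change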

The most recent development in the field of historical and comparative linguistics has come from the theory of generative grammar (see above Transformational-generative grammar). If the grammar and phonology of a language are described from a synchronic point of view as an integrated system of rules, then the grammatical and phonological similarities and differences between two closely related languages, or dialects, or between two diachronically distinct states of the same language can be described in terms of the similarities and differences in two descriptive rule systems. One system may contain a rule that the other lacks (or may restrict its application more or less narrowly); one system may differ from the other in that the same set of rules will apply in a different order in the one system from the order in which they apply in the other. Language change may thus be accounted for in terms of changes introduced into the underlying system of phonological and grammatical rules (including the addition, loss, or reordering of rules) during the process of language acquisition. So far these principles have been applied principally to sound change. There has also been a little work done on diachronic syntax.


Historical (diachronic) linguistics » Language classification
There are two kinds of classification of languages practiced in linguistics: genetic (or genealogical) and typological. The purpose of genetic classification is to group languages into families according to their degree of diachronic relatedness. For example, within the Indo-European family, such subfamilies as Germanic or Celtic are recognized; these subfamilies comprise German, English, Dutch, Swedish, Norwegian, Danish, and others, on the one hand, and Irish, Welsh, Breton, and others, on the other. So far, most of the languages of the world have been grouped only tentatively into families, and many of the classificatory schemes that have been proposed will no doubt be radically revised as further progress is made.

A typological classification groups languages into types according to their structural characteristics. The most famous typological classification is probably that of isolating, agglutinating, and inflecting (or fusional) languages, which was frequently invoked in the 19th century in support of an evolutionary theory of language development. Roughly speaking, an isolating language is one in which all the words are morphologically unanalyzable (i.e., in which each word is composed of a single morph); Chinese and, even more strikingly, Vietnamese are highly isolating. An agglutinating language (e.g., Turkish) is one in which the word forms can be segmented into morphs, each of which represents a single grammatical category. An inflecting language is one in which there is no one-to-one correspondence between particular word segments and particular grammatical categories. The older Indo-European languages tend to be inflecting in this sense. For example, the Latin suffix -is represents the combination of categories “singular” and “genitive” in the word form hominis “of the man,” but one part of the suffix cannot be assigned to “singular” and another to “genitive,” and -is is only one of many suffixes that in different classes (or declensions) of words represent the combination of “singular” and “genitive.”

There is, in principle, no limit to the variety of ways in which languages can be grouped typologically. One can distinguish languages with a relatively rich phonemic inventory from languages with a relatively poor phonemic inventory, languages with a high ratio of consonants to vowels from languages with a low ratio of consonants to vowels, languages with a fixed word order from languages with a free word order, prefixing languages from suffixing languages, and so on. The problem lies in deciding what significance should be attached to particular typological characteristics. Although there is, not surprisingly, a tendency for genetically related languages to be typologically similar in many ways, typological similarity of itself is no proof of genetic relationship. Nor does it appear true that languages of a particular type will be associated with cultures of a particular type or at a certain stage of development. What has emerged from recent work in typology is that certain logically unconnected features tend to occur together, so that the presence of feature A in a given language will tend to imply the presence of feature B. The discovery of unexpected implications of this kind calls for an explanation and gives a stimulus to research in many branches of linguistics.



Linguistics and other disciplines » Psycholinguistics
The term psycholinguistics was coined in the 1940s and came into more general use after the publication of Charles E. Osgood and Thomas A. Sebeok’s Psycholinguistics: A Survey of Theory and Research Problems (1954), which reported the proceedings of a seminar sponsored in the United States by the Social Science Research Council’s Committee on Linguistics and Psychology.

The boundary between linguistics (in the narrower sense of the term: see the introduction of this article) and psycholinguistics is difficult, perhaps impossible, to draw. So too is the boundary between psycholinguistics and psychology. What characterizes psycholinguistics as it is practiced today as a more or less distinguishable field of research is its concentration upon a certain set of topics connected with language and its bringing to bear upon them the findings and theoretical principles of both linguistics and psychology. The range of topics that would be generally held to fall within the field of psycholinguistics nowadays is rather narrower, however, than that covered in the survey by Osgood and Sebeok.


Linguistics and other disciplines » Psycholinguistics » Language acquisition by children
One of the topics most central to psycholinguistic research is the acquisition of language by children. The term acquisition is preferred to “learning,” because “learning” tends to be used by psychologists in a narrowly technical sense, and many psycholinguists believe that no psychological theory of learning, as currently formulated, is capable of accounting for the process whereby children, in a relatively short time, come to achieve a fluent control of their native language. Since the beginning of the 1960s, research on language acquisition has been strongly influenced by Chomsky’s theory of generative grammar, and the main problem to which it has addressed itself has been how it is possible for young children to infer the grammatical rules underlying the speech they hear and then to use these rules for the construction of utterances that they have never heard before. It is Chomsky’s conviction, shared by a number of psycholinguists, that children are born with a knowledge of the formal principles that determine the grammatical structure of all languages, and that it is this innate knowledge that explains the success and speed of language acquisition. Others have argued that it is not grammatical competence as such that is innate but more general cognitive principles and that the application of these to language utterances in particular situations ultimately yields grammatical competence. Many recent works have stressed that all children go through the same stages of language development regardless of the language they are acquiring. It has also been asserted that the same basic semantic categories and grammatical functions can be found in the earliest speech of children in a number of different languages operating in quite different cultures in various parts of the world.

Although Chomsky was careful to stress in his earliest writings that generative grammar does not provide a model for the production or reception of language utterances, there has been a good deal of psycholinguistic research directed toward validating the psychological reality of the units and processes postulated by generative grammarians in their descriptions of languages. Experimental work in the early 1960s appeared to show that nonkernel sentences took longer to process than kernel sentences and, even more interestingly, that the processing time increased proportionately with the number of optional transformations involved. More recent work has cast doubt on these findings, and most psycholinguists are now more cautious about using grammars produced by linguists as models of language processing. Nevertheless, generative grammar continues to be a valuable source of psycholinguistic experimentation, and the formal properties of language, discovered or more adequately discussed by generative grammarians than they have been by others, are generally recognized to have important implications for the investigation of short-term and long-term memory and perceptual strategies.


Linguistics and other disciplines » Psycholinguistics » Speech perception
Another important area of psycholinguistic research that has been strongly influenced by recent theoretical advances in linguistics and, more especially, by the development of generative grammar is speech perception. It has long been realized that the identification of speech sounds and of the word forms composed of them depends upon the context in which they occur and upon the hearer’s having mastered, usually as a child, the appropriate phonological and grammatical system. Throughout the 1950s, work on speech perception was dominated (as was psycholinguistics in general) by information theory, according to which the occurrence of each sound in a word and each word in an utterance is statistically determined by the preceding sounds and words. Information theory is no longer as generally accepted as it was a few years ago, and more recent research has shown that in speech perception the cues provided by the acoustic input are interpreted, unconsciously and very rapidly, with reference not only to the phonological structure of the language but also to the more abstract levels of grammatical organization.


Figure 1: The constituent structure of a simple sentence (see text).
Figure 2: The constituent structure of a class of simple sentences with arbitrary letters used to …
Figure 3: The constituent structure, or phrase structure, assigned by the rule VP → Verb + NP …
Figure 4: Structural description of the sentence “The man will hit the ball,” assigned …
Figure 5: Phrase markers.
Figure 6: A possible derived phrase marker for a passive sentence (see text).
Figure 7: Diagrammatic representation of a transformational grammar (see text).