Two paragraphs in total.
I uploaded the assignment plus chapter 10 from the book (the one you may need).
The assignment is to choose only two of my classmates' posts and reply to them. One or two paragraphs is enough for each classmate. Because it is like a discussion post, try to be informal and use phrases like "I like your points on…," "I found your post really interesting," "you have a good point…," "when I read your post I…," "I believe that…" and so on (using the word "I").
For the Response Posts you will be graded as follows:
Criteria:
- You did not complete any response posts, or your response posts did not contribute to the discussion question.
- You only completed one response post and/or your responses did not contribute to the discussion in a significant way.
- Your responses advanced the conversation in a meaningful way and provided a helpful and unique perspective on the discussion topic.
The question was:
In the movie this week the critical period hypothesis was discussed. What is the critical period hypothesis and how was this hypothesis put to the test with both Genie and Victor (provide specific examples and details from the video)? How was Genie’s language acquisition similar to that of young children learning a first language, and how was it different (again, provide examples from the video to support your response)? Do you think the case of Genie was handled appropriately? Why or why not?
You can find the videos in the link below:
Genie: Secret of the Wild Child: https://www.dailymotion.com/video/x3i5x05
Your answer was:
This is the story of a child called Genie, who spent more than ten years alone, secluded from other people; she was essentially imprisoned in her house for ten years. Strangely, she could not phonate at all. The doctors postulated that she must have been beaten for making noise, which stopped her from vocalizing. She also had an unusual walk, different from other children's. She was rescued from her long years of seclusion thanks to a neighbor, and her story spread widely. On examination, an EEG showed that she had far more sleep spindles than normal, which led to an expansion of the research on her. The critical period hypothesis holds that language can only be acquired during a window early in life. Through the Genie and Victor cases, this hypothesis was tested by teaching them language to see how they responded.
Although Genie's phonation was impaired, through a series of interactions she started to acquire a new way of communicating by repetition, echoing words from the doctors; for example, she repeated words such as 'doctor' and 'tie.' This is similar to the way other children learn a language, since young children learning a first language also learn by repeating from others. The difference lies in when this occurs. Phonation and the ability to form words normally develop early in life, but Genie had been punished for vocalizing when young, so she was afraid to make sounds, unlike other children who do so at will. For example, long after being rescued she still could not make sounds or words; she had to form a relationship with the doctor before she started to phonate and imitate others' speech.
I think the issue of Genie was handled appropriately. I found that every party was willing to support Genie in relating to and speaking with others. She was welcomed by doctors who understood her and brought in the resources needed to help her improve, which enabled her to reach out to people. Because she became known as the secluded child, everyone was willing to help her. Various scholars and psychologists had proposed different theories about language, and they saw Genie's case as an opportunity to test them. One such theory is that language is encoded in the genes and therefore cannot be suppressed; another is that language is acquired at a certain stage of life, and once that time has elapsed the loss cannot be reversed. The fact that they made her a research subject is somewhat troubling; the focus should instead have been on helping her develop.
The critical period hypothesis suggests that humans are born with the principles of language, but there is a window of time in which a first language can be acquired, and that window closes at puberty.
The Genie Team put the hypothesis to test by helping Genie learn to attach words to her world and her emotions. Examples of this in the movie show Genie working with Susan Curtiss, learning about shapes and textures. Another example is when Marilyn Rigler showed Genie how to turn her anger and frustration outward by expressing it physically, and, although limited, verbally. Genie also needed guidance in getting connected with her physical world. This was demonstrated in the scene where Marilyn Rigler, who was reviewing the statement “I jump” with Genie, encouraged Genie to demonstrate ‘jump’. She eventually began linking her new vocabulary to the past, and was able to describe where she spent her waking hours at her parents’ home (in the potty chair).
Genie's language acquisition was similar to that of young children learning a first language in that she repeated words, as seen in the shoe-tying scene, where she says 'doctor'. She is hard to understand, but nonetheless repeating. It differed in the way that she wasn't at all vocal in the beginning, as babies and toddlers are, and her team attributed this to the physical abuse that she suffered prior to her rescue. Although she was past puberty, she seemed to be learning a first language. Years into the study, though, Genie was unable to put sentences together, supporting the hypothesis that at least some aspects of language acquisition are time-sensitive.
In Victor's case, Dr. Itard used methods designed for teaching the deaf to teach Victor language skills. Itard incorporated traditional methods for the time, along with some that he created, such as the cut-out letters, to teach Victor how to read. He taught Victor how to put together simple phrases, and also tested his understanding. An example is when Itard switched up the words in the commands he was issuing to test Victor's understanding. Although the commands didn't make sense, Victor was able to improvise, suggesting that he did understand. This was demonstrated when Itard asked Victor to tear a stone, and he smashed the stone. After 6 years of working with Victor, Itard's 'forbidden experiment' was ended, because Victor's progress had plateaued, similar to Genie's.
I don't think Genie's case was handled appropriately because she was bounced around too much. Even though her mother was acquitted of abuse, she was not capable of caring for Genie, and it was a setback for Genie to return there. We don't know for sure if there were any agendas; everyone seemed to care deeply for her, but in addition to much therapy, she needed a stable and loving home environment, and she never got it.
The critical period hypothesis says that there is a period of life in humans that is particularly ripe for learning languages. If a human doesn't learn a first language by the time they reach puberty, it may be too late to learn any language. Lenneberg hypothesized that there was a deadline for learning language. This remained only a hypothesis because you can't ethically deprive a child of language during their critical period; testing it directly was called the Forbidden Experiment. Genie had multiple teachers throughout her life who tried to teach her English. Genie's learning of English was different from other children's. She had emotion words, shape words, color words, etc. However, the process by which she learned these words was very similar to other children's. She learned English through books, flashcards, and even pointing at things and asking what they are. Rigler also taught Genie ASL, which is something Victor's teacher failed to do. Unfortunately, Genie was unable to form grammatically correct sentences. She was able to use her vocabulary successfully and convey what she was feeling, but she could not do so with correct grammar. This supported Lenneberg's critical period hypothesis. Victor was also given a teacher who helped him learn language. He was taught in the same way that deaf children were taught English. Victor, though, never really learned how to talk; he only knew how to read simple words. This also supported Lenneberg's hypothesis. Both children were given teachers who attempted to teach them a first language, and the outcome in both cases supported Lenneberg's hypothesis.
I think that Genie's case was handled about as appropriately as it could have been. She was left in the care of scientists who grew to love her. Their only reason for being there in the first place was to learn something, but they left wanting the best for her. I think they did the best that they could in finding a balance between taking care of Genie and doing what they came there for, which was obviously science. The only criticism I have is the choice Rigler and his wife made to give up Genie once the study was over. If they loved her as much as they claimed, why wouldn't they keep her? Surely she's a handful, but they were as qualified as anyone to raise her. Evidently, she was placed into horrible homes and was abused. I feel like she ended up in a similar place to where she started. That isn't cool at all; they had a hand in putting her there. Otherwise, though, I think they did a pretty good job, or at least the best they could have done given the circumstances.
The critical period hypothesis is the idea that language acquisition can only happen during early childhood and stops around age 13. With both Genie and Victor, this was put to the test in different ways. Victor was taught simple words and sentences but was never really able to talk. He was able to follow commands, as well as come up with ways to follow commands using different methods. When asked to 'tear the cup,' he smashed it on the ground.
Genie was taught to speak and also to sign. She understood simple words and sentences, and she was able to talk/sign and use those words to describe her wants, feelings, etc. Genie was never able to communicate in a way that was grammatically correct; she would say things like 'applesauce store buy.'
The biggest difference between Genie's language acquisition and that of a young child is that Genie was around 12 years old and had not been nurtured or cared for at all during her formative years. A young child learns language by being nurtured, spoken to, sung to, read to, etc. One similarity I noticed is that Genie's speech acquisition was never coherent (for lack of a better term); instead she seemed to imitate/repeat the intonations of the words she heard. Reflecting on the video, the word 'sharp' comes to mind. The teacher said the words 'sharp square' in a high-pitched, baby voice of sorts, and then Genie repeated 'sharp' using that same high-pitched voice.
I think Genie’s case could have been handled much better towards the end. When she was first discovered, she was a marvel. A possible psychological breakthrough, especially after Wild Child and Victor’s story. When her story got out, her team was given ample funding to support their research on her. She was loved and cared for in the hospital, but as time went on, her development plateaued and the funding ran dry. There was no more compensation for the research, so Genie’s team took less and less interest in helping her develop and grow, and treated her like an experiment gone wrong.
The critical period hypothesis is the hypothesis that there is a period when children are most receptive to learning a language, and that if language is not learned in this time, the child may not be capable of learning one at all. It is proposed that after a certain point, around adolescence, it may be too late. This hypothesis was tested in the cases of Genie and Victor through the various exercises they were put through. There were clips shown of Genie actively repeating new words she was learning. This was done with flashcards when she worked with the linguistics specialist, and in the moment when she was exposed to new objects. These exercises showed that she was capable of attaching different vocabulary words to objects around her. With Victor, this was shown in the exercise where his caretaker would create sentences, show him what each sentence meant, and then have Victor repeat the sentence and the behavior. Victor was also able to show an understanding of the words in a sentence even when the sentence didn't grammatically make sense. Given their ability to succeed in these exercises, it seems like they would be able to learn a first language, even if later than expected. However, neither of them was ever able to speak fluently.
Genie’s language acquisition was similar to young children learning a language in the sense that she started off repeating words she learned and showing what they meant, which is typical of children. It was said that she learned certain words in a different order than usual; she was familiar with some more advanced words while not knowing some basic words. Genie didn’t have the childhood experience of learning language through hearing it a lot as a child, so her overall acquisition of words was different as it was being actively tested by different professionals. She was put through rigorous tests that the average child wouldn’t go through.
I think her case was handled just about as I would expect it to be. It definitely would have been handled more appropriately if she had had fewer professionals testing her for different things. It seems like if she had just met with a linguistics professional and maybe a psychologist, that would have been enough to build up her knowledge so she could communicate. It seems like it was wrong for so many people to be interested in her case, but I think it makes sense that people would be eager to be responsible for such a study. The video mentioned that it was abnormal for one man to be responsible for so many tasks in one person's medical case. Given the way her life situation ended up after he decided to stop caring for her, it seems like it would have been better for him to continue to care for her. A lot of things could have been done differently.
In this video, two specific individuals were observed: Genie and Victor. Victor was a young boy, about the age of 12, when he emerged from the forest. Genie was 13 years old when she was found being kept in isolation in her own home. Both of these children were in or around their stages of puberty (females range from 10-14 yrs and males range from 12-16 yrs). This is where the critical period hypothesis comes into play. This hypothesis states that prior to puberty is the ideal time for language acquisition. Since both of these individuals were at this time in their lives, their language development was in question. Itard worked with Victor to prove that his apparent mental retardation was caused by his isolation from the world, just as Genie's team was doing with her. In both cases, when they were found, neither of them had the posture of a normal human, they could not speak, and they moved in very animal-like ways, for lack of better words. It was up to the psychologists and doctors to see whether the language aspect that makes us human could still be attained in these two individuals.
Chomsky, a linguist from MIT, claimed that language is acquired because we are born with the principles necessary to acquire it, not strictly through nurture. Curtiss was one of the linguists who worked with Genie, and she saw right from the beginning that Genie's speech was limited, more in the sense of babbling noises and pointing than of forming her thoughts. Kent, also part of Genie's team, followed Genie's emotional development, given her lack of relationships. One of the things from the video that stuck out to me about Genie's mental age was her fascination with the helium balloons. So simple, yet her described reaction to the balloon was almost how I would picture a baby laughing at their parents making silly faces. Soon after that, they describe her curiosity about the outside world (or colors) almost like a toddler continuously asking "why?" about what seems like everything. She was unable to express her anger, and Marilyn (Rigler's wife) taught her how to express herself in tantrums, like those of a young child.
They worked with Genie on expressing herself, forming coherent thoughts, and even tried to help Genie recall memories from her childhood through role playing exercises. They repeatedly tested her with vocabulary, flashcards, books with pictures, shapes, and even taught her sign language to avoid the vocal mistakes of Victor. It appears that she was beginning to learn language, but it then seemed to plateau despite the fact that her mental age seemed to increase by one year every year since she had been found.
Victor was placed in the Paris Institute for the Deaf to try to help him become reintegrated with society. He responded well to the deaf teaching styles. He worked with cut-out letters, reading, and phrases paired with the actions that went along with them. For the most part, Victor showed improvement, but he also plateaued in success (at a much lower level than Genie). Victor seemed to never learn the speaking aspect of language.
As far as the testing that was done on Genie, I do not believe that it was inappropriate. When Genie came to the hospital, she had little if any normal functioning. Her developmental status was that of a very young child. When a parent has a young child, they teach them about the things around them and about language through repetition and close observation. This is what was being done with Genie. Yes, her case was more scientific in the sense that they were not her parents, but they did teach her how to handle language, emotions, and relationships. These are all things that are typically done by the caregiver in the same sense. As for how they recorded their information, I do think they could have been more professional and organized, but when you have a case such as this one that requires almost constant attention, gathering the data should have been more important than the strict organization of it. Overall, I thought the video was very interesting. To see scientific examples of the development of language in these circumstances is not something that you see every day, for obvious reasons. Seeing how the doctors and psychologists approached the circumstances was very interesting, and I do not see how they could have done it any other way.
Genie's case was so unique that everyone was interested in testing her ability to learn a language. They wanted to see whether, after such deprivation, language could still be acquired. Their hypothesis was that there is a period in human life when we are especially ripe for learning languages. They also were not sure whether Genie had been born brain-damaged. They tested her by showing her words on paper and having her read them. She struggled with some phrases, like "very very very dark blue," and understood only basic words. She did not want attachments with anyone.
Victor's problem was that he was never able to read or talk. Lane tracked Victor's case. The questions were: can Victor learn any language, and what is it that makes us human? Victor became the subject of the "forbidden experiment" and was brought to the National Institute for the Deaf. The problem was that the environment was not suitable for Victor to learn. Others believed Victor was mentally deficient and unable to learn. Itard tested Victor by making him stand in front of a mirror while holding an apple behind Victor's head.
Her language acquisition was similar to young children's in that she could listen and learn visually before being able to talk. The difference was that Genie was able to read and express words later, while not many young children can (I am not sure of the exact age for young children; if they are around 5 years old, some of them can read like Genie). Another difference, in general, was that Genie remembered her developmental breakdown from when she was young, while young children may not remember. For instance, she remembered things when she returned to the house where she grew up.
I think the case of Genie was handled appropriately because she had a lot of time to learn and was able to develop herself from the bottom up, like a child. Also, the process itself was professional, since doctors and psychologists were involved and they knew what was best for Genie. At the end of the video, Genie could read and express words.
The Organization of Language

Language use involves a special type of translation. I might, for example, want to tell you about a happy event in my life, and so I need to translate my ideas about the event into sounds that I can utter. You, in turn, detect those sounds and need to convert them into some sort of comprehension. How does this translation, from ideas to sounds and then back to ideas, take place?
The answer lies in the fact that language relies on well-defined patterns: patterns in how individual words are used, patterns in how words are put together into phrases. I follow those patterns when I express my ideas, and the same patterns guide you in figuring out what I just said. In essence, then, we're both using the same "codebook," with the result that (most of the time) you can understand my messages, and I yours.
But where does this "codebook" come from? And what's in the codebook? More concretely, what are the patterns of English (or whatever language you speak) that, apparently, we all know and use? As a first step toward tackling these issues, let's note that language has a well-defined structure, as depicted in Figure 10.1. At the highest level of the structure (not shown in the figure) are the ideas intended by the speaker, or the ideas that the listener derives from the input. These ideas are typically expressed in sentences: coherent sequences of words that express the speaker's intended meaning. Sentences, in turn, are composed of phrases, which are composed of words. Words are composed of morphemes, the smallest language units that carry meaning. Some morphemes, like "umpire" or "talk," are units that can stand alone, and they usually refer to particular objects, ideas, or actions. Other morphemes get "bound" onto these "free" morphemes and add information crucial for interpretation. Examples of bound morphemes in Figure 10.1 are the past-tense morpheme "ed" and the plural morpheme "s." Then, finally, in spoken language, morphemes are conveyed by sounds called phonemes, defined as the smallest unit of sound that serves to distinguish words in a language.
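The free-versus-bound morpheme distinction described above can be sketched as a toy splitter. The tiny suffix inventory below is an assumption for illustration only; real morphology is far richer and needs a dictionary to resolve ambiguity.

```python
# Toy morpheme splitter: peel known bound morphemes off the end of a
# word, leaving the free morpheme. The suffix list is a simplification
# invented for this demo, not a complete inventory of English.
BOUND_SUFFIXES = ["ed", "ing", "s"]

def split_morphemes(word):
    """Return [free_morpheme, bound_morpheme, ...] for a word."""
    bound = []
    stripped = True
    while stripped:
        stripped = False
        for suf in BOUND_SUFFIXES:
            # Require a remaining stem of at least two letters.
            if word.endswith(suf) and len(word) > len(suf) + 1:
                bound.insert(0, suf)
                word = word[: -len(suf)]
                stripped = True
                break
    return [word] + bound

print(split_morphemes("talked"))   # ['talk', 'ed']
print(split_morphemes("umpires"))  # ['umpire', 's']
```

A real system would also need to handle spelling changes at the boundary (e.g., "hopped" to "hop" + "ed"), which this sketch ignores.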
Language is also organized in another way: Within each of these levels, people can combine and recombine the units to produce novel utterances, assembling phonemes into brand-new morphemes or assembling words into brand-new phrases. Crucially, though, not all combinations are possible, so that a new breakfast cereal, for example, might be called "Klof" but would probably seem strange to English speakers if it were called "Ngof." Likewise, someone might utter the novel sentence "I admired the lurking octopi" but almost certainly wouldn't say, "Octopi admired the I lurking." What lies behind these points? Why are some sequences acceptable, even if strange, while others seem awkward or even unacceptable? The answers to these questions are crucial for any understanding of what language is.

Phonology
Let's use the hierarchy in Figure 10.1 as a way to organize our examination of language. We'll start at the bottom of the hierarchy, with the sounds of speech.
The Production of Speech
In ordinary breathing, air flows quietly out of the lungs and up through the nose and mouth (see Figure 10.2). There will usually be some sort of sound, though, if this airflow is interrupted or altered, and this fact is crucial for vocal communication.
For example, within the larynx there are two flaps of muscular tissue called the "vocal folds." (These structures are also called the "vocal cords," although they're not cords at all.) These folds can be rapidly opened and closed, producing a buzzing sort of vibration known as voicing. You can feel this vibration by putting your palm on your throat while you produce a [z] sound. You'll feel no vibration, though, if you hiss like a snake, producing a sustained [s] sound. Try it! The [z] sound is voiced; the [s] is not.
You can also produce sound by narrowing the air passageway within the mouth itself. For example, hiss like a snake again and pay attention to your tongue's position. To produce this sound, you placed your tongue's tip near the roof of your mouth, just behind your teeth; the [s] sound is the sound of the air rushing through the narrow gap you created.
If the gap is somewhere else, a different sound results. For example, to produce the [sh] sound (as in "shoot" or "shine"), the tongue is positioned so that it creates a narrow gap a bit farther back in the mouth; air rushing through this gap causes the desired sound. Alternatively, the narrow gap can be more toward the front. Pronounce an [f] sound; in this case, the sound is produced by air rushing between your bottom lip and your top teeth.

These various aspects of speech production provide a basis for categorizing speech sounds. We can distinguish sounds, first, according to how the airflow is restricted; this is referred to as manner of production. Thus, air is allowed to move through the nose for some speech sounds but not others. Similarly, for some speech sounds, the flow of air is fully stopped for a moment (e.g., [p], [b], and [t]). For other sounds, the air passage is restricted, but air continues to flow (e.g., [f], [z], and [r]).
Second, we can distinguish between sounds that are voiced (produced with the vocal folds vibrating) and those that are not. The sounds of [v], [z], and [n] (to name a few) are voiced; [f], [s], [t], and [k] are unvoiced. (You can confirm this by running the hand-on-throat test while producing each of these sounds.) Finally, sounds can be categorized according to where the airflow is restricted; this is referred to as place of articulation. For example, you close your lips to produce "bilabial" sounds like [p] and [b]; you place your top teeth close to your bottom lip to produce "labiodental" sounds like [f] and [v]; and you place your tongue just behind your upper teeth to produce "alveolar" sounds like [t] and [d].
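The dimensions just described (manner of production, voicing, place of articulation) can be treated as a small feature bundle per phoneme. Here is a minimal sketch; the feature labels and the tiny eight-phoneme inventory are simplifications for illustration, not the chapter's own notation.

```python
# Each phoneme as a (manner, voicing, place) feature bundle.
# Inventory and labels are deliberately simplified for the demo.
FEATURES = {
    "p": ("stop", "unvoiced", "bilabial"),
    "b": ("stop", "voiced", "bilabial"),
    "f": ("fricative", "unvoiced", "labiodental"),
    "v": ("fricative", "voiced", "labiodental"),
    "t": ("stop", "unvoiced", "alveolar"),
    "d": ("stop", "voiced", "alveolar"),
    "s": ("fricative", "unvoiced", "alveolar"),
    "z": ("fricative", "voiced", "alveolar"),
}

def minimal_pairs_by(feature_index):
    """Find phoneme pairs that differ in exactly one feature slot."""
    pairs = []
    items = list(FEATURES.items())
    for i, (p1, f1) in enumerate(items):
        for p2, f2 in items[i + 1:]:
            diffs = [k for k in range(3) if f1[k] != f2[k]]
            if diffs == [feature_index]:
                pairs.append((p1, p2))
    return pairs

# Slot 1 is voicing: these are exactly the pairs the
# hand-on-throat test distinguishes.
print(minimal_pairs_by(1))  # [('p', 'b'), ('f', 'v'), ('t', 'd'), ('s', 'z')]
```

Changing any one slot changes the phoneme's identity, which is the point the text makes with [p]: alter voicing and you get [b], alter place and you get [t].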
This categorization scheme enables us to describe any speech sound in terms of a few simple features. For example, what are the features of a [p] sound? First, we specify the manner of production: This sound is produced with air moving through the mouth (not the nose) and with a full interruption to the flow of air. Second, voicing: The [p] sound happens to be unvoiced. Third, place of articulation: The [p] sound is bilabial. These features are all we need to identify the [p], and if any of these features changes, so does the sound's identity. In English, these features of sound production are combined and recombined to produce 40 or so different phonemes. Other languages use as few as a dozen phonemes; still others use many more. (For example, there are 141 different phonemes in the language of Khoisan, spoken by the Bushmen of Africa; Halle, 1990.) In all cases, though, the phonemes are created by simple combinations of the features just described.

The Complexity of Speech Perception

This description of speech sounds invites a simple proposal about speech perception. We've just said that each speech sound can be defined in terms of a small number of features. Perhaps, then, all a perceiver needs to do is detect these features, and with this done, the speech sounds are identified.
It turns out, though, that speech perception is more complicated. Consider Figure 10.3, which shows the moment-by-moment sound amplitudes produced by a speaker uttering a brief greeting. It's these amplitudes, in the form of air-pressure changes, that reach the ear, and so, in an important sense, the figure shows the pattern of input with which "real" speech perception begins.
Notice that within this stream of speech there are no markers to indicate where one phoneme ends and the next begins. Likewise, there are, for the most part, no gaps to indicate the boundaries between successive syllables or successive words. Therefore, as your first step toward phoneme identification, you need to "slice" this stream into the appropriate segments, a step known as speech segmentation.
For many people, this pattern comes as a surprise. Most of us are convinced that there are brief pauses between words in the speech that we hear, and it's these pauses, we assume, that mark the word boundaries. But this perception turns out to be an illusion, and we are "hearing" pauses that aren't actually there. This is evident when we "hear" the pauses in the "wrong places" and segment the speech stream in a way the speaker didn't intend (see Figure 10.4). The illusion is also revealed when we physically measure the speech stream (as we did in order to create Figure 10.3) or when we listen to speech we can't understand, for example, speech in a foreign language. In the latter circumstance, we lack the skill needed to segment the stream, so we're unable to "supply" the word boundaries. As a consequence, we hear what is really there: a continuous, uninterrupted flow of sound. That is why speech in a foreign language often sounds so fast.
Speech perception is further complicated by a phenomenon known as coarticulation (Liberman, 1970; also Daniloff & Hammarberg, 1973). This term refers to the fact that in producing speech, you don't utter one phoneme at a time. Instead, the phonemes overlap, so that while you're uttering the [s] sound in "soup," for example, your mouth is getting ready to say the vowel. While uttering the vowel, you're already starting to move your tongue, lips, and teeth into position for producing the [p].
This overlap helps to make speech production faster and considerably more fluent. But the overlap also has consequences for the sounds produced, so that the [s] you produce while getting ready for one upcoming vowel is actually different from the [s] you produce while getting ready for a different vowel. As a result, we can't point to a specific acoustical pattern and say, "This is the pattern of an [s] sound." Instead, the acoustical pattern is different in different contexts. Speech perception therefore has to "read past" these context differences in order to identify the phonemes produced.

Aids to Speech Perception

The need for segmentation in a continuous speech stream, the variations caused by coarticulation, and also the variations from speaker to speaker all make speech perception rather complex. Nonetheless, you manage to perceive speech accurately and easily. How do you do it?
Part of the answer lies in the fact that the speech you encounter, day by day, is surprisingly limited in its range. Each of us knows tens of thousands of words, but most of these words are rarely used. In fact, we’ve known for many years that the 50 most commonly used words in English make up roughly half of the words you actually hear (Miller, 1951).
In addition, the perception of speech shares a crucial attribute with other types of perception: a reliance on knowledge and expectations that supplement the input and guide your interpretation. In other words, speech perception (like perception in other domains) weaves together "bottom-up" and "top-down" processes: processes that, on the one side, are driven by the input itself and, on the other side, are driven by the broader pattern of what you know.
In perceiving speech, therefore, you don’t rely just on the stimuli you receive (that’s the bottom-up part). Instead, you supplement this input with other knowledge, guided by the context. This is evident, for example, in the phonemic restoration effect. To demonstrate this effect, researchers start by recording a bit of speech, and then they modify what they’ve recorded. For example, they might remove the [s] sound in the middle of “legislatures” and replace the [s] with a brief burst of noise. This now-degraded stimulus can then be presented to participants, embedded in a sentence such as The state governors met with their respective legi*latures.
When asked about this stimulus, participants insist that they heard the complete word, “legislatures,” accompanied by a burst of noise (Repp, 1992; Samuel, 1987, 1991). It seems, then, that they use the context to figure out what the word must have been, but then they insist that they actually heard the word. In fact, participants are often inaccurate if asked when exactly they heard the noise burst. They can’t tell whether they heard the noise during the second syllable of “legislatures” (so that it blotted out the missing [s], forcing them to infer the missing sound) or at some other point (so that they were able to hear the missing [s] with no interference). Apparently, the top-down process literally changes what participants hear-leaving them with no way to distinguish what was heard from what was inferred.
How much does the context in which we hear a word help us? In a classic experiment, Pollack and Pickett (1964) recorded a number of naturally occurring conversations. From these recordings they spliced out individual words and presented them in isolation to their research participants. With no context to guide them, participants were able to identify only half of the words. If restored to their original context, though, the same stimuli were easy to identify. Apparently, the benefits of context are considerable.

Categorical Perception
Speech perception also benefits from a pattern called categorical perception. This term refers to the fact that people are much better at hearing the differences between categories of sounds than they are at hearing the variations within a category of sounds. In other words, you’re very sensitive to the differences between, say, a [g] sound and a [k], or the differences between a [d] and a [t]. But you’re surprisingly insensitive to differences within each of these categories. So you have a hard time distinguishing, say, one [p] sound from another, somewhat different [p] sound. And, of course, this pattern is precisely what you want, because it enables you to hear the differences that matter without hearing (and being distracted by) inconsequential variations within the category.

Demonstrations of categorical perception generally rely on a series of stimuli, created by computer. The first stimulus in the series might be a [ba] sound. Another stimulus might be a [ba] that has been distorted a tiny bit, to make it a little bit closer to a [pa] sound. A third stimulus might be a [ba] that has been distorted a bit more, so that it’s a notch closer to a [pa], and so on. In this way we create a series of stimuli, each slightly different from the one before, ranging from a clear [ba] sound at one extreme, through a series of “compromise” sounds, until we reach at the other extreme a clear [pa] sound.
How do people perceive these various sounds? Figure 10.5A shows the pattern we might expect. After all, our stimuli are gradually shading from a clear [ba] to a clear [pa]. Therefore, as we move through the series, we might expect people to be less and less likely to identify each stimulus as a [ba], and correspondingly more and more likely to identify each as a [pa]. In the terms we used in Chapter 9, this would be a “graded-membership” pattern: Test cases close to the [ba] prototype should be reliably categorized; as we move away from this prototype, cases should be harder and harder to categorize.
However, the actual data, shown in Figure 10.5B, don’t fit with this prediction. Even though the stimuli are gradually changing from one extreme to another, participants “hear” an abrupt shift, so that roughly half the stimuli are reliably categorized as [ba] and half are reliably categorized as [pa]. Moreover, participants seem indifferent to the differences within each category. Across the first dozen stimuli, the syllables are becoming less and less [ba]-like, but this is not reflected in how the listeners identify the sounds. Likewise, across the last dozen stimuli, the syllables are becoming more and more [pa]-like, but again, this trend has little effect. What listeners seem to hear is either a [pa] or a [ba], with no gradations inside of either category. (For early demonstrations, see Liberman, Harris, Hoffman, & Griffith, 1957; Lisker & Abramson, 1970; for reviews, see Handel, 1989; Yeni-Komshian, 1993.)
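The contrast between the expected graded pattern and the observed categorical pattern can be sketched numerically. In this sketch, the 14-step continuum, the boundary location, and the logistic steepness are all invented values chosen for illustration; they are not fitted to any real identification data.

```python
import math

# Stimulus 1 = clear [ba]; stimulus 14 = clear [pa].
# All numerical choices here are illustrative, not fitted to data.
STEPS = range(1, 15)
BOUNDARY = 7.5  # midpoint of the continuum

def graded_prediction(step):
    """The Figure 10.5A-style expectation: the probability of a [pa]
    response rises gradually, one even notch per stimulus."""
    return (step - 1) / 13

def categorical_response(step, steepness=4.0):
    """The Figure 10.5B-style result: a steep logistic, so nearly every
    stimulus is heard as clearly [ba] or clearly [pa], with an abrupt
    shift at the category boundary."""
    return 1 / (1 + math.exp(-steepness * (step - BOUNDARY)))

for step in STEPS:
    print(step,
          round(graded_prediction(step), 2),
          round(categorical_response(step), 2))
```

Printed side by side, the graded column climbs in even steps, while the categorical column stays near 0, jumps across the boundary, and stays near 1, mirroring the abrupt shift listeners actually report.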
It seems, then, that your perceptual apparatus is “tuned” to provide just the information you need. After all, you want to know whether someone advised you to “take a path” or “take a bath.” You certainly care whether a friend said, “You’re the best” or “You’re the pest.” Plainly, the difference between [b] and [p] matters to you, and this difference is clearly marked in your perception. In contrast, you usually don’t care how exactly the speaker pronounced “path” or “best”-that’s not information that matters for getting the meaning of these utterances. And here too, your perception serves you well by largely ignoring these “subphonemic” variations. (For more on the broad issue of speech perception, see Mattys, 2012.)

Combining Phonemes

English relies on just a few dozen phonemes, but these sounds can be combined and recombined to produce thousands of different morphemes, which can themselves be combined to create word after word after word. As we mentioned earlier, though, there are rules governing these combinations, and users of the language reliably respect these rules. So, in English, certain sounds (such as the final sound in “going” or “flying”) can occur at the end of a word but not at the beginning. Other combinations seem prohibited outright. For example, the sequence “tlof” seems anomalous to English speakers; no words in English contain the “tl” combination within a single syllable. (The combination can, however, occur at the boundary between syllables-as in “motley” or “sweetly.”) These limits, however, are simply facts about English; they are not at all a limit on what human ears can hear or human tongues can produce, and other languages routinely use combinations that for English speakers seem unspeakable. There are also rules governing the adjustments that occur when certain phonemes are uttered one after another.
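One such adjustment, spelled out in the next paragraph, is the choice between [s] and [z] for the English plural ending. As a minimal sketch, a noun’s final sound can be looked up in a small table. The sound labels and the sets below are simplifications invented for illustration (real phonology works on articulatory features, not letter-like labels), and the sibilant case (“buses,” “wishes”) goes beyond what the chapter itself states.

```python
# Rough labels for word-final sounds; a simplification for illustration.
VOICELESS_FINALS = {"p", "t", "k", "f", "th"}        # "tapes", "cats", "books"
SIBILANT_FINALS = {"s", "z", "sh", "ch", "zh", "j"}  # "buses", "wishes"

def plural_ending(final_sound):
    """Return the pronunciation of the plural suffix for a noun whose
    base ends in the given sound, following the voicing rule."""
    if final_sound in SIBILANT_FINALS:
        return "[iz]"   # extra-syllable case, e.g., "buses"
    if final_sound in VOICELESS_FINALS:
        return "[s]"    # unvoiced final, e.g., "books"
    return "[z]"        # voiced final, e.g., "bags", "duds", "wugs"

print(plural_ending("k"))  # [s], as in "books"
print(plural_ending("g"))  # [z], as in "wugs"
```

On this sketch, a made-up noun like “wug” (voiced final) automatically comes out with the [z] ending, just as speakers pronounce it.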
For example, consider the “s” ending that marks the English plural-as in “books,” “cats,” and “tapes.” In these cases, the plural is pronounced as an [s]. In other contexts, though, the plural ending is pronounced differently. Say these words out loud: “bags,” “duds,” “pills.” If you listen carefully, you’ll realize that these words actually end with a [z] sound, not an [s] sound. English speakers all seem to know the rule that governs this distinction. (The rule hinges on whether the base noun ends with a voiced or an unvoiced sound; for classic statements of this rule, see Chomsky & Halle, 1968; Halle, 1990.) Moreover, they obey this rule even with novel, made-up cases. For example, I have one wug, and now I acquire another. Now, I have two . . . what? Without hesitation, people pronounce “wugs” using the [z] ending-in accord with the standard pattern. Even young children pronounce “wugs” with a [z], and so, it seems, they too have internalized-and obey-the relevant principles (Berko, 1958).

Demonstration 10.1: Phonemes and Subphonemes
A phoneme is the smallest unit of sound that can distinguish words within a language. The difference between “bus” and “fuss,” therefore, lies in the initial phoneme, and the difference between “bus” and “butt” lies in the words’ last phoneme. However, not all sound differences are phonemic differences. For example, the word “bus” can be pronounced with a slightly longer [s] sound at the end, or with a slightly shorter [s]. Even with this difference, both of these pronunciations refer to the motor vehicle used to transport groups of people. The difference between the long [s] and the short [s], in other words, is a subphonemic difference-one that does not distinguish words within the language.
What counts as a subphonemic difference, however, depends on the language. To see this, imagine that you were going to say the word “key” out loud. Get your tongue and teeth in position so that you’re ready in an instant to say this word-but then freeze in that position, as if you were waiting for a “go” signal before making any noise. Now, imagine that you were going to say “cool” out loud. Again, get your tongue and teeth in position so that you’re ready to say this word. Now, go back to being ready, in an instant, to say “key.” Now, go back to being ready, in an instant, to say “cool.” Can you feel that your tongue is moving into different positions for these two words? That’s because, in English, the [k] sound used in the word “key” is different from the [k] sound used in “cool,” and so, to make the different sounds, you need to move your tongue into different positions.
The difference between the two [k] sounds is, however, subphonemic in English (so it doesn’t matter for meaning). In other words, there is an acoustic difference between the two sounds, but this difference is inconsequential for English speakers. The difference does matter, though, in other languages, including several Arabic languages. In these languages, there are two distinct [k] sounds, and changing from one to the other will alter the identity of the word being spoken, even if nothing else in the word changes. In other words, the difference between the two [k] sounds is phonemic in Arabic, not subphonemic.
These differences between languages actually lead to changes in how we perceive sounds. Most English speakers, for example, cannot hear the difference between the two [k] sounds; for an Arabic speaker, the difference is obvious. A related example is the difference between the starting sound in “led” and the starting sound in “red.” This difference is phonemic in English, so that (for example) “led” and “red” are different words, and English speakers easily hear the difference. The same acoustic difference, however, is subphonemic in many Asian languages, and speakers of these languages have trouble hearing the difference between words like “led” and “red” or between “load” and “road.” It seems, then, that speakers of different languages literally hear the world differently.
Can you learn to control which [k] sound you produce? Once again, get your mouth ready to say “key” out loud, but freeze in place before you make any sound. Now, without changing your tongue position at all, say “cool” out loud. If you do this carefully, you’ll end up producing the [k] sound that’s normally the start for “key,” but you’ll pronounce it with the “oo” vowel. If you practice a little, you’ll get better at this; and if you listen very carefully, you can learn to hear the difference between “cool” pronounced with this [k] and “cool” pronounced with its usual [k]. If so, you’ve mastered one (tiny) element of speaking Arabic!

In this exercise, you’ve received instruction in how to produce (and perhaps perceive) these distinct sounds. Most people, however, receive no such instruction, leading us to ask: How do they learn to hear these different sounds, and to produce them, as required by the language they ordinarily speak? The answer, oddly enough, is that they don’t learn. Instead, young infants just a couple of months old seem perfectly capable of hearing the difference between the two [k] sounds, whether they live in an English-speaking community or an Arabic-speaking community. Likewise, young infants a few months old seem perfectly capable of hearing the difference between “led” and “red,” whether they’re born in an English-speaking environment or in Tokyo. However, if a distinction is not phonemic in their language, the infants lose the ability to hear these differences. This is, in other words, a clear case of “use it or lose it,” and so, in a sense, very young infants (say, 3 months of age) can hear and make more distinctions than older infants (10 or 11 months).

Demonstration 10.2: The Speed of Speech

Humans are incredibly skilled in most aspects of language use, including the “decoding” of the complex acoustic signals that reach our ears whenever we’re listening to someone speaking.
There are many ways to document this skill, including some observations about how well we can understand fast speech.
The “normal” rate of speaking is hard to define. People from New York, for example, have a reputation for speaking quickly (although most New Yorkers are convinced that they speak at a “normal” speed and everyone else speaks slowly!). Likewise, people from America’s South have a reputation for speaking slowly (although, again, they’re convinced that theirs is the normal rate, and everyone else is in too much of a hurry). Across this variation, though, the rate of 180 words per minute (roughly 15 phonemes per second) is often mentioned as a normal rate for spoken English.
Languages also differ in their speed. According to one recent estimate, Japanese speakers typically fire off speech at a pace of 7.84 syllables per second; Spanish is a bit slower, at 7.82 syllables per second. English is slower still (6.19 syllables per second) but not as slow as Mandarin Chinese (5.18). We need to be clear, though, that languages also differ in how much information they pack into a single syllable. Chinese speakers, for example, provide further information via the “tone” with which the syllable is pronounced, so even if they produce fewer syllables per second, they may provide more information per syllable!

In any language, though, what happens with faster speech? Some “fast talkers” are hard to understand because they slur their speech or blur their pronunciation; this is true, for example, of many auctioneers, who may be blurring their speech deliberately to create a sense of excitement in the auction (with the aim of encouraging more bids!). For examples, visit YouTube (www.youtube.com) and search for “fastest talker.” You’ll find videos of individuals who hold the Guinness Record for fast talking (about 600 words per minute), and you’ll see that, at this speed, these people are almost impossible to understand. However, the problem here isn’t the speed itself. Instead, these super-fast speakers use various shortcuts to achieve their speed, and the shortcuts create difficulties for the listener. For example, these fast talkers don’t change their pitch as much as other people do, so they end up speaking in a monotone. This is a concern for listeners, who ordinarily use prosody as one of the cues marking phrase boundaries, and also as a cue highlighting key words. (Hearing these key words helps the listener to understand the gist of what’s being said, which in turn helps the listener to “decode” the speech.) In addition, ordinary speakers use variations in speed as a further cue, slowing down (for example) to put emphasis on a word or to mark the end of a phrase. Fast talkers, in contrast, don’t just talk at a high speed; they also talk at a uniform speed, and this impedes the listener.

If we set all these shortcuts to the side, what is the fastest speech we can understand? Again, the answer varies, and some of us seem to be “faster listeners,” just as some of us are “faster talkers.” As a rough estimate, though, most English-speakers can follow speech at 250 words per minute (40% faster than “normal”). But this estimate may be conservative. Again, visit YouTube, and search for some of the classic TV commercials for FedEx starring John Moschitta. (Or search for Moschitta’s appearance on Sesame Street.) You’ll need to pay close attention to follow his speech, but you probably can do it!
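The rates quoted in this demonstration can be cross-checked with a few lines of arithmetic. The figure of 5 phonemes per word is implied by the chapter’s numbers (180 words per minute, roughly 15 phonemes per second) rather than stated directly.

```python
# Check the implied phonemes-per-word figure.
normal_wpm = 180                 # "normal" speaking rate, words per minute
phonemes_per_second = 15
phonemes_per_word = phonemes_per_second * 60 / normal_wpm
print(phonemes_per_word)         # 5.0

# Check the "40% faster than normal" claim for 250 words per minute.
fast_wpm = 250
speedup_percent = (fast_wpm - normal_wpm) / normal_wpm * 100
print(round(speedup_percent))    # 39, i.e., roughly the "40%" quoted above
```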
For more on this topic, see Pellegrino, F., Coupé, C., & Marsico, E. (2011). A cross-language perspective on speech information rate. Language, 87, 539-558.

Demonstration 10.3: Coarticulation

Speech perception is made more challenging by the fact of coarticulation-the fact that as you pronounce each phoneme, you’re already getting ready for the next phoneme. Coarticulation guarantees variation in the actual sounds associated with each phoneme-because the pronunciation (and therefore the sound) is shaped by the phoneme before and the phoneme after the one you’re uttering right now.
We’ve already encountered coarticulation in an earlier demonstration, focused on the contrast between phonemic and subphonemic differences. (Specifically, in Demonstration 10.1, your pronunciation of the “k” sound was influenced by the upcoming vowel.) But the variations caused by coarticulation are quite widespread. As another example, consider the shape of your mouth when you pronounce the “s” in “seat” and the shape of your mouth when you pronounce the “s” in “suit.” You’re producing the same phoneme (the [s]) in both cases, but your mouth shapes (and hence the sounds) are quite different because as you pronounce the “s” sound, you’re already getting ready to produce the vowel-and that alters how you pronounce the “s” itself. In fact, if you focus your attention in the right way, you can hear the difference between these two sounds. Get your mouth ready to say “seat,” and with your mouth “frozen” in that position, hiss the “s” sound. Now, get your mouth ready to say “suit,” and with your mouth again frozen in place, hiss this “s” sound. You may have to alternate back and forth once or twice to hear the difference. But, in truth, the difference is large if we measure the sound using the appropriate instruments, and with a tiny bit of effort, you can hear the difference too.

Demonstration 10.4: The Most Common Words

The text describes several complexities in speech perception, but there are also factors that make speech perception easier. One important factor lies in the observation that, even though there are tens of thousands of words in our language, most of what we hear, minute by minute or day by day, relies on a tiny vocabulary. Indeed, George Miller estimated, many years ago, that the 50 most commonly used words make up roughly half of what we hear.
So what are these commonly used words? You can probably guess most of them. Take a blank piece of paper, and write down your nominees for English’s most commonly used words. Then, once you’ve finished, you can check your list against the list compiled by the people who publish the Oxford English Dictionary; their list of the 100 most commonly used words can be found below. How many of the (actual) top 10 appeared on your list? How many of the (actual) top 20?

The Most Commonly Used Words in English
Source: http://en.wikipedia.org/wiki/Most_common_words_in_English#cite_note-langfacts-0

Morphemes and Words

A typical college graduate in the United States knows between 75,000 and 100,000 different words. These counts have been available for many years (e.g., Oldfield, 1963; Zechmeister, Chronis, Cull, D’Anna, & Healy, 1995), and there’s no reason to think they’re changing. For each word, the speaker knows the word’s sound (the sequence of phonemes that make up the word) and its orthography (the sequence of letters that spell the word). The speaker also knows how to use the word within various phrases, governed by the rules of syntax (see Figure 10.6). Finally, speakers know the meaning of a word; they have a semantic representation for the word to go with the phonological representation.
Building New Words

Estimates of vocabulary size, however, need to be interpreted with caution, because the size of someone’s vocabulary is subject to change. One reason is that new words are created all the time. For example, the world of computers has demanded many new terms-with the result that someone who wants to know something will often “google” it; many of us get information from “blogs”; and most of us are no longer fooled by the “phishing” we sometimes find in our “email.” The terms “software” and “hardware” have been around for a while, but “spyware” and “malware” are relatively new.
Changes in social habits and in politics also lead to new vocabulary. It can’t be surprising that slang terms come and go, but some additions to the language seem to last. Changes in diet, for example, have put words like “vegan,” “localvore/locavore,” and “paleo” into common use. The term “metrosexual” has been around for a couple of decades, and likewise “buzzword.” It was only in 2012 that Time magazine listed “selfie” as one of the year’s top ten buzzwords, and it was a 2016 vote in Great Britain that had people talking about “Brexit.”
Often, these new words are created by combining or adjusting existing words (and so “Brexit” combines “Britain” and “exit”; “paleo” is a shortened form of “Paleolithic”). In addition, once these new entries are in the language, they can be combined with other elements-usually by adding the appropriate morphemes. Imagine that you’ve just heard the word “hack” for the first time. You know that someone who does this activity is a “hacker,” that the activity itself is “hacking,” and you understand someone who says, “I’ve been hacked.” Once again, therefore, note the generativity of language-that is, the capacity to create an endless series of new combinations, all built from the same set of fundamental units. Therefore, someone who “knows English” (or someone who knows any language) hasn’t just memorized the vocabulary of the language and some set of phrases. Instead, people who “know English” know how to create new forms within the language: They know how to combine morphemes to create new words, know how to “adjust” phonemes when they’re put together into novel combinations, and so on. This knowledge isn’t conscious-and so most English speakers could not articulate the principles governing the sequence of morphemes within a word, or why they pronounce “wugs” with a [z] sound rather than an [s]. Nonetheless, speakers honor these principles with remarkable consistency in their day-to-day use of the language and in their creation of novel words.

Demonstration 10.5: Patterns in Language

Chapter 10 argues that in many ways our language is generative but also patterned. The generativity is evident in the fact that new forms can always be developed-and so you can create new words and phrases that no one has ever uttered before. The patterns, though, are evident in the observation that some sequences of words (or morphemes, or phonemes) seem acceptable and some do not-as if the generativity were governed by rules of some sort.
Thus, the rules allow you to create novel sentences like “Comical cartons cruise carefully,” but not the sequence “Cruise cartons carefully comical.” Likewise, you might say, “The leprechaun envied the cow’s artichoke,” but not “Artichoke cow’s the envied leprechaun the.”
But what is the nature of the “rules” that are apparently governing these sequences? As one way to explore this issue, ask the following questions to a couple of friends:
· Imagine that a beaver has been gnawing on a tree. If you later should see the tree, you’ll see marks the beaver has left behind. Are these tooth-marks or teeth-marks?
· Imagine that a hundred field mice are living in a farmhouse. Is the house mouse-infested or mice-infested?
· Now, imagine that 50 rats have also moved into the farmhouse. Is the house now rat-infested or rats-infested?
· Or, if 4-year-old Connor has written on the wall with three pens, did he leave behind pen-marks or pens-marks?
For the first two questions, most people feel like they could go either way-tooth-marks or teeth-marks; mouse-infested or mice-infested. They may have a preference for one form or the other, but the preference is usually weak, and they regard the other form as acceptable. For the next two questions, though, people reliably reject one of the options and insist that the house is rat-infested and that Connor left pen-marks. Is this consistent with how your friends answered?
What’s going on here? It turns out that these combinations follow a rule-one that governs how morphemes combine. In essence, the rule says that if a noun has an irregular plural, then it can be combined with other morphemes in either its plural form or its singular form. But if a noun has a regular plural, it can be combined with other morphemes only in the singular form.
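This rule is simple enough to sketch as a tiny program. In the sketch below, whether a noun’s plural is irregular is supplied by the caller; a real system would need a morphological lexicon to make that determination.

```python
def compound_bases(singular, plural, irregular):
    """Return the noun forms that can serve as the first element of a
    compound such as "X-infested" or "X-marks", following the rule that
    irregular plurals may enter compounds but regular plurals may not."""
    if irregular:
        return [singular, plural]   # "mouse-infested" or "mice-infested"
    return [singular]               # "rat-infested", but never "rats-infested"

print(compound_bases("mouse", "mice", irregular=True))
print(compound_bases("rat", "rats", irregular=False))
```

On this sketch, “tooth-marks” and “teeth-marks” both come out as acceptable, while “pens-marks” is simply never generated, matching the intuitions described above.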
Our point here, though, is not to explore this particular rule. (In fact, this rule is derived from other rules governing how morphemes combine.) Our point instead is that this regular pattern exists within English, even though most people have never been trained on this pattern, have never thought about the pattern, and may not even realize the pattern is in place. In following this rule, therefore, people are surely not consciously aware of the rule. Nonetheless, it’s a rule they’ve been following since they were 3 or 4 years old. This by itself makes it clear that language is in fact heavily patterned, and that a large part of what it means to “know a language” is to learn (and to respect) these patterns.

Syntax

The potential for producing new forms is even more remarkable when we consider the upper levels in the language hierarchy-the levels of phrases and sentences. This point becomes obvious when we ask: If you have 60,000 words in your vocabulary, or 80,000, how many sentences can you build from those words?
Sentences range in length from the very brief (“Go!” or “I do”) to the absurdly long. Most sentences, though, contain 20 words or fewer. With this length limit, it has been estimated that there are 100,000,000,000,000,000,000 possible sentences in English (Pinker, 1994). If you could read these sentences at the insane rate of 1,000 per second, you’d still need over 30,000 centuries to read through this list! (In fact, this estimate may be too low. Decades before Pinker’s work, Miller, Galanter, & Pribram, 1960, estimated that the number of possible sentences is actually 10^50-billions of times larger than the estimate we’re using here.)
Once again, though, there are limits on which combinations (i.e., which sequences of words) are acceptable and which ones are not. For example, in English you could say, “The boy hit the ball” but not “The boy hit ball the.” Likewise, you could say, “The moose squashed the car” but not “The moose squashed the” or just “Squashed the car.” Virtually any speaker of the language would agree that these errant sequences have something wrong in them, but what exactly is the problem with these “bad” strings? The answer lies in the rules of syntax-rules that govern the structure of a phrase or sentence. One might think that the rules of syntax depend on meaning, so that meaningful sequences are accepted as “sentences” while meaningless sequences are rejected as non-sentences. This suggestion, though, is wrong. As one concern, many non-sentences do seem meaningful, and no one’s confused when Sesame Street’s Cookie Monster insists “Me want cookie.” Likewise, viewers understood the monster’s wistful comment in the 1935 movie Bride of Frankenstein: “Alone, bad; friend, good.”
In addition, consider these two sentences:
‘Twas brillig, and the slithy toves did gyre and gimble in the wabe.
Colorless green ideas sleep furiously.
(The first of these is from Lewis Carroll’s famous poem “Jabberwocky”; the second was penned by the linguist Noam Chomsky.) These sentences are, of course, without meaning: Colorless things aren’t green; ideas don’t sleep; toves aren’t slithy. Nonetheless, speakers of English, after a moment’s reflection, regard these sequences as grammatically acceptable in a way that “Furiously sleep ideas green colorless” is not. It seems, therefore, that we need principles of syntax that are separate from considerations of semantics or sensibility.

Phrase Structure

The rules of syntax take several forms, but they include rules that specify which elements must appear in a phrase and (for some languages) that govern the sequence of those elements. These phrase-structure rules also specify the overall organization of the sentence-and therefore determine how the various elements are linked to one another. One way to depict phrase-structure rules is with a tree structure like the one shown in Figure 10.7. You can read the structure from top to bottom, and as you move from one level to the next, you can see that each element (e.g., a noun phrase or a verb phrase) has been “expanded” in a way that’s strictly governed by the phrase-structure rules.
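This expansion process can be sketched as a toy program. The grammar below contains just the NP + VP skeleton discussed in the chapter, with a tiny invented vocabulary; it is an illustration of how phrase-structure rules generate sentences, not a serious grammar of English.

```python
# A toy phrase-structure grammar. Symbols with entries in GRAMMAR are
# expanded; anything else is treated as a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["boy"], ["girl"]],
    "V":   [["chased"]],
}

def expand(symbol, choices):
    """Expand a symbol top-down, exactly as in a tree diagram. Where
    the grammar offers alternatives (here, only for N), the next index
    in `choices` picks one."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word: nothing left to expand
    alternatives = GRAMMAR[symbol]
    alt = alternatives[choices.pop(0)] if len(alternatives) > 1 else alternatives[0]
    return [word for part in alt for word in expand(part, choices)]

print(" ".join(expand("S", [0, 1])))  # the boy chased the girl
```

Because every expansion is strictly governed by the rules, the program can only emit well-formed NP + VP sequences; a string like “the boy hit ball the” is simply unreachable.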
Prescriptive Rules, Descriptive Rules
We need to be clear, though, about what sorts of rules we’re discussing. Let’s begin with the fact that most of us were taught, at some stage of our education, how to talk and write “properly.” We were taught never to say “ain’t.” Many of us were scolded for writing in the passive voice or starting a sentence with “And.” Warnings like these are the result of prescriptive rules-rules describing how something (in this case: language) is “supposed to be.” Language that doesn’t follow these rules, it’s claimed, is “improper” or maybe even “bad.”
You should, however, be skeptical about these prescriptive rules. After all, languages change with the passage of time, and what’s “proper” in one period is often different from what seems right at other times. In the 1600s, for example, people used the pronouns “thou” and “ye,” but those words are gone from modern usage. In more recent times, people just one generation back insisted it was wrong to end a sentence with a preposition; modern speakers think this prohibition is silly. Likewise, consider the split infinitive. Prominent writers of the 18th and 19th centuries (e.g., Ben Franklin, William Wordsworth, Henry James) commonly split their infinitives; grammarians of the early 20th century, in contrast, energetically condemned this construction. Now, in the 21st century, most English speakers seem entirely indifferent to whether their infinitives are split or not (and may not even know what a split infinitive is).

This pattern of change makes it difficult to justify prescriptive rules. Some people, for example, still insist that split infinitives are improper and must be avoided. This suggestion, however, seems to rest on the idea that the English spoken in, say, 1926 was proper and correct, and that the English spoken a few decades before or after this “Golden Age” is somehow inferior. It’s hard to think of any basis for this claim, so it seems instead that this prescriptive rule reflects only the habits and preferences of a particular group at a particular time-and there’s no reason why our usage should be governed by their preferences. In addition, it’s not surprising that the groups that set these rules are usually groups with high prestige or social standing (Labov, 2007). When people strive to follow prescriptive rules, then, it’s often because they hope to join (or, at least, be associated with) these elite groups.
Phrase-structure rules, in contrast, are not prescriptive; they are descriptive rules-that is, rules characterizing the language as it’s ordinarily used by fluent speakers and listeners. There are, after all, strong regularities in the way English is used, and the rules we’re discussing here describe these patterns. No value judgment is offered about whether these patterns constitute “proper” or “good” English. These patterns simply describe how English is structured-or perhaps we should say, what English is.

The Function of Phrase Structure
No one claims that language users are consciously aware of phrase-structure rules. Instead, the idea is that people have somehow internalized these rules and obey the rules in their use of, and judgments about, language.
For example, your intuitions about whether a sentence is well formed or not respect phrase-structure rules-and so, if a sequence of words lacks an element that should, according to the rules, be in place, you’ll probably think there’s a mistake in the sequence. Likewise, you’ll balk at sequences of words that include elements that (according to the rules) shouldn’t be there, or elements that should be in a different position within the string. These points allow us to explain why you think sequences like these need some sort of repair:

“His argument emphasized in modern society”

“Susan or appeared cat in the door.”
Perhaps more important, phrase-structure rules help you understand the sentences you hear or read, because syntax in general specifies the relationships among the words in each sentence. For example, the NP+VP sequence typically divides a sentence into the “doer” (the NP) and some information about that doer (the VP). Likewise, the V + NP sequence usually indicates the action described by the sentence and then the recipient of that action. In this way, the phrase structure of a sentence provides an initial “road map” that’s useful in understanding the sentence. For a simple example, it’s syntax that tells us who’s doing what when we hear “The boy chased the girl.” Without syntax (e.g., if our sentences were merely lists of words, such as “boy, girl, chased”), we’d have no way to know who was the chaser and who (if anyone) was being chased. (Also see Figure 10.8.)
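The “road map” idea can be made concrete with a small sketch (not from the chapter; the grammar and word lists here are mine, chosen just for illustration): a toy recursive-descent parser for the rules S → NP VP, NP → Det N, VP → V NP. The structure it recovers is exactly what tells us who did what to whom.

```python
# Toy phrase-structure parser for a three-rule grammar:
#   S -> NP VP,  NP -> Det N,  VP -> V NP
# The resulting tree separates the "doer" (subject NP) from the action (VP).

DETS = {"the"}
NOUNS = {"boy", "girl"}
VERBS = {"chased"}

def parse_np(words, i):
    """Parse Det + N starting at index i; return (subtree, next index)."""
    if i + 1 < len(words) and words[i] in DETS and words[i + 1] in NOUNS:
        return ("NP", words[i], words[i + 1]), i + 2
    raise ValueError(f"expected a noun phrase at position {i}")

def parse_sentence(words):
    """Parse S -> NP VP, where VP -> V NP."""
    subject, i = parse_np(words, 0)              # the "doer"
    if i >= len(words) or words[i] not in VERBS:
        raise ValueError("expected a verb")
    verb = words[i]
    obj, i = parse_np(words, i + 1)              # the recipient of the action
    if i != len(words):
        raise ValueError("extra words at the end")
    return ("S", subject, ("VP", verb, obj))

tree = parse_sentence("the boy chased the girl".split())
# tree is ('S', ('NP', 'the', 'boy'), ('VP', 'chased', ('NP', 'the', 'girl')))
# -- the structure, not the word list alone, identifies the chaser.
```

A mere bag of words (“boy, girl, chased”) gives this parser nothing to work with, which is the chapter’s point: the syntactic structure carries the who-did-what-to-whom information.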
Sometimes, though, two different phrase structures can lead to the same sequence of words, and if you encounter one of these sequences, you may not know which structure was intended. How will this affect you? We’ve just suggested that phrase structures guide interpretation, and so, with multiple phrase structures available, there should be more than one way to interpret the sentence. This turns out to be correct-often, with comical consequences (see Figure 10.9).
Demonstration 10.6: Ambiguity

Language is a remarkable tool for conveying ideas from one person to another. Sometimes, however, the transmission of ideas doesn’t work quite the way it should. In some cases, the sentences you hear (or read) are unclear or misleading. In other cases, the sentences you encounter are ambiguous, so the meaning you draw from them may be different from the meaning that the speaker (or writer) intended.
Sometimes the ambiguity you encounter concerns just one word: If you hear “Sam is looking for the bank” do you think Sam is looking for the river’s edge or a financial institution? Sometimes the ambiguity is tied to the sentence’s structure: If you hear “I saw a man on a horse wearing armor,” then who is wearing the armor? Is it the man or the horse?
Remarkably, though, ambiguity in language is often overlooked. That’s because we’re all so skilled at picking up the intended meaning of the speaker that we don’t even realize there’s another way we might have interpreted the words. (We’ll return to this point in Chapter 14.) This failure to detect ambiguity is usually a good thing, because if we’re oblivious to the alternative meaning, we won’t be distracted or misled by it. But the failure to detect ambiguity can sometimes be a problem. In many cases, for example, newspapers have printed headlines without realizing that there was more than one way to interpret what they’d printed-leading to considerable embarrassment. In fact, all of the following are actual headlines that were printed in real newspapers. In each case, can you find both ways to interpret each headline?
EYE DROPS OFF SHELF
KIDS MAKE NUTRITIOUS SNACKS
STOLEN PAINTING FOUND BY TREE
DEALERS WILL HEAR CAR TALK AT NOON
MINERS REFUSE TO WORK AFTER DEATH
MILK DRINKERS ARE TURNING TO POWDER
COMPLAINTS ABOUT NBA REFEREES GROWING UGLY
POLICE BEGIN CAMPAIGN TO RUN DOWN JAYWALKERS
GRANDMOTHER OF EIGHT MAKES HOLE IN ONE
HOSPITALS ARE SUED BY 7 FOOT DOCTORS
ENRAGED COW INJURES FARMER WITH AX
SQUAD HELPS DOG BITE VICTIM
HERSHEY BARS PROTEST
Sentence Parsing

A sentence’s phrase structure, we’ve said, conveys crucial information about who did what to whom. Once you know the phrase structure, therefore, you’re well on your way to understanding the sentence. But how do you figure out the phrase structure in the first place? This would be an easy question if sentences were uniform in their structure: “The boy hit the ball. The girl drove the car. The elephant trampled the geraniums.” But, of course, sentences are more variable than this, and this variation makes the identification of a sentence’s phrase structure much more difficult.
How, therefore, do you parse a sentence-that is, figure out each word’s syntactic role? It seems plausible that you’d wait until the sentence’s end, and only then go to work on figuring out the structure. With this strategy, your comprehension might be slowed a little (because you’re waiting for the sentence’s end), but you’d avoid errors, because your interpretation could be guided by full information about the sentence’s content.
It turns out, though, that people don’t use this wait-for-all-the-information strategy. Instead, they parse sentences as they hear them, trying to figure out the role of each word the moment it arrives (e.g., Marcus, 2001; Savova, Roy, Schmidt, & Tenenbaum, 2007; Tanenhaus & Trueswell, 2006). This approach is efficient (since there’s no waiting) but, as we’ll see, can lead to errors.
Garden Paths

Even simple sentences can be ambiguous if you’re open-minded (or perverse) enough:
Mary had a little lamb. (But I was quite hungry, so I had the lamb and also a bowl of soup.)
Time flies like an arrow. (But fruit flies, in contrast, like a banana.)
Temporary ambiguity is also common inside a sentence. More precisely, the early part of a sentence is often open to multiple interpretations, but then the later part of the sentence clears things up. Consider this example:
The old man the ships.
In this sentence, most people read the initial three words as a noun phrase: “the old man.” However, this interpretation leaves the sentence with no verb, so a different interpretation is needed, with the subject of the sentence being “the old” and with “man” being the verb. (Who mans the ships? It is the old, not the young. The old man the ships.) Likewise:
The secretary applauded for his efforts was soon promoted.
Here, people tend to read “applauded” as the sentence’s main verb, but it isn’t. Instead, this sentence is just a shorthand way of answering the question, “Which secretary was soon promoted?” (Answer: “The one who was applauded for his efforts.”)
These examples are referred to as garden-path sentences: You’re initially led to one interpretation (you are, as they say, “led down the garden path”), but this interpretation then turns out to be wrong. So you need to reject your first interpretation and find an alternative. Here are two more examples:
Fat people eat accumulates.
Because he ran the second mile went quickly.
Garden-path sentences highlight the risk attached to the strategy of interpreting a sentence as it arrives: The information you need in order to understand these sentences arrives only late in the sequence, and so, to avoid an interpretive dead end, you’d be better off remaining neutral about the sentence’s meaning until you’ve gathered enough information. That way, you’d know that “the old man” couldn’t be the sentence’s subject, that “applauded” couldn’t be the sentence’s main verb, and so on. But this isn’t what you do. Instead, you commit yourself fairly early to one interpretation and then try to “fit” subsequent words, as they arrive, into that interpretation. This strategy is often effective, but it does lead to the “double-take” reaction when late-arriving information forces you to abandon your initial interpretation (Grodner & Gibson, 2005).

Syntax as a Guide to Parsing
What is it that leads you down the garden path? Why do you initially choose one interpretation of a sentence, one parsing, rather than another? Many cues are relevant, because many types of information influence parsing. For one, people usually seek the simplest phrase structure that will accommodate the words heard so far. This strategy is fine if the sentence structure is indeed simple; the strategy produces problems, though, with more complex sentences. To see how this plays out, consider the earlier sentence, “The secretary applauded for his efforts was soon promoted.” As you read “The secretary applauded,” you had the option of interpreting this as a noun phrase plus the beginning of a separate clause modifying “secretary.” This is the correct interpretation, and it’s required by the way the sentence ends. However, you ignored this possibility, at least initially, and went instead with a simpler interpretation-of a noun phrase plus verb, with no idea of a separate embedded clause.
People also tend to assume that they’ll be hearing (or reading) active-voice sentences rather than passive-voice sentences, so they generally interpret a sentence’s initial noun phrase as the “doer” of the action and not the recipient. As it happens, most of the sentences you encounter are active, not passive, so this assumption is usually correct (for early research, see Hornby, 1974; Slobin, 1966; Svartvik, 1966). However, this assumption can slow you down when you do encounter a passive sentence, and, of course, this assumption added to your difficulties with the “secretary” sentence: The embedded clause there is in the passive voice (the secretary was applauded by someone else); your tendency to assume active voice, therefore, works against the correct interpretation of this sentence.

Not surprisingly, parsing is also influenced by the function words that appear in a sentence and by the various morphemes that signal syntactic role (Bever, 1970). For example, people easily grasp the structure of “He gliply rivitched the flidget.” That’s because the “-ly” morpheme indicates that “glip” is an adverb; the “-ed” identifies “rivitch” as a verb; and “the” signals that “flidget” is a noun-all excellent cues to the sentence structure. This factor, too, is relevant to the “secretary” sentence, which included none of the helpful function words. Notice that we didn’t say, “The secretary who was applauded . . .”; if we had said that, the chance of misunderstanding would have been greatly reduced.
With all these factors stacked against you, it’s no wonder you were (temporarily) confused about “the secretary.” Indeed, with all these factors in place, garden-path sentences can sometimes be enormously difficult to comprehend. For example, spend a moment puzzling over this (fully grammatical) sequence:
The horse raced past the barn fell.
(If you get stuck with this sentence, try adding the word “that” after “horse.”)

Background Knowledge as a Guide to Parsing
Parsing is also guided by background knowledge, and in general, people try to parse sentences in a way that makes sense to them. So, for example, readers are unlikely to misread the headline Drunk Gets Six Months in Violin Case (Gibson, 2006; Pinker, 1994; Sedivy, Tanenhaus, Chambers, & Carlson, 1999). And this point, too, matters for the “secretary” sentence: Your background knowledge tells you that female secretaries are more common than male ones, and this added to your confusion in figuring out who was applauding and who was applauded.
How can we document these knowledge effects? Several studies have tracked how people move their eyes while reading, and these movements can tell us when the reading is going smoothly and when the reader is confused. Let’s say, then, that we ask someone to read a garden-path sentence. The moment the person realizes he has misinterpreted the words so far, he’ll backtrack and reread the sentence’s start, and, with appropriate instruments, we can easily detect these backwards eye movements (MacDonald, Pearlmutter, & Seidenberg, 1994; Trueswell, Tanenhaus, & Garnsey, 1994).
Using this technique, investigators have examined the effects of plausibility on readers’ interpretations of the words they’re seeing. For example, participants might be shown a sentence beginning “The detectives examined . . .”; upon seeing this, the participants sensibly assume that “examined” is the sentence’s main verb and are therefore puzzled when the sentence continues “by the reporter . . .” (see Figure 10.10A). We detect this puzzlement in their eye movements: They pause and look back at “examined,” realizing that their initial interpretation was wrong. Then, after this recalculation, they press onward. Things go differently, though, if the sentence begins “The evidence examined . . .” (see Figure 10.10B). Here, readers can draw on the fact that “evidence” can’t examine anything, so “examined” can’t be the sentence’s main verb. As a result, they’re not surprised when the sentence continues “by the reporter . . .” Their understanding of the world had already told them that the first three words were the start of a passive sentence, not an active one. (Also see Figures 10.11 and 10.12.)
The Extralinguistic Context

We’ve now mentioned several strategies that you use in parsing the sentences you encounter. The role of these strategies is obvious when the strategies mislead you, as they do with garden-path sentences. Bear in mind, though, that the same strategies are used for all sentences and usually do lead to the correct parsing.
It turns out, however, that our catalogue of strategies isn’t complete, because you also make use of another factor: the context in which you encounter sentences, including the conversational context. For example, the garden-path problem is much less likely to occur in the following setting:
Jack: Which horse fell?
Kate: The horse raced past the barn fell.
Just as important is the extralinguistic context-the physical and social setting in which you encounter sentences. To see how this factor matters, consider the following sentence:
Put the apple on the towel into the box.
At its start, this sentence seems to be an instruction to put an apple onto a towel; this interpretation must be abandoned, though, when the words “into the box” arrive. Now, you realize that the box is the apple’s destination; “on the towel” is simply a specification of which apple is to be moved. (Which apple should be put into the box? The one that’s on the towel.) In short, this is another garden-path sentence-initially inviting one analysis but eventually requiring another.
This confusion is avoided, however, if the sentence is spoken in the appropriate setting. Imagine that two apples are in view, as shown in Figure 10.13. In this context, a listener hearing the sentence’s start (“Put the apple . . .”) would immediately see the possibility for confusion (which apple is being referred to?) and so would expect the speaker to specify which one is to be moved. Therefore, when the phrase “on the towel” comes along, the listener immediately (and correctly) understands it as the needed specification. There is no confusion and no garden path (Eberhard, Spivey-Knowlton, Sedivy, & Tanenhaus, 1995; Tanenhaus & Spivey-Knowlton, 1996).
Prosody

One other cue is also useful in parsing: the rise and fall of speech intonation and the pattern of pauses. These pitch and rhythm cues, together called prosody, can communicate a great deal of information. Prosody can, for example, reveal the mood of a speaker; it can also direct the listener’s attention by specifying the focus or theme of a sentence (Jackendoff, 1972; also see Kraus & Slater, 2016). Consider the simple sentence “Sol sipped the soda.” Now, imagine how you’d pronounce this sentence in response to each of these questions: “Was it Sue who sipped the soda?”; “Did Sol gulp the soda?”; or “Did Sol sip the soup?” You’d probably say the same words (“Sol sipped the soda”) in response to each of these queries, but you’d adjust the prosody in order to highlight the information crucial for each question. (Try it. Imagine answering each question and pay attention to how you shift your pronunciation.)
Prosody can also render unambiguous a sentence that would otherwise be entirely confusing (Beach, 1991). This is why printed versions of garden-path sentences, and ambiguous sentences in general, are more likely to puzzle you, because in print prosody provides no information. Imagine, therefore, that you heard the sentence “The horse raced past the barn fell.” The speaker would probably pause momentarily between “horse” and “raced,” and again between “barn” and “fell,” making it likely that you’d understand the sentence with no problem. As a different example, consider two objects you might buy for your home. One is a small box designed as a house for bluebirds. The other is a small box that can be used by any type of bird, and the box happens to be painted blue. In print, we’d call the first of these a “bluebird house,” and the second a “blue birdhouse.” But now, pronounce these phrases out loud, and you’ll notice how prosody serves to distinguish these two structures.
Some aspects of prosody depend on the language being spoken, and even on someone’s dialect within a language. Other prosodic cues-especially cues that signal the speaker’s emotions and attitudes-seem to be shared across languages. This point was noted more than a century ago by Charles Darwin (1871) and has been amply confirmed in the years since then (e.g., Bachorowski, 1999; Pittam & Scherer, 1993).

Pragmatics

What does it mean to “know a language”-to “know English,” for example? It should be clear by now that the answer has many parts. Any competent language user needs somehow to know (and obey) a rich set of rules about how (and whether) elements can be combined. Language users rely on a further set of principles whenever they perceive and understand linguistic inputs. Some of these principles are rooted in syntax; others depend on semantics (e.g., knowing that detectives can “examine” but evidence can’t); still others depend on prosody or on the extralinguistic context. All these factors then seem to interact, so that your understanding of the sentences you hear (or see in print) is guided by all these principles at the same time.
These points, however, still understate the complexity of language use and, with that, the complexity of the knowledge someone must have in order to use a language. This point becomes clear when we consider language use at levels beyond the hierarchy shown in Figure 10.1-for example, when we consider language as it’s used in ordinary conversation. As an illustration, consider the following bit of dialogue (after Pinker, 1994; also see Gernsbacher & Kaschak, 2013; Graesser & Forsyth, 2013; Zwaan, 2016):
Woman: I’m leaving you.
Man: Who is he?
You easily provide the soap-opera script that lies behind this exchange, but you do so by drawing on a fabric of additional knowledge-in this case, knowledge about the vicissitudes of romance. Likewise, in Chapter 1 we talked about the importance of background knowledge in your understanding of a simple story. (It was the story that began, “Betsy wanted to bring Jacob a present.”) There, too, your understanding depended on your providing a range of facts about gift-giving, piggy banks, and more. Without those facts, the story would have been incomprehensible.

Your use of language also depends on your assumptions about how, in general, people communicate with each other-assumptions that involve the pragmatics of language use. For example, if someone asks, “Do you know the time?” you understand this as a request that you report the time-even though the question, understood literally, is a yes/no question about the extent of your temporal knowledge. What do the pragmatics of language-that is, your knowledge of how language is ordinarily used-actually involve? Many years ago, philosopher Paul Grice described the conversational “rules” in terms of a series of maxims that speakers follow and listeners count on (Grice, 1989). The “maxim of relation,” for example, says that speakers should say things that are relevant to the conversation. For example, imagine that someone asks, “What happened to the roast beef?” and gets a reply, “The dog sure looks happy.” Here, your assumption of relevance will most likely lead you to infer that the dog must have stolen the meat. Likewise, the “maxim of quantity” specifies that a speaker shouldn’t be more informative than is necessary. On this point, imagine that you ask someone, “What color are your eyes?” and he responds, “My left eye is blue.” The extra detail here invites you to assume that the speaker specified “left eye” for a reason-and so you’ll probably infer that the person’s right eye is some other color. In these ways, listeners count on speakers to be cooperative and collaborative, and speakers proceed knowing that listeners make these assumptions. (For more on the collaborative nature of conversation and the assumptions that conversational partners make, see Andor, 2011; Clark, 1996; Davis & Friedman, 2007; Graesser, Millis, & Zwaan, 1997; Holtgraves, 2002; Noveck & Reboul, 2008; Noveck & Sperber, 2005.)

Demonstration 10.7: Language Use in Jokes

Language is an extraordinary tool.
We use it to communicate information, to make inquiries, to share our feelings, to express our intentions, and much, much more. But we also use language to tell jokes.
One type of joke depends on the lowest level of the hierarchy shown in Figure 10.1; these jokes play with the sounds of language, and so these jokes work in spoken form, but they work less well (if at all) in print.
“Becoming a vegetarian can be a missed steak.”
“Once you’ve seen one shopping center, you’ve seen the mall.”
Notice also that these jokes play with the processes of speech perception-so that, for example, you need to re-segment “them all” as “the mall.”
Other jokes also play with the sounds of language, but they rely on the fact that your enormous skill in understanding language allows you to make sense of sounds that aren’t quite right. Consider, for example, all of the answers to this question: “Did you hear about the Italian chef who was thrown out of cooking school because he failed the exam?”
“He couldn’t pasta test.”
“He cannoli do so much.”
“His failure will become a pizza history.”
“Here today, gone tomato.”
“You never sausage a sad thing.”
Other jokes depend on the morpheme level and on your “taking apart” a word and rethinking how the elements are related to each other.
“Whiteboards are remarkable.”
“What do you call an alligator in a vest? An investigator.”
Still other jokes depend on the ambiguity of intact single words-but with the reinterpretation of the word often forcing a reinterpretation of the entire sentence.
“Santa’s helpers are subordinate Clauses.”

“An E-Flat, a G-Flat, and a B-Flat walked into a bar. The bartender said, ‘I’m sorry, but we don’t serve minors.’”
“Once the magician used trapdoors in every act, but he stopped. It was, it seems, just a stage he was going through.”
And still more jokes play with the rules of conversation-usually, by deliberately breaking the rules. Related, some jokes play with the “standard” pattern for jokes-often, by saying something so pointless that it’s funny.
“Knock knock.” “Who’s there?” “Spell.” “Spell who?” “Okay, I will. W. H. O.”
“What does Tarzan say when he sees a herd of elephants in the distance?”
“Look, a herd of elephants in the distance.”
“What does Tarzan say when he sees a herd of elephants wearing sunglasses?”
“Nothing. He doesn’t recognize them.”
“How do you get an elephant into the refrigerator?”
“Open the door. Insert the elephant. Close the door.”
Do you think that jokes at one of these levels are more effective-that is, funnier!-than jokes at the other levels? Do you have a favorite joke? If so, what aspects of language does it exploit?

The Biological Roots of Language

Each of us uses language all the time-to learn, to gossip, to instruct, to persuade, to warn, to express affection. We use this tool as easily as we breathe; we spend far more effort in choosing our clothes in the morning than we do in choosing the words we will speak. But these observations must not hide the facts that language is a remarkably complicated tool and that we are all exquisitely skilled in its use.
How is all of this possible? How is it that ordinary human beings-even ordinary two-and-a-half-year-olds-manage the extraordinary task of mastering and fluently using language? According to many authors, the answer lies in the fact that humans are equipped with sophisticated neural machinery specialized for learning, and then using, language. Let’s take a quick look at this machinery.

Aphasias

As we described at the chapter’s start, damage to specific parts of the brain can cause a disruption of language known as aphasia. Damage to the brain’s left frontal lobe, especially a region known as Broca’s area (see Figure 10.14), usually produces a pattern of symptoms known as nonfluent aphasia. People with this disorder can understand language they hear but cannot write or speak. In extreme cases, a patient with this disorder cannot utter any words at all. In less severe cases, only part of the patient’s vocabulary is lost, but the patient’s speech becomes labored and fragmented, and articulating each word requires special effort. One early study quoted a patient with aphasia as saying, “Here . . . head . . . operation . . . here . . . speech . . . none . . . what . . . illness” (Luria, 1966, p. 406).
Different symptoms are associated with damage to a brain site known as Wernicke’s area (again see Figure 10.14). Patients with this sort of damage usually suffer from a pattern known as fluent aphasia. These patients can talk freely, but they say very little. One patient, for example, uttered, “I was over the other one, and then after they had been in the department, I was in this one” (Geschwind, 1970, p. 904). Or another patient: “Oh, I’m taking the word the wrong way to say, all of the barbers here whenever they stop you it’s going around and around, if you know what I mean, that is tying and tying for repucer, repuceration, welll, we were trying the best that we could while another time it was with the beds over there the same thing” (Gardner, 1974, p. 68).
This distinction between fluent and nonfluent aphasia, however, captures the data only in the broadest sense. One reason lies in the fact that-as we’ve seen-language use involves the coordination of many different steps, many different processes. These include processes needed to “look up” word meanings in your “mental dictionary,” processes needed to figure out the structural relationships within a sentence, processes needed to integrate information about a sentence’s structure with the meanings of the words within the sentence, and so on. Each of these processes relies on its own set of brain pathways, so damage to those pathways disrupts the process. As a result, the language loss in aphasia can sometimes be quite specific, with impairment just to a specific processing step (Cabeza & Nyberg, 2000; Demonet, Wise, & Frackowiak, 1993; Martin, 2003).

Even with these complexities, the point here is that humans have a considerable amount of neural tissue that is specialized for language. Damage to this tissue can disrupt language understanding, language production, or both. In all cases, though, the data make it clear that our skill in using language rests in part on the fact that we have a lot of neural apparatus devoted to precisely this task.

The Biology of Language Learning
The biological roots of language also show up in another manner-in the way that language is learned. This learning occurs remarkably rapidly, and so, by the age of 3 or 4, almost every child is able to converse at a reasonable level. Moreover, this learning can proceed in an astonishingly wide range of environments. Children who talk a lot with adults learn language, and so do children who talk very little with adults. In fact, children learn language even if their communication with adults is strictly limited. Evidence on this last point comes from children who are born deaf and have no opportunity to learn sign language. (In some cases, this is because their caretakers don’t know how to sign; in other cases, it’s because their caretakers choose not to teach signing.) Even in these extreme cases, language emerges: Children in this situation invent their own gestural language (called “home sign”) and teach the language to the people in their surroundings (Feldman, Goldin-Meadow, & Gleitman, 1978; Goldin-Meadow, 2003, 2017; Senghas, Román, & Mavillapalli, 2006).

How should we think about this? According to many psychologists, the answer lies in highly sophisticated learning capacities that have specifically evolved for language learning. Support for this claim comes from many sources, including observations of specific language impairment (SLI). Children with this disorder have normal intelligence and no problems with the muscle movements needed to produce language. Nonetheless, they are slow to learn language and, throughout their lives, have difficulty in understanding and producing many sentences. They are also impaired on tasks designed to test their linguistic knowledge. They have difficulty, for example, completing passages like this one: “I like to blife. Today I blife. Tomorrow I will blife. Yesterday I did the same thing. Yesterday I _______.”
Most 4-year-olds know that the answer is “Yesterday I blifed.” But adults with SLI cannot do this task-apparently having failed to learn the simple rule of language involved in forming the past tense of regular verbs (Bishop & Norbury, 2008; Lai, Fisher, Hurst, Vargha-Khadem, & Monaco, 2001; van der Lely, 2005; van der Lely & Pinker, 2014).
Claims about SLI remain controversial, but many authors point to this disorder as evidence for brain mechanisms that are somehow specialized for language learning. Disruption to these mechanisms throws language off track but seems to leave other aspects of the brain’s functioning undisturbed.

The Processes of Language Learning
Even with these biological contributions, there’s no question that learning plays an essential role in the acquisition of language. After all, children who grow up in Paris learn to speak French; children who grow up in Beijing learn to speak Chinese. In a rather obvious way, language learning depends on the child’s picking up information from her environment.
But what learning mechanisms are involved here? Part of the answer rests on the fact that children are exquisitely sensitive to patterns and regularities in what they hear, as though each child were an astute statistician, keeping track of the frequency-of-occurrence of this form or that. In one study, 8-month-old infants heard a 2-minute recording that sounded something like “bidakupadotigolabubidaku.” These syllables were spoken in a monotonous tone, with no difference in stress from one syllable to the next and no pauses in between the syllables. But there was a pattern. The experimenters had decided in advance to designate the sequence “bidaku” as a word. Therefore, they arranged the sequences so that if the infant heard “bida,” then “ku” was sure to follow. For other syllables, there was no such pattern. For instance, “daku” (the end of the nonsense word “bidaku”) would sometimes be followed by “go,” sometimes by “pa,” and so on. The babies reliably detected these patterns. In a subsequent test, babies showed no surprise if they heard the string “bidakubidakubidaku.” From the babies’ point of view, these were simply repetitions of a word they already knew. However, the babies showed surprise if they were presented with the string “dakupadakupadakupa.” This wasn’t a “word” they had heard before, although they had heard each of its syllables many times. It seems, then, that the babies had learned the “vocabulary” of this made-up language. They had detected the statistical pattern of which syllables followed which, despite their brief, passive exposure to these sounds and despite the absence of any supporting cues such as pauses or shifts in intonation (Aslin, Saffran, & Newport, 1998; Marcus, Vijayan, Rao, & Vishton, 1999; Saffran, 2003; Xu & Garcia, 2008).
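The statistic the infants seem to be tracking is the transitional probability between syllables: within a “word” each syllable perfectly predicts the next, while across word boundaries the next syllable varies. A brief sketch (the syllable stream and function names here are mine, not the study’s materials) shows how this statistic separates words from boundaries:

```python
# Compute P(next syllable | current syllable) from a continuous stream,
# the transitional probability that distinguishes word-internal pairs
# (always the same successor) from word-boundary pairs (varied successors).

from collections import Counter

def transition_probs(syllables):
    """Map each adjacent pair (a, b) to P(b follows a)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from the "words" bidaku, padoti, golabu in varying order:
stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti bi da ku".split()
probs = transition_probs(stream)

# Within a word, the transition is perfectly predictable:
print(probs[("bi", "da")])   # 1.0 -- "da" always follows "bi"
# Across a word boundary, it is not:
print(probs[("ku", "pa")])   # 0.5 -- "ku" is followed by different syllables
```

Dips in transitional probability mark candidate word boundaries, which is one way to understand how passive exposure alone could yield the “vocabulary” the babies learned.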
In addition, children don’t just detect patterns in the speech they hear. Children also seem to derive broad principles from what they hear. Consider, for example, how English-speaking children learn to form the past tense. Initially, they proceed in a word-by-word fashion, memorizing each form, so that the past tense of “play” is “played,” the past tense of “climb” is “climbed,” and so on. By age 3 or so, however, children seem to realize that they don’t have to memorize each word’s past tense as a separate vocabulary item. Instead, they realize they can produce the past tense by manipulating morphemes-that is, by adding the “-ed” ending onto a word. This is, of course, an important discovery for children, because this principle allows them to generate the past tense for many new verbs, including verbs they’ve never encountered before. However, children over-rely on this pattern, and their speech at this age contains overregularization errors: They say things like “Yesterday we goed” or “Yesterday I runned.” The same thing happens with other morphemes, so that children of this age also overgeneralize their use of the plural ending-they say things like “I have two foots” or “I lost three tooths” (Marcus et al., 1992). They also generalize the use of contractions; having heard “she isn’t” and “you aren’t,” they say things like “I amn’t.”
It seems, then, that children (even young infants) are keenly sensitive to patterns in the language that they’re learning, and they’re able to figure out the (sometimes rather abstract) principles that govern these patterns. In addition, language learning relies on a theme that has been in view throughout this chapter: Language has many elements (syntax, semantics, phonology, prosody, etc.), and these elements interact in ordinary language use (so that you rely on a sentence’s syntactic form to figure out its meaning; you rely on semantic cues in deciphering the syntax). In the same way, language learning also relies on all these elements in an interacting fashion. For example, children rely on prosody (the rise and fall of pitch, the pattern of timing) as clues to syntax, and adults speaking to children helpfully exaggerate these prosodic signals, easing the children’s interpretive burden. Children also rely on their vocabulary, listening for words they already know as clues helping them to process more complex strings. Likewise, children rely on their knowledge of semantic relationships as a basis for figuring out syntax-a process known as semantic bootstrapping (Pinker, 1987). In this way, the very complexity of language is both a burden for the child (because there’s so much to learn in “learning a language”) and an aid (because the child can use each element as a source of information in trying to figure out the other elements).
Animal Language We suggested earlier that humans are biologically prepared for language learning, and this claim has many implications. Among other points, can we locate the genes that underlie this preparation? Many researchers claim that we can, and they point to a gene called “FOXP2” as crucial; people who have a mutated form of this gene are markedly impaired in their language learning (e.g., Vargha-Khadem, Gadian, Copp, & Mishkin, 2005).
As a related point, if language learning is somehow tied to human genetics, then we might expect not to find language capacity in other species. Of course, many species do have sophisticated communication systems, including the songs and clicks of dolphins and whales, the dances of honeybees, and the various alarm calls of monkeys. These naturally occurring systems, however, are extremely limited-with small vocabularies and little (or perhaps nothing) that corresponds to the rules of syntax that are evident in human language. These systems will certainly not support the sort of generativity that is a prominent feature of human language-and so these other species don’t have anything approaching our capacity to produce or understand an unending variety of new sentences.
Perhaps, though, these naturally occurring systems understate what animals can do. Perhaps animals can do more if only we help them a bit. To explore this issue, researchers have tried to train animals to use more sophisticated forms of communication. Some researchers have tried to train dolphins to communicate with humans; one project involved an African grey parrot; other projects have focused on primates-asking what a chimpanzee, gorilla, or bonobo might be capable of. The results from these studies are impressive, but it’s notable that the greatest success involves animals that are quite similar to humans genetically (e.g., Savage-Rumbaugh & Lewin, 1994; Savage-Rumbaugh & Fields, 2000). For example, Kanzi, a male bonobo, seems to understand icons on a keyboard as symbols that refer to other ideas, and he also has some mastery of syntax-so he responds differently and (usually) appropriately, using stuffed animals, to the instructions “Make the doggie bite the snake” or “Make the snake bite the doggie.” Kanzi’s abilities, though, emerged only after an enormous amount of careful training, and they remain way below those of the average 3- or 4-year-old human who has received no explicit language training. (For example, as impressive as Kanzi is, he hasn’t mastered the distinction between present, past, and future tense, although every human child effortlessly learns this basic aspect of language.) Therefore, it seems that other species (especially those closely related to us) can learn the rudiments of language, but nothing in their performance undercuts the amazing differences between human language capacity and that in other organisms. “Wolf Children” Before moving on, we should address one last point-one that concerns the limits on our “biological preparation” for language. To put the matter simply, our human biology gives us a fabulous start on language learning, but to turn this “start” into “language capacity,” we also need a communicative partner.
In 1920, villagers in India discovered a wolf mother in her den together with four cubs. Two were baby wolves, but the other two were human children, subsequently named Kamala and Amala. No one knows how they got there or why the wolf adopted them. Roger Brown (1958) tells us what these children were like:
Kamala was about eight years old and Amala was only one and one-half. They were thoroughly wolfish in appearance and behavior: Hard callus had developed on their knees and palms from going on all fours. Their teeth were sharp edged. They moved their nostrils sniffing food. Eating and drinking were accomplished by lowering their mouths to the plate. They ate raw meat. . . . At night they prowled and sometimes howled. They shunned other children but followed the dog and cat. They slept rolled up together on the floor. . . . Amala died within a year but Kamala lived to be eighteen. . . . In time, Kamala learned to walk erect, to wear clothing, and even to speak a few words. (p. 100)
The outcome was similar for the 30 or so other wild children for whom researchers have evidence. When found, they were all shockingly animal-like. None could be rehabilitated to use language normally, although some (like Kamala) did learn to speak a few words.
Of course, the data from these wild children are difficult to interpret, partly because we don’t know why the children were abandoned in the first place. (Is it possible that these children were abandoned because their human parents detected some birth defect? If so, these children may have been impaired in their functioning from the start.) Nonetheless, the consistency of these findings underscores an important point: Language learning may depend on both a human genome and a human environment. Language and Thought Virtually every human knows and uses a language. But it’s also important that people speak different languages-for example, some of us speak English, others German, and still others Abkhaz or Choctaw or Kanuri or Quanzhou. How do these differences matter? Is it possible that people who speak different languages end up being different in their thought processes? Linguistic Relativity The notion that language shapes thought is generally attributed to the anthropologist Benjamin Whorf and is often referred to as the “Whorfian hypothesis.” Whorf (e.g., 1956) argued that the language you speak forces you into certain modes of thought. He claimed, therefore, that people who speak different languages inevitably think differently-a claim of linguistic relativity.
To test this claim, one line of work has examined how people perceive colors, building on the fact that some languages have many terms for colors (red, orange, mauve, puce, salmon, fawn, ocher, etc.) and others have few (see Figure 10.15). Do these differences among languages affect perception? Evidence suggests, in fact, that people who speak languages with a richer color vocabulary may perceive colors differently-making finer and more sharply defined distinctions (Özgen, 2004; Roberson, Davies, & Davidoff, 2000; Winawer et al., 2007). Other studies have focused on other ways in which languages differ. Some languages, for example, emphasize absolute directions (terms like the English words “east” or “west” that are defined independently of which way the speaker is facing at the moment). Other languages emphasize relative directions (words like “right” or “left” that do depend on which way the speaker is facing). Research suggests that these language differences can lead to corresponding differences in how people remember-and perhaps how they perceive-position (Majid, Bowerman, Kita, Haun, & Levinson, 2004; Pederson et al., 1998).
Languages also differ in how they describe events. In English, we tend to use active-voice sentences that name the agent of the action, even if the action was accidental (“Sam made a mistake”). It sounds awkward or evasive to describe these events in other terms (“Mistakes were made”). In other languages, including Japanese or Spanish, it’s common not to mention the agent for an accidental event, and this in turn can shape memory: After viewing videos of accidental events, Japanese and Spanish speakers are less likely than English speakers to remember the person who triggered the accident (Boroditsky, 2011). How should we think about all these results? One possibility-in line with Whorf’s original hypothesis-is that language has a direct impact on cognition, so that the categories recognized by your language become the categories used in your thought. In this view, language has a unique effect on cognition (because no other factor can shape cognition in this way), and because language’s influence is unique, it is also irreversible: Once your language has led you to think in certain ways, you will forever think in those ways. From this perspective, therefore, there are literally some ideas that, say, a Japanese speaker can contemplate but that an English speaker cannot, and vice versa-and likewise, say, for a Hopi or a French speaker.
A different possibility is more modest-and also more plausible: The language you hear guides what you pay attention to, and what you pay attention to shapes your thinking. In this view, language does have an influence, but the influence is indirect: The influence works via the mechanisms of attention. Why is this distinction (direct effect vs. indirect effect) important? The key is that other factors can also guide your attention, with the result that in many settings these factors will erase any impact that language might have. Put differently, the idea here is that your language might bias your attention in one way, but other factors will bias your attention in the opposite way-canceling out language’s impact. On this basis, the effects of language on cognition might easily be reversible, and certainly not as fundamental as Whorf proposed. To see how this point plays out, let’s look at a concrete case. We’ve mentioned that when English speakers describe an event, our language usually requires that we name (and so pay attention to) the actor who caused the event; when a Spanish speaker describes the same event, her language doesn’t have this requirement, and so it doesn’t force her to think about the actor. In this way, the structure of each language influences what the person will pay attention to, and the data tell us that this difference in focus has consequences for thinking and for memory.
But we could, if we wished, simply give the Spanish speaker an instruction: “Pay attention to the actor.” Or we could make sure that the actor is wearing a brightly colored coat, using a perceptual cue to guide attention. These simple steps can (and often do) offset the bias created by language. The logic is similar for the effect of language on color perception. If you’re a speaker of Berinmo (a language spoken in New Guinea), your language makes no distinction between “green” and “blue,” so it never leads you to think about these as separate categories. If you’re an English speaker, your language does make this distinction, and this can draw your attention to what all green objects have in common and what all blue objects have in common. If your attention is drawn to this point again and again, you’ll gain familiarity with the distinction and eventually become better at making the distinction. Once more, therefore, language does matter-but it matters because of language’s impact on attention. Again, let’s be clear on the argument here: If language directly and uniquely shapes thought, then the effects of language on cognition will be systematic and permanent. But the alternative is that it’s your experience that shapes thought, and your experience depends on what you pay attention to, and (finally) language is just one of the many factors guiding what you pay attention to. On this basis, the effects of language may sometimes be large, but can be offset by a range of other influences. (For evidence, see Boroditsky, 2001; and then Chen, 2007, or Abarbanell, Gleitman, & Papafragou, 2011; Li & Gleitman, 2002.)
More than a half-century ago, Whorf argued for a strong claim-that the language people speak plays a unique role in shaping their thought and has a lifelong impact, determining what they can or cannot think, and what ideas they can or cannot consider. There is an element of truth here, because language can and does shape cognition. But language’s impact is neither profound nor permanent, and there is no reason to accept Whorf’s ambitious proposal. (For more on these issues, see Gleitman & Papafragou, 2012; Hanako & Smith, 2005; Hermer-Vazquez, Spelke, & Katsnelson, 1999; January & Kako, 2007; Kay & Regier, 2007; Özgen & Davies, 2002; Papafragou, Li, Choi, & Han, 2007; Stapel & Semin, 2007. Also see Li & Gleitman, 2002.) Bilingualism
There’s one more-and intriguing-way that language is said to influence cognition. It comes from cases in which someone knows more than one language.
Children raised in bilingual homes generally learn both languages quickly and well (Kovelman, Shalinsky, Berens, & Petitto, 2008). Bilingual children do tend to have smaller vocabularies, compared to monolingual children, but this contrast is evident only at an early age, and bilingual children soon catch up on this dimension (Bialystok, Craik, Green, & Gollan, 2009).
These findings surprise many people, on the expectation that bilingual children would become confused-blurring together their languages and getting mixed up about which words and which rules belong in each language. But this confusion seems not to occur. In fact, children who are raised bilingually seem to develop skills that specifically help them avoid this sort of confusion-so that they develop a skill of (say) turning off their French-based habits in this setting so that they can speak uncompromised English, and then turning off their English-based habits in that setting so that they can speak fluent French. This skill obviously supports their language learning, but it may also help them in other settings. (See Bialystok et al., 2009; Calvo & Bialystok, 2013; Engel de Abreu, Cruz-Santos, Tourinho, Martin, & Bialystok, 2012; Hernández, Costa, & Humphreys, 2012; Hilchey & Klein, 2011; Kroll, Bobb, & Hoshino, 2014; Pelham & Abrams, 2014; Zelazo, 2006.) In Chapter 5 we introduced the idea of executive control, and the suggestion here is that being raised bilingually may encourage better executive control. As a result, bilinguals may be better at avoiding distraction, switching between competing tasks, or holding information in mind while working on some other task. There has, however, been considerable debate about these findings, and not all experiments find a bilingual advantage in executive control. (See, e.g., Bialystok & Grundy, 2018; Costa, Hernández, Costa-Faidella, & Sebastián-Gallés, 2009; de Bruin, Treccani, & Della Sala, 2015; Goldsmith & Morton, 2018; Von Bastian, Souza, & Gade, 2016; Zhou & Krott, 2016.) There is some suggestion that this advantage only emerges with certain tasks or in certain age groups (perhaps in children, but not adults). There is also some indication that other forms of training can improve executive control-and so bilingualism may be just one way to achieve this goal.
Obviously, further research is needed in this domain, especially since the alleged benefits of bilingualism have important implications-for public policy, for education, and for parenting. These implications become all the more intriguing when we bear in mind that roughly a fifth of the population in the United States speaks a language at home that is different from the English they use in other settings; the proportion is even higher in some states, including California, Texas, New Mexico, and Nevada (Shin & Kominski, 2010). These points aside, though, research on bilingualism provides one more (and perhaps surprising) arena in which scholars continue to explore the ways in which language use may shape cognition. COGNITIVE PSYCHOLOGY AND EDUCATION writing Students are often required to do a lot of writing-for example, in an essay exam or a term paper. Can cognitive psychology provide any help in this activity-specifically, helping you to write more clearly and more persuasively?
Research tells us that people usually have an easier time understanding active sentences than passive, and so (all things being equal) active sentences are preferable. We also know that people approach a sentence with certain parsing strategies, and that’s part of the reason why sentences are clearer if the structure of the sentence is laid out early, with the details following, rather than the other way around. Some guidelines refer to this as an advantage for “right-branching sentences” rather than “left-branching sentences.” The idea here is that the “branches” represent the syntactic and semantic complexity, and you want that complexity to arrive late, after the base structure is established.
By the same logic, lists are easier to understand if they arrive late in the sentence (“I went to the store with Juan, Fred, George, Sue, and Judy”), so that they can be fitted into the structure, rather than arriving early (“Juan, Fred, George, Sue, Judy, and I went to the store”), before the structure is established. Readers are also helped by occasional words or phrases that signal the flow of ideas in the material they’re reading. Sentences that begin “In contrast,” or “Similarly,” or “However,” provide the reader with some advance warning about what’s coming up and how it’s related to the ideas covered so far. This warning, in turn, makes it easier for the reader to see how the new material fits into the framework established up to that point. The warning also requires the writer to think about these relationships, and often that encourages the writer to do some fine-tuning of the sequence of sentences!
In addition, it’s important to remember that many people speak more clearly than they write, and it’s interesting to ask why this is so. One reason is prosody-the pattern of pitch changes and pauses that we use in speaking. These cannot be reproduced in writing-although prosodic cues can sometimes be mimicked by the use of commas (to indicate pauses) or italics (to indicate emphasis). These aspects of print can certainly be overused, but they are in all cases important, and writers should probably pay more attention to them than they do-in order to gain in print some of the benefits that (in spoken language) are provided by prosody.
But how should you use these cues correctly? One option is to rely on the fact that as listeners and speakers we all know how to use prosodic cues, and we can exploit that knowledge when we write by means of a simple trick: reading your prose out loud. If you see a comma on the page but you’re not inclined, as a speaker, to pause at that moment, then the comma is probably unnecessary. Conversely, if you find yourself pausing as you read aloud but there’s no comma, then you may need one. Another advantage of spoken communication, as opposed to written, is the prospect of immediate feedback. If you say something that isn’t clear, your conversation partner may frown, look confused, or say something to indicate misunderstanding. What can take the place of this feedback when you’re writing? As one option, it’s almost always useful to have someone (a friend, perhaps) read over your drafts; this peer editing can often catch ambiguities, absence of clarity, or absence of flow that you might have missed on your own. Even without a peer editor, you can gain some of the same benefits from reading your own prose out loud. Some studies suggest that reading your own prose out loud helps you to gain some distance from it that you might not have with ordinary (silent) reading, so that you can, at least in a rough way, provide your own peer editing. As a related point, we know that people routinely skip words when they’re reading (this was important in the Chapter 4 discussion of speed-reading). The skipping helps when you’re reading, but it’s a problem when you’re editing your own prose (how can you edit words that you didn’t even see?). It’s helpful, therefore, that the skipping is less likely when you read the prose out loud-another advantage of reading aloud when you’re checking on your own writing. Finally, many people shift into a different style of expressing themselves when they’re writing.
Maybe they’re convinced that they need some degree of formality in their written expression. Maybe they’re anxious while writing, and this stiffens their prose. Or maybe they’re trying to impress the reader, so they deliberately reach for more complex constructions and more obscure vocabulary. Whatever the reason for these shifts, they are often counterproductive and make your writing less clear, wordier, and stiffer than your ordinary (spoken) expression. Part of the cure here is to abandon the idea that complex and formal prose is better prose. And part of the cure lies in peer editing or reading aloud. In either case, the question to ask is this: Would you express these ideas more clearly, more simply, if you were speaking them rather than writing them? Often, this will lead you to better writing.
Will these simple suggestions improve every writer? Probably not. Will these suggestions take obscure, fractured prose and lift it to a level that makes you eligible for a Pulitzer Prize? Surely not. Even so, the suggestions offered here may well help you in some ways, and for anyone’s writing, any source of improvement should be welcome! COGNITIVE PSYCHOLOGY AND THE LAW remembering conversation Many legal cases hinge on a witness reporting details of how a conversation unfolded. Did the shooter say, “Die, fool!” just before pulling the trigger? If so, this provides evidence that the murder was (at least briefly) premeditated. Did Fred tell Jim about the new product when they were together at the business meeting? If so, this might indicate that Jim is guilty of insider trading.
How accurate is memory for conversation? The answer reflects two themes. First, in most conversations, people have no reason to care about the exact sequence of words or how exactly the ideas are expressed. Instead, what people usually care about and pay attention to is the “propositional content” of the conversation- the underlying gist.
If people are focused on the gist and remember what they’ve paid attention to, then we should expect them to remember the gist and not the wording. We should expect, in other words, that people will have difficulty recalling how utterances were phrased or how exactly a conversation unfolded. Second, in most conversations, a lot of information goes unsaid, but the people involved effortlessly make inferences to fill in the bits that aren’t overtly expressed. This filling-in is made possible by the fact that participants in a conversation draw on their “common ground”-that is, beliefs shared by the individuals taking part in that conversation. When there is no common ground the risk of misunderstanding is increased.
Again, note the implication. We’ve just said that the gist of a conversation depends on inferences and assumptions. We’ve also said that people tend to remember the gist of a conversation, not the actual spoken words. Putting these pieces together, we’re led to the expectation that people will often blur together in their recall the propositions that were actually expressed and those that were merely implied. Said differently, people will often recall fragments of conversation that weren’t actually spoken at all.
A range of evidence supports these claims. In one study, mothers had a conversation with their 4-year-old daughters and then, three days later, were asked to recall exactly how the conversation had unfolded. The mothers’ memories for the exact words spoken were quite poor, and they had difficulty recalling (among other points) whether their child’s statements were spontaneous or prompted by specific questions. They even had difficulty recalling which utterances were spoken by them and which ones by the children. This study (and others making the same point) is worrisome for law enforcement. In evaluating a child’s report, we want to know if it was spontaneous or prompted; we want to know if the details in the report came from the child or were suggested by an adult. Unfortunately, evidence suggests that memory for these crucial points is often inaccurate. Here’s a different line of evidence, with its own implications for the justice system. Memory for conversations, we’ve said, usually includes many points that were actually absent from the conversation but were instead supplied by the conversational participants, based on the “common ground” they brought to the conversation. But what happens if there is no common ground-if the individuals bring different assumptions to the conversation?
Think about the “conversation” that takes place between a police detective and a suspect in an interrogation room. The suspect’s statement will be inadmissible at trial if the detectives make any promises or threats during the interrogation, but detectives can (and often do) imply promises or threats. The problem with this lies in the fact that the investigator and the suspect are “playing by different rules.” The investigator’s perspective emphasizes the specific words spoken (words that don’t contain an overt promise or threat). The suspect, in contrast, will likely adopt a perspective that’s sensible in most settings-one that (as we’ve said) focuses on the gist of the exchange, with minimal attention to how that gist is expressed. The suspect, in other words, has no reason to focus on whether a promise was expressed or merely implied. As a result, the suspect is likely to “hear” the (implied) promise or threat, even though the investigator uttered neither. COGNITIVE PSYCHOLOGY AND THE LAW jury instructions In some countries-including the United States-court cases are often decided by juries, and because most jurors aren’t lawyers, they need to be instructed in the law. This is usually accomplished by having the judge provide jury instructions as the last step of the trial-just before the jury begins its deliberation. The instructions in a typical case might include a reminder of the jury’s overall task, the exact definition of the crime being considered, the elements that must be proved in order to count a person as guilty of that crime, and so on.
These instructions need to be accurate, and so they must capture the precision and exact definitions involved in the law. But the instructions also need to be clear-so that ordinary jurors (often, people with just a high school education) can understand and remember the instructions and thus be guided by the law in their deliberation. It turns out, however, that these two requirements are often in conflict with each other, and sometimes perfectly precise language isn’t the same as clear and comprehensible language. As a result, the judges’ instructions are often difficult to understand. Research suggests, in fact, that jurors often misunderstand judges’ instructions; in some studies jurors understand less than half of what the judge tells them and, moments later, remember even less. These failures to understand instructions can be documented in college students who are given the instructions as part of an experiment, and in actual jurors who hear the instructions as part of a real trial. In some cases the misunderstandings concern subtle details, but in other cases jurors seem to misunderstand fundamental points about the law. For example, law in the United States rests on the idea that someone is innocent until proven guilty and that the defendant must, in a criminal trial, be proven guilty beyond a reasonable doubt. There’s room for discussion about what “reasonable doubt” means exactly, but the implication of this idea is very clear: When the jurors are genuinely uncertain about the verdict, they cannot vote “guilty.” Nonetheless, juries sometimes seem to misunderstand this point and adopt a rule of “If in doubt, then be careful, and that means don’t risk letting the defendant go free-and so, if in doubt, convict.”
What can we do about this? Studies of language processing tell us that people understand more when participating in a conversation than when merely hearing a conversation. It’s troubling, therefore, that jurors are expected to sit passively as the judge recites the instructions; they are almost never allowed to ask questions during the instructions about things they don’t understand. And if the jurors realize, during their subsequent discussion, that they didn’t understand the instructions, there’s often little they can do. Requests for clarification often result in the judge simply repeating the same words that caused the confusion in the first place. The instructions themselves are also a problem, but psycholinguistic studies provide guides for how the instructions can be simplified. For example, we know that people generally have an easier time understanding active-voice sentences than passive-voice ones, and affirmative sentences rather than negative ones. We also know that sentence understanding often depends on certain well-defined strategies, with the result that sentences are easier to understand if they have a structure that’s compatible with these strategies. Using principles such as these, a number of researchers have developed jury instructions that still reflect the law correctly but are much easier to understand.
We also know that vocabulary matters and that language comprehension suffers if the vocabulary is abstract or unfamiliar. It’s troubling, therefore, that instructions for juries often contain a lot of legal jargon. Another forward step, then, is simply to seek ways to replace this vocabulary with easier, more concrete, more familiar words!
Where does this leave us? It is of paramount importance that jurors understand the law; otherwise, the jury system cannot function properly. It is therefore worrisome that jurors’ comprehension of their instructions is so limited. However, it’s encouraging that by using what we know about language processing, we can make easy adjustments to improve the situation significantly.