Speech timing revealed, as reported in Nature News 20 February 2013 and ABC News in Science 21 February 2013.
Edward Chang and colleagues from the University of California, San Francisco, have carried out a study of brain activity during speech. They were able to record directly from the surface of the brains of patients who had electrode arrays implanted under the skull in preparation for brain surgery, while the patients read consonant and vowel syllables aloud.
According to the research team, “Speaking is one of the most complex actions that we perform, but nearly all of us learn to do it effortlessly”. Speaking involves carefully controlled, coordinated movements of many muscles in the face, mouth, neck and larynx. These movements are controlled by a region of the brain called the ventral sensorimotor cortex (vSMC).
Chang’s team was able to show how this region is organised, i.e. which parts control the different components of the vocal tract (jaw, lips, larynx, etc.), how they are arranged, and how they work together during speech. They found that the parts of the brain controlling different parts of the vocal tract were arranged according to the location of those body parts relative to one another. Chang explained: “Like your hand is next to your wrist and your wrist is next to your elbow, the brain seems to reflect those same things with the vocal tract – it seems to go from the lips to the tongue to the jaw to the larynx in the layout”.
In order to produce speech, different parts of the brain have to be activated in coordinated sequences with millisecond timing, like the musicians in a symphony orchestra each playing notes on their instruments at just the right time. The researchers found that the patterns of brain activity for consonants and vowels were different, even though they use the same parts of the vocal tract. This could explain why “slips of the tongue” usually involve substituting consonant for consonant or vowel for vowel, rather than replacing a vowel with a consonant or vice versa.
The study also provides a clue as to why “tongue twisters” are hard to say. The brain coordinates speech movements according to how the muscles need to move, rather than by the sound produced. The researchers found different patterns of brain activity for three categories of consonant: front-of-the-tongue sounds, e.g. “ss”; back-of-the-tongue sounds, e.g. “g”; and lip sounds, e.g. “mm”. They also found two categories of vowel, depending on whether the vowel requires rounded lips (as in “oo”) or not. This indicates tongue twisters are hard because they include lots of sounds that are stored in overlapping parts of the brain. For example, in “She sells seashells on the sea shore” the “ss” and “sh” are both stored in the brain as front-of-the-tongue sounds, and are easily confused.
The researchers also found that the human larynx is controlled by two areas of the vSMC, whereas in other primates there is only one. They commented that the extra brain region for the larynx may be “a unique feature of human vSMC for the specialized control of speech.”
Editorial Comment: We are not surprised this research found a brain feature that seems to be unique to humans. Human speech is fundamentally different from any animal noises, and this finding adds to evidence that the human brain is specially designed for speech.
As such, this new study on human speech control is a good reminder that we speak not just because we have more brain cells than other living things, but because we were made ‘in the image of the God who speaks’, and chimps and all their cousins were not. This creator God made our brains to be equipped for speech, under the control of the minds he also gave us.
Likewise, the millisecond timing in the coordination of the vocal tract is a challenge to anyone who claims speech could evolve by chance, random processes. Several of the news articles about this study used the analogy of the split-second timing involved in a symphony orchestra producing music, and it is a good illustration of the muscle coordination needed for speech. Playing notes with random timing will not produce any music. To get music, rather than noise, you need to start with information from the composer, which is written down in the musical score and acted upon with plan and purpose by musicians, who read the music and know how to play their instruments to produce the correct sounds. The same applies to speech. It has to start with information in the mind of the person speaking, which must then be acted upon by the brain cells in the region of the brain analysed in this new study.
Talk about evidence for a designed system, in the light of which all claims that we are 98.6% the same as chimps go out the window as the futile hope of evolutionists.
Evidence News 27 March 2013