This post is about the article “Transcranial direct current stimulation over left inferior frontal cortex improves speech fluency in adults who stutter” by Jennifer Chesters, Riikka Möttönen, and Kate E. Watkins, recently published online in the journal Brain – see here (free full text).
The authors applied transcranial direct current stimulation (tDCS) to the left inferior frontal cortex during speech production in combination with choral reading and metronome-timed speech. They found that the fluency training combined with tDCS had a significantly greater and longer-lasting effect than the same fluency training combined with sham stimulation.
The authors propose “that tDCS over the left inferior frontal cortex during the fluent mode of speaking facilitated plasticity of the frontal speech network and prolonged its normalized functioning, resulting in lasting improvements in fluency.” I completely agree with this explanation, and it is amazing that this could be shown after only five days of fluency training, twenty minutes a day.
However, six weeks after the intervention, the reduction of stuttering in conversation had also decreased significantly in the tDCS group. It is therefore important to find out how the fluency training can be improved – simply extending the duration of training and tDCS may not be sufficient. We should ask what exactly happens during chorus reading and metronome-timed speech: what does the effect depend on, and how can a similar effect be achieved in everyday talking?
Chorus reading as well as metronome-timed speech have been shown to transiently normalize the activation of the left inferior frontal gyrus (IFG, Broca’s area), associated with a reduction or elimination of stuttering. But the left IFG is not the only region that is under-activated during stuttered speech: the left auditory cortex (Wernicke’s area) is under-activated as well, and this too is normalized by chorus reading and metronome-timed speech (see Table 1 on my website for an overview). There seems to be a relationship between left IFG activation and auditory activation during speech.
Fibers of the superior longitudinal fasciculus terminate in the IFG (see, e.g., Makris et al., 2005), and we can assume that the IFG is involved in processing the auditory feedback information conveyed via that fiber tract. This processing, and with it auditory-motor integration, may be impaired when the left IFG is under-activated during stuttered speech. However, the immediate efficacy of chorus reading and metronome timing in producing fluent speech suggests that the left-hemispheric speech network is quite able to work well if certain requirements are met – which is obviously the case during chorus reading and metronome-timed speech. But what are those requirements?
Some researchers believe that the speaker gets cues for syllable onsets, which helps people who stutter because they have difficulty generating their own speech rhythm. But that is not plausible, for at least two reasons. First, chorus reading and metronome-timed speech cannot work by the speaker merely reacting, each time, to an external signal. If you wait until you hear the beat of the metronome, or until you hear the co-speakers start a syllable, and only then say the syllable yourself, you will never be in sync but always too late, because of your reaction time. Instead, you are required to capture the given beat or pace so that you can predict and anticipate it. Then you must adjust your own pace to the given pace and continuously monitor whether you are still in sync with the metronome or the co-speakers. That is, you must attentively listen to both the given pace and your own speech in order to correct your pace if necessary.
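The difference between reacting and predicting can be made concrete with a small simulation. The following sketch is purely illustrative (my own, not taken from any of the studies discussed here), and the beat interval, reaction time, and correction factor are assumed values: a speaker who only reacts to each beat lags behind by a constant reaction time, whereas a speaker who predicts the pace and uses feedback to correct the asynchrony gets closer and closer to perfect synchrony.

```python
BEAT_INTERVAL = 0.50   # assumed metronome interval in seconds (120 beats per minute)
REACTION_TIME = 0.15   # assumed auditory reaction time in seconds
N_BEATS = 10

beats = [i * BEAT_INTERVAL for i in range(N_BEATS)]

# Reactive strategy: start each syllable only after the beat has been heard.
reactive = [t + REACTION_TIME for t in beats]

# Predictive strategy: anticipate the next beat from an estimate of the pace
# and use feedback (the heard asynchrony) to correct the next syllable onset.
interval_estimate = 0.60      # initially wrong estimate of the metronome's pace
predictive = [0.10]           # first syllable starts slightly off the first beat
for i in range(1, N_BEATS):
    if i >= 2:
        # listening to the metronome updates the estimate of its pace
        interval_estimate = beats[i - 1] - beats[i - 2]
    # listening to one's own speech: how far off the last syllable was
    asynchrony = predictive[-1] - beats[i - 1]
    predictive.append(predictive[-1] + interval_estimate - 0.5 * asynchrony)

for beat, r, p in zip(beats, reactive, predictive):
    print(f"beat {beat:5.2f} s | reactive lag {r - beat:+.3f} s | "
          f"predictive lag {p - beat:+.3f} s")
```

Running the sketch shows a constant lag of 0.15 s for the reactive strategy, while the lag of the predictive strategy shrinks toward zero: synchrony is achieved by prediction plus monitoring, not by reaction.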
Therefore, chorus reading and metronome-timed speech make the speaker listen not only to the externally given beat or pace, but also to his or her own speech. In this way, these conditions improve the processing of auditory feedback (the processing of verbal input is attention-dependent – see below) and, by that, auditory-motor integration, which results in fluent speech.
A second argument against the hypothesis that people who stutter benefit from external cues for syllable onsets is that they are quite able to generate their own rhythm, for example in singing, as well as in speaking accompanied by rhythmic arm or hand movements. This is confirmed by an experiment conducted by Howell and El-Yaniv (1987): adults who stutter read a short story (1) normally, (2) while listening to the clicks of a metronome, and (3) while listening to clicks presented at the beginning of every syllable, triggered by the intensity of the speaker’s voice (i.e., by the participant’s self-generated rhythm). The third condition reduced stuttering nearly as effectively as the second one: the mean number of disfluencies in the story was 20.25 in the normal condition, 0.6 with the metronome, and 2.5 with the click at syllable onset.
Complex automatized sequential motor behavior – e.g., in manual work, sports, dancing, playing music, or driving a car – and also speaking requires the ongoing integration of sensory input, including sensory feedback, in several modalities (visual, acoustic, tactile, kinesthetic), and with that the appropriate allocation of perceptual and processing capacities to the respective behavior or task. I simply call this the allocation of attention, even if the person is often not aware of it, since it is an integral component of the behavior and was learned and automatized together with that motor skill.
Errors in automatized sequential behavior occur when the appropriate allocation of attention is disturbed or was not correctly learned. For example, attention may be distracted from sensory input, or overly focused on one component of it, e.g., on one sensory modality. In speaking, the first happens when the speaker is overly focused on the thoughts or emotions to be expressed, or on the fear of disfluency; the second happens when the speaker is too focused on the feedback of articulatory movements (e.g., in the attempt to avoid stuttering), to the detriment of the auditory component and/or the proprioception of breathing.
I propose that developmental stuttering is caused by a misallocation of attention, that is, of perceptual and processing capacity, during speech. The misallocation may be due to several factors (see Chapter 5), but chorus reading and metronome-timed speech seem to be tasks that compel the speaker to reallocate his or her attention, namely to listen during speech – not only to the co-speakers or to the metronome, but also to his or her own speech.
In chorus reading and in metronome-timed speech, the speaker is compelled to reallocate attention in this way, but mostly does not become aware of it – especially not of the fact that he or she must listen not only to the co-speakers or the metronome, but also to his or her own speech. Thus the reallocation of attention is not maintained in everyday talking. One goal of therapy should therefore be to make clients aware of the necessity to reallocate attention during speech, and to practice listening to one’s own words in everyday situations.
The lateralization of the processing of verbal acoustic input depends on attention (it is left-lateralized only during active listening; Poeppel et al., 1996; Rämä et al., 2012; Sabri et al., 2008), and particularly on attention to the lexical aspect of speech (attention to the prosodic or sound aspect draws processing to the right hemisphere; Hugdahl et al., 2003; Vingerhoets, Berckmoes, and Stroobant, 2003). What is true for the processing of external verbal input might also be true for the processing of auditory feedback. It is therefore important to listen not only to one’s own voice, but to one’s own words.
In this way, it should be possible to maintain a normal activation of the left-hemispheric speech network, and transcranial direct current stimulation seems to be a good means of supporting this by promoting a change in brain structure.
[This post has been included in the main text; see Section 2.6.1.]