Some weeks ago, a new research article about the speech perception abilities of people who stutter was published in the Journal of Fluency Disorders: “Backward masking of tones and speech in people who do and do not stutter” by Shriya Basu, Robert S. Schlauch, and Jayanthi Sasisekaran.
They replicated a backward masking experiment previously conducted by Peter Howell and colleagues (Howell et al., 2000; Howell & Williams, 2004; Howell, Davis, & Williams, 2006), in which the detection threshold for a short probe tone in a quiet background was compared with the threshold in the presence of masking noise presented immediately after the tone. The decibel difference between the thresholds in the two conditions is the amount of “backward” masking, which can be taken as a measure of the accuracy and rapidity of central auditory processing.
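The masking measure is thus a simple threshold difference. As a minimal sketch with purely hypothetical threshold values (not data from these studies):

```python
# Hypothetical detection thresholds for one listener (illustrative values,
# not taken from Basu et al. or Howell et al.).
threshold_quiet_db = 18.0   # probe tone alone, quiet background (dB SPL)
threshold_masked_db = 43.0  # same tone with noise starting right after it (dB SPL)

# Amount of backward masking = threshold shift between the two conditions.
backward_masking_db = threshold_masked_db - threshold_quiet_db
print(f"Backward masking: {backward_masking_db:.1f} dB")  # prints "Backward masking: 25.0 dB"
```

A larger shift means the trailing noise interferes more with the processing of the preceding tone, i.e., slower or less accurate central auditory processing.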
Howell and colleagues found a greater amount of masking, on average, in children who stutter than in normally fluent children, and also in children who persisted in stuttering as compared with those who recovered. There was, however, some overlap between the groups; thus Howell et al. (2006) concluded that an elevated backward masking threshold is sufficient but not necessary for stuttering to persist.
Unlike Howell’s experiments, Basu and colleagues tested adults (8 who stutter, 8 who do not) and, in addition to a tone, used vowel-consonant (VC) syllables as stimuli in a further experimental condition: 45 nonsense syllables, with 15 different consonants in three vowel contexts: /a/, /i/, or /u/. All stimuli were presented with and without backward masking. In the tone detection tasks, participants pushed a button when they perceived the tone; in the syllable recognition tasks, they repeated aloud what they heard.
The study confirmed the results obtained by Howell and colleagues: the stuttering participants, as a group, performed significantly more poorly than the control group in the conditions in which tones and speech were immediately followed by backward masking noise.
The surprising result of the study, however, was that the stuttering group performed significantly more poorly than the controls in the syllable recognition task even without masking, that is, in the control condition with a quiet background – and there was no overlap between the two groups (see Fig. 2 in the paper). The authors propose two possible explanations for this finding: either people who stutter have indistinct phonemic categories, or the consonants were masked by the vowels in the VC syllables.
I prefer the latter explanation. The first would point to a pure speech perception deficit unrelated to the findings with non-speech stimuli. Moreover, if people who stutter had indistinct phonemic categories, one would expect them to have difficulty articulating distinctly, but this is usually not the case.
The second explanation, by contrast, is consistent with the results of the authors’ and Howell’s backward masking experiments using non-speech stimuli, and also with the findings of other studies of the auditory abilities of people who stutter, all suggesting a subtle deficit in central auditory processing (e.g., Chang et al., 2009; Hampton & Weber-Fox, 2008; Kikuchi et al., 2011; Prestes et al., 2016; Saltuklaroglu et al., 2017).
Kikuchi et al. (2011), who used click sounds as stimuli in an MEG study, found that adults who stutter have less effective auditory gating in the left hemisphere, that is, the processing of redundant acoustic information is less suppressed. Evidence for reduced auditory gating also comes from an EEG study by Saltuklaroglu et al. (2017). These findings of reduced auditory gating, in particular, are well consistent with the idea that, in people who stutter, short voiceless consonants are masked by adjacent vowels in auditory processing.
Interestingly, poor discrimination of short consonants combined with vowels was already found in an earlier study by Neef et al. (2012). The stimuli were two stop consonant-vowel (CV) continua, /ba/–/pa/ and /da/–/ta/; that is, /ba/ was changed step by step into /pa/, and /da/ into /ta/. Participants – adults who stutter and controls – were asked to decide at every step whether they heard /ba/ or /pa/ (or /da/ or /ta/, respectively). The stuttering group performed more weakly and less stably over time in this experiment.
Basu and colleagues used VC syllables as stimuli, whereas Neef and colleagues used CV syllables. If masking by adjacent vowels caused the weaker perception of consonants in the stuttering groups in both studies, then the first would be a case of forward masking, and the second a case of backward masking, consistent with the findings on backward masking of non-speech stimuli reported above.
Basu and colleagues did not observe fatigue effects; participants responded with minimal delay in all trials. In the trials with speech stimuli, the need to repeat the heard syllable aloud may have helped maintain attentional focus. Neef and colleagues, who conducted five series of trials, found lower performance in the final series, most pronounced in the stuttering group in the /da/–/ta/ continuum, and they suggest that this could indicate reduced attention.
On the other hand, the performance of the stuttering group was similar to that of the control participants in the third series (see Fig. 4 A and B in the paper). The authors remark: “If one takes the optimal conditions in the laboratory into account, one sees that the problem is probably not an impossibility to perform but instead ready access to sufficient performance.” (p. 285)
It is well known that attention modulates speech perception (see Section 2.3 in the main text), but little is known about the impact of attention on phoneme perception or categorization in particular. However, Mattys, Barden, and Samuel (2014) showed that (in normally fluent speakers) perceptual sensitivity for phonemes decreased almost linearly with the effort involved in a concurrent, distracting visual task. Likewise, Mattys and Palmer (2015) came to the conclusion that “cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.”
The deficits and abnormalities in the perception and processing of acoustic stimuli suggest that at least a considerable subgroup of people who stutter have an auditory processing disorder (APD). Stavrinos et al. (2018) describe APD as “characterised by normal peripheral hearing, but abnormal processing of auditory information within the central auditory nervous system and by deficits in sound-in-noise discrimination”. Investigating the relationship between APD and attention deficits in children in a pilot study, they found that 20 of 27 children with APD demonstrated underlying attention deficits.
In my theory of stuttering (Attention Allocation Theory), I have assumed that insufficient attention to the auditory channel during speech results in poor processing of auditory feedback, which causes invalid error signals in an automatic monitoring system in the brain. Poor processing of auditory feedback means that parts of the feedback information are not completely transmitted or not completely kept in working memory, so that the internal monitor “believes” that a word or phrase has not been completely produced when the next one is already starting.
The results of Basu et al. (2018) and of Neef et al. (2012) point to a further possible source of invalid error signals: not only gaps in the stream of auditory feedback may occur, but also errors in phoneme categorization. If poor auditory processing of speech feedback leads to phonemes being missed or miscategorized, then short voiceless consonants are the most probable candidates for such processing errors.
In Section 3.4 in the main text (on the consequences of the theory for therapy), I recommend listening to one’s own voice and words, particularly to the ends of words and to short, unstressed function words, and speaking in a powerful, sonorous voice in order to draw attention to the auditory channel and, in this way, to improve the processing of auditory feedback. Given the new findings, it may also be helpful to listen attentively to short voiceless consonants. Doing so will automatically cause one to articulate them distinctly, which may prevent invalid error signals at these positions.
It is, however, not easy even for me to follow my own recommendations in some situations, especially when speech planning demands much of my attention (e.g., when I am explaining complicated things, or when a communication situation is ambiguous). As Mattys and colleagues showed, perceptual sensitivity for phonemes decreases with growing cognitive load. It is therefore important to reduce the cognitive demands of speech planning whenever possible by pausing between clauses and between units of meaning.
In my post from January, I mentioned a surprising finding in a study by Chang et al. (2018): they investigated intrinsic connectivity networks in children who do and do not stutter and found, among other things, anomalous, mainly reduced functional connectivity within the visual network (VN), and even more strongly reduced connectivity between the VN and the dorsal attention network as well as between the VN and the default mode network, in children who stutter as compared with their normally fluent peers (see Fig. 4 in the study). I took this finding as an indication of a general deficit in the involvement of sensory input in the control of behavior.
This view now seems to be confirmed by the results of a study from Finland, published online this week, by Johanna Piispala, Tuomo Starck, Eira Jansson-Verkasalo, and Mika Kallio: “Decreased occipital alpha oscillation in children who stutter during a visual Go/Nogo task.”
So far, I have read only the abstract, which, however, provides sufficient information for our purpose. The researchers investigated the main oscillations of the brain in 7–9-year-old children who stutter and in age-matched, typically developing children in order to discover potential differences related to attention and inhibitory control. EEG data were collected during a visual Go/Nogo task. The stuttering children showed reduced inhibition of the visual cortex and of information processing in the absence of visual stimuli, which, the authors conclude, “may be related to problems in attentional gating. […] Our findings support the view of stuttering as part of a wide-ranging brain dysfunction most likely involving also attentional and inhibitory networks.”
So much for the paper by Piispala and colleagues. The Attention Allocation Theory of stuttering proposed in the main text describes a potential relation between attention regulation and a pathomechanism of stuttering, which can be summarized in the causal chain depicted below (it is the same figure as in Section 2.3, but with some explanations).
The causal chain can close into a vicious circle when a child, after having experienced many instances of stuttering, begins to expect this trouble whenever he or she starts talking: expectation of stuttering, anticipatory struggle, and fear then strongly contribute to the misallocation of attention that caused the disorder. ‘Disrupted feedback’ in the figure below means that sensory, mainly auditory, feedback is not sufficiently processed or not completely kept in working memory.
For a long time, I believed that this vicious circle is the way in which stuttering becomes persistent, but this is probably not correct: in a diffusion tensor imaging study, Chow and Chang (2017) found differences between the brains of children who eventually recovered from stuttering and those who persisted (see Chapter 5). These differences were already present in very young children; thus they can hardly be consequences of stuttering. So I now assume that persistence or recovery is predetermined in most cases, either genetically or by early brain development prior to the onset of childhood stuttering.
The ‘vicious circle’ might nevertheless exist, but it may influence only the severity of stuttering, including secondary behaviors. Getting out of that circle, e.g., by desensitization, might therefore be the first step of successful therapy.