2. The stuttering event

2.1. The immediate cause: invalid error signals

Levelt (1995) formulated the Main Interruption Rule as an operating principle of the self-monitoring system of speech: “Stop the flow of speech immediately upon detecting trouble” (p. 478). Trouble, here, means a mismatch between the expected and the perceived. In contrast to conscious internal pre-articulatory monitoring in situations in which we must carefully avoid every mistake, self-monitoring via the external feedback loop is a largely automatic, unconscious, and simple process: The monitor compares the perceived phoneme sequence with the expected one and interprets every mismatch as an error. The detection of an error leads to a ‘shift of context’ (Kochendörfer: Cortical Linguistics, Section 2.5.4), namely a shift from the mode ‘speaking as planned’ into the mode ‘error repair’. When an error has been detected, the monitor stops the flow of speech and directs the speaker’s attention to what he has just said. After the speaker has realized the error, he will accept the interruption and make a correction (read more).

However, a mismatch between the expected and the perceived phoneme sequence can also arise if the external feedback is suddenly and temporarily disrupted (the cause of such disruptions will be discussed later). Suppose a momentary disruption of feedback occurs: The perceived phoneme sequence arrives incompletely at the monitor, but the complete sequence is expected, thus the monitor detects a mismatch. The monitor is unable to distinguish between a mismatch due to an actual speech error and a mismatch due to a transmission error; it therefore behaves in the same way as if a speech error had been made and interrupts the flow of speech (read more). This, in my view, is the immediate cause of stuttering: Speech flow is suddenly blocked without any reason identifiable to the speaker. Now the speaker, who is unaware of an error, spontaneously tries to overcome the blockage and to continue talking (read more). This natural and automatic behavior is the cause of the observable core symptoms of stuttering (see below). Figure 5 depicts the behavior of the monitor in both cases: left, the response to a speech error; right, the response to a disruption of feedback.
 

[Figure 5 image: Immediate cause of stuttering – invalid error signals]

Figure 5: Detection of a speech error (left) and stuttering (right) are based on the same mechanism: The monitor (in the circle) behaves identically in both cases.

In other words: If the monitor detects a mismatch because of a speech error, two processes are elicited: first, at the motor level, a blockage of speech flow and, second, at the mental level, the recognition of the error and a shift from the ‘speaking as planned’ mode into the ‘error repair’ mode. The monitor’s response to a mismatch due to an invalid error signal, by contrast, elicits only the first process, because there is no speech error to find; thus the speaker, despite the blockage of the motor program, remains in the ‘speaking as planned’ mode – and exactly this is what causes the stuttering symptoms.
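To make this comparison step concrete, here is a minimal sketch in Python. It is purely illustrative: the names (monitor, MonitorResponse, the toy phoneme lists) are my own and not part of the theory. The point is only that the monitor sees nothing but a mismatch and therefore reacts identically to a real slip and to a feedback dropout.

```python
# Illustrative sketch only: a toy monitor that compares the expected phoneme
# sequence of a speech unit with what the auditory feedback loop delivers.
# All names are hypothetical; None stands for a phoneme lost to a feedback dropout.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MonitorResponse:
    interrupt: bool        # motor level: block the speaking program?
    shift_to_repair: bool  # mental level: enter the 'error repair' mode?

def monitor(expected: List[str], perceived: List[Optional[str]]) -> MonitorResponse:
    """Compare expected vs. perceived phonemes and react to any mismatch."""
    pairs = list(zip(expected, perceived))
    mismatch = len(perceived) < len(expected) or any(e != p for e, p in pairs)
    if not mismatch:
        return MonitorResponse(interrupt=False, shift_to_repair=False)
    # The fast, unconscious monitor cannot tell a real speech error from a
    # transmission error: in both cases it interrupts.  Only the slower,
    # conscious evaluation finds a real error (approximated here by checking
    # whether any phoneme actually arrived wrong rather than missing).
    real_error = any(p is not None and e != p for e, p in pairs)
    return MonitorResponse(interrupt=True, shift_to_repair=real_error)

# Real slip of the tongue ("t" instead of "b"): interrupt and shift to repair.
print(monitor(["b", "ai", "k"], ["t", "ai", "k"]))
# Feedback dropout at the end of the unit: interrupt, but nothing to repair.
print(monitor(["b", "ai", "k"], ["b", "ai", None]))
```

In the second call the blockage occurs just as in the first, but there is no error to become aware of, so the speaker stays in the ‘speaking as planned’ mode – the situation described above as the immediate cause of stuttering.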

We must, however, consider the following problem: For the detection of speech errors, an expectation of the correct sound sequence of a speech unit is needed, and this expectation is generated on the basis of auditory feedback (see last section). From this, the question arises: How can this monitoring mechanism work if auditory feedback is disrupted? Nothing suggests that the detection of speech errors is in any way impaired in stutterers (read more). The answer is: The monitoring mechanism is quite able to work as long as the disruption of auditory feedback is not permanent – the feedback information must merely be sufficient for the recognition (identification) of the words and phrases produced. For this purpose, as a rule, only the initial portions of speech units are needed (see last section). Therefore, we can assume that a disruption of the auditory feedback of a produced speech unit does not impair the recognition of this unit and the prediction of its correct sound sequence – namely if the feedback is not disrupted at the initial portion of the unit but only at the end or in the middle. However, even if feedback is disrupted not at the end but in the middle of a speech unit, the invalid error signal is generated at the end, because the prediction of the correct sound sequence and the comparison with the perception take some time (read more).

Since the invalid error signal is generated at the end of a speech unit, the resulting interruption of speech flow affects the start of the subsequent unit, because the monitor needs a reaction time (read more). Thus the speaker can often still start the speaking program and produce the initial sound(s) of a syllable or the first word of a phrase. After the blockage has become operative, either the speaking program tries to start repeatedly but is blocked again and again at the same point, or the program gets caught at the point of blockage – with the result that either the sound is prolonged (if phonation is not interrupted) or speaking is completely blocked. In this way, the characteristic stuttering symptoms – repetition, prolongation, and silent block – are caused.
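Again as a purely illustrative sketch (the decision logic and all names are assumptions of mine, not part of the theory), the claim that the three core symptoms are different surface forms of one and the same underlying blockage might be pictured like this:

```python
# Toy model: one underlying blockage, three observable surface symptoms.
# The names and the simple decision logic are illustrative assumptions.

def surface_symptom(restart_attempts: int, phonation_continues: bool) -> str:
    """What a listener hears when a speaking program is blocked at its start."""
    if restart_attempts > 0:
        # The program starts again and again and is blocked at the same point:
        # the initial sound(s) or the first word are heard repeatedly.
        return "repetition"
    if phonation_continues:
        # The program is caught at the point of blockage while phonation goes on:
        # the current sound is stretched.
        return "prolongation"
    # No restart and no ongoing phonation: nothing audible is produced.
    return "silent block"

print(surface_symptom(restart_attempts=3, phonation_continues=False))  # repetition
print(surface_symptom(restart_attempts=0, phonation_continues=True))   # prolongation
print(surface_symptom(restart_attempts=0, phonation_continues=False))  # silent block
```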

Which kind of symptom occurs at a given moment may mainly depend on three factors: the phoneme affected (plosives, for instance, cannot be prolonged), the inclusion of breathing in the underlying motor blockage, and the degree of muscular tension (the more tension, the greater the tendency towards tonic symptoms, e.g., “M(ə)-m(ə)-monica” turns into “Mmmonica”, and “P(ə)-p(ə)-peter” turns into “P – – eter” because the lips are pressed together). If the symptom occurs at the onset of or within a syllable (i.e., if not a whole syllable or word is repeated), then speech flow is mostly blocked on the core of the syllable, i.e., prior to or on the vowel or diphthong. The reason may be that the blockage of a syllable is always executed in the same way, namely by blocking the vowel. The fact that blockages within syllables follow certain rules suggests that stuttering is not a breakdown but a regular response of the control system (read more). – In summary, we arrive at a preliminary definition of stuttering:

A stuttering event is the blockage of a speaking program by an automatic monitor because of an invalid error signal. The invalid error signal is the result of a mismatch between the expected and the perceived phoneme sequence, caused by a temporary disruption of auditory feedback.

The definition includes features of stuttering not observable from the outside; that is, the definition is part of the proposed theory and depends on it: If the theory is wrong, the definition is invalid. In addition, of course, the common definition of stuttering holds and is taken as a premise: Stuttering is when someone, obviously against his own will and often with obvious physical effort, repeats words or parts of words, prolongs sounds, or gets stuck, apparently because speaking is somehow internally blocked. In the proposed theoretical definition, the three core symptoms of stuttering – repetitions, prolongations, and silent blocks – are explained in the same way, namely as being caused by the blockage of a speaking program. The different kinds of observable symptoms result from the speaker’s spontaneous response to the internal blockage. This is in accordance with Dayalu et al. (2001), who postulated “that the production of core stuttering behaviors represents an attempt by the person who stutters to overcome the involuntary block, and regain forward flowing speech” (p. 111).

The proposed definition clearly distinguishes stuttering events from normal disfluencies caused by delays in speech planning – unintended speech pauses that can be filled with word repetitions or usual fillers like “hm”. In contrast to such normal speech disfluencies, a stuttering event, as defined above, never occurs because of a delay in speech planning. A stuttering event can only occur if a speaking program is just starting or has just started, i.e., the speaker is about to articulate a specific phoneme sequence, or has already begun to do so, and is blocked at this moment. Someone who is stuttering knows exactly what he/she wants to say and has no problem with the formulation. The clear distinction between stuttering and normal speech disfluencies is supported by Jiang et al. (2012), who found different brain activity patterns for stuttering-typical and normal disfluencies, suggesting that they are caused in different ways (read more).

The assumption that a stuttering event is caused by an invalid error signal elicited at the end of the preceding speech unit allows us to explain some typical features of the disorder: One of the most conspicuous properties of stuttering is that, in more than 90 % of cases, words are stuttered at their onset or on their first syllable (read more). The reason is that the monitoring system needs a reaction time, thus the speaking program of the subsequent unit is blocked at the start or in its initial part. If the speaking program of a word has been blocked, then the word is usually stuttered on its first syllable. If, however, the speaking program of a phrase has been blocked, then sometimes the first word of the phrase, e.g., an article or a preposition, is repeated as a whole.

Sometimes a syllable other than the first syllable of a word is stuttered, often a stressed syllable or a syllable beginning with a sound on which the speaker expects difficulty. In these cases, the speaker has probably disassembled the speaking program of the word and has started the syllables separately, perhaps in an attempt to articulate them carefully. Bloodstein (1975) reported that fragmentation of words into smaller units could be observed in the speech of most young children when they attempted to produce utterances that they found motorically or linguistically demanding (cited in Brocklehurst, Lickley, & Corley, 2013). Such fragmentation may be the cause of stuttering on syllables other than the first syllable of a word. In these cases, it is not the speaking program of a word but that of a particular syllable which is blocked, because the end of the preceding syllable has not been detected by the monitor.

The opposite of the fragmentation of speaking programs is the production of a long speech passage controlled by only one speaking program, as in reciting a poem from memory or in playing a stage role (see Section 1.2). One reason why some stutterers speak fluently in these situations might be that they trust their automatic speech control and do not fragment the learned and memorized program, but ‘reel it off’ as a whole. The ‘adaptation effect’ – stuttering is reduced when a text is repeatedly read (Johnson & Knott, 1937; Van Riper & Hull, 1955) – may be explicable in a similar way: The more familiar the text, the fewer words are fragmented, and the more words are bundled into phrases produced by only one motor program.

Up to now, there is no empirical evidence of the invalid error signals postulated to be the immediate cause of stuttering. However, a part of the brain which is involved in the detection and correction of motor errors on the basis of sensory feedback is the cerebellum (read more). Zheng et al. (2013) identified a brain network that appeared to encode an ‘error signal’ in reaction to distorted auditory feedback during articulation. The network included right angular gyrus, right supplementary motor area, and bilateral cerebellum.

In the light of these findings suggesting a role of the cerebellum in motor error processing, the following findings concerning cerebellar activation in stuttering are of interest: Several brain imaging studies, including two meta-analyses (Brown et al., 2005; Budde et al., 2014), showed that the cerebellum plays an important role in stuttering: During speech, the cerebellum was found to be overactive in stutterers compared to normally fluent controls, and overactivity was positively correlated with stuttering severity (Fox et al., 2000; Ingham et al., 2004). More recently, Yang et al. (2016) found that resting-state functional connectivity within cerebellar circuits was significantly correlated with the severity of stuttering, and Kell et al. (2017b) reported that, in individuals who spontaneously recovered from stuttering, activity in the superior cerebellum together with the left prefrontal cortex (BA47/12) appeared uncoupled from the rest of the speech production network.

On the other hand, an impairment of cerebellar function can cause stuttering to disappear. Bakheit (2011) reported the case of a 54-year-old man who lost his lifelong stuttering after an ischaemic infarct in the left side of the brain stem and in both hemispheres of the cerebellum. After the stroke, the patient showed an ataxic dysarthria characterized by slowed speech movements and a breathy, low-volume voice; however, the dysarthria had improved significantly 12 weeks after stroke onset, and there was no recurrence of stuttering. The author hypothesized, among other possibilities, that a lesion in the cerebellum could abolish stuttering by inhibiting excessive neuronal activation. Two earlier cases were reported by Miller (1985), in which severe stuttering disappeared with progressive multiple sclerosis and associated bilateral cerebellar dysfunction. A further case is that of Martina P. in Germany in 2012; a scientific report is in progress but not yet published (read more).

Further evidence for the important role of the cerebellum comes from Wymbs et al. (2013). They used event-related fMRI in order to identify individual differences in the brain activation patterns of four stutterers during stuttered and fluent word production. They found many brain regions to be overactive during stuttered speech; however, across-subject agreement for activated regions was minimal. The only region that was overactive during stuttered words in all four participants was the left cerebellum, lobule IV (see Table 2 in Wymbs et al., 2013). So, even though there is no direct evidence of error signals being the cause of stuttering, there are some findings supporting this assumption.

 



Footnotes

Phonological monitoring

In our context, i.e., for the proposed theory, only phonological monitoring is relevant. Phonological monitoring means evaluating whether the syllables and sounds of a word or phrase have been spoken completely and in the correct order. Correspondingly, syntactic monitoring means evaluating whether the constituents of a sentence have been produced completely and in the correct order. In both these kinds of monitoring, a perceived structure is compared with an expected structure; that is, these kinds of monitoring concern only the form (the structure), not the content of speech.

It was already mentioned that syntactic errors (phrase structure violations) are detected more quickly by the brain than semantic errors. Words or clauses that are formally correct but make no sense, or not the sense intended by the speaker, can be detected as errors only after the contents of the words (produced by oneself or by someone else) have been perceived and comprehended. For the detection of a phonological error, by contrast, no comprehension of content is needed; only the acoustic word form, i.e., the word as a certain sound sequence, must be recognized. Thus, phonological monitoring is the basic and quickest part of the self-monitoring of speech. A neuronal network able to do this work can have a very simple structure – see the ‘watchdog’ network (tested in a computer simulation) in Kochendörfer: Cortical Linguistics, Part 5, p. 98, Fig. 5.4.1–4 (unfortunately not yet available in English). (return)
 

Mismatch detection versus error detection

It is important to understand that, even if a real speech error is detected, speech flow is not interrupted because the speaker has noticed an error. Instead, speech flow is interrupted because an internal monitor, an unconsciously working neuronal network, has detected a mismatch between the expected and the perceived. In addition to the interruption of speech flow, the detection of a mismatch may trigger an activation of the motor working memory in order to evaluate the words last actually spoken, because the decision whether an error was actually made can hardly depend on auditory feedback alone.

Further, the decision whether an error was actually made, as a rule, involves either semantic processing or the retrieval of syntactic or grammatical knowledge – none of which is required for the detection of a purely structural mismatch, which is sufficient for the interruption of speech flow. In the case of a real error, an initially unconscious interaction between several parts of working memory – auditory and motor memory as well as the knowledge of language and the memory of the intended message – may finally let the speaker become aware of his mistake. But this interaction comes about only if an error has actually happened, not in the case of a mismatch due to a disruption of auditory feedback.

Our belief that we stop talking because we have just noticed an error may result from our bias to explain our behavior in a rational way, but also from the fact that time is needed not only for realizing the error, but also (i) for the interruption of speech flow (information transfer from the brain to the muscles) and (ii) for realizing the interruption (information transfer from the muscles to the brain). Therefore, it is not astonishing that we become aware of the interruption of speech flow after we have noticed the error.

Both processes – interrupting speech flow and becoming aware of the error – might run in parallel and influence each other, which may explain why the patterns of interruption in stuttering and in error repair are different: In stuttering, mostly the production of a vowel (the nucleus of a syllable) is blocked, but this pattern is not observed in the interruptions preceding error repairs. Levelt (1995) gives the example: “We can go straight on to the ye– , to the orange node.” (p. 479). (return)
 

Why does the speaker automatically try to overcome the blockage?

This may not only be because the speaker is not aware of having made a speech error. Alm (2004, p. 331) reports: “Studies of monkeys have shown that neurons in different parts of the globus pallidus signal just before the end of a submovement in a well-learned and predictable motor sequence. It has been proposed that this signal is an internal cue that is generated by the basal ganglia to mark the end of a component in a movement sequence. This signal would be appropriate to serve as a trigger for the SMA to switch to the next movement in the sequence (Brotchie, Iansek, & Horne, 1991; Mushiake & Strick, 1995).” Alm then speculates whether stuttering could be the consequence of a failure of the basal ganglia to produce these cues that mark the end of a speech unit and trigger the start of the next one. By contrast, I rather think that the basal ganglia do produce these cues correctly – and that is precisely why the speaker automatically tries to continue even when he/she feels a blockage.

According to this model, a stuttering event usually consists of two components: (1) the blockage of a speaking program because of an invalid error signal and (2) the speaker’s attempt to continue. The first component probably comes from the cerebellum (see below in the main text), the second component from the basal ganglia. Since the basal ganglia and the SMA are part of a circuit that controls voluntary behavior, the second component of a stuttering event can be influenced by the will. This is utilized in therapy: A stutterer can learn to suppress the urge to continue and, instead, to stop at the moment when he/she feels a blockage. Then, speech will not be fluent, but it will be free of stuttering, as shown in this video.

That means that the activity of the basal ganglia can be modified by the speaker’s will in order to reduce overt stuttering behaviors. Interestingly, basal ganglia activity can also be modified to reduce stuttering in another way, namely by medication with D2-receptor blockers like haloperidol. Alm (2004) concludes from the literature that the drug “exerts its main effect in reducing superfluous motor activation during stuttering, not in reducing the number of disruptions” (p. 337); that is, the effect is similar to that of voluntary suppression of the urge to continue speaking. These facts suggest that the basal ganglia are involved in the second component of a stuttering event, but not in the first component: They do not cause the interruptions of speech flow. Alm’s theory and the involvement of the basal ganglia in stuttering are discussed more extensively in the next section, here. (return)
 

Speech error detection in stutterers

An important question is: Given that auditory feedback in stutterers is temporarily disrupted, as assumed – is the detection of speech errors necessarily impaired by that? Empirical studies have shown that stutterers notice and correct their slips of the tongue as well as other people do (Brocklehurst & Corley, 2011; Postma & Kolk, 1992). This fact has been used as an argument against theories that claim that disruptions of auditory feedback cause stuttering. Therefore, such theories have to explain why the detection of speech errors works well despite the disrupted auditory feedback.

It is not very probable that slips of the tongue go unnoticed because of the temporary disruptions of auditory feedback assumed in the proposed theory. Auditory feedback can only be disrupted in the middle and at the end of words – otherwise self-monitoring could not work at all, because no expectations of the correct sound sequences could be generated (see Section 1.5). In speech errors, by contrast, wrong phonemes mostly occur at the onset of words, because slips of the tongue are typically substitutions of similar-sounding words that deviate at the first syllable, for example, “Bake my bike” instead of “Take my bike”; see also the footnote in Section 2.5.

Speech errors within or at the ends of words are less common, except in people who are uncertain about pronunciation, e.g., of loanwords, or about grammar. If a speaker does not correct and apparently does not notice such errors, this is more likely due to incorrect or unclear expectations (poor knowledge of what is correct) than to disrupted auditory feedback. (return)
 

The time needed for generating expectations

The Analysis-by-Synthesis Model describes how words (acoustic word forms) are recognized (identified) through an interaction between expectations (i.e., hypotheses) and the successively incoming perceptions: With every further phoneme perceived, the expectation is either confirmed or corrected (Poeppel & Monahan, 2010). That means, however, that a mismatch between the expectation just generated and the perception – no matter whether due to a speech error or due to disrupted feedback – first of all challenges the expectation, which thereby becomes unstable (the system is unsure whether the expectation is correct). Only after the expectation has been re-stabilized, e.g., by taking context information into account (now the system is sure that the expectation is correct), can the deviating perception clearly be evaluated as not matching the expectation, i.e., as a potential error. In short, the system first needs enough input, i.e., enough time, to recognize the word and predict its correct sound sequence before it is able to evaluate a perception deviating from this sound sequence as a potential error. Therefore, we can assume that error signals, as a rule, are generated at the ends of words or phrases and affect only the start of the following speech unit. (return)
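A minimal sketch of this timing argument, with an invented toy lexicon and hypothetical names (it is not a model of the actual recognition process, only of the logic that a deviation can be flagged only once the hypothesis about the word has stabilized):

```python
# Toy illustration of analysis-by-synthesis timing: early deviations merely
# reshape the hypothesis about which word is being heard; only once the set of
# candidate words has stabilized can a deviating segment be flagged as a
# potential error.  The lexicon and all names are invented for the example;
# letters stand in for phonemes.

LEXICON = ["take", "bake", "lake", "table"]

def monitor_word(perceived: str) -> list:
    """Process the perceived segments one by one; return any mismatch flags."""
    flags = []
    for i in range(1, len(perceived) + 1):
        prefix = perceived[:i]
        if any(w.startswith(prefix) for w in LEXICON):
            continue  # the expectation is still open or confirmed; nothing to flag
        # The prefix up to the previous segment had stabilized on some candidate(s);
        # only now can the deviating segment be flagged as a potential error.
        stable = [w for w in LEXICON if w.startswith(perceived[:i - 1])]
        if stable:
            flags.append(f"segment {i}: expected one of {stable}, got '{perceived[i - 1]}'")
    return flags

print(monitor_word("tabl"))  # []  - 'table' is still a valid hypothesis, no flag yet
print(monitor_word("tabe"))  # a flag arises only at the fourth segment, late in the word
```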
 

The reaction time of the monitor

The time needed for the detection of a mismatch is important for the pathomechanism of stuttering assumed in the theory. In an MEG study, MacGregor et al. (2012) found the earliest difference between the brain responses to a word and to a similar-sounding nonword after 50–80 ms, with the participants’ attention being distracted from the stimuli. A mismatch negativity – that is, the event-related brain potential indicating the attention-independent response to a (mostly acoustic) stimulus that differs from the expectation – usually has a latency of 100–150 ms (Näätänen et al., 2007). I think we can assume similar latencies for the detection of a mismatch due to a feedback disruption (because of the disrupted feedback at its end, the affected word may often appear to the monitor like a similar-sounding nonword). However, the time needed for interrupting speech flow must be added to the error detection latency to obtain the total reaction time of the speech control system. (return)
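A rough back-of-the-envelope calculation may make the consequence clearer. The detection latency is taken from the sources cited above; the time for stopping the articulators and the average syllable duration are my own rough assumptions, inserted only to show the order of magnitude:

```python
# Back-of-the-envelope sketch; all values are rough figures for illustration only.
det_lo, det_hi = 100, 150  # ms, mismatch detection latency (Näätänen et al., 2007)
mot_lo, mot_hi = 50, 100   # ms, assumed time to actually stop the articulators
syllable_ms = 200          # ms, assumed average syllable duration in running speech

print(f"total reaction time: {det_lo + mot_lo}-{det_hi + mot_hi} ms, "
      f"i.e. about {(det_lo + mot_lo) / syllable_ms:.2f}-"
      f"{(det_hi + mot_hi) / syllable_ms:.2f} syllable durations")
# -> roughly 150-250 ms, on the order of one syllable: an error signal generated
#    at the end of one unit takes effect at or shortly after the onset of the next.
```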
 

Stuttering within syllables

One can stutter “T(ə)–t(ə)–tom” or even “To–to–tom” (repetitions), and also “T – – om” (block), but hardly “Tom(ə)–m(ə)–m” or “Tommm”; that is, the initiation of the syllable seems to be the problem. Further, repetitions in stuttering are mostly repetitions of syllable-like phoneme sequences (however, not always syllables of the stuttered word: [tə] is a syllable, but not a syllable of the stuttered word “Tom”). Such observations led to the hypothesis that syllables are the typical object of stuttering (see, e.g., the Syllable Initiation Theory by Packman, Code, & Onslow, 2007). By contrast, I assume that, usually, speaking programs of words or phrases are blocked. However, since all words and phrases consist of syllables, a syllable is always affected when a word or phrase is stuttered.

But how can we explain that stuttering always affects the onset or the anterior portion, but not the end, of a syllable – or, more precisely, why the speaker is apparently unable to move on to the nucleus of the syllable (Wingate, 1988)? Given that stuttering does not result from a breakdown of speech control, but from the regular response to an (albeit invalid) error signal, there should also be a regular way for the control system to execute the interruption. This way seems to be to inhibit or cancel the production of the vowel – probably because (i) almost every syllable contains a vowel, and (ii) all vowels can be blocked in the same way. The advantage is that the neuronal network responsible for the interruption does not need to know the particular speaking program that is to be stopped.

There is a second simple interruption mechanism in addition to the inhibition of vowel production, namely the blockage after a word, which inhibits the start of the next word / speaking program. This mechanism is at work in stuttering in word repetitions, but also in interruptions of speech flow due to real speech errors: Levelt (1995) stated that, in certain cases, e.g., in cases of late interruption, words tended not to be broken off (p. 481).
    I think the mechanism that suddenly interrupts speech flow against the speaker’s will is very simple and rough-working. It emanates from the cerebellum (see next footnote) and acts directly on the motor cortex, i.e., via the shortest path to the muscles, without a detour through pre-motor speech planning areas. Possibly, the mechanism developed from an archaic reflex of falling silent and freezing in fear, as known from animals facing danger. An error in an automatic motor sequence always includes a high risk of failure for the whole sequence; therefore, such a (sublimated) reflex makes sense here. (return)
 

The position of stuttering in words

Johnson and Brown (1935) found 92 % of all stuttering events to occur on the initial sound of words, Hahn (1942) 98 %, Sheehan (1974) 96 %, and Taylor (1966) 97 % – all in adults who stutter. In 3- to 4-year-old children who stutter, Egland (1955) found 91 % of all stuttering events in word-initial position, 9 % in medial position, and none in final position. Natke et al. (2004) found that, in preschool children (mean time since onset of stuttering: 9 months), 97.8 % of stuttering events occurred on the first syllables of words, 1.8 % on second syllables, and 0.4 % on further syllables (monosyllabic words included). For an overview, see Bloodstein and Bernstein Ratner (2008) or St. Louis (1979). (return)
 

Stuttering-like disfluencies and other disfluencies

According to the classification system of Johnson et al. (1959), Ambrose and Yairi (1999) define ‘stuttering-like disfluencies’ as part-word repetitions, single-syllable word repetitions, and disrhythmic phonation (prolongations, blocks, and broken words). Interjections, revisions, and multisyllable/phrase repetitions are categorized as ‘other disfluencies’. Whereas most types of stuttering-like disfluencies occur rarely in the speech of normally fluent children, there is a striking overlap regarding single-syllable word repetitions. However, significant differences between stuttering and non-stuttering children were found for this kind of disfluency: Stuttering children produce significantly more single-syllable word repetitions than normally fluent children and show a significantly higher number of iterations (non-stuttering children rarely repeat a word more than once), and stuttering children repeat single-syllable words faster than normally fluent children by producing shorter silent intervals (Ambrose & Yairi, 1995; van Ark et al., 2004; Niermann Throneburg & Yairi, 1994).

These differences suggest that many single-syllable word repetitions of stutterers are stuttering symptoms. From my own experience as a stutterer and from personal observation and communication, I know that even repetitions of two-syllable words sometimes occur in stuttering – a speaker can reliably distinguish between a stutter and a repetition produced to fill a pause. Repetitions of short words are well explicable by the proposed theory: When the speaking program of a phrase is blocked, an iteration of its first word can occur. The transition to the second word is impossible, but the speaker tries to continue, thus the program starts with the first word again and again until the blockage is overcome. (return)
 

“And suddenly, stuttering was gone.” Martina’s report

Martina P. had stuttered since the age of three. At age 50, she was diagnosed with a benign brain tumor on the left side near the cerebellum. The tumor pressed on the optic, the acoustic, and the vestibular nerves and had already grown right up to the brainstem. In June 2012, the tumor was removed. Unfortunately, a brain hemorrhage occurred, and an emergency operation had to be performed, after which the patient was in a coma for some weeks. The cerebellum was damaged by this complication. Martina P. reported: “When I woke up from the coma, something was altered in my speaking. […] Something was missing – the stutter. […] Stuttering remained absent in my speaking. I write ‘in speaking’ because it was still present in my thoughts, that is, I was waiting for the stutter at every word. But nothing came. Stuttering was […] so strongly linked to my emotions that, initially, it was hard for me to ‘accept’ the missing stutter.”

Martina P. has not stuttered from the operation to this day (May 2016). The weakness of cerebellar function manifests itself in a motor impairment: she needs to use a wheeled walker and has not yet been able to return to ten-finger typing, at which she was perfect prior to the surgery. In the initial period after surgery, she also had difficulty speaking distinctly, which has, however, improved considerably in the meantime (Sources: Der Kieselstein, magazine of the German stuttering association BVSS, issue 3/2013, p. 46, and phone conversations in January 2015 and May 2016). (return)
 

Cerebellar activation related to error detection and correction

In a hand movement task, Blakemore, Frith, and Wolpert (2001) obtained results suggesting that the cerebellum is involved in signalling the sensory discrepancy between the predicted and actual sensory consequences of movements. In a motor learning/PET study, van Mier and Petersen (2002) found an area in the left lateral cerebellum showing practice-related decreases of activation, which were most likely related to a decrease in errors; in two of their experiments, a highly significant correlation was found between the decrease in errors and the decrease in left cerebellar activation.

Grafton et al. (2008) investigated the neural correlates of visuomotor tracking with fMRI and found that activity in the cerebellum was strongly correlated with the magnitude of tracking errors and with motor corrections. Cerebellar correlations with perceived errors were also identified in visuomotor tracking requiring eye-hand coordination (Miall, Imamizu, & Miyauchi, 2000; Miall, Reckess, & Imamizu, 2001; Miall & Jenkinson, 2005) and with violations of expected sensory predictions in eye blink conditioning (Ramnani et al., 2000).

Activations in the cerebellar cortex were found for errors in a reaching task (Diedrichsen et al., 2005). Seidler, Noll, and Thiers (2004) and Ogawa, Inui, and Sugio (2006) investigated feedback control; in both studies, the cerebellum was found to be more active under feedback control conditions, supporting a role for the cerebellum in feedback processes of motor control (see also Seidler et al., 2014, for an overview). (return)
 
