The dynamics of phonological planning
This dissertation proposes a dynamical computational model of the timecourse of phonological parameter setting. In the model, phonological representations embrace phonetic detail, with phonetic parameters represented as activation fields that evolve over time and determine the specific parameter settings of a planned utterance. Existing models of speech production assign little or no role to phonological features, and theories of phonological features lack the notion of the timecourse of how those features get set. One benefit of the model presented here is that it provides a formal link between speech perception and production, which has been notably missing in the literature despite a longstanding debate on the topic (cf. Diehl, Lotto, & Holt, 2004). This dissertation capitalizes on the convergence of novel experimental and computational results to identify specific requirements of any model of the perception-production link, including a role for representations at the level of phonological features and the computational principles of both excitation and inhibition.
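To make the field dynamics concrete, the following is a minimal sketch of an Amari-style dynamic neural field of the general kind described above: activation over a single phonetic parameter dimension evolves under input, local excitation, and lateral inhibition until a self-stabilized peak forms, whose location is read out as the planned parameter setting. The function names, parameter values, and the specific interaction kernel are illustrative assumptions, not the dissertation's actual implementation.

```python
# Illustrative sketch (assumed parameters, not the dissertation's code):
# an activation field u(x, t) over one phonetic parameter dimension, evolving as
#   tau * du/dt = -u + h + s(x) + integral of w(x - x') f(u(x')) dx'
# where the kernel w combines local excitation with lateral inhibition.
import numpy as np

def sigmoid(u, beta=4.0):
    """Soft threshold mapping activation to output."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def kernel(d, exc_width=0.5, inh=0.3):
    """Assumed interaction form: narrow excitation minus flat inhibition."""
    return np.exp(-d**2 / (2 * exc_width**2)) - inh

def simulate(stim_fn, n=101, steps=400, dt=0.05, tau=1.0, h=-2.0):
    x = np.linspace(-5.0, 5.0, n)
    dx = x[1] - x[0]
    s = stim_fn(x)                        # input to the planning field
    W = kernel(x[:, None] - x[None, :])   # n-by-n interaction matrix
    u = np.full(n, h)                     # field starts at resting level h
    for _ in range(steps):
        u += (dt / tau) * (-u + h + s + W @ sigmoid(u) * dx)
    return x, u

# Input centered on the intended parameter value; the field settles into a
# peak there, which fixes the parameter setting of the planned utterance.
x, u = simulate(lambda x: 3.0 * np.exp(-(x - 1.0)**2 / 0.5))
print("parameter value at field peak:", round(x[np.argmax(u)], 2))
```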
Another benefit of this dynamical model is that it enables establishing formal links between phonological processes and response time data. The model accounts for response times in a task in which speakers hear distractors as they are preparing to produce utterances. Previous studies using this task (e.g., Galantucci, Fowler, & Goldstein, 2009) have found that subjects produce an utterance more quickly when they perceive a distractor that is identical to a response being planned than when it is different. The perception-production link is modeled here as the influence of a perceived distractor on the process of setting the phonological production parameters of a required utterance. Response time modulations are due to the effects of combining (in)compatible inputs to this planning process. The model predicts gradient effects on response times based on the degree of similarity between a distractor and a response, with responses being quickest when they are identical, slower when they differ on one parameter (voicing or articulator), and slower still when they differ on more than one parameter. These predictions are confirmed in two experiments that provide the first clear evidence of perceptuo-motor effects of voicing and articulator.
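As a transparent reduction of how combining (in)compatible inputs could yield graded response times, the sketch below treats the planning process as a leaky accumulator whose input gains excitation for each parameter the distractor shares with the planned response (voicing, articulator) and inhibition for each mismatch; response time is read as the time to reach a selection threshold. The gain and threshold values are assumptions chosen only to display the predicted ordering, not fitted quantities from the experiments.

```python
# Illustrative reduction (assumed parameters, not the dissertation's model):
# a leaky accumulator tau * du/dt = -u + I, where the input I combines the
# planned response with the perceived distractor. Each shared parameter
# (voicing, articulator) adds excitation; each mismatched one adds inhibition.
def planning_input(shared, mismatched, base=1.0, exc=0.2, inh=0.15):
    return base + exc * shared - inh * mismatched

def response_time(I, tau=1.0, threshold=0.6, dt=0.001, t_max=10.0):
    """Time for activation to rise from rest to the selection threshold."""
    u, t = 0.0, 0.0
    while u < threshold and t < t_max:
        u += (dt / tau) * (-u + I)
        t += dt
    return t

# Two binary parameters: an identical distractor shares both, a one-parameter
# mismatch shares one, and a two-parameter mismatch shares none. Larger net
# input drives the accumulator to threshold sooner, giving the graded pattern.
for label, shared in [("identical", 2), ("differ on one", 1), ("differ on two", 0)]:
    rt = response_time(planning_input(shared, 2 - shared))
    print(f"{label}: RT = {rt:.2f}")
```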