
Sven Mattys’s Talk at RuCCS: Effects of cognitive load on speech perception

Abstract: Improving the validity of speech-perception models requires an understanding of how speech is processed in everyday life. While listening conditions that degrade the signal (e.g., noise) have been studied extensively, conditions that leave the integrity of the signal intact, such as cognitive load (CL), have been under-studied. Drawing upon behavioral and psychophysical methods, we found that CL increases listeners’ reliance on lexical and higher-order knowledge. However, the data also show that this bias is a cascaded effect of impoverished phonetic processing under CL, not a direct consequence of CL on lexical activation. Findings of elevated auditory thresholds and poorer discrimination of perceived loudness and duration under CL add further support to the case for an early locus of interference. The results not only constrain our understanding of the functional architecture of speech-perception models; they also invite an analysis of sound processing within a cognitive framework in which individual differences can be considered.

Ed Flemming’s Talk at Rutgers Linguistics: A Generative Phonetic Analysis of the Timing of L- Phrase Accents in English

Abstract:
The narrow goal of this research is to develop an analysis of the timing of the English low phrase accent (L-) in H*L-L% and H*L-H% melodies. This is challenging because L- is generally realized as an ‘elbow’ in the F0 trajectory – i.e., a point of inflection rather than a local maximum or minimum – and it is notoriously difficult to locate F0 elbows precisely. I argue that the proper way to locate tonal targets is ‘analysis-by-synthesis’: given an explicit model of the mapping from tonal targets to F0 trajectories, we can infer the location of targets by fitting that model to observed F0 contours. A broader goal is therefore the development of a framework for grammars of tonal phonetics. The proposed model analyzes F0 trajectories as the response of a dynamical system to a control signal consisting of a sequence of step functions connected by linear ramps. Tone realization then involves selecting the control signal that yields the F0 trajectory that best satisfies constraints on the realization of tone targets.
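As an illustration only (not the talk's actual implementation), the control-signal idea can be sketched as a first-order dynamical system driven by a piecewise-linear target signal; all timing and target values below are hypothetical:

```python
import numpy as np

def control_signal(t, breakpoints):
    """Piecewise-linear control signal: step targets joined by linear
    ramps. breakpoints is a list of (time, value) pairs, e.g. an H*
    plateau, a falling ramp, and an L- plateau."""
    times, values = zip(*breakpoints)
    return np.interp(t, times, values)

def f0_response(t, breakpoints, tau=0.05):
    """F0 trajectory as the response of a first-order dynamical system,
    dy/dt = (u(t) - y) / tau, to the control signal u(t). The smoothing
    turns the abrupt end of the fall into an 'elbow' rather than a sharp
    corner, which is why the target's location must be inferred by
    fitting rather than read directly off the contour."""
    u = control_signal(t, breakpoints)
    y = np.empty_like(u)
    y[0] = u[0]
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        y[i] = y[i - 1] + (dt / tau) * (u[i] - y[i - 1])
    return y
```

Fitting such a model to observed contours (the analysis-by-synthesis step) would then amount to searching for the breakpoint times and values that minimize the mismatch between the synthesized and measured F0.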

This model is used to infer the location of L- and to analyze its distribution. Previous analyses have proposed either that L- occurs at a fixed interval after H*, or that it aligns to a landmark such as the end of the accented word or the next stressed syllable. The results support neither hypothesis: L- does not occur at a fixed interval after H*; instead, it tends to occur earlier when the interval between H* and the first stressed syllable in the following word is shorter (e.g., ‘álien anníhilator’ vs. ‘mínimally manéuverable’), but it also does not align to that stressed syllable, or to any other landmark. This pattern of realization is analyzed as a compromise between two constraints: one enforcing a target duration for the fall from H* to L-, and a second, weaker constraint requiring the fall to be completed before the next stressed syllable, to avoid misinterpretation of L- as an L* pitch accent associated with that syllable (cf. Barnes et al. 2010).
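The compromise between the two constraints can be sketched as a weighted cost minimization over candidate L- times; the target fall duration, the weights, and the search grid below are hypothetical illustrations, not values from the talk:

```python
import numpy as np

def l_minus_time(t_hstar, t_stress, target_fall=0.25, w_dur=1.0, w_align=0.3):
    """Pick the L- time (seconds) minimizing a weighted sum of:
    (i) deviation of the H*-to-L- fall from a target duration, and
    (ii) a weaker penalty for any part of the fall extending past the
        next stressed syllable at t_stress."""
    candidates = np.linspace(t_hstar + 0.01, t_hstar + 0.6, 1000)
    cost = (w_dur * ((candidates - t_hstar) - target_fall) ** 2
            + w_align * np.maximum(0.0, candidates - t_stress) ** 2)
    return candidates[np.argmin(cost)]
```

With the stressed syllable far away, the duration constraint wins and L- lands at the target fall duration; with the syllable close, the optimum shifts earlier, mirroring the reported compression, yet it never snaps exactly to the syllable or to any other landmark.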

Dr. Holger Mitterer’s talk: The letters of speech: evidence from perceptual learning and selective adaptation

Abstract:

While every model of visual-word recognition for alphabetic scripts assumes that letters play an important role in mediating between the sensory input and lexical representations, no such clear consensus exists for spoken-word recognition. In this talk, I will provide an overview of recent developments in this unit-of-perception debate. Partly based on results from a perceptual-learning paradigm, there is at least a consensus that some form of intermediate unit is involved. However, the form of this unit is still under debate, with phonological features, articulatory features, allophones, and phonemes as the most prominent contenders. I will argue that the same perceptual-learning paradigm that led to the consensus that some form of intermediate unit is used can also be used to delineate the form of these units, based on patterns of generalization or non-generalization of learning. A number of experiments using a variety of languages (Dutch, Korean, and German) suggest that a grain size similar to the allophone might be a good candidate. This line of research was then questioned by Bowers, Kazanina, and Andermane (Journal of Memory and Language, 2016) on the basis of findings from a selective-adaptation paradigm. However, when properly controlled, selective-adaptation paradigms also support an allophone-sized unit that serves as an intermediary between acoustic input and lexical representations.