May 9, 2021

Spatial alignment between faces and voices improves selective attention to audio-visual speech

<p>The ability to see a talker's face has long been known to improve speech intelligibility in noise. This perceptual benefit depends on approximate temporal alignment between the auditory and visual speech components. However, the practical role that cross-modal spatial alignment plays in integrating audio-visual (AV) speech remains unresolved, particularly when competing talkers are present. In a series of online experiments, we investigated the importance of spatial alignment between corresponding faces and voices using a paradigm that featured both acoustic masking (speech-shaped noise) and attentional demands from a competing talker. Participants selectively attended a Target Talker's speech, then identified a word spoken by the Target Talker. In Exp. 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In Exp. 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance as predicted, but this effect could not fully explain the cost of AV spatial misalignment. Finally, in Exp. 3 and 4, we show that the effect of AV spatial alignment changes with noise level, but this was limited by a floor effect: due to the use of closed-set stimuli, participants were able to perform the task relatively well using lipreading alone. However, comparison between the results of Exp. 1 and Exp. 3 suggests that the cost of AV misalignment is larger at high noise levels. Overall, these results indicate that spatial alignment between corresponding faces and voices is important for AV speech integration in attentionally demanding communication settings.</p>
<p> bioRxiv Subject Collection: Neuroscience</p>
