New paper showing familiar voices are less effortful to understand
9th December 2025
Our new paper, ‘Voice familiarisation training improves speech intelligibility and reduces listening effort’, has just been published online in Trends in Hearing.
In previous work, we’ve shown that training people to become familiar with new voices improves the intelligibility of those voices. In our new paper, we tested whether becoming familiar with a voice has advantages for making speech less effortful to understand. We used two different ways of assessing effort. First, we asked participants to self-rate the amount of effort they exerted to understand speech when competing speech was present. At the same time, we measured the diameter of their pupils using pupillometry, which is a physiological marker often associated with effort.
Our results showed that both measures were sensitive to familiarity with a voice. People self-rated the sentences as easier to understand, and also showed less pupil dilation, when they were listening to familiar voices compared to unfamiliar voices. Therefore, as well as improving the intelligibility of speech, being familiar with someone’s voice also means we don’t need to exert as much effort to understand what they’re saying.
You can read the full paper at the following link:
Baxter, F., Smith, H., & Holmes, E. (2025). Voice familiarisation training improves speech intelligibility and reduces listening effort. Trends in Hearing. https://doi.org/10.1177/23312165251401318
‘How we hear’ textbook published
18th November 2025
Over the past few years, Emma has been working on an introductory textbook on auditory perception, which has now been published by Oxford University Press—How we Hear: An Introduction to Auditory Perception.
The textbook is designed as a gentle introduction for readers who are new to the field of auditory perception—such as those studying for their first degree, or those at a higher level coming from a different field.
The book is available in print or as a fully interactive e-book. It includes real-world examples and analogies to facilitate students’ understanding. The e-book takes learning to the next level with in-line audio demos, activities, and test-yourself questions.
If this book might be relevant to your teaching, you can order a free inspection copy from the OUP website.
The textbook contains the following chapters:
* How do we hear sounds? (A gentle introduction to sounds and the auditory pathway)
* Perceptual characteristics of sound (including loudness, location, and pitch)
* Perceiving multiple sounds (Perceptual organisation, auditory scene analysis, and how multiple sounds are represented in the brain)
* Perceiving speech (A gentle introduction to the speech signal, how people segment speech and recognise words, voices, prosody, and speech processing in the brain)
* Perceiving speech in noisy places (Types of ‘noise’, bottom-up and top-down factors affecting speech-in-noise perception, and types of attention)
* Perceiving music (Pitch, timing, and timbre in music, perceptual organisation of music, emotion, music and the brain, and individual differences including musical training, absolute pitch, music, and language background)
* Hearing and vision (Multisensory perception including sensory dominance, audio-visual integration, cross-modal plasticity, and synaesthesia)
* Hearing difficulties (Ways of measuring and assessing hearing, types of hearing loss and their consequences, interventions including hearing aids and cochlear implants, tinnitus, and hyperacusis)
We hope you find it useful!
New lab member
10th October 2025
This week, we welcomed Bindiya Patel to the group as a new PhD student. Bindiya has a background in audiology and previously worked at the UCL Ear Institute. Watch this space for some new exciting projects!
New paper showing benefits of online voice training in older adults
30th July 2025
The advance online publication of our new paper, “Computer-based voice familiarization, delivered remotely using an online platform, improves speech intelligibility for older and younger adults”, is now available on the Journal of Experimental Psychology: Applied website.
Our previous research has shown that speech is more intelligible when it is spoken by someone familiar (e.g., a friend or family member), compared with someone unfamiliar. In addition, we’ve previously shown that we can train new voices in the lab to become familiar and produce this intelligibility benefit for newly trained voices. In our new paper, we show that we can successfully train voices using remote, online platforms, in which participants complete voice training on their own computers, in the comfort of their own homes. In this study, we compared groups of older (55–73 years) and younger (18–34 years) participants. Both groups found trained voices more intelligible than unfamiliar voices. Therefore, these findings show that older adults can learn new voices as effectively as younger participants. This is useful for real-world applications of voice familiarisation, which may be particularly appealing to older adults who find it difficult to understand speech in noisy places.
You can read the full paper at the following link:
Zhu, W., & Holmes, E. (2025). Computer-based voice familiarization, delivered remotely using an online platform, improves speech intelligibility for older and younger adults. Journal of Experimental Psychology: Applied. https://doi.org/10.1037/xap0000536
Deadline extended for fully-funded PhD studentship
29th April 2025
The deadline has been extended for the fully-funded PhD opportunity in the Cognitive Hearing Lab! The application deadline is now Wednesday 7th May 2025.
The PhD project is funded by RNID, and covers tuition fees plus a ~£20k per year stipend. Due to the funding, the position is only open to UK students and is not open to students who would pay international tuition fees. See the PhD studentship advert (document download, 48.0 KB) for more details and instructions on how to apply.
If you would like to discuss the position further, feel free to get in touch with Emma (emma.holmes@ucl.ac.uk).
New paper on figure-ground perception
7th March 2025
Xiaoxuan Guo led a paper that was published this week in Proceedings of the Royal Society B. This paper extends our previous work showing that figure-ground perception—a measure of non-linguistic auditory grouping ability—predicts speech-in-noise perception (Holmes & Griffiths, 2019).
In our previous work, we examined static figures and dynamic figures that changed frequency according to the formants in spoken sentences (Holmes & Griffiths, 2019). In this new paper, by contrast, we used static figures and dynamic figures that changed frequency according to the fundamental frequency of spoken sentences. In addition, the paper compared lower-frequency with higher-frequency figures. The results demonstrated relationships between all figure-ground measures and speech-in-noise perception, for both words-in-babble and sentences-in-babble. Thus, this paper solidifies the role of static and dynamic grouping processes in everyday speech perception.
You can read the full paper here:
Guo, X., Benzaquén, E., Sedley, W., Brühl, I., Holmes, E., Berger, J. I., Rushton, S., & Griffiths, T. D. (2025). Predicting speech-in-noise ability with static and dynamic auditory figure-ground analysis using structural equation modelling. Proceedings of the Royal Society B, 292(2042). https://doi.org/10.1098/rspb.2024.2503
ARO conference
25th February 2025
Emma, Harriet and Elin are currently at the ARO conference in Florida. Harriet and Elin will both be giving poster presentations this afternoon:
- [T149] How Does Hearing Loss Affect Cognitive Influences on Speech-In-Speech Perception?
- [T155] The Role of Pitch Variability in Recognition and Intelligibility of Trained Voices
If you’re at the conference, come and talk to them.
Funded PhD opportunity in the Cognitive Hearing Lab
27th January 2025
We’re currently advertising a funded PhD opportunity for someone who is interested in researching how auditory training can improve speech understanding in noisy environments for older adults who have age-related hearing loss. The project uses behaviour and pupillometry and is funded by RNID. The PhD will start in September 2025 and the deadline for applications is 28th April. Due to the funding, the position is only open to UK students and is not open to students who would pay international tuition fees.
The PhD studentship advert (document download, 48.0 KB) provides more details and instructions on how to apply.
Feel free to get in touch with Emma (emma.holmes@ucl.ac.uk) if you have any questions about the position.
Talk at Cambridge Cognition and Brain Sciences Unit
24th January 2025
This week, Emma visited the Cognition and Brain Sciences Unit (CBU) in Cambridge to give a Chaucer Club talk and meet with colleagues. She enjoyed discussing our research and hearing about what others are working on at the CBU.
Speech in noise conference
10th January 2025
Emma and Rebecca have just been to the 2025 Speech in Noise conference in Lancaster, which was held on 9th and 10th January.
Rebecca presented a poster earlier today on ‘Development of a questionnaire to measure social participation in noisy environments for people aged 60+ with hearing loss’, which was very well attended.
In addition, Rongru Chen, a PhD student at UCL who collaborates with the group, also presented a poster entitled, ‘Listening effort is reduced with rapid adaptation to noise-vocoded speech under full and divided attention: evidence from pupil dilation and subjective rating’.
They are looking forward to disseminating their key take-aways from the conference to the rest of the Cognitive Hearing Lab next week.
New paper, just in time for Christmas
2nd January 2025
Check out our new paper in JASA Express Letters, led by Xiaoxuan Guo, which introduces a ‘British version of the Iowa Test of Consonant Perception’. The paper was published on 24th December 2024.
The paper presents a new speech corpus, named the ITCP-B, and reports validity measures. The ITCP-B demonstrated excellent test-retest reliability and cross-talker validity, and good convergent validity. We anticipate that the ITCP-B might help to facilitate studies that seek to compare or combine results from US and UK participants.
Here’s a link to the full paper:
Guo, X., Benzaquén, E., Holmes, E., Choi, I., McMurray, B., Bamiou, D., Berger, J. I., & Griffiths, T. D. (2024). British version of the Iowa Test of Consonant Perception. JASA Express Letters, 4, 124402. https://doi.org/10.1121/10.0034738
A desktop application for running the ITCP-B test and the MATLAB scripts for constructing your own versions of ITCP-B are both freely available on the Open Science Framework.
RNID celebration event
26th November 2024
This evening, RNID (the Royal National Institute for Deaf People) have been celebrating an impressive 25-year milestone of funding research into hearing. As part of their celebration event at the Royal Society, Emma was delighted to give an invited talk about how their funding has contributed to her career as a researcher. Emma’s first funding from RNID was a travel grant to attend one of the largest international conferences in hearing research during her PhD. She later received a fellowship from RNID that supported her transition from working as a postdoc to becoming a member of faculty at UCL.
As part of the event, Emma got to hear about the experience that Max Barker has had living with tinnitus, and about RNID’s partnership with the BioIndustry Association. She also enjoyed talking with RNID’s supporters about their experience of hearing loss. Our research on listening in noisy places seemed to strike a chord with many of them!
If you’re interested in finding out more about the impressive hearing research that RNID has funded, you can read RNID’s 25-year impact report. Looking forward to the future, we’re excited to see what’s in store for the next 25 years of hearing research, and to have the opportunity to contribute to future breakthroughs that help improve the lives of people with hearing loss.
New paper on spatial attention in older adults
1st November 2024
Our new paper, ‘Spatial selective auditory attention is preserved in older age but is degraded by peripheral hearing loss’, has been published in Scientific Reports.
When listening to speech in a noisy place, we know that listeners without hearing loss use knowledge of the location of a talker to help them attend to a voice of interest. While early-onset hearing loss seems to affect voluntary spatial attention (see Holmes et al., 2017), here we examined how age-related hearing loss affects these processes in adults aged 55 and above.
We recruited older and younger participants with natural variability in hearing thresholds. They were cued to report sentences from a target location (left/right) while they heard competing speech at other locations.
We found that older and younger groups gained a similar performance benefit from advance information about the talker location. Thus, ageing by itself does not affect voluntary spatial attention in this task.
However, the benefit progressively reduced with greater age-related hearing loss. In other words, older participants with hearing loss used knowledge of location to a lesser extent than those without hearing loss. The effects of hearing loss were graded, such that the benefit was lower even in older adults who would not meet clinical criteria for hearing loss. Thus, changes to spatial attention could account for challenges listening in noisy places for many people.
Interestingly, these changes did not correlate with spatial acuity, so may be due to changes to top-down processes that are unrelated to spatial acuity. We discuss some possible mechanisms in the paper.
Here’s a link to the full paper:
Caso, A., Griffiths, T.D. & Holmes, E. (2024). Spatial selective auditory attention is preserved in older age but is degraded by peripheral hearing loss. Scientific Reports, 14, 26243. https://doi.org/10.1038/s41598-024-77102-5
Upcoming presentations at Auditory Science Meeting
23rd September 2024
On Thursday and Friday, Emma, Harriet and Elin will be attending the UK Auditory Science Meeting in Cambridge. Harriet Smith will be giving a talk on some of our recent work on “The differential role of pitch variability in recognition and intelligibility of trained voices” on Thursday at 4pm. On Friday, Elin Bonyadi will be presenting a poster on her PhD work on how hearing loss affects cognitive influences on speech-in-speech perception (poster number 24). We hope to see you there!
International workshop on Active Inference (IWAI)
9th September 2024
Emma is in Oxford for the 2024 International workshop on Active Inference (IWAI). It’s great to see so many people interested in Active Inference in the same room. The workshop has kicked off with some excellent tutorials on discrete and continuous Active Inference. Emma will be giving the first keynote talk this afternoon on our work modelling selective attention in challenging listening environments. We heard that there was a long waiting list for this year’s conference, so you’d better get in quick if you’re interested in attending next year’s conference!
Talk at VoiceID conference
30th August 2024
Over the past few days, Emma has been at the international VoiceID conference in Marburg, Germany. Despite a battle to get there with the German trains, she has found it to be an excellent conference. The conference was well-attended and covered a variety of topics on voice identity. Earlier today, Emma gave a talk on our work about how voice familiarity affects speech intelligibility in challenging listening environments. During the conference, Emma particularly enjoyed the discussions with other conference attendees and reflecting upon theories of speech and voice perception.
Virtual Conference on Computational Audiology (VCCA)
17th June 2024
Later this week, members of the group will be attending the 2024 Virtual Conference on Computational Audiology. The two main themes of the conference are AI and hearing and the changing landscape of audiology. Emma is chairing a session on cognitive and neuro-audiology and, within the session, she will also be speaking about our work on how hearing loss across the lifespan affects selective attention to speech. The conference is free to attend, and talks will be available after the conference. Come along and join us!
ASA Talk
17th May 2024
Emma has been to the Acoustical Society of America conference in Ottawa to give a talk. As a first-time ASA attendee, she found it great to see the breadth of research and meet some new (as well as some familiar) people. Her personal highlight was the symposium on ‘Interactions between voice and speech perception’, which she helped to organise with Etienne Gaudrain and Jens Kreitewolf. There were so many great talks in the session and it was also really interesting to chat about our work with those who attended. She also enjoyed walking around the afternoon poster session.
EPS Talk
10th April 2024
Emma has been at this year’s Experimental Psychology Society (EPS) meeting in Nottingham, giving a talk at the symposium accompanying Nadine Lavan’s EPS Prize Lecture. The symposium focuses on voice perception (and also includes some work on face perception), so Emma was speaking about some of our work on how voice familiarity affects speech intelligibility. She has enjoyed hearing talks about new research on audio-visual integration from researchers at Nottingham Trent University and beyond.
ARO Conference
5th February 2024
Emma is currently in LA for the ARO annual conference, and she’s been enjoying the meeting so far. This year, she has particularly appreciated the opportunity to reconnect with colleagues and have discussions about ongoing projects. Today, Emma is presenting ‘Voice Familiarisation Delivered Online Improves Speech Intelligibility for Older and Younger Adults’ (M118), a project led by a previous MSc student, Wansu Zhu. Feel free to come and say hello to Emma if you’re at the ARO meeting too.
New lab members
20th October 2023
It’s been a great month for the lab—we have 3 new lab members!
Harriet Smith has joined the lab as a postdoc, after completing her PhD with Matt Davis at the Cognition and Brain Sciences Unit (CBU) at the University of Cambridge.
Elin Bonyadi and Rebecca Bright have joined as new PhD students. Elin has a background in cognitive neuroscience and Rebecca has a background in speech and language therapy.
We’re looking forward to some exciting projects ahead!
Postdoc advert now live!
5th June 2023
We’re pleased to announce that the advert to join the lab as a postdoc is now live too! This is a position on a new Wellcome-funded grant, which seeks to examine how central cognitive pathways interact with hearing loss. The appointed candidate will lead a series of experiments that examine how young adults with and without mild-to-moderate hearing loss use auditory cognition to understand speech in noisy settings. In particular, the appointed candidate will lead work using 7-Tesla MRI, which will allow us to estimate laminar-specific cortical responses in humans. The post is available for 3 years in the first instance, with the possibility of an extension. The preferred start date is September 2023 and the deadline for applications is 23rd June.
The following link provides more details and instructions on how to apply:
Feel free to get in touch with Emma (emma.holmes@ucl.ac.uk) if you have any questions about the position.
RA/PhD advert now live!
23rd May 2023
We’re currently advertising a funded PhD opportunity for someone who is interested in researching how central cognitive pathways interact with hearing loss, using behaviour and pupillometry. The preferred start date is 25th September 2023 and the deadline for applications is 8th June. The position is funded by a new Wellcome grant.
The following link provides more details and instructions on how to apply:
Feel free to get in touch with Emma (emma.holmes@ucl.ac.uk) if you have any questions about the position.
New Wellcome funding
1st May 2023
Emma has been awarded a Wellcome Career Development Award to investigate how cognition interacts with hearing loss during speech perception. The grant will run for 8 years and will combine measures of behaviour and brain responses with computational modelling. She’ll soon be advertising for postdoc and PhD positions with start dates in September 2023: Watch this space!
New modelling paper
23rd April 2023
Check out our new paper in Neuropsychologia on ‘Cognitive effort and active inference’, led by Thomas Parr.
This paper offers a formalisation of ‘cognitive effort’ under the active inference framework. In this work, effort is formulated as a deviation from prior beliefs about a mental (covert) action—in other words, effort is exerted to overcome a mental habit. To illustrate this, Thomas developed a model of the visual Stroop task. The idea is that, in the Stroop task, participants must suppress the impulse to read a colour word and instead report the colour in which the word is printed. The Stroop task is characteristically effortful. In addition to reproducing the basic Stroop effect, our simulations also produced behaviour consistent with the established congruency sequence effect and the speed-accuracy trade-off that is ubiquitous in the cognitive control literature.
You can find the paper here:
Parr, T., Holmes, E., Friston, K. J., & Pezzulo, G. (2023). Cognitive effort and active inference. Neuropsychologia, 184, 108562. https://doi.org/10.1016/j.neuropsychologia.2023.108562
New paper on voice familiarity
21st March 2023
Our new paper, ‘Intelligibility benefit for familiar voices is not accompanied by better discrimination of fundamental frequency or vocal tract length’ has been published in Hearing Research.
Most theories of speech perception predict better discrimination of voice attributes for familiar than unfamiliar voices. In this study, we sought to test this prediction, and whether better discrimination underlies the familiar-voice benefit to speech intelligibility.
We recruited pairs of friends who were naturally familiar with each other’s voices, and recorded them speaking various sentences. We then measured their discrimination thresholds for modifications to the mean pitch (correlate of F0) and formant spacing ratio (correlate of VTL)—both for their friend’s voice and for two unfamiliar voices (belonging to the friends of other participants in the study). We also asked them to report sentences from their friend and the unfamiliar talkers when a competing talker was present, both for the original sentences and also when we manipulated the pitch and formant spacing ratio to the participant’s 90% discrimination thresholds.
As expected, familiar voices were more intelligible than unfamiliar voices. Interestingly, the familiar-voice intelligibility benefit was just as large following perceptible manipulations to pitch and VTL-timbre as for the unmodified sentences. We also found that discrimination thresholds were no smaller for familiar voices than for unfamiliar voices. Also, discrimination thresholds didn’t correlate with the intelligibility benefit across participants. Based on our results, it seems unlikely that better representations of pitch or VTL-timbre underlie the familiar-voice benefit to intelligibility. The results are more consistent with cognitive accounts than traditional accounts that predict better discrimination.
Here’s a link to the full paper:
Holmes, E. & Johnsrude, I. S. (2023). Intelligibility benefit for familiar voices is not accompanied by better discrimination of fundamental frequency or vocal tract length. Hearing Research, 429, 108704. https://doi.org/10.1016/j.heares.2023.108704
ARO
12th February 2023
Emma is currently in Florida for the ARO annual conference, and the sun is shining! Today, Emma is presenting ‘Relationships Between Hearing, Cognition and Social Activity for Older Adults without Dementia’ (SU118), by talented MRes student Rebecca Bright, and ‘How do MEG signatures of spatial attention vary among older adults?’ (SU6), which is part of an RNID-funded project. Emma is also on the Young Investigators Panel on Tuesday, where you can ask all of your pressing career-related questions.
New paper on pitch discrimination
1st July 2022
In a new paper—just published in JASA—we asked if natural familiarity for particular timbres improves pitch discrimination for sounds with those timbres. This one has been a long time coming, so it’s good to see it out!
We measured pitch discrimination thresholds for flute tones, violin tones, trumpet tones, and artificial flat-spectrum complex tones. We tested non-musicians and musicians who were trained to play each of the three instruments. If familiarity with timbre improves pitch discrimination, we should have found the best performance for natural instrument timbres. Instead, we found musicians had better thresholds for artificial flat-spectrum complex tones (and no difference among timbres for non-musicians).
Separating the musician group into those trained on different instruments, we found a significant interaction between group and timbre, but not in the expected direction… For all three musician groups, thresholds were no better for the timbre the musician was trained to produce than for other timbres. Thus, even extensive experience listening to—and learning to produce—sounds of a particular timbre doesn’t appear to improve pitch thresholds. Instead, the interaction reflected better thresholds for artificial flat-spectrum complex tones in flautists and trumpeters, but not in violinists (who showed a non-significant trend in the same direction).
Our results imply that acoustics affect pitch discrimination more than does familiarity with particular timbres. Perhaps, familiarity with particular timbres helps people to perform other tasks, but our results imply it doesn’t help with pitch discrimination.
You can read the paper here:
Holmes, E., Kinghorn, E., McGarry, L., Busari, E., Griffiths, T. D., & Johnsrude, I. S. (2022). Pitch discrimination is better for synthetic timbre than natural musical instrument timbres, despite familiarity. JASA, 152, 31–42. https://doi.org/10.1121/10.0011918
This week’s talks
27th June 2022
After a long journey (that ended up re-routed via Finland), Emma has arrived in Leipzig for the IMPRS NeuroCom Summer School. She’s looking forward to meeting everyone at the MPI and giving her talk tomorrow in a session on models of cognition.
On Thursday, Emma is giving a talk at the 3rd Virtual Conference on Computational Audiology. Even though the conference is fully online and she won’t be meeting colleagues in person, she’s excited to discuss predictive coding in a session with Bernhard Englitz and Floris de Lange. The conference is free to register—check out the program on the VCCA website.
RNID staff summit
10th March 2022
Emma has been presenting an update on our research at the RNID staff summit, and talking to a variety of people. The summit was held at High Leigh Conference Centre in Hoddesdon, which turned out to be a very scenic (and sunny) location.
APS Rising Star award
23rd February 2022
Emma has been named as a Rising Star by the Association for Psychological Science. Thanks to everyone involved and congratulations to the other Rising Stars.
ARO Young Investigator award
5th February 2022
It’s the first day of the annual ARO conference and, once again, the conference is taking place online this year. Emma will be attending and she’s looking forward to the presidential symposium this morning, and also to speaking about her work over the next few days. Tomorrow, Emma will be presenting some new results from an RNID-funded project, which examines how age and audiometric thresholds affect spatial attention. On Monday, she’ll be receiving the 2022 Geraldine Dietz Fox Young Investigator Award and giving a short overview of her previous research in the awards ceremony. On Tuesday, she’ll be presenting a study showing that pitch discrimination is better for synthetic timbre than natural musical instrument timbres, despite familiarity (which is available as a preprint on PsyArXiv). Hopefully, we’ll all be able to meet in person again in Florida in 2023!
EPS Small Grant
25th January 2022
Today, Emma found out that her application for an EPS Small Grant was successful and will be funded! We’ll be starting the project later this year. The project will delve deeper into the benefits of training a voice to become familiar, following on from some research she did in collaboration with Ingrid Johnsrude (Holmes, To & Johnsrude, 2021). Watch this space!
New paper on modelling selective attention during cocktail party listening
10th November 2021
Our new paper, “Active inference, selective attention, and the cocktail party problem”, has just come out in Neuroscience & Biobehavioral Reviews. We aimed to model selective attention during a simplified cocktail party paradigm, in which a listener hears two voices speaking pairs of words and is cued to listen to the voice on their left or right side.
We first created a generative model under which a synthetic agent could perform the task accurately. We treated cocktail party listening as a Bayesian inference problem, based on active inference. Equipped with an appropriate generative model, our Bayesian agent scored 100% correct on the task. Next, we created ‘synthetic lesions’ in the generative model (by changing the precision in different parts of the model), and showed that the synthetic agent made human-like errors following some lesions, and not others. Specifically, we found that the sort of errors exhibited by human listeners occur when precision for words on the non-cued side is only marginally lower than the precision for words on the attended side—in which case, words from the unattended side can ‘break through’. This means that, for these types of errors to occur, attention doesn’t necessarily need to be misallocated (i.e., accidentally attending to the non-cued location)—but, instead, it’s simply not allocated ‘strongly’ enough (i.e., with sufficiently high precision).
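The precision-lesion idea can be illustrated with a toy sketch. To be clear, this is not the paper’s actual generative model (which is a full active inference scheme); it simply treats the precision allocated to each side as a weight in a softmax over which stream a reported word comes from, to show why a marginal precision gap lets unattended words ‘break through’:

```python
import math

def report_probabilities(cued_precision, uncued_precision):
    """Toy illustration (not the paper's model): probability of reporting
    a word from the cued vs. non-cued stream, given the precision
    allocated to each, via a softmax over the two precisions."""
    exp_w = [math.exp(cued_precision), math.exp(uncued_precision)]
    total = sum(exp_w)
    return [v / total for v in exp_w]

# Strong attention (large precision gap): cued words dominate the report.
strong = report_probabilities(4.0, 1.0)

# Marginal precision gap: words from the non-cued side 'break through'.
marginal = report_probabilities(2.0, 1.8)
```

With a large precision gap, the cued stream is reported almost exclusively; when the gap is only marginal, the non-cued stream is reported on a substantial fraction of trials, mirroring the human-like errors described above without any outright misallocation of attention.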
We then adjusted the model to investigate hypotheses about preparatory attention. Specifically, we aimed to examine two findings from empirical studies: first, reaction times for reporting words on the cued side improve when the cue is presented further in advance; second, spatial cueing is associated with a ramping of EEG activity, resembling the contingent negative variation (CNV), before the speech begins. We tested different hypotheses for these effects and, in brief, we found that the two sets of findings—which, on the surface, appear to be strongly linked—were in fact simulated through different manipulations to the model. Time-dependent changes in precision weren’t needed to explain faster RTs with longer preparatory intervals, but they were necessary to explain the ramping of EEG activity. Therefore, these two sets of findings may be underpinned by distinct processes.
We hope that this model will be useful for modelling selective attention in future work. It generates quantitative predictions for both behaviour and neural responses, and could be modified for a variety of different purposes.
If you’d like to find out more about this work, the paper is available here:
Holmes, E., Parr, T., Griffiths, T. D., & Friston, K. J. (2021). Active inference, selective attention, and the cocktail party problem. Neuroscience and Biobehavioral Reviews, 131, 1288–1304. https://doi.org/10.1016/j.neubiorev.2021.09.038
Talk at SCAN
29th October 2021
The Symposium on Cognitive Auditory Neuroscience (SCAN) was originally intended to take place in Pittsburgh in 2020—but COVID-19 had other ideas for 2020! The organisers, therefore, split the planned day into four smaller sessions, to be interspersed and held online throughout 2021. Three of the sessions have already taken place: Emma attended them all and thoroughly enjoyed every session. She was honoured to be invited to speak at the final session, which will take place later today. In her talk, Emma will be discussing theories of speech perception and how they account for—or, in many cases, fail to account for—our results showing how voice familiarity improves speech intelligibility. The sessions are free to register for, and there’s an Early Career discussion a week today. Hope to see you there!
New paper on melodic predictability
30th August 2021
Check out David Quiroga-Martinez’s new paper on how our brains respond to melodic deviations while listening to simple melodies. He conducted the DCM analyses while he was visiting us at UCL.
You can read the paper here:
Quiroga-Martinez, D. R., Hansen, N. C., Højlund, A., Pearce, M., Brattico, E., Holmes, E., Friston, K., & Vuust, P. (2021). Musicianship and melodic predictability enhance neural gain in auditory cortex during pitch deviance detection. Human Brain Mapping. https://doi.org/10.1002/hbm.25638
Moved to SHaPS UCL
23rd August 2021
Today is Emma’s first day as a lecturer in the Department of Speech, Hearing and Phonetic Sciences at UCL. She’s just moved 10 minutes down the road from Queen’s Square to her new office in Chandler House. She’s looking forward to her new role, and building her group here.