
TALKS
2025-2026
The group addresses questions such as: What is the difference between a posed and a spontaneous expression? How fast can we perceive a face or an emotional expression? What strategies do radiologists employ to detect breast cancer, and is this skill trainable? How do clinical conditions such as depression and autism affect face recognition? To address these questions, researchers in the collaborative employ a variety of empirical techniques, including psychophysics, cognitive experiments, eye tracking, neural imaging (fMRI, EEG), and computational modeling.
Catch up on the 2025-26 DMC talk season here!
​
___________________________________________________________________________________
September 10th, 2025
​
Speaker: Oscar Solis (Evans Lab)
Title: Neural Correlates of Visual Complexity and Image Memorability​
​
Abstract: Humans have a remarkable ability to remember thousands of images (Standing, 1973). Not only does the semantic category depicted in an image contribute to this ability, but so do the idiosyncratic details that make exemplars of the same category distinct from each other (Evans & Baddeley, 2018). Previous work has shown that images perceived as more complex tend to be more memorable (Kyle-Davidson et al., 2025), suggesting that visual complexity could contribute to the detail component of visual long-term memory. Here I present our work to date investigating the neural correlates of this relationship using the Natural Scenes Dataset (Allen et al., 2022). This dataset consists of high-resolution functional magnetic resonance imaging (fMRI) scans from 8 participants who viewed thousands of photorealistic images during a continuous recognition memory task. We have run whole-brain correlational analyses between single-trial betas and metrics derived from these images: complexity and memorability scores generated from human data (still to come), as well as predictions made by a range of computational models. We hypothesize that activity in the medial temporal lobe and ventral visual areas will correlate with memorability scores (Bainbridge et al., 2017). We also predict that activity in early visual cortex and higher visual areas, such as the parahippocampal place area, will correlate with complexity scores (Zhou et al., 2023). By inspecting the overlap between the correlation maps for these two features, we will identify possible neural substrates underlying the behavioural relationship between complexity and memorability.
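
For readers curious what a whole-brain correlational analysis of this kind looks like in code, here is a minimal sketch (not the lab's actual pipeline): it assumes `betas` is a trials-by-voxels array of single-trial estimates and that `complexity_scores` and `memorability_scores` are hypothetical per-image metrics aligned to trials.

```python
import numpy as np
from scipy import stats

def voxelwise_correlation(betas, scores):
    """Correlate an image-level metric with single-trial responses at every voxel.

    betas  : (n_trials, n_voxels) array of single-trial beta estimates
    scores : (n_trials,) per-image metric (e.g. a complexity or
             memorability prediction for the image shown on each trial)
    Returns per-voxel Pearson r and uncorrected two-tailed p-values.
    """
    # z-score across trials so a dot product divided by n equals Pearson r
    b = stats.zscore(betas, axis=0)
    s = stats.zscore(scores)
    n = len(s)
    r = b.T @ s / n
    # p-values from the t-distribution with n-2 degrees of freedom
    t = r * np.sqrt((n - 2) / np.maximum(1 - r**2, 1e-12))
    p = 2 * stats.t.sf(np.abs(t), df=n - 2)
    return r, p

# Overlap of the two thresholded maps, in the spirit of the abstract
# (complexity_scores and memorability_scores are hypothetical inputs):
# r_c, p_c = voxelwise_correlation(betas, complexity_scores)
# r_m, p_m = voxelwise_correlation(betas, memorability_scores)
# overlap = (p_c < 0.05) & (p_m < 0.05)
```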
​​
Link to Zoom recording of Oscar's Talk
​
​​
___________________________________________________________________________________
October 8th, 2025
​
Speaker: Quoc Vuong
Title: A hierarchical model for affect judgments of natural images: Exploring the role of local, global and category visual features​
​
Abstract: Natural images capture real-world scenarios, portraying people and objects situated in a scene. Importantly, observers can experience emotional responses, and show corresponding neural and physiological changes, when looking at images. These responses can be characterised along an arousal (activation) dimension and a valence (pleasantness) dimension, often referred to as the circumplex model. Images can thus serve as strong affective stimuli that modulate arousal and valence levels in observers. The human visual system has a hierarchical structure that is very effective for many visual tasks, including affect judgments. Here I present a (preliminary) deep neural network and some pilot data to explore: (1) what visual features related to visual categories (objects, faces and scenes) can predict human arousal/valence ratings; and (2) how “global” valence ratings of full images relate to “local” valence ratings of image regions. Finally, I discuss the need to extend these models to other naturalistic stimuli such as sounds and videos.
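
As a rough illustration of this kind of model (a sketch only, not the speaker's actual network), the snippet below bolts an arousal/valence regression head onto a generic backbone and scores a full image "globally" alongside a grid of "local" regions; the ResNet backbone, 2x2 crop grid, and linear head are all stand-in assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class AffectNet(nn.Module):
    """Toy hierarchy: convolutional backbone features -> arousal/valence head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights in practice
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.head = nn.Linear(512, 2)  # outputs: [arousal, valence]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = AffectNet().eval()

img = torch.rand(1, 3, 224, 224)  # placeholder for a natural image
with torch.no_grad():
    global_av = model(img)  # "global" arousal/valence for the full image

    # "local" ratings: score a 2x2 grid of image regions separately
    crops = img.unfold(2, 112, 112).unfold(3, 112, 112)
    crops = crops.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, 112, 112)
    local_av = model(F.interpolate(crops, size=224))  # one row per region
```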
​​
Link to Zoom recording of Quoc's Talk
___________________________________________________________________________________
​​
November 12th, 2025
​
Speaker: Ivette Colón (Rogers Lab)
Title: Neural representations of faces are widely distributed and highly individualized​
Abstract: As many in the DMC know, there is a historical assumption that our brains are a modular system, in which discrete areas handle particular aspects of processing; perhaps the most famous example is the FFA (fusiform face area) for face processing. While there is no doubt that certain areas are involved in certain processes, there is increasing evidence that other parts of the brain carry information that contributes to those same processes. In this talk, I will present a (very) recently completed study in which we gather neural representations for a set of highly controlled faces, places, and objects from 20 people in a slow event-related fMRI design. Then, using a multivariate decoding technique called Iterated LASSO, we find, separately for each participant and across two scans per person, anatomically distributed patterns of voxels that carry reliable signals about whether a stimulus is a face, place, or object. Finally, we use these signal-carrying voxels as candidate stimulation areas in a transcranial magnetic stimulation (TMS) task, which allows us to examine whether brain areas outside of canonical face processing areas causally contribute to participants’ face processing. I will show that a) information about faces is encoded throughout the cortex, b) exactly where varies across individuals, c) these differences are stable over time, and d) stimulation of these areas produces behavioral patterns similar to stimulation of established face processing areas. Together, these results suggest that neural representations of faces may be more widely distributed and individualized than previously thought.
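
To give a flavour of what an iterated-LASSO-style selection loop might look like, here is a schematic sketch under stated assumptions, not the lab's implementation: `X` is a hypothetical trials-by-voxels matrix, `y` holds face/place/object labels, and the above-chance stopping rule is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def iterated_lasso(X, y, chance=1/3, n_folds=5, max_iters=50):
    """Greedy 'iterated LASSO': fit a sparse decoder, bank the selected
    voxels, remove them, and repeat while the remaining voxels still
    decode the stimulus category above chance.

    X : (n_trials, n_voxels) response patterns
    y : (n_trials,) labels (face / place / object)
    Returns a list of voxel-index arrays, one per selection round.
    """
    remaining = np.arange(X.shape[1])
    selected_sets = []
    for _ in range(max_iters):
        if remaining.size == 0:
            break
        clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
        acc = cross_val_score(clf, X[:, remaining], y, cv=n_folds).mean()
        if acc <= chance:
            break  # leftover voxels no longer decode category above chance
        clf.fit(X[:, remaining], y)
        nonzero = np.unique(np.nonzero(clf.coef_)[1])  # voxels selected this round
        if nonzero.size == 0:
            break
        selected_sets.append(remaining[nonzero])
        remaining = np.delete(remaining, nonzero)
    return selected_sets
```

Repeating the fit after removing each round's voxels is what lets the method reveal signal that extends beyond the single sparse solution a one-shot LASSO would return.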
​​
Link to Zoom recording of Ivette's Talk
___________________________________________________________________________________
​
December 10th, 2025
​
Speaker: Heida Maria Sigurdardottir
Title: What does your mind’s eye actually see? Introducing a visually anchored measure of imagery​
​
Abstract: We often talk about having a “vivid imagination” or a “blind mind’s eye,” but we rarely ask what those experiences actually look like. Most imagery research still relies on verbal questionnaires such as the Vividness of Visual Imagery Questionnaire (VVIQ), where people rate vague prompts (“a friend’s face,” “a shop front”) on a 1-5 vividness scale. That makes it hard to know whether a “2” for one person resembles a “2” for anyone else, and whether we are really measuring how imagery appears rather than how people talk about it.
In this talk, I introduce the Visual Imagery Visually Anchored Scale (VIVAS), a new tool that lets participants reconstruct their mental images instead of just rating them. On each trial, people briefly see an object (faces, animals, buildings, food, manmade objects, or novel objects), then imagine it, and adjust three dimensions (opacity, color saturation, and sharpness) until an on-screen image matches what they see in their mind’s eye. I will argue that tools like VIVAS can move imagery research beyond vague vividness ratings toward concrete, appearance-based measurement that is more sensitive to both dimensional and content-specific differences in visual imagination.
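
To make the three adjustable dimensions concrete, here is a toy rendering function in the spirit of VIVAS; `render_probe` and its parameter ranges are hypothetical illustrations, not the actual task code.

```python
from PIL import Image, ImageEnhance

def render_probe(img, opacity, saturation, sharpness, bg=(128, 128, 128)):
    """Render an on-screen match image for VIVAS-style slider settings.

    img        : PIL image of the studied object
    opacity    : 0.0 (invisible) .. 1.0 (fully opaque), blended over grey
    saturation : 0.0 (greyscale) .. 1.0 (original colour)
    sharpness  : 0.0 (blurred)   .. 1.0 (original detail)
    """
    out = ImageEnhance.Color(img.convert("RGB")).enhance(saturation)
    out = ImageEnhance.Sharpness(out).enhance(sharpness)
    background = Image.new("RGB", out.size, bg)
    return Image.blend(background, out, opacity)

# The participant nudges the three sliders until the probe matches their
# mental image (file name is a placeholder):
# probe = render_probe(Image.open("object.png"), 0.6, 0.4, 0.3)
```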
​​
Link to Zoom recording of Talk: Coming soon!
___________________________________________________________________________________
January 14th, 2026
​
Speaker: Geraldine Jeckeln (O'Toole Lab)
Title: TBA​
Abstract: TBA
​​
Link to Zoom recording of Talk: Coming soon!
___________________________________________________________________________________
​​
February 11th, 2026
​
Speaker: TBA
Title: TBA​
Abstract: TBA
​
Link to Zoom recording of Talk: Coming soon!
___________________________________________________________________________________
​
March 11th, 2026
​
Speaker: DeGutis Lab
Title: TBA​
Abstract: TBA
​​
Link to Zoom recording of Talk: Coming soon!
​​​
___________________________________________________________________________________
​
June 10th, 2026
​
Speaker: Oruc Lab
Title: TBA​
Abstract: TBA
​​
Link to Zoom recording of Talk: Coming soon!
___________________________________________________________________________________
​
July 8th, 2026
​
Speaker: Greene Lab
Title: TBA​
Abstract: TBA
​​
Link to Zoom recording of Talk: Coming soon!
___________________________________________________________________________________