
TALKS
2022-2023

The group addresses questions such as: What is the difference between a posed and a spontaneous expression? How fast can we perceive a face or an emotional expression? What strategies do radiologists employ to detect breast cancer, and is this skill trainable? How do clinical conditions such as depression and autism affect face recognition? To address these questions, researchers in the collaborative employ a variety of empirical techniques involving psychophysics, cognitive experiments, eye tracking, neural imaging (fMRI, EEG), and computer modeling.
___________________________________________________________________________________
 
September 14th, 2022 

Jim Tanaka 

Steep and Shallow Category Learning


Abstract: 
What is the connection between category learning and category retention? In category training studies, it is assumed that once participants are trained to a specified criterion, they are equivalent in their categorization abilities. However, little work has examined how individual differences in the rate of category acquisition might affect the retention of category knowledge in the post-acquisition phase. In this study, we investigate the initial trainability of participants as a predictor of their ability to retain category knowledge. In Phase 1 of the study, participants were trained to classify images of warblers into one of four species categories (Magnolia, Cape May, Prairie, Townsend). Training continued until participants achieved a 90% accuracy criterion. Multilevel modeling was used to characterize participants as having either a “steep” or “shallow” acquisition slope. The slopes were computed from the learner’s baseline performance in the first block of learning trials and the number of trials required to reach the 90% criterion. Subsequently, in Phase 2 of the study, participants were asked to categorize new images of the four warbler species. Their categorization abilities were tested at three time points: immediately, 1 day, and 2 days after category training. The key result was that, relative to the original 90% training criterion, steep learners improved their categorization performance to 97%. In contrast, the performance of the shallow learners declined to 80%, below the original training criterion of 90%. These results show that category learning and category retention are intimately linked: the rate at which someone acquires a perceptual category predicts how well they will retain category knowledge over time. Adopting an individual differences perspective, these findings suggest that visual category learning may be a stable, trait-like characteristic reflecting a person’s ability to acquire and retain category knowledge.
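
For concreteness, here is a minimal sketch in Python of the slope measure described above. The per-participant numbers are invented, and a simple two-point slope (accuracy gained from the first block to criterion, divided by trials to criterion) stands in for the study's multilevel models.

```python
# Toy sketch of the acquisition-slope measure: accuracy gained between the
# first training block and the 90% criterion, divided by the number of
# trials needed to reach it. The participants below are hypothetical, and
# this two-point slope is a stand-in for the study's multilevel modeling.

from statistics import median

participants = {
    "p01": {"baseline": 0.55, "trials_to_criterion": 120},
    "p02": {"baseline": 0.40, "trials_to_criterion": 340},
    "p03": {"baseline": 0.62, "trials_to_criterion": 90},
    "p04": {"baseline": 0.45, "trials_to_criterion": 280},
}

CRITERION = 0.90

def acquisition_slope(baseline: float, trials: int) -> float:
    """Accuracy gained per training trial on the way to criterion."""
    return (CRITERION - baseline) / trials

slopes = {pid: acquisition_slope(d["baseline"], d["trials_to_criterion"])
          for pid, d in participants.items()}

# Median split into "steep" and "shallow" learners.
cut = median(slopes.values())
for pid, slope in sorted(slopes.items(), key=lambda kv: -kv[1]):
    label = "steep" if slope >= cut else "shallow"
    print(f"{pid}: slope = {slope:.4f} accuracy/trial -> {label}")
```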

 

Link to Zoom Recording of Jim's Talk.


___________________________________________________________________________________

October 12th, 2022 

Benjamin Balas 

Likeness and Learning to Recognize Faces


Abstract: 

 

Faces of the same person can look remarkably different from one another depending on a wide range of intrinsic and extrinsic sources of variability. One manifestation of that variability is that some faces are judged to be better likenesses than others: you might think that your passport photo doesn't do you justice, for example, or alternatively that a favorite photo of a loved one captures something important about how they look. What consequences does variability in likeness have for establishing representations of a person's face? I'll describe two studies in which we attempted to explore (1) how face learning may be affected by face likeness and (2) how likeness is affected by familiarity and how it relates to face prototypes. I think both studies suggest that we should examine some new ideas about how variability in appearance contributes to the summary descriptions of familiar faces used for recognition.
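
As a toy illustration of the prototype idea raised at the end of the abstract (my sketch, not a method from the talk): treat a person's face prototype as the average of their image embeddings, and score each photo's likeness as its similarity to that average.

```python
# Illustrative only: a prototype account of likeness. Each photo of a
# person is scored by the cosine similarity of its embedding to the mean
# embedding across that person's photos. Embeddings here are random
# stand-ins; in practice they might come from a face-recognition network.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 128-d embeddings of 10 photos of one person, built as a
# shared identity signal plus photo-specific variability.
photos = rng.normal(size=(10, 128)) + rng.normal(size=128)

prototype = photos.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

likeness = [cosine(p, prototype) for p in photos]
print(f"best likeness:  photo {int(np.argmax(likeness))} ({max(likeness):.3f})")
print(f"worst likeness: photo {int(np.argmin(likeness))} ({min(likeness):.3f})")
```

On this account, a "good likeness" is simply an image that sits close to the summary description of the face, one way that variability in appearance could feed into recognition.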


Link to Zoom Recording of Ben's Talk.


___________________________________________________________________________________

November 9th, 2022 

Cameron Kyle-Davidson 

Characterizing and Dissecting Human Perception of Scene Complexity


Abstract: 

 

Humans can effortlessly assess the complexity of the visual stimuli they encounter. However, our understanding of how we do this, and of the factors that drive our perception of scene complexity, remains unclear, especially for the natural scenes in which we are constantly immersed. We introduce several new datasets to further our understanding of human perception of scene complexity. Our first dataset (VISC-C) contains 800 scenes and 800 corresponding two-dimensional complexity annotations gathered from human observers, allowing exploration of how complexity perception varies across a scene. Our second dataset (VISC-CI) consists of inverted scenes (reflections about the horizontal axis) with corresponding complexity maps collected from human observers. Inverting images in this fashion disrupts the semantic characteristics of a scene for human viewers, and hence allows analysis of the impact of semantics on perceptual complexity. We analysed perceptual complexity from both a single-score and a two-dimensional perspective, by evaluating a set of calculable and observable perceptual features grounded in psychological research (clutter, symmetry, entropy and openness). We considered these factors' relationship to complexity via hierarchical regression analyses, tested the efficacy of various neural models against our datasets, and validated our perceptual features against a large and varied complexity dataset consisting of nearly 5000 images. Our results indicate that both global image properties and semantic features are important for complexity perception. We further verified this by combining the identified perceptual features with the output of a neural network predictor capable of extracting semantics, and found that we could explain more of the human variance in complexity than with low-level measures alone. Finally, we dissect our best-performing prediction network, determining that artificial neurons learn to extract both global image properties and semantic details from scenes for complexity prediction. Based on our experimental results, we propose the "dual information" framework of complexity perception, hypothesising that humans rely on both low-level image features and high-level semantic content to evaluate the complexity of images.
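
A minimal sketch of the hierarchical-regression logic, on simulated data: fit complexity from the four low-level features named above, then add a semantic predictor and compare explained variance. Everything here is invented; it is not the authors' analysis code.

```python
# Toy version of the hierarchical regression described in the abstract:
# regress complexity on low-level features (clutter, symmetry, entropy,
# openness), then add a "semantic" predictor and compare R^2. All data
# are simulated for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 800  # matches the size of VISC-C, but the values are fake

clutter, symmetry, entropy, openness = rng.normal(size=(4, n))
semantic = rng.normal(size=n)  # e.g., a CNN-derived semantic score

# Simulated ground truth: complexity draws on both kinds of predictor.
complexity = (0.5 * clutter + 0.3 * entropy - 0.2 * openness
              + 0.4 * semantic + rng.normal(scale=0.8, size=n))

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    X = np.column_stack([np.ones(len(y)), X])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

low_level = np.column_stack([clutter, symmetry, entropy, openness])
r2_low = r_squared(low_level, complexity)
r2_full = r_squared(np.column_stack([low_level, semantic]), complexity)
print(f"R^2, low-level features only: {r2_low:.3f}")
print(f"R^2, plus semantic predictor: {r2_full:.3f} (gain {r2_full - r2_low:.3f})")
```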


___________________________________________________________________________________

December 14th, 2022

Tim Rogers

Characterizing individual differences in cognitive and neural representations


Abstract: 

 

In both cognition and neuroscience, efforts to understand the structure of representation often rely, implicitly or explicitly, on an assumption of universality: that all or most healthy adult members of a given population possess similar mental representations in a given domain, and that those representations are encoded in similar patterns of neural activity across similarly localized brain systems. This assumption justifies many of the most common statistical techniques in our discipline via cross-subject data aggregation, an approach that can obscure individual differences. I will discuss some new techniques for estimating mental and neural structure that relax universality assumptions, and will describe some brain imaging work suggesting that, when such techniques are deployed, neural representations appear much more widely distributed and variable across individuals than previously suspected.
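
To make the contrast with aggregation concrete, here is a toy sketch (my illustration, not the techniques from the talk): estimate a representational dissimilarity matrix (RDM) per simulated subject, then measure how well subjects actually agree, the very quantity that cross-subject averaging takes for granted.

```python
# Toy contrast between group aggregation and per-individual estimates:
# build one representational dissimilarity matrix (RDM) per simulated
# subject, then compute between-subject agreement. Low agreement is the
# individual variability that averaging would hide. Illustrative only.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_subjects, n_items, n_voxels = 8, 12, 50

def subject_rdm(shared: np.ndarray, idio: float) -> np.ndarray:
    """Upper-triangle dissimilarities from shared + idiosyncratic patterns."""
    patterns = shared + idio * rng.normal(size=(n_items, n_voxels))
    d = np.linalg.norm(patterns[:, None] - patterns[None, :], axis=-1)
    return d[np.triu_indices(n_items, k=1)]

shared = rng.normal(size=(n_items, n_voxels))
rdms = [subject_rdm(shared, idio=2.0) for _ in range(n_subjects)]

# Pairwise RDM correlations index how "universal" the structure really is.
agreement = [np.corrcoef(rdms[i], rdms[j])[0, 1]
             for i, j in combinations(range(n_subjects), 2)]
print(f"mean between-subject RDM correlation: {np.mean(agreement):.3f}")
```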


BACKGROUND READING.

Link to Zoom Recording of Tim's Talk.


___________________________________________________________________________________

January 11th, 2023

Quoc Vuong

Temporal adaptation in multi-sensory perception


Abstract: 

 

When we watch a dubbed film, the asynchrony between the actors' facial movements and their dubbed speech may initially be jarring. After some time, however, we adapt to this asynchrony so that the facial movements and speech appear synchronous. This illustrates the potential importance of adaptation over time in perception. In this talk, I present behavioural and neural data on temporal adaptation using a new continuous-judgment paradigm. In this paradigm, participants continuously judge over several tens of seconds whether an auditory-visual stimulus is synchronous or not. Using long stimulus durations allowed us to systematically test how participants adapt to different asynchronous auditory-visual stimuli over time. We used a similar paradigm in an fMRI study to investigate the underlying brain mechanisms. Given that speech is one of the strongest cases for auditory-visual integration, I will also discuss some ideas to extend our stimuli to faces and speech.
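
A simulated sketch of what the continuous-judgment readout might look like (the data and time constants are invented): binary synchronous/asynchronous reports sampled throughout a long trial, summarized as a sliding-window proportion that rises as adaptation takes hold.

```python
# Invented data illustrating a continuous synchrony judgment: the
# probability of reporting "synchronous" rises over a 60 s trial to mimic
# adaptation, and a sliding window tracks the reported proportion.

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 60, 0.5)                    # 60 s trial, sampled at 2 Hz
p_sync = 0.2 + 0.6 * (1 - np.exp(-t / 20))   # adaptation-like rise
reports = rng.random(t.size) < p_sync        # True = "synchronous"

win = 20  # 10 s sliding window (20 samples)
prop = np.convolve(reports, np.ones(win) / win, mode="valid")
for sec in (0, 15, 30, 45):
    print(f"window from {sec:2d} s: proportion 'synchronous' = {prop[sec * 2]:.2f}")
```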


Link to Zoom Recording of Quoc's Talk.


___________________________________________________________________________________

February 8th, 2023

Ipek Oruc

Discovering and learning to recognize new retinal features with the help of AI

Abstract: 

Medical images are a rich source of information regarding health. Diagnosticians are trained to sift through them to detect subtle signs of pathological processes, and to ignore vast variations unrelated to pathology. Retinal images are routinely used in the diagnosis and management of ocular diseases. Might there be signs of pathology in a retinal image, beyond eye diseases, that are hiding in plain sight but currently overlooked? Convolutional neural networks (CNNs) trained on retinal fundus images can classify patient sex, a trait that is invisible to the diagnostician (e.g., ophthalmologist) in this modality. Recent work on the interpretation of a CNN model trained for sex classification has elucidated features within fundus images that were relevant to this task (Delavari et al., 2023). Using patient sex as a case study, we investigated whether human observers can be trained to recognize “invisible” patient traits from fundoscopic images. We examined a group of diagnosticians (Expert, N=23) and a comparison group (Non-expert, N=31). In the pre-training phase, baseline sex recognition was assessed via a 2-alternative forced-choice (2-AFC) task without feedback. This was followed by a training phase and practice trials with feedback. Finally, participants completed a post-training 2-AFC sex recognition test and a novel object memory test (NOMT; Richler, Wilmer & Gauthier, 2017) to assess general object recognition ability. Pre-test results were consistent with chance-level performance (M=52% for Experts and M=52% for Non-experts), as expected. Post-test performance was significantly improved for Experts (M=66.1%; d=2.38, p<<0.01) and for Non-experts (M=66.2%; d=1.67, p<<0.01). Performance on the NOMT was not related to improvement in fundus classification. Together, these results demonstrate that diagnosticians can be trained to recognize novel retinal features suggested by artificial intelligence. In future work, this approach can be extended to discover signs of systemic and neurodegenerative disease in retinal images.
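
For illustration, a small sketch of the pre/post comparison, assuming per-participant 2-AFC accuracies; the values are simulated to roughly echo the reported means and are not the study's data.

```python
# Sketch of the pre/post comparison: per-participant 2-AFC accuracies
# before and after training, summarized with a paired Cohen's d (mean
# improvement divided by the SD of improvements). Simulated values only.

import numpy as np

rng = np.random.default_rng(4)
n = 23  # expert group size from the abstract

pre = rng.normal(0.52, 0.05, n)   # ~chance before training
post = rng.normal(0.66, 0.06, n)  # ~66% after training

diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"pre: {pre.mean():.1%}  post: {post.mean():.1%}  paired d = {cohens_d:.2f}")
```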

___________________________________________________________________________________

March 8th, 2023

Liz Miller (Dawel Lab)

How do people respond to CG and AI faces?

Abstract: 

Computer-generated (CG) faces are replacing humans in our everyday and scientific lives. However, whether people respond to CG faces in the same way as to human faces is poorly understood, which is concerning given the significant increase in scientific articles using CG faces as proxies for human ones. We conduct meta-analyses comparing people's responses to CG and human faces, revealing that responses are often impoverished for CG relative to human faces. However, this review predates the emergence of highly realistic AI faces, which a recent influential study found to be indistinguishable from, and perceived as more trustworthy than, human faces (Nightingale & Farid, 2022). Our re-analysis of Nightingale & Farid's data and a new replication study (N=124) show that White AI faces are actually misidentified as human, with high confidence. We also replicate Nightingale & Farid's finding that AI faces are perceived as more trustworthy than human faces, but find that mind attribution (e.g., curiosity) does not differ for AI versus human faces. These findings raise concerns about the potential misuse of overly convincing AI faces (e.g., revenge porn, misinformation, cyber warfare).
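
As a toy illustration of the key comparison (the counts are invented, not the study's data): how often are AI faces called "human" relative to real faces, and is the gap reliable by a simple two-proportion test?

```python
# Invented counts illustrating the "misidentified as human" comparison:
# the rate at which AI faces are called human versus the rate for real
# human faces, with a two-proportion z test on the difference.

import math

def two_prop_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """z statistic for two independent proportions, pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

ai_called_human, ai_total = 70, 100       # hypothetical judgments
real_called_human, real_total = 60, 100

z = two_prop_z(ai_called_human, ai_total, real_called_human, real_total)
print(f"'human' responses: AI faces 70%, real faces 60%, z = {z:.2f}")
```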

___________________________________________________________________________________
