
TALKS
2023-2024

The group addresses questions such as: What is the difference between a posed and a spontaneous expression? How fast can we perceive a face or an emotional expression? What strategies do radiologists employ to detect breast cancer, and is this skill trainable? How do clinical conditions such as depression and autism affect face recognition? To address these questions, researchers in the collaborative employ a variety of empirical techniques, including psychophysics, cognitive experiments, eye tracking, neural imaging (fMRI, EEG), and computer modeling.
Catch up on the 2023-24 season of the DMC here!
______________________________________________________________________________________
 
September 13th, 2023 

Speaker: Ben Balas 

Title: Mind Mysteries: Community-Based Visual Cognition Research​


Abstract: Designing and executing experiments that reveal properties of school-age children's visual abilities can be challenging for a number of reasons. The tasks that we tend to use to probe adults' visual recognition abilities can be fiendishly boring for grown-ups, which makes them downright intolerable for young kids. This often means that differentiating between errors that are the result of true task failures and errors that are the result of inattention or contrariness is challenging, limiting our ability to draw firm conclusions about developing visual abilities. That presumes that we can even get kids in the lab in the first place, which is a challenge in its own right! Besides posing practical challenges in terms of the speed of one's research, running tasks in the lab also imposes a selection bias on participation that many of us would like to avoid: Who has the time and resources to set aside time for a lab visit during the working week? How does this affect which members of our community end up participating in our research? Partly as an adaptation to pandemic-era testing and partly to address some of the aforementioned issues that have concerned me for a while, the Balas Lab has begun working with the Fargo Public Library to implement a series of take-home experiments distributed through the library Children's Rooms. These rely on coloring, drawing, and other generative methodologies, and concern a wide range of problem domains in visual cognition. In this talk, I'll describe our approach to creating these "Mind Mysteries" packets in collaboration with our IRB and the FPL, discuss the nature of the data we're able to collect this way, and show you some of our preliminary results. This is all very much work-in-progress, but is a new direction for our lab that I'm excited about and interested in hearing your perspective on.


Link to Zoom recording of Ben's Talk​

___________________________________________________________________________________

October 11th, 2023
 

Speaker: Eric Mah - Tanaka Lab

Title: Examining individual differences in psychological representations using PsiZ​

 

Abstract: The measurement of internal psychological representations is fundamental to the study of concepts, categories, similarity, learning and perception, among others. Typically, researchers have used multidimensional scaling (MDS) to map out 2D representations of internal psychological spaces using pairwise similarity rating data. However, MDS has several limitations, particularly when trying to measure the representations of an individual (e.g., many pairwise trials are needed). Recently, Roads and colleagues (Roads & Mozer, 2019; Roads & Love, 2021) developed the PsiZ program as a new way of inferring representational spaces. PsiZ uses behavioural similarity judgements to create multidimensional similarity spaces called psychological embeddings. PsiZ leverages novel trial formats and algorithms and may be a more powerful alternative to MDS, especially when it comes to obtaining individual participants’ embeddings. The ability to map an individual’s semantic memory is tantalising, not only because group-level aggregate similarity spaces may obscure important individual differences, but also because individual embeddings could provide valuable insight into the categorisation strategies that might differentiate individuals. For example, one individual might adopt a primarily perceptual strategy, categorising faces according to their featural similarity, whereas another might adopt a primarily conceptual strategy, categorising faces according to things like gender, group membership, etc. It is also likely that knowledge and expertise influence the strategies that people use. Our current work aims to test a) the viability of PsiZ for obtaining interpretable individual-level embeddings and b) quantitative methods for differentiating individuals based on judgement strategies. In this talk, I will provide a brief introduction to PsiZ, discuss a preliminary study using faces of basketball players in which we found meaningful differences between the embeddings of basketball experts, fans, and novices, and describe our next planned experiment comparing embeddings of kanji characters for Japanese and non-Japanese readers.
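
PsiZ is an open-source Python package whose API has changed across releases, so rather than reproduce it here, the sketch below illustrates only the underlying idea: fitting a low-dimensional embedding so that its distances reproduce observed similarity judgements. The toy triplets, dimensionality, and choice model are placeholder assumptions, not the PsiZ implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy triplet judgements (query, chosen, rejected), meaning "the chosen item
# looked more similar to the query than the rejected item did". The indices
# and dimensionality are made up for illustration.
triplets = np.array([
    [0, 1, 2], [0, 1, 3], [1, 0, 3],
    [2, 3, 0], [3, 2, 1], [2, 3, 1],
])
n_items, n_dim = 4, 2

def neg_log_likelihood(flat_z):
    z = flat_z.reshape(n_items, n_dim)
    q, c, r = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_chosen = np.linalg.norm(z[q] - z[c], axis=1)
    d_rejected = np.linalg.norm(z[q] - z[r], axis=1)
    # Logistic choice model: the closer item should be the one chosen.
    logits = d_rejected - d_chosen
    return np.sum(np.log1p(np.exp(-logits)))  # sum of -log sigmoid(logits)

rng = np.random.default_rng(0)
fit = minimize(neg_log_likelihood, rng.normal(scale=0.1, size=n_items * n_dim))
embedding = fit.x.reshape(n_items, n_dim)  # 2D coordinates for the 4 items
print(embedding)
```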


Link to Zoom recording of Eric's Talk


___________________________________________________________________________________


November 8th, 2023

Speaker: Quoc Vuong

Title: Temporal adaptation in multi-sensory speech perception: A follow-up​


Abstract: We often see the flash of lightning before hearing the corresponding thunder. The asynchrony between the visual and the auditory input for the same distant event is due to the different speeds of light and sound. Asynchrony between sensory inputs can also arise for closer events, due to various physical, bio-physical and neural mechanisms, but people are much less aware of these differences. Why? There is accumulating evidence that people can adapt to auditory-visual asynchrony at different time scales and for different stimulus categories. Previously, I presented behavioural and neural data on temporal adaptation using a continuous-judgement paradigm developed by my PhD student. In this paradigm, participants continuously judge, over several tens of seconds, whether an auditory-visual (non-speech) stimulus is synchronous or not. Speech is one of the strongest cases of auditory-visual integration, as demonstrated by multi-sensory illusions like the McGurk-MacDonald and ventriloquist effects. Here, we extend the continuous-judgement paradigm to auditory-visual speech. The preliminary results will be compared to results from our non-speech stimuli and related to existing work on multi-sensory integration across different stimulus categories.


Link to Zoom recording of Quoc's Talk


___________________________________________________________________________________


December 13th, 2023

Speakers: Kami Koldewyn & Heida Sigurdardottir

Title: Introducing Drs. Koldewyn & Sigurdardottir!


See the People page for bios.​


Link to Zoom recording of Kami Koldewyn's Talk


Link to Zoom recording of Heida Sigurdardottir's Talk


___________________________________________________________________________________


January 10th, 2024

Speaker: Matt Hill 

Title: Deep Convolutional Neural Network Encoding of Face Shape and Reflectance in Synthetic Face Images​


Abstract: Deep Convolutional Neural Networks (DCNNs) trained for face identification recognize faces across a wide range of imaging and appearance variations including illumination, viewpoint, and expression. In the first part of this dissertation, I showed that identity-trained DCNNs retain non-identity information in their top-level face representations, and that this information is hierarchically organized in this representation (Hill et al., 2019). Specifically, the similarity space was separated into two large clusters by gender, identities formed sub-clusters within gender, illumination conditions clustered within identity, and viewpoints clustered within illumination conditions. In the second part of this dissertation, I further examined the representations generated by face identification DCNNs by separating face identity into its constituent signals of “shape” and “reflectance”. Object classification DCNNs demonstrate a bias for “texture” over “shape” information, whereas humans show the opposite bias (Geirhos et al., 2018). No studies comparing “shape” and “texture” information have yet been performed on DCNNs trained for face identification. Here, I used a 3D Morphable Model (3DMM, Li, Bolkart, Black, Li, and Romero 2017) to determine the extent to which face identification DCNNs encode the shape and/or spectral reflectance information in a face. I also investigated the presence of illumination, expression, and viewpoint information in the top-level representations of face images generated by DCNNs. Synthetic face stimuli were generated using a 3DMM with separate components for a face shape’s “identity” and “facial expression”, as well as spectral reflectance information in the form of a “texture map”. The dataset comprised ten randomized levels each of face shape, reflectance, and expression, with three levels of illumination (spotlight, ambient, 3 point), three levels of viewpoint pitch (-30°, 0°, 30°), and five levels of viewpoint yaw (0°, 15°, 30°, 45°, 60°) in a complete factorial design for a total of 45,000 images. All analyses were conducted with an Inception ResNet V1-based network (Szegedy, Ioffe, Vanhoucke, & Alemi, 2017) trained on the VGGFace2 dataset (Cao, Shen, Xie, Parkhi, & Zisserman, 2018) and replicated with a ResNet-101-based network (He, Zhang, Ren, & Sun, 2016) trained on University of Maryland’s Universe dataset (Bansal, Castillo, Ranjan, & Chellappa, 2017; Bansal, Nanduri, Castillo, Ranjan, & Chellappa, 2017; Guo, Zhang, Hu, He, & Gao, 2016). Area Under the Receiver Operating Characteristic Curve (AUC) was used as a measure of information for each variable in the top-level representation and t-distributed Stochastic Neighbor Embedding (Van der Maaten & Hinton, 2008) was used to visualize the similarity space of top-level representations. The results showed that both shape and reflectance information were encoded in the top-level representation, and both signals were required for optimal performance. Shape-reflectance bias was mediated by illumination such that the network showed a reflectance bias in ambient and 3 point (photography style) illumination environments, whereas no bias was found under spotlight illumination. Consistent with Hill et al. (2019), we found information about all non-identity variables (illumination, expression, pitch, yaw) in the top-level representation, although each of these signals was weakly encoded.
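
The abstract does not spell out how the AUC information measure is computed; a common way to ask whether a non-identity variable (say, illumination) can be read out of top-level DCNN features is to score a cross-validated linear decoder with scikit-learn, and TSNE gives the kind of similarity-space visualization described. The arrays and the logistic-regression decoder below are illustrative assumptions, not Hill's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Placeholder data: 'features' stands in for the top-level DCNN descriptors
# (n_images x n_units); 'is_spotlight' is a hypothetical binary label for one
# non-identity variable (spotlight vs. other illumination).
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 512))
is_spotlight = rng.integers(0, 2, size=300)

# Cross-validated decision scores from a linear decoder, summarized with AUC:
# AUC near 0.5 means the label is absent from the representation, near 1.0
# means it is strongly encoded.
scores = cross_val_predict(LogisticRegression(max_iter=1000), features,
                           is_spotlight, cv=5, method="decision_function")
print("illumination AUC:", roc_auc_score(is_spotlight, scores))

# 2D t-SNE map of the top-level similarity space, for visualization.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
```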


Link to Zoom recording of Matt's Talk

___________________________________________________________________________________


February 14th, 2024

Speaker: Amy Dawel

Title: Quantifying perceiver and target (stimulus) contributions to face perception

 

Abstract: Facial expression perception is a complex interaction between how a face looks and various attributes that shape perceivers’ interpretations, such as their cultural background, social skills, and personal experiences. To date, research has focused on the “what” aspects of this interaction. For example, what physical information makes a smile appear genuinely happy versus merely polite? Or what is the relationship between social anxiety and face perception? While these are important questions, they overlook the need to quantify how much stimulus and perceiver factors contribute to perception, both overall and relative to one another. This quantification is important for identifying the most fruitful avenues for research. For example, if it reveals that the perception of happiness is driven almost entirely by facial appearance, it may not be productive to investigate perceiver differences. Here, to address the quantification question for emotion perception, we look to an approach pioneered by Hehman and colleagues for facial first impressions (e.g., of trustworthiness), which uses intraclass correlation coefficients (ICCs) generated by cross-classified multilevel models (MLMs). However, variations in our findings across different experimental designs raise questions about how best to apply the approach and interpret its results. This talk will present the ICC approach and our initial data, with the aim of generating discussion about how to use it to understand face perception.
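
For readers curious how such ICCs can be obtained, here is a rough sketch under the assumption of a fully crossed design in which every perceiver rates every target. It uses statsmodels' crossed-random-effects workaround (a single group plus variance components) rather than whatever software the authors used, and all column names and simulated effects are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated ratings: every perceiver rates every target face once
# (all names and effect sizes are made up).
rng = np.random.default_rng(0)
n_perceivers, n_targets = 60, 40
df = pd.DataFrame([(p, t) for p in range(n_perceivers) for t in range(n_targets)],
                  columns=["perceiver", "target"])
df["rating"] = (rng.normal(size=n_perceivers)[df["perceiver"]]     # perceiver effect
                + 1.5 * rng.normal(size=n_targets)[df["target"]]   # target effect
                + rng.normal(size=len(df)))                        # residual noise

# Crossed random intercepts for perceiver and target: statsmodels fits these by
# treating the whole dataset as one group and using variance components.
df["group"] = 1
model = smf.mixedlm("rating ~ 1", data=df, groups="group",
                    vc_formula={"perceiver": "0 + C(perceiver)",
                                "target": "0 + C(target)"})
result = model.fit()

var_perceiver, var_target = result.vcomp  # variance components (see result.summary())
var_resid = result.scale                  # residual variance
total = var_perceiver + var_target + var_resid
print("perceiver ICC:", var_perceiver / total)
print("target ICC:", var_target / total)
```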


Link to Zoom recording of Amy's Talk


___________________________________________________________________________________


March 13th, 2024

Speaker: Ipek Oruc

Title: Complex patterns: from encoding to recognition


Abstract: How are complex visual patterns recognized? This question has been an enduring interest shaping my research efforts. In this talk, I will review some highlights from my own work and from others in the field. My research, which over the years has mostly centered on the recognition and processing of faces, along with other high-level visual stimuli such as letters, words, houses and, more recently, retinal images, has prompted a shift in my interests from the mechanisms of recognition to the learning aspect of this task. How are patterns encoded? What type of visual experience is needed to facilitate this process? What specifically constitutes visual experience? How can we define it and measure it? I will end my talk by inviting everyone to an interactive group discussion of these questions, which are pivotal in guiding future research directions in this area.


Link to Zoom recording of Ipek's Talk


___________________________________________________________________________________


June 12th, 2024

Speaker: Elena Geangu

Title: Tools for monitoring infants’ natural statistics relevant for social development. Preliminary findings from their implementation.


Abstract: We greatly depend on understanding other people’s emotions to interact with them. The predominant view is that we rely on the overt facial, body and vocal expressions people naturally make to infer how they feel, as well as on information about the context in which these cues appear.

A large part of my research has focused on understanding how children develop, during the first years of life, the ability to infer other people’s emotions. While we have learned about important aspects of this development, one question that still puzzles us is: how does this happen? How do infants acquire, in a relatively short period of time, the ability to infer other people’s emotions from the expressions of their face, body and voice? Surprisingly, despite decades of developmental research, we do not yet have a good answer to this critical question. One explanation for this slow progress is that researchers have often failed to record what infants see and hear during their everyday lives from the infants’ own perspective, and have largely missed the social context in which emotional expressions are embedded in the real world.

To a good extent, this limited understanding of how infants develop the ability to infer other people’s emotions is due to a lack of adequate methods. In this talk I will present recent work from my lab aimed at closing this methodological gap, as well as preliminary findings from an ongoing longitudinal study that relies on these new methodological developments.


Link to Zoom recording of Elena's Talk: Coming soon!


___________________________________________________________________________________


July 10th, 2024

Speaker: Anna Lawrance - Tanaka Lab

Title: Visuo-perceptual and Semantic Category Restructuring in Undergraduate Geology Students​


Abstract: To the untrained eye, rocks offer little perceptual information to aid accurate categorization. Given this, geology serves as an ideal domain in which to observe the emergence of category knowledge. This study examined the formation and reorganization of perceived rock-type categories in students (N=48) enrolled in a post-secondary introductory-level geology course. Through this work, we addressed three key questions: 1) How do categories for rock knowledge develop? 2) How does category knowledge of rocks change following conceptual and perceptual learning? and 3) Is the trajectory of category learning indicative of academic performance? Shifts in category structure were assessed using PsiZ, a machine-learning package that generates a multidimensional category representation (i.e., a psychological embedding) from a participant’s similarity judgments. On each trial, participants were presented with a visual array of nine images and were asked to select the two peripherally presented reference images most similar to the central query image. Using the similarity-ranking trial data, category structure was inferred at two time points: at the beginning of the course, before formal instruction began, and towards the end of the course, after participants had received instruction in course content. Our analysis encompassed both perceptual and conceptual dimensions: similarity judgments on images of rocks assessed the structure of participants’ perceptual categories, while judgments on rock-type labels (e.g., “basalt” and “granite”) assessed the structure of their conceptual categories. To investigate the relationship between perceived category structure and performance on formal academic assessments, participants’ grades on lab tests assessing rock identification skills were obtained. In this talk, I will detail how category structures shifted from pre- to post-instruction among the top 25% and bottom 25% of students, as determined by their lab test performance.
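
The abstract does not say how the pre- and post-instruction embeddings are compared; two simple, standard options are to correlate the two sets of pairwise distances (an RSA-style comparison) or to compute a Procrustes disparity after aligning the spaces. The sketch below uses made-up coordinates purely for illustration, not the study's analysis.

```python
import numpy as np
from scipy.spatial import procrustes
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical 2D embeddings of the same 20 rock images for one participant,
# inferred before and after instruction (coordinates are made up).
rng = np.random.default_rng(0)
pre = rng.normal(size=(20, 2))
post = pre + rng.normal(scale=0.5, size=(20, 2))  # a partially restructured space

# RSA-style comparison: correlate the two sets of pairwise distances.
rho, _ = spearmanr(pdist(pre), pdist(post))

# Procrustes: residual disparity after optimally translating/rotating/scaling
# one embedding onto the other (0 = identical structure).
_, _, disparity = procrustes(pre, post)

print(f"distance correlation: {rho:.2f}, Procrustes disparity: {disparity:.3f}")
```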


Link to Zoom recording of Anna's Talk: Coming soon!


___________________________________________________________________________________
