Towards the Neuroethology of Vocal Communication in the Mongolian Gerbil

C.E. Credits: P.A.C.E. CE | Florida CE
Speaker
  • Alex Williams, PhD

    Assistant Professor, Center for Neural Science at NYU, Associate Research Scientist and Project Leader at the Flatiron Institute

Abstract

Social animals congregate in groups and communicate with vocalizations. To study the dynamics of natural vocal communication and their neural basis, one must characterize the signals used for communication and determine the sender and receiver of each signal [1]. To this end, we established two complementary approaches: (1) quantifying vocal repertoires with a variational autoencoder (VAE) applied to longitudinal audio recordings in a naturalistic social environment, and (2) attributing vocal calls to individuals with a deep neural network. We pursued this research in the Mongolian gerbil, a unique and favorable model organism with a sophisticated vocal repertoire and a complex social hierarchy that includes pair-bond formation [2]. We made continuous acoustic recordings of three separate gerbil families for 20 days each and used a VAE for unsupervised representation learning of acoustic features, showing that gerbil families have family-specific vocal repertoires. Although this result positions the gerbil as an intriguing model of social vocal interactions, the inability to attribute vocalizations to individuals in a group limits the interpretability of these family-level differences and remains a persistent problem in the field. We have therefore (1) developed a supervised deep learning framework with calibrated uncertainty estimates that achieves state-of-the-art sound source localization performance, (2) built novel hardware to generate benchmark datasets for training and evaluating sound source localization models across labs, and (3) curated and released the first large-scale benchmark datasets for vocal call localization in social rodents.
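
As a rough illustration of the first approach, the sketch below shows a minimal variational autoencoder that embeds fixed-size vocalization spectrograms into a low-dimensional latent space, in the spirit of the unsupervised representation learning described above. The architecture, input size, and latent dimension are illustrative assumptions, not the speaker's actual model.

```python
# Minimal sketch of a VAE for unsupervised embedding of vocalization
# spectrograms. All layer sizes and the latent dimension are
# illustrative assumptions, not the model described in the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramVAE(nn.Module):
    def __init__(self, n_freq=128, n_time=128, latent_dim=32):
        super().__init__()
        self.input_dim = n_freq * n_time
        self.encoder = nn.Sequential(
            nn.Linear(self.input_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(256, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(256, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, self.input_dim),
        )

    def forward(self, x):
        # x: (batch, n_freq, n_time) spectrogram, flattened for the MLP encoder
        h = self.encoder(x.flatten(1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    recon_err = F.mse_loss(recon, x.flatten(1), reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

Family-specific repertoires could then be assessed by comparing the distributions of latent embeddings of calls recorded from different families.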

Learning Objectives: 

1. Describe the Mongolian gerbil as a model organism for studying natural vocal communication.

2. Explain how vocalizations can be attributed to individual animals within a social group.

3. Describe scalable computational analyses for quantifying vocal repertoires and localizing vocal calls.
